Search Results


  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every few days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to make TCP connections to the internet. The following things continue to work fine:

      • UDP (DNS) and ICMP (ping) - I get instant responses
      • TCP connections to other machines in the local network (e.g. I can ssh to a neighbouring laptop)
      • everything is OK for the other machines in my LAN

    But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:

        % curl -v google.com
        * About to connect() to google.com port 80 (#0)
        *   Trying 173.194.39.105... * Connection timed out
        *   Trying 173.194.39.110... * Connection timed out
        *   Trying 173.194.39.97... * Connection timed out
        *   Trying 173.194.39.102... * Timeout
        *   Trying 173.194.39.98... * Timeout
        *   Trying 173.194.39.96... * Timeout
        *   Trying 173.194.39.103... * Timeout
        *   Trying 173.194.39.99... * Timeout
        *   Trying 173.194.39.101... * Timeout
        *   Trying 173.194.39.104... * Timeout
        *   Trying 173.194.39.100... * Timeout
        *   Trying 2a00:1450:400d:803::1009... * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
        * Success
        * couldn't connect to host
        * Closing connection #0
        curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable

    Restarting the connection and/or reloading the network card's kernel module doesn't help. The only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every few days. My setup is a wireless router connected to the ISP via PPPoE. Any advice?
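
    Some diagnostics that could help narrow this down the next time it happens - a minimal sketch, assuming wlan0 is the interface and that connection tracking is in use:

        # Watch whether SYNs actually leave the laptop and whether anything comes back
        sudo tcpdump -ni wlan0 'tcp port 80'

        # Check for conntrack table exhaustion, a classic cause of "UDP works, TCP dies"
        sudo sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

        # Look for anomalies in the kernel's TCP counters
        ss -s
        netstat -st | head -40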

  • Diagnosing Random Network Lag

    - by uesp
    I'm having trouble diagnosing some random lag on a 6-server LAMP cluster serving a MediaWiki site. While we're serving some 100 pages/sec, the servers themselves are running fine: less than 0.5 load, no locked processes, no paging, no errors being logged, etc. The lag is present on all servers and is random: one minute it's fine, the next it's there.

      • DNS lookups on the servers are randomly slow. For example, "time nslookup google.com" varies randomly from a few milliseconds to several seconds, and sometimes times out entirely. While we use IP addresses internally on the cluster, this may be a symptom of the root issue. We are not running our own DNS server.
      • The Apache server-status pages randomly lag or time out.
      • Benchmarking with ab between servers shows a few loads sometimes take 3000 ms (almost exactly). Benchmarking server-status on the local server itself usually shows no issue (it showed lag only once among a few hundred tests).

    The servers sit behind a switch and a firewall which I don't have any access to, so I don't know their setup or status. While we are under heavier than normal load, 2 Mbps incoming and 20 Mbps outgoing traffic shouldn't be stressing the switch or firewall, should it? My feeling is that it's the switch/firewall, or something above them at the ISP, like their DNS, but I can't confirm it. I need some other tests or methods of diagnosing this lag to try and narrow down the ultimate cause.
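
    Two quick tests that could help separate DNS trouble from general network trouble - a sketch, with 8.8.8.8 as an assumed outside resolver:

        # Time raw DNS queries against the configured resolver and an outside one;
        # if only the configured resolver is slow, the problem is DNS, not the path
        for i in $(seq 1 20); do dig google.com | grep 'Query time'; sleep 1; done
        for i in $(seq 1 20); do dig google.com @8.8.8.8 | grep 'Query time'; sleep 1; done

        # Watch the path to another cluster member for loss/latency through the switch
        mtr --report --report-cycles 100 <other-server-ip>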

  • [RESOLVED] Why does iptables NAT stop working when I enable the firewall's third interface?

    - by Kronick
    On my firewall I have three interfaces (plus an IP alias on eth0):

      • eth0: public IP (46.X.X.X)
      • eth0:0: public IP (46.X.X.Y)
      • eth1: public IP (88.X.X.X)
      • eth2: private LAN (172.X.X.X)

    I've set up a basic NAT, which works great until I turn on the eth1 interface - then I basically lose connectivity. When I turn the interface off (ifconfig eth1 down), the NAT works again. I've added some policy routing via iproute2, which makes all three public IPs available. I don't understand why turning eth1 on makes the LAN unavailable.

    PS: weirder still - when I turn on eth1 BUT remove the NAT, the firewall is reachable on its public IPs. So to me it looks exclusively like a NAT issue: without the NAT the network works, and with the NAT it also works as long as the second public interface stays down. Regards

    EDIT: I've been able to make it work using iproute2 rules - it was definitely a routing issue. Here is what I did:

        ip rule add prio 50 table main
        ip rule add prio 201 from ip1/netmask table 201
        ip rule add prio 202 from ip2/netmask table 202
        ip route add default via gateway1 dev interface1 src ip1 proto static table 201
        ip route append prohibit default table 201 metric 1 proto static
        ip route add default via gateway2 dev interface2 src ip2 proto static table 202
        ip route append prohibit default table 202 metric 1 proto static
        # multipath
        ip rule add prio 221 table 221
        ip route add default table 221 proto static \
            nexthop via gateway1 dev interface1 weight 2 \
            nexthop via gateway2 dev interface2 weight 3
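
    For reference, the "basic NAT" mentioned above is typically a single POSTROUTING rule plus forwarding rules, along these lines - a sketch of a common setup, not the poster's actual rules (eth0 as the WAN interface and 172.16.0.0/16 as the LAN range are assumptions):

        # Masquerade LAN traffic leaving via the WAN interface
        iptables -t nat -A POSTROUTING -s 172.16.0.0/16 -o eth0 -j MASQUERADE
        # Allow forwarding between LAN and WAN
        iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT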

  • Hyper-V and attaching physical disks

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:

      • Windows 7 Ultimate
      • 1TB boot drive (my smallest drive)
      • Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB

    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:

      • Server Core 2008 R2 (Hyper-V)
      • 1TB boot drive (storing the VHDs for the VMs' boot drives) - possibly in a RAID 1 with my other 1TB drive
      • 5x 2TB drives (1x 2TB as hot spare), totalling 10TB, directly attached to a Windows 7 VM, so the homegroup can use this array

    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference?

    Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 PC2-6400 RAM (yes, I know it's a bit out of date).
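
    For what it's worth, attaching a physical disk to a Hyper-V VM as a pass-through disk requires taking the disk offline on the host first; a minimal sketch (the disk number is an assumption):

        rem On the Hyper-V host console, take the array offline for pass-through
        diskpart
        list disk
        select disk 2
        attributes disk clear readonly
        offline disk
        exit
        rem Then attach it to the VM in Hyper-V Manager as a "Physical hard disk"
        rem under an IDE or SCSI controller.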

  • How to troubleshoot Linksys E4200 Remote Management

    - by Jordan
    My Linksys E4200 is configured for Remote Management, but the router is not accepting the connections. Here's the configuration under Administration > Management > Remote Management Access:

      • Remote Management: Enabled
      • Access via: HTTP
      • Remote Upgrade: Disabled
      • Allowed Remote IP Address: Any IP Address
      • Remote Management Port: 8080

    The router is set up to use 192.168.10.41 as its static Internet IP address and 192.168.35.1 as its LAN IP address. I can access the router just fine via its LAN IP address, but I can't make a connection to http://192.168.10.41:8080. I've tried variations of the settings above (enabled HTTPS, enabled Remote Upgrade, set an IP range of 192.168.10.1-254) but nothing has worked yet. Hoping someone can at least point me in the right direction. Thanks.

    Update: To clarify, I have a wired router that connects straight to the T1 modem. It's configured to use 192.168.10.1-254 as its internal LAN range. The E4200 wireless router in question is on that LAN, using 192.168.10.41 as its WAN IP address. The E4200's internal LAN range is 192.168.35.1-254. I'm not trying to access the E4200 from the Internet; I'm just trying to access it from its WAN IP address. Thanks.

  • REMOTE_USER not getting set?

    - by landed
    I am trying to set up LDAP authentication in Joomla using a plugin called JMapMyLDAP (in fact 4 plugins, each doing a different job). I need to pull part of a string out of the server variable REMOTE_USER, and this should be visible in phpinfo() (as shown at http://timplummer.com.au/4-how-to-integrate-joomla-3-with-active-directory-using-ldap.html). The issue is that REMOTE_USER is not set, or at least not appearing.

    A few things to note (if you don't mind): conceptually I am not really understanding authentication as a whole subject; it appears to be vast, despite my years working on websites. Yes, I've used ASP and built PHP pages to check a user is who they say they are, with a token/session given to just them, so they are identified when a stateless request is made to the server. That's my level of understanding. This sounds different from basic authentication in Apache, where a username and password sit in a file and the user needs to log in via a basic prompt to get access to the folder/docs, configured via an .htaccess file.

    OK, so for the LDAP integration to work I need to get REMOTE_USER. This sounds very reasonable, as how else do we know who is making the request? Thank you.
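
    One point worth noting: Apache only populates REMOTE_USER after it has authenticated the request itself, so a PHP application behind an unauthenticated vhost will never see it. A minimal sketch of a config that would set it via mod_authnz_ldap (the LDAP URL and directory path are placeholders, not a tested setup):

        <Directory "/var/www/joomla">
            AuthType Basic
            AuthName "Campus login"
            AuthBasicProvider ldap
            # Hypothetical LDAP URL - substitute your own directory here
            AuthLDAPURL "ldap://ldap.example.edu/ou=people,dc=example,dc=edu?uid"
            Require valid-user
        </Directory>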

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS 6 SMTP service. The FQDN of the SMTP service is currently set to the computer name, which isn't correct: it doesn't resolve to a valid DNS entry and is not RFC compliant. Some questions:

      • Is there any way I can change the FQDN of the SMTP service based on the site sending the email?
      • Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites across multiple domains?
      • Should I be using some other mail server software to handle this better?

    The reason I am asking is that lots of emails are hitting spam folders because these settings are incorrect. I have access to the code running the websites, so if something needs to be done there, that shouldn't be a problem. The sites are written in ASP.NET 2.0.

    EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward - create a virtual server for each site? Thanks.
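
    If each site ends up with its own SMTP virtual server (or a dedicated relay host), the per-site ASP.NET side is just a mailSettings block in web.config; a sketch with placeholder names:

        <!-- web.config for one site; the host and from address are placeholders -->
        <system.net>
          <mailSettings>
            <smtp from="noreply@mydomain.com" deliveryMethod="Network">
              <network host="mailserver.mydomain.com" port="25" />
            </smtp>
          </mailSettings>
        </system.net>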

  • What does this error mean in my IIS7 Failed Request Tracing report?

    - by Pure.Krome
    When I attempt to go to any page in my web application (I'm migrating the code from an ASP.NET web site to a web application, and now testing it), I keep getting some not-authenticated error(s). So, I've turned on FREB (Failed Request Tracing) and this is what it says... I'm not sure what that means?

    Secondly, I've also made sure that my site (or at least the default document, which has been set up to be default.aspx) has anonymous authentication on and the rest off. Proof:

        C:\Windows\System32\inetsrv>appcmd list config "My Web App/default.aspx" -section:anonymousAuthentication
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" userName="IUSR" />
            </authentication>
          </security>
        </system.webServer>

        C:\Windows\System32\inetsrv>appcmd list config "My Web App" -section:anonymousAuthentication
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" userName="IUSR" />
            </authentication>
          </security>
        </system.webServer>

    Can someone please help?
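
    Since FREB is reporting an authentication failure, it may also be worth dumping the other authentication sections and the authorization rules; anonymous being enabled won't help if another scheme or a deny rule applies. A sketch, mirroring the section names used above:

        rem Check whether any other authentication scheme is enabled for the app
        appcmd list config "My Web App" -section:windowsAuthentication
        appcmd list config "My Web App" -section:basicAuthentication
        rem And check authorization rules that might deny anonymous users
        appcmd list config "My Web App" -section:system.webServer/security/authorization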

  • Restoring a fresh home folder in a shared user domain environment

    - by Cocoabean
    I am using a tool called pGINA that adds another credential provider to my Windows 7 clients so we can authenticate campus users via campus LDAP. We have the default Windows credential providers set up to authenticate off of our Active Directory, but we have students in our classes who don't have entries in our AD, and we need to know who they are to allow them internet access.

    Once these LDAP users log in using pGINA, they are all redirected to the same AD account, a 'kiosk' account with GPOs in place to prevent anything malicious. My concern is that my users will accidentally save personal login information or files in that shared profile, and another user may log in later and have access to a previous user's Gmail account, as the AppData folder on each computer is shared by anyone logging into the kiosk user.

    I've looked into MS's 'roll-your-own' SteadyState, but it didn't seem to have what I wanted. I tried to write a PowerShell script to copy a pre-saved clean version of the profile from a network share, but I kept running into issues with CredSSP delegation and accessing the share from the UNC path. Others have recommended something like DeepFreeze, but I'd like to do it without 3rd-party tools if possible.
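
    One way around the CredSSP/UNC trouble could be to keep the clean profile template on the local disk and restore it from a startup or logoff task, so no network share is involved; a rough sketch (the paths and the "kiosk" account name are assumptions):

        rem Hypothetical cleanup script, run as SYSTEM while the kiosk user is logged off
        rmdir /s /q "C:\Users\kiosk"
        robocopy "C:\ProfileTemplate\kiosk-clean" "C:\Users\kiosk" /MIR /XJ /COPYALL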

  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The Setup: I've gotten OpenVPN working on our Windows XP laptops. Users are limited, so I went ahead and set the OpenVPN client to run as a service, which is great anyway because it means they are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user cannot log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when, for example, they browse the internet, it is just like browsing from our corporate network.

    The Question(s): So, I'm wondering how the OpenVPN client behaves when the computer is already physically on the same network as the OpenVPN server. Right now, the client is configured to connect to the public DNS name, which resolves to the public IP address, which will NOT get reflected back to the OpenVPN server, so it is effectively blocked from connecting to the OpenVPN server while on the network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter.

    Alternatively: Would it be better to have the firewall reflect the port back to the OpenVPN server and let it connect? Or have our internal DNS resolve the name to the private IP and allow them to connect directly? Would traffic then go over the VPN connection (which I do not want when already on the physical network)? Or is it possible to tell it to ignore the connection when the client and server are already on the same network?

    TLDR: What's a sane way of handling the OpenVPN client running as an always-on service when the client and server will often be on the same network?
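
    If the unreachable-server case is left as-is, the retry chatter can at least be throttled in the client config; a sketch (the interval is an arbitrary example):

        # client config - slow down reconnect attempts when the server is unreachable
        connect-retry 300
        resolv-retry infinite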

  • Shortcut To Full Screen App In Lion

    - by omghai2u
    I postponed getting OS X Lion for as long as I possibly could. Now that I have it, I'm having lots of difficulty getting it to perform how I want.

    On Snow Leopard, my typical setup for working was 4 spaces. I'd keep a Windows VM open on space #4 full-screened, a Linux VM open on space #3, and I'd do other stuff on spaces #1 and #2. My keyboard shortcuts allowed me to switch between my Windows work (Command + 4) and my Linux work (Command + 3) very quickly, without my hands leaving the keyboard (or, effectively, even pausing my typing). Productivity was good.

    I see that on Lion a full-screened VM (and yes, they need to be full-screened - Fusion's Unity won't cut it for what I need to do) is its own separate desktop. I have set up 4 desktops and made my keyboard shortcuts to move between them (Command + #) just as before. But how do I get my full-screened VM to be one of those already-existing desktops? Or, rather, how do I make a shortcut for the full-screened app?

  • No network connection for VMware ESXi guests

    - by JavaDev
    I'm new to VMware, and I'm setting up an ESXi server as a trial, with the intention of possibly virtualizing some of our servers in the near future. I have set up ESXi on a Dell PowerEdge server and installed a CentOS 5.6 and an Ubuntu 11.04 guest OS on it. However, I cannot get networking to work in my guest OSes.

    The host is connected to a network with a DHCP server via a switch, and is configured with a static IP. I have the default networking setup on the host: both guests are connected to the default vmnic1 adapter via the virtual switch vSwitch0. One thing, though: the virtual adapter shows 'Observed IP ranges' of XXX.XXX.XXX.194-XXX.XXX.XXX.195 (I've blanked out the initial prefixes), i.e. just a couple of addresses, even though the network the host is connected to has the usual 255.255.255.0 subnet mask.

    On the guest machines (using DHCP) I can see an eth0 interface by default, but with no connection or assigned IP address. A physical machine connected to the network gets a DHCP lease as expected. How do I get networking working on my guest OSes? Apologies for the long-winded question.
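
    A couple of things that could be checked from the ESXi console - a sketch; a very narrow "Observed IP ranges" value often means the uplink NIC is on a different VLAN or port group than expected:

        # List vSwitches, their uplinks, port groups and VLAN IDs
        esxcfg-vswitch -l
        # List physical NICs and their link state/speed
        esxcfg-nics -l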

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have SSH keys between the boxes set up to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet, when run by cron, the script fails. The path to sshfs is identical to the value of "which sshfs". Here is the email root receives from the cron daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>

        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance; it seems odd given that the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
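
    The cron email itself hints at the likely culprit: cron's PATH is only /usr/bin:/bin, and while the script calls sshfs by full path, the FUSE library then looks up its mount helper (mount_fusefs) via PATH. A minimal sketch of the fix, assuming the helper lives under /usr/local or /usr/sbin:

        #!/bin/sh
        # Give cron the PATH a login shell would have, so the FUSE mount
        # helper can be found (its exact location is an assumption)
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server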

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: I have a virtual private server that I'm looking to host multiple websites on, and to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.

    The problem: retaining control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and perform other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - to become root and remove root control from me, for example.

    I need him NOT to be able to:
      • take away other admins' permissions
      • change the root password
      • control other security/administrative functions

    I would like him to still be able to:
      • install software (through apt-get)
      • restart apache
      • access mysql
      • configure mysql/apache
      • reboot
      • edit web-development configuration files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option. The lists above are more what I'm hoping I can do; I don't know whether that setup is possible. I'm sure people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
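
    A rough sudoers sketch along the requested lines (the username and paths are assumptions, and the usual caveat applies: apt-get and an unrestricted editor can each be leveraged into full root, so this guards against accidents more than against a determined attacker):

        # /etc/sudoers.d/webdev - hypothetical rules for a user named "webdev"
        Cmnd_Alias WEB_SVC  = /usr/sbin/service apache2 *, /usr/sbin/service mysql *
        Cmnd_Alias PKG      = /usr/bin/apt-get update, /usr/bin/apt-get install *
        Cmnd_Alias POWER    = /sbin/reboot
        Cmnd_Alias WEB_CONF = /usr/bin/vim /etc/apache2/*, /usr/bin/vim /etc/mysql/*

        webdev ALL = (root) WEB_SVC, PKG, POWER, WEB_CONF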

  • How can I automatically edit an email before auto-forwarding it?

    - by Miss Cellanie
    Is there a way to automatically edit emails before forwarding them? I'm getting email notifications from Foursquare that I want to send to my phone as text messages. I know how to send messages to my number using an email address (I'm in the US and use Verizon), but I don't know how to strip out unnecessary formatting, like HTML, before the email gets sent.

    What I want:
      • the ability to strip out HTML
      • the ability to start forwarding at a specific part of the email based on a search (e.g., I might know that Foursquare starts their messages with "Hey hey!" and only want content after that phrase occurs)
      • the ability to truncate at 160 characters

    Things I've tried:
      • I'm not using Foursquare DM pings through Twitter, because I have two Twitter accounts and Twitter only allows a phone to be linked to one account at a time. I'm not willing to change which account it's linked to.
      • I tried to work around the Twitter limitation using Google Voice, but they don't support SMS short codes.

    I'll compromise on the features I want if I can find a free solution that doesn't require me to set up my own server. I do think this is computer related, because it will happen on my computer, not on my phone.

    Edit: My current setup is Gmail in Firefox 3.0.15 on Windows XP. I use a netbook as my only personal computer. However, if the only way to accomplish this well is to set up my own mail server or something, I would still want to know that.

  • Basic connectivity issues between Win 7 and XP mixed wired/wireless network. [Solved]

    - by Pulse
    Setup:
      • Windows 7 x64 Ultimate desktop, hard-wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware)
      • several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2
      • Windows XP Pro SP3 notebook with a D-Link AirPlus wireless network card
      • no firewall or other security software currently running on either platform (at least for the duration of the test)

    Situation:
      • the router is acting as DHCP server; clients are receiving correct addresses and additional parameters
      • internet connectivity is available from all clients
      • Windows 7 sharing is set to network type = Work (not HomeGroup)
      • NetBT is disabled on all clients, using SMB over TCP

    What I can do:
      • ping the router and internet addresses from the wireless XP notebook
      • ping the Win 7 desktop and any VM from the XP wireless notebook
      • ping all devices from the router
      • all the VMs and the Win 7 desktop can ping each other and the router, as well as internet addresses

    What I can't do:
      • ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns a "destination host unreachable" error. Tracert resolves the name of the XP notebook but also returns "destination host unreachable".

    From the above it would seem that something is blocking connectivity in a single direction only (from the Win 7 box to the Win XP notebook), yet the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks

  • Multiple Routers, Failover, DHCP and multiple gateways. NOT WAN-failover

    - by u_b
    I've had a look around Google and this forum but could not find an answer to my question, so probably one of you can help me a little. My intended setup is:

    Router R1:
      • WAN connection to the ISP
      • connected backup server
      • provides some wireless SSID
      • other connected devices like printer, laptop, etc., both wired and wireless

    Router R2:
      • no WAN connection to the ISP, but connected to R1
      • connects an MP3 streamer and a music server
      • also serves as a wireless access point with the same SSID
      • apart from the connections described, only wireless connections

    I would like to be able to control music even if R1 is off, e.g. with no internet connection. On the other hand, I would like to access the internet even in the case that R2 is off, i.e. with no music access. Last but not least, I would like to stream internet radio, i.e. R1 and R2 are on, and music is streamed from the internet to R1 to R2 to the streamer. I would like to realize all this using DHCP (also using static assignments) so I don't have to configure things statically on Android, laptop, etc.

    So my question is: can I make DHCP provide a list of two default gateways, R1 and R2, in order to make clients fall back to the other gateway if the currently assigned gateway is turned off? Thanks in advance, u_b
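
    DHCP can hand out more than one router in option 3, though most clients simply take the first entry and never fail over on their own; a sketch of what that looks like on a dnsmasq-based router firmware (the addresses are assumptions):

        # dnsmasq.conf - offer two default gateways via DHCP option 3
        dhcp-range=192.168.1.100,192.168.1.200,12h
        dhcp-option=option:router,192.168.1.1,192.168.1.2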

  • Load balancing with nginx and Tomcat

    - by London
    Hello - this should be fairly easy to answer for any system admin. The problem is that I'm not a server admin, but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access those by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, they are transferred to either machine1:8080/appName or machine2:9090/appName? Thank you
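
    One likely fix - a sketch, untested here: give proxy_pass a URI part, which makes nginx replace the matched location prefix with that URI when forwarding, so / on the front end maps to /appName/ on the backends:

        location / {
            # ... same proxy_set_header lines as above ...
            proxy_pass http://backend/appName/;
        }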

  • Windows 2008 R2 DHCP Overlapping Scopes

    - by Buska
    We are trying to troubleshoot a scope overlap problem. We have multiple device types we wish to give different ranges of a 16-bit subnet. E.g. X devices we wish to give 192.168.2.1-192.168.2.254/16, Y devices we wish to give 192.168.3.1-192.168.3.254/16. We are trying to accomplish this by creating different scopes and using the option 60 vendor class identifier. The problem is that DHCP won't allow us to create these scopes with 16-bit masks because of the potential overlap. We aren't overlapping the address pools, so why does DHCP care, and can we work around it? If this isn't possible, how can I assign specific ranges by device type without creating multiple scopes? Any thoughts would be helpful.

    UPDATE:
      • entire scope is 192.168.0.0/16
      • gateway is 192.168.1.1/16
      • device hardware A: 192.168.20.1-192.168.20.254/16
      • device hardware B: 192.168.26.1-192.168.26.254/16
      • device hardware C: 192.168.85.1-192.168.85.254/16

    We tried to set up multiple scopes, one for each device type (A, B, C), but couldn't specify a 16-bit mask, as scope A could technically overlap scope B even though our start and end addresses don't. I hope this makes more sense. Thanks for your thoughts.

  • How do I install Photoshop CS2 in Wine w/ Creative Suite Installer?

    - by kellishaver
    I'm running Ubuntu 9.10 and want to install Photoshop CS2 in Wine (wine1.2). From what I've read, the Photoshop installer and application should both run fine. However, I don't have a specific installer for Photoshop alone: the setup program on the CD is for the entire Creative Suite 2 bundle. When I try to run it, I get through the splash screen, license agreement, and language selection screens, but when I click the button to start/customize the installation, the installer dies.

    The Photoshop CS2 folder on the CD has two exe files, instmsia.exe and instmsiw.exe, and I tried those, hoping to find a stand-alone Photoshop installer, but neither works. I tried downloading a trial, but my license key is apparently for the entire bundle, because it didn't work.

    Does anyone know of a workaround for this, or a way to make the Creative Suite installer work? I'm currently running Photoshop under a WinXP VM, but it would be nice to have the option of using it via Wine, so I don't have to boot the VM every time I want to edit an image (reading/writing my Ubuntu shares is also really slow in VirtualBox). Thanks!
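
    For what it's worth, instmsia.exe and instmsiw.exe are typically the Windows Installer redistributables rather than the application installer, so they were unlikely to help. One thing that sometimes aids debugging when an installer dies silently - a sketch, not a guaranteed fix: run the bundle installer in a fresh prefix with output captured, so the point of failure is at least visible (the disc path is an assumption):

        # Use a throwaway prefix so a broken install doesn't pollute the default one
        export WINEPREFIX=~/.wine-cs2
        winecfg                      # creates the prefix; set Windows version to XP
        cd /media/cdrom              # wherever the CS2 disc is mounted
        wine setup.exe > ~/cs2-install.log 2>&1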

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I have tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few permission problems when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 ends up trying to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works.

    After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way there would be only one user, who has all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate - but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?
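
    For reference, the alias-domain approach boils down to a single vhost along these lines - a sketch with assumed paths; ISPConfig would normally generate the equivalent for you:

        <VirtualHost *:80>
            ServerName my_site.com
            ServerAlias my_site.net my_site.org
            DocumentRoot /var/www/clients/client1/web1/web
            # All three domains now run PHP as the same FastCGI user (web1),
            # which avoids the cross-user write problem described above
        </VirtualHost>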

  • Postgresql Data Aggregation over WAN Securely

    - by Zach
    Hey guys, I need some advice on how to proceed with this situation. My current scenario is that I have several PostgreSQL boxes (50+) deployed throughout various locations and data centers, and a beefy PostgreSQL box set up at a homebase location. All of the deployed boxes have identical database layouts. I'm looking for a solution that would allow for a few things. I realize some of these options overlap and some might contain mutually exclusive solutions; however, I'm interested to hear your thoughts :)

      1. Remotely query the deployed boxes and pull the results back to the homebase box for processing.
      2. Nightly (remote) "sync" or dump of the deployed boxes' databases to a master database on the homebase box.
      3. Remotely push a table entry to all of the deployed boxes from the homebase box.
      4. Ensure security of the data in transit, and of the remotely deployed boxes.

    Up to this point I've been floating on a homebrew multithreaded Python/Perl system that SSHes into these boxes remotely (they are ACL'ed off to the homebase server) and pulls (or pushes) the raw query results over the SSH connection. I haven't even touched #2 (remote syncing), as I know that would get nasty really quickly. I'm interested in any ideas for a more elegant solution that can scale up and stick to my FreeBSD/Linux environment.
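
    A minimal sketch of the querying piece (#1) without the homebrew layer: run psql through an SSH tunnel so the data in transit stays encrypted (the host, database and query names are hypothetical):

        #!/bin/sh
        # Pull one result set from a deployed box back to homebase over SSH
        ssh -f -N -L 6543:localhost:5432 postgres@remote1.example.com
        psql -h localhost -p 6543 -U postgres -d mydb \
            -c "SELECT count(*) FROM some_table;" > remote1.out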

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. I.e., if content is uploaded to a web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right?

    I met with a representative of our existing hosting provider recently, and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this?

    I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down.

    I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

  • What differences are there between "home" switches and "professional" switches?

    - by pjreddie
    Our radio station uses a PtP wireless system to stream our radio and TV signals from our studio up a hill to our transmitter. We have been having problems with warbly sound and dropouts that originate at some point in this system. An engineer who occasionally visits the station thinks it could be the switches we use on each side of the PtP wireless link to connect the PtP devices to the encoders and decoders, and wants us to get two of these switches:

        http://www.amazon.com/Netgear-JGS516-ProSafe-16-Port-Ethernet/dp/B0002CWPOK/ref=dp_return_1

    The encoder/decoder setup only streams 8 Mbps in total, so it seems like the switches we have should not be stressed, unless they are adding enough latency to degrade the performance of the encoder/decoder. At each end of the connection we only have 4 connections, so is there any reason we couldn't get a cheaper, "home" quality switch like this:

        http://www.amazon.com/D-Link-DGS-1005G-5-Port-Gigabit-Desktop/dp/tech-data/B003X7TRWE/ref=de_a_smtd

    Is there a significant difference in latency that we would notice between these two switches? How much does the quality of the switch actually matter in this scenario? Any help is appreciated - feel free to ask questions if anything needs clarification. Thanks

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup:
      • 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on its own subnet
      • all connect back to the datacenter through a 3 Mbps WAN
      • ERP server running in the datacenter, accessed by clients at all sites

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients. It determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works.

    The client has an autoupdate feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites?

    Example: Client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

      1. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
      2. DNS at site A returns 192.1.1.5
      3. Client 1 pulls the update from 192.1.1.5
      4. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
      5. DNS at site B returns 192.1.2.5
      6. Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation - I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
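
    One common way to get exactly that behaviour on Windows DNS is a small, non-replicating standard primary zone for the update host on each site's DC, each holding a different A record; a sketch using dnscmd (the zone name assumes a corp.local AD suffix, and the IPs are the example values above):

        rem On the site A DC: a local standard primary zone for the update host
        dnscmd /ZoneAdd erpupdate.corp.local /Primary /file erpupdate.corp.local.dns
        dnscmd /RecordAdd erpupdate.corp.local @ A 192.1.1.5

        rem On the site B DC: same zone name, different address
        dnscmd /ZoneAdd erpupdate.corp.local /Primary /file erpupdate.corp.local.dns
        dnscmd /RecordAdd erpupdate.corp.local @ A 192.1.2.5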
