Search Results

Search found 5709 results on 229 pages for 'behind the scenes'.

Page 162 of 229

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g. /chroot/website1, /chroot/website2. I'm using makejail, a high-level tool, to create the jails and copy in the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot via its configuration, but I'm not using that feature (maybe I should?). My question is which of the following three approaches is better:

    1) Every website gets its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most involved.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chroot'ed folder. This is quite manageable - however only PHP is chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find, but good chroot guides are scarce. Any help or input is much appreciated!
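
    A minimal sketch of option 2, assuming php5-fpm pool files under /etc/php5/fpm/pool.d/ (the pool name, port and paths are placeholders, not taken from the question):

        ; /etc/php5/fpm/pool.d/website1.conf
        [website1]
        listen = 127.0.0.1:9001
        user = website1
        group = website1
        chroot = /chroot/website1
        ; everything php-fpm sees below here is relative to the chroot
        chdir = /

    On the nginx side, SCRIPT_FILENAME then has to be expressed relative to the chroot as well:

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
            # path inside the jail, not /chroot/website1/public/...
            fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
            include fastcgi_params;
        }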


  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes. At least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if the local IP address is known, as well as the port. I haven't actually tried port scanning a specific IP using nmap or another tool yet. So I've tried one solution to remove the contribution of the router entirely (to verify that it works without the router's influence). I set up an access point using my iPhone and tethered my computer (with the library) to it. From there, I was able to pair my library and the iPhone Remote application, and control of the library was normal as well. This solution is not ideal, however, because I am actively using bandwidth with my computer and cannot afford to be tethered to my 3G connection. A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server is set up with pptpd as the VPN server; I'm using iptables to perform IP masquerading and traffic forwarding. However, I still cannot connect the library to the phone. I can, however, see both devices on the VPN subnet (192.168.0.0/24). SSH'ing and such works fine. What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to various PPTP clients based on MAC addresses?
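
    Two hedged notes that may help here. First, pptpd assigns addresses per PPP username rather than per MAC address; static assignments go in the last column of /etc/ppp/chap-secrets (the usernames and secrets below are placeholders):

        # client    server   secret     IP address
        iphone      pptpd    secret1    192.168.0.240
        laptop      pptpd    secret2    192.168.0.241

    Second, iTunes/Remote pairing is discovered via Bonjour (multicast DNS), which is link-local and does not cross a routed VPN on its own. One common workaround, assuming Avahi is available on the server, is enabling its reflector so mDNS is repeated between the VPN links (point-to-point interfaces may also need to be allowed in the [server] section):

        # /etc/avahi/avahi-daemon.conf
        [reflector]
        enable-reflector=yes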


  • How to remove request blocking on apache reverse proxy after failure of backend before asking backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind Apache is down, the first request to Apache shows the error page, of course. But for subsequent requests Apache seems to wait for some time before asking the backend server again. During all this time (which is short, but in development I don't want any delay at all) only the Apache error page is shown to the browser, even though the backend server is already up again. Where is this setting in Apache, what is this behaviour called, and how can I set the delay time to zero?

    Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. In my experience, Apache blocks further requests for a certain time before asking a backend server again once it has failed.

    Edit 2: This is what Apache delivers:

        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80

    After hitting Ctrl-R in Firefox for about 60 seconds, the page finally appears.
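
    The roughly 60-second blackout matches mod_proxy's default worker retry interval (60 seconds after a backend is marked in error). A sketch of turning it off, assuming a vhost that proxies to a local backend - the backend URL is a placeholder:

        ProxyPass        / http://127.0.0.1:8080/ retry=0
        ProxyPassReverse / http://127.0.0.1:8080/

    With retry=0 Apache retries the backend on every request instead of serving the error page for the remainder of the retry window.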


  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box (C) via SSH & Samba that is hidden/connected behind another one (B). Setup:

           A            switch          B               C
        |----|          |---|        |----|          |----|
        |eth0|----------|   |--------|eth0|          |    |
        |----|          |---|        |eth1|----------|eth1|
                                     |----|          |----|

    E.g. SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234" and B would say "hi on eth0, traffic for port 1234 goes out here on eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work?

    IP info:
    A - eth0 - 192.168.109.2
    B - eth0 - 192.168.109.15
      - eth1 - 192.168.0.1
    C - eth1 - 192.168.0.2

    A, B & C are RHEL (Red Hat), but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs, so they are changeable. Any help?
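
    A minimal port-forwarding (DNAT) sketch on B, assuming forwarding is enabled in the kernel; the outside ports chosen here (2222 for SSH, 445 for Samba) are arbitrary placeholders:

        # on B
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # SSH: A connects to 192.168.109.15:2222, B rewrites it to C:22
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.0.2:22
        # Samba: forward TCP 445 straight through to C
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 445  -j DNAT --to-destination 192.168.0.2:445
        # let the forwarded traffic through, and make C's replies return via B
        iptables -A FORWARD -d 192.168.0.2 -p tcp -m multiport --dports 22,445 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth1 -d 192.168.0.2 -j MASQUERADE

    With the MASQUERADE rule, the daemons on C see B's eth1 address as the client, which SSH doesn't mind; if preserving A's source address matters, drop that rule and instead point C's route for 192.168.109.0/24 at 192.168.0.1.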


  • AWS lighttpd: Sending a copy of requests to test.

    - by Martin
    I have a load balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server runs lighttpd, which does logging and forwards the requests to my service (on the same machine). I have written a new version of the service. It is installed and running on an EC2 machine, test1 (basically a mirror of our current server, but with the new service running instead of the original), and I have done some preliminary tests that look good. What I would like to do is mirror a fraction of incoming traffic to the new version of the service, so I can do some comparisons between the original version and the new version based on real traffic. Thus I was thinking I could modify one box behind the ELB to duplicate its traffic to test1. I was thinking I could modify the configuration of lighttpd so that each request is mirrored/duplicated, i.e. the original service keeps responding as before, but a mirror request is also sent to test1 and its reply is just dropped. Unfortunately I have not been able to work this out. Any ideas on how I could mirror the requests from one box to both itself and test1? Or any other ideas for testing?


  • Forward differing hostnames to different internal IPs through NAT router

    - by abrereton
    Hi, I have one public IP address, one router and multiple servers behind the router. I would like to forward different domains (all using HTTP) through the router to different servers. For example:

        example1.com     => 192.168.0.110
        example2.com     => 192.168.0.120
        foo.example2.com => 192.168.0.130
        bar.example2.com => 192.168.0.140

    I understand that this could be accomplished using port forwarding, but I need all hosts running on port 80. I found some information about IP masquerading, but I found it difficult to understand and I am not sure it is what I am after. Another solution I have found is to direct all traffic to a reverse proxy server, which forwards the requests on to the appropriate server. What about iptables? I am using a Billion 7404 VNPX router - does it have a feature that can accomplish this? Are these my only options? Have I missed something completely? Is one recommended over the others? I have searched around but I don't think I am hitting the correct keywords. Thanks in advance.
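
    For what it's worth, a NAT router's port forwarding can only look at ports, not Host headers, which is why the usual answer is the reverse-proxy one: forward port 80 to a single box that proxies by hostname. A sketch with nginx (hostnames and IPs copied from the question, everything else generic):

        server {
            listen 80;
            server_name example1.com;
            location / { proxy_pass http://192.168.0.110; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name foo.example2.com;
            location / { proxy_pass http://192.168.0.130; proxy_set_header Host $host; }
        }
        # ...and one server block each for example2.com -> 192.168.0.120 and bar.example2.com -> 192.168.0.140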


  • Accessing network shares on Windows7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot seem to access shared filesystems. Windows is refusing to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally you can open the Network and Sharing Center and modify this setting, but it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck. I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password. I also tried Windows 7's native VPN functionality - it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there. What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency not to give me much useful information that might let me understand what it is trying to do and what is going wrong.
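
    One small thing worth trying, since the credential prompt never appears: map the share while passing domain credentials explicitly (drive letter, share and account names below are placeholders):

        net use Z: \\192.168.1.240\share /user:OFFICEDOMAIN\jlloyd *

    The trailing * makes it prompt for the password; if that works, the problem is the missing credential prompt rather than the VPN itself.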


  • Reboot fails with "Invalid partition"

    - by Mike Clark
    My laptop can't reboot. Any time something restarts the laptop (e.g. to apply Windows updates, or Start Menu > Restart, etc.), the computer sits at a black screen with the message "Invalid partition" displayed in console text. When this happens, I power off the computer, then power it back on, and it boots up fine.

    OK, now the history behind this: this laptop is a new Dell. The day I got it, I used gparted to reclaim 30 GB of disk space that had been allocated to a "recovery partition" in the middle of the laptop's primary drive. (I have DVDs for recovery and I didn't want to waste 30 GB of SSD space on recovery data.) So I used gparted to delete the recovery partition and resize the primary Windows partition to use up the new free space. As expected when resizing a boot partition, the computer would no longer boot. I used the Windows Recovery Console to fix the boot process:

        FIXMBR C:
        FIXBOOT C:
        BOOTCFG /rebuild

    This worked fine and the computer boots up fine. But, as mentioned earlier, the laptop still can't reboot. Any idea how to fix this without completely reformatting the disk and reinstalling Windows from scratch? It's Windows 7.
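
    Side note, hedged: FIXMBR/FIXBOOT/BOOTCFG are the XP-era Recovery Console commands. On Windows 7 the equivalents live in the recovery environment's command prompt (boot the install/repair disc and choose "Repair your computer"), and re-running them after a partition table change is a reasonable first step:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd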


  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got Server 2008 R2 via DreamSpark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its own Internet traffic through its other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install tucked as far behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks; however, I don't trust Windows Server because I don't know the OS well enough yet. So, three questions: How do I do this? Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)? And should I virtualize the Server 2008 R2 rig instead, and if so in what? (Core 2 Duo E6600 w/ 5 GB usable RAM)


  • Stream video file in debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong. With VLC (version output below) I've gone through everything I could in the streaming section, but I can't get the stream to actually work:

        +-[ robert@s10 ]--[ ~ ]
        +[#!]¬ vlc --version
        VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc)
        VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc)
        Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37)
        Compiler: gcc version 4.6.2 (Debian 4.6.2-14)
        This program comes with NO WARRANTY, to the extent permitted by law.
        You may redistribute it under the terms of the GNU General Public License;
        see the file named COPYING for details.
        Written by the VideoLAN team; see the AUTHORS file.

    Looking around, apparently Debian strips the encoders from the package? I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts in real time. Has anyone done something similar?

        Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux

    Basic home network; I am behind a NAT (192.168.1.*) and have dynamic DNS set up. That doesn't really matter too much, I can figure that out, but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). Not worried about installing new packages or building something from source, that's not a big issue, just want to get it working. Big plus if I can do it from the command line.
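
    A minimal command-line sketch of HTTP streaming with VLC; the file name and port are placeholders. Sending the streams out unchanged (no transcode chain) sidesteps any encoders missing from the Debian build, as long as the source codecs are something a TS mux can carry:

        cvlc myvideo.mp4 --sout '#standard{access=http,mux=ts,dst=:8080/stream}'

    Friends would then open http://your-dyndns-host:8080/stream in their own players (after port 8080 is forwarded on the NAT router).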


  • Max. Bandwidth Cannot be Reached

    - by Poyraz Sagtekin
    I have a question regarding bandwidth usage. I don't think it's a very technical question, but I really want to know the reason behind the problem. I'm studying at one of my university's campuses. There is 200 Mbps of bandwidth available for approx. 2,100 students. Our main campus has 25,000 students and 2 Gbps of bandwidth, and there is no problem with the internet connection speed there. However, on my campus we are having difficulties with the internet connection speed. I went to the IT department of my university and they showed me some graphs of the bandwidth usage. Usage has never reached 200 Mbps and is generally around 130-140 Mbps. So the maximum bandwidth is never actually reached by the users, yet browsing speed is still poor. What can the problem be? Maybe this is a silly question, but I really want to know the reason. They told me they have reached 180 Mbps before.


  • Windows 32-bit and 64-bit and GPT

    - by MrLane
    I know similar questions have been asked before across several sites, but the answers, at least to me, have been confusing and conflicting. My understanding has always been that 64-bit Windows will create and use GPT disks just fine, but will not boot from them without a UEFI BIOS. Also, my understanding WAS that 32-bit Windows could not use GPT at all and so is always restricted to 2.2TB disks, which was another reason to move to 64-bit on top of the 4GB memory limit. But I have now read that this isn't correct: 32-bit Windows will create and use GPT disks just as 64-bit does. The only restriction is that you can't boot 32-bit Windows from them even if you DO have a UEFI BIOS? I don't think much of the literature has explained this well. There are several tools floating around for creating virtual disks or 2.2+.8GB partition schemes and such for 32-bit systems - why, when it seems you can use GPT in 32-bit Windows anyway? It also seems that people blame MS for lagging behind with respect to all of this, but it seems the issue is BIOS manufacturers not supporting UEFI rather than MS not supporting GPT... Is my new understanding correct?


  • Can't find disk usage in one directory

    - by Xster
    Similar questions are asked frequently but no suggested answers solved my issue. I have some disk space usage that I can't find as well. In df:

        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      144183992 136857180      2652 100% /
        udev             2013316         4   2013312   1% /dev
        tmpfs             808848       876    807972   1% /run
        none                5120         0      5120   0% /run/lock
        none             2022116        76   2022040   1% /run/shm
        overflow            1024         0      1024   0% /tmp

    I checked the inodes, I checked lsof for +L1 or deleted files, I rebooted, I checked for files hidden behind mounts, but none of them were the issue. It grows periodically and I'm running out of things to delete to feed the beast. It's all in the home directory of the only user I have. In du in ~ (du -h --max-depth=1):

        192K    ./.nv
        2.1M    ./.gconf
        12K     ./Pictures
        1.6M    ./.launchpadlib
        12K     ./Public
        24K     ./.TemporaryItems
        8.9M    ./.cache
        12K     ./Network Trash Folder
        28K     ./.vnc
        11M     ./.AppleDB
        48K     ./.subversion
        1.9G    ./.xbmc
        8.0K    ./.AppleDesktop
        12K     ./.dbus
        81M     ./.mozilla
        12K     ./Music
        160K    ./.gnome2
        44K     ./Downloads
        692K    ./.zsh
        236K    ./.AppleDouble
        64K     ./.pulse
        4.0K    ./.gvfs
        1.4M    ./.adobe
        44K     ./.pki
        44K     ./.compiz-1
        168K    ./.config
        1.4M    ./.thumbnails
        12K     ./Templates
        912K    ./.gstreamer-0.10
        8.0K    ./.emacs.d
        92K     ./Desktop
        1.3M    ./.local
        12K     ./Ubuntu One
        12K     ./Documents
        296K    ./.fontconfig
        12K     ./.qt
        12K     ./.gnome2_private
        20K     ./.ssh
        20K     ./.mission-control
        12K     ./Videos
        12K     ./Temporary Items
        640K    ./.macromedia
        124G    .

    I can't find a way to figure out how it got to that 124G in that directory. There are no mount points in home.
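
    One thing the per-directory listing above cannot show is large files sitting directly in ~ (du --max-depth=1 prints subdirectory totals plus the grand total), and the roughly 122G gap between the listed subdirectories and the 124G total points there. A quick sketch that lists files as well as directories, assuming GNU coreutils:

        du -ahx ~ | sort -rh | head -n 25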


  • Skype Connect as SIP/Trunk for Asterisk

    - by Kaurin
    First off: I'm not sure if this should be on Super User or here. I have recently built a few Asterisk boxes with OpenVox FXO/FXS ports with little or no trouble. My current project is building an Asterisk box with SIP trunks. My current employer insisted on getting Skype Business/Skype Connect for that purpose. After reviewing the Skype Connect plan I agreed, because I thought it was going to be straightforward: purchase G.729 licences and set up the SIP trunk(s). Boy was I wrong :)

    Here is the setup (for calling US numbers only via Skype - we got US minute bundles in Skype Connect):

        AsteriskNOW - Asterisk 1.4 + asterisk-gui
        Trunks: one SIP trunk configured with Skype Connect - shows as registered
        Users: 2 test extensions. Both work fine when calling each other; voicemail etc. works fine too

    The Asterisk box is behind a MikroTik router which I configured to forward all relevant ports: 5060-5090 UDP, 10000-20000 UDP. When trying an extension from outside my LAN, it worked; I could make calls to the other extension.

        Outgoing rule: _NXXXXXXXXX  Strip: 0  Prepend: +1  Use Skype trunk
        Inbound rule:  Trunk: Skype  Pattern: s  Destination: Extension1 (6210)

    Here is the output of the Asterisk CLI (-rvvvvv) for outgoing calls: http://pastebin.com/eWVpL72e - you can see the circuit-busy response when using trunk1 (Skype). When calling my Skype Connect number from the outside, I get nothing in the logs. Can anyone with Skype Connect / Asterisk experience help out? :)
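
    For reference, the outgoing rule above corresponds roughly to this extensions.conf pattern (the trunk name skype-connect is a placeholder for whatever asterisk-gui actually generated):

        exten => _NXXXXXXXXX,1,Dial(SIP/skype-connect/+1${EXTEN})
        exten => _NXXXXXXXXX,n,Hangup()

    A circuit-busy right after the INVITE is often codec negotiation rather than routing; it may be worth confirming that the trunk's peer entry actually allows G.729 (e.g. disallow=all plus allow=g729 in its sip.conf section) and that the purchased G.729 licences are really loaded on the box.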


  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB2 bootloader.

    The motivation: I have 4 x 1TB 7200rpm disks. Two are newer and faster (up to 200MB/sec) and the other two are slower (up to 140MB/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480MB/sec. That is roughly 4*120MB/sec, so the RAID-0 works at the speed of the slowest device. My idea is to create a separate RAID-0 device, md0, from 500GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2*140 = ~280MB/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200 = 600MB/sec. The stripe width for this outer RAID will be twice as large as for the underlying RAID of slow disks.

    Questions: is it possible, or am I missing something? Will that work as expected? Can I boot from such a consolidated RAID device? Any better ideas? Any pitfalls? I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on).

    PS: Speed is needed for a home virtualization server and just for experience/fun. Reliability is provided via regular automatic backups to a separate device.

    PPS: I also considered using different stripe widths for the hard disks with different speeds within a single RAID, but mdraid does not seem to support that.
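
    A minimal sketch of the layered layout with mdadm (device names are placeholders - assume sdc/sdd are the slower disks and sda/sdb the faster ones):

        # inner RAID-0 over the two slower disks (500 GB partitions on each)
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
        # outer RAID-0 over the two fast disks plus the slow-disk array
        mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0

    Booting tends to be the sticking point with striped arrays; a common workaround is a small separate /boot partition (or RAID-1 pair) that GRUB2 can read directly, with only the root filesystem living on the nested stripe.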


  • What characteristic of networking/TCP causes linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment, but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a 1-minute period, the response from the server is instantaneous. However, if we slowly increase client activity, the latency of the server response increases with a linear relationship (each packet takes more time to reach the client as activity rises). For those wondering, this is NOT app-related, since our logs show that our server is running and responding to requests in under 100ms as desired. The delay starts once the server processes the request, creates the TCP packet and sends it to the client (and not the other way around).

    Architecture: this new environment runs with a virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture.

    Theory: could this somehow be related to something buffering the packets in the new environment? Thanks for your help.


  • Completely automated DVD insert-rip-compress-eject workflow

    - by Kevin L.
    (Partially inspired by this question.) Background: I have a PC hidden away behind an HD LCD in a custom-built entertainment center. The only visible part of the PC is an external DVD drive, mounted above the Wii. The PC happens to have Windows XP on it; Hackintoshing and Linux might be possible, but I've had issues with drivers for the sound card before. Let's just assume that OS X and Linux are a no-go unless they provide a truly awesome and simple solution for this particular problem.

    Goal: I would like to have a completely automated workflow for ripping DVDs. Something like this:

    1. Push the eject button on the DVD drive, insert the DVD.
    2. PC recognizes that this is a video DVD (as opposed to data).
    3. PC rips the DVD to the hard drive.
    4. PC finishes ripping, and ejects the DVD tray.
    5. PC compresses the DVD image into some format that an Xbox 360 can read.
    6. PC copies the finished compressed video file to a particular folder, so that it can be read into a WMP11 library and seamlessly played by the Xbox 360.
    7. PC cleans up all temporary files.
    8. Done.

    The impetus to have this be completely automated is that I’ll never need to switch the TV to the PC’s input and fiddle with the wireless keyboard. That’s just needless user intervention. The UI doesn’t have to be pretty. Nor do I care about speed. And I can probably bridge several of the gaps with some creative Perl use. But it seems likely that many (or all) of the parts should already exist. Any thoughts?
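
    A hedged sketch of step 5: HandBrakeCLI can read a video DVD directly and produce an MP4 the Xbox 360 plays, so the rip and compress stages can even collapse into one command. The drive letter, output path and preset name below are placeholders:

        HandBrakeCLI -i E: --main-feature -o "D:\Videos\ripped-title.mp4" --preset="High Profile"

    The insert/eject automation around it (steps 1-4 and 7) would still be the Perl/AutoPlay glue described above.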


  • Getting a TTY in a Connectback Shell

    - by Asad R.
    I'm often asked by friends to help with small Linux problems, and more often than not I'm required to log in to the remote system. Usually there are a lot of issues with making an account and logging in (sometimes the box is behind a NAT device, sometimes sshd isn't installed, etc.), so I usually just ask them to make a connect-back shell using netcat (nc -e /bin/bash <ip> <port>). If they don't have netcat I can just ask them to grab a copy of a statically compiled binary, which isn't that hard or time consuming to download and run. Though this works well enough for me to enter simple commands, I can't run any apps that require a tty (vi, for example) and can't use any job control functions. I managed to bypass this issue by running in.telnetd with a few arguments within the connect-back shell, which would assign me a terminal and drop me to a shell. Unfortunately in.telnetd isn't usually installed by default on most systems. What's the easiest way to get a fully functional connect-back terminal shell without requiring any non-standard packages? (A small C program that does the job would be fine as well, I just can't seem to find much documentation on how a TTY is assigned/allocated. A solution that doesn't require me to plough through the source code for SSHD and TELNETD would be nice :))
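
    One low-dependency trick, assuming Python is present on the remote box (it almost always is): spawn a pty from inside the connect-back shell.

        python -c 'import pty; pty.spawn("/bin/bash")'

    That is enough for vi and other curses apps; for job control the usual extra step is to background the local netcat and put your own terminal into raw mode with stty raw -echo before foregrounding it again. The script utility (running script /dev/null from within the shell) is another commonly available way to get a pty.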


  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed, e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of the Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?


  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need to do is have a permanent link to a printer, normally only accessible through Terminal Services (printer redirection), so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number each time, and therefore the Sage layout sees it as a different printer.

    History behind the question:

    - Users connect via Terminal Services to a Sage server on a different site
    - That server runs Sage Line 50 v15
    - Users want to print invoices (Sage layouts) locally
    - The Sage server cannot see the users' local printers; to get around this, users rely on the printer redirection feature of Terminal Services

    The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted, and it automatically prints that invoice to the default printer specified. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)" - note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session, reconnects in the morning and goes to print, it has lost the connection to that printer. I understand why it fails: the printer exists on a per-session basis, and the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...


  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com would go to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to a role that is partly about performance. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risking a network-wide outage... and so is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it / would do it best?

    And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in Apache 2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the Event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
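
    For what it's worth, nginx can express this particular example without per-vhost enumeration, because its server_name matching prefers exact names and then the longest matching leading wildcard - so *.dev.mysite.com beats *.mysite.com for task.dev.mysite.com. A sketch with hypothetical backend addresses:

        server {
            listen 80;
            server_name *.dev.mysite.com;
            location / { proxy_pass http://10.0.0.1; proxy_set_header Host $host; }    # Server 1
        }
        server {
            listen 80;
            server_name *.stage.mysite.com;
            location / { proxy_pass http://10.0.0.2; proxy_set_header Host $host; }    # Server 2
        }
        server {
            listen 80;
            server_name .mysite.com;    # shorthand for mysite.com plus *.mysite.com
            location / { proxy_pass http://10.0.0.3; proxy_set_header Host $host; }    # Server 3
        }
        server {
            listen 80 default_server;
            server_name _;
            location / { proxy_pass http://10.0.0.4; proxy_set_header Host $host; }    # Server 4 (catch-all)
        }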


  • OS/X 10.6 Bizarre login bug: Making alternative "Others..." appear. Why does this happen?

    - by bjornl
    I am studying at NUS in Singapore, and they have a Mac-equipped computer lab here at school. All users (students) have our own personal accounts that we use to log in to the computers. Sometimes when you approach a computer to log in, the only alternative shown is "thinkmac", which is the school's administrator account, I presume. Some other computers show the alternative "thinkmac" as well as "Others...", where you can input your own login credentials. One day I sat down at a computer and there was only the "thinkmac" alternative. I was about to get up and find another one when the guy sitting next to me said: "Just click 'thinkmac' - the computer will ask for your password - then hit escape to get back to the login screen. Repeat until 'Others...' appears."

    So: if you click any user account, hit ESC to get taken back to the login screen, and repeat 5-10 times, eventually the alternative "Others..." will appear. Why is this? Is there an internal counter that keeps track of how many times you have clicked a/any given user account, and after a certain threshold it displays "Others"? What is the logical reasoning behind this?


  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc... But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly". I know that it's Nginx that's causing the trouble, because if I open the port that Tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]), everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behaviour I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but this feels like a hack rather than a clean solution...
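
    A hedged sketch of proxy settings that commonly matter in front of git's smart HTTP protocol - the upstream port matches the question, but the rest are generic things to try rather than a known-good Gittornado configuration:

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering off;        # don't buffer pack data on the way back
            client_max_body_size 0;     # pushes can be arbitrarily large
            proxy_read_timeout 300;
        }

    Whether it is large pushes that fail or clones as well usually narrows it down to request body limits versus response buffering.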


  • Why is squid breaking kerberos/NTLM auth?

    - by DonEstefan
    I'm using squid 2.6.22 (the CentOS 5 default) as a proxy. Squid seems to break the authentication process for web pages when they require NTLM or Kerberos auth. I tested against SharePoint 2007 and tried all 3 authentication methods (NTLM, Kerberos, Basic). Accessing the site without squid works in all cases. When I access the same page through squid, only basic auth works. Using IE or Firefox doesn't make any difference. Squid itself can be used by anybody (no auth_param configured). It's a bit tricky to find solutions online, since most of the topics whirl around auth_param for authenticating users to squid rather than authenticating users to a web page behind squid. Could anyone help?

    Edit: Sorry, but my first test was totally screwed up. I tested against the wrong web servers (memo to myself: always check assumptions before testing). Now I've realized that the problem scenario is completely different:

    - Kerberos works for IE
    - Kerberos works for Firefox (after changing "network.negotiate-auth.trusted-uris" in about:config)
    - NTLM works for IE
    - NTLM does NOT work in Firefox (even after changing "network.automatic-ntlm-auth.trusted-uris" in about:config)

    By the way: the feature that provides NTLM passthrough in squid is called "connection pinning", and the corresponding HTTP header is "Proxy-support: Session-based-authentication".


  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP-chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward:

    - Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex, help? Here is the current fail2ban config line containing a regex:

        failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN

    - Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log?
    - Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)?
    - Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires?

    Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
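
    A sketch of the --log-prefix idea from the first bullet: tag only the over-limit SYNs with a fixed token and anchor the fail2ban regex on it, so non-matching kernel lines are rejected after a cheap literal comparison. The prefix string, port and limit numbers below are placeholders, not a reproduction of the existing throttle rules:

        # iptables: SYNs within the limit pass; anything beyond it gets logged with the token
        iptables -A INPUT -p tcp --syn --dport 5555 -m limit --limit 10/s --limit-burst 20 -j RETURN
        iptables -A INPUT -p tcp --syn --dport 5555 -j LOG --log-prefix "F2B-SYN: "

        # fail2ban filter: the literal prefix fails fast, and the trailing .*?SYN test goes away
        failregex = kernel:.*F2B-SYN: .*SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST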

