Search Results

Search found 2017 results on 81 pages for 'stage'.

Page 57/81 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and add delays at three points: between listen and accept to test proxy_connect_timeout, between accept and read to test proxy_send_timeout, and between read and send to test proxy_read_timeout.

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the Nginx default configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    proxy_connect_timeout - Even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept). I would expect 200 here. Please explain!

    proxy_send_timeout - I am not sure my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, a delay between accept and read does not trigger proxy_send_timeout.

    proxy_read_timeout - this one seems pretty straightforward: setting a delay between read and write does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the test above (modifying it if required)?
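
    One way to keep the defaults from masking each other is to set all three timeouts explicitly in the same location block - a minimal sketch only; the 200s values are arbitrary test values, not recommendations:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200s;
            proxy_send_timeout    200s;
            proxy_read_timeout    200s;
        }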

    Read the article

  • VLAN for WiFi traffic separation (new to VLANing)

    - by Philip
    I run a school network with switches in different departments, all routed through a central switch to reach the servers. I would like to install WiFi access points in the different departments and have their traffic routed through the firewall (an Untangle box that can captive-portal the traffic, to provide authentication) before it gets onto the LAN or out to the Internet. I know that the ports the APs connect to on the relevant switches need to be set to a different VLAN. My question is how to configure these ports: which are tagged, and which are untagged? I obviously don't want to interrupt normal network traffic. Am I correct in saying:

    - The majority of the ports should be UNTAGGED VLAN 1?
    - Those that have WiFi APs attached should be UNTAGGED VLAN 2 (only).
    - The uplinks to the central switch should be TAGGED VLAN 1 and TAGGED VLAN 2.
    - The central switch's incoming ports from the outlying switches should also be TAGGED VLAN 1 and TAGGED VLAN 2.
    - There will be two links to the firewall (each on its own NIC), one UNTAGGED VLAN 1 (for normal internet access traffic) and one UNTAGGED VLAN 2 (for captive portal authentication).

    This does mean that all wireless traffic will be routed over a single NIC, which will also increase the workload for the firewall. At this stage, I'm not concerned about that load.
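
    For a concrete picture, a hedged sketch of that port setup in HP ProCurve-style CLI syntax (all port numbers are hypothetical: 1-20 are normal LAN ports, 21-22 have APs attached, 24 is the uplink to the central switch):

        vlan 1
           untagged 1-20
           tagged 24
           exit
        vlan 2
           name "WiFi"
           untagged 21-22
           tagged 24
           exit

    The same tagging would be mirrored on the central switch's ports facing the outlying switches; other vendors express the identical idea with different keywords (e.g. "trunk" ports on Cisco).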

    Read the article

  • can't get to admin page after factory reset netgear wg602

    - by stefanB
    I have a wireless Netgear WG602 on my home network (connected to my internet modem/router). I had it secured and locked down to only accept connections from specific MAC addresses. I've forgotten the password I used, but my MacBook laptops can still connect (across multiple OS updates - the Mac can't retrieve and display the password, but it can still use it to log in to WPA), so I want to reconfigure the AP from scratch (I have some new devices). I tried to reset the WG602 to factory settings (pressed the reset button for 10 sec), set my laptop's IP address to the local address suggested in the manual (192.168.0.210, netmask 255.255.255.0), and connected the Netgear via ethernet cable to my MacBook Pro, but I can't get to the admin page at 192.168.0.227 as suggested by the manual (Firefox or Safari). At this stage the Netgear is not connected to the router; it is only connected to the MacBook. I can't ping the wireless access point either (though it is powered on and all its lights are lit). What am I doing incorrectly? Last time I configured it via Windows; now I only have the MacBook (which I've used with this access point for 2 years, so no compatibility problems).
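
    To rule out basic connectivity problems, the manual's setup can be sanity-checked from the Mac along these lines (a sketch; en0 is assumed to be the wired interface):

        sudo ifconfig en0 inet 192.168.0.210 netmask 255.255.255.0
        ping -c 3 192.168.0.227
        arp -a    # a new 192.168.0.x entry hints at the AP's actual address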

    Read the article

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noise and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now – the only way I can affect the device at all is to plug in or pull out the power cord! (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.)

    Anyway, the question at hand is how to get my data out of it. As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them on one of the Macs or Linux boxes I have available (although I don't know yet whether I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues – that's secondary to recovering the data.)
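
    If the disks do come out, a reasonable first attempt is mounting one member of the RAID1 pair read-only on a Linux box. A sketch, assuming the drive shows up as /dev/sdb; the partition layout and filesystem on these units may differ, so check dmesg if the mount fails:

        sudo fdisk -l /dev/sdb                  # inspect the partition table
        sudo mkdir -p /mnt/maxtor
        sudo mount -o ro /dev/sdb1 /mnt/maxtor  # read-only, for recovery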

    Read the article

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive and four SATA HDDs, all of which I would like to be passed through to ESXi in an as-is state, without creating any logical drives in the RAID controller. Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD flag on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that did not help either. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation. How would I be able to install ESXi v5.5 onto part of my SSD while giving it full physical access to the disk (so that SMART values can be read, etc.)?

    Read the article

  • Play framework 2.2 using Upstart 1.5 (Ubuntu 12.04)

    - by Leon Radley
    I'm trying to get Play 2.2 working with upstart. I've been running Play 2.x with upstart since its release and it's never been a problem. But since the release of 2.2 and the change to sbt-native-packager (http://www.scala-sbt.org/sbt-native-packager/), Play doesn't want to start any more. Here's the config I'm using:

        description "PlayFramework 2.2"
        version "2.2"

        env APP=myapp
        env USER=myuser
        env GROUP=www-data
        env HOME=/home/myuser/app
        env PORT=9000
        env ADDRESS=127.0.0.1
        env CONFIG=production.conf
        env JAVAOPTS="-J-Xms128M -J-Xmx512m -J-server"

        start on runlevel [2345]
        stop on runlevel [06]

        respawn
        respawn limit 10 5

        expect daemon

        # If you want the upstart script to build play with sbt
        pre-start script
            chdir $HOME
            sbt clean compile stage -mem $SBTMEM
        end script

        exec start-stop-daemon --pidfile ${HOME}/RUNNING_PID --chuid $USER:$GROUP --exec ${HOME}/bin/${APP} --background --start -- -Dconfig.resource=$CONFIG -Dhttp.address=$ADDRESS -Dhttp.port=$PORT $JAVAOPTS

    I've changed JAVAOPTS to include the -J- prefix, and I've also changed the path to use the new start script located in the bin/ dir. I've read that upstart 1.4 has setuid and setgid. I've tried removing the start-stop-daemon invocation, but I haven't got that working either. Any suggestions would be appreciated.
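
    For comparison, a minimal sketch of the same job using upstart's native setuid/setgid stanzas (available since upstart 1.4) in place of start-stop-daemon; the names and paths are the placeholders from the config above:

        setuid myuser
        setgid www-data

        chdir /home/myuser/app
        exec bin/myapp -Dconfig.resource=production.conf -Dhttp.address=127.0.0.1 -Dhttp.port=9000 -J-Xms128M -J-Xmx512m -J-server

    Note this drops the expect daemon stanza, since without --background the Play start script stays in the foreground under upstart's supervision.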

    Read the article

  • Port knocking via SSH tunnels

    - by j0ker
    I have a server running in my university's internal network. There is only one SSH daemon running, secured by port knocking with knockd. This works fine if I connect from within the internal network. But since the server has no external IP, I have to tunnel into the internal network every time I want to access the server from outside. And since a tunnel only carries a single port, I cannot do the port knocking as easily as from an internal client. In fact, I can't get it to work at all. What I'm trying is opening tunnels for all the different ports that have to be knocked, then sending TCP SYN packets into the tunnels. But that doesn't work even for a single port. If I establish the tunnel on the first port in the knock sequence and send a packet through it, it doesn't reach the server: there is no entry in the log file of knockd, while there should be something like 123.45.67.89: openSSH: Stage 1 (as shown with internal knocks). So I guess the problem doesn't lie within my knocking script but is a more general one. Are there any known problems with what I'm trying to do? Is it even possible, or am I missing something? Thanks in advance!
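
    For reference, the failing attempt looks roughly like this (a sketch; the knock port and hostnames are hypothetical):

        # tunnel to the first knock port via a host on the internal network
        ssh -f -N -L 7000:server.internal:7000 user@gateway.example.edu
        # try to 'knock' through the tunnel
        nc -z localhost 7000

    One thing worth noting when debugging: a knock sent through such a tunnel reaches the server from the tunnel endpoint's address, and knockd tracks the knock sequence per source IP, so the whole sequence (and the final SSH connection) must come from that same address.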

    Read the article

  • Dual-head monitor system Kubuntu 10.04

    - by andrii
    I have an Asus V6X00V notebook with a 1400x1050 internal monitor (name: LVDS) and a Dell 1920x1080 monitor (VGA-0). I want to have a dual monitor system. In MS Windows everything works fine. During the Kubuntu installation both the Dell and the notebook monitors had the right resolutions (1920x1080 and 1400x1050), but at some stage both were changed to 1152x864. Now the right resolutions appear only during the shutdown sequence and when I am using the console, which shows the system can use them; the problem is just in the settings. I am using Size & Orientation - System Settings for adjustment. Any option that changes the resolution of either monitor, or changes the position (Absolute, Left Of, Right Of and so on), causes colored line noise on the screens. I have tried xrandr:

        xrandr --output LVDS --mode 1400x1050 --pos 0x0 --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0

    but received the same result. I have found out that, for example, the previous version of RandR (1.2; I now have xrandr 1.3) needed an xorg.conf modification to create a big enough virtual screen, but Kubuntu 10.04 doesn't ship an xorg.conf and I don't know whether I should still modify xorg for the 1.3 version of xrandr. Please help me solve this problem.
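
    If the virtual screen size does turn out to be the limit, a minimal /etc/X11/xorg.conf sketch that only raises it would look like this (1400+1920 wide by 1080 high; the section and identifier names are common defaults and may need adjusting):

        Section "Screen"
            Identifier "Default Screen"
            SubSection "Display"
                Virtual 3320 1080
            EndSubSection
        EndSection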

    Read the article

  • How to migrate WinXP from failing old HD to new one

    - by Péter Török
    Following this issue, we have all our important data backed up now. I also bought and installed a new replacement hard disk (WD 160 GB PATA) as the secondary (slave) drive. I created two primary NTFS partitions on it: a 40 GB system partition and a 110 GB data partition. In theory I could start reinstalling WinXP from scratch on the new system partition, then copy over all user data from the old drive to the new data partition. Once this is done, I could even throw away the old drive, or keep it just to see what happens. (Note: I don't want to clone the whole drive, as it contains a dual-boot setup with an old Linux installation which I don't need anymore; besides, a fresh reinstall would do WinXP good, getting rid of many years' clutter.) However, I am lazy :-) The old HD is still functioning and the problem has not manifested again since, so I feel there is no need to hurry with a complete OS reinstall. What I don't know, though, is whether I will be able to install WinXP on the new system partition at a later stage without affecting the contents of the data partition on the same drive. If this is possible, I can just move all our data over to the new data partition to keep it safe, then continue running WinXP from the old drive as long as it works. Does anyone see any problems or risks with this plan?

    Read the article

  • All of the NTFS hard links disappear, where are hardlinks stored on disk and how to recover them?

    - by Osiris
    This is Windows 7 x64 SP1 on an NTFS file system. All hard links within the C:\Windows\System32 folder disappeared, and Windows can't boot, because even the OS loader, C:\Windows\System32\boot\Winload.exe, is gone. Nevertheless, the original files are still located in the corresponding C:\Windows\winsxs folders. After booting into the Recovery Environment and copying a Winload.exe (x64) over from another folder, Windows gave an error pointing out that "ntoskrnl.exe is corrupted or missing...its file digital signature cannot be verified". Trying to boot in Safe Mode, the same message was shown after a screen prompting "Loaded \Windows\system32\config\system". Because at this early boot stage smss.exe is still not loaded, there are no dumps or logs. Based on my study, ntoskrnl.exe depends on the following files:

        C:\Windows\System32\PSHED.DLL
        C:\Windows\System32\hal.dll
        C:\Windows\System32\kdcom.dll
        C:\Windows\System32\clfs.sys
        C:\Windows\System32\ci.dll

    All of those files were copied from their corresponding folders, and their MD5s verified against a well-operating Windows 7 x64 SP1. But the boot error is still the same: "ntoskrnl.exe is corrupted or missing..."

    Background: before the reboot, a Windows update was in progress. Then something unknown happened and almost all processes failed to run, including the Windows task manager, taskmgr.exe. After mounting the hard disk in another computer, it seems that all hard links within the C:\Windows\System32 folder are gone. I tried several data recovery programs, but they were not able to find the disappeared NTFS hard links.

    So the questions are: where is the information about those hard links stored? And how can they be recovered? Do they depend on some Windows service, or are they stored in the registry?
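
    For anyone poking at this from another Windows machine or the recovery console, Windows ships built-in tools for inspecting and recreating individual hard links (the paths here are illustrative; the winsxs component directory is a placeholder):

        rem list every directory entry that shares this file's data (Windows 7+)
        fsutil hardlink list C:\Windows\winsxs\<component-dir>\hal.dll

        rem recreate one missing System32 link from its winsxs payload
        mklink /H C:\Windows\System32\hal.dll C:\Windows\winsxs\<component-dir>\hal.dll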

    Read the article

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web-service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully in Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing so far has been at a very low scale. My question is this: has anyone had real-world experience running a high-scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost come out lower than running lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated calibration of 10,000 CCs to a single 1.2 GHz CPU seems on average much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • Can sendmail be configured to discard routed email that has been rejected by the next hop?

    - by Guy Bolton King
    Background: We have a handful of hosts (running sendmail) acting as the MXs for a few domains each. Each domain is handled via the sendmail/cf /etc/mail/virtusertable, with a set of known recipients and a catch-all reject rule. Mail to postmaster on each host is aliased to root, and root is aliased to root+<host>@ourdomain.com. The MX for ourdomain.com is Google Apps, and [email protected] is a simple group that forwards to the admins. Google Apps will reject some emails at the SMTP stage, usually because of illegal attachments (instead of accepting them and filing them as spam).

    Problem: Given a particular spam email sent to a domain in a virtusertable entry:

    1. If the recipient address rejects the mail, sendmail will try to send a DSN to the sender.
    2. If that sender also rejects the mail (because it's a falsified sender, and the MX for the sender rejects the mail as spam), sendmail sends a DSN to the postmaster.
    3. The routing detailed above takes place, and... Google Apps rejects the mail as well.
    4. sendmail now gives up with a "savemail panic" and leaves the mail in the queue forever. Our mail queue fills up with garbage.

    Is there any way I can get sendmail to discard messages that have been rejected by the next virtusertable hop (i.e. after step 1 above)? Or does anyone have any other solutions to this?
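
    One knob that may be relevant here (a sketch, untested in this exact setup): the address that receives a DSN which itself bounces is sendmail's double-bounce address, which defaults to postmaster but can be redefined in the .mc file and pointed at a discarding alias:

        dnl route double-bounces to a local alias instead of postmaster
        define(`confDOUBLE_BOUNCE_ADDRESS', `double-bounce')dnl

    with /etc/aliases containing:

        double-bounce: /dev/null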

    Read the article

  • Acer Aspire Touchscreen Not Responding

    - by Jerry
    I have an Acer Aspire Z3101 all-in-one touch screen running Windows 7 Home Premium, and I am unable to get the screen to respond to my touch. I have checked the system and all drivers are OK according to the computer. Using the settings to set up pen and touch, and to calibrate, does not work; my touch brings no response at all. I reinstalled the applications - Windows Touch Pack and Acer Touch Portal - which didn't help. Today I did a complete reinstall, returning the computer to factory settings, and there is still no response. At first I thought that perhaps a driver or file was missing, but now I doubt that, since the reinstall didn't help. Have you any idea what could be causing this problem? I am now at the stage of pulling my hair out and haven't had much help from Acer. I have had the computer for just over a year, so it is no longer under guarantee, and this has me very worried since I am unemployed. Any help is much appreciated. Thank you.

    Read the article

  • My new SSD is causing issues. How can I solve them?

    - by Allan
    Computer specs:

        1 TB hard disk
        120 GB Intel 520 SSD
        8 GB DDR3 RAM
        AMD Phenom II x64 955 @ 3.2 GHz
        DFI LANParty DK FX7900 M3H3 motherboard
        ASUS ATI Radeon HD6970 2 GB

    I have bought a new SSD (Intel 520, 120 GB) and wanted to use it as my system disc. I removed the other hard drive and installed the SSD with the newest firmware, then Windows 7. I updated Windows 7 with no problems and then put back my old hard drive, formatting it to clean up at the same time. So at this stage everything was perfect: my new SSD was set as Master 0 Primary, the machine boots from it, and I have a 1 TB empty hard drive I can use for whatever I want. So far no errors at all.

    Now here is the problem. I installed a few games, and every time I tried to play, the computer would say Windows must restart because the DCOM Server Process Launcher service terminated, or Windows must now restart because the Plug and Play service terminated unexpectedly. Most commonly this error is caused by a rootkit virus; well, I have tried formatting my entire computer and running every antivirus I could find, so that shouldn't be it. I've also read somewhere it might happen when there are hardware issues. That, on the other hand, would make sense, as I just put in a new SSD. I don't expect you to know this error - I haven't found anyone who knew it yet - but maybe you can guide me through what might have gone wrong when I installed the SSD.

    What have I checked regarding the SSD?

    - It displays the right name when the computer starts up
    - It has the newest firmware
    - A 'sfc /scannow' told me everything was fine

    I don't know what to do from here. Everything seems to work great with the drive, yet when I start playing games my computer crashes.

    Read the article

  • INACCESSIBLE_BOOT_DEVICE after installing Linux on same drive

    - by kdgregory
    History: My PC was configured with two drives: an 80 GB drive on IDE 0 Primary running Win2K, and a 320 GB drive on IDE 0 Secondary running Linux (Ubuntu). I decided to pull the 80 GB drive out of the system, so I dd'd the entire 80 GB drive (/dev/sda) onto the 320 (/dev/sdb) - this included the MBR and partition table. Then I pulled the drive, plugged the 320 into IDE 0 Primary, and rebooted. The Windows partition worked at this point. Then I installed Ubuntu into the remaining space on the 320. It works. However, when I try to boot into Windows, I get a BSOD with the following message:

        *** STOP: 0x0000007B (0x89055030,0xC000014F,0x00000000,0x00000000)
        INACCESSIBLE_BOOT_DEVICE

    Before the BSOD I see the Win2K splash screen, and it claims to be "starting windows" for a couple of seconds - so it appears that the first-stage boot loader is working as expected. Ditto when I try booting in Safe Mode. After reading the Microsoft KB article, I booted into the recovery console and tried running chkdsk /r. It refused to run, claiming that the drive was corrupted (sorry, didn't write down the exact error message). However, I can mount the drive from Linux and access all files. And for what it's worth, I can scan the drive using the Linux "Disk Utility" (this is Ubuntu; the menus don't show real program names), and it claims the drive is clean. The KB article mentioned that boot.ini could be the problem, so here it is:

        timeout=10
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Professional" /fastdetect

    Any pointers on what to do next?
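
    (For reference, the clone described above amounts to this, with the device names as in the original setup:)

        # copy the whole 80 GB disk onto the 320, MBR and partition table included
        dd if=/dev/sda of=/dev/sdb bs=1M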

    Read the article

  • Reset sound volume in Windows 8

    - by Svish
    There seems to be a bug in Windows 8 causing the maximum volume to become lower than it really should be. I'm now at a stage where I put the volume up to max and the sound is still very low. I have a pair of Logitech Z-10 speakers with a display on them, and when I touch their increase-volume button it shows me the volume is actually below the middle (but cannot increase it), even though Windows claims it to be maxed out. Is there any way I can reset the volume in Windows 8 so that I can get it fully maxed again? A registry setting or something? I really don't want to have to reinstall Windows or drivers or anything like that, because if it is a bug it'll probably happen again, and I really don't want to have to do that every time this happens :p Any ideas?

    Managed to fix it by unplugging the USB connection to my speakers, turning the volume down on my computer and up on the speakers, and finally reconnecting the USB connection. Seems to have been an issue with the speakers and not Windows this time. BUT, I'm still curious how you can adjust/reset the Windows 8 sound volume without using the volume controls. Like, where is the value of the current volume setting(s) really stored? And can you manually adjust them?

    Read the article

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which uses SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want user-facing request threads blocked on emails. Sendmail is installed with the default settings on RHEL web servers. Sometimes sendmail blocks for a long time after the MAIL command is sent - sometimes taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs, and there are plenty of "deferred" emails, but nothing which looks responsible for this delay. How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and queue it for later sending?

    Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient-address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads?

    Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
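
    Two classic culprits for near-exact 60/90-second pauses at MAIL time are ident lookups and DNS verification of the sender domain. Both can be tuned in sendmail.mc - a sketch, assuming the usual m4 rebuild workflow, and noting that the second line loosens a spam check:

        dnl disable ident lookups (a common source of fixed multi-second delays)
        define(`confTO_IDENT', `0s')dnl
        dnl accept mail even when DNS cannot resolve the sender's domain
        FEATURE(`accept_unresolvable_domains')dnl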

    Read the article

  • Updating files with a Perforce trigger before submit

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me. Background: In my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process. My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist would be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that using the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or sed-replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
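
    A sketch of the reading half in Python (the trigger name and depot path are hypothetical; this assumes the trigger is installed as change-content and receives %changelist%). The incoming revisions are not exposed at a documented filesystem path, but they can be read with p4 print and the pending-change revision specifier @=change:

        import subprocess
        import sys

        # installed roughly as:
        #   docver change-content //depot/docs/... "python docver.py %changelist%"
        changelist = sys.argv[1]

        # read the incoming, not-yet-committed revision of a submitted file
        content = subprocess.check_output(
            ['p4', 'print', '-q', '//depot/docs/readme.txt@=%s' % changelist])

        token = b'#####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####'
        updated = content.replace(token, changelist.encode())

    Writing the substituted content back at this stage is the part with no clean hook, as far as I know, which is consistent with the earlier question never getting an unequivocal answer.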

    Read the article

  • Suggestion regarding pointing domains to a dedicated server

    - by Bizz
    I recently got a dedicated server and I am still at a learning stage, so please bear with me. I wanted to have 3 domains pointed to my server. Initially I asked the host about the process to point one, and they responded with:

        Hello, I can set that rDNS for you. Please make sure the domain is pointed to our name servers at godaddy. They are:
        ns1.xxxxx.net
        ns2.xxxxx.net
        After this is done, please allow up to 24 hours for global propagation. Alternatively, we can host your DNS for you if you prefer. What is the domain you would like xx.xxx.xx.xx to resolve to?

    I then asked them to point one of my domains. They responded:

        Did you want us to host your dns for that domain or just an rDNS record?

    They also said:

        Hosting your DNS is a free service here. We can only do 1 domain per IP. If you would like to purchase additional IPs, they are $1/IP per month.

    I personally don't want to host DNS myself, nor a mail server. I have a single IP so far, so it will start to get expensive if I want to host 25 domains with these guys. I am still in the trial period. Does this seem reasonable as far as pricing goes? If I want to have someone host the DNS and mail server, this gets super expensive: email hosting from Rackspace starts at $2 per mail address, and from then on it's the same if you want added features such as archiving, etc. What would you suggest I do if I am on a shoestring budget but also want to avoid the hassle of doing it myself? I only have 3 domains so far, and I would need a few mail addresses for each of them.

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of an Apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite that being the case, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                     [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                            [  OK  ]

    and all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are:

    - put things in the shell environment
    - put things in other config files

    Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.
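
    One hedged thing to check on RHEL: the init script that service httpd restart runs does not source /etc/profile.d/*, but it does source /etc/sysconfig/httpd, so exporting the variable there may get it through to Apache:

        # /etc/sysconfig/httpd (read by RHEL's httpd init script)
        export MYVAR=myval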

    Read the article

  • ProxyPass for specific vhost

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
          <IfModule mod_rewrite.c>
            # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
            RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
          </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
          <Directory /home/www/foo>
            UseCanonicalName Off
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
          </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
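
    For what it's worth, ProxyPreserveHost is only valid at server or virtual-host scope, and ProxyPass takes no path argument inside <Location>/<Directory>, so a hedged sketch of a dedicated vhost for that one site would be (the ServerName is an assumption):

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo

            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass        /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>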

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that its memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approx half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going.

    The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online and folks say that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly standard on those nodes, but they would not be receiving as much traffic.

    Just to add, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look ok? At what stage would the OS give back memory - would it wait until free memory reaches 0%, or is this OS dependent?

    Read the article

  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read this in a description of the booting process:

        The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed.

    The question is this: my hard disk has two operating system images, Windows and Ubuntu, and hence both partitions in which they reside should, I would think, be active. Then why do we always have only one active partition? (I know that the active partition is one of the primary partitions, but then why are we giving special treatment to one primary partition?) I am a bit confused. Please help me resolve this. Thank you so much.
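
    (From Linux, the active flag is easy to inspect; in fdisk's listing, an asterisk in the Boot column marks the partition whose active flag is set:)

        sudo fdisk -l /dev/sda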

    Read the article

  • Several web applications on a single port

    - by Nevermind
    We're developing an online browser-based game. The game itself is a plugin in the web page that uses a TCP connection to a game server, and also sends HTTP requests to a "content server" web application. This makes three servers in total: the site itself, the game server and the content server. The site and content server are IIS web applications; the game server is a custom application communicating over TCP with a proprietary protocol. While the game is in beta, all these servers are physically hosted on a single machine and distinguished by port. For example, the website is game.example.com:80, the game server is game.example.com:34285 and the content server is game.example.com:50000. This works OK most of the time, but some of our players have ports other than 80 closed. Is there any way to make all these applications work through port 80 while still having them on one physical server? Maybe using different sub-domains? There's probably a way to make IIS forward requests to different web applications based on the URL alone, but that doesn't help with the game server.

    Edit: The server is Windows Server 2008 with IIS 7.
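
    For the two IIS applications, at least, host headers let several sites share port 80 on one IP. A sketch using appcmd (the site names and hostnames are assumptions):

        appcmd set site /site.name:"GameSite" /+bindings.[protocol='http',bindingInformation='*:80:www.game.example.com']
        appcmd set site /site.name:"ContentSite" /+bindings.[protocol='http',bindingInformation='*:80:content.game.example.com']

    The game server's proprietary TCP protocol is the hard part, since only one process can listen on port 80 per IP address.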

    Read the article
