Search Results

Search found 12569 results on 503 pages for 'root plist'.


  • Nginx 502 Bad Gateway: It just won't stop

    - by David
    I have the same problem that most people seem to have with Nginx: 502 bad gateway errors. They are intermittent but typically happen more than once per session, which means my users are probably running into it nearly every time they use the app. I've tried adjusting fastcgi_buffers and fastcgi_buffer_size (in both directions) to no avail. I've tried various other things with the configuration file but nothing seems to work. Here's my config (note that I've stripped away most of the things I've tried, since they didn't work and I didn't want to bloat the file with a bunch of un-related directives): server { root /usr/share/nginx/www/; index index.php; # Make site accessible from http://localhost/ server_name localhost; # Pass PHP scripts to PHP-FPM location ~ \.php { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; } # Lock the site location / { auth_basic "Administrator Login"; auth_basic_user_file /usr/share/nginx/.htpasswd; } # Hide the password file location ~ /\. { deny all; } client_max_body_size 8M; } I'm running a small Rackspace cloud server, which should be plenty for handling an app with a small user base...
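
    A diagnostic sketch one might run on the server before tuning further; the socket address comes from the config above, the log paths are common defaults that may differ, and cgi-fcgi is part of the libfcgi tools (not nginx):

    ```bash
    # Is PHP-FPM actually listening where nginx expects it?
    netstat -lntp | grep :9000

    # Watch both error logs while reproducing the 502 to see which side drops
    # the connection (paths are distro defaults; adjust as needed):
    tail -f /var/log/nginx/error.log /var/log/php5-fpm.log

    # Bypass nginx and talk to PHP-FPM directly:
    SCRIPT_NAME=/index.php SCRIPT_FILENAME=/usr/share/nginx/www/index.php \
    REQUEST_METHOD=GET cgi-fcgi -bind -connect 127.0.0.1:9000
    ```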

    Read the article

  • Migrating JBoss installation and install it on a PHP server

    - by David Martinez
    I'm configuring a new dedicated server that is going to run 3 sites, 2 of them migrating from an old server. Each site has its own domain and dedicated IP. 2 of these sites are already up and running on PHP (one of them uses CakePHP), the third site is a migration from an old server and it runs on JBoss. 1) Is it possible to have both JBoss and PHP running on the same Apache instance, or would I have to install a new one? 2) Can I just move the old JBoss server directory to the new server and start the server with the shell script? From what I read here, JBoss is distributed as a zip/tgz file with the server structure, so moving it from the old server to the new one should be the same. I want to do this because the old server is already configured, and it has 2 JBoss instances. I didn't develop this site and I don't have experience with JBoss. I have some documentation of the site, but it is not much, mostly server structure and the technology they used. The new server runs on CentOS with cPanel, and I have full root access to the server. This question is similar to this one: How can I run JBoss Application Server and Apache on the same server? But there he didn't have a dedicated IP for each domain.
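
    A minimal sketch of step 2 (copying a self-contained JBoss directory and starting it with its own script), assuming the old install lives under /opt/jboss and uses the stock run.sh; the paths, profile name, and bind IP are placeholders to adapt:

    ```bash
    # Copy the old JBoss directory across (path is a placeholder):
    rsync -a olduser@oldserver:/opt/jboss/ /opt/jboss/

    # JBoss needs a compatible JDK on the new box; check before starting:
    java -version

    # Start the same server profile it used before (e.g. "default"), bound to
    # the site's dedicated IP instead of localhost:
    /opt/jboss/bin/run.sh -c default -b 192.0.2.10
    ```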

    Read the article

  • OS X (10.6) Apache Sudden Death, Nginx not working either...

    - by Jesse Stuart
    Hi, I turned on my computer today and Apache wasn't working. This is weird as it's been working for the last 6 months without issue. The only thing I did which may have caused a problem is that I uninstalled a bunch of gems. This shouldn't be the issue though, as Apache doesn't rely on gems. I decided to give Nginx a try to see if it would work, and have the exact same issue. The symptoms are: I go to http://localhost and get the browser's default 404 page (not rendered by Apache/Nginx) No error is found anywhere (I checked all logs) Apache is running (also tried with Nginx) How can I debug this to find the root of the problem? I can't think of why this would be happening. I've tried repairing permissions in case this was the issue; apparently it wasn't. Everything was working the other day, and nothing changed in the Apache config. Update: Here is the output of telnet localhost 80 $ telnet localhost 80 Trying ::1... telnet: connect to address ::1: Connection refused Trying fe80::1... telnet: connect to address fe80::1: Connection refused Trying 127.0.0.1... telnet: connect to address 127.0.0.1: Connection refused telnet: Unable to connect to remote host
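
    Connection refused on 127.0.0.1:80 means nothing is bound to the port, so the server process is not actually serving despite appearances. A diagnostic sketch, assuming the stock OS X 10.6 Apache (log path and commands may differ for a MacPorts/Homebrew install):

    ```bash
    # Is anything listening on port 80 at all?
    sudo lsof -i :80 -P -n

    # Does the config still parse? A module or include left behind by the gem
    # cleanup (e.g. a Passenger LoadModule line) can stop httpd from starting.
    apachectl configtest

    # Start it and read the most recent errors directly:
    sudo apachectl start
    tail -n 50 /var/log/apache2/error_log
    ```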

    Read the article

  • Cannot send email to info@ or support@

    - by user3022598
    I am trying to send email from my Gmail account to a couple of user accounts I have on my new CentOS server. Email is set up correctly and I can send and receive from accounts OK except info and support. I tried to set up two users, "info" and "support". I have a PHP form that sends out email that works fine for all users except info and support. To test this and make sure that something did not change from yesterday, I just created a new user "frank" and tried the submit form and it worked fine. From my Gmail account I can email "frank", however I cannot email "info" or "support". The logs I pulled are as follows, and I think I see the issue but have no idea how to fix it. Aug 15 12:20:55 mail postfix/qmgr[1568]: 1815C20A83: from=, size=1815, nrcpt=1 (queue active) Aug 15 12:20:55 mail postfix/local[2270]: 1815C20A83: to=, relay=local, delay=0.28, delays=0.26/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir) Aug 15 12:17:13 mail postfix/qmgr[1568]: 3C18520A7F: from=, size=1818, nrcpt=1 (queue active) Aug 15 12:17:13 mail postfix/local[2201]: 3C18520A7F: to=, orig_to=, relay=local, delay=0.28, delays=0.25/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir) Aug 15 12:15:24 mail postfix/qmgr[1568]: 2F79420A79: from=, size=1813, nrcpt=1 (queue active) Aug 15 12:15:24 mail postfix/local[2155]: 2F79420A79: to=, orig_to=, relay=local, delay=0.29, delays=0.27/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir) For some reason frank goes out fine, however support and info go to root? Why?
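
    The orig_to= fields in the log show the local(8) delivery agent rewriting those two recipients, which is exactly what the stock /etc/aliases on CentOS does for common role accounts (info and support are aliased to postmaster, and postmaster to root). A sketch of how one might confirm and change that; file locations are the Postfix defaults:

    ```bash
    # Check whether the role accounts are aliased away (commonly to postmaster -> root):
    grep -E '^(info|support|postmaster):' /etc/aliases

    # Comment out or repoint the info:/support: lines so mail reaches the real
    # users, then rebuild the alias database and reload Postfix:
    sudo newaliases
    sudo postfix reload
    ```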

    Read the article

  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php) In order to authenticate, the user can either log in with a standard username/password or submit a client-side X.509 certificate. So, Apache's vhost conf looks something like this: SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt SSLOptions +ExportCertData +StdEnvVars SSLVerifyClient none SSLVerifyDepth 1 <LocationMatch "/(foo-one|foo-two|foo-three)"> SSLVerifyClient optional </LocationMatch> That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this: SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt SSLOptions +ExportCertData +StdEnvVars SSLVerifyClient optional SSLVerifyDepth 1 <LocationMatch "/(bar-one|bar-two|bar-three)"> SSLVerifyClient none </LocationMatch> That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc. Visitors to those directories are still prompted to present a certificate. Additionally, I need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that will cancel out the workaround, though.
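
    Not a fix, but a quick way to verify which paths actually demand a client certificate after each config change; the hostname and certificate files below are placeholders, and the comparison assumes the protected and exempt locations return different status codes when no certificate is supplied:

    ```bash
    # Request an exempt path and a protected path without a client certificate
    # and compare the responses:
    curl -k -o /dev/null -w '%{http_code}\n' https://example.com/bar-one
    curl -k -o /dev/null -w '%{http_code}\n' https://example.com/foo-one

    # Repeat with a certificate to confirm the optional locations accept it:
    curl -k --cert client.crt --key client.key \
         -o /dev/null -w '%{http_code}\n' https://example.com/foo-one
    ```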

    Read the article

  • Can I format a USB stick on Windows XP in HFS+ format, and make the USB stick Mac OS X bootable?

    - by user717236
    My Intel Mac OS X computer is corrupt and I feel, at this point, I need to perform a fresh install of the OS. It consistently and automatically logs out right after I log in. I tried logging in as root. I tried safe boot and it wouldn't load. Anyway, the point is I want to put the Mac OS X installer on a USB thumb drive and have it boot up on the Intel Mac OS X computer. Unfortunately, the computer is inaccessible, as I mentioned above. So, I have a Windows XP machine that I'm using and attempting to create a bootable USB thumb drive that's compatible with Mac OS X. I have tried TransMac, MacDrive, and Paradox for Windows -- all of which proved unable to format the USB stick in HFS+. How do I know this? Well, even though TransMac reports that it's been formatted to HFS+, Computer Management in Windows says otherwise. I even put the installer on the USB drive after TransMac reportedly formatted it properly, and the Mac OS X computer didn't even recognize that a USB thumb drive was inserted when pressing the Option key at boot-up time. I'm not sure what the problem is or how to actually format the drive. Can anybody offer any help? I would appreciate it. Thank you.

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GETs work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.

    Read the article

  • Regarding partitions for dual-booting Ubuntu with pre-existing Windows 7

    - by Shasteriskt
    I have zero actual experience with configuring disk partitions and the stuff I have read for the past few hours has been confusing me a bit, so please bear with me. First of all, I'd like to explain what I'm setting out to achieve: Windows 7 with: C:\ Windows 7 (pre-existing installation) D:\ Data (Already exists and has files already) Ubuntu 11 - Does not exist yet, but I already have a LiveCD in hand. \root directory for Ubuntu \home on its own partition I plan \swap on its own partition with around 8GB Here is the current situation: I have a single 500 GB hard disk with Windows 7 x64 installed, and the current partition scheme is as follows: System Reserved: 100 MB (Primary, Active) C: 100 GB - Where Windows 7 is installed (Primary) D: 365 GB - Where my files are located, LOTS of free space (Primary) Now, I would like to shrink my D: drive and create around 40 GB of unallocated disk space for the Ubuntu installation, but here's what's confusing me a bit: I'm thinking I would create an extended partition and subdivide it into 3 logical partitions for the Ubuntu setup I had in mind. (If you think my setup is a bad idea, please let me know & why. I also hope you can suggest a better one...) I am aware that I can only have up to 4 primary partitions, or 3 primary partitions with 1 extended partition max. Now, does the System Reserved partition count as one primary partition? I'm really new to these things and it is totally unclear to me. In shrinking my D: drive using Windows 7's Disk Management tool, I would get unallocated free space which I don't know how to make an extended partition from. It seems like I can only create a primary partition from it, not an extended one. How do I go about it? (I'd also like to note, if it is of any importance, that I am trying to avoid using the option to install Ubuntu alongside Windows, and would much rather use the custom install where I can specify which drives I wish to use and stuff. Somehow I feel it's safer that way.)
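
    A short sketch of one way to tackle the two open questions, assuming the work is done from the Ubuntu LiveCD; the only command below is read-only, and the actual partitioning would be done in GParted or the installer's manual mode:

    ```bash
    # From the LiveCD, list the existing table (read-only); the 100 MB System
    # Reserved partition does count as one of the four primary slots:
    sudo parted /dev/sda unit GB print

    # Windows 7's Disk Management typically creates primary partitions and has
    # no explicit "extended" option, which is why it seems impossible there.
    # GParted on the LiveCD (or the installer's "Something else" / manual
    # partitioning) can create the extended partition in the freed space and
    # the /, /home and swap logical partitions inside it.
    ```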

    Read the article

  • scponly worked but didn't chroot the home folder; the user can still browse the entire server.

    - by Mint
    So I followed the "Chroot and Debian" tutorial at http://sublimation.org/scponly/wiki/index.php/FAQ Then when I log in as user "upload" via SSH I have no access to the command line (this is what I wanted). But then when I SFTP into the upload user I can still see all the root files (/); it didn't chroot me to just /home/upload. What's going on? … I added this to the end of my /etc/ssh/sshd_config file, then did a restart: Subsystem sftp internal-sftp UsePAM yes Match User upload ChrootDirectory /home/upload AllowTCPForwarding no X11Forwarding no ForceCommand internal-sftp Then when I log in via SFTP I can only see my upload folder (this is what I want), but now scp doesn't work :P SCP will accept my password then: debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_NZ.UTF-8 debug1: Sending command: scp -v -t /test It will hang on that last debug message. Any help would be greatly appreciated. Note: running Debian Lenny
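
    A hedged explanation-as-sketch: with ForceCommand internal-sftp every session is answered by the in-process SFTP server, so the remote "scp -t" helper that the classic scp protocol expects never starts and the client hangs. The transfer still works over SFTP (hostname below is a placeholder):

    ```bash
    # Same upload done over SFTP instead of classic scp; the remote path is
    # relative to the chroot, so /test lands in /home/upload/test on the server:
    sftp upload@server.example.com <<'EOF'
    put test /test
    EOF

    # (Newer OpenSSH clients can also force scp to use the SFTP protocol with
    # `scp -s`, but that option does not exist on Lenny-era clients.)
    ```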

    Read the article

  • What permissions do I need to move a folder?

    - by isme
    In the root of my drive there exists a folder called SourceControl that contains all the working copies of all my programming projects. I would like to move the folder to my user directory (\Users\Me), but something about the permissions on the folder forbids me. I don't remember how I created the folder. When I execute the move command: MOVE \SourceControl \Users\Me I receive the following error: Access is denied. I have resolved a similar problem in the past using the Takeown utility to assign ownership of the file to me, so I tried this command next: TAKEOWN /F \SourceControl It returns the following error: ERROR: The current logged on user does not have ownership privileges on the file (or folder) "C:\SourceControl". I've just learned about the Icacls utility, which can inspect and modify file permissions. I used this command to inspect the permissions on the folder: ICACLS \SourceControl It produced this list: \SourceControl BUILTIN\Administrators:(I)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) NT AUTHORITY\SYSTEM:(I)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) NT AUTHORITY\Authenticated Users:(I)(M) NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M) I think this means that normal user accounts, like mine, have permission only to read and execute (RX) here, while administrator accounts have full control (F). I used Icacls to confer full control of the directory to my user account with this command: ICACLS \SourceControl /grant:r Me:F The command produces this output: processed file: \SourceControl Successfully processed 1 files; Failed processing 0 files Now inspection of the permissions produces this output: \SourceControl Domain\Me:(F) BUILTIN\Administrators:(I)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) NT AUTHORITY\SYSTEM:(I)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) NT AUTHORITY\Authenticated Users:(I)(M) NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M) But after this the move command still fails with the same error. Is it possible to move this folder without invoking administrator rights? If not, how should I do it as administrator?

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6 on the root music folder and then deleting every .flac (having copies of them in archive) seems straightforward; after trying it with one file it worked and all of the tags were transferred, too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux One possible solution I thought about was extracting album art from every song (not every song has one, though, and some even 2 or 3!), temporarily saving it and then using the script to include it into the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need to have a distinct name. Is there a (linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art and it really is a shame, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one has even tried to answer. It’s funny, most of my questions don’t get answered. Too hard? Wrong SE site?
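
    A sketch of the per-file approach with distinct temp names so it can run in parallel. It assumes the embedding script from the linked answer is saved as ./embed_cover.sh and takes an image plus an .ogg file; that script name and interface are placeholders, not something oggenc or vorbis-tools provide:

    ```bash
    convert_one() {
        flac="$1"
        ogg="${flac%.flac}.ogg"
        cover="$(mktemp --suffix=.jpg)"       # distinct temp name per process
        oggenc -q6 -o "$ogg" "$flac"
        # metaflac can pull out an embedded PICTURE block, if the file has one:
        if metaflac --export-picture-to="$cover" "$flac" 2>/dev/null; then
            ./embed_cover.sh "$cover" "$ogg"  # placeholder: the linked embedding script
        fi
        rm -f "$cover"
    }
    export -f convert_one
    # -P4 runs four encodes in parallel; adjust to the number of cores:
    find . -iname '*.flac' -print0 | xargs -0 -n1 -P4 bash -c 'convert_one "$0"'
    ```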

    Read the article

  • Wireless card overheating?

    - by Sidney
    Ok, so I've had my laptop for several years (I wanna say 4, but possibly more); it's a Toshiba Satellite. I'm running Linux Mint 15, and am having a strange new issue: after several hours of running, my wireless stops. It can SEE wireless networks, but refuses to connect to any of them. (On a side note, connecting to a router with a cable at this point works fine.) The fact that it can SEE the networks makes me think the card is in good condition and it's software-related. The fact that it works for several hours before booting me makes me think perhaps the transmitter is getting too hot. I don't use my laptop in dusty environments, and keep it on an elevated surface (alternatively, I actively try not to let it sit on soft surfaces where the vents get covered). I spray out the CPU fan with compressed air about once a year, so I really don't think the insides should be too dirty. Finally, unfortunately, sensors only gives me CPU temps, but they run about 40-50 degrees C, which from my understanding is perfectly normal for an i3. Does anyone have any suggestions on what I can do to determine the root cause of this?
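
    A diagnostic sketch to help separate a driver/firmware problem from a hardware one; the module name at the end is only an example and should be replaced with whatever lspci reports for the card:

    ```bash
    # See whether the driver logs resets or firmware errors around the time of the drop:
    dmesg | tail -n 80

    # Identify the wireless chipset and the kernel driver in use:
    lspci -nnk | grep -A3 -i network

    # If reloading the driver module restores the connection without a reboot,
    # that points at a driver/power-management issue rather than overheating
    # hardware (ath9k is an example module name):
    sudo modprobe -r ath9k && sudo modprobe ath9k
    ```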

    Read the article

  • What mobile phones are viable for a "nerdy" person? [closed]

    - by Blixt
    This community wiki is for determining a list of good mobile phone choices if you are "nerdy" (definition follows.) As a point of reference, the almost two years old N95 8GB will be used. Mostly because that's what I can most relate to since I've had it since it came out. Today, iPhone and other modern mobile phones really out-shine it in usability and interface. However, it still does everything I want it to do (and here's the definition of "nerdy"; modify as needed): Syncs contacts, calendar, tasks and mail in the background Can run multiple installable applications in the background (Google Maps with Latitude, for example) Good amount of space for music etc. Lets you develop your own little toy applications (Python; not to mention it can also run an Apache server with a public URL!) Tethering! Supports Flash (maybe not very important, but it has its uses) Has a manufacturer that believes in the nerdiness! (The people at Nokia Labs make lots of cool stuff and share early versions with the community and are generally open) A high resolution screen (at least 320x240) Special hardware features such as an accelerometer and GPS What's missing from the N95 8GB but still qualifies as good, "nerdy", qualities: 7.2 Mbit/s (or faster) internet through HSDPA or HSPA+. A good web browser that can do most of the stuff a desktop browser can (especially render sites properly and run JavaScript correctly) Touch (especially multi-touch) More special hardware features such as a compass Intuitive and fluent user interface (Shiny stuff) Ability to configure it to trust root certificates of my own choice A processor fast enough to run Quake 3! =)

    Read the article

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com deny works perfectly, but when I try to access www.example.com/index.php it returns an "Access denied" page AND the PHP file is downloaded directly in the browser without processing. I do want to deny access to all the files on the website for all IPs but mine. How should I do that? Here's the config I have: server { listen 80; server_name example.com; root /var/www/example; location / { index index.html index.php; ## Allow a static html file to be shown first try_files $uri $uri/ @handler; ## If missing pass the URI to front handler expires 30d; ## Assume all files are cachable allow my.public.ip; deny all; } location @handler { ## Common front handler rewrite / /index.php; } location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler rewrite ^(.*.php)/ $1 last; } location ~ .php$ { ## Execute PHP scripts if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss expires off; ## Do not cache dynamic content fastcgi_pass 127.0.0.1:9001; fastcgi_param HTTPS $fastcgi_https; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; ## See /etc/nginx/fastcgi_params } }

    Read the article

  • Linux Software RAID1: How to boot after (physically) removing /dev/sda? (LVM, mdadm, Grub2)

    - by flight
    A server set up with Debian 6.0/squeeze. During the squeeze installation, I configured the two 500GB SATA disks (/dev/sda and /dev/sdb) as a RAID1 (managed with mdadm). The RAID keeps a 500 GB LVM volume group (vg0). In the volume group, there's a single logical volume (lv0). vg0-lv0 is formatted with ext3 and mounted as the root partition (no dedicated /boot partition). The system boots using GRUB2. In normal use, the system boots fine. Also, when I removed the second SATA drive (/dev/sdb) after a shutdown, the system came up without problem, and after reconnecting the drive, I was able to --re-add /dev/sdb1 to the RAID array. But: After removing the first SATA drive (/dev/sda), the system won't boot any more! A GRUB welcome message shows up for a second, then the system reboots. I tried to install GRUB2 manually on /dev/sdb ("grub-install /dev/sdb"), but that doesn't help. Apparently squeeze fails to set up GRUB2 to launch from the second disk when the first disk is removed, which seems like quite an essential feature when running this kind of software RAID1, doesn't it? At the moment, I'm lost as to whether this is a problem with GRUB2, with LVM or with the RAID setup. Any hints?
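
    A sketch of the usual way to make either member disk bootable on squeeze; this assumes the system (or a rescue chroot of it) is up with both disks present, and it does not address the separate question of whether the BIOS will fall through to the second disk:

    ```bash
    # Install the GRUB2 boot code into the MBR of both RAID members and make
    # sure the generated config matches the current array:
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub

    # On Debian, registering both disks with the package keeps them in sync
    # whenever grub-pc is upgraded:
    dpkg-reconfigure grub-pc
    ```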

    Read the article

  • File permission woes on an Ubuntu EC2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups: adm:x:4:me,ubuntu sudo:x:27:me www-data:x:33:me,www-data ssh:x:108:me admin:x:111:me ubuntu:x:1000:www-data,me me:x:1001:me but when I cd /var/www I can't do simple commands without doing sudo. So I chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group? drwxr-xr-x 4 www-data 4.0K Oct 24 16:39 . drwxr-xr-x 14 root 4.0K Oct 10 16:58 .. drwxrwxr-x 9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com drwxrwxr-x 2 www-data 4.0K Oct 4 00:29 mywebsite.com drwxrwxr-x 9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com Edit: It appears I had some alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.
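
    Two things worth checking here, sketched below: group membership only takes effect in new login sessions, and group members also need the group write bit on the directories themselves. The username and path are taken from the question:

    ```bash
    # What the *current* session thinks vs. what /etc/group says; if they differ,
    # log out and back in (or use `newgrp www-data`) to pick up the new group:
    id
    id me

    # Even as a www-data group member, writing requires the group write bit:
    sudo chmod -R g+w /var/www
    ls -ld /var/www
    ```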

    Read the article

  • (updated) Subfolder needs whitelist and standard redirect for all others

    - by Superstrong
    How can I allow access to the foo.html files in the .com/song/private/ subfolder for: a logged-in WordPress user; or any referral domains (including subfolders) I add; or any URL on our own domain from the com/song/private folder; For all others, the user should be redirected to the corresponding public version of the Post, which is the same html filename and structured .com/song/foo.html. (The private versions use a different template with different custom fields for each Post.) Update: Here's what I have so far: <Limit GET POST> order deny,allow deny from all allow from domain.com/song/private allow from otherdomain.com </Limit> RewriteRule ^(.*)$ ../$ [NC,L] More: Will that last rewrite rule take people back to the public version, from com/song/private/foo.html to com/song/foo.html? I found the following rule for detecting WordPress logged-in status, but what do I put afterward with a RewriteRule, and will it work anyway? (If not, is there an alternative?) RewriteCond %{HTTP_COOKIE} !^.*wordpress_logged_in_.*$ N.B. I have added code to my root .htaccess allowing me to insert additional .htaccess files in other subfolders as needed. Copied from Stack Overflow, where they suggested I ask here.

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with mdadm: set device faulty failed for /dev/sdb2: No such device mdadm: hot remove failed for /dev/sdb2: No such device or address Then I hot swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays got rebuilt properly except one, because in /dev/md2, the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the array, and the array's status remains degraded. Here's what mdadm --detail /dev/md2 shows: [root@ldmohanr ~]# mdadm --detail /dev/md2 /dev/md2: Version : 1.1 Creation Time : Tue Dec 27 22:55:14 2011 Raid Level : raid1 Array Size : 52427708 (50.00 GiB 53.69 GB) Used Dev Size : 52427708 (50.00 GiB 53.69 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Fri Nov 23 14:59:56 2012 State : active, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Name : ldmohanr.net:2 (local to host ldmohanr.net) UUID : 4483f95d:e485207a:b43c9af2:c37c6df1 Events : 5912611 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 0 0 1 removed 2 8 18 - spare /dev/sdb2 To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
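
    A hedged sketch of the usual way out: mdadm accepts the keywords "failed" and "detached" in place of a device name, which covers members whose /dev node no longer exists; device names below are from the question:

    ```bash
    # Drop any member that is marked failed or whose device node is gone:
    mdadm /dev/md2 --remove failed
    mdadm /dev/md2 --remove detached

    # If the spare is still not pulled into the missing slot, take it out and
    # re-add it to trigger a rebuild, then watch the resync:
    mdadm /dev/md2 --remove /dev/sdb2
    mdadm /dev/md2 --add /dev/sdb2
    cat /proc/mdstat
    ```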

    Read the article

  • nginx redirect what is not coming from load balancing

    - by dawez
    I have nginx on SERVER1 that is acting as a load balancer between SERVER1 and SERVER2. On SERVER1 I have the upstreams for the load balancing defined as: upstream de.server.com { # similar upstreams defined also for other languages # SELF SERVER1 server 127.0.0.1:8082 weight=3 max_fails=3 fail_timeout=2; # other SERVER2 server otherserverip:8082 max_fails=3 fail_timeout=2; } The load balancing config on SERVER1 is this one: server { listen 80; server_name ~^(?<LANG>de|es|fr)\.server\.com; location / { proxy_pass http://$LANG.server.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # trying to pass a variable in the header to SERVER2 proxy_set_header Is-From-Load-Balancer 1; } } Then on SERVER2 I have: server { listen 8082; server_name localhost; root /var/www/server.com/public; # test output values add_header testloadbalancer $http_is_from_load_balancer; add_header testloadbalancer2 not_load_bal; ## other stuff here to process the request } I can see the "testloadbalancer" response header is set to 1 when the request comes through the load balancer; it is not present on direct access to SERVER2:8082. I would like to bounce all direct requests sent to SERVER2 back to SERVER1, but keep the ones coming from the load balancer. So this should forbid direct access to SERVER2:8082 and redirect it to SERVER1:80.

    Read the article

  • Are there any viable DNS or LDAP alternatives for distributed key/value storage and retrieval?

    - by makerofthings7
    I'm working on a software app that needs distributed decentralized name resolution, and isn't bound to TCP/IP. Or more precisely, I need to store a "key" and look up its value, and the key may be a string, a number, or any other realistic data type. Examples: With a phone number, look up a name. (or with an area code, redirect to the server that handles that exchange) With an IP address, get a DNS name, or a Whois contact (string value) With a string, get an IP (like a DNS TXT or SRV record). I'm thinking out of the box here and looking for any software that allows for this. (more info below) Are there any secure, scalable DNS alternatives that have gained notoriety? I could ask on StackOverflow, but think the infrastructure groups would have better insight on this. Edit - More info: I'm looking at "Namecoin", the DNS version of Bitcoin, and since that project is faltering, I'm looking at alternative ways to store name-value pairs, with an optional qualifier. I think a globally visible name-value pair is useful, but on a limited scale. Namecoin tried to be too much, and ended up becoming nothing. I'm trying to solve that problem by researching alternatives and applying distributed technologies where applicable. Bitcoin/Namecoin offers a Distributed Hash Table, which has some positive aspects, but it is not useful for DNS, except for root servers.

    Read the article

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet: eth0 192.168.100.10 eth1 192.168.100.11 My application needs to receive and transmit UDP packets (both unicast & broadcast) via these interfaces. I've found the way to handle the ARP problem and I've added routes to handle the routing problem: ip rule add from 192.168.100.10 lookup 10 ip route add table 10 default src 192.168.100.10 dev eth0 (and similarly, table 11 for eth1) The problem is that only unicast packets gets routed properly. Broadcast packets always go out through eth0. I tried removing the rule for 192.168.100.0 & 192.168.100.255 from table 255 and adding them to my tables. But then I see ARP requests being given out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
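
    A hedged sketch of one thing worth trying: when the broadcast route is added to the per-source tables as an ordinary link-scope route, the kernel treats 192.168.100.255 as a unicast next hop and ARPs for it; declaring it as a broadcast-type route in each table may avoid that. Addresses and table numbers are from the question, and this is speculation to test rather than a known fix:

    ```bash
    ip route add broadcast 192.168.100.255 dev eth0 table 10 \
        proto kernel scope link src 192.168.100.10
    ip route add broadcast 192.168.100.255 dev eth1 table 11 \
        proto kernel scope link src 192.168.100.11

    # Confirm which interface a broadcast from each source address would use:
    ip route get 192.168.100.255 from 192.168.100.10
    ip route get 192.168.100.255 from 192.168.100.11
    ```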

    Read the article

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically we have a service where we use a local account as its logon. It has all the proper permissions, and everything is working fine; the service starts and runs and all is good. Then one day, after rebooting, the service fails to start. Logs show incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab from services.msc. Unfortunately we have not been able to find the root cause. I suspect that the password that is stored for the service is lost somehow. Does anyone know where the password hash might be stored so we can check it? The only activities that seem to be possibly related are patching with Microsoft security patches, but we have multiple servers running the same service, and we have never seen more than one at a time, and it's usually a different one each time this occurs. I believe this to be the same issue as this: Windows service fails to start with custom user until started once with local user But I was unable to add comments, and it's really old.

    Read the article

  • Macros in Excel 2010 hang

    - by Ahmad
    I have a spreadsheet with several macros. Generally, when previously using Excel 2007, a user clicks a button and everything works as expected (calculations, some email sending & file I/O). Typically, the expected run-time is about 90 seconds. The spreadsheet is an xlsm file created with Excel 2007. With Excel 2010 however, the same user process results in a non-responsive Excel and forces us to kill Excel from the Task Manager. Some notes that I have gathered so far in trying to debug this issue: When monitoring CPU usage, it seems that Excel does start the macro. CPU usage increases as expected to about 47% for a few seconds. Excel.exe then drops to 0% usage and I now have a non-responsive Excel (even after 1 hour). If I set debug break points across modules and different functions and step through the code (after clicking the button), the process works as expected, albeit much slower. To add, there were no exceptions. I am at a complete loss as to what the issue may be. I initially thought it may be the add-in that is being used, but that was debunked by point 2. This seems to be a very odd situation. I can provide more information if required, but I'm at wits' end about what the root cause could be. I need help in diagnosing and resolving this issue.

    Read the article

  • a disk read error occurred [closed]

    - by kellogs
    Hi, "a disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB. [root@localhost linux]# fdisk -lu Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x48424841 Device Boot Start End Blocks Id System /dev/sda1 63 204214271 102107104+ 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 204214272 255606783 25696256 af HFS / HFS+ Partition 2 does not end on cylinder boundary. /dev/sda3 255606784 276488191 10440704 c W95 FAT32 (LBA) Partition 3 does not end on cylinder boundary. /dev/sda4 276490179 312576704 18043263 5 Extended /dev/sda5 * 276490240 286709759 5109760 83 Linux /dev/sda6 286712118 310488254 11888068+ b W95 FAT32 /dev/sda7 310488318 312576704 1044193+ 82 Linux swap / Solaris sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall exactly how I used testdisk, but it once said that "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)" or something similar. So far I have tried to "fixboot" and "chkdsk" from a recovery console on the affected Windows partition (/dev/sda1), the 'unplug the power cord for 15 seconds' trick, reinstalling GRUB, repairing the MFT and boot sector of the affected partition via testdisk, what next, please? Thank you!
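
    A couple of non-destructive checks from the Linux side before attempting further repairs; they assume smartmontools and ntfsprogs/ntfs-3g are installed, and /dev/sda1 is the Windows partition named above:

    ```bash
    # Look for reallocated or pending sectors that would explain read errors:
    smartctl -a /dev/sda | grep -iE 'overall-health|reallocated|pending'

    # Dry-run the standard NTFS consistency checks (boot sector vs. backup,
    # dirty flag) without writing anything:
    ntfsfix --no-action /dev/sda1
    ```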

    Read the article

  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on a Ubuntu Linux server on the local network. Other development machines running OSX and Linux are connecting fine. There are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database heavy) run mostly ok, only occasionally getting timeout messages. I'm constantly getting Maximum execution time of 30 seconds exceeded when functions such as mysql_query() are run in this particular project. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ]. My local set-up is: Windows 7 Professional 64bit XAMPP 1.7.7 (PHP 5.3.8) The output of uname -a of the Linux server is: Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux I've tried the following, with no success: Disabling Windows firewall Switching between using a persistant and normal connection In my.cnf, adding skip-name-resolve Increasing wait_timeout Enabling bind-address I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?
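
    A server-side diagnostic sketch, assuming the usual suspects for a single slow client (name resolution of the Windows host and a not-yet-applied skip-name-resolve); the IP address is a placeholder for the Windows machine:

    ```bash
    # Run on the Ubuntu server while the Windows box reproduces the stall:
    # connections stuck before authentication point at name resolution or
    # network problems rather than at slow queries.
    mysqladmin -u root -p processlist

    # skip-name-resolve in my.cnf only takes effect after a restart:
    sudo service mysql restart

    # A slow or missing PTR record for the Windows machine's IP is a classic
    # cause of slow connects from one particular client:
    time dig -x 192.168.1.50 +short
    ```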

    Read the article
