Search Results

Search found 2046 results on 82 pages for 'agent ransack'.

Page 63 of 82.

  • How to debug modsecurity_audit_log

    - by max87
    I was accessing www.example.com/RestAPI/index.php/tweets.json on my server. The modsec_audit.log showed the following entry, but there are no related errors/warnings in modsec_debug.log. I can see the Internal Server Error is logged in example-error_log. How can I debug this Internal Server Error?

        --8560e90b-A--
        [21/Mar/2012:07:01:52 +0000] T2l84H8AAAEAAGxPZ@QAAAAG x.x.x.x 33101 x.x.x.x 80
        --8560e90b-B--
        GET /RestAPI/index.php/tweets.json HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Cookie: __utma=159129855.1463065063.1331789485.1331789485.1331789485.1; __utmz=159129855.1331789485.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 8cb6a414cf5ec1919864de0e80bea4da=0es7dcu0p10cocfpferb2lddi0; 8926e4f3c475bb6fcacb409299f1bd27=53cf8c5e6bf78ea45096945377e6d609
        Connection: keep-alive
        Cache-Control: max-age=0
        --8560e90b-F--
        HTTP/1.0 500 Internal Server Error
        X-Powered-By: PHP/5.3.5
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        --8560e90b-H--
        Apache-Handler: php5-script
        Stopwatch: 1332313312358005 130428 (- - -)
        Producer: ModSecurity for Apache/2.5.12 (http://www.modsecurity.org/); core ruleset/2.0.5.
        Server: Apache
        --8560e90b-Z--
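
    One way to get more detail, as a sketch (assuming ModSecurity 2.x and that the 500 originates in PHP; the log paths are illustrative):

        # Apache/ModSecurity config: raise the debug log verbosity
        SecDebugLog /var/log/httpd/modsec_debug.log
        SecDebugLogLevel 9

        # php.ini (or an override): make PHP log its fatal errors
        log_errors = On
        error_log = /var/log/httpd/php_errors.log

    Given that the audit log shows Apache-Handler: php5-script and a 500 with Content-Length: 0, the error most likely comes from the PHP application itself rather than from a ModSecurity rule, so the PHP error log is usually the faster route.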

    Read the article

  • Apache httpd: Send error logs to syslog and local disk? Without touching /etc/syslog.conf?

    - by Stefan Lasiewski
    I have an Apache httpd 2.2 server. I want to log all messages using syslog, so that the requests are sent to our central syslog server. I also want to ensure that all log messages are written to local disk, so that a sysadmin has easy access to the log files on the local system. It is easy to send HTTP access logs to both the local disk and to syslog. One common method is:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog logs/access_log combined
        CustomLog "|/usr/bin/logger -t httpd -i -p local4.info" combined

    But it is not easy to do this for error logs. The following configuration doesn't work, because the error log only honors the last ErrorLog stanza; the first ErrorLog stanza is ignored:

        ErrorLog logs/error_log
        ErrorLog syslog:local4.error

    How can I ensure that Apache error logs are written to the local disk and are sent to syslog? Is it possible to do this without touching /etc/syslog.conf? I am fine with my users managing their own Apache configuration files, but I do not want them touching system files such as /etc/syslog.conf.
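
    A common workaround (a sketch, not tested on this setup; path and facility are illustrative) is to pipe the single ErrorLog through a command that both appends to disk and forwards to syslog:

        ErrorLog "|/usr/bin/tee -a /var/www/logs/error_log | /usr/bin/logger -t httpd -i -p local4.err"

    Apache 2.2 only honors one ErrorLog per (virtual) server, so the duplication has to happen inside the piped command rather than in the configuration, and nothing in /etc/syslog.conf needs to change.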

    Read the article

  • How can I setup a Proxy I can sniff traffic from using an ESX vswitch in promiscuous mode?

    - by sandroid
    I have a pretty specific requirement, detailed below. Here's what I'm not looking for help with, to keep things tidy and on topic: how to configure a standard proxy, any ESX setup required to facilitate traffic sniffing, how to sniff traffic, or any changes in design (my scope limits me). I need to set up a test environment for a network-sniffing based HTTP app monitoring tool, and I need to troubleshoot a client issue, but he only has a prod network, so making changes to the config on the client's system "just to try" is costly. The goal here is to create a similar system in my lab, hit the client's webapp, and redirect my traffic - using a proxy - into the lab environment. The reason I want to use a proxy is so that only this specific traffic is redirected for all to see, and not all my web traffic (like my visits to serverfault :P). Everything will run inside an ESX 4.1 machine. In there, there is a traffic-collection vswitch in promiscuous mode that is not on the local network, for security reasons. The VM containing our listening agent is connected to this vswitch. On the same ESX host, I will set up a basic Linux server and install a proxy (either apache + mod_proxy or squid, doesn't matter). I'm looking for ideas on how to deploy this for my needs so I can then figure out how to set it up accordingly. One idea I've had was to set up two proxies and have them talk to each other through this vswitch in promiscuous mode, but it seems like a lot of work. Another idea is a dual-homed proxy (see the sketch below), but I've never seen/done that before, so I'm not sure how doable it is for what I'd like. I am OK with setting up a second vswitch in promiscuous mode to facilitate this if need be, but I cannot put the vswitch on the LAN (which is used so my browser can communicate with the proxy) in promiscuous mode. Any ideas are welcome.
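
    For the dual-homed idea, a minimal squid sketch (interface addresses and the ACL domain are assumptions, not from the original setup): bind the listening port to the LAN-facing leg and force outgoing traffic through the leg on the promiscuous vswitch:

        # /etc/squid/squid.conf (fragment)
        http_port 192.168.1.10:3128          # LAN-facing leg: browser points here
        tcp_outgoing_address 10.10.10.10     # leg on the promiscuous vswitch
        acl lab_app dstdomain .clientwebapp.example
        http_access allow lab_app

    With this, requests enter on one NIC and leave on the other, so the listening agent on the traffic-collection vswitch sees the proxied HTTP, and only the traffic matching the ACL is redirected.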

    Read the article

  • PPTP server 2003 hands out gateway from NIC, not DHCP server

    - by Pete
    I have created a PPTP RRAS server for a handful of clients to connect to. I would like them to use the server's default gateway (.1) for internet access. They are able to connect successfully (and see the LAN), but the connection then cuts them off from the internet. I understand that all internet traffic would be routed through the PPTP server, but that's OK since I have enough pipe. The problem seems to be that the clients' gateway shows as their assigned RAS IP, and the clients' assigned DNS settings seem to be what is set on the server's NIC, not what I have specified in DHCP (which is on the same server). The DHCP relay agent properties point to the NIC DHCP is running on (192.168.100.163). .1 is the gateway in the NIC hardware properties and in DHCP. I have different secondary and third DNS entries in my NIC properties than what DHCP is configured for. The result is that I have a 10.10.1.x network that people cannot see if they uncheck the gateway option, but they are then unable to see our other hosted sites on the internet.

    Read the article

  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. At the backend there is lighttpd hosting only static files, and there is an md5 in the URL in case of a file change, e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf. However my hit ratio is very poor (about 0.18). My config:

        sub vcl_recv {
            set req.backend=default;
            ### passing health to backend
            if (req.url ~ "^/health.html$") {
                return (pass);
            }
            remove req.http.If-None-Match;
            remove req.http.cookie;
            remove req.http.authenticate;
            if (req.request == "GET") {
                return (lookup);
            }
        }
        sub vcl_fetch {
            ### do not cache wrong codes
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            remove beresp.http.Etag;
            remove beresp.http.Last-Modified;
        }
        sub vcl_deliver {
            set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
        }

    I have done some performance tuning:

        DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL which is missed is /health.html. Is that forward to the backend correctly configured? With health checking disabled, the hit ratio increases to 0.45. Now mostly /crossdomain.xml is missed (from many domains, as it is a wildcard). How can I avoid that? Should I also strip other headers like User-Agent or Accept-Encoding? I think the default hashing mechanism uses url + host/IP. Compression is done at the backend. What else can improve performance?
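
    Since the default hash includes the Host header, the same /crossdomain.xml requested via many domains is stored once per domain. A hedged sketch (Varnish 3 syntax; adjust for the installed version) that hashes that one URL without the host:

        sub vcl_hash {
            if (req.url == "/crossdomain.xml") {
                # one cache object for all domains
                hash_data(req.url);
                return (hash);
            }
        }

    The /health.html pass will always count against the hit ratio in some reporting, which is consistent with the jump from 0.18 to 0.45 once health checking was excluded.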

    Read the article

  • DHCP relay setup on Ubuntu server

    - by jerichorivera
    I have a network appliance (QNO) that works as a traffic load balancer and DHCP server. I would like to add a Linux server in between the network appliance and the client computers; the Linux server will be used to monitor bandwidth usage. My problem is that I still want DHCP to be served by the network appliance, so that load balancing keeps working efficiently. We are afraid that if we set up the Linux server as the DHCP server, the network appliance will not be able to load-balance the traffic, because it would only see the Linux server as a single client connecting to it. I've been searching all over for a tutorial on how to set up DHCP relay but have not found any. How do I set up DHCP relay on my Linux server, given there are two NICs attached to it: one connects the Linux server to the network appliance and the other connects the Linux server to the client computers?

    EDIT:

        Router (DHCP) ---- [eth0] Linux Server (Relay agent) [eth1] ----- PC (network)

    Router IP is 192.168.0.100. eth0 is on DHCP; eth1 is static 192.168.2.11 (if I need to change this, I can). Tried to do dhcrelay -i eth1 192.168.0.100, but the PC was not getting any DHCP lease from the DHCP router. I might be missing something here.
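
    A hedged sketch of what usually needs to be in place for ISC dhcrelay with two NICs (addresses follow the diagram above; details vary by distro):

        # listen on BOTH interfaces: the relay must hear clients on eth1
        # and the server's replies on eth0
        dhcrelay -i eth0 -i eth1 192.168.0.100

        # forwarding between the NICs has to be enabled
        sysctl -w net.ipv4.ip_forward=1

    Note also that the DHCP server must have a scope for the relayed client subnet, since relayed requests carry the relay's eth1 address in the giaddr field; if the QNO appliance can only lease from its own 192.168.0.x subnet, the relay approach won't hand out usable addresses to the 192.168.2.x side.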

    Read the article

  • ssh Prompts For Password After Account Unlocked - Despite ssh key?

    - by user1011471
    Here's what happened:

    1. I set up an ssh key so that user could ssh from A to B without a password.
    2. I got user's password wrong in some other context too many times, and user's account got locked out. (IT uses Active Directory here.)
    3. IT unlocked the account.
    4. Concurrent with the unlocking, a script was running, calling something like ssh user@B some-health-check-command every 5 seconds or so -- which seemed to work fine before I caused user to get locked out in step 2.
    5. IT reports user reliably gets locked out a short time after each unlock attempt.

    I thought the ssh key would allow ssh user@B some-command as long as the account is not locked. But it behaves as if, when user gets unlocked, B suddenly asks for a password, and since my command repeatedly runs without supplying a password, the account gets locked out after 5 attempts:

        Account cannot be accessed at this time. Please contact your system administrator.

    My questions are: Is that what's happening? Or: what's happening? More importantly: how can I reconfigure things such that my script doesn't cause problems? Can I accomplish what I want without having to install Expect? (I don't know if I have permission to do so.)

    Other notes: Not using ssh-agent currently. The ssh command is running on our Jenkins master, a Linux box; A and B are Mac OS X. user is managed in Active Directory and normally can sign into all three machines. Other than these things and the ssh key I set up, everything else has the default configuration as far as I know.
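
    One way to keep the health check from ever falling back to password authentication (a sketch; the options are standard OpenSSH, the host/command are from the script above):

        ssh -o BatchMode=yes \
            -o PubkeyAuthentication=yes \
            -o PasswordAuthentication=no \
            -o ChallengeResponseAuthentication=no \
            user@B some-health-check-command

    With BatchMode=yes, ssh fails immediately instead of prompting when the key isn't accepted, so a rejected key cannot turn into a stream of failed password attempts against Active Directory every few seconds - which, if the theory above is right, is what keeps re-locking the account.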

    Read the article

  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now, for some reason and for some users, it doesn't return the page but only a 200 OK message. Here is the flow (from Wireshark):

        GET /issues/142 HTTP/1.1
        Host: debian:3000
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
        Cache-Control: max-age=0

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Date: Wed, 12 Dec 2012 16:30:16 GMT
        Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
        Content-Length: 0

    As you can see, I get nothing from the server. This is mostly random: the blank page happens sometimes for some users, and for other users it almost never returns the page... I'm absolutely lost here. Any idea about what can be the cause? Thanks in advance!

    Read the article

  • Xubuntu stuck after login

    - by viraptor
    How can I debug an issue with Xubuntu 12.04 (fresh install) which just sits idle for about 30 seconds after login? The login screen is displayed correctly. After login, I get my desktop background, but no panels or auto-starting apps. It doesn't seem to be an authentication/PAM issue, because I can log in without delay at the console while the graphical session is still stuck. There's no disk or CPU activity and no obvious respawning of any process when I look at htop. There's nothing obviously wrong in .xsession-errors. Most interesting errors:

        openConnection: connect: No such file or directory
        cannot connect to brltty at :0
        WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-wFn4VR/pkcs11: No such file or directory
        ...
        (polkit-gnome-authentication-agent-1:2131): polkit-gnome-1-WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        ** Message: applet now removed from the notification area
        ** Message: using fallback from indicator to GtkStatusIcon
        ...
        (xfce4-indicator-plugin:2176): libindicator-WARNING **: IndicatorObject class does not have an accessible description.
        ...
        (xfce4-indicator-plugin:2176): Indicator-Application-WARNING **: Unable to get application list: Operation was cancelled

    Bootchart seems to end before I log in, so it's not that helpful. Where else can I look for information?

    Read the article

  • Home Network Stopped Working

    - by James
    I have a home network where a machine running Windows 7 Ultimate N acts as a central hub for other devices to access media. This has been running for around 2 years and there have been no recent configuration changes. The machine has a static IP address (192.168.0.3), which also has not changed. A few laptops, a Sonos music system and mobiles use the machine for music/video mostly. Additionally, port 3389 was open for RDP; I used a no-ip agent to map a hostname so I could RDP to the machine from the internet. As of yesterday, when I try to ping the machine I get:

        PING: Transmit Failed, General Error

    I noticed, however, that the IP it is pinging is 0.0.2.233. All shares etc. are no longer functioning, including RDP. On the machine itself, ipconfig shows that nothing has changed; it still shows the expected IP. If I ping the machine by its own hostname from itself, I get the same error as above. The machine has been rebooted, and the router has been too. Any ideas where to even start on this?
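
    A few hedged first steps, since the hostname is resolving to a bogus address (these are standard Windows commands; nothing here is specific to this setup):

        ipconfig /flushdns          :: clear the local DNS resolver cache
        nbtstat -R                  :: purge and reload the NetBIOS name cache
        ping 192.168.0.3            :: test the known-good static IP directly
        nslookup <hostname>         :: see which resolver returns 0.0.2.233

    If the static IP pings fine while the name still resolves to 0.0.2.233, the problem is name resolution (DNS cache, router DNS, or the no-ip mapping) rather than the share host itself.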

    Read the article

  • Windows 7 / TCP/IP network share guide - looking for one to resolve a failure to mount a Lacie network drive that works on XP, Linux, Mac

    - by Rob
    Can anyone advise me of a really good, readable Windows 7 TCP/IP network share guide, book, or reference? I want this because I cannot mount my Lacie 2big ethernet network drive in Windows 7 (32-bit Home), but I can mount it in Windows XP Home 32-bit, Ubuntu Linux 10.04 and Apple Mac OS X. This drive is mounted via the accompanying Lacie Ethernet Agent in XP (which I believe uses the "Bonjour" protocol); on Mac and Linux it works without further software. Another Super User user has the same problem, but no answer: Trouble accessing network drives in Windows 7. I hope my take on the question shows a better willingness to investigate and do some digging - and therefore invites some suggestions to help with this. The drive is detected by Windows 7 (i.e. a speech bubble "network drive found"), but on trying to open an Explorer window, the window remains blank with the Windows busy pointer. I'd prefer not to reinstall Windows 7 to see if that cures the problem; I'd rather understand what is happening or not happening, perhaps even compare differences with Windows XP. Suggestions, please, for such guides or even the original problem itself.

    Update/Edit: Rewrote the question more comprehensively here: http://superuser.com/questions/304209/looking-for-definitive-answer-to-accessing-a-network-share-via-windows-7-home-and
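
    One common culprit with older NAS firmware and Windows 7, offered as a hedged guess rather than a confirmed fix for the 2big: Windows 7 defaults to NTLMv2-only authentication, which some NAS SMB stacks cannot answer. The setting can be relaxed via the registry (value 1 allows the older LM/NTLM responses):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    The same knob is "Network security: LAN Manager authentication level" under secpol.msc on editions that include the policy editor; Home editions need the registry route, and a reboot is usually required afterwards.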

    Read the article

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT, the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest. The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Results from the guest to another server under my control and from an external system to the guest both return "Destination port unreachable". Running tcpdump on the host and destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to send it on at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1. When originally setting up the vm/net, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual net using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company, didn't change anything on the guest, so as far as I know, nothing else has changed (unfortunately the list of packages updated has since fallen off scrollback and I didn't note it down).
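
    Given the symptoms (host replies but never forwards, NAT network on 192.168.100.0/24), two hedged things to check on the CentOS host - the libvirt network state and the iptables NAT rules the update may have flushed:

        # restart libvirt's NAT network so it re-installs its iptables rules
        virsh net-destroy default
        virsh net-start default

        # verify forwarding and the MASQUERADE rule are in place
        sysctl net.ipv4.ip_forward                       # should be 1
        iptables -t nat -L POSTROUTING -n -v | grep 192.168.100

        # if missing, the rule libvirt normally adds looks roughly like:
        iptables -t nat -A POSTROUTING -s 192.168.100.0/24 ! -d 192.168.100.0/24 -j MASQUERADE

    The network name "default" is an assumption; virsh net-list shows the actual name. DNS still resolving fits this picture, since dnsmasq answers locally on 192.168.100.1 without needing NAT.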

    Read the article

  • Nginx server 301 Moved permanently

    - by user145714
    When I do curl -v http://site-wordpress.com:81 I receive this result:

        About to connect() to site-wordpress.com port 81 (#0)
        Trying ip... connected
        Connected to site-wordpress.com (ip) port 81 (#0)
        GET / HTTP/1.1
        User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        Host: site-wordpress.com:81
        Accept: */*

        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    Seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK but a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
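
    The 301 appears to be issued by WordPress itself rather than by nginx (note the X-Pingback header in the response): when the requested host/port doesn't match the configured site address, WordPress's canonical-redirect logic sends the browser to the stored URL. A hedged check, assuming the standard wp_ table prefix:

        -- the stored site address must include the :81 port
        SELECT option_name, option_value FROM wp_options
        WHERE option_name IN ('siteurl', 'home');

        UPDATE wp_options SET option_value = 'http://site-wordpress.com:81'
        WHERE option_name IN ('siteurl', 'home');

    Separately, the usual idiom for the WordPress front controller in nginx is try_files rather than the if/rewrite pair, e.g. location / { try_files $uri $uri/ /index.php?$args; } - though that is a style point rather than the cause of the 301.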

    Read the article

  • Thousands of visits a day from untraceable traffic to website - Serious issue

    - by kel
    At the end of January we noticed a spike in traffic to what JetPack stats says was the home/archive page and what Google was classifying as going to /gaming/, which is an archive list in WordPress. This started off as ~3,000 unique visitors and jumped up to 65,000 unique visitors in one day, again all to the "home" page. This happened over the course of a couple of weeks and we thought we were being attacked. The traffic then dropped off for a few days, but came back at only about ~15,000 uniques a day and has been like that every day since. We came to the conclusion that something wasn't tracking right somewhere, that this is legitimate traffic, and brushed it off. Now here comes the problem: Google AdSense has just disabled our account for "invalid clicks". We are trying to figure out where this traffic is coming from and stop it if it's not legitimate, or figure out a way to track it correctly. Specs for the site: dedicated server running CentOS 6 with nginx, php-fpm and MySQL. The site is built in WordPress and we use CloudFlare and W3 Total Cache. Analytics in use are Google Analytics, Quantcast, Alexa and Compete. Any kind of help would be awesome.

    UPDATE: I'm finding more people with the same type of problem and there doesn't seem to be a solution:

        http://netmeg.com/bot-attack/
        http://stkywll.com/2012/03/02/annoying-cyborgs-attach-distort-analytics/

    After looking at the access logs I noticed they were all CloudFlare IPs. I looked into that and found out CloudFlare acts as a proxy, and there is a way to fix the logs in nginx (see the sketch below). The visits come from many different ISPs in the US. They go to /games/ or /gaming/ (/games/ redirects to /gaming/) and all seem to have the same user agent:

        Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
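
    The nginx fix mentioned above is the real-IP module; a sketch (the CloudFlare ranges below are illustrative and change over time - pull the current list from CloudFlare before using this):

        # restore the visitor IP that CloudFlare forwards
        real_ip_header CF-Connecting-IP;
        set_real_ip_from 204.93.240.0/24;    # example CloudFlare range
        set_real_ip_from 199.27.128.0/21;    # example CloudFlare range

    With the true client IPs in the access log, it becomes possible to tell a botnet (many unrelated ISPs, one user agent) from a misbehaving crawler, and to rate-limit or block accordingly.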

    Read the article

  • Need help trying to allow my remote PowerShell script to run on my Windows 2008 r2 Server

    - by Pure.Krome
    I've got a Windows 2008 R2 server with SQL Server 2008 R2 installed. I've got a SQL Server Agent job which tries to run a PowerShell step, but fails:

        Message
        Executed as user: FooServer\SqlServerUser. A job step received an error at line 1 in a PowerShell script.
        The corresponding line is '& '\\polanski\Backups\Database\7ZipFooDatabases.ps1''. Correct the script and
        reschedule the job. The error information returned by PowerShell is: 'File
        \\polanski\Backups\Database\7ZipFooDatabases.ps1 cannot be loaded. The file
        \\polanski\Backups\Database\7ZipFooDatabases.ps1 is not digitally signed. The script will not execute on
        the system. Please see "get-help about_signing" for more details.'. Process Exit Code -1. The step failed.

    OK. So I run PowerShell on that server and set the execution policy to Unrestricted. To check:

        PS C:\Users\theUser> Get-ExecutionPolicy
        Unrestricted
        PS C:\Users\theUser>

    Cool :) But it still doesn't work :( OK... what happens when I try to run the script from the command line?

        PS C:\Users\justin.adler> . '\\polanski\Backups\Database\7ZipMotorshoutDatabases.ps1'

        Security Warning
        Run only scripts that you trust. While scripts from the Internet can be useful, this script can
        potentially harm your computer. Do you want to run \\polanski\Backups\Database\7ZipFooDatabases.ps1?
        [D] Do not run  [R] Run once  [S] Suspend  [?] Help (default is "D"):

    Er... didn't I already tell the server that ANY file can be run? Notice the file is located at \\polanski\Backups\Database\. So can someone make any suggestions?
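
    Two hedged things to check. First, the execution policy is scoped, and the Agent job runs as FooServer\SqlServerUser, not as the interactive user who ran Set-ExecutionPolicy:

        # run as administrator; applies machine-wide rather than per user
        Set-ExecutionPolicy Unrestricted -Scope LocalMachine
        Get-ExecutionPolicy -List     # shows the effective policy per scope

    Second, the script lives on a UNC path, which PowerShell treats as a remote zone - hence the "not digitally signed" wording and the trust prompt even with Unrestricted in play. Adding \\polanski to the Local Intranet zone in Internet Options, or running the step as an operating-system (CmdExec) step with powershell.exe -ExecutionPolicy Bypass -File \\polanski\..., sidesteps the zone check. PowerShell 2.0's zone handling varies, so treat this as a direction to test rather than a guaranteed fix.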

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the job, slowly drop so low that the BE job-rate display in "current jobs" goes blank. Since we usually only do changed files for one day, the job is small and finishes overnight, so we don't worry about the slowness. But we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea: the mentioned working-set job has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • Find which files an apache process is writing to?

    - by Haluk
    We have this Apache process which becomes I/O-bound from time to time. Using atop, we can see it is a write operation. Using lsof -p <PID> we can see the list of files open by the httpd process. First we thought the log files must be the problem, so we turned them off just to test; the write operations still continue. We will continue testing a few other things. For instance, we use PHP session variables a lot - maybe PHP session files are getting all the writing. But is there a way to quickly identify the files being written to by the httpd process? That way we can focus our efforts on those files.

    UPDATE: We used the strace command as suggested. Here are two lines from the output:

        write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
        write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19

    We do not have a MySQL process on this server. So is strace also showing what is being written to an ethernet port?

    UPDATE 2: During high I/O load, the process which consumes most of the write resources gives the following output to strace -e trace=write -p <PID>:

        --- SIGCHLD (Child exited) @ 0 (0) ---
        write(9, "!", 1)                        = 1
        write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70

    However, I cannot figure out where these are being written to.
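
    strace shows write() on any file descriptor, including sockets - which is what both updates are showing (a MySQL connection to a remote host, and Apache's internal dummy connection to itself). A hedged way to map a descriptor number to its target while the process runs (descriptor numbers below are taken from the output above and will differ per worker):

        # fd 23 of the process: a symlink to the file, or "socket:[inode]"
        ls -l /proc/<PID>/fd/23

        # or let lsof resolve a single descriptor, including socket endpoints
        lsof -p <PID> -a -d 23

        # watch writes together with opens so new descriptors are identified
        strace -f -e trace=open,write -p <PID>

    If /proc/<PID>/fd/23 shows socket:[...], the write is network traffic (e.g. MySQL on another host), not disk I/O.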

    Read the article

  • Uploading to another domain gives HTTP code 405

    - by dragon112
    I'm trying to upload a file (which can be quite large) from the website on one server to the backend of another server, using plupload. Let's say:

        domain 1 = http://www.websitedomain.com/uploadform
        domain 2 = http://www.backenddomain.com/uploadhandler

    Trying to upload, I send the following:

        OPTIONS /main/uploadnetwork.php HTTP/1.1
        Host: backenddomain.com
        Connection: keep-alive
        Access-Control-Request-Method: POST
        Origin: http://www.websitedomain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Access-Control-Request-Headers: origin, content-type
        Accept: */*
        Referer: http://www.websitedomain.com/uploadform
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        DNT: 1

    But when I try to start the upload, the server returns the following:

        HTTP/1.1 405 Method Not Allowed
        Allow: GET, HEAD, OPTIONS, TRACE
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        X-Powered-By-Plesk: PleskWin
        Date: Mon, 01 Oct 2012 12:41:57 GMT
        Content-Length: 999

    After doing some research I found out that a browser does this to check if the server will accept the intended message. It looks like my server doesn't feel like accepting a simple POST call, even though I use POST all the time. The Google Chrome console gives the following error:

        XMLHttpRequest cannot load http://www.backenddomain.com/uploadhandler.
        Origin http://www.websitedomain.com is not allowed by Access-Control-Allow-Origin.

    Does anyone know how to stop the browser from checking, or how I can tell my server to just accept the POST?
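
    The preflight can't be turned off for a cross-origin XHR; the backend has to answer it. A hedged sketch for IIS 7.5 via web.config on backenddomain.com (header values are illustrative - the origin should stay locked to the real site):

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <httpProtocol>
              <customHeaders>
                <add name="Access-Control-Allow-Origin" value="http://www.websitedomain.com" />
                <add name="Access-Control-Allow-Methods" value="POST, OPTIONS" />
                <add name="Access-Control-Allow-Headers" value="origin, content-type" />
              </customHeaders>
            </httpProtocol>
          </system.webServer>
        </configuration>

    The 405 with Allow: GET, HEAD, OPTIONS, TRACE also suggests POST (and OPTIONS) aren't mapped for that URL in IIS, so the handler mapping for .php may need those verbs enabled as well.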

    Read the article

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via a script, then upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end. The problem I am currently trying to solve is that when I update the file (upload from the host to FTP, then download from FTP to the Zabbix server), the value does not update. I started with only log[], but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, following the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines already caught for changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent - I haven't found any interesting info about the problem there. I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see two workarounds: appending more lines to the file instead of replacing it, or making new files and checking them with logrt[], but neither satisfies my needs. Any help is greatly appreciated. Of course I will provide additional information if requested - for now I don't know what else might be useful.
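
    A hedged way to take the server-side polling out of the equation is to query the key directly with zabbix_get against the agent that actually reads the file (in this layout, presumably the Zabbix server's local agent; host and paths are from the setup above):

        zabbix_get -s 127.0.0.1 -k 'vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]'

    If this returns 28.2 but the item in the frontend stays stale, the problem is on the server/item side (update interval, unsupported state); if it returns nothing, the regexp or encoding parameters are the place to look. Note also that log[] tracks file position and only reads appended data, so a re-downloaded file of the same or smaller size may never be re-read - which matches the behavior described.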

    Read the article

  • What are the possible problems, when wget returns code 500 but same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
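
    Since the wget request differs from the browser's mainly in headers and session state, a hedged way to narrow it down is to replay the request with the browser's characteristics added one at a time (the flags are standard wget; the header values are illustrative):

        wget --debug \
             --header='User-Agent: Mozilla/5.0 (X11; Linux x86_64) ...' \
             --header='Accept-Language: de,en;q=0.8' \
             --header='Cookie: blueprint2-staging=<session-id-from-browser>' \
             'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'

    If adding the browser's session cookie makes the 500 go away, the application is failing on requests without an established session; if the User-Agent alone fixes it, something is branching on the agent string. The Set-Cookie and no-cache headers show the request did reach the application, so the application's own log (and Apache's error_log) is the other place to look.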

    Read the article

  • Strange issue in header location redirect

    - by hd01
    I have three websites hosted (example1.com, example2.com, example3.com) on a server. There is a page (test.php) on example1.com with just the code below inside it:

        <?php
        header('Location: http://example2.com/a.php');
        ?>

    When I browse test.php it goes to http://example1.com/a.php. It doesn't understand it is another domain's URL; it tries to find the page on itself. But when I put http://google.com instead of http://example2.com/a.php, it works correctly. I am really confused. What is the problem? Should I set some configuration on the server? (I am administrator of the hosting server.) PS: the server is behind a Pound server. Here's the Firebug Net output for example1.com/test.php.

    Response headers:

        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Request headers:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip, deflate
        Accept-Language: en-us,en;q=0.5
        Connection: keep-alive
        Cookie: mycookie
        Host: example1.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
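
    Pound is the prime suspect here: it can rewrite Location headers in responses when they point at a host it serves, so a redirect to example2.com (behind the same Pound) gets rewritten to the requesting host, while http://google.com is left alone. A hedged fragment for pound.cfg (RewriteLocation is a real Pound directive; the listener shown is illustrative):

        ListenHTTP
            Address 0.0.0.0
            Port    80
            RewriteLocation 0    # 0 = never rewrite Location/Content-location
            ...
        End

    This matches the observed behavior exactly - the redirect works for external hosts and breaks only for co-hosted domains - so it is worth testing before changing anything in Apache or PHP.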

    Read the article

  • Getting the EFS Private Key out of system image

    - by thaimin
    I recently had to re-install Windows 7 and I lost my exported private key for EFS. I do, however, have the entirety of my old user directory, and figure the key must be in there SOMEWHERE. The only question is how to get it out. I did find the PUBLIC keys in AppData\Roaming\Microsoft\SystemCertificates\My\Certificates. If I import them using certmgr.msc, the certificate information says I do have the private key, but if I try to export them it says I do not have the private key. Also, decryption of files doesn't work. There is also a Keys folder at AppData\Roaming\Microsoft\SystemCertificates\My\Keys. After importing the certificates I copied those over into my new installation, but it had no effect. I am starting to believe the keys are in either AppData\Roaming\Microsoft\Protect\S-1-5-21-...\ or AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-...\, but I am unsure how to use the files in those folders. Also, since my SID has changed, will I be able to use them? The other parts of the account have remained the same (name and password). I also have complete access to the old user registry hive and most of the old system files (including the old system registry hives). I keep seeing references to a "Key Recovery Agent" but have not found anything about using one, just that it can be used. Thanks!

    Read the article

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random-character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2:

        HEAD http://xqwvykjfei/ HTTP/1.1
        Host: xqwvykjfei
        Proxy-Connection: keep-alive
        Content-Length: 0
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The response to this request is as follows:

        HTTP/1.1 502 Fiddler - DNS Lookup Failed
        Content-Type: text/html
        Connection: close
        Timestamp: 08:15:45.283

        Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known

    I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it. The one unusual modification I made to my system last week was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both, but am still seeing the traffic. I have run a virus scan (Trend Micro) and HiJackThis looking for malicious code, but have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign or indicative of a bigger problem. Thanks.

    Read the article

  • JavaScript settings are on but I still have issues with web sites using JavaScript.

    - by Mike
    I have two computers at home that are identical, except that my main computer has Adobe Acrobat 9 Pro Extended and Outlook 2010 (Beta) installed. On the main computer I have issues when I log into a website that uses pop-up calendars to select a date; the error details are pasted below. I checked my other computer and it is fine. I've checked the JavaScript settings and they are correct. I am at a loss. Any suggestions?

        Webpage error details

        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; HPDTDF; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; eMusic DLM/4; .NET4.0C)
        Timestamp: Mon, 26 Jul 2010 22:03:59 UTC

        Message: 'CalendarPopup' is undefined
        Line: 390
        Char: 2
        Code: 0

        Message: 'CalendarPopup' is undefined
        Line: 410
        Char: 2
        Code: 0

    Read the article

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, using a mix of different models of servers from the same vendor in one SQL 2005 cluster. Suppose I have one more powerful box (more RAM, more shizzle) and one less powerful box (less memory, less shizzle) bound together in a 2-node cluster. These would be HP DL380 and DL580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically, a CLR proc will monitor the instances and self-regulate the memory caps on each instance, so that they won't page or step on one another. I get the fact that the instances might be slower and/or under memory pressure if they share a "lesser" node, and that's OK; the business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for?

    More info 10/28: Doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (an unhappy situation), with many ugly out-of-memory messages in the error log. It's an extreme case, but good to know. It seems, then, that it would only be really safe to set this on startup of the instance, as in a startup script that says "I am on node1, so my RAM settings are X; I am on node2, so they are Y" (see the sketch below), like this: http://sqlblog.com/blogs/aaron_bertrand...

    Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
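
    A hedged T-SQL sketch of the per-node startup adjustment (node names and memory thresholds are illustrative; the property and sp_configure options are standard SQL Server):

        -- run at instance startup, e.g. from a SQL Agent job scheduled
        -- "Start automatically when SQL Server Agent starts"
        DECLARE @node sysname;
        SET @node = CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS'));

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        IF @node = 'NODE1'      -- the bigger box: more RAM to give
            EXEC sp_configure 'max server memory (MB)', 49152;
        ELSE                    -- the smaller box: tighter cap
            EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;

    SERVERPROPERTY('ComputerNamePhysicalNetBIOS') reports the physical node the instance is currently running on, which makes it a convenient switch for failover-aware settings, and applying the cap at startup avoids the shrink-under-pressure crash described above.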

    Read the article
