Search Results

Search found 5157 results on 207 pages for 'checking'.


  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our nginx server running Magento down to the following point.

    Background: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a Magento controller (cms/directive). When we set the nginx error_log level to info, we get the following line (line break originally inserted for readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns. It is in `Varien_Image_Adapter_Abstract::getMimeType()`, where `$this->_fileName` is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif and `$_SERVER['REQUEST_URI']` is http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL:

        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL back to the same server that is running the script, a link to a static .gif that does not exist. Sample URL:

        http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    Once that line executes, the nginx server no longer responds to any subsequent request. After waiting around 10 minutes, nginx starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not hang; it simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
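
    A minimal sketch of one way to keep a lookup like this from tying up a PHP-FPM worker indefinitely: fetch the image yourself with an explicit socket timeout and read the dimensions from the downloaded bytes. This is illustration only, not the Magento code (the helper name is made up, and getimagesizefromstring() needs PHP 5.4+); it does not explain the loopback deadlock, but it bounds how long a bad URL can hold a worker.

        <?php
        // Sketch: download the image over HTTP with a hard timeout, then read the
        // dimensions from the bytes in memory so getimagesize() never blocks on
        // the network itself.
        function imageSizeWithTimeout($url, $timeoutSeconds = 5)
        {
            $context = stream_context_create(array(
                'http' => array('timeout' => $timeoutSeconds),
            ));
            $data = @file_get_contents($url, false, $context);
            if ($data === false) {
                return null; // unreachable, timed out, or 404
            }
            $info = @getimagesizefromstring($data);
            return ($info === false) ? null : $info;
        }

        var_dump(imageSizeWithTimeout('http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif'));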

    Read the article

  • Nginx Server Block Not Working? - Already running other vhosts just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for 5 or so sites. But I've just tried adding another and for some reason it's just not working. By "not working" I mean that in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked.

    Hosts file (I have multiple subdomains):

        xxx.xxx.xx.xxx  *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;

            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in /etc/nginx/sites-enabled/ and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the site with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    I've emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave

    Read the article

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of open space, and there was plenty of available memory and swap space. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues, but SHOW PROCESSLIST only showed my own connection, no others. Worst of all, in the MySQL log no errors even remotely coincided with the unavailability of the server.

    On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed that one (in case it was holding too many open connections or something), but it didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again.

    What else should I check in these circumstances? Any idea what the problem might be? And how might I verify that it is in fact the problem?
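
    Not a diagnosis, but one cheap thing that has helped me in similar situations is leaving a tiny probe running on the web server so the next outage gets captured with a timestamp. A rough PHP sketch (the hostname and credentials are placeholders), run from cron every minute or so:

        <?php
        // Probe: attempt a MySQL connection from the web server and log the result,
        // so the next outage can be lined up against the MySQL error log and the
        // system logs afterwards.
        $start = microtime(true);
        $link  = @mysqli_connect('db.example.com', 'probe_user', 'probe_pass');

        if (!$link) {
            error_log(sprintf('[%s] MySQL connect failed after %.2fs: %s',
                date('c'), microtime(true) - $start, mysqli_connect_error()));
        } else {
            // Also record how busy the server looks from the outside.
            $res = mysqli_query($link, "SHOW STATUS LIKE 'Threads_connected'");
            $row = mysqli_fetch_row($res);
            error_log(sprintf('[%s] OK, Threads_connected=%s', date('c'), $row[1]));
            mysqli_close($link);
        }

    Comparing Threads_connected against max_connections around the time of a failure would at least rule the connection limit in or out.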

    Read the article

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; an OpenLDAP client was already installed on this machine. The openldap-client version (2.4.16) was older than the new openldap-server (2.4.21), so the client version was updated as well. The OpenLDAP client is used by Postfix on this server, and after all the updates Postfix can't start any more. The error on postfix stop|start is:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I try to deinstall the current version of openldap-client, the system writes:

        ===> Deinstalling for net/openldap24-client

    O.K., but when I run "make install" the system writes:

        ===> Installing for openldap-sasl-client-2.4.23
        ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
        ===> Generating temporary packing list
        ===> Checking if net/openldap24-client already installed
        ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
        You may wish to ``make deinstall'' and install this port again by ``make reinstall'' to upgrade it properly.
        If you really wish to overwrite the old port of net/openldap24-client without deleting it first, set the variable "FORCE_PKG_REGISTER" in your environment or the "make install" command line.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and Postfix keeps giving the error:

        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS on top of LVM over that to create 100 TB of usable storage.

    All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, behaves better under disk failure (checking the larger filesystem can take a day or more), and can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab).

    So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server with glusterfs running on top of that if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • Windows Share permissions

    - by Armando
    I have a SQL/file server whose file share and SQL instance I am replicating to a replica server using ArcServer RHA. Everything seems to work as far as the replication of the SQL instance and the share is concerned.

    When I fail over to the replica server, the DNS host A record is modified to point to the replica server's IP address, so if I do an NSLOOKUP on ServerA it then points to the IP address of ServerB. And since the SQL instance is named the same, I can still map my ODBC connections to ServerA and can still make a SQL connection. But when I try to open \\ServerA\Share I get an error saying I do not have permissions to the share. I think this is because it uses Kerberos authentication and the share is tied to the actual server host name.

    I have tried putting in a CNAME pointing to ServerA and disabling strict name checking on ServerB, as well as adding the CNAME to OptionalNames in the registry, but I still get the error while ServerA is powered off. Is there a way to reset the authentication of the share to use the DNS CNAME?

    Read the article

  • Motorola Surfboard SB6121 modem connected to 2WIRE i38HG wireless router but there's no internet access

    - by Jessica
    I have just switched to Comcast cable internet from AT&T U-verse, and I was hoping to use the 2WIRE wireless router with the new Surfboard modem so I can have wireless access. I messed around with some settings and got it working for my laptop (I'm not terribly well versed in computer stuff; I think it was mostly luck) for about a week.

    The other day I tried to get online and there was no internet connection. I restarted the equipment with no success and then plugged the modem directly into the laptop. This worked, so I knew there was no outage. I connected the ethernet cord to the router and a second cord to my laptop and that worked, too. But when I try again just with the wireless, the laptop connects to the router but doesn't recognize it or find an internet connection.

    I tried to go to http://gateway.2Wire.net to fiddle with the settings, but all I get is a Server Not Found page. I tried to check the IP address, but this is really kind of over my head and I get different things checking it while plugged into only the modem vs. when I plug into the router. Can anyone help? The frustrating thing is that I had it working for a while, so I know it can do it!

    Read the article

  • Why am I experiencing random connection timeouts? (CentOS)

    - by Ryan
    I have a CentOS server that currently hosts several websites (all related to each other in some form or another). Recently, at the most random times throughout the day, the website speed will slow to a crawl and eventually hit a connection timeout. When I say random times, this typically happens anywhere between 10am and 1pm; however, this morning it happened at 8am.

    I do not have a lot of server experience, so I'm not sure what I should be looking for in this situation. What are some possible causes of my server slowing the websites down to a complete crawl or timing out? Are there specific things I should be checking when this happens?

    I have noticed, using:

        tail /var/log/httpd/access_log

    that when this downtime occurs there are a lot of requests from IP addresses related to Bingbot, Googlebot, and sometimes various bots or spiders that I am unfamiliar with. Could this be related, and if so, how can I avoid it causing my websites to lag out?

    Thanks in advance for any help or advice. The websites that are timing out are built with PHP and use a MySQL database to display information.
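
    As a starting point on the bot question, a quick-and-dirty tally of requests per client IP and per user agent makes it easy to see whether crawlers dominate the traffic around a slowdown. A sketch, assuming the standard Apache "combined" log format (the path may differ on your box):

        <?php
        // Rough log summary: count requests per client IP and per user agent.
        // Assumes the "combined" format, where the user agent is the last quoted
        // field on each line. file() loads the whole log into memory, so rotate
        // or trim the log first if it is huge.
        $logFile = '/var/log/httpd/access_log';

        $ips = array();
        $agents = array();
        foreach (file($logFile) as $line) {
            if (preg_match('/^(\S+).*"([^"]*)"\s*$/', $line, $m)) {
                $ips[$m[1]]    = isset($ips[$m[1]])    ? $ips[$m[1]] + 1    : 1;
                $agents[$m[2]] = isset($agents[$m[2]]) ? $agents[$m[2]] + 1 : 1;
            }
        }
        arsort($ips);
        arsort($agents);

        print_r(array_slice($ips, 0, 10, true));    // top 10 client IPs
        print_r(array_slice($agents, 0, 10, true)); // top 10 user agents

    If crawlers really are the bulk of the load, a Crawl-delay in robots.txt (Bing honours it; Google does not) or rate limiting at the web server is the usual next step.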

    Read the article

  • Possible attack on my SQL server?

    - by erizias
    Checking my SQL Server log, I see several entries like this:

        Date: 08-11-2011 11:40:42  Source: Logon
        Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50]

        Date: 08-11-2011 11:40:42  Source: Logon
        Message: Error: 18456. Severity: 14. State: 8.

        Date: 08-11-2011 11:40:41  Source: Logon
        Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50]

        Date: 08-11-2011 11:40:41  Source: Logon
        Message: Error: 18456. Severity: 14. State: 8.

    And so on. Is this a possible attack on my SQL Server from China? I looked up the IP address at ip-lookup.net, which said it was Chinese. And what should I do?

    - Block the IP address in the firewall?
    - Delete the user sa?

    And how do I best protect my web server? :) Thanks in advance!

    Read the article

  • How to quickly check if two columns in Excel are equivalent in value?

    - by mindless.panda
    I am interested in taking two columns and getting a quick answer on whether they are equivalent in value or not. Let me show you what I mean: it is trivial to make another column (EQUAL) that does a simple compare for each pair of cells in the two columns. It's also trivial to use conditional formatting on one of the two, checking its value against the other.

    The problem is that both of these methods require scanning the third column or the color of one of the columns. Often I am doing this for columns that are very, very long, where visual verification would take too long, and I don't trust my eyes anyway. I could use a pivot table to summarize the EQUAL column and see whether any FALSE entries occur. I could also enable filtering, click on the filter for EQUAL and see which entries are shown. Again, all of these methods are time consuming for what seems to be such a simple computational task.

    What I'm interested in finding out is whether there is a single-cell formula that answers the question. I attempted one above in the screenshot, but clearly it doesn't do what I expected, since A10 does not equal B10. Does anyone know of one that works, or some other method that accomplishes this?
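
    One single-cell approach that has worked for me, assuming the data sits in A2:B100 (adjust the ranges to your data), is to count the mismatching pairs with SUMPRODUCT; it returns 0 only when every pair of cells is equal:

        =SUMPRODUCT(--(A2:A100<>B2:B100))

    Wrapping it as =SUMPRODUCT(--(A2:A100<>B2:B100))=0 gives a plain TRUE/FALSE answer instead of a count.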

    Read the article

  • Windows module installer delaying login, server 2008 R2

    - by Kyle
    We updated our servers this weekend (Windows updates). Everything went fine except that one of our terminal servers now hangs at login with the message "Waiting for Windows Modules Installer." It eventually times out and leaves an event log message that the service stopped unexpectedly. I have disabled the service and users can now log in in a reasonable time frame; however, we will need to re-enable the service in order to install further updates. I'm not sure where to start with this one. I'm an entry-level admin and my colleagues are on vacation today - thank God this isn't a serious problem. Further details:

    - It affects all users.
    - The only third-party software on the server is our ERP software and ScrewDrivers from Tricerat.
    - The only event log message is that the service stopped unexpectedly.
    - The Server Manager screen does not display any information about roles; it just says "error".
    - The Remote Desktop roles all seem to be functioning properly; RemoteApp works, as does standard RDP.

    Let me know if there are any further details I can provide; I will be checking this frequently throughout the day.

    Read the article

  • ASUS laptop doesn't charge/use the battery after reinstalling Windows 7

    - by Stan
    I've done a clean install of Windows 7 x64 on an ASUS X501A laptop. The battery is detected and shows in the system tray as "plugged in, charging". However, the charge level stays at 76%, and if the AC cord is unplugged the laptop turns off. The laptop does not turn on without being plugged in either. Everything worked perfectly prior to the reinstall. I've tried:

    - Downloading and installing all the ASUS drivers, including the ATK ACPI driver
    - Checking the BIOS - there do not seem to be any battery-related settings
    - Flashing the BIOS to the latest version
    - Uninstalling "Microsoft ACPI-Compliant Control Method Battery" in Device Manager, as suggested on the internet
    - A full power discharge/ATX reset as suggested by ASUS support: remove the mains charger, remove the battery, press and hold the power button for 10 seconds, reconnect battery and mains and turn on

    I have a feeling all this may have something to do with the EFI BIOS that comes on the laptop. During the reinstall I had to delete all partitions and start anew, because the Windows installer complained about the improper order of GPT partitions. The EFI System Partition was recreated by the installer, and I am guessing that it may be missing the particular ACPI driver needed to make the battery work. I've tried researching this but could not come up with any useful info. I'm hoping someone here may know a bit more about this and can help me understand what's going on and how to fix it. Barring that, I'll have to re-image the drive from an identical ASUS laptop with the stock install and hope it fixes things.

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right?

    I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this?

    I'm a little worried because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down.

    I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3 + CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Can you see something wrong in my .htaccess?

    - by AlexV
    OK, after much searching and trial and error, I've managed to create an .htaccess file that does what I wanted (see explanations and questions after the code block):

        <IfModule mod_rewrite.c>
            RewriteEngine On

            #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
            RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$

            #2 If the requested URI does not end with an extension OR if the URI ends with .php*
            RewriteCond %{REQUEST_URI} !\.(.*) [OR]
            RewriteCond %{REQUEST_URI} \.php.*$ [NC]

            #3 If the requested URI is not in an excluded location
            RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$

            #Then serve the URI via the mapper
            RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
        </IfModule>

    This is what the .htaccess should do:

    #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops). This file will always be at the root of the domain.

    #2 the .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* files (.php, .php4, .php5, .php123...).

    #3 some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2).

    Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI.

    Even though I have tested it and everything works, I want to know if what I am doing is correct (and if it's the "best" way to do it). I've learned a lot with this "project", but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check it here before putting it in production...
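
    For context on what ends up on the PHP side, here is a hypothetical skeleton of url-mapper.php (the routing array and file names are made up); it only illustrates that the RewriteRule above hands the original path over in $_GET['uri']:

        <?php
        // Hypothetical url-mapper.php: the RewriteRule passes the originally
        // requested path in the hidden "uri" GET parameter.
        $uri  = isset($_GET['uri']) ? $_GET['uri'] : '/';
        $path = rtrim((string) parse_url($uri, PHP_URL_PATH), '/');
        if ($path === '') {
            $path = '/';
        }

        // Placeholder routing table; a real mapper would presumably look the
        // SEO URL up in a database or a generated map.
        $routes = array(
            '/'        => 'home.php',
            '/contact' => 'contact.php',
        );

        if (isset($routes[$path])) {
            require $routes[$path];
        } else {
            header('HTTP/1.1 404 Not Found');
            echo 'Page not found';
        }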

    Read the article

  • Unable to remove invalid (orphaned?) SPNs

    - by Brent
    tl;dr version: I renamed the domain from internal.domain.com to domain.com and now have 4 SPNs that I am unable to remove from the DC.

    So my domain was internal.domain-name.com and I renamed it to domain-name.com, and I thought everything was good. Several days later, I started setting up my RD Gateway and noticed issues surrounding Group Policy. I ran dcdiag and the SystemLog part fails:

        Starting test: SystemLog
           A warning event occurred.  EventID: 0x00001796
              Time Generated: 08/25/2014   02:48:30
              Event String:
              Microsoft Windows Server has detected that NTLM authentication is presently being used between clients and this server. This event occurs once per boot of the server on the first time a client uses NTLM with this server.
           An error event occurred.  EventID: 0xC0001B70
              Time Generated: 08/25/2014   02:49:18
              Event String:
              The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
           An error event occurred.  EventID: 0xC0001B70
              Time Generated: 08/25/2014   02:49:48
              Event String:
              The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
           An error event occurred.  EventID: 0xC0001B70
              Time Generated: 08/25/2014   02:52:47
              Event String:
              The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:

    This made me check my AD for possible references to the .internal domain. I found four, which I tried to remove with:

        setspn -D E3514235-4B06-11D1-AB04-00C04FC2DCD2/d79fa59c-74ad-4610-a5e6-b71866c7a157/internal.domain-name.com ServerName
        setspn -D HOST/ServerName.domain-name.com/internal.domain-name.com ServerName
        setspn -D GC/ServerName.domain-name.com/internal.domain-name.com ServerName
        setspn -D ldap/ServerName.domain-name.com/internal.domain-name.com ServerName

    Also, checking my DNS records, there's an internal subdomain that I can delete, but it comes back as well. I've tried removing the SPNs to no avail. Is there something I'm missing?

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle a failure of the AZ where the monitoring server lives and essentially have a second server pick up the checking load (active/passive, active/active, so long as it doesn't fail completely).

    The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping or timeouts as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker complains that a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert?

    I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • VM load and ping problems after replacing server motherboard

    - by Andre
    Recently we had to replace the motherboard of one of our servers. The procedure was done by IBM, as the server was under warranty. The server runs ESXi 5.1 with several virtual machines, including our main mail server (Domino) and a file server.

    After replacing the motherboard and starting the VMs, ESXi asked us whether we had moved or copied each VM (a different motherboard looks like a different computer). We clicked the latter. We started each machine and, after some basic reconfiguration, all of them were up.

    However, we have been having problems with the mail server. It has been acting really slow at times (this could be when it syncs with the secondary mail server), and we have seen in Centreon (a Nagios front end) that its CPU load and ping response times have been a bit high at times. There was a moment this morning when I tried connecting via SSH and it was really slow to show the login prompt and to run basic commands like ifconfig and top.

    This particular mail server is CentOS 4.4.7, 64-bit. The only configuring we had to do after restarting it was to set up the network connection, as it was resolving through DHCP. Our mail software is Lotus Notes server 9.

    Do you know of any way in which this replacement may be causing these difficulties, and how to fix it? Thanks.

    Read the article

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID 5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs of 750 GB each, resulting in an array capacity of 1.5 TB.

    One day (a year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, for example) to an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID 0 or to two separate discs, because I had no temporary storage (and one cannot simply convert a RAID 5 to RAID 0, AFAIK).

    So now, for about a year, I have had a non-redundant RAID 5 with 2 of 3 discs running. Sometimes one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks one of the drives has been failing very frequently (every few hours), so it gets really annoying to manage. My questions are:

    - Is there any disadvantage (apart from the annoying management) of a non-redundant RAID 5 (with one drive less than typical) over a RAID 0? If I understand it correctly, both have no redundancy and the same capacity. On a temporary drive failure, I can restart the array in both cases, assuming that the drive itself still works after the failure.
    - Can it happen that the drive contents are altered on a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a filesystem-level checking tool)?
    - Since the drive most probably only has a defective contact, causing it to fail for a second, can I tell mdadm to automatically restart the array, so I will not even notice the failure if no application wanted to access the filesystem during the failure?

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get synced automatically. Basically, my own local version of Dropbox without using cloud storage.

    I have looked into solutions using rsync. As I understand it, rsync is not really capable of doing a bi-directional sync. I also don't want to have to invoke the sync process manually; I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to come back, without bugging me every few minutes.

    I have looked into Synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks

    PS: just for clarification - I don't want to sync for backup purposes; instead I want to sync so that all Macs have a local copy of the most recent changes to files.

    Read the article

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a sub-folder in a shared folder and gets denied with "File in use by another user". I tried checking the Shared Folders snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked "Effective Permissions" for some of the folders/files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem.

    Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas, I'd love to hear them! THANK YOU.

    Read the article

  • Uploading to another domain gives HTTP code 405

    - by dragon112
    I'm trying to upload a file (which can be quite large) from the website on one server to the backend of another server using plupload. Let's say:

        domain 1 = http://www.websitedomain.com/uploadform
        domain 2 = http://www.backenddomain.com/uploadhandler

    When trying to upload, I send the following:

        OPTIONS /main/uploadnetwork.php HTTP/1.1
        Host: backenddomain.com
        Connection: keep-alive
        Access-Control-Request-Method: POST
        Origin: http://www.websitedomain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Access-Control-Request-Headers: origin, content-type
        Accept: */*
        Referer: http://www.websitedomain.com/uploadform
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        DNT: 1

    But when I try to start the upload, the server returns the following:

        HTTP/1.1 405 Method Not Allowed
        Allow: GET, HEAD, OPTIONS, TRACE
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        X-Powered-By-Plesk: PleskWin
        Date: Mon, 01 Oct 2012 12:41:57 GMT
        Content-Length: 999

    After doing some research I found out that a browser does this to check whether the server will accept the intended request. It looks like my server doesn't feel like accepting a simple POST call, even though I use POST all the time. The Google Chrome console gives the following error:

        XMLHttpRequest cannot load http://www.backenddomain.com/uploadhandler. Origin http://www.websitedomain.com is not allowed by Access-Control-Allow-Origin.

    Does anyone know how to stop the browser from checking, or how I can tell my server to just accept the POST?
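
    In case it helps anyone with a similar setup: assuming the OPTIONS request actually reaches PHP (IIS also has to be willing to pass the OPTIONS verb to the script), answering the preflight explicitly at the top of the upload handler is roughly what the browser is waiting for. A sketch only, with the allowed origin hard-coded:

        <?php
        // Sketch of CORS handling at the top of the upload handler. The origin,
        // methods and headers here mirror the preflight request shown above.
        header('Access-Control-Allow-Origin: http://www.websitedomain.com');
        header('Access-Control-Allow-Methods: POST, OPTIONS');
        header('Access-Control-Allow-Headers: Origin, Content-Type');

        if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
            // Preflight: send the headers above and stop. The browser only follows
            // up with the real POST if these headers satisfy it.
            header('HTTP/1.1 200 OK');
            exit;
        }

        // ... normal upload handling for the POST request continues below ...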

    Read the article

  • Recovering data from an external hard drive

    - by CCallaghan
    I have a WD Elements 2GB hard drive (formatted NTFS). I accidentally kicked out the USB cable while writing data to the disk, and now I can't access most of the data. Although this was ostensibly my backup drive, there is a great deal of important material on there which was only on there. I realise how idiotic this makes me. (So, formatting is not an option.)

    Things I've tried/information I've gathered:

    - Windows Explorer will recognise the drive itself. However, it will not access most directories therein (and will sometimes crash when exploring).
    - I can access all of the directories through the command line, but the dir command will often report that it can't read any files in most of the directories.
    - The situation was similar when I hooked it up to an Ubuntu machine: the file explorer crashed, but I could access directories - though not the files in those directories - via terminal commands. Several files I tried to copy out either resulted in an I/O error being reported or caused the command line to crash.
    - The Disk Management utility on Windows reports a healthy disk formatted as NTFS and not RAW. It also indicates the correct amount of space used and the drive's capacity (so it seems the files are not deleted).
    - I've tried to run chkdsk, but it hangs on Step 2 (checking indexes) at 74%. Step 1 reported no bad sectors.
    - I tried Recuva, but that didn't seem to work (stalled at 0% for half an hour).

    I should also note that the disk doesn't seem to be spinning smoothly; it seems to be chopping back, like it's reading the same sector over and over again. I noticed this after I kicked out the cable. Any help would be greatly appreciated.

    Update: It would seem the problem has taken a turn for the worse. The external hard drive now shows up on my computer as a local disk and is not mountable by Linux.

    Read the article

  • How to configure Apache to do Basic authentication, or pass NTLM through, while proxying?

    - by trotzim
    Here is my setup:

        browser --- apache proxy --- ISA server --- internet

    The ISA server requires authentication. The issue is allowing HTTPS through the two proxies. A configuration that works with HTTP is something like this (yes, I don't want to use ProxyPass but ProxyRequests):

        <virtualhost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                require valid-user
                Allow from all
            </proxy>
            ...
        </virtualhost>

    The user can authenticate on the apache proxy, then the authentication chain is sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the apache proxy. If I try to use the SSPI module (NTLM) with something like this:

        blablabla
        <proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [ SSPI stuff ]
            Require valid-user
            Allow from all
        </proxy>

    the apache server rejects the authentication (or the ISA server does, I don't really know). I used Wireshark to look at the nominal process while using the ISA server directly as the proxy: the first auth chain is of BASIC type, then it switches to NTLM (and the challenge continues with NTLM).

    How should I configure apache so that it forwards the NTLM authentication to the ISA proxy without checking it(*)? Or to rewrite headers to force BASIC authentication?

    (*) It seems not to be as easy as it sounds...

    Read the article

  • XAMPP server giving 404 error when requested by ipv4 connection

    - by boyb
    This is in reference to a previous question that I asked and that was answered by womble: http://serverfault.com/a/406280/127729

        So, now we have the real DNS records, we can do some diagnosis.

        dig for both A and AAAA on akosiboybastos.broker.freenet6.net gives a valid response, with an appropriate address. Good.

        dig for both A and AAAA on bastosforum.strangled.net gives the same responses (with a CNAME response thrown in). Also good. This means that the problem is not DNS-related, as those records are in order.

        wget -6 bastosforum.strangled.net/ gives a 200 OK response.

        wget -4 bastosforum.strangled.net/ gives a 404 Not Found response. This means that your webserver is misconfigured so that it's not serving the response you desire on IPv4.

        Given that the initial DNS problem asked in this question has been solved, I would recommend posting a new question with relevant webserver-related configuration, if you can't determine the configuration error yourself.

    I am using XAMPP (latest version) running phpBB 3.0.10 via an IPv6 tunnel from Freenet6, and my domain is akosiboybastos.broker.freenet6.com. There's nothing fancy about the installation, just an out-of-the-box install (with a few cosmetic mods). Both IPv4 and IPv6 traffic can connect using that URL, but when I put a CNAME record on my test domain, bastosforum.strangled.net, pointing it to akosiboybastos.broker.freenet6.com, only IPv6 can connect.

    As suggested by womble, this is a misconfigured webserver. To be honest, I don't know where to start checking on the server, as it is fully working if you use the domain given by Freenet6 (akosiboybastos.broker.freenet6.com). Any info on how to go about this server issue is welcome, as I'm really a noob when it comes to computers.

    regards
    boyb

    Read the article

  • Noisily rendered text in Firefox

    - by Notinlist
    Over the last week or so I've noticed that certain pages (Facebook, Stack Overflow, some news sites) have text rendering errors in Firefox. As a workaround, if I refresh the page, or simply select and deselect the buggy text, the unpleasant effect disappears. I don't have this effect in Internet Explorer or in any of my desktop applications.

    - Windows 7 Pro 64-bit (fresh)
    - Firefox 19.0.2 (fresh)
    - ATI Radeon HD 4600 Series (fresh drivers)

    Thanks for the help in advance!

    Update 1/2: I have only three add-ons: Forecastfox, a Hungarian spell-checking dictionary and Quick Locale Switcher. The latter two were installed after the effect appeared. I disabled the first one individually and it did not help. But if I start Firefox with add-ons disabled, I cannot reproduce the error. As far as I know this mode does not disable plugins, which I do have (Adobe Acrobat, Citrix ICA Client, Google Earth plugin, Google Update, Java Deployment Toolkit 6, MS Office 2010, MS Windows Media Player Firefox plugin, Shockwave Flash, Silverlight, VLC Web).

    Update 2/2: If I disable all plugins and extensions, I still have the problem. If I start Firefox with add-ons disabled, then I cannot reproduce the problem.

    Read the article
