Search Results

Search found 22689 results on 908 pages for 'bad request'.

  • Http 400 'Bad Request' and win32 status 1450 when larger messages are sent to a WCF service

    - by Tim Mahy
    We sometimes receive HTTP 400 Bad Request result codes when posting a large file (10 MB) to a WCF service hosted in IIS 6. We can reproduce this using SoapUI, and it seems unpredictable when it happens. In our WCF log the call is not received, so we believe the request reaches neither the ASP.NET nor the WCF runtime. This happens on multiple websites on the same machine, each with its own application pool. All IIS settings are default; only in ASP.NET and WCF do we allow bigger readerQuotas etc. The win32 status logged in the IIS log is 1450, which we think means "error no system resources". So now the question: a) how can we solve this, and b) (when a is not applicable :) ) which performance counters or logs are useful to learn more about this problem? Greetings, Tim
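
    One hedged possibility (an assumption to verify, not a confirmed cause): a 400 with win32 status 1450 can be produced below the ASP.NET/WCF layer while IIS buffers the request entity, so it is worth checking both the WCF/ASP.NET size limits and, if the site uses SSL, the IIS 6 UploadReadAheadSize metabase property. A minimal web.config sketch with illustrative 50 MB values:

        <system.web>
          <!-- maxRequestLength is in KB: 51200 KB = 50 MB -->
          <httpRuntime maxRequestLength="51200" executionTimeout="300" />
        </system.web>
        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="largeUpload" maxReceivedMessageSize="52428800">
                <readerQuotas maxArrayLength="52428800" maxStringContentLength="52428800" />
              </binding>
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>

    The binding name "largeUpload" and the basicHttpBinding choice are only placeholders; the endpoint would have to reference whatever binding configuration is actually in use.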

    Read the article

  • Rewrite request URI based on Host header in HAProxy

    - by DorinC
    I would like to set up HAProxy to forward HTTP requests to some backend servers but I need it to also rewrite the URI part based on the Host. I've read through the doc but it seems that reqirep isn't suitable for this purpose. Any idea if this is even possible with HAProxy? Here are the details of what I'm trying to accomplish: Requests that come in on: http://www.original-domain.com/ would be balanced between: http://server1/domains/www.original-domain.com/ ... http://serverN/domains/www.original-domain.com/ The proxy should be able to handle this for any number of domains (original-domain.com can be anything, it's not limited to a fixed set of values). For this to work HAProxy would need to rewrite a request like this: GET /original-uri HTTP/1.1 Host: original-domain.com to: GET /domains/original-domain.com/original-uri HTTP/1.1 Host: serverN and forward that request to one of the internal servers.
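
    A hedged sketch of one approach, assuming HAProxy 1.6 or later (where http-request set-path and log-format sample fetches are available); server names and addresses are placeholders:

        backend origin_servers
            # prepend /domains/<original host>/ to the path before forwarding
            http-request set-path /domains/%[hdr(host),lower]%[path]
            # rewrite the Host header to the name of the server actually chosen
            http-send-name-header Host
            server server1 10.0.0.1:80 check
            server serverN 10.0.0.9:80 check

    The older reqrep/reqirep directives cannot reference the Host header in their replacement, which may be why they looked unsuitable; set-path sidesteps that limitation.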

    Read the article

  • Logging all Firefox HTTP Request Headers?

    - by Hayek
    I'm using Ruby+Watir to request pages through Firefox. I would like to record the headers and content of every http request made through the browser. Would it be possible to configure a proxy solution to store this information, either in a file or pipe it into an application? I'm running Ubuntu x64. // Edit: I would like to store the data in logs because I would like to view it later. Preferably, I am looking for a solution that runs quietly in the background and stores the headers/content in files.
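
    One hedged option, assuming mitmproxy is acceptable as the intercepting proxy (flags differ slightly between versions): run mitmdump quietly in the background writing every flow (headers and bodies) to a file, and point Firefox or the Watir-driven profile at it as an HTTP proxy on localhost:8080.

        # record everything passing through port 8080
        mitmdump -p 8080 -w /var/log/firefox-traffic.mitm &

        # inspect the capture later
        mitmproxy -r /var/log/firefox-traffic.mitm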

    Read the article

  • haproxy modify request path

    - by zcourts
    I'm just getting started with HAProxy and I was wondering if it's possible to modify the request path for an HTTP request. One of the backend servers uses Dropwizard and its assets bundle. In my setup, /xyz serves static assets and /api/xyz serves REST resources. With HAProxy I want requests for api.host.com/xyz to be sent to backend/api/xyz and requests for host.com to be sent to backend/. I've gotten most of that working, but I can't figure out how to tell HAProxy to change the path, prepending /api/ to anything from api.host.com. Is this possible, or am I going about this the wrong way?
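
    A hedged sketch of one way to do the prepend, assuming HAProxy 1.6 or later where http-request set-path is available (hostnames and backend names are placeholders):

        frontend http-in
            bind *:80
            acl is_api hdr(host) -i api.host.com
            # requests arriving on api.host.com get /api prepended to the path
            http-request set-path /api%[path] if is_api
            default_backend dropwizard

        backend dropwizard
            server app1 127.0.0.1:8080 check

    On older HAProxy versions the same effect is usually attempted with reqrep against the request line, but set-path is considerably less error-prone.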

    Read the article

  • IIS6 won't respond to a request for a JS file after accessing through subdomain

    - by James
    I have a site running off www.mysite.com, for example. There is a JS file I'm accessing: www.mysite.com/packages.js. The first and subsequent times I access that packages.js file cause no problems... until I access a sub-site like this: sub-site.mysite.com. This naturally makes a request for that same packages.js, but the site hangs as it just keeps waiting and waiting for that JS file. Going back to the main site, the problem persists there. If I then rename packages.js to, say, packages2.js, the same pattern repeats: I can access the file on the main site, but after I try to access it through a sub-site, IIS then fails to respond to requests for that file. I realise this explanation is a little vague, but has anyone seen this sort of behaviour before? Thanks very much, James.

    Read the article

  • squid configuration change to accept HTTP requests on LAN

    - by Ratan Kumar
    I installed squid + dansguardian to block adult content on my Linux box (Ubuntu 12.10). Everything worked fine; it blocked as expected. Now the problem is that I am also running an Apache server for my LAN (a kind of website), but when accessing it via 192.168.0.1 it says squid has blocked the connection. This is the exact error: The following error was encountered while trying to retrieve the URL: http://192.168.0.16/ Connection to 192.168.0.16 failed. The system returned: (113) No route to host The remote host or network may be down. Please try the request again. Your cache administrator is webmaster. Before configuring squid it was working fine. What changes do I have to make in squid.conf? I tried acl Safe_ports 80 allow_all Safe_ports (I want to know how I can configure it again to accept HTTP requests from the LAN).
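
    For the ACL side, a minimal squid.conf sketch of the stock "local network" rules (the /24 is an assumption based on the 192.168.0.x addresses above):

        acl localnet src 192.168.0.0/24
        acl Safe_ports port 80
        http_access deny !Safe_ports
        http_access allow localnet
        http_access deny all

    That said, the quoted error is "(113) No route to host", which squid reports when it cannot reach 192.168.0.16 at all, so it may equally be a routing or firewall problem between the squid box and the Apache box rather than a squid ACL issue.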

    Read the article

  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made: Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared) But after that, it does not continue. On the client, it is also hanging. The interesting thing is that I have two NFS servers which should be identical, and one is working perfectly while the other exhibits the behaviour described above. The problem is also not completely persistent, i.e. sometimes the mount request succeeds. I assume the problem must be related to the server rather than the client, because it works perfectly against the other server. My question is where I should look for the problem. I have already re-created the exports using exportfs -r, I have restarted the NFS server, and I have compared the rpcinfo outputs of both servers - no success. The problem even survives a reboot. Any other ideas are appreciated. In answer to Tim's question: I sporadically see the following in dmesg, but do not know whether it is related: e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang: TDH <24> TDT <25> next_to_use <25> next_to_clean <24> buffer_info[next_to_clean]: time_stamp <1c3d12940> next_to_watch <24> jiffies <1c3d12940> next_to_watch.status <0> MAC Status <80383> PHY Status <792d> PHY 1000BASE-T Status <7800> PHY Extended Status <3000> PCI Status <10> Further edit: The message above does not occur on the machine that is working, so it probably is related. Another edit: The error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.
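
    If the e1000e "Detected Hardware Unit Hang" messages do turn out to be related, one frequently suggested (but by no means guaranteed) mitigation is to disable segmentation offload on that NIC while testing:

        # temporary, resets on reboot; eth4 is the interface named in the dmesg output
        ethtool -K eth4 tso off gso off

    Turning up NFS server logging with rpcdebug (for example rpcdebug -m nfsd -s all) during a hanging mount attempt may also help show where it stalls.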

    Read the article

  • Return http status ok (200) on request method OPTIONS Apache

    - by jazz
    I have an Apache server which uses a reverse proxy to connect to a Tomcat server, using a VirtualHost: RequestHeader set X-Forwarded-Proto "http" ServerName image.abc.local DocumentRoot "/var/www/html" ProxyRequests Off ProxyTimeout 600 ProxyPass /abc http://image.abc.local:9001/abc ProxyPass /xyz http://image.abc.local:9001/xyz ProxyPassReverse /abc http://image.abc.local:9001/abc ProxyPassReverse /xyz http://image.abc.local:9001/xyz What I want to achieve is that when the REQUEST_METHOD is OPTIONS, I simply want to return HTTP status OK (200). I don't want the request to be received and processed by the Tomcat server; for performance reasons I want this request handled at the Apache level. With all my research I was still unable to get this to run: RewriteEngine on RewriteCond %{REQUEST_METHOD} OPTIONS RewriteRule .* - [R=200m] Can somebody assist me with what the rewrite rule should be? Or is there an alternative to RewriteEngine? Thanks
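
    The m in [R=200m] looks like a typo; mod_rewrite treats a non-3xx code on the R flag as "stop rewriting and return this status", so the usual attempt is something like the following, placed in the VirtualHost. This is a hedged sketch worth verifying on the Apache version in use; server-context rewrite rules normally run before the ProxyPass mapping, so the early 200 should keep OPTIONS requests away from Tomcat, but that ordering is worth confirming too.

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^OPTIONS$
        RewriteRule .* - [R=200,L]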

    Read the article

  • How to troubleshoot ping request time out

    - by user28317
    I have a Windows 7 and an XP machine connected to a NETGEAR wireless router. Both machines can log into the network and surf the web. Both are connecting wirelessly. I can ping the router from each machine and get a reply, and I can ping each machine from the router and get a reply. But I cannot ping either machine from the other; I get a request time out. Subnet IP addresses are 192.168.1.*: Router = 1, Win7 = 10, XP = 11. The firewall is currently off on both systems. Since I can ping from the router, I'm guessing that's not the problem anyway. If I try to ping from XP to Win7 I get Request Timed Out. If I try to ping from Win7 to XP I get destination host unreachable. What should I do now? Thanks
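
    A couple of low-risk checks from each box, assuming the addresses above (192.168.1.10 and 192.168.1.11): confirm the firewall really is off for every profile, and look at the ARP cache right after a failed ping.

        netsh advfirewall show allprofiles    (Windows 7)
        netsh firewall show state             (Windows XP)
        arp -a                                (both machines)

    "Destination host unreachable" from the Win7 side usually means ARP resolution failed; on a wireless network that can also be the router's client/AP isolation feature blocking station-to-station traffic, so that setting is worth checking too.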

    Read the article

  • how to forward IP request to a specific port

    - by Jeremy Talus
    I have 2 servers. The first (SRV01) is running BIND and other web apps; the second (SRV02) is running 2 Minecraft servers (^^). In BIND I have 2 A records for the 2 MC servers: s1.domain.tld A SRV02IP s2.domain.tld A SRV02IP The 2 MC servers are running on 2 different ports, 25565 and 25566, so I want requests to s1.domain.tld:25565 to go to SRV02IP:25565 and requests to s2.domain.tld:25565 to go to SRV02IP:25566. I think I need to do this in the SRV02 iptables. I have looked at some topics about iptables but found nothing pertinent to my case. Could you help me? rgds.
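
    One thing to note: iptables only ever sees the destination IP and port, never the hostname, so with both A records pointing at the same address there is nothing for a rule to match on. A hedged sketch of one workaround, assuming s2.domain.tld is given its own secondary IP on SRV02 (203.0.113.2 is only an example):

        # on SRV02: connections arriving on the second IP's default port 25565
        # are redirected to the second Minecraft instance on 25566
        iptables -t nat -A PREROUTING -d 203.0.113.2 -p tcp --dport 25565 -j REDIRECT --to-ports 25566

    The alternative that avoids a second IP is an SRV record (_minecraft._tcp.s2.domain.tld) pointing at port 25566, which reasonably recent Minecraft clients resolve on their own.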

    Read the article

  • Send request body data when running siege

    - by qui
    I am trying to use the command-line utility Siege to load test a service. The service receives JSON in the request body via a POST. I have a file called example-data.json with the JSON inside. (I will eventually turn this into a tiny service which creates random JSON for testing, but this should do for now.) I have another file called hit-qa.siege containing http://www.qa-url.com POST < example-data.json and I try to run siege -c10 -d1 -r1 -f ops/perf/hammer-dev.siege When I check the logs of the service, it is not receiving anything in the request body. My googling has been fruitless; does anyone know how to accomplish this?
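
    A hedged sketch, assuming a siege build that honours the "URL POST < file" syntax in the URLs file (the path after < reportedly needs to be absolute) and that the Content-Type has to be set explicitly for JSON, either with --content-type on newer versions or a -H header:

        # hit-qa.siege
        http://www.qa-url.com POST </full/path/to/example-data.json

        siege -c10 -d1 -r1 -H "Content-Type: application/json" -f ops/perf/hammer-dev.siege

    If the body still does not arrive, capturing a single request with tcpdump or ngrep on the service side is a quick way to see exactly what siege is sending.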

    Read the article

  • Forward Request to Multiple Servers

    - by cactuarz
    We have 2 servers. One is the old server and the other is the new one. We are currently doing a migration because the old server is not capable enough to handle everyday requests. The specs are: Old server: Ubuntu 10.04, Nginx as reverse proxy, Apache, WSGI, Python/Django. New server: Ubuntu 10.04, Nginx, Gunicorn, Python/Django, Celery+Redis. Our manager asked us to research whether the old server can forward all incoming requests to multiple destinations, for example, setting Nginx on the old server to forward every request to both the old and the new server. The purpose is to test the new server using the old server as a comparison, to see if the new server is ready to take over the role. Please help: let us know if there is an idea, if we must install some engine, or if what we want to do is impossible. Many thanks.
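
    Nginx can mirror traffic natively, but only from version 1.13.4 onwards (the mirror module), which would mean upgrading the nginx on the old box; a hedged sketch with placeholder upstream names:

        location / {
            mirror /mirror;
            proxy_pass http://old-backend;                 # real response still comes from the old stack
        }

        location = /mirror {
            internal;
            proxy_pass http://new-server$request_uri;      # copy of each request; its response is discarded
        }

    On a 10.04-era nginx, an external replay tool such as GoReplay (gor) listening on port 80 and forwarding a copy of the traffic to the new server is probably the less invasive route.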

    Read the article

  • Request to server x Reply from server y

    - by klaasio
    I need some advice from you guys. I'm dealing with a custom load balancer for which we will use 2 main servers and about 8 slave servers. In short: the user sends a request to the main server, the main server receives and handles the request and sends a request to a slave server, and the slave server should send data DIRECTLY to the user. User - Main server; Main server - Slave server; Slave server - User. The reason data should be sent directly to the user and not through the main server is bandwidth and a low budget. Now I have the following ideas: IP-in-IP, but that is not possible at Layer 7 (as far as I know there are some expensive routers for that); IP spoofing, where using C/C++ we would make it look like the reply came from the main server. But I was thinking, perhaps the reply from slave server to user could just come from a different IP without causing issues with the user's firewall or anti-virus. I don't know much about "home" firewalls/routers and/or anti-virus software. I guess the user's machine wouldn't handle it well?
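
    What is described here sounds a lot like Direct Server Return (DSR), which LVS/IPVS supports without any spoofing in userland code. A hedged sketch, assuming the main server holds a virtual IP (VIP) that each slave also has configured on a non-ARPing interface (all addresses are placeholders):

        # on the main server (director)
        ipvsadm -A -t 198.51.100.10:80 -s rr
        ipvsadm -a -t 198.51.100.10:80 -r 10.0.0.21 -g    # -g = direct routing
        ipvsadm -a -t 198.51.100.10:80 -r 10.0.0.22 -g

    Because the slaves answer from the VIP the client originally contacted, replies do not appear to come from a different IP, which sidesteps the firewall and anti-virus concern entirely.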

    Read the article

  • proxy: no HTTP 0.9 request (with no host line)

    - by TestPlanManagement.com
    I'm getting a bunch of these errors in my error.log: [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be www.mydomain.com for uri / My config is essentially: ProxyRequests Off <VirtualHost 1.2.3.4:80> ServerName www.mydomain.com DocumentRoot "c:/apache/htdocs" ProxyPreserveHost On ProxyPass / http://172.1.1.1/ </VirtualHost> <VirtualHost 1.2.3.4:443> ServerName www.mydomain.com DocumentRoot "c:/apache/htdocs" # SSL Stuff ProxyPreserveHost On ProxyPass / http://172.1.1.1/ </VirtualHost> Does anyone have an idea how to eliminate those warnings?

    Read the article

  • ASP Web Service Not Working

    - by BlitzPackage
    Good people, hello. Our web service is not working. We should be receiving information via an HTTP POST, however nothing is working. Below are the code files; let me know what you think. Thanks in advance for any help or information you can provide. (By the way, some information (e.g. class names, connection strings, etc.) has been removed or changed in order to hide any sensitive information.)

        Imports System.Web.Mail
        Imports System.Data
        Imports System.Data.SqlClient
        Imports System.IO

        Partial Class hbcertification
            Inherits System.Web.UI.Page

            Public strBody As String = ""
            Public sqlInsertStr As String = ""
            Public errStr As String = ""
            Public txn_id, first_name, last_name, address_street, address_city, address_state, address_zip, address_country, address_phone, payer_email, Price, key, invoice, payment_date, mc_fee, buyer_ip As String
            Dim myConn As New SqlConnection(ConfigurationSettings.AppSettings("ConnectionInfo"))

            '*******************************************************************************************
            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                strBody += "Test email sent. Customer name: " & Request("first_name") & " " & Request("last_name")
                strBody += "Reg Key: " & Request("key") & " Transaction ID: " & Request("txn_id") & " Tran Type: " & Request("txn_type")
                updateFile(Server.MapPath("log.txt"), strBody)

                txn_id = Request("txn_id")
                first_name = Request("first_name")
                last_name = Request("last_name")
                address_street = Request("address_street")
                address_city = Request("address_city")
                address_state = Request("address_state")
                address_zip = Request("address_zip")
                address_country = Request("address_country")
                address_phone = Request("address_phone")
                payer_email = Request("payer_email")
                Price = Request("Price")
                key = Request("key")
                invoice = Request("invoice")
                payment_date = Request("payment_date")
                mc_fee = Request("mc_fee")
                buyer_ip = Request("buyer_ip")

                If Request("first_name") <> "" And Request("last_name") <> "" Then
                    SendMail("[email protected]", "[email protected]", strBody, "Software Order Notification", "[email protected]")
                Else
                    Response.Write("Email not sent. Name missing.")
                End If

                Dim sItem As String
                Response.Write("")
                If Request.Form("dosubmit") = "1" Then
                    Response.Write("FORM VALS:")
                    For Each sItem In Request.Form
                        Response.Write("" & sItem & " - [" & Request.Form(sItem) & "]")
                    Next
                    sqlInsertStr += "insert into aspnet_MorrisCustomerInfo (TransactionID,FirstName,LastName,AddressStreet,AddressCity,AddressState,AddressZip,AddressCountry,AddressPhone,PayerEmail,Price,AuthenticationCode,InvoiceID,PurchaseDate,PaypalFee,PurchaseIPAddress) values ('" & SQLSafe(txn_id) & "','" & SQLSafe(first_name) & "','" & SQLSafe(last_name) & "','" & SQLSafe(address_street) & "','" & SQLSafe(address_city) & "','" & SQLSafe(address_state) & "','" & SQLSafe(address_zip) & "','" & SQLSafe(address_country) & "','" & SQLSafe(address_phone) & "','" & SQLSafe(payer_email) & "','" & SQLSafe(Price) & "','" & SQLSafe(key) & "','" & SQLSafe(invoice) & "','" & SQLSafe(payment_date) & "','" & SQLSafe(mc_fee) & "','" & SQLSafe(buyer_ip) & "')"
                    runMyQuery(sqlInsertStr, False)
                End If
                Response.Write("sqlInsertStr is: " & sqlInsertStr)
                Response.Write("")
            End Sub

            '*******************************************************************************************
            Sub SendMail(ByVal strEmailAddress, ByVal strEmailAddress_cc, ByVal Email_Body, ByVal Email_Subject, ByVal Email_From)
                If Request.ServerVariables("server_name") <> "localhost" Then
                    Try
                        Dim resumeEmail As New MailMessage
                        resumeEmail.To = strEmailAddress
                        resumeEmail.Cc = strEmailAddress_cc
                        resumeEmail.From = Email_From
                        resumeEmail.Subject = Email_Subject
                        resumeEmail.Priority = MailPriority.High
                        'resumeEmail.BodyFormat = MailFormat.Html
                        resumeEmail.BodyFormat = MailFormat.Html
                        resumeEmail.Body = Email_Body
                        'System.Web.Mail.SmtpMail.SmtpServer = "morris.com"
                        System.Web.Mail.SmtpMail.SmtpServer = "relay-hosting.secureserver.net"
                        System.Web.Mail.SmtpMail.Send(resumeEmail)
                        Response.Write("Email sent.")
                    Catch exc As Exception
                        Response.Write("MAIL ERROR OCCURRED" & exc.ToString() & "From: " & Email_From)
                    End Try
                Else
                    Response.Write("TEST RESPONSE" & strBody & "")
                End If
            End Sub

            ' helper routines referenced above (updateFile, SQLSafe, runMyQuery) were not included in the post
        End Class

    (The post ended with the default web.config comment text: notes about the Visual Studio ASP.NET Configuration option, machine.config.comments under \Windows\Microsoft.Net\Framework\v2.x\Config, and the authentication and customErrors sections. The surrounding XML was stripped, so only those descriptive comments survive.)

    Read the article

  • MySQL SSL: bad other signature confirmation

    - by samJL
    I am trying to enable SSL connections for MySQL-- SSL will show as enabled in MySQL, but I can't make any connections due to this error: ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation I am running the following: Ubuntu Version: 14.04.1 LTS (GNU/Linux 3.13.0-34-generic x86_64) MySQL Version: 5.5.38-0ubuntu0.14.04.1 OpenSSL Version: OpenSSL 1.0.1f 6 Jan 2014 I used these commands to generate my certificates (all generated in /etc/mysql): openssl genrsa -out ca-key.pem 2048 openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem -subj "/C=US/ST=NY/O=MyCompany/CN=ca" openssl req -newkey rsa:2048 -nodes -days 3650 -keyout server-key.pem -out server-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=server" openssl rsa -in server-key.pem -out server-key.pem openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem openssl req -newkey rsa:2048 -nodes -days 3650 -keyout client-key.pem -out client-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=client" openssl rsa -in client-key.pem -out client-key.pem openssl x509 -req -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem I put the following in my.cnf: [mysqld] ssl-ca=/etc/mysql/ca-cert.pem ssl-cert=/etc/mysql/server-cert.pem ssl-key=/etc/mysql/server-key.pem When I attempt to connect specifying the client certificates-- I get the following error: mysql -uroot -ppassword --ssl-ca=/etc/mysql/ca-cert.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation If I connect without SSL, I can see that MySQL has correctly loaded the certificates: mysql -uroot -ppassword --ssl=false mysql> SHOW VARIABLES LIKE '%ssl%'; +---------------+----------------------------+ | Variable_name | Value | +---------------+----------------------------+ | have_openssl | YES | | have_ssl | YES | | ssl_ca | /etc/mysql/ca-cert.pem | | ssl_capath | | | ssl_cert | /etc/mysql/server-cert.pem | | ssl_cipher | | | ssl_key | /etc/mysql/server-key.pem | +---------------+----------------------------+ 7 rows in set (0.00 sec) My generated certificates pass OpenSSL verification and modulus: openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem server-cert.pem: OK client-cert.pem: OK What am I missing? I used this same process before on a different server and it worked- however the Ubuntu version was 12.04 LTS and the OpenSSL version was older (don't remember specifically). Has something changed with the latest OpenSSL? Any help would be appreciated!
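
    One hedged possibility: MySQL 5.5 community builds link against yaSSL rather than OpenSSL, and yaSSL is commonly reported to reject certificates signed with SHA-256, which recent OpenSSL setups often produce by default. Re-signing the server and client certificates with SHA-1 is the usual experiment (illustrative only, not a recommendation for production CAs):

        openssl x509 -req -sha1 -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
        openssl x509 -req -sha1 -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

    If that makes the error go away it confirms the signature-algorithm theory; the longer-term fix would be a MySQL build linked against OpenSSL or a version whose bundled SSL library handles SHA-256.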

    Read the article

  • another "SSH connect to host github.com port 22: Bad file number"

    - by Mariusz
    Hello. I have a problem with my first-time SSH connection. Yes, I've already followed your guides and already tried your "Dealing with firewalls and proxies" article, and the problem is still occurring. I am using Win7 32-bit, Windows Firewall is disabled, I haven't got any third-party firewalls, ESET NOD32 Antivirus is not blocking any ports, and I am not using any proxy (not even a local one). Here are the logs: Ordinary SSH connection try C:\Users\Mariusz>ssh -vvv [email protected] OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007 debug2: ssh_connect: needpriv 0 debug1: Connecting to github.com [207.97.227.239] port 22. debug1: connect to address 207.97.227.239 port 22: Not owner ssh: connect to host github.com port 22: Bad file number NCAT connection try C:\Users\Mariusz>ncat github.com 22 Strange connect error from 207.97.227.239 (10013): No error 10013 = WSAEACCES I think the method called "smart-http-support" won't be usable for me because I haven't created a repo yet. I have just run GIT INIT locally and finished at the GIT PUSH step, which returns the same: ssh: connect to host github.com port 22: Bad file number fatal: The remote end hung up unexpectedly Corkscrew method (first article from your guide): while PuTTYing (with pageant in the background), after entering my login an error occurs (MessageBox): Disconnected: No supported authentication methods available And in the terminal such a message is printed: Server refused our key I generated the key correctly, using ssh-keygen. I have not tried the method of editing ~/.ssh/config yet, because I thought that since I haven't PUSHed anything to my remote repo, I won't be able to CLONE anything. The method called ssh-forwarding is not for me, because it "requires access to an external ssh server" and I haven't got one at this time. What else could I do? Thanks in advance for any help. Mariusz.
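
    Since port 22 seems to be blocked somewhere along the way, GitHub's documented fallback of running SSH over port 443 is worth a try; a minimal ~/.ssh/config sketch:

        Host github.com
            Hostname ssh.github.com
            Port 443
            User git

    Then ssh -vvvT git@github.com should authenticate (no clone or push is needed just to test the connection), and if it does, the existing remote URLs keep working unchanged.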

    Read the article

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5 using Rails 3.1 under Passenger 3.0.9. The problem is that a request sent just after one where there's an application error returns 502 Bad Gateway. To test it I've set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message and the next one will show nginx's 502 Bad Gateway error, then it goes back to the Rails application error, and so on. While investigating this problem I've found out that load testing the application (which increases the number of application processes) makes the issue disappear. That is, until the extra processes are shut down; then it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything, and in this case each time an application error happens one instance is killed, while after load testing all instances are kept alive. P.S.: Some people on my team told me that they've seen the 502 error even when there's no application error, but I've not been able to reproduce that. Update: Just found out how to display the response status codes using ab, and most of them are 502!

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch. We have also replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, which is where everyone is getting data from. In my client's office, we have done the following: swapped out his computer (a Win7 and an XP box), swapped out the desktop switch in his office, removed the desktop switch in his office, changed out the network cable going to the wall, ran 'net config server /autodisconnect:-1' on the server, and disabled remote differential compression on his current Win7 box. When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server he is always connected to), or that the software he runs from his L drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he all of a sudden couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'. What else should I try? Maybe there is bad wiring in the wall or patch panel, or a bad port in the new switch I have in the server room?

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting that the current pending sectors is "45". I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I attempt writing data directly to the bad sectors, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector. # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152 dd: error writing ‘/dev/sdb’: Input/output error Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i : Model Family: Western Digital Caviar Green (AF) Device Model: WDC WD15EARS-00Z5B1 Serial Number: WD-WMAVU3027748 LU WWN Device Id: 5 0014ee 25998d213 Firmware Version: 80.00A80 User Capacity: 1,500,301,910,016 bytes [1.50 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: ATA8-ACS (minor revision not indicated) SATA Version is: SATA 2.6, 3.0 Gb/s Local Time is: Fri Oct 18 17:47:29 2013 CDT SMART support is: Available - device has SMART capability. SMART support is: Enabled UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leads me with several other questions: Why aren't the reallocations being recored by the disk? I'm assuming the reallocation took place as I can now write data directly to the sector and couldn't before. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
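
    For future reference, a hedged alternative to dd for poking a single suspect sector is hdparm's destructive write-sector command, which bypasses the block layer and sometimes succeeds where dd gives up with I/O errors (the sector number is the one from badblocks above):

        hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb

    As for the counters: a pending sector that turns out to be writable again is simply cleared rather than remapped, which would explain Current_Pending_Sector dropping to zero while Reallocated_Sector_Ct stays at zero.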

    Read the article

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by performing a copy of all the disk contents (preserving file attributes) from source to destination disk, then unplugging the source disk and changing the drive letter of the destination disk to match that of the source? Context: I have a two-disk Windows 8 system with a system drive and a data drive. Recently, the data drive developed a number of bad sectors leading to I/O errors. I have been sent a replacement drive, so I simply need to clone the contents of this data drive onto the replacement. The drive contents include documents and media, user folders (My Documents and related), and some programs (games etc.). Problem: the bad sectors on the source disk cause most disk cloning tools to fail with read errors. Attempted approaches include: Disk clone from a live boot environment with Acronis True Image. Fails due to read errors. Disk clone from a live boot environment with Clonezilla. Fails due to read errors. Disk clone using Roadkil's Unstoppable Copier. Fails due to hardware timeouts in the HDD (the application hangs indefinitely). A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata). This succeeds. So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to do is somehow get Windows to replace all references to the old disk with the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!
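
    Assuming the copy really did preserve everything the programs need (NTFS permissions, junctions, hidden and system files), swapping the letter is often enough; a sketch using diskpart after the old drive is unplugged (the volume number is illustrative):

        diskpart
        list volume
        select volume 3
        assign letter=D

    Programs that stored absolute paths in the registry or in their own config files will follow along as long as the letter matches; anything that referenced the old volume by GUID (mount points, some backup tools) may still need attention.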

    Read the article

  • Parse HTTP requests through Wireshark?

    - by diogobaeder
    Hi guys, is there any way to parse HTTP request data in Wireshark? For example, can I expose the request parameters of an HTTP GET request (being sent by my machine), so that I don't need to read the (sometimes) truncated URL and find them myself? I was using Tamper Data and Firebug on my Firefox to analyse these requests, but they're not as reliable as a stand-alone tool monitoring my network interface, while Wireshark keeps the data too raw as far as the HTTP flow is concerned. If you guys know any other stand-alone tool that does this (it must be Linux-compatible), please tell me. Thanks!
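
    Wireshark's HTTP dissector already splits out the method, host, URI and individual headers (and "Follow TCP Stream" reassembles the whole request), but for a quick command-line view tshark can print just the fields of interest; a hedged sketch assuming a tshark recent enough for the -Y display filter (older builds use -R):

        tshark -i eth0 -Y 'http.request' -T fields -e http.request.method -e http.host -e http.request.uri

    Query-string parameters show up as part of http.request.uri; for POSTed form parameters, following the stream in the GUI (or the http.file_data field) is the place to look.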

    Read the article

  • How to dynamically set HTTP Header in Apache 2.2?

    - by Michael
    Seems like this should be easy, but I cannot figure out the syntax. In Apache, I want to use the value of an existing request header to set a new request header. Some simple non-working code that illustrates what I'd like to do: RequestHeader set X-Custom-Host-Header "%{HTTP_HOST}e" Ideally, this would make a new HTTP header in the request called "X-Custom-Host-Header" that contains the value of the existing Host header. But it does not. Perhaps I need to copy the existing header into an environment variable first? (If so, I can't figure out how to do that either.) I feel like I'm missing something obvious, but I've gone over the Apache docs and I can't figure it out. Thanks for any help.
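
    In Apache 2.2 the RequestHeader value only interpolates environment variables (%{VAR}e) and SSL variables (%{VAR}s), not other request headers, so the usual workaround is exactly the environment-variable copy hinted at above; a hedged sketch (THE_REQUEST_HOST is an arbitrary variable name, not an Apache builtin):

        # copy the incoming Host header into an env var, then into the new request header
        SetEnvIf Host "^(.*)$" THE_REQUEST_HOST=$1
        RequestHeader set X-Custom-Host-Header "%{THE_REQUEST_HOST}e"

    SetEnvIf runs while the request headers are read, well before mod_headers evaluates a default (non-early) RequestHeader, so the ordering works out.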

    Read the article
