Search Results

Search found 25198 results on 1008 pages for 'failed request tracing'.

Page 367/1008

  • Can't change parameters of rpcss

    - by alexm
    dcdiag reports the following problem on client computers:

        Starting test: Services
            Invalid service type: RpcSs on ********, current value
            WIN32_OWN_PROCESS, expected value WIN32_SHARE_PROCESS
        ......................... ******** failed test Services

    I tried to change the service type with the command sc config rpcss type=share, and from regedit by putting type=20(dec) instead of type=10(hex) - no luck...

    Read the article

  • Why is the System process listening on Port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system, and today when I installed a new application and tried to load it using http://localhost/MyApplication I got absolutely no errors and no page load - just a pretty, white blank page.

    I did some digging and found something about another process listening on port 80, so I did a scan using netstat -aon | findstr 0.0:80 and discovered that PID 4 was listening on that port. PID 4 does not show in Task Manager, so I fired up Process Explorer and it showed me that PID 4 is the System process. (Multiple Google searches seem to indicate that System always uses PID 4.) Since then I am basically stuck. I have no idea why System needs port 80 or what to do about it.

    If you google the following strings you will find two helpful Experts-Exchange articles at the top of the search results and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay... but when you click on the results from a Google search you can scroll all of the way to the bottom to read the exchanges.) Here are the Google searches:

        "System Process is listening on port 80 (Vista)"
        "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running"

    The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx The trace showed nothing useful. Any thoughts?
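
    A hedged way to see what has claimed port 80 inside the kernel-mode HTTP listener (http.sys), assuming an elevated command prompt; services such as SQL Server Reporting Services or the Web Deployment Agent commonly register URLs there through PID 4:

        netsh http show servicestate
        netsh http show urlacl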

    Read the article

  • Windows XP SP3 - Library not registered error... IE8 not installing.

    - by Wesley
    Specs to put things in context: AMD Athlon XP 2400+ @ 1987 MHz / 2 x 512MB PC3200 DDR RAM / WD 160GB IDE HDD / 3DFuzion 128MB GeForce 6200 AGP 4x / FIC AM37 / Windows XP SP3

    Just recently, I was unable to start Windows Media Player. I clicked it and the busy cursor came up, but then nothing happened. Also, I tried doing a search for a file, and the same thing: it would show the busy cursor then suddenly stop doing anything. I couldn't find it in the Processes of the Task Manager. (Perhaps I don't know what I'm looking for?) Also, I was trying to update my DirectX, which has been running something older than DX9 9.0c for a while now, except the installation fails due to an "internal error". I think the failed DirectX installation has been like that for a while... (I remember trying to install DX9 9.0c a while back, but it still failed.) The Windows programs not starting, I don't think I've ever had before... what could be the cause of these problems?! Thanks in advance. =)

    EDIT1: Weird thing now is that when I try to open User Accounts, I get a message saying "Wrong number of arguments or invalid property assignment." Also, when I'm trying to open services.msc, I'm getting a script error that says "Library not registered." (Code: 0, URL: res://C:\WINDOWS\System32\mmcndmgr.dll/views.htm) Perhaps this is related to my other question, where I seemed to have an unregistered library of some sort.

    EDIT2: The DirectX End-User Runtime Web Installer freakishly worked and updated successfully. Now, I have focussed the question on the bigger problem. For WMP11, H&S, and Search, I click it once, get a busy icon for a second, and then nothing happens. Refer to EDIT1 for other problems.

    EDIT3: Seems that my problems may be related to some Internet Explorer script errors. So what I did was download the IE8 installer from the Microsoft website, but when I run it and get to the main portion of the installer, it just keeps looping on the Downloading step of the installation. The installer is still running, but I left it for at least 4 hours and the downloading step was still not finished. What is the problem? Also, I uninstalled Ubuntu 9.10 after these problems, but they still remain.

    EDIT4: I'm getting an active desktop recovery background on startup now. And within seconds my computer hangs again. EDIT3 explains the main issue though.

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and doing some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I'm getting, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php Are there some critical server configuration settings that need to be done to achieve those numbers? I've watched memory in top and I still see a decent amount of free memory while running ab, and watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, is:

        ab -k -n 100 -c 10 postrockandbeyond.com/
        This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
        Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 32 requests completed

    Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results.

    EDIT: I eventually solved this issue. I was using TTAPI, which was connecting to turntable.fm through websockets. On the homepage, I was connecting on every request. So what was happening was that after a certain number of connections, everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.

    Read the article

  • SSL, Nginx, X509_check_private_key:key values mismatch error

    - by Dmitry T
    I have an nginx server with SSL. I'm trying to set up an SSL certificate. After I restart nginx I'm getting this error:

        SSL_CTX_use_PrivateKey_file(".../example.com.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)

    I found a possible solution here: http://nginx.org/en/docs/http/configuring_https_servers.html#chains but I'm not using any bundles. Any ideas on what the issue is? Thank you in advance, Dmitry

    Read the article

  • Can't clone gitosis-admin.git locally from my Snow Leopard server running gitosis

    - by joggink
    I've installed gitosis as described here: http://gist.github.com/264304

    One of the things that I had to adjust was to give my git user permission to use ssh (which I did through Server Admin - Access - SSH, adding the 'git' user). When I ssh from my local machine (Mac OS X) as the git user, I get this response:

        PTY allocation request failed on channel 0
        bash: gitosis-serve: command not found
        Connection to 10.0.0.108 closed.

    Which I think is normal, because in Pro Git the author says you should get something like this if you try ssh'ing to the server using your git user:

        PTY allocation request failed on channel 0
        fatal: unrecognized command 'gitosis-serve schacon@quaternion'
        Connection to gitserver closed.

    So far so good, I think? Now when I try to clone my gitosis-admin.git repository using this command:

        $ git clone [email protected]:gitosis-admin.git

    I get this:

        Initialized empty Git repository in /Users/joggink/gitosis-admin/.git/
        bash: gitosis-serve: command not found
        fatal: The remote end hung up unexpectedly

    So after doing some searching, I found an answer here on Server Fault claiming I should use ssh:// as the protocol for my git clone (which I thought was the default git protocol?). However, when I try:

        $ git clone ssh://[email protected]:gitosis-admin.git

    this is the response:

        The authenticity of host ' (::1)' can't be established.
        RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '' (RSA) to the list of known hosts.
        Password:
        Password:
        Password:
        Permission denied (publickey,keyboard-interactive).
        fatal: The remote end hung up unexpectedly

    When I type in my own password (because my git user has no password), I get the following error:

        The authenticity of host ' (::1)' can't be established.
        RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '' (RSA) to the list of known hosts.
        Password:
        bash: git-upload-pack: command not found
        fatal: The remote end hung up unexpectedly

    I added my upload-pack location as follows:

        $ git clone -u /usr/local/git/bin/git-upload-pack ssh://[email protected]:gitosis-admin.git

    and I get the error that gitosis-admin.git isn't a git repo:

        Initialized empty Git repository in /Users/joggink/gitosis-admin/.git/
        The authenticity of host ' (::1)' can't be established.
        RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '' (RSA) to the list of known hosts.
        Password:
        fatal: '[email protected]:gitosis-admin.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    I've been searching for a solution for almost a week now, and every topic I've found on the internet gives no result...
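
    One hedged thing to double-check about the URL forms themselves (gitosis' own path handling aside): git's scp-like syntax and its ssh:// syntax treat the colon differently, so mixing them changes what git asks the server for. A sketch of the two forms:

        # scp-like syntax: the part after the colon is a path relative to the login directory
        git clone [email protected]:gitosis-admin.git

        # ssh:// URL syntax: the path needs a leading slash; a colon here is read as a port number
        git clone ssh://[email protected]/gitosis-admin.git

    The original "bash: gitosis-serve: command not found" also suggests gitosis-serve is not on the PATH that sshd gives non-interactive sessions, which is a separate thing worth verifying on the server.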

    Read the article

  • Why does traceroute take much longer than ping?

    - by PHP
    How to explain this?

        C:\Documents and Settings\Administrator>tracert google.com

        Tracing route to google.com [64.233.189.104] over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  192.168.0.1
          2     7 ms    <1 ms    <1 ms  reserve.cableplus.com.cn [218.242.223.209]
          3   108 ms   135 ms   163 ms  211.154.70.10
          4     *        *        *     Request timed out.
          5     2 ms     *        1 ms  211.154.64.114
          6     1 ms     1 ms     1 ms  211.154.72.185
          7     1 ms     1 ms     1 ms  202.96.222.77
          8     2 ms     1 ms     2 ms  61.152.81.145
          9     1 ms     2 ms     1 ms  61.152.86.54
         10     1 ms     1 ms     1 ms  202.97.33.238
         11     2 ms     2 ms     2 ms  202.97.33.54
         12     2 ms     1 ms     2 ms  202.97.33.5
         13    33 ms    33 ms    33 ms  202.97.61.50
         14    34 ms    34 ms    34 ms  202.97.62.214
         15    34 ms   186 ms    37 ms  209.85.241.56
         16    35 ms    35 ms    44 ms  66.249.94.34
         17    34 ms    34 ms    34 ms  hkg01s01-in-f104.1e100.net [64.233.189.104]

        Trace complete.

    So the average time should be 1+7+108+2+1+1+2+1+1+2+2+33+34+34+35+34+34+35+34, which is a lot bigger than ping:

        C:\Documents and Settings\Administrator>ping google.com

        Pinging google.com [64.233.189.104] with 32 bytes of data:

        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241
        Reply from 64.233.189.104: bytes=32 time=34ms TTL=241

        Ping statistics for 64.233.189.104:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 34ms, Maximum = 34ms, Average = 34ms

    Read the article

  • Problems with Apache SVN server (403 Forbidden)

    - by mrlanrat
    I've recently set up an SVN server on my Apache web server. I installed USVN http://www.usvn.fr/ to help manage the repositories from a web interface. When I create a repository and try to import code into it from NetBeans I get the following error:

        org.tigris.subversion.javahl.ClientException: RA layer request failed
        Server sent unexpected return value (403 Forbidden) in response to PROPFIND request for '/svn/python1'

    I know I have the username and password correct (and I have tried different users). I have done some research and it seems that it is most likely an Apache SVN error. Below is the config file for this virtual host:

        <VirtualHost *:80>
            ServerName svn.domain.com
            ServerAlias www.svn.domain.com
            ServerAlias admin.svn.domain.com
            DocumentRoot /home/mrlanrat/domains/svn.domain.com/usvn/public
            ErrorLog /var/log/virtualmin/svn.domain.com_error_log
            CustomLog /var/log/virtualmin/svn.domain.com_access_log combined
            DirectoryIndex index.html index.htm index.php index.php4 index.php5
            <Directory "/home/mrlanrat/domains/svn.domain.com/usvn">
                Options +SymLinksIfOwnerMatch
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
            <Location /svn/>
                ErrorDocument 404 default
                DAV svn
                Require valid-user
                SVNParentPath /home/mrlanrat/domains/svn.domain.com/usvn/files/svn
                SVNListParentPath on
                AuthType Basic
                AuthName "USVN"
                AuthUserFile /home/mrlanrat/domains/svn.domain.com/usvn/files/htpasswd
                AuthzSVNAccessFile /home/mrlanrat/domains/svn.domain.com/usvn/files/authz
            </Location>
        </VirtualHost>

    Can anyone point out what I may have done wrong and how to fix it? I have tested changing file permissions and changing the configuration with no luck. Thanks in advance!

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an ssh tunnel from my office machine to my home machine, and get an error when I try to use it. What I'm doing is starting one shell like so:

        ssh -gL 12345:my.home.domain:22 my.home.domain

    This is giving me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so:

        ssh -p 12345 127.0.0.1

    This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error:

        ssh_exchange_identification: Connection closed by remote host

    Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it:

        channel 3: open failed: connect failed: Connection timed out

    At which point, I'm at a loss. If any more info is needed, I'll be happy to post it.

    Further to that: after fiddling around further, I've found that I'm getting a different response from the server (my home machine, that is) when I try to telnet in on the various ports. If I try:

        telnet my.home.domain 22

    I get this back:

        Trying <my ip address>...
        Connected to <my domain>.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2

    Which is what I would expect. After setting up the tunnel, though, and then telnetting to that, I see this response:

        Trying 127.0.0.1...
        Connected to 127.0.0.1.
        Escape character is '^]'.

    And further still, as per kbulgrien's suggestion, here is the output from the client machine with the -v option:

        ssh -vp 24600 127.0.0.1
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600.
        debug1: Connection established.
        debug1: identity file /home/jacob/.ssh/id_rsa type -1
        debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1
        debug1: identity file /home/jacob/.ssh/id_dsa type -1
        debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1
        debug1: identity file /home/jacob/.ssh/id_ecdsa type -1
        debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host
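
    A hedged variation worth trying: the destination in -L is resolved and connected from the home machine's side, so if my.home.domain resolves to the public IP there and the new router does not loop connections back on itself (which would match the "connect failed: Connection timed out" on channel 3), pointing the tunnel at localhost avoids that round trip entirely. A sketch:

        # forward office port 12345 to the home machine's own sshd
        ssh -gL 12345:localhost:22 my.home.domain

        # then, from the office machine
        ssh -p 12345 127.0.0.1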

    Read the article

  • WebDav rename fails on an Apache mod_dav install behind NginX

    - by The Daemons Advocate
    I'm trying to solve a problem with renaming files over WebDAV. Our stack consists of a single machine, serving content through Nginx, Varnish and Apache. When you try to rename a file, the operation fails with the stack that we're currently using. To connect to WebDAV, a client program must:

        1. Connect over https://host:443 to NginX
        2. NginX unwraps and forwards the request to a Varnish server on http://localhost:81
        3. Varnish forwards the request to Apache on http://localhost:82, which offers a session via mod_dav

    Here's an example of a failed rename:

        $ cadaver https://webdav.domain/
        Authentication required for Webdav on server `webdav.domain':
        Username: user
        Password:
        dav:/> cd sandbox
        dav:/sandbox/> mkdir test
        Creating `test': succeeded.
        dav:/sandbox/> ls
        Listing collection `/sandbox/': succeeded.
        Coll:   test    0  Mar 12 16:00
        dav:/sandbox/> move test newtest
        Moving `/sandbox/test' to `/sandbox/newtest': redirect to http://webdav.domain/sandbox/test/
        dav:/sandbox/> ls
        Listing collection `/sandbox/': succeeded.
        Coll:   test    0  Mar 12 16:00

    For more feedback, the WebDrive Windows client logged an error 502 (Bad Gateway) and 303 (?) on the rename operation. The extended logs gave this information:

        Destination URI refers to different scheme or port (https://hostname:443) (want: http://hostname:82).

    Some other restrictions: investigations into NginX's WebDAV modules show that they don't really fit our needs, and forwarding WebDAV traffic to Apache isn't an option because we don't want to enable Apache SSL. Are there any ways to trick mod_dav to forward to another host? I'm open to ideas :).
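
    One hedged angle on the rename itself: MOVE sends the target in a Destination: header, and mod_dav compares that header's scheme and port against the connection it actually received (http://localhost:82 behind the proxies), which is exactly what the "refers to different scheme or port" message describes. Assuming mod_headers is available in the Apache layer, rewriting the header before mod_dav sees it is a common workaround; a sketch (the port part may need the same treatment):

        # rewrite the client's https Destination into the scheme Apache actually serves
        RequestHeader edit Destination ^https: http: early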

    Read the article

  • nginx proxypass content 404s when adding caching location block

    - by Thermionix
    Below is my nginx conf - the location block for adding expires max to content is causing issues with content from the /internal proxied sites.

    nginx error log:

        2011/11/22 15:51:23 [error] 22124#0: *2 open() "/var/www/internal/static/javascripts/lib.js" failed (2: No such file or directory), client: 127.0.0.1, server: example.com, request: "GET /internal/static/javascripts/lib.js?0.6.11RC1 HTTP/1.1", host: "example.com", referrer: "https://example.com/internal/"

    browser error:

        lib.js Failed to load resource: the server responded with a status of 404 (Not Found)

    Commenting out the expires max location block allows the proxied sites to work as intended. Config files:

    proxy.conf

        location /internal {
            proxy_pass http://localhost:10001/internal/;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
        }
        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            expires max;
        }
        # hide protected files
        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }
        location ~ \.php$ {
            fastcgi_index index.php;
            include fastcgi_params;
            if (-f $request_filename) {
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
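
    One assumption worth checking: nginx gives regular-expression locations priority over plain prefix locations, so the \.(jpg|...|js)$ block can capture /internal/static/... and serve it from the filesystem root instead of proxying, which matches the open() error above. A hedged sketch using the ^~ modifier so the proxy prefix wins and regex locations are skipped for it:

        location ^~ /internal {
            proxy_pass http://localhost:10001/internal/;
            include proxy.inc;
        }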

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (Quad Core CPU, 4 GB RAM, Win7 Home Premium 64-bit) but I have a problem burning .dvd images to Double Layer (8.5 GB) DVDs. I have wasted too many DVD+R DL discs, to no avail. Here is a short explanation of what I did:

    I'm using ImgBurn v2.5.0.0 (latest version). I'm trying to burn an image file (.dvd) which sits together with the related .iso file in the same folder. In ImgBurn, I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% of the process it gives an I/O Write Error. I wasted 3 discs (Magic, Made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing.

    My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00 but had no success in burning again. Then my cousin brought his LG (an older model) which, he claims, was successful in writing DL discs with the same brand (Magic). I unplugged my LG, plugged the older one in, and tried to burn the image again. It also gave an I/O Error, without even getting to 7%. I tried another burning program (CloneCD), but failed again. Then I bought other brands (TDK and VERBATIM) and tried to burn the image. The burning process started successfully, but failed again at around 14% (for Verbatim) and 25% (for TDK).

    I've burned lots of 4.7 GB DVD+Rs and DVD-Rs successfully with this LG DVD writer, without even a single error, but this case is very bothering for me. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help.

    Edit: My burner works on my cousin's machine. So the problem must be related to my system. What could be the reason?

    Latest news: I borrowed an external USB DVD writer from a friend, which is a PHILIPS SPD3000CC (an old model). Guess what! It's burning DVD+R DLs successfully! How come an internal DVD writer of a brand new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with not IDE, but SATA connection...

    Read the article

  • Ubuntu 9.10 CUPS server cupsd.conf

    - by aaron
    I have a CUPS server running on Ubuntu 9.10 on my home network. Right now I can access it at 192.168.1.101:631, but when I try to access it at myservername.local:631 I get a 400 Bad Request. Here's the relevant section from my current cupsd.conf:

        ServerName 192.168.1.101

        # Only listen for connections from the local machine.
        Listen localhost:631
        Listen /var/run/cups/cups.sock
        # any of the below 'Listen' directives all yield the same result
        Listen 192.168.1.101:631
        #Listen *:631
        #Listen myservername.local:631

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        BrowseAddress 192.168.1.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        <Location />
          Order deny,allow
          Deny from All
          Allow from 127.0.0.1
          Allow from 192.168.1.*
        </Location>

        # Restrict access to the admin pages...
        <Location /admin>
          Order deny,allow
          Deny from All
          #Allow from 127.0.0.1
          #Allow from 192.168.1.*
        </Location>

        # Restrict access to configuration files...
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
          Deny from All
          #Allow from 127.0.0.1
          #Allow from 192.168.1.*
        </Location>

    I get the following in /var/log/cups/error_log:

        E [03/Jan/2010:18:33:41 -0600] Request from "192.168.1.100" using invalid Host: field "myservername.local:631"

    What do I need to do to be able to access the CUPS server at both 192.168.1.101:631 and myservername.local:631?
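
    A hedged addition to try in cupsd.conf: newer CUPS releases (1.4, as shipped with Ubuntu 9.10) reject requests whose Host: header matches neither the ServerName nor a listed alias, which is what the "invalid Host: field" line in error_log reports. Declaring the extra name should let both forms through:

        ServerAlias myservername.local
        # or, less restrictively
        ServerAlias *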

    Read the article

  • Disabling DNS Registration on Server 2008 R2

    - by WaldenL
    I want to tell a Server 2008 R2 machine NOT to register its IP addresses in DNS. I go into the Advanced tab on IPv4 and turn off "Register this connection's addresses in DNS" - simple! But... the addresses are updated in DNS anyway! And actually the A record is eventually removed from the DNS server.

    I've confirmed that the checkbox is off by looking at it myself, and by checking the RegistrationEnabled registry value for that adapter. Both confirm that the registration is off. I've turned on DNS debug logging on the DNS server and I can see DNS Update requests coming from the server in question! This should not happen. What's even odder is that eventually (several hours) the A record for the server (which I added by hand!) is removed from the DNS server. I've also confirmed that scavenging is off on both DNS servers in the domain. Ideas?

    Edits: Per the comment: the server has static IP addresses. However, it's got two of them on one adapter. Since I'm in a VM (Hyper-V) environment I just spun up a second adapter and moved the second IP to the second adapter. I set the first adapter to auto-register (since that's the IP I want anyway) and the second adapter to NOT auto-register. We'll see if this is any better.

    Not any better. On a reboot of the server the registration was removed from DNS. It seems both cards are still contacting the server. Based on the DNS log, the card that shouldn't register in DNS is sending a 'delete' request, and then the card that should register is sending an add request, but that's ignored. I'm totally confused at this point.
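
    A further hedged knob, assuming the adapter's DNS servers are set statically: netsh can set the DNS server and the registration behaviour together, and register=none is meant to suppress dynamic registration for that connection (the connection name below is a placeholder):

        netsh interface ip set dns name="Local Area Connection 2" source=static addr=192.168.1.7 register=none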

    Read the article

  • Unable to start WAMP server

    - by jayesh
    I have a problem starting the WAMP server. The error log is:

        [Sat Nov 12 22:01:00 2011] [notice] Apache/2.2.17 (Win32) PHP/5.3.5 configured -- resuming normal operations
        [Sat Nov 12 22:01:00 2011] [notice] Server built: Oct 18 2010 01:58:12
        [Sat Nov 12 22:01:00 2011] [notice] Parent: Created child process 3228
        [Sat Nov 12 22:01:01 2011] [notice] Child 3228: Child process is running
        [Sat Nov 12 22:01:01 2011] [crit] (OS 10022)An invalid argument was supplied. : Child 3228: setup_inherited_listeners(), WSASocket failed to open the inherited socket.
        [Sat Nov 12 22:01:01 2011] [crit] Parent: child process exited with status 3 -- Aborting.
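
    The setup_inherited_listeners / WSASocket failure is often reported alongside a broken Winsock layered service provider (typically leftovers from security or VPN software) rather than an Apache configuration problem. A hedged first step, assuming an elevated command prompt, is to reset the Winsock catalog and reboot:

        netsh winsock reset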

    Read the article

  • Why can I not get a WDS-originated PXE boot to progress past the first file download?

    - by Jeff Shattock
    I'm trying to work out an automated Windows install process, and thought I'd give WDS a look. After some promising initial progress, I seem to have hit a wall. I imported the boot and install WIMs, and created the capture WIM successfully. However, whenever I try to PXE boot the reference machine against the WDS server, it kinda craps out. It finds the server and downloads WDSNBP.COM successfully, and then gives the message "TFTP download failed." According to Wireshark, the only communication between the WDS box and the client box is the successful TFTP request and download of boot\x86\WDSNBP.COM. No further requests are sent. The WDS log on the server shows the same thing: one successful download and no more activity.

    I've tried every combination of the following, with exactly zero change in behaviour:

        Win Server 2008R2 vs 2012 vs 2012R2
        WDS virtualized on KVM, ESXi, VirtualBox, VMWare Workstation
        Client virtualized on KVM, ESXi, VirtualBox, VMWare Workstation
        Every network adaptor type offered by the virtualization platforms
        "Actual" network vs isolated, virtual network
        MS DHCP server vs Linux isc-dhcp-server
        Joined to a domain vs stand-alone

    I tried changing the boot filename in DHCP to pxeboot.com instead, and it has no problem downloading that file, but it then complains about Boot\BCD being corrupted. Also, with 2012, it doesn't appear that WDSNBP.com does the architecture detection, or at least doesn't report that it did. 2008 reports that it found x64, and then errors. I find myself out of things to check, and I don't see anything immediately wrong. Where do I go from here? The WDS server is at 192.168.1.50, DHCP/DNS at 192.168.1.7.

    Console of the client computer after the boot:

        MAC: 52:54:00:28:94:0E UUID: blah blah
        Searching for server (DHCP).....
        Me: 192.168.1.155, DHCP: 192.168.1.7, Gateway 192.168.1.1
        Loading 192.168.1.50:boot\x86\wdsnbp.com ...(PXE).................done
        Downloaded WDSNCP...
        TFPT download failed

    Interesting parts of /etc/dhcp/dhcpd.conf on the Linux DHCP server:

        allow booting;
        allow bootp;
        option option-60 code 60 = string;
        option option-66 code 66 = string;
        option option-67 code 67 = string;
        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.110 192.168.1.253;
            next-server 192.168.1.50;
            option tftp-server-name "192.168.1.50";
            option option-60 "PXEClient";
            filename "boot\\x86\\wdsnbp.com";
            option bootfile-name "boot\\x86\\wdsnbp.com";
        }

    Read the article

  • Key-Based SSH Permission denied (publickey) Ubuntu 12-04

    - by user125176
    I have configured sshd to accept key-based ssh logins with LogLevel set to DEBUG, and uploaded my public key to ~/.ssh/authorized_keys, where permissions are set as:

        700 ~/.ssh
        600 ~/.ssh/authorized_keys

    From root, I can su - USERNAME. From the client I get Permission denied (publickey). On the server side, here is where it tells me that it "Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied":

        Client protocol version 2.0; client software version OpenSSH_5.2
        match: OpenSSH_5.2 pat OpenSSH*
        Enabling compatibility mode for protocol 2.0
        Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        permanently_set_uid: 105/65534 [preauth]
        list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 [preauth]
        SSH2_MSG_KEXINIT sent [preauth]
        SSH2_MSG_KEXINIT received [preauth]
        kex: client->server aes128-ctr hmac-md5 none [preauth]
        kex: server->client aes128-ctr hmac-md5 none [preauth]
        SSH2_MSG_KEX_DH_GEX_REQUEST received [preauth]
        SSH2_MSG_KEX_DH_GEX_GROUP sent [preauth]
        expecting SSH2_MSG_KEX_DH_GEX_INIT [preauth]
        SSH2_MSG_KEX_DH_GEX_REPLY sent [preauth]
        SSH2_MSG_NEWKEYS sent [preauth]
        expecting SSH2_MSG_NEWKEYS [preauth]
        SSH2_MSG_NEWKEYS received [preauth]
        KEX done [preauth]
        userauth-request for user USERNAME service ssh-connection method none [preauth]
        attempt 0 failures 0 [preauth]
        PAM: initializing for "USERNAME"
        PAM: setting PAM_RHOST to "USERHOSTNAME"
        PAM: setting PAM_TTY to "ssh"
        userauth_send_banner: sent [preauth]
        userauth-request for user USERNAME service ssh-connection method publickey [preauth]
        attempt 1 failures 0 [preauth]
        test whether pkalg/pkblob are acceptable [preauth]
        Checking blacklist file /usr/share/ssh/blacklist.RSA-4096
        Checking blacklist file /etc/ssh/blacklist.RSA-4096
        temporarily_use_uid: 1001/1002 (e=0/0)
        trying public key file /home/USERNAME/.ssh/authorized_keys
        Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys': Permission denied
        restore_uid: 0/0
        temporarily_use_uid: 1001/1002 (e=0/0)
        trying public key file /home/USERNAME/.ssh/authorized_keys2
        Could not open authorized keys '/home/USERNAME/.ssh/authorized_keys2': Permission denied
        restore_uid: 0/0
        Failed publickey for USERNAME from IPADDRESS port 57523 ssh2
        Connection closed by IPADDRESS [preauth]
        do_cleanup [preauth]
        monitor_read_log: child log fd closed
        do_cleanup
        PAM: cleanup
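
    Since sshd reads authorized_keys as the target user after dropping privileges (the temporarily_use_uid: 1001/1002 lines above), a Permission denied at that exact point usually comes from file ownership, from the directories above the file, or from an encrypted home that is not mounted before login. A hedged checklist, assuming standard locations:

        chmod go-w /home/USERNAME                      # home must not be group/world-writable
        chmod 700 /home/USERNAME/.ssh
        chmod 600 /home/USERNAME/.ssh/authorized_keys
        chown -R USERNAME:USERNAME /home/USERNAME/.ssh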

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy-pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks:

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

    I end up seeing the following error in the provisioner's error log:

        [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri /

    as well as the following entry in the destination QA machine's access log:

        192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
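
    A hedged reading of those logs: the \x16\x03\x01 bytes are the start of a TLS ClientHello, which suggests the :443 vhosts on the provisioner are accepting raw TLS as if it were plain HTTP; SSLProxyEngine only covers the proxy-to-backend leg, not termination of the client's SSL. A sketch of what the front side might additionally need (certificate paths are placeholders):

        <VirtualHost 192.168.168.101:443>
            ServerName dev1.site.com
            SSLEngine On
            SSLCertificateFile    /etc/httpd/ssl/dev1.site.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl/dev1.site.com.key
            SSLProxyEngine On
            ProxyPreserveHost On
            ProxyPass / https://192.168.168.111/
        </VirtualHost>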

    Read the article

  • Resetting TCP/IP Settings on Windows 8

    - by cpx
    Apparently, there's a command:

        netsh int ip reset

    which is known to reset TCP/IP settings for Windows XP/Vista/7, and I'm not sure whether it works correctly on Windows 8. When I typed the command in a command prompt (Admin) I got two errors:

        Resetting , failed.
        Access is denied.

    At the end it said "Resetting , OK!". So, how do I confirm whether it worked or not?

    Read the article

  • Postfix cannot get my domain name?

    - by Kossel
    Hi, I'm trying to set up webmin + postfix + dovecot + roundcube. For the moment I want things to be as simple as possible, so I'm using Linux users as email accounts. I can send/receive within the same domain, i.e. [email protected] can send/receive to/from [email protected]. I tested SMTP/IMAP with Outlook and it reports no problem. If I send a mail from Gmail it is rejected with this error:

        Technical details of temporary failure: The recipient server did not accept our requests to connect.

    When I log in with roundcube, the email address displayed in the right corner is something like user1@com and I get this error message from the logs:

        [11-Nov-2012 07:39:03 +0400]: IMAP Error: Login failed for user1 from 187.150.xx.xx. Could not connect to com:143: php_network_getaddresses: getaddrinfo failed: Name or service not known in /var/www/webmail/program/include/rcube_imap.php on line 191 (POST /webmail/?_task=login&_action=login)

    It says "Could not connect to com:143"; it looks like it cannot read the domain name. I used http://mxtoolbox.com/ to check the MX record and it says it can find the server mail.mydomain.com. I'm quite sure the problem is in postfix or my server configs, but I have been looking through every config file and cannot find the answer. Any suggestion is appreciated. Here are some of my configs (I don't want to make this question too long; I can provide any other information needed to solve this):

    postfix main.cf

        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        mydomain = mydomain.com
        myhostname = mail.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = $mydomain, $myhostname
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        virtual_alias_domains = mydomain.com
        smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated
        myorigin = $mydomain

    roundcube conf

        // ----------------------------------
        // IMAP
        // ----------------------------------
        $rcmail_config['default_host'] = '%d';
        $rcmail_config['default_port'] = 143;
        $rcmail_config['imap_auth_type'] = null;
        $rcmail_config['imap_delimiter'] = null;
        $rcmail_config['imap_ns_personal'] = null;
        $rcmail_config['imap_ns_other'] = null;
        $rcmail_config['imap_ns_shared'] = null;
        $rcmail_config['imap_force_caps'] = false;
        $rcmail_config['imap_force_lsub'] = false;
        $rcmail_config['imap_force_ns'] = false;
        $rcmail_config['imap_timeout'] = 0;
        $rcmail_config['imap_auth_cid'] = null;
        $rcmail_config['imap_auth_pw'] = null;
        $rcmail_config['imap_cache'] = null;
        $rcmail_config['messages_cache'] = false;
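
    One hedged thing to check in the Roundcube settings shown above: default_host is '%d', which Roundcube expands from the HTTP request's hostname rather than treating it as a fixed IMAP server, and an expansion that collapses to just "com" would produce exactly the "Could not connect to com:143" failure in the log. A minimal sketch pinning the IMAP host instead:

        // connect to the local Dovecot IMAP server directly
        $rcmail_config['default_host'] = 'localhost';
        $rcmail_config['default_port'] = 143;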

    Read the article

  • Downloading a file from the internet with '&' in URL using wget

    - by matt_tm
    Hi, I'm trying to download a file from a URL that looks like this: http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf

    Within the browser, this link prompts me to download a file called x.pdf irrespective of what DEF is (but 'x.pdf' is the right content). However, using wget I get the following:

        >wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
        --2011-01-06 07:52:05--  http://pdf.example.com/filehandle.ashx?p1=ABC
        Resolving pdf.example.com... 99.99.99.99
        Connecting to pdf.example.com|99.99.99.99|:80... connected.
        HTTP request sent, awaiting response... 500 Internal Server Error
        2011-01-06 07:52:08 ERROR 500: Internal Server Error.

        'p2' is not recognized as an internal or external command,
        operable program or batch file.

    This is on a Windows Vista system.

    Edit1:

        >wget.exe "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
        --2011-02-06 10:18:31--  http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
        Resolving pdf.example.com... 99.99.99.99
        Connecting to pdf.example.com|99.99.99.99|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 4568 (4.5K) [image/JPEG]
        Saving to: `filehandle.ashx@p1=ABC&p2=DEF.pdf'

        100%[======================================>] 4,568       --.-K/s   in 0.1s

        2011-02-06 10:18:33 (30.0 KB/s) - `filehandle.ashx@p1=ABC&p2=DEF.pdf' saved [4568/4568]

    Read the article

  • Can't ssh tunnel to access a remote mysql server

    - by hobbes3
    I can't seem to figure out why I can't use ssh tunnel to connect to my remote MySQL server. I do ssh tunnel with [hobbes3@hobbes3] ~ $ ssh linode -L 3307:localhost:3306 Then on another terminal, I try [hobbes3@hobbes3] ~ $ mysql -h localhost -P 3307 -u root --protocol=tcp -p Enter password: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2 On the server, it shows this: root@li534-120 ~ # channel 4: open failed: connect failed: Connection refused Here is my my.cnf on the server: [mysqld] # Settings user and group are ignored when systemd is used (fedora >= 15). # If you need to run mysqld under different user or group, # customize your systemd unit file for mysqld according to the # instructions in http://fedoraproject.org/wiki/Systemd user=mysql datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 # Semisynchronous Replication # http://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html # uncomment next line on MASTER ;plugin-load=rpl_semi_sync_master=semisync_master.so # uncomment next line on SLAVE ;plugin-load=rpl_semi_sync_slave=semisync_slave.so # Others options for Semisynchronous Replication ;rpl_semi_sync_master_enabled=1 ;rpl_semi_sync_master_timeout=10 ;rpl_semi_sync_slave_enabled=1 # http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html ;performance_schema [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid [mysqld] port = 3306 socket=/var/lib/mysql/mysql.sock skip-external-locking key_buffer_size = 64M max_allowed_packet = 128M sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M thread_cache = 8 max_connections = 25 query_cache_size = 16M table_open_cache = 1024 table_definition_cache = 1024 tmp_table_size = 32M max_heap_table_size = 32M bind-address = 0.0.0.0 Now sure if this helps but here is the MySQL user list: mysql> select * from mysql.user; +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | Host | User | Password | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string | 
+-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | localhost | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | 127.0.0.1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | ::1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ 3 rows in set (0.00 sec) I read about how MySQL treats localhost vs 127.0.0.1 as connecting via a socket or TCP, respectively. But I'm starting to get confused on what's really going on or if socket vs TCP is even the issue. Thanks in advance and I'm open for any tips and suggestions! Some more info: My MySQL client, running OS X 10.8.4, is mysql Ver 14.14 Distrib 5.6.10, for osx10.8 (x86_64) using EditLine wrapper My MySQL server, running on CentOS 6.4 32-bit, is mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+--------------------------------------+ | Variable_name | Value | +-------------------------+--------------------------------------+ | innodb_version | 1.1.8 | | protocol_version | 10 | | slave_type_conversions | | | version | 5.5.28 | | version_comment | MySQL Community Server (GPL) by Remi | | version_compile_machine | i686 | | version_compile_os | Linux | +-------------------------+--------------------------------------+ 7 rows in set (0.00 sec)
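
    A hedged pair of checks, assuming shell access on the Linode: the "channel 4: open failed: connect failed: Connection refused" comes from sshd on the server failing to reach the tunnel target, so confirm what mysqld is actually listening on and point the forward at the IPv4 loopback explicitly:

        # on the server: is anything listening on 127.0.0.1:3306?
        netstat -tlnp | grep 3306

        # from the client: name the IPv4 loopback as the tunnel target
        ssh linode -L 3307:127.0.0.1:3306
        mysql -h 127.0.0.1 -P 3307 -u root --protocol=tcp -p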

    Read the article

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nick Lin
    I know the SSL issue has been beaten to death, but: I'm using a DNS redirect to force my clients to use my intercept proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve here is to let all HTTPS requests connect directly to the origin server, thus bypassing Squid:

        HTTP connection: proxy through Squid
        HTTPS connection: bypass Squid and connect directly

    I spent the past few days googling and trying different methods, but none have worked so far. I read about SSL tunneling using the CONNECT method but couldn't find any more information on it. I tried a similar approach using RINETD to forward all traffic going through port 443 of my Squid box back to the original IP of www.pandora.com. Unfortunately, I did not realize that all other HTTPS requests would also be forwarded to the IP of www.pandora.com. For example, https://www.gmail.com also takes me to https://www.pandora.com. Since I'm running intercept mode, the forwarding needs to be dynamic and match each HTTPS domain name with its proper original IP. Can this be done in Squid or iptables?

    Lastly, I'm directing traffic to my Squid server using a DNS zone redirect. For example, a client requests www.google.com, my DNS server directs that request to my Squid IP, and then my transparent Squid proxies that request. Will this setup affect what I'm trying to achieve? I tried many methods but couldn't get it to work. Any thoughts on how to do this?
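
    A hedged sketch of the usual alternative to the DNS trick, assuming the Squid box (or the gateway in front of the clients) can run iptables: intercept only port 80 and simply leave 443 alone, which gives exactly the HTTP-proxied / HTTPS-direct split described above:

        # send plain HTTP to Squid's intercept port (3128 assumed here)
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
        # port 443 is not redirected at all, so HTTPS goes straight to the origin servers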

    Read the article

  • Adding Hyper-V Role Errors

    - by Brian
    Hello, I have a Win 2008 R2 Data Center machine, and when I added the Hyper-V role, I got the following errors:

        'Hypervisor' driver required by the Virtual Machine Management service is not installed or is disabled. Check your settings or try reinstalling the Hyper-V role.

        Hyper-V launch failed; Either VMX not present or not enabled in BIOS.

    Any help would be appreciated, as I am a n00b to the server world. Thanks.
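
    The "VMX not present or not enabled in BIOS" part usually points at hardware virtualization (Intel VT-x / AMD-V) being switched off in the firmware or missing from the CPU. A hedged way to check from inside Windows, using the Sysinternals Coreinfo tool (a separate download, not built in):

        :: -v reports virtualization-related CPU features (VT-x/EPT or AMD-V)
        coreinfo.exe -v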

    Read the article

  • APC UPS replace battery light and apcupsd reporting "replace battery"

    - by mgjk
    We have an APC Smart UPS 1500. The "Replace Battery" light is on, and apcupsd reports: Emergency! Batteries have failed on UPS xxxx. Change them NOW However, from this article, http://sturgeon.apcc.com/kbasewb2.nsf/for+external/f39c4312fcaf7b948525679a005ebb78?OpenDocument it seems that it's not so clear that the UPS battery needs to be replaced. Stranger, according to the information on the UPS, an 11 minute runtime at 42.9% load running at 27.7V isn't so bad. Any thoughts about what to try next? We're a non-profit, money is an object. It would be a shame to replace a battery with a year or so left in it. # apcaccess status APC : 001,041,1017 DATE : Thu Mar 29 13:01:41 EDT 2012 HOSTNAME : oreilly2 VERSION : 3.14.6 (16 May 2009) debian UPSNAME : xxxx CABLE : Custom Cable Smart MODEL : Smart-UPS 1500 UPSMODE : Stand Alone STARTTIME: Thu Mar 29 12:57:30 EDT 2012 STATUS : ONLINE LINEV : 112.3 Volts LOADPCT : 42.9 Percent Load Capacity BCHARGE : 100.0 Percent TIMELEFT : 11.0 Minutes MBATTCHG : 5 Percent MINTIMEL : 3 Minutes MAXTIME : 0 Seconds OUTPUTV : 112.3 Volts SENSE : High DWAKE : -01 Seconds DSHUTD : 090 Seconds LOTRANS : 106.0 Volts HITRANS : 127.0 Volts RETPCT : 000.0 Percent ITEMP : 23.8 C Internal ALARMDEL : Always BATTV : 27.7 Volts LINEFREQ : 60.0 Hz LASTXFER : No transfers since turnon NUMXFERS : 0 TONBATT : 0 seconds CUMONBATT: 0 seconds XOFFBATT : N/A SELFTEST : NO STATFLAG : 0x07000008 Status Flag SERIALNO : AS0603298896 BATTDATE : 2006-01-14 NOMOUTV : 120 Volts NOMBATTV : 24.0 Volts FIRMWARE : 601.3.D USB FW:1.5 APCMODEL : Smart-UPS 1500 END APC : Thu Mar 29 13:02:12 EDT 2012 Error when running upstest You are using a SMART cable type, so I'm entering SMART test mode mode.type = USB_UPS Setting up the port ... Hello, this is the apcupsd Cable Test program. This part of apctest is for testing Smart UPSes. Please select the function you want to perform. 1) Query the UPS for all known values 2) Perform a Battery Runtime Calibration 3) Abort Battery Calibration 4) Monitor Battery Calibration progress 5) Program EEPROM 6) Enter TTY mode communicating with UPS 7) Quit Select function number: 2 First ensure that we have a good link and that the UPS is functionning normally. Simulating UPSlinkCheck ... YWrote: Y Got: getline failed. Apparently the link is not up. Giving up.

    Read the article
