Search Results

Search found 16316 results on 653 pages for 'force download'.

  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" whose current certificate is about to expire. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). The email containing links to download the certificate mentioned:

        Certificate Details:
        SSL Type : InCommon SSL
        Server   : OTHER

    I forwarded the email to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had the same issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed certificate (working) and the new one I just had signed by InCommon (not working).

        $ openssl x509 -in sso-current.cer -text

    shows, with irrelevant information omitted:

        Data:
            Version: 3 (0x2)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected]
            Validity
                Not Before: Oct  1 00:00:00 2009 GMT
                Not After : Nov 28 23:59:59 2012 GMT
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 CRL Distribution Points:
                    Full Name:
                        URI:http://crl.thawte.com/ThawteServerPremiumCA.crl
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                Authority Information Access:
                    OCSP - URI:http://ocsp.thawte.com
        Signature Algorithm: sha1WithRSAEncryption

    and

        $ openssl x509 -in sso-new.cer -text

    shows:

        Data:
            Version: 3 (0x2)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA
            Validity
                Not Before: Nov  8 00:00:00 2012 GMT
                Not After : Nov  8 23:59:59 2014 GMT
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Authority Key Identifier:
                    keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D
                X509v3 Subject Key Identifier:
                    18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02
                X509v3 Key Usage: critical
                    Digital Signature, Key Encipherment
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                X509v3 Certificate Policies:
                    Policy: 1.3.6.1.4.1.5923.1.4.3.1.1
                    CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf
                X509v3 CRL Distribution Points:
                    Full Name:
                        URI:http://crl.incommon.org/InCommonServerCA.crl
                Authority Information Access:
                    CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt
                    OCSP - URI:http://ocsp.incommon.org

    Nothing jumps out at me as the reason one would not work, so I don't have a specific request for the signer about what to do differently when re-signing.
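
    For what it's worth, the "server software" field on a CSR form normally only changes the installation instructions the CA shows you, not the certificate itself, so one sanity check worth doing (a sketch; the DER encoding of the downloaded intermediate is an assumption based on the AIA URI above) is to verify the new certificate against the InCommon intermediate:

        # fetch the issuing CA named in the Authority Information Access extension
        wget http://cert.incommon.org/InCommonServerCA.crt
        # AIA-published certs are often DER; convert to PEM if needed
        openssl x509 -inform DER -in InCommonServerCA.crt -out incommon-ca.pem
        # verify the server cert, trusting system roots plus this intermediate
        openssl verify -untrusted incommon-ca.pem sso-new.cer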

  • 404 Error on a file that exists?

    - by Abs
    Hello all,

    A script makes a GET request to my URL like so:

        http://mydomain.com/cgi-bin/uu_ini_status_audios.pl?tmp_sid=b742be1d131c4d32237a9f1fcdca659e&rnd_id=0.2363453360320319

    However, I get a 404 returned straight away:

        The requested URL /cgi-bin/uu_ini_status_audios.pl was not found on this server.

    But that script exists on my server, I can see the file! It has the correct permissions (I gave it 777 to be sure). It is also owned by my apache user and it's in the group apache. What am I missing? Thanks for any help on this!

    Update

    I thought it would have been a htaccess (rewrite) issue, but I don't think it is anymore. I tried putting an index.php file in there and accessing it via my URL, but I can't even do that! I tried this: http://mydomain.com/cgi-bin/index.php - same 404 error! I get this in my error log:

        [Tue Sep 14 14:42:49 2010] [error] [client xx.xxx.xx.xxx] script not found or unable to stat: /var/www/vhosts/mydomain.com/cgi-bin

    Access_log file:

        xx.xxx.xx.xxx - - [14/Sep/2010:14:48:25 +0200] "GET /cgi-bin/index.php HTTP/1.1" 404 475 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.9) Gecko/20100824 Firefox/3.6.9 (.NET CLR 3.5.30729)"

    Update 2

    My htaccess file:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^blog/ - [L]
        RewriteCond %{HTTP_HOST} ^www\.mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://mydomain.com/$1 [R=301,L]
        RewriteRule ^search/(.*)/(.*)/(.*)/(.*) /search.php?searchfor=$1&sortby=$2&page=$3&searchterm=$4
        RewriteRule ^confirmemail/(.*) /confirmemail.php?code=$1
        RewriteRule ^resetpassword/(.*) /resetpassword.php?code=$1
        RewriteRule ^resendconfirmation/(.*) /resendconfirmation.php?userid=$1
        RewriteRule ^categories/ /categories.php
        RewriteRule ^([-_~*a-zA-Z0-9]+)(\/)?$ /memberprofile.php?username=$1
        RewriteRule ^browse/audios/(.*)/(.*)/(.*)/(.*) /audios.php?sortby=$1&filter=$2&page=$3&title=$4
        RewriteRule ^browse/categories/audios/(.*)/(.*)/(.*)/(.*) /categoryaudios.php?sortby=$1&filter=$2&page=$3&title=$4
        RewriteRule ^audios/(.*)/(.*) /playaudio.php?audioid=$1&title=$2
        RewriteRule ^download/audio/(.*)/(.*) /downloadaudio.php?AUDIOID=$1&title=$2
        RewriteRule ^members/audios/(.*)/(.*) /memberaudios.php?pid=$1&username=$2
        RewriteRule ^syndicate/audios/(.*)/(.*) /syndicateaudios.php?filter=$1&title=$2
        </IfModule>

    Update 3

        [root@mydomain ~]# ls -la /var/www/vhosts/mydomain.com/httpdocs/cgi-bin/
        total 60
        drwxr-xr-x  3 apache root     4096 Sep 14 14:37 .
        drwxr-x--- 20 som    psaserv  4096 Sep 14 14:40 ..
        drwxr-xr-x  2 apache root     4096 Sep  7 03:01 configs
        -rwxrwxrwx  1 apache root        4 Sep 14 14:37 index.php
        -rwxrwxrwx  1 apache apache   6520 Sep  7 03:01 uu_ini_status_audios.pl
        -rwxr-xr-x  1 apache root     3215 Sep  7 03:01 uu_lib_audios.pl
        -rwxr-xr-x  1 apache root    30249 Sep  7 03:01 uu_upload_audios.pl
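
    One detail worth noticing when comparing paths: the error log stats /var/www/vhosts/mydomain.com/cgi-bin (no httpdocs component), while the listing in Update 3 is of .../httpdocs/cgi-bin - which suggests the vhost maps /cgi-bin/ somewhere other than where the scripts live. A sketch of the kind of directive to look for in the Plesk-generated vhost config (paths here are assumptions based on the layout above):

        # e.g. in /var/www/vhosts/mydomain.com/conf/httpd.include
        ScriptAlias /cgi-bin/ "/var/www/vhosts/mydomain.com/cgi-bin/"
        # if the scripts actually live under httpdocs, the alias would need to be
        # ScriptAlias /cgi-bin/ "/var/www/vhosts/mydomain.com/httpdocs/cgi-bin/"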

  • Looking for Fiddler2 help: connection to gateway refused... just got rid of a virus?

    - by john mackey
    I use Fiddler2 for Facebook game items, and it's been a great success. I accessed a website to download some .dat files I needed. I think it was eshare, ziddu or megaupload, one of those. Anyway, even before the .rar file had downloaded, I got this weird green shield in the bottom right-hand corner of my computer. It said a Trojan was trying to access my computer, or something to that effect, and prompted me to click the shield to begin anti-virus scanning. It turns out this rogue program is called Antivirus System Pro and is pretty hard to get rid of. After discovering the rogue program, I tried using Fiddler and got the following error:

        [Fiddler] Connection to Gateway failed.
        Exception Text: No connection could be made because the target machine actively refused it 127.0.0.1:5555

    I ended up purchasing SpyDoctor + Antivirus, which I'm told is designed specifically for getting rid of these types of programs. Anyway, I did a quick scan last night with SpyDoctor and Malwarebytes. Malwarebytes picked up 2 files, and SpyDoctor found 4. Most were insignificant, but it did find a worm called Worm.Alcra.F, which was labeled high-priority. I don't know if that's the Antivirus System Pro or not, but SpyDoctor said it got rid of all of those successfully. I tried to run Fiddler again before leaving home, but was still getting the "gateway failed" error. I'm using the newest version of Firefox.

    When I initially set up Fiddler 2.2.8.6, I couldn't get it to run at first, so I found a FAQ on the internet that said I needed to go through Tools > Options > Settings and set up an HTTP proxy of 127.0.0.1 with port 8888. Once I set that up and downloaded the Fiddler helper as a Firefox add-on, it worked fine. When I turn on Fiddler, it automatically switches my proxy setting from "no proxy" (the default) to 127.0.0.1 with port 8888. It worked fine until my computer caught this virus.

    Anyway, hopefully I've given you sufficient information to offer me your best advice here. Like I said, SpyDoctor says the bad stuff is gone, so maybe the rogue program made some type of change in my Fiddler that I could just reset or uncheck or something like that? Or will I need to completely remove Fiddler and those .dat and .rar files I downloaded? Any help would be greatly appreciated. Thanks for your time.
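
    A note on that error text: the 127.0.0.1:5555 is the upstream "gateway" proxy Fiddler captured from the system when it started, and port 5555 looks like a leftover proxy the rogue program registered. A sketch of how to inspect and clear the proxy settings with Windows built-ins (run from an elevated command prompt, then restart Fiddler):

        :: machine-wide WinHTTP proxy
        netsh winhttp show proxy
        netsh winhttp reset proxy
        :: per-user (IE/WinINET) proxy; ProxyServer should not point at 127.0.0.1:5555
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer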

  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config:

        $ cat /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these origin patterns
        Unattended-Upgrade::Origins-Pattern {
                // Archive or Suite based matching:
                // Note that this will silently match a different release after
                // migration to the specified archive (e.g. testing becomes the
                // new stable).
                "o=Debian,a=stable";
                "o=Debian,a=stable-updates";
                // "o=Debian,a=proposed-updates";
                "origin=Debian,archive=stable,label=Debian-Security";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
                // "vim";
                // "libc6";
                // "libc6-dev";
                // "libc6-i686";
        };

        // This option allows you to control if on a unclean dpkg exit
        // unattended-upgrades will automatically run
        //   dpkg --force-confold --configure -a
        // The default is true, to ensure updates keep getting installed
        //Unattended-Upgrade::AutoFixInterruptedDpkg "false";

        // Split the upgrade into the smallest possible chunks so that
        // they can be interrupted with SIGUSR1. This makes the upgrade
        // a bit slower but it has the benefit that shutdown while a upgrade
        // is running is possible (with a small delay)
        //Unattended-Upgrade::MinimalSteps "true";

        // Install all unattended-upgrades when the machine is shuting down
        // instead of doing it in the background while the machine is running
        // This will (obviously) make shutdown slower
        //Unattended-Upgrade::InstallOnShutdown "true";

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. A package that provides
        // 'mailx' must be installed. E.g. "[email protected]"
        Unattended-Upgrade::Mail "root";

        // Set this value to "true" to get emails only on errors. Default
        // is to always send a mail if Unattended-Upgrade::Mail is set
        Unattended-Upgrade::MailOnlyOnError "true";

        // Do automatic removal of new unused dependencies after the upgrade
        // (equivalent to apt-get autoremove)
        //Unattended-Upgrade::Remove-Unused-Dependencies "false";

        // Automatically reboot *WITHOUT CONFIRMATION* if
        // the file /var/run/reboot-required is found after the upgrade
        Unattended-Upgrade::Automatic-Reboot "true";

        // Use apt bandwidth limit feature, this example limits the download
        // speed to 70kb/sec
        //Acquire::http::Dl-Limit "70";

    As you can see, Automatic-Reboot is true and thus the server should automatically reboot. Last time I checked, the server had been up for over 100 days, which means the update from Debian 7.1 to Debian 7.2 happened while the server was up (and indeed, all updates were installed). That update includes kernel updates, which means the server should have rebooted. It did not. The server was running very slowly, so I rebooted manually, which fixed that.

    I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched that file and waited one week; the file still exists and the server did not reboot. So I think unattended-upgrades ignores the auto-reboot part.

    Am I doing something wrong here? Why did the server not restart? The upgrade part works perfectly, by the way; it's just the reboot part that does not seem to work as it should.
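
    A quick way to watch the reboot decision without waiting for the 6 AM run (a sketch; the flags and log path are those shipped with the Debian unattended-upgrades package) is a verbose dry run with the marker file in place:

        # create the marker that Automatic-Reboot keys off
        sudo touch /var/run/reboot-required
        # simulate a run without installing anything
        sudo unattended-upgrade --dry-run --debug
        # then inspect what it decided
        sudo tail -n 50 /var/log/unattended-upgrades/unattended-upgrades.log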

  • Howto Nginx + git-http-backend + fcgiwrap (Debian Squeeze)

    - by brainsqueezer
    I am trying to set up git-http-backend with Nginx, but after 24 hours of wasting time and reading everything I could, I think this config should work, yet it doesn't:

        server {
            listen 80;
            server_name mydevserver;

            access_log /var/log/nginx/dev.access.log;
            error_log /var/log/nginx/dev.error.log;

            location / {
                root /var/repos;
            }

            location ~ /git(/.*) {
                gzip off;
                root /usr/lib/git-core;
                fastcgi_pass unix:/var/run/fcgiwrap.socket;
                include /etc/nginx/fastcgi_params2;
                fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
                fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/;
                fastcgi_param SCRIPT_NAME git-http-backend;
                fastcgi_param GIT_HTTP_EXPORT_ALL "";
                fastcgi_param GIT_PROJECT_ROOT /var/repos;
                fastcgi_param PATH_INFO $1;
                #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            }
        }

    Content of /etc/nginx/fastcgi_params2:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REMOTE_USER $remote_user;
        # required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    But the config does not seem to be working:

        $ git clone http://mydevserver/git/myprojectname/
        Cloning into myprojectname...
        warning: remote HEAD refers to nonexistent ref, unable to checkout.

    And I can request a nonexistent project and get the same answer:

        $ git clone http://mydevserver/git/thisprojectdoesntexist/
        Cloning into thisprojectdoesntexist...
        warning: remote HEAD refers to nonexistent ref, unable to checkout.

    If I change root to /usr/lib I get a 403 error, and this is reported to the nginx error log:

        2011/11/23 15:52:46 [error] 5224#0: *55 FastCGI sent in stderr: "Cannot get script name, is DOCUMENT_ROOT and SCRIPT_NAME set and is the script executable?" while reading response header from upstream, client: 198.168.0.4, server: mydevserver, request: "GET /git/myprojectname/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "mydevserver"

    My main trouble is finding the correct root value for this configuration. Maybe there are some permission problems.

    Notes: /var/repos/ is owned by www-data and contains folders with bare git repos. All of this works perfectly over SSH. If I go to http://mydevserver/git/myproject/info/refs with my browser, it is answered by git-http-backend asking me to send a command. /var/run/fcgiwrap.socket has 777 permissions.
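
    For comparison, here is a minimal variant of the location block - a sketch, not a verified fix - that leans on the fact that fcgiwrap executes the script named by SCRIPT_FILENAME when it is set, which makes the root/DOCUMENT_ROOT/SCRIPT_NAME juggling unnecessary:

        location ~ ^/git(/.*)$ {
            gzip off;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /var/repos;
            fastcgi_param PATH_INFO $1;
            fastcgi_param QUERY_STRING $args;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param REMOTE_USER $remote_user;
        }

    Note that both clones above print the same warning for an existing and a nonexistent repo, which suggests the request is not reaching the backend with the repository path it expects (i.e. PATH_INFO or GIT_PROJECT_ROOT is not pointing where intended).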

  • How to reinstall MSSQL Server 2008 with SP1? (Windows 7)

    - by user23884
    I am using Windows 7 Ultimate x64. I had earlier installed SQL Server 2008 with SP1, along with Visual Studio 2008 Team System with SP1. Now that VS2010 is out I wanted to install it, so I uninstalled Visual Studio, then MSSQL Server 2008 SP1, and then SQL Server 2008, as suggested here: http://mark.michaelis.net/Blog/SQLServer2008InstallNightmare.aspx

    But now when I try to reinstall it, I am unable to get it right; I get the error "Attempted to perform an unauthorized operation." The following is part of the log file:

        2010-04-16 04:54:57 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027)
        2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027)
        2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027)
        2010-04-16 04:54:57 Slp: Prompting user if they want to retry this action due to the following failure:
        2010-04-16 04:54:57 Slp: ----------------------------------------
        2010-04-16 04:54:57 Slp: The following is an exception stack listing the exceptions in outermost to innermost order
        2010-04-16 04:54:57 Slp: Inner exceptions are being indented
        2010-04-16 04:54:57 Slp:
        2010-04-16 04:54:57 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException
        2010-04-16 04:54:57 Slp:     Message:
        2010-04-16 04:54:57 Slp:         Attempted to perform an unauthorized operation.
        2010-04-16 04:54:57 Slp:     Data:
        2010-04-16 04:54:57 Slp:       WatsonData = Microsoft SQL Server
        2010-04-16 04:54:57 Slp:       DisableRetry = true
        2010-04-16 04:54:57 Slp:     Inner exception type: System.UnauthorizedAccessException
        2010-04-16 04:54:57 Slp:         Message:
        2010-04-16 04:54:57 Slp:             Attempted to perform an unauthorized operation.
        2010-04-16 04:54:57 Slp:         Stack:
        2010-04-16 04:54:57 Slp:             at System.Security.AccessControl.Win32.GetSecurityInfo(ResourceType resourceType, String name, SafeHandle handle, AccessControlSections accessControlSections, RawSecurityDescriptor& resultSd)
        2010-04-16 04:54:57 Slp:             at System.Security.AccessControl.NativeObjectSecurity.CreateInternal(ResourceType resourceType, Boolean isContainer, String name, SafeHandle handle, AccessControlSections includeSections, Boolean createByName, ExceptionFromErrorCode exceptionFromErrorCode, Object exceptionContext)
        2010-04-16 04:54:57 Slp:             at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity..ctor(ResourceType resourceType, SafeRegistryHandle handle, AccessControlSections includeSections)
        2010-04-16 04:54:57 Slp:             at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity.Create(InternalRegistryKey key)
        2010-04-16 04:54:57 Slp:             at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.SetSecurityDescriptor(String sddl, Boolean overwrite)
        2010-04-16 04:54:57 Slp: ----------------------------------------
        2010-04-16 10:37:19 Slp: User has chosen to cancel this action
        2010-04-16 10:37:19 Slp: Watson Bucket 2 Original Parameter Values
        2010-04-16 10:37:19 Slp: Parameter 0 : SQL2008@RTM@
        2010-04-16 10:37:19 Slp: Parameter 2 : System.Security.AccessControl.Win32.GetSecurityInfo
        2010-04-16 10:37:19 Slp: Parameter 3 : Microsoft.SqlServer.Configuration.Sco.ScoException@1211@1
        2010-04-16 10:37:19 Slp: Parameter 4 : System.UnauthorizedAccessException@-2147024891
        2010-04-16 10:37:19 Slp: Parameter 5 : SqlBrowserConfigAction_install_ConfigNonRC
        2010-04-16 10:37:19 Slp: Parameter 7 : Microsoft SQL Server
        2010-04-16 10:37:19 Slp: Parameter 8 : Microsoft SQL Server
        2010-04-16 10:37:19 Slp: Final Parameter Values

    I have googled around for the given error, but all I could find was advice to use regedit and reset permissions on certain registry keys - and I don't see any registry keys with access problems in the log file. The log file can be downloaded here: http://www.mediafire.com/?dznizytjznn. Please help me out here; I am a developer and I cannot afford an OS reinstallation! Thanks in advance.
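
    Since the inner exception is an UnauthorizedAccessException thrown from a registry-security call (GetSecurityInfo during the SQL Browser config step), the usual workaround is to re-grant Administrators full control over the SQL Server registry trees. A sketch using subinacl (a separately installed Microsoft tool - verify the failing key first and export the keys before touching ACLs):

        REM from an elevated command prompt
        subinacl /subkeyreg "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server" /grant=administrators=f
        REM on x64, the 32-bit registry view as well
        subinacl /subkeyreg "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server" /grant=administrators=f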

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to (1) communicate with each other (for instance, a virtual web server has to access a virtual database server); (2) be reachable from the dedicated server (for instance, SSH access); and (3) access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by having a host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)) and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (and bound to vboxnet0 there); this allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at the least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE...

    Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", i.e. without a counterpart on the dedicated server? How would I have to set up the routing on the host server? Please note that the host/dedicated server has only one network adapter (eth0), which is connected to the provider's network.

    Regards, Martin
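
    To sketch the routed alternative (since bridging onto eth0 is off the table): the guests can stay on vboxnet0 with a single adapter each, and the host simply forwards and NATs for them. The assumptions here are VirtualBox's default host-only subnet 192.168.56.0/24 and eth0 as the provider-facing interface:

        # on the dedicated server
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE
        # guests then use the host's vboxnet0 address as their default gateway

    With that in place the NAT adapter (eth1) on each guest becomes redundant, and Shorewall/OpenVPN only ever see one interface per guest.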

  • Google Apps For Business, SSO, AD FS 2.0 and AD

    - by Dominique dutra
    We are a small company with 22 people in the office. We had a lot of problems with e-mail in the past, so I decided to change over to Google Apps for Business. It is the perfect solution for us, except for one thing: I need to be able to control access to the mailboxes. Only users inside the office, authenticated against AD, or users authenticated to our VPN should be able to connect to Gmail. From what I've read, this is possible using the SSO (Single Sign-On) solution provided by Google - but I am having some trouble finding consistent information about it.

    First of all, our infrastructure: Windows Server 2008 R2 Active Directory, one domain only, plus Kerio Control for QoS and VPN. That's about it on our side. On Google Apps' side, I have one account and 3 domains that my users use to log in. The main domain has most of the users, but there are a couple of people that log in using one of the subdomains. I have 3 domains because I run mail for 3 companies and wanted them all within the same control panel.

    Well, I found some guides on the internet, but none of them cover the AD FS installation part. I've read somewhere that I needed to download AD FS 2.0 directly from Microsoft.com, because the one that came with Windows Server was an old version. I downloaded it (adfsSetup.exe) and tried to install it, but got an error saying that the program needed Windows Server 2008 SP2. My Windows Server 2008 is R2.

    I really need some help here; this is very important, and I don't want to have to pay $1000 for an SSO solution when I have an AD set up. Can someone please point me in the right direction? Where I can find an AD FS 2.0 setup compatible with R2 would be a good start - or is the one that came with R2 already the 2.0 version? After the initial setup, there are some guides on the internet about the Google Apps part, and it seems to be really easy. I also tried adding the AD FS role, but there are a bunch of options which I have no idea what they mean, and I couldn't find any guide covering that on the internet. I don't have a lot of experience with Windows Server, but I have a company which is certified and provides us with support, so I can ask for their help with the later setup - I just don't think AD FS is a very common thing to deal with.

  • How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7?

    - by Timo
    The question says it all, but more detail follows ;) I've got a new computer that runs Windows 7 64-bit (Home Edition) and I'd like to connect it to my wireless home network (Sitecom Wireless Gigabit Router 300N WL-352 v1 002) with a Sitecom Wireless USB Micro Adapter 300N WL-352 V2 001.

    After installing the router (i.e. connecting it to the modem and power) and ensuring that wireless is indeed enabled, I installed the driver for the USB adapter on the new computer described above. After the installation (drivers and utility on CD) completed successfully, I rebooted the computer and inserted the USB adapter. After discovering the right network and connecting to it using the network key, a connection was successfully made (using the Sitecom 300N USB Wireless LAN utility). In the LAN utility I can see that the signal strength is approximately 50% and the connection quality approximately 80%.

    Judging from these numbers I assumed all was fine and started to use the connection (reading news on nu.nl, a Dutch news site), but noticed that the connection was lost several times in a very short time span. Each time the connection was resumed, resulting in the 50/80 percent numbers described above; however, the website was not loaded completely and often a timeout was reported. When inspecting the drivers through Device Management (Windows' Apparaatbeheer in Dutch), there were no errors or warnings; everything seemed to be in order. In an attempt to solve this, I downloaded the latest drivers for the USB adapter, but the problems remained.

    Finally, I tried to connect the computer with a Siemens Gigaset USB Adapter 108. This process was troublesome, since I had to download a driver (from the manufacturer's site) and tell Windows 7 to use the Windows Vista driver when installing the new hardware, as there is (was) no Windows 7 driver available. This resulted in a usable connection, although not a very stable one when reconfiguring the router - which took the form of selecting a different wireless channel on the router, even using the Sitecom utility mentioned above to check whether other networks were communicating on that channel (and thus picking a channel not used by other networks). Again, no result when changing back to the Sitecom USB adapter. Note that this means (I think) that I could use the internet connection with the Siemens adapter, so the problem is not in the router.

    So: how do I get wireless working (properly) with the Sitecom Wireless USB Micro Adapter 300N on Windows 7?

    PS Sorry - I had links in place for the USB adapter, the router and the Siemens adapter as well, but I'm not (yet) allowed to post them here.

  • How do I reset/update my BIOS for Optiplex GX280?

    - by Sam Langlhey
    So far this has been a nightmare for me, which has been frustrating me constantly. I am using a Dell Optiplex GX280 with Windows XP Home Edition, running BIOS version A04. Recently, I rebooted the PC to find that it's not booting. It gets to the Windows boot-up screen with the progress bar, only to restart and repeat the same process again, over and over.

    Frustrated as I am, I inserted the Windows recovery CD to either repair or reinstall the operating system, only to find out that was not possible. I hit F8 to get the boot options; each boot option that I selected gave me an error saying: "Selected boot device is not available." Right after that, I went into the BIOS settings and ran a diagnostic test, which recognized all the boot devices onboard. So now I cannot even repair or reinstall Windows XP, because the system is not booting from any of the boot devices. The surprise came when I removed the hard drive from the computer and loaded it into another computer successfully; that's right, there is nothing wrong with the hard drive. At that point I was totally puzzled.

    I found a few pointers online saying that the BIOS boot block itself might be corrupted and I might need to flash/update the BIOS. I found detailed instructions on how to create a boot disk by downloading the BIOS firmware from the manufacturer's website. I did exactly as instructed below:

    1. Download the latest version (or your chosen version) of the BIOS file for your computer or motherboard from the manufacturer's support site.
    2. Rename the downloaded file to AMIBOOT.ROM.
    3. Copy the file to a floppy disk.
    4. Insert the floppy disk into the floppy drive.
    5. Turn on the system.

    After I did that and powered on the PC to boot from the floppy drive, it gave me this error message: "Non-System Disk or Disk Error. Replace and Strike any key when ready." I did all that, and I kept pressing [Ctrl]+[Home] to force it, but got no satisfying result.

    Desperate as I am, my next attempt is to try the instructions below. Since I want to be ready in the event it does not work, do you have any solution that you can provide? Please keep in mind that I cannot boot from any of the devices at this moment. My only hope now is to come up with a solution that works through the floppy drive, since that's the only drive unaffected. Thank you very much for your advice and support in advance.

    To create a Windows startup disk, insert a floppy disk into the drive of a similarly configured, working Windows XP system, launch My Computer, right-click the floppy disk icon, and select the Format command from the context menu. When you see the Format dialog box, leave all the default settings as they are and click the Start button. Once the format operation is complete, close the Format dialog box to return to My Computer, double-click the drive C icon to access the root directory, and copy the following three files to the floppy disk: Boot.ini, NTLDR, Ntdetect.com.

  • Hour-long shutdown duration "shutting down hyper-v virtual machine management service"

    - by icelava
    I have a Windows 2008 R2 server that is a Hyper-V host (Dell PowerEdge T300). Today for the first time I encountered an odd situation: I lost the connection to one of the guest machines, but logging on physically, it seems the guest OS is still running, just no longer contactable via the network. I tried to shut down the guest machine (Windows XP) but it would not shut down, getting stuck on a "Not responding" dialog box that cannot be dismissed. I used the Hyper-V management console to reset the machine, and it could not get out of the resetting state. I tried to save another Windows 2003 guest machine, and it made no progress with its Saving state (0%). The other running Windows 2003 guest was stuck at the logon dialog.

    My first suspicion was that one of this week's Windows update patches (10 Nov 2011) might have something to do with it; a system restart was still pending. Well, since I could not do anything with Hyper-V, I proceeded with the Windows Update restart, and now it has been stuck for half an hour at "Shutting down Hyper-V Virtual Machine Management service". Prior to restarting, I did not observe any hard disk errors reported in the system event log, so I doubt it is a disk-related condition. Shall I force a hard reboot?

    UPDATE

    OK, so I left it hanging over an hour while attending to other matters, and thankfully the host cleanly restarted. I can operate the guest machines fine now. Phew. Hyper-V must have been crawling for some reason. The VMs have been observed to become slow in the past when the host has been up for a long duration (two weeks to a month), but never this slow. I would love to know what types of performance monitoring items I can observe to give a hint why this can happen.

    UPDATE 2012-02-13

    In the months since, Hyper-V has stalled into this state another two times. It appears randomly and without any error event logs to hint at what is causing it to enter this "drunkard" state - just a Hyper-V management service timeout:

        Log Name:      System
        Source:        Service Control Manager
        Date:          13/2/2012 9:16:48 AM
        Event ID:      7043
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      elune
        Description:
        The Hyper-V Virtual Machine Management service did not shut down properly after receiving a preshutdown control.
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" />
            <EventID Qualifiers="49152">7043</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8080000000000000</Keywords>
            <TimeCreated SystemTime="2012-02-13T01:16:48.882901900Z" />
            <EventRecordID>567844</EventRecordID>
            <Correlation />
            <Execution ProcessID="764" ThreadID="8484" />
            <Channel>System</Channel>
            <Computer>elune</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="param1">Hyper-V Virtual Machine Management</Data>
          </EventData>
        </Event>

    The only way out of it is to restart the system.
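
    On the monitoring question, one low-effort starting point (a sketch; the available counter names vary with the installed roles) is to enumerate the Hyper-V performance counter sets and log a few host basics around the times the VMs crawl:

        REM list the Hyper-V counter sets available on the host
        typeperf -q | findstr /i "Hyper-V"
        REM sample host memory and CPU every 30 seconds into a CSV
        typeperf "\Memory\Available MBytes" "\Processor(_Total)\% Processor Time" -si 30 -o hv-host.csv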

  • Apache sends plain-text response when accessing SSL-enabled site without HTTPS

    - by animuson
    I've never encountered something like this before. I was attempting to simply redirect the page to the HTTPS version if the server determined that HTTPS was off, but instead of actually redirecting, it displays an HTML page - and even odder, it serves it as text/plain!

    The VirtualHost declaration (sort of):

        ServerAdmin [email protected]
        DocumentRoot "/path/to/files"
        ServerName example.com
        SSLEngine On
        SSLCertificateFile /etc/ssh/certify/example.com.crt
        SSLCertificateKeyFile /etc/ssh/certify/example.com.key
        SSLCertificateChainFile /etc/ssh/certify/sub.class1.server.ca.pem
        <Directory "/path/to/files/">
            AllowOverride All
            Options +FollowSymLinks
            DirectoryIndex index.php
            Order allow,deny
            Allow from all
        </Directory>
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule .* https://example.com:6161 [R=301]

    The page output:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://example.com:6161">here</a>.</p>
        <hr>
        <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address>
        </body></html>

    I've tried moving the Rewrite directives above the SSL ones, hoping that would do something, but nothing happens. If I view the page via HTTPS, it displays fine, like it should. Apache is obviously detecting that I'm trying to rewrite the path, but it's not acting on it. The Apache error log does not indicate anything that might have gone wrong.

    When I remove the RewriteRules:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://example.com/"><b>https://example.com/</b></a></blockquote></p>
        <p>Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.</p>
        <hr>
        <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address>
        </body></html>

    I get the standard "you can't do this because you're not using SSL" response, which is also served as text/plain rather than being rendered as HTML. This makes sense - it should only work for HTTPS-enabled connections - but I still want to redirect clients to the HTTPS connection when the server determines that HTTPS is not enabled.

    Thinking I could circumvent the system, I tried adding ErrorDocument 400 https://example.com:6161 to the config file instead of using RewriteRules, and that just gave me a new message, still no cheese:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>302 Found</title>
        </head><body>
        <h1>Found</h1>
        <p>The document has moved <a href="https://example.com:6161">here</a>.</p>
        <hr>
        <address>Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/1.0.0e DAV/2 Server at example.com Port 443</address>
        </body></html>

    How can I force Apache to actually redirect rather than displaying a "301" page that shows HTML in plain-text format?
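
    The underlying problem is that a plain-HTTP request hitting an SSL-enabled port is rejected by mod_ssl before the rewrite phase ever runs, so no directive inside the SSL vhost can cleanly redirect it. The usual pattern (a sketch, assuming you are free to bind a second, non-SSL port) is a separate plain-HTTP vhost whose only job is the redirect:

        # plain HTTP vhost: no SSLEngine, redirect everything
        <VirtualHost *:80>
            ServerName example.com
            Redirect permanent / https://example.com:6161/
        </VirtualHost>

        # the SSL vhost on 6161 keeps SSLEngine On and drops the RewriteRules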

  • RAID controller dropping the wrong drive

    - by bramp
    I've been having an issue with a 3ware 9500S-8 RAID 10, and I have contacted their tech support, but I wanted to hear the Server Fault community's recommendations. Firstly, all my data is backed up and secure, so I don't mind blowing my RAID away if I have to. But let me describe the problem I've been seeing.

    A month ago, disk 6 dropped out of the RAID. It is mirrored with disk 7, so I wasn't that bothered. I went to the data centre and replaced it. When I got back to the office, I noticed that disk 6 was still not in the RAID; in fact, the controller was still showing the name of the old drive. A week later I went back and replaced the drive again, thinking I might have swapped in a bad drive. Still the same problem. I decided to reboot the machine, to see if that would "force" the controller into seeing the new drive. It did, and a rebuild started to happen (from disk 7). Eventually both drives were showing as good.

    A week later, MySQL flagged the database as corrupt and was unable to repair it. I don't know what went wrong, but I suspected this 6-7 pair. At this point I noticed that the RAID had been constantly verifying itself, over and over. Regardless of this, I began to rebuild the database, which took about 19 hours; it's a big database. Near the end of the repair, the RAID controller told me it had dropped disk 7, and that some data was most likely corrupted.

    I contacted LSI tech support, and they very promptly started to help me. I mentioned that drive 7 had been dropped; they suspect that drive 7 was always at fault, and drive 6 had always been good. I want to know how often a RAID controller drops the wrong drive (in this case dropping drive 6 a month ago, instead of 7). I foolishly didn't run smartctl on the drives before I started swapping them out; I just assumed the RAID controller knew what it was talking about.

    I think my plan of action is to replace drive 7, rebuild the array from scratch, double-check smartctl on ALL the disks, and then start restoring my data again. I would appreciate anyone's input on the correct procedure for swapping drives, and on how often failures like this happen. If anyone would like more information, I'd be happy to provide it. Thanks in advance.

    Oh, some more information: I'm running CentOS 5.3, with two RAID arrays - a simple RAID 1 for the OS, and the RAID 10 for the database. Both arrays are on different controllers. The RAID 10 was made of 10 identical ST3640323AS drives, until I swapped in a SAMSUNG HD103SJ last month.
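
    For the smartctl follow-up: drives behind a 3ware card are queried through the controller device, not /dev/sdX. A sketch, assuming the 9500S appears as /dev/twa0 (the 9000-series driver; older 6/7/8xxx cards use /dev/twe0) and that ports are numbered from 0:

        # full SMART dump for the drive on controller port 6
        smartctl -a -d 3ware,6 /dev/twa0
        # quick health verdict across all 8 ports
        for p in 0 1 2 3 4 5 6 7; do echo "port $p"; smartctl -H -d 3ware,$p /dev/twa0; done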

  • How to use a different Ethernet connection

    - by SteveC
    I'm running a virtual machine at home which has a VPN connection to our main office, but I also want to connect to a share on another machine at home. When I check with IPCONFIG I can see two Ethernet connections... my work VPN:

        Ethernet adapter Local Area Connection* 11:
           Connection-specific DNS Suffix  . :
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : XXX.XXX.XXX.XXX
           Subnet Mask . . . . . . . . . . . : 255.255.254.0
           Default Gateway . . . . . . . . . : XXX.XXX.XXX.XXX

    and home local:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . : lan
           Link-local IPv6 Address . . . . . : xxxx::xxxx:xxxx:xxxx:xxxxxxx
           IPv4 Address. . . . . . . . . . . : 192.168.1.70
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . :

    What's weird is that when I was working before with a plugged-in Ethernet cable, I never had any problem getting to the share. I can PING the other machine, but I can't access the share at:

        \\othermachine\c$

    I tried TRACERT, but that disappears off to the work network and eventually gets back to the local other machine after a few time-outs. Is there any way to "force" the connection to stay local?

    UPDATE: the VPN is AEP SSL Tunnel
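
    One blunt way to keep local traffic local (a sketch; the addresses come from the ipconfig output above, and the VPN client may still override it on reconnect) is to pin a route for the home subnet onto the home adapter from an elevated command prompt:

        :: route the home LAN via the local adapter (192.168.1.70) with a low metric
        route add 192.168.1.0 mask 255.255.255.0 192.168.1.70 metric 1
        :: check which route now wins for the target machine
        route print 192.168.1.*
        :: if the share is addressed by name, also try the IP directly, e.g. \\192.168.1.x\c$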

  • WiFi signal is lost every 3 minutes

    - by Software Monkey
    For several weeks now my Android phone has been losing its WiFi signal momentarily, typically at about 3-minute intervals (about 3 minutes, 1.5 seconds) and occasionally at some longer interval that always seems to be just over 3 minutes. This causes an interruption of several seconds while the WiFi connection is re-established; it typically fails any download/streaming that is happening, makes web sites "unreachable", and generally makes the phone unusable as a data device due to the frequency. The signal remains down for about a second, but the phone takes a few more seconds to reconnect to the router.

    This happens regardless of proximity to the router, which otherwise shows a very strong signal - usually -40 to -30 dBm or better in the same room, nowhere less than -60 dBm anywhere in the house. Changing channels does not help (I've tried 1, 4, 8 and 9). Nor does turning off the router's guest access. Nor does turning off the 5.0 GHz band.

    Monitoring the signal on my phone with WiFi Analyzer shows all WiFi signals on all channels drop to nothing when the WiFi connection is lost (there are two other networks on different channels which are strong enough to be relevant, with about 6 others constantly fading in and out). WiFi Analyzer shows 3 separate signals for my router: the main 2.4 GHz, the guest 2.4 GHz and the 5.0 GHz. Using WiFi Analyzer on my wife's phone side-by-side shows no change in signal when my phone drops, nor does her phone drop. Monitoring the signal using our laptop, side-by-side, likewise shows no signal loss, and likewise the laptop does not lose its WiFi connection. But at work the phone seems not to exhibit the same behavior - or if it does, it's very occasional. Monitoring it all day at work, I only saw the signal drop 3 or 4 times. The signal strength of the various networks there is comparatively weak.

    AT&T were super helpful: "Sorry, we can't help you with WiFi problems. You could try doing a factory reset on your phone". </sarcasm

    The router is relatively new, but had been working fine with this phone since last December.

        Phone    : Motorola Atrix (about 8 months old)
        Router   : Belkin N750 DB (F9K1103 v1 (01C))
        Firmware : 1.00.46 (2011/10/28 6:37:11)
        Security : WPA/WPA2-Personal (PSK)

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome, but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else.

    We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VoIP), and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server while the other locations were reachable. Also, any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? OK). We had to implement traffic shaping on this server, since maxing out the connection was fairly trivial.

    I had the thought of running two (or more) OpenVPN servers in the network. These would have to be on the same subnet, though, as our application relies on the virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses, but that's not vital. For simplicity, let's call the current server office and the second server I'm setting up cloud. Call the server on the T1 phone.

    This proved to be rather complex, because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing), and vice versa. There are no iptables rules that would be blocking the traffic either. Recently I came across this article on Linux Journal, but the solution it provides seems to cover only the use of two servers and is somewhat outdated (I can't even find much documentation; their wiki is offline). They also state that adding more servers would be a complex task.

    Ideally, I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc.

    So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect. Thanks for reading, and sorry for the long post; I hope it gets the point across :P
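
    One piece of this is easy regardless of the server-side design: a single client profile can list all the endpoints and fail over between them. A minimal sketch (hostnames and port are placeholders, and it assumes all three daemons accept the same client certificates) - note this only gives client failover, not the hard part of keeping one subnet's state consistent across servers:

        # client.conf
        client
        dev tap
        proto udp
        remote office.example.com 1194
        remote cloud.example.com 1194
        remote phone.example.com 1194
        remote-random            # optional: spread clients across servers
        resolv-retry infinite
        keepalive 10 60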

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with a VPS, I'm planning to move from VPS to cloud when I hit the 1000-subscriber barrier. I'm looking at Windows Azure, but I'm OK with other suggestions. So here are my questions:

    1. Will it really cost me a kidney? Every subscriber needs to download around 4-5 MB of static resources each day. Bandwidth is free on the VPS, but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing. I mean, the VPS is just $2,000/yr.

    2. Do I need another VM, or is PHP included in the Web Sites? I have basic sysadmin skills and I think I can handle setting up a PHP install, but will I have to do this? If yes, what other services do I need to set up manually? What about Memcached, MySQL, etc.?

    3. What security protections does it include? For example, I currently have some basic protection included, covering things like directory traversals and executable file uploads; I also have CloudFlare on my other websites for DDoS protection. Will I need to do the same thing here too, can it even be installed, can I edit my DNS records, etc.?

    4. How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here?

    5. Do I get SSH access? Or at least FTP, remote MySQL access and maybe some incremental back-ups or something? Can I see my quotas and advanced traffic info?

    I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything, but I really need a parallel to regular hosting or something so I know what to expect.

  • Upgrading PHP from 5.3 to 5.4.7

    - by Takingsides
    So, quickly so to speak: I have noticed this topic around, I have searched, and there are plenty of solutions. However, these solutions do not work for me; not only that, but I'm intending to learn more about Debian-based OSes.

    Questions

    I would like to know how to upgrade PHP 5.3 to PHP 5.4.7 by compiling it from source myself, without using a third-party PPA. Is the way explained below the correct way of configuring PHP 5.4? I'm new to compiling from source.

    Set-up

    I run Ubuntu Server 12.04 64-bit. I've currently got: PHP 5.3, MySQL Server, Apache2, Memcached.

    The problem

    I initially installed PHP 5.3 using apt-get. I now wish to upgrade to PHP 5.4 for the advantage of traits in OOP, the new short array syntax, and all the other recent patches and such.

    Possible solutions

    I've seen the ondrej/ppa repository, which I refuse to use: it may work, but it's an unknown/untrusted source, and I'm also not learning how to administer from source that way, using configure, make and install accordingly. I've seen a solution compiling from source, which is essentially how I was hoping to go about it, with some guidance.

    Conclusion

    I didn't just expect to be spoon-fed; I went out and did some manual reading and at least started the ball rolling myself. This is how far I've got. The first thing I did was su into root (to save typing sudo all the darn time):

        $ sudo su

    The next thing I did was download the latest version of PHP (5.4.7) and extract its contents, ready to configure before installing:

        $ mkdir php5-new && cd !$
        $ wget -O php-5.4.7.tar.bz2 http://php.net/get/php-5.4.7.tar.bz2/from/uk3.php.net/mirror
        $ bzip2 -d php-5.4.7.tar.bz2
        $ tar xvf php-5.4.7.tar
        $ cd php-5.4.7
        $ ./configure --help

    Finally I decided to have a bash at it. I looked through the list of options and decided I needed to list ALL of the things I wanted to include in the configuration:

        $ ./configure --with-mysql --with-apache2 --with-libxml --with-openssl \
              --with-zlib --with-bz2 --with-curl --with-dom --with-gd --with-imap \
              --with-imap-ssl --with-mcrypt --with-mysqli --with-pdo-mysql \
              --with-libxml --enable-ftp --enable-mbstring --enable-soap

    Finally, the results... When the configuration process had finished, it threw an error:

        configure: error: xml2-config not found. Please check your libxml2 installation.
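
    That last error just means the libxml2 development headers are missing; in general, every --with-<lib> switch wants the matching -dev package installed before configure will pass. A sketch of the packages this particular configure line would pull in on Ubuntu 12.04 (names from the stock repositories - re-run ./configure after installing and chase any further "not found" messages the same way):

        $ sudo apt-get install build-essential libxml2-dev libssl-dev zlib1g-dev \
              libbz2-dev libcurl4-openssl-dev libpng12-dev libjpeg-dev \
              libc-client2007e-dev libmcrypt-dev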

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2, with a (Windows-native) software RAID-5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array.

    The problem is this: I noticed a few days ago, after copying a large file to the disk, that there was an integrity issue with that file - it was a ~12 GB file that I had downloaded via uTorrent. After moving it to the RAID array, I used uTorrent to relocate the download location and performed a re-check so I could seed it from there. The re-check found that only 6308/6310 chunks of the copied file were intact.

    My next step was to write a quick PowerShell script that would copy files to the array while taking a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000 MB) copied over just fine. When I started copying larger data (~15 GB), I found that the hash check failed about two-thirds of the time. The corrupt files had very, very small inconsistencies - less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the C:\ of the server and copying it repeatedly from there to the array, seeing similar results. Copying the data via Explorer, PowerShell, or the standard Windows command prompt yields the same results. None of the copies fail or report any problems, and the RAID array itself is listed as healthy in Disk Management.

    After a few experiments, I shut down the server and ran memtest overnight. No errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how it might affect a software RAID-5 volume. I next ran CrystalDiskInfo to check the SMART data on the drives, but found that CDI only detected 5 out of the 7 disks in the array. I have no idea why. Nevertheless, CDI shows the following "caution" flags, both on a single one of the drives:

        05 199 199 140 000000000001 Reallocated Sectors Count
        C5 200 200 __0 000000000001 Current Pending Sector Count

    Which is a little alarming, but I don't really know what to do with the information; I hardly feel like one reallocated sector could be causing this.

    At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checkers because I'm afraid they might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.
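
    For anyone wanting to reproduce the check, here is a sketch of the copy-and-verify script described above (written for the PowerShell 2.0 that ships with 2008 R2, hence .NET's SHA1 rather than the later Get-FileHash cmdlet; paths are placeholders):

        # hash helper: stream the file through SHA1 and return the hex digest
        function Get-Sha1([string]$path) {
            $sha1   = [System.Security.Cryptography.SHA1]::Create()
            $stream = [System.IO.File]::OpenRead($path)
            try     { [BitConverter]::ToString($sha1.ComputeHash($stream)) }
            finally { $stream.Close() }
        }

        $src = 'C:\bigfile.bin'   # source on the system drive
        $dst = 'R:\bigfile.bin'   # destination on the RAID-5 volume
        Copy-Item $src $dst -Force
        if ((Get-Sha1 $src) -eq (Get-Sha1 $dst)) { 'OK' } else { 'MISMATCH' }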

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.

    About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson).

    I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order!

    I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been?

    Thanks. Some more mdadm output that might be helpful:

    mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                 Magic : a92b4efc
               Version : 1.0
           Feature Map : 0x0
            Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                  Name : 0
         Creation Time : Tue Jun 10 10:27:58 2014
            Raid Level : raid6
          Raid Devices : 7
         Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
            Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
             Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
          Super Offset : 5857395368 sectors
                 State : clean
           Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
           Update Time : Tue Jun 10 13:01:06 2014
              Checksum : d275c82d - correct
                Events : 7036
            Chunk Size : 64K
            Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
           Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0
             Chunk Size : 64K
                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
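
    For the brute-force order search itself, the commonly used pattern (a sketch only - re-creating rewrites the superblocks, so only do this against overlays or with backups you trust; device names and the two missing slots follow the --detail output above) is to try a permutation and then ask the filesystem, read-only, whether it makes sense:

        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --level=6 --chunk=64 \
              --metadata=1.0 --raid-devices=7 \
              /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        # non-destructive check; the correct order is the one with few/no errors
        fsck.ext4 -n /dev/md0

    Repeat with the five present devices permuted while the two "missing" slots stay fixed at RaidDevice 2 and 4. On the layout question: an mdadm reshape restripes the data into the new 7-disk geometry, so the result should match a natively created 7-disk RAID-6 with the same chunk size and layout algorithm.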

    Read the article

  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago, the speakers on my Lenovo Thinkpad T410 (model number 2537A11) suddenly stopped working. The error happens every time I watch a video or listen to a music file: the sound just stops abruptly. At the moment I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop, and nothing else appears to be wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems). Here is the output of a nice program someone pointed me to:

    martin@martin:~/Downloads$ sudo python run.py --monitor
    Using temporary directory: /dev/shm/hda-analyzer
    You may remove this directory when finished or if you like to download the most recent copy of hda-analyzer tool.
    Downloading file hda_analyzer.py
    Downloading file hda_guilib.py
    Downloading file hda_codec.py
    Downloading file hda_proc.py
    Downloading file hda_graph.py
    Downloading file hda_mixer.py
    Downloaded all files, executing hda_analyzer.py
    Watching 1 cards
    ======================================

    Sound works normally for a while, then it stops and the following lines appear:

    Diff for codec 0/0 (0x14f15069):
    ---
    +++
    @@ -164,17 +164,17 @@
         Power: setting=D0, actual=D0
     Node 0x1f [Pin Complex] wcaps 0x400501: Stereo
       Pincap 0x00000010: OUT
       Pin Default 0x901701f0: [Fixed] Speaker at Int N/A
         Conn = Analog, Color = Unknown
         DefAssociation = 0xf, Sequence = 0x0
         Misc = NO_PRESENCE
       Pin-ctls: 0x40: OUT
    -    Power: setting=D0, actual=D0
    +    Power: setting=D3, actual=D3
       Connection: 2
          0x10* 0x11
     Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital
       Pincap 0x00000010: OUT
       Pin Default 0x40f001f0: [N/A] Other at Ext N/A
         Conn = Unknown, Color = Unknown
         DefAssociation = 0xf, Sequence = 0x0
         Misc = NO_PRESENCE

    And now there is also an error in the dmesg output:

    hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj.

    I changed bdl_pos_adj to various values (-1, 0, 64, 1024), and either nothing changes at all, or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. My hardware information is provided by the alsa-info.sh website.

    Update: I did some serious testing, even installed Windows, and now officially conclude that this is a hardware issue with my laptop speakers. Reasons:
    - The error occurs in my installed Debian Linux, in an Ubuntu live distribution, and in Windows XP.
    - No error message appears in any of the OSes; the sound keeps "playing", but I can't hear a thing.
    - I tested different setups, including OSS, ALSA, and the PulseAudio server on top.
    - If I use my new USB headphones, I can hear sound the whole time, without any sudden silences.

    So obviously, hard as it is to believe, my laptop speakers are not okay (I've never heard of a similar case). I'll award the bounty to anyone who can point me to good tutorials or the procedure for exchanging my T410 speakers (I still have warranty; the laptop was bought in Germany, but I am now in Denmark), or to someone who can explain the output from hda-analyzer (the big log above).
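
    For anyone retracing the bdl_pos_adj experiments mentioned above, the usual way to set the option persistently is through a modprobe configuration file. A minimal hedged sketch - the snd-hda-intel module name is inferred from the hda-intel dmesg line, and 32 is purely an illustrative value, not a known-good one for this codec:

    # Hedged sketch: persist a bdl_pos_adj value for the Intel HDA driver.
    echo "options snd-hda-intel bdl_pos_adj=32" > /etc/modprobe.d/hda-intel.conf
    # Reload so the option takes effect (this fails while the device is in
    # use; a reboot works too):
    modprobe -r snd-hda-intel && modprobe snd-hda-intel

    That said, given the conclusion that the failure reproduces under Windows as well, a driver option is unlikely to be the fix here.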

    Read the article

  • Two VHosts Use Same DocumentRoot, PHP Not Working on Second VHost

    - by thegrip
    I'm helping maintain an e-commerce site run on Magento; it is an outlet for our wholesale customers. We recently decided to open a second store to reach our retail customers, and we set it up as a second website inside our Magento installation so that we can share products across both stores. I'm in the process of setting up this new site on the server but have run into an issue. I've set up the second vhost for the new retail site and made its DocumentRoot the same as the wholesale site's, so we can use one Magento application for both sites. This is where the error occurs: when I browse to the new store, it triggers a download of the index.php file. So the DocumentRoot directive is working, but PHP seems to be broken in the process. I'm using Plesk to manage the server and have made sure PHP is turned on in both vhosts, but I still get the same issue. Does this sound like PHP breaking, or could my vhost.conf file be set up incorrectly? (The vhost is managed by Plesk and appears correct.) Any help will be much appreciated.

    EDIT: Here's the vhost config (generated by Plesk):

    <VirtualHost IPADDRESS:80>
        ServerName domain.com:80
        ServerAdmin "[email protected]"
        DocumentRoot /var/www/vhosts/domain.com/subdomains/tk/httpdocs
        CustomLog /var/www/vhosts/domain.com/statistics/logs/access_log plesklog
        ErrorLog /var/www/vhosts/domain.com/statistics/logs/error_log
        <IfModule mod_ssl.c>
            SSLEngine off
        </IfModule>
        <Directory /var/www/vhosts/domain.com/subdomains/tk/httpdocs>
            <IfModule mod_php4.c>
                php_admin_flag engine on
                php_admin_flag safe_mode off
                php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/tk/httpdocs:/tmp"
            </IfModule>
            <IfModule mod_php5.c>
                php_admin_flag engine on
                php_admin_flag safe_mode off
                php_admin_value open_basedir "/var/www/vhosts/mkdesigngroup.com/subdomains/tk/httpdocs:/tmp"
            </IfModule>
            Options -Includes -ExecCGI
        </Directory>
        Include /var/www/vhosts/domain.com/subdomains/tk/conf/vhost.conf

    And here's what I've added in vhost.conf (which is included by Plesk):

    DocumentRoot /var/www/vhosts/domain.com/subdomains/dev/httpdocs

    -grip
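
    One detail worth checking in the config above: the Plesk-generated <Directory> block that turns the PHP engine on covers .../subdomains/tk/httpdocs, while the DocumentRoot override in vhost.conf points at .../subdomains/dev/httpdocs, where no such block applies - which would explain Apache handing index.php back as a download. A hedged sketch of one possible fix (the paths are copied from the question; the open_basedir value is an assumption to adjust):

    # Hedged sketch: give the overridden DocumentRoot its own PHP-enabled
    # <Directory> block inside the Plesk-included vhost.conf.
    cat >> /var/www/vhosts/domain.com/subdomains/tk/conf/vhost.conf <<'EOF'
    <Directory /var/www/vhosts/domain.com/subdomains/dev/httpdocs>
        <IfModule mod_php5.c>
            php_admin_flag engine on
            php_admin_value open_basedir "/var/www/vhosts/domain.com/subdomains/dev/httpdocs:/tmp"
        </IfModule>
    </Directory>
    EOF
    apachectl configtest && apachectl graceful   # reload only if the syntax checks out

    (If pasting the heredoc, keep the closing EOF flush against the left margin, or edit the file directly in an editor.)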

    Read the article

  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately, I am now stuck at this point. Running:

    root@X100e:/var/cache/apt/archives# apt-get dist-upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following NEW packages will be installed:
      file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python
      python-minimal python2.6 python2.6-minimal readline-common
    0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0B/5,204kB of archives.
    After this operation, 19.7MB of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    (Reading database ... 6108 files and directories currently installed.)
    Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
    new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory
    which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
    please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
    aborting installation of python2.6-minimal
    dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    results in the above error. Running:

    root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    (Reading database ... 6108 files and directories currently installed.)
    Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
    new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory
    which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
    please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
    aborting installation of python2.6-minimal
    dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    results in the same error. And running:

    root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    (Reading database ... 6108 files and directories currently installed.)
    Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
    new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory
    which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
    please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
    aborting installation of python2.6-minimal
    dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    is not able to fix this either. Any clues how to fix this?
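
    The pre-installation script's message names the exact condition it objects to: /usr/lib/python2.6/site-packages exists as a real directory, while the script expects a symlink to /usr/local/lib/python2.6/dist-packages. A hedged sketch of one way to satisfy it (assumption: whatever currently lives in site-packages can be set aside; keep the backup until the upgrade finishes cleanly):

    # Hedged sketch: swap the real directory for the symlink the script
    # expects, then retry the unpack and let apt settle any leftover
    # dependencies.
    mv /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages.bak
    ln -s /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages
    dpkg -i /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    apt-get -f install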

    Read the article

  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated for my new project. I am running Java, Eclipse, and Tomcat to develop a large data-intensive application, and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. After comparing and equalising all the Java VM settings with my colleague, I think I have narrowed the performance issue down to the network. Is there a ping test I can do, or some other network diagnostic test, to flag up any problems? To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest and 90 Mb/s from the Win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1 ms on the host but 1.5 ms on the guest. I also did a broadband speed test and got a 10 Mb/s download speed on both, but the host has an upload speed of 10 Mb/s while the guest only uploads at 3 Mb/s. I've been trying to diagnose any MTU problems with ping -M do to identify packet fragmentation issues, but progress is slow because I don't have much experience in this area. From what I've read of other people's networking issues with VirtualBox and Linux guests on Win7 hosts, I should be able to get the guest's speed up to the same level as the host's. I installed a fresh VM with Ubuntu again to see if I'd foobar'd the first one somehow, but I'm getting the same iperf readings on the virgin installation. My setup is:

    Adapter 1: Intel PRO/1000 MT Desktop (NAT)
    Adapter 2: ditto (host-only adapter)

    eth0      Link encap:Ethernet  HWaddr 08:00:27:0b:76:bf
              inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:86236 errors:0 dropped:0 overruns:0 frame:0
              TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:69163946 (69.1 MB)  TX bytes:3530535 (3.5 MB)

    eth2      Link encap:Ethernet  HWaddr 08:00:27:a3:26:b8
              inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:59 errors:0 dropped:0 overruns:0 frame:0
              TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:9148 (9.1 KB)  TX bytes:7648 (7.6 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:701 errors:0 dropped:0 overruns:0 frame:0
              TX packets:701 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:66321 (66.3 KB)  TX bytes:66321 (66.3 KB)
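
    Since the question explicitly asks for a ping test or another diagnostic, here is a short hedged checklist to run from the guest; iperf has to be installed on both ends, and 192.168.0.10 is a stand-in for the colleague's address, not something from the setup above:

    # Hedged sketch: separate MTU, throughput, and offload effects.
    PEER=192.168.0.10                  # assumption: replace with the real peer
    ping -c 4 -M do -s 1472 $PEER      # 1472 payload + 28 header = 1500-byte MTU
    iperf -c $PEER -t 30 -P 4          # parallel streams expose per-stream caps
    ethtool -K eth0 tso off gso off    # offloads on the emulated e1000 can hurt NAT'd guests

    Independently of the measurements, switching the adapter type from the emulated Intel PRO/1000 to VirtualBox's paravirtualized virtio-net adapter is a cheap experiment that often improves guest throughput, though whether it closes the gap here is an open question.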

    Read the article

  • When upgrading from Vista to Windows 7 on a DELL laptop, how do I know which drivers to reinstall and in what order?

    - by msorens
    According to Dell's upgrade page for Vista to Windows 7, after using the upgrade assistant the final step is to install drivers, and they refer to this page for the order of driver installation, listing 9 items. From there I go to the Dell Drivers and Downloads page, enter my system tag, and get a list of the downloads available for my specific box. That page, by the way, links to driver install instructions that list 10 rather than 9 items. Going to Drivers Help in the side panel and clicking on "In what order should drivers be installed?" shows yet a third list, this one containing 13 items. Not surprisingly, the orders of these 3 lists are not quite the same for the items they have in common! Furthermore, of the 26 files Dell's site recommends for my machine, several are not shown on any of the 3 lists! I can make determinations for some of these: 6 of them are "applications", so I know which of those I want, and they could probably be safely installed after all the drivers; the BIOS should be unaffected by an OS upgrade, so it could be skipped; and the two tools in the diagnostics category could probably also be done after all the drivers. That leaves just a CD/DVD driver and a webcam driver unaccounted for. So my two related questions are these: How critical is the driver installation order, and which list do I follow? (Keep in mind this is an upgrade, not a fresh install.) And where in the order do I insert the CD/DVD and webcam drivers (if needed)? Dell's driver download page provides (in theory) the list of all downloads relevant to my specific machine, via the service tag. But do I actually need to reinstall all of them? Some? None? How does one determine this? They do label each as "Recommended" or "Optional", so do I need to reinstall all the recommended ones? (Part of the reason for my perplexed frown is that I wonder why I would need to reinstall a CD/DVD driver when I would already have used the drive to install the OS!)

    Read the article

< Previous Page | 595 596 597 598 599 600 601 602 603 604 605 606  | Next Page >