Search Results

Search found 35354 results on 1415 pages for 'joe even'.


  • Rails3 environment running very slow on Windows XP, Ubuntu 9.04, Ubuntu 9.10

    - by bergyman
    I've tried all three (granted the Ubuntu versions were via VirtualBox with XP as a host, but I gave the images all the available RAM my system has). Loading the rails environment is taking 30-60 seconds. rails console, rake test:units - anything that requires rails to load up. And not just on the first go - every time. I've even used autotest to see if it helps with execution time for unit tests, but it doesn't. Any time I change one test, it still takes 30 seconds to load them, and then about 4 seconds to execute. Has anyone else come across this issue? Has anyone figured out any way to fix this?
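    As a rough diagnostic sketch (not from the post itself; assumes a standard Rails 3 app directory on the Ubuntu side), timing a bare environment boot separates "Rails is slow to load" from "the tests are slow to run":

        time rails runner 'puts :booted'    # boot the full environment and exit
        time rake environment               # same idea via rake

    If both take 30+ seconds, the cost is in loading the environment itself rather than in the test framework or autotest.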

    Read the article

  • opening socket to google hangs on SYN_SENT

    - by puchu
    I have two computers at the moment: a downloader (Asus AT4NM10T-I) running Debian and a desktop (Asus Sabertooth 990FX) running Gentoo, on the same network behind NAT and even with the same Ethernet card (RTL8111E). The r8169 driver is compiled as a module on both computers. Some evenings the desktop cannot connect to Google and all its services, like right now:

        curl -v http://www.google.by

    On the downloader it receives the server's answer immediately. On the desktop it hangs, and when I ran this in another terminal:

        netstat -ntp | grep curl
        tcp   0   1   192.168.0.7:54126   173.194.35.191:80   SYN_SENT   4876/curl

    only after 1-2 minutes did it receive the server's answer. I tried changing the network's IP range and the desktop's MAC address, but nothing changed. When I try to connect to services other than Google, e.g.:

        curl -v http://www.yahoo.com

    both computers receive answers immediately! Only after I reboot the desktop does it start working with Google services correctly again. I can't understand what this bug is related to. In which bug tracker should I post this: r8169, the Linux kernel, or Google? PS. The desktop was checked with memtest: 5 passes, no errors.
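    A hedged diagnostic sketch (assumes root access and that eth0 is the desktop's interface): capturing the handshake toward the address stuck in SYN_SENT shows whether the SYN actually leaves the machine and whether a SYN-ACK ever comes back, which helps decide between a local driver/kernel problem and something on the path to Google.

        tcpdump -ni eth0 'host 173.194.35.191 and tcp port 80'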

    Read the article

  • Most efficient way to connect an ISAPI Dll to a windows service

    - by Mike Trader
    I am writing a custom server for a client. They want scalability, so I must use a thread pool and probably an I/O completion port to regulate it. The main requirement is that a Windows service manage the HTTP requests, for a number of reasons. An example of one would be that a client session spans many requests and continuity must be maintained. Another would be that the ISAPI DLL will be in the IIS address space and so its code will be lean and very carefully implemented. The extensive processing in the Windows service may get unruly for the duration of the lengthy development. If the service crashes it will not take out IIS. Anyway, the remaining decision is how to have these two processes communicate. We have talked about pipes, TCP, global memory and even a single pipe with multiplexed data a la FastCGI. Would love to hear anyone's experience with a decision like this.

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections

    - by Hammer Bro.
    Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry, so I just fished up an ethernet cable and disabled the computer's wireless.
    It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes.
    So here are my basic observations:
    - If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time.
    - Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and the router.
    - Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again.
    - The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless.
    - All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT.
    What I've tried:
    - Making sure each computer is being assigned a distinct IP. (They are.)
    - Disabling UPnP and Stateful Packet Inspection on the router.
    - Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine.
    - Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled).
    What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just the Operating System itself) is doing is giving the router fits. However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or using the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions.
    I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be?
    Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.
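    Given the window-scaling suspicion above, one low-risk experiment (purely a sketch, not a confirmed fix for this problem) is to turn off receive-window auto-tuning on the Windows 7 box from an elevated command prompt and watch whether the other wireless clients stay up:

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp show global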

    Read the article

  • Error while installing ltsp server package in fedora 12

    - by paragjain16
    Hi, I am using Fedora 12. While I was installing the LTSP (Linux Terminal Server Project) server package, it told me that some more packages needed to be installed with it as well. While downloading the packages I got the following error:

        Local conflict between packages
        Test Transaction Errors:
        file /usr/share/man/man5/dhcp-eval.5.gz from install of dhcp-12:4.1.1-5.fc12.i686 conflicts with file from package dhclient-12:4.1.0p1-12.fc12.i686
        file /usr/share/man/man5/dhcp-options.5.gz from install of dhcp-12:4.1.1-5.fc12.i686 conflicts with file from package dhclient-12:4.1.0p1-12.fc12.i686

    I also deleted all the dhcp man page files from the man5 directory, but it still gives the same error message. Please help me with it.
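    A hedged sketch of the usual way out of this particular conflict (the package names come from the error message; "ltsp-server" is an assumption for the Fedora package name): bring dhclient up to the same 4.1.1 release as the dhcp package before retrying the LTSP install, rather than deleting files by hand.

        yum clean all
        yum update dhclient
        yum install ltsp-server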

    Read the article

  • How does DNS "get stuck"?

    - by Muhammad Mussnoon
    I recently registered a domain and got hosting from Dreamhost. But when, even after three days, the website was still not accessible, I contacted support about it. This is the response the support person gave me: "My apologies! The DNS had gotten stuck, so I went ahead and pushed that through for you. Please allow 2-3 hours for the DNS to propagate." Now I have to say that my knowledge regarding these things is virtually zero, and I couldn't understand what the support person meant, so I ran a search and it seemed that Mr. Google knew just as much as I did regarding this. Can someone tell me what "DNS getting stuck" means?
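    For anyone wanting to check this kind of thing themselves, a minimal sketch with dig (example.com stands in for the actual domain): if the registrar has published the nameservers but the host has not loaded the zone yet, the NS lookup succeeds while the trace dies at the hosting company's servers.

        dig NS example.com
        dig +trace example.com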

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of Ubuntu servers (mostly 8.04 LTS) which all mount an NFS share at /nfs. We use the NFS share primarily for two purposes:
    - symlinking config files (such as Apache vhosts)
    - reading & writing uploaded files
    This all works great, except that it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is if we could mount the NFS through some local caching layer which would make any file which had previously been read remain available even if /nfs isn't. Writes could be disabled for this period. Searching around, it looks like cachefilesd may be an option. Unfortunately, it looks like it's only packaged for Ubuntu 9.10 & 10.04. I was also looking for a FUSE-based solution which might fit the bill, but haven't found anything yet. Any suggestions would be greatly appreciated!
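    A minimal sketch of the cachefilesd route, assuming the package (or a backport of it) can be made to run on 8.04: with cachefilesd installed and started, the "fsc" mount option asks the NFS client to keep read data in the local FS-Cache. The fstab line below is purely illustrative; adjust server, export and options.

        # /etc/fstab (illustrative)
        nfsserver:/export   /nfs   nfs   ro,fsc,hard,intr   0   0

    Note this caches reads only; it does not by itself provide the "keep serving old files while /nfs is down" behaviour, which is why a FUSE-style overlay may still be worth hunting for.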

    Read the article

  • failed to enable x11 forwarding

    - by Hunt
    I am trying to enable X11 forwarding on my server, which is running FreeBSD 7.1. I have PuTTY installed on my Windows machine, in which I have enabled X11 forwarding by ticking "Enable X11 forwarding" and specifying the following parameter:

        X display location: localhost:0

    After that I ran PuTTY and checked whether X11 is enabled by typing the following command:

        echo "$DISPLAY"
    or
        echo $DISPLAY

    but I am getting the following error:

        DISPLAY: Undefined variable.

    I have even installed Xmanager, but in that I also get the following error:

        The X11 forwarding request was rejected! To solve this problem, please turn on the X11 forwarding features of the remote SSH server.

    Can anyone suggest how to get rid of this?
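    The Xmanager message points at the server side. A minimal sketch of the sshd settings involved (the xauth path is an assumption for FreeBSD and requires the xauth port to be installed) - in /etc/ssh/sshd_config:

        X11Forwarding yes
        X11DisplayOffset 10
        XAuthLocation /usr/local/bin/xauth

    Then restart sshd with /etc/rc.d/sshd restart and reconnect. $DISPLAY is only set once forwarding is actually granted for the session; "DISPLAY: Undefined variable" is simply csh reporting that the variable was never set.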

    Read the article

  • Linode - CentOS 5.5

    - by Marcus West
    Hi, I rather foolishly undertook to install a control panel on a Linode. I opted to use CentOS 5.5 (either ordinary or 64-bit) but I am like a monkey playing a reward game... I have some idea of what I am doing, but not enough.... In certain areas I am hopeless.... Do I install Webmin/Virtualmin, or ISPConfig... ISPConfig 2 or 3? I would employ someone to help, but how do I find the right person? Where can I learn the ropes on all this? There seems to be no systematic training, and even when I try to research college courses in the UK, I am none the wiser as to where I could go to learn how to run a Linux server..... Has anyone any pointers? Right now I am looking at the security aspects of the server..... rkhunter, denyhosts etc... Any advice on installing and maintaining these things? Cheers, Marcus

    Read the article

  • Cannot connect to Xen domU via VNC if X isn't installed on domU

    - by Hai Minh Nguyen
    I'm trying to build a Xen domU that can be connected to through Xen's VNC server. Below is the template (actually it's generated by OpenNebula):

        name = 'one-153'
        #O CPU_CREDITS = 256
        memory = '128'
        bootloader = "/usr/bin/pygrub"
        disk = ['tap:aio:/home/oneadmin/cloud/one/var/153/images/disk.0,xvda,w',]
        vif = ['mac=02:00:c0:a8:00:03,bridge=virbr0',]
        vfb = ['type=vnc,vnclisten=slave1,vncdisplay=1,vncpasswd=v98KXdFN']

    The problem is that I can't connect to the domU if it doesn't have X installed. In that case all I get is a blank screen. Besides, if the domU has X, the screen is still blank until the login prompt appears, while it should be like this. Some information that may be useful: The domU and the dom0 both run CentOS 5.5. If the domU has X, it can be connected to even when both X and the domU's VNC server aren't running. The VNC client is RealVNC.

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with File Server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution.
    My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these each are the depreciated nodes of a cluster, the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-geebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, Sharepoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well too bad. What exists on the main machine is what I have to deal with.
    I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM-host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 will sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has got a full copy of the data on its own iSCSI drive.
    My problem occurs when I try to apply the file server role within the failover cluster manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will get online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
    I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • hosts file ignored, how to troubleshoot?

    - by Superbest
    The hosts file on Windows computers is used to bind certain name strings to specific IP addresses to override other name resolution methods. Often, one decides to change the hosts file, and discovers that the changes refuse to take effect, or that even old entries of the hosts file are ignored thereafter. A number of "gotcha" mistakes can cause this, and it can be frustrating to figure out which one. When faced with the problem of Windows ignoring a hosts file, what is a comprehensive troubleshooting protocol that may be followed? This question has duplicates on SO, such as hosts file seems to be ignored, HOSTS file being ignored, /etc/hosts file being ignored as well as numerous discussions elsewhere. However, these tend to deal with a specific case, and once whatever mistake the OP made is found out, the discussion is over. If you don't happen to have made the same error, such a discussion isn't very useful. So I thought it would be more helpful to have a general protocol for resolving all hosts-related issues that would cover all cases.
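    A hedged sketch of the first checks that usually uncover the culprit (Windows, elevated command prompt; example.local is a placeholder for a name from your hosts file):

        ipconfig /flushdns
        type %SystemRoot%\System32\drivers\etc\hosts
        ping example.local

    Two classic gotchas worth ruling out at the same time: the file was saved as "hosts.txt" rather than "hosts", or it was saved with a Unicode BOM instead of plain ANSI/UTF-8.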

    Read the article

  • Troubleshooting Amazon EC2 reboot

    - by tgm
    We've had a server (CentOS) running in EC2 for a few months. It had been going pretty smoothly until today when we got an alarm that the server was unavailable (HTTP service couldn't be reached). So I tried SSHing into the box but that timed out as well. I logged into the EC2 console and it said the instance was running and there wasn't anything in the system log. One odd thing I noticed is that even though we have an Elastic IP attached to it (which shows in the Elastic IP management area), the instance detail is not showing that there is an EIP associated with the instance. I looked through the message log and the last thing I see around the time we got our alert was the dhclient renewed the lease. I'm guessing there may have been some sort of issue with the networking. How might I check if that was the problem, or if there were any other issues that may have caused our instance to stop responding?
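    One hedged check, using the classic ec2-api-tools of that era (the instance ID and address below are placeholders): confirm from the API which instance, if any, the Elastic IP is currently bound to, and re-associate it if the binding was lost.

        ec2-describe-addresses
        ec2-associate-address 203.0.113.10 -i i-xxxxxxxx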

    Read the article

  • Sound card / microphone impedance mismatch

    - by axk
    First of all I'm not completely sure this is impedance mismatch, but from what I found on the Internet I believe it is. It seems to be a common problem. The question is not as much about solving the problem as about why it is happening (if I'm right about the cause of the problem, of course). I had this quiet microphone problem with several built in cards and microphones and now with a Creative Audigy SE. There's a microphone boost option which introduces a lot of noise with volume increase, but even this doesn't seem to give loud enough sound in some cases. The mic on my current headphones is very quiet with Audigy SE without the boost but is very loud and low noise with an external Sound Blaster Connect. So the question is have I just been unlucky with my sound cards and microphones or is it a common problem? And if it is a common problem why is it so difficult for the vendors to standardize on the sound card / microphone impedance? Edit: the OS is Windows (XP/7), but I don't believe it is OS-specific.

    Read the article

  • How do I run a non-service program as a service on Windows 2008 Server?

    - by Lasse V. Karlsen
    I found this page that tells me how to set up Windows Live Sync as a background service on Windows 2003 Server; unfortunately the resource kit tools for 2003 that are mentioned do not work on 2008 Server. http://mswhs.freeforums.org/windows-live-sync-as-a-service-on-whs-t623.html Also, apparently there are no resource kit tools downloadable for Windows 2008 Server that I can find. Perhaps someone has a link to the relevant tools? (INSTSRV.EXE and SRVANY.EXE.) Using just plain SC.EXE doesn't work, as I assume that the program is then required to be a normal service, and not just any executable. What other options do I have? Can I use the Task Scheduler on 2008 Server to start the Windows Live Sync executable, and will that work? I need the executable to stay running even after I've logged off from the server.
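    For reference, a minimal sketch of the SRVANY approach on 2008 (this assumes srvany.exe has been copied from the 2003 Resource Kit to C:\Tools; the service name and the Live Sync program path are placeholders):

        sc create LiveSync binPath= "C:\Tools\srvany.exe" start= auto
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LiveSync\Parameters" /v Application /t REG_SZ /d "C:\Path\To\WindowsLiveSync.exe"

    SRVANY wraps the ordinary executable so the Service Control Manager is satisfied, which is the part plain SC.EXE cannot do on its own.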

    Read the article

  • SFTP jail & Keeping file ownership the same / File owner per folder

    - by Dragonshadow
    I want to set up a jailed SFTP account for a subfolder of another user's home folder, but I want the owner of everything in that subfolder to stay the same, including new files and folders uploaded and created by the SFTP user, while still allowing access to the files and folders of that subfolder as if the SFTP user were the parent user.

        rawny
        bawb-sftp
        /home/rawny       <- rawny owns this
        /home/rawny/sftp  <- rawny owns this too, but bawb-sftp can upload to it, edit files, etc.

    bawb-sftp uploads a file:

        /home/rawny/sftp/lol.txt

    rawny should still own the file, as if he had made it in the first place, even though bawb-sftp was the one that uploaded it. Basically I guess I'm asking for an SFTP jail that acts as a highly limited passthrough/puppet for another user?
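    One hedged way to get the ownership behaviour (bindfs is an assumption, not something from the post): remount the subfolder through bindfs so that whatever bawb-sftp writes appears owned by rawny, and point the SFTP jail at the bindfs mount point rather than at the real directory.

        bindfs --force-user=rawny --force-group=rawny --create-for-user=rawny \
               /home/rawny/sftp /srv/sftp-jail/sftp

    The jail itself would still be handled separately (e.g. an sshd ChrootDirectory/internal-sftp Match block for bawb-sftp); bindfs only solves the "files must end up owned by rawny" half.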

    Read the article

  • NLB NIC w/ Static IP permanently "Acquiring Network Address" after adding more IP's to the cluster

    - by I.T. Support
    We have a 2-node NLB cluster. I recently added more IPs to the cluster range, after which point the NLB NICs on both of my nodes ended up in a permanent "acquiring network address" state. The cluster appears to be functioning despite this hang in acquiring an address. We've also rebooted the machines, and these NICs remain in the "acquiring network address" state even after reboot, which I find troubling. Questions: Does anyone know where I can find documentation on how NLB binds to a NIC? (Are there DLLs involved, drivers, etc.?) Does anyone know of any programs that have known incompatibility issues with NLB? We have an application server installed on these NLB nodes (AOL Server) that interacts with the NICs as well, and I suspect that it might be causing this issue. Before I go directly to Microsoft, I want to gather as much information as possible about the issue. Any help / guidance would be much appreciated...

    Read the article

  • Dolby Digital Live (DDL) on Asus Rampage II Gene (Creative X-Fi Extreme)

    - by kevyn
    Hi there, I have an Asus Rampage II Gene motherboard which has X-Fi extreme built in. I can get it to work with Windows 7 ok using the Creative drivers, however when I try and install the DDL/DTS add on pack from Creative I get the error message: "There are no supported audio device available. You need to close the application. Click OK to close the application now" I don't understand it because I have the Creative software installed ok and supporting the sound without any problems. In Device manager the audio device comes up as 'High definition audio device' and uses driver: 6.1.7600.16385 from Microsoft. I tried using the Creative drivers which show up as 'soundmax HD audio' however these do not allow any of the Creative products to run properly. Please can anyone offer any help? Or even just confirm that DDL can work with my onboard sound? Windows 7 Ultimate 64 bit 6GB DDR3 XFX GS8800 384mb Asus Rampage II Gene Intel i7 920 (2.66)

    Read the article

  • How to start using Wget?

    - by brilliant
    Please forgive me for asking this question. Usually I would try to learn things myself first before bothering others, but my situation is urgent - if I don't act now and don't download all my family pictures from this website, it will be closed in about two weeks from now and I will lose all of them. So, please help me. I have already asked a question about how to automatically download all my pictures from my website here, and I was advised to use Wget. But I don't know the very basics of how to use it. The basics are not explained on that page. In the question that I asked I was even given this line:

        wget -r -A .jpg,.gif,.png

    but I don't know what I am supposed to do with it in order to get Wget going and download all the pictures from my website. Please, help me somebody!
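    For reference, a slightly fuller sketch of that command (example.com stands in for the actual site; run it from the directory where the pictures should be saved):

        wget -r -l inf -np -A .jpg,.gif,.png http://www.example.com/

    Here -r follows links recursively, -l inf removes the recursion depth limit, -np keeps wget from wandering above the starting directory, and -A keeps only files with the listed image extensions.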

    Read the article

  • ISA forms authentication problems after installing moss sp2

    - by user22215
    Guys, I have a problem that's flared back up after installing WSS and MOSS Service Pack 2. The problem centers around users being prompted to enter credentials when interacting with Office documents. This problem came up before and I was able to go into ISA Server and configure a persistent cookie on the web listener. As we all know, when configuring a cookie you have two options: use only on private computers, or use on all computers. If I select "use on all computers" I can't even log in to SharePoint from the forms page; however, if I select "use only on private computers" I'm able to log in and I also don't get prompted when opening Office documents. So I would like to ask: has something changed with SharePoint Service Pack 2? That's the only change that's been made to my environment.

    Read the article

  • Enable SMB file sharing on OS X - "Incorrect Password"

    - by Tim Robinson
    I have a Mac running Snow Leopard connected to an Active Directory domain. I can share folders on the Mac and view files from Windows without problems. When I try to enable my Mac account for write access through System Preferences, I'm prompted for my account's password. Even though I'm entering the right one, I get an "Incorrect Password" response. The same process works fine for the local Mac administrator account; it's the Active Directory account I'm having problems with. I followed the advice on this page on apple.com without success (I used the Mac to reset my domain password, and re-created my login keychain):

        "If you want to use a user account that existed before you installed Mac OS X 10.3 (Panther), you may need to reset the password for the account using Accounts preferences."

    Can anyone suggest what might be wrong? Until I fix this I can't write to my Mac file share from Windows.

    Read the article

  • Recommended SpamAssassin update channels?

    - by Timo Geusch
    I'm currently using SpamAssassin on a couple of mail servers that I look after. SpamAssassin runs in the context of amavisd-new on those servers and with the usual bunch of plugins (FuzzyOCR, DCC, pyzor, razor). Currently the servers are getting their rule updates from the default SpamAssassin update channel (updates.spamassassin.org). Overall the setup seems to be reasonably effective but some types of spam seem to wander right through it even though I've made repeated attempts at training spamassassin. My guesstimate is that about 85%-90% of the spam that gets through policyd-weight makes it through the filters and it's been getting a lot worse recently as spammers are getting better at working their way through filters. Can someone recommend additional sources of filters to make SpamAssassin more effective? So far I've found OpenProtect's update channel but are there others worth looking at?
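    For what it's worth, extra channels are passed to sa-update as additional --channel arguments (the OpenProtect channel name below is the commonly published one - verify it and import the channel's GPG key before trusting its rules):

        sa-update --channel updates.spamassassin.org --channel saupdates.openprotect.com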

    Read the article

  • Can't start httpd 2.4.9 with self-signed SSL certificate

    - by Smollet
    I cannot start httpd 2.4.9 (tried 2.4.x too) on CentOS 6.5 with the simplest SSL config possible. The openssl version installed on the machine is OpenSSL 1.0.1e-fips 11 Feb 2013 (I've upgraded it using 'yum update' to the latest patched version as well). I have compiled and installed httpd 2.4.9 using the following commands:

        ./configure --enable-ssl --with-ssl=/usr/local/ssl/ --enable-proxy=shared --enable-proxy_wstunnel=shared --with-apr=apr-1.5.1/ --with-apr-util=apr-util-1.5.3/
        make
        make install

    Now I'm generating the default self-signed certificate as described in the CentOS HowTo:

        openssl genrsa -out ca.key 2048
        openssl req -new -key ca.key -out ca.csr
        openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt
        cp ca.crt /etc/pki/tls/certs
        cp ca.key /etc/pki/tls/private/ca.key
        cp ca.csr /etc/pki/tls/private/ca.csr

    Here is my httpd-ssl.conf file:

        Listen 443
        SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
        SSLPassPhraseDialog builtin
        SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)"
        SSLSessionCacheTimeout 300
        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory "/usr/local/apache2/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch "MSIE [2-5]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            CustomLog "/usr/local/apache2/logs/ssl_request_log" \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
        </VirtualHost>

    When I start httpd using bin/apachectl -k start I get the following errors in the error_log:

        [Wed Jun 04 00:29:27.995654 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01887: Init: Initializing (virtual) servers for SSL
        [Wed Jun 04 00:29:27.995726 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:29:27.995863 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:29:27.996111 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT]
        [Wed Jun 04 00:29:27.996122 2014] [ssl:info] [pid 24021:tid 139640404293376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key
        [Wed Jun 04 00:29:27.996209 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:29:27.996280 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:29:27.996295 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443
        [Wed Jun 04 00:29:27.996303 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: DH PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
        [Wed Jun 04 00:29:27.996308 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: EC PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
        [Wed Jun 04 00:29:27.996318 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned
        [Wed Jun 04 00:29:27.996321 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed

    I then try to generate the missing DH PARAMETERS and EC PARAMETERS:

        openssl dhparam -outform PEM -out dhparam.pem 2048
        openssl ecparam -out ec_param.pem -name prime256v1
        cat dhparam.pem ec_param.pem >> /etc/pki/tls/certs/ca.crt

    And it mitigates that error, but the next one comes out:

        [Wed Jun 04 00:34:05.021438 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01887: Init: Initializing (virtual) servers for SSL
        [Wed Jun 04 00:34:05.021487 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:34:05.021874 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:34:05.022050 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT]
        [Wed Jun 04 00:34:05.022066 2014] [ssl:info] [pid 24089:tid 140719371077376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key
        [Wed Jun 04 00:34:05.022285 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1016): AH02540: Custom DH parameters (2048 bits) for 192.168.9.128:443 loaded from /etc/pki/tls/certs/ca.crt
        [Wed Jun 04 00:34:05.022389 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1030): AH02541: ECDH curve prime256v1 for 192.168.9.128:443 specified in /etc/pki/tls/certs/ca.crt
        [Wed Jun 04 00:34:05.022397 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol
        [Wed Jun 04 00:34:05.022464 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling
        [Wed Jun 04 00:34:05.022478 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443
        [Wed Jun 04 00:34:05.022488 2014] [ssl:emerg] [pid 24089:tid 140719371077376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned
        [Wed Jun 04 00:34:05.022491 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed

    I have tried to generate the simple certificate/key pair exactly as described in the httpd docs. Unfortunately, I still get the exact same errors as above. I've seen a bug report with a similar issue: https://issues.apache.org/bugzilla/show_bug.cgi?id=56410 But the openssl version I have is reported as working there. I've also tried to apply the patch from the report, as well as build the latest 2.4.x branch, with no success; I get the same errors as above.
    I have also tried to create a short chain of certificates and set the root CA certificate using the SSLCertificateChainFile directive. That didn't help either; I get the exact same errors as above. I'm not interested in setting up hardened security, etc. The only thing I need is to start httpd with the simplest SSL config possible so that I can continue testing the proxy config for mod_proxy_wstunnel. Has anybody encountered and solved this issue? Is my sequence for creating a self-signed certificate incorrect? I'd appreciate any help very much!

    Read the article

  • Unable to see External HDD in Windows Explorer

    - by Jamie Keeling
    I have bought a 320GB external HDD which I want to use with my PlayStation 3. I know it will only work with a FAT32 file system, so I formatted it using some free HP software; unbeknownst to me, that only works up to 32GB. After seeing this I panicked, downloaded Partition Wizard Home Edition and deleted the partition. As I was about to create a new partition to put it back to NTFS (I'd just wanted to be able to use it in the first place at this point), I accidentally knocked the drive's cable out of my computer, and after replacing it the external HDD is no longer recognised in My Computer. Disk Management asks me to initialise the disc using MBR but it fails, saying "Copy protected". Even the partitioning software I previously mentioned can't do anything about it; all it says is "Bad Condition" and I can't perform any operations on it. Would anybody be able to guide me in getting this sorted? I'm terrified I've wasted a perfectly good 320GB HDD.
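    A hedged sketch of recovering the drive with diskpart (Windows, run as administrator; the disk number below is a placeholder - pick the external drive from "list disk", because "clean" wipes whichever disk is selected):

        diskpart
        list disk
        select disk 1
        attributes disk clear readonly
        clean
        create partition primary
        format fs=ntfs quick

    Clearing the read-only attribute first addresses the "Copy protected" symptom; "clean" then removes the half-deleted partition table so a fresh partition can be created.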

    Read the article

  • Can I auto-forward Gmail to multiple addresses?

    - by Yuval
    Hi, I have set up a Gmail account with the intention of having it function as a mail hub, meaning it should forward every mail item to all of its contact list, in this case my friends and me. I created a simple filter for all incoming items (from:*), but the 'Forward it to' field will only allow me a single email address, and not even a contact group. Is there a way around this? And why is such a simple feature not available in Gmail? I thought they were out of beta...

    Read the article
