Search Results

Search found 21778 results on 872 pages for 'stewart may'.


  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi all, I have a project in mind and I'd love to hear some ideas on open source solutions running on COTS hardware. I have a few 24- and/or 48-port managed layer 2 switches with customers potentially on each port (though it's usually about 20-30). Right now the switch runs a bridged network and we backhaul the traffic to our core, to a centralized DHCP server. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know).

    My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range handed out by a DHCP server on the appliance. A caching DNS server on the appliance would be a plus since we backhaul everything to the core. I'd like to run FreeBSD but I'm open. To limit the broadcast traffic that's visible, I was thinking of putting each port on the switch in a different VLAN and having the switch trunk them to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should work. We have the parts to build these systems.

    So, does this make sense? Are there any other solutions out there that we don't have to spend money on, where we can use our parts to create something? Are there any good distros that could do this already (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
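    For what it's worth, the sort of thing I have in mind on the FreeBSD box is roughly the following -- interface names and addressing are just placeholders, so treat it as a sketch rather than a working config:

      # em0 = public NIC, em1 = trunk port to the switch (names/addresses are placeholders)
      ifconfig vlan100 create
      ifconfig vlan100 vlan 100 vlandev em1
      ifconfig vlan100 inet 192.168.100.1 netmask 255.255.255.0
      # ...one vlanNNN interface per customer port, with dhcpd listening on each of them...
      # and in pf.conf, NAT everything out the public side:
      #   nat on em0 from 192.168.0.0/16 to any -> (em0)

    The idea being that each customer sits in their own small subnet behind their own VLAN interface, so they never see each other's broadcast traffic.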

  • Apache times out after 2 minutes, when Apache TimeOut is 1200

    - by Robert Gowland
    We have a setup where the browser makes an HTTP request to Box A, which in turn makes an HTTP request to Box B. What we're encountering is that Box A waits for Box B to respond for two minutes, then the user sees:

      The page cannot be displayed
      Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
      Try the following:
        * Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
        * Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
        * Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.
      Technical Information (for support personnel)
        * Error Code: 404 Not Found. The requested item could not be located. (12028)

    Watching the logs on Box B, we see that it takes 5 minutes to do the work requested. The problem is that the Apache timeouts on both boxes are set to 1200 (20m), not 120 (2m). Any ideas where to look?
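    In case it matters, this is roughly how I've been hunting for any other timeout directive that could be in play (paths assume a Debian-style layout, so adjust as needed):

      grep -Rni 'timeout' /etc/apache2/      # any TimeOut/ProxyTimeout/KeepAliveTimeout overrides lurking?
      apache2ctl -S                          # confirm which vhosts/config files are actually loaded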

  • Postfix not sending email after upgrading to Ubuntu 12.04

    - by Luke
    After upgrading a server from Ubuntu 10.04 to 12.04, postfix is no longer sending email through sendgrid.com. I followed this guide about 6 months ago and everything had been working perfectly until the upgrade. Now it doesn't seem to be authenticating with sendgrid. This is the error I get in my syslog when I try to send an email:

      May 22 10:19:55 server postfix/smtp[3844]: 983B11C5DA: to=<to address>, relay=smtp.sendgrid.net[174.36.32.204]:587, delay=0.05, delays=0.01/0/0.04/0, dsn=5.0.0, status=bounced (host smtp.sendgrid.net[174.36.32.204] said: 550 Cannot receive from specified address <sendgrid username>: Unauthenticated senders not allowed (in reply to MAIL FROM command))

    This is from postconf -n:

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      append_dot_mydomain = no
      biff = no
      broken_sasl_auth_clients = no
      config_directory = /etc/postfix
      header_size_limit = 4096000
      inet_interfaces = loopback-only
      mailbox_size_limit = 0
      mydestination = localhost, mylinode.members.linode.com
      myhostname = hostname
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      readme_directory = no
      recipient_delimiter = +
      relayhost = [smtp.sendgrid.net]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_mechanism_filter = login
      smtp_sasl_password_maps = hash:/etc/postfix/sasl/sendgrid
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes

    Any help would be greatly appreciated. I would be happy to post any other logs or other relevant information.
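    For what it's worth, this is what I was planning to check next -- I'm guessing at the SASL side since the bounce complains about unauthenticated senders, and the package name below is an assumption on my part:

      dpkg -l libsasl2-modules                          # I believe the LOGIN/PLAIN SASL plugins live here
      sudo postmap hash:/etc/postfix/sasl/sendgrid      # rebuild the password map postconf points at
      sudo postconf -e 'smtp_sasl_mechanism_filter = plain login'
      sudo service postfix restart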

  • How do I give a user permission to view scheduled task history on Server 2008?

    - by pplrppl
    I've set up a scheduled task on Server 2008 and want to run it as a user other than the local administrator. So I chose a domain account created specifically for this task, and once I'd closed the scheduled task and entered a valid password, I wanted to run it and look at the History tab for this task. On the History tab I see:

      The user account does not have permission to view task history on this computer.

    What permission must I grant to allow this user to view history, and/or how can I view the history as a local admin/domain admin instead of the user the job will run under?

    Steps to hopefully reproduce: I'm starting from the "Server Manager" - Configuration - Task Scheduler - Task Scheduler Library. In the top middle pane I have tasks that have been running for several months as the local administrator. In the process of troubleshooting another issue I changed the task to run as Domain\ABCuser. Later in the process of troubleshooting I tried unchecking "run with highest privileges". I have since changed the job back to SERVERNAME\Administrator, but the History tab still showed the permissions message. I may have had multiple Server Manager windows open. After closing the Server Manager and making sure no other management consoles were open, I was able to reopen the Server Manager and see the History tab without error.

    At this point the task works properly, but should I ever need to run a task as a task-specific account, I'd like to know how to make the history viewable. It may be something as simple as closing all Server Manager windows to allow cached permissions to be refreshed the next time you open the Manager, but at this point I don't know exactly what the solution is.

  • Redundant Microsoft server solution for small company

    - by MadBoy
    I'm planning to change one Microsoft SBS 2003 server with SharePoint, Exchange and a SQL database into something that will provide me with some redundancy and won't be a single point of failure. I was thinking of buying 2x exactly the same physical servers and putting 2 virtualized servers on Hyper-V or VMware on each. Then I would put SharePoint, Exchange and SQL on the 1st physical server (shared onto the 2x VMs). I would like the 2nd physical server to be an exact duplicate of the first one, so that when the 1st server goes down (for a reboot or hardware failure), the 2nd takes care of everything and users don't even see anything change (in terms of all their emails and SharePoint stuff being available). My questions are:

      1. Will I have to pay for licenses for both servers, even though only one instance of SharePoint, Exchange and SQL will be used at the same time?
      2. What are the proposed solutions to do that? Any additional hardware I would need, any complicated software configuration to be expected to configure such redundancy, so that when one physical server goes down the 2nd one takes care of the rest?
      3. What problems should I expect?

    This solution is for 60 people. Later on it may or may not expand.

  • What service do you use for music on hold?

    - by Russ Warren
    This may not be a sysadmin question for some, but it is definitely a hurdle I have to jump as the sysadmin for my company. We recently rolled out a company-wide VoIP system (Switchvox, to be exact) that came preloaded with some royalty-free music on hold. Our customers have been complaining that the music on hold sounds like "funeral music." This may be the case (although I wouldn't want it played at my funeral), but it is all we have, and we aren't willing to be sued over using music that isn't properly licensed. So, that brings me to the question asked in the title -- what and/or how do you provide decent music on hold? I'm assuming many people here use a PBX that allows customized music, so this has to apply to many of you. We've been looking at some sites that allow you to download royalty-free music for a one-time fee, but the music seems... lame. Something like a one-year subscription from ibaudio.com seems to be the best bet so far. Have you been able to discover something a little more mainstream for a decent licensing fee? Thank you.

    EDIT: Our PBX allows the playback of MP3 and OGG files, but does not allow streaming of a live audio source, Internet-based or otherwise. It also does not allow the use of a "line-in" source such as a CD player or radio. Don't let this stop you from sharing your setup, though. I'm interested in hearing what everyone uses!

  • Help Email Account Management among multiple users

    - by CogitoErgoSum
    I preface this by saying this may belong in IT Security -- not too sure, feel free to move it. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee. This employee, however, had the password for this account, as we have 20-30 people using it at any given point to manage customer emails etc. Thinking on this, I feel there must be a better way to manage access.

    With Google you can associate up to 10 email accounts to another; the problem is we have more like 20-30 people going. We were evaluating tools such as Salesforce and Assistly, where people have their own login credentials and the system contains the appropriate SMTP information for the [email protected] email address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password info there, so they could only log in from their physical workstation, but we may have situations where we'd like employees to work remotely.

    Does anyone have experience with this sort of system, where ~20-30 people are responding from one email box, and how to manage security and access?

  • Setting up SSL for phpMyAdmin

    - by Ubuntu User
    I would like to run phpMyAdmin using my SSL certificate. I read that if I placed the following within the file /etc/phpmyadmin/config.inc.php, it would force it to use SSL, and now it does:

      $cfg['ForceSSL'] = true;

    However, my issue is that when I did this, I started getting an error stating "cannot connect to server." I did a port scan and my port 443 is closed, for one, but I am connecting via https:// to my secure web-based email admin panel, which tells me this may not be the issue.

    Second, I have an SSL certificate I purchased, but I am not sure how to apply this cert. mydomain.com.crt is sitting on my desktop; how should I be utilizing it? I remember creating a self-signed cert for my web email access. Do I have to do this for phpMyAdmin as well? At least that way, since I am the only one who will ever access the DB, it will never expire.

    Also, phpMyAdmin used to come up as http://mydomain/phpmyadmin/ and of course I am now trying to get to https://mydomain.com/phpmyadmin/. However, I do not have any pages on my website that require https:// currently. In the future I may add this, but for now I only want to access phpMyAdmin via SSL. I can use my own self-signed cert -- but if this causes problems with future ecommerce apps under mydomain.com, I would rather use the SSL cert I already purchased. Thank you!
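    For reference, my rough plan for wiring the purchased cert into Apache looks like this -- the default-ssl site and the file locations are assumptions on my part, not something I've tested yet:

      sudo a2enmod ssl
      sudo a2ensite default-ssl
      # then, in the SSL vhost, point at the purchased cert instead of the snakeoil one:
      #   SSLCertificateFile    /etc/ssl/certs/mydomain.com.crt
      #   SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
      sudo service apache2 restart

    I'm assuming that would also explain the closed port 443 in my scan: nothing is listening there until an SSL vhost is enabled.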

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert, but curious to know more about it. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI image (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

      user$ ssh -vvv -i key.pem [email protected]
      OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: /etc/ssh_config line 20: Applying options for *
      debug1: /etc/ssh_config line 102: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
      debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
      ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue, I enabled the SSH port, tried different usernames other than ubuntu (like ec2-user and root), and initially set an inbound ssh rule in the security group allowing only my IP address; when that did not work, I changed it to allow any IP to connect. But those actions did not fix the problem.

    Here are my guesses as to what I am missing in getting the EC2 instance connection to work: my /etc/ssh_config file may be preventing the connection from taking place; I may have missed an important networking detail when setting up the VPC; or it's because I do not have a public IP address specified for the instance -- I am connecting through the private IP address.

    My question for the community: am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or is there some other method?
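    To be concrete about the public IP idea, this is the sort of thing I was considering trying from the AWS CLI -- the IDs are placeholders and I'm assuming the CLI is already configured, so treat it as a sketch:

      aws ec2 allocate-address --domain vpc
      aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
      ssh -i key.pem ubuntu@<new-public-ip>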

  • Google Chrome not loading web pages correctly unless multiple refreshes

    - by Brandon Wilson
    Web pages in Google Chrome do not load correctly from time to time. I can't reproduce it; it just happens. Sometimes it happens when I load the browser, other times it happens when I am just browsing. Just now I went to five different web sites and 3 out of 5 of them did not load correctly. I have attached a photo of how Super User loaded the first time I loaded it; if I refresh, it will load correctly. Facebook is bad like this: sometimes Facebook will load correctly, but some of their back-end scripting may not load, so the page may not refresh automatically.

    Not sure what is going on. I have tried other browsers (Firefox and Internet Explorer) and they seem to be working correctly. Chrome seems to be acting up only on this computer. All my computers are running Windows 8, and I have removed Chrome completely off this computer and re-installed. I even disabled all extensions and cleared all the caches. I even tried running Chrome without being logged in. Not sure what else to do at this point.

    An example of superuser.com not loading correctly: when I refresh, the problem goes away until it happens again. Sometimes it takes two or three refreshes for it to load correctly.

  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

      Backup Status
      Operation: Backup
      Active backup destination: 4mm DDS
      Media name: "Media created 04/02/2011 at 21:56"
      Error: The device reported an error on a request to read data from media.
      Error reported: Invalid command.
      There may be a hardware or media problem.
      Please check the system event log for relevant failures.
      Error: C: is not a valid drive, or you do not have access.
      The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (I've tried a couple of tapes). I carry out the following:

      1. Open NTBackup
      2. Select the Backup tab
      3. Tick the box next to C:
      4. Ensure Destination is 4mm DDS
      5. Media is set to New
      6. Press Start Backup
      7. Choose "Replace the data on the media" and press Start Backup
      8. NTBackup tries to mount the media
      9. Error message shows: "The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures."

    On checking the log I find the following:

      Event Type: Information
      Event Source: NTBackup
      Event Category: None
      Event ID: 8018
      Date: 04/02/2011
      Time: 22:02:02
      User: N/A
      Computer: SERVER
      Description: Begin Operation
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

      Event Type: Information
      Event Source: NTBackup
      Event Category: None
      Event ID: 8019
      Date: 04/02/2011
      Time: 22:02:59
      User: N/A
      Computer: SERVER
      Description: End Operation: The operation was successfully completed. Consult the backup report for more details.
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

  • How to remove all CouchDB versions in Ubuntu 10.04 (server)? (after multiple installs)

    - by DjangoRocks
    Hi all, I have done multiple installs of CouchDB, using

      sudo aptitude install couchdb
      sudo apt-get install couchdb

    and, more recently, based on the instructions found at http://wiki.apache.org/couchdb/Installing_on_Ubuntu. May I know how to uninstall or remove all the above installations? Best regards.

    +++++++++++++++++++ UPDATE ++++++++++++++++++++++++

    I've tried running the following commands:

      apt-get remove couchdb
      apt-get purge couchdb

    but received the following errors:

      (Reading database ... 39814 files and directories currently installed.)
      Removing couchdb ...
      invoke-rc.d: initscript couchdb, action "stop" failed.
      dpkg: error processing couchdb (--remove):
       subprocess installed pre-removal script returned error exit status 1
      invoke-rc.d: initscript couchdb, action "start" failed.
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       couchdb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    May I know how to fix this? On issuing the command

      dpkg -l | grep couchdb

    I received the following response:

      rF  couchdb      0.10.0-1ubuntu2   RESTful document oriented database, system D
      iF  couchdb-bin  0.10.0-1ubuntu2   RESTful document oriented database, programs

    How do I uninstall CouchDB? I think there's some file corruption?
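    Is the right fix something like neutering the maintainer scripts so dpkg can finish the removal? I've seen this suggested for similar errors, but I'm not certain it's safe here -- the paths below are the standard dpkg ones:

      sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/couchdb.prerm'
      sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/couchdb.postinst'
      sudo apt-get purge couchdb couchdb-bin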

  • How to transfer files via infrared on Linux?

    - by arielnmz
    I know this is a way-too-old technology, but I've got some files inside a very old cell phone that I need to transfer to a very old computer. So far my infrared USB device works well; it's detected by the machine (lsusb output):

      Bus 002 Device 002: ID 0df7:0620 Mobile Action Technology, Inc. MA-620 Infrared Adapter

    I've tried to send the file over MMS and even email (the phone lacks Bluetooth, not to mention USB), but this cell phone's firmware doesn't let me attach the files. The file was originally transferred via IrDA, and the phone only has internal memory (a whole 2 million bytes! whoa!).

    I found a package called irda-utils, but it seems that there are only two executables: irdaping and irdadump. I think the dump utility might do the job (as far as I can see it's kind of a version of tcpdump but for IrDA), but I don't even know how to process the received frames. Could this question be what I'm looking for?

    EDIT: While reading through the Linux Infrared HOWTO I found the OpenObex project, which may be what I'm looking for...

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines?

    I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good...

    However: if one machine on my network gets rooted -- say, unimportant.domain.com -- what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine!

    One solution I can think of is to avoid autosigning and put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, then have a Vagrant ssh job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine -- which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant & Puppet SSL certificates for development or testing clones of production machines?
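    One alternative I've been sketching is to skip autosigning for the Vagrant boxes entirely and pre-generate their certs on the master, copying them into the VM out-of-band rather than through the shared folder / git repo -- roughly like this, with the ssldir paths written from memory:

      # on the puppet master
      puppet cert generate sql.vagrant.domain.com
      # then copy the pair to the box by hand (not via the Vagrant shared directory)
      scp /var/lib/puppet/ssl/certs/sql.vagrant.domain.com.pem \
          /var/lib/puppet/ssl/private_keys/sql.vagrant.domain.com.pem \
          vagrant@sql.vagrant.domain.com:/tmp/

    That would avoid keeping every production cert in the Vagrant repo, but it obviously doesn't address the autosign regex problem on its own.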

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's OK to revive and hijack an older question.

    We have workstations that authenticate users against an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another, and I don't know how to prevent it:

      1. UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf and figure out the LDAP server's address, and ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address; I don't think that is usually a secret, and obscurity is not hard to overcome.)
      2. UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation, plugs the network cable from the workstation into the laptop, boots, and logs in as local root (it's their laptop, so they have local root).
      3. As root, they su into any other user on LDAP that may or may not have more permissions (without needing a password!) -- but at the very least, they can impersonate that user without any problem.

    The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount, for example? (The laptop even has the same IP address.) I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this at the LDAP server level, right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!

  • Apache2 process stuck at 100% cpu, CLOSE_WAIT socket lingering

    - by mmazing
    I've troubleshot the heck out of this today, and I can't seem to find any information on how to determine what is happening exactly. Basically, on my development server, another developer is causing CLOSE_WAIT connections that eat up one or more apache2 processes for several hours if I don't restart apache2. strace on any of the processes yields no information, only that it was able to attach. mod_proxy is not enabled. KeepAlive is on, KeepAliveTimeout is 15 seconds, MaxKeepAliveRequests is 100.

    From what I've been reading, this may or may not be an Apache issue at all; that's just how CLOSE_WAIT works (the server is waiting for a FIN packet to close the connection). I just can't believe that a server would be crippled so easily by not receiving a packet from a remote host telling it to close the connection -- especially without any intervention for well over an hour. Any tips? I'm about to pull my hair out.

    Edit: Also, there are no unusual entries in any Apache log files.

    Edit 2: lsof -i shows only a single CLOSE_WAIT per hanging process. (That's what has been bothering me about this, as most other discussions talk about many CLOSE_WAIT connections, while I only have one per process.) The nature of the code that is running (PHP) doesn't really lend itself to closing open connections and whatnot. I can run the same code that he is executing, with the same session data, and not end up with a hanging process.
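    For what it's worth, this is what I'm planning to capture the next time a process wedges, assuming mod_status can be turned on for this box:

      sudo a2enmod status && sudo service apache2 reload   # plus ExtendedStatus On, if it isn't already set
      curl http://localhost/server-status                  # which URL/client is the stuck worker serving?
      sudo ss -tnp state close-wait                        # or: netstat -tnp | grep CLOSE_WAIT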

  • Apache2 random 403 error & info server busy logs on Ubuntu

    - by risyasin
    Hello, I have a strange situation with apache2: meaningless, random 403 errors. Any page (HTML, PHP, etc.) normally works, but if I request repeatedly by pressing the refresh button of the browser, it interrupts and sends a 403 randomly. After a few seconds it works again. In the error log I see "client denied by server configuration". The main error log of Apache says:

      [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers), spawning 8 children, there are 99 idle, and 137 total children

    My current values:

      <IfModule mpm_prefork_module>
          StartServers          120
          MinSpareServers       100
          MaxSpareServers       200
          MaxClients            256
          MaxRequestsPerChild   500
      </IfModule>

    I've increased them 10 by 10, starting from 20, but nothing is solved. I've disabled KeepAlive. What may cause this problem? Thank you in advance.

    It's a fresh install: Ubuntu Server x86 8.04.4, Virtualmin from its website (not from the Debian repositories), Linux 2.6.24-27-server #1 SMP i686, Apache 2.2.8 (prefork MPM), Virtualmin version 3.78.gpl GPL, PHP Version 5.2.4-2ubuntu5.10.

    Loaded modules:

      core_module (static), log_config_module (static), logio_module (static), mpm_prefork_module (static), http_module (static), so_module (static), actions_module (shared), alias_module (shared), auth_basic_module (shared), auth_digest_module (shared), authn_file_module (shared), authz_default_module (shared), authz_groupfile_module (shared), authz_host_module (shared), authz_user_module (shared), autoindex_module (shared), cache_module (shared), cgi_module (shared), deflate_module (shared), dir_module (shared), env_module (shared), expires_module (shared), fcgid_module (shared), file_cache_module (shared), headers_module (shared), mime_module (shared), mime_magic_module (shared), evasive20_module (shared), negotiation_module (shared), php5_module (shared), rewrite_module (shared), setenvif_module (shared), ssl_module (shared), status_module (shared)

      Syntax OK
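    One hunch I want to rule out: evasive20_module is loaded, and mod_evasive answers rapid repeated requests with 403s, which sounds a lot like my refresh test. This is how I plan to check (directive names from memory):

      grep -Rni 'evasive\|DOSPageCount\|DOSSiteCount' /etc/apache2/
      # if it's configured, I'll comment out its LoadModule/IfModule block and retest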

  • update from debian lenny to squeeze

    - by Daniel
    I'm trying to update from Debian lenny to squeeze on my 64-bit root server and did the following so far:

      1. modifying sources.list
      2. apt-get update
      3. apt-get upgrade
      4. apt-get install linux-image-2.6-amd64

    The last step leads to the following error output:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested an impossible situation
      or if you are using the unstable distribution that some required packages have not yet been created
      or been moved out of Incoming.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
        linux-image-2.6-amd64: Depends: linux-image-2.6.32-5-amd64 but it is not going to be installed
      E: Broken packages

    UPDATE: here's my sources.list

      deb ftp://mirror.hetzner.de/debian/packages squeeze main contrib non-free
      deb ftp://mirror.hetzner.de/debian/security squeeze/updates main contrib non-free
      deb http://ftp.de.debian.org/debian squeeze main non-free contrib
      deb-src http://ftp.de.debian.org/debian squeeze main non-free contrib
      deb http://security.debian.org/ squeeze/updates main contrib non-free
      deb-src http://security.debian.org/ squeeze/updates main contrib non-free

    How can I fix that safely? thx
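    Before touching anything else, I was going to dig into why that dependency is unsatisfiable:

      apt-get update
      apt-cache policy linux-image-2.6.32-5-amd64
      apt-get -o Debug::pkgProblemResolver=yes install linux-image-2.6-amd64

    I also seem to remember the squeeze release notes wanting udev upgraded together with the new kernel, so apt-get install linux-image-2.6-amd64 udev may be worth a try -- but I'm not sure of that.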

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error ever since I upgraded Ubuntu from 11.04 to 11.10:

      Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
      Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
      Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE
      Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE
      Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key
      Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
      Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed
      Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting
      Dec 14 19:12:46 oogle ovpn-server[32201]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011
      Dec 14 19:12:46 oogle ovpn-server[32201]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
      Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open openvpn-status.log for WRITE
      Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open ipp.txt for READ/WRITE
      Dec 14 19:12:46 oogle ovpn-server[32201]: Diffie-Hellman initialized with 1024 bit key
      Dec 14 19:12:46 oogle ovpn-server[32201]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
      Dec 14 19:12:46 oogle ovpn-server[32201]: Error: private key password verification failed
      Dec 14 19:12:46 oogle ovpn-server[32201]: Exiting
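    For the record, this is what I intend to check, since the error is a plain fopen permission failure on server.key (paths assume the stock /etc/openvpn layout):

      ls -l /etc/openvpn /etc/openvpn/server.key        # ownership/permissions on the key and the directory
      grep -E '^(user|group)' /etc/openvpn/*.conf       # does the daemon drop privileges in a way that matters here?
      dmesg | grep -i apparmor                          # just in case an AppArmor denial is behind the permission error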

  • Thunderbird alerts when expected email does not arrive

    - by user871199
    I am on Ubuntu 12.04, using Thunderbird as my email client. Both are up to date in terms of updates. I have a bunch of nightly jobs that do the work and send a status mail. It gets tedious if you keep getting the same or similar mails every day, so I ended up writing a mail filter rule which causes the emails to end up in their respective folders automatically. If things are going OK, I really don't need to read the emails. Failure emails are sent to a different alias -- if the job runs. We recently discovered that one of the jobs had not run for a few days because someone accidentally disabled it.

    In order to avoid such problems in the future, I would like to set up Thunderbird so that if I don't get an email from a given address within a given duration, it alerts me. My dream solution would let me set up a frequency -- some jobs do run every 4 hours. Is this possible? Can I set up Thunderbird (preferred) or another email client to remind me when an expected email does not show up?

    Based on comments and answers I received, here are the reasons why I would like to use Thunderbird:

      1. We are already using Thunderbird.
      2. It has calendar support via a plugin, so I suppose something is already watching the time to remind us about events. This may just be another type of event.
      3. An additional job is one more failure point, and may complicate life if it has to monitor multiple hosts.
      4. Additional tools -- same thing, one more failure point.
      5. Thunderbird can be run across all the platforms we are using -- Windows and Ubuntu. It sort of becomes a platform-independent solution.

  • Browser history, by window

    - by Alex
    I use Chrome -- but can switch to another browser if the feature I am about to describe is available. Browsing history in Chrome, AFAIK, is sorted exclusively chronologically. Very often, however, I will be working on a particular task on my laptop and have multiple (read: a lot of) tabs open in a single Chrome window for that task. Before finishing that task, I may need to work on something else, so I will open another window, minimize the other one, and start researching an entirely different issue. Over the course of a day, I may end up with 10-15 windows with many tabs each. This raises two issues: (a) memory usage and (b) quickly switching between the most relevant two or three windows. I solve these two problems like any regular guy probably would -- closing windows.

    I want to be able to reopen specific windows that I have closed, such that the tabs that were open in that window at the time it was closed will reopen. Ideally, closed windows would be sorted by the time they were closed and identified by the tabs that were open (even more ideally, I would be able to name these windows, either contemporaneously or in the history menu). Now that I describe this, what I am asking is: does any browser offer the ability to "save and close" windows? (This is distinct from an option to auto-restore tabs upon reopening the browser.) Thank you.

  • Mysql server won't start - no logs

    - by Owen
    After a restart, MySQL won't start. sudo service mysql start gives "start: Job failed to start" and the logs are empty, so I have no idea where to start. I'm pretty sure permissions problems are taken care of.

    Edit: All disks have at least 1G of space, and sh -x /etc/init.d/mysql start gives me:

      + set -e
      + basename /etc/init.d/mysql
      + INITSCRIPT=mysql
      + JOB=mysql
      + [ mysql = upstart-job ]
      + [ -z start ]
      + COMMAND=start
      + shift
      + [ -z ]
      + ECHO=echo
      + echo Rather than invoking init scripts through /etc/init.d, use the service(8)
      Rather than invoking init scripts through /etc/init.d, use the service(8)
      + echo utility, e.g. service mysql start
      utility, e.g. service mysql start
      + echo
      + echo Since the script you are attempting to invoke has been converted to an
      Since the script you are attempting to invoke has been converted to an
      + echo Upstart job, you may also use the start(8) utility, e.g. start mysql
      Upstart job, you may also use the start(8) utility, e.g. start mysql
      + grep -q start/
      + status mysql
      + [ -z ]
      + [ start = stop ]
      + [ -n ]
      + start mysql
      start: Rejected send message, 1 matched rules; type="method_call", sender=":1.105" (uid=1000 pid=3208 comm="start mysql ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
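    One thing I notice in that trace: the D-Bus rejection shows uid=1000, so the sh -x run wasn't under sudo and is probably failing differently from the real job. I was going to redo it as root and then check the logs I know about (log paths are assumptions):

      sudo sh -x /etc/init.d/mysql start
      sudo tail -n 50 /var/log/upstart/mysql.log
      sudo tail -n 50 /var/log/mysql/error.log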

  • In Windows 7's Task Manager, why is Memory 1118MB Available but only 62MB Free? [closed]

    - by Jian Lin
    Possible Duplicate: Windows 7 memory usage

    What are the "Cached", "Available", and "Free" memory in the following picture (from Windows 7's Task Manager)? If 1118MB is Available, then why isn't it Free (to use)? As I understand it, if a bowl of noodles is available, that doesn't mean it is free... it may still cost $7. But what about in the Task Manager -- when memory is Available, why is it also not Free? Does it cost $2 per MB?

    And what about the "Cached" value... what exactly is the Cached memory? We may put some hard disk data in RAM and so cache the data in RAM for faster access (that's the operating system's job). So the total physical RAM is 6GB; what is the 1106 Cached? Cached in where? Caching physical RAM in... somewhere? It is also strange that the Cached value is sometimes higher and sometimes lower than the Available value. Can somebody who is knowledgeable about this shed some light on these meanings?

  • Uninstall IIS on Windows 7

    - by CJM
    I've just rebuilt my development machine and installed IIS. I then installed the Web Deployment tool and used this to restore my previously-backed-up websites to the clean machine. Unfortunately the restoration didn't work correctly/fully. I couldn't easily correct the problem, so I decided to uninstall/reinstall IIS and recreate the sites manually. I uninstalled IIS and rebooted, but there was still plenty of stuff left around, such as various files in /windows/system32/inetsrv/, which I tried to delete manually (with limited success!). I rebooted again and tried to reinstall IIS - it reported an error (no meaningful message) and requested another reboot. The event log includes the following errors:

      The World Wide Web Publishing Service (WWW Service) did not register the URL prefix http://*:80/gallery for site 1. The site has been disabled.

      Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine.

    I'd like to avoid another rebuild. Can I completely remove IIS, such that I can reinstall it from scratch? Or can I 'fix' the current setup so that IIS will reinstall over what is already there?
