Search Results

Search found 11529 results on 462 pages for 'rvalue reference'.


  • Centos 6.2 Fresh 'Basic Server' install networking issues

    - by RWC
    I've had a /29 provisioned on a network port for a server and am trying to at least configure the machine so I can ssh into it. It's CentOS 6.2 x64 with the Basic Server install. Currently I am not able to ping the gateway, or any address for that matter. For reference:

        Default Interface: em2
        Network ID: 66.*.*.0/29
        Gateway: 66.*.*.1
        Broadcast: 66.*.*.7

    Please see my following configs:

        # /etc/sysconfig/network-scripts/ifcfg-em2
        DEVICE=em2
        NM_CONTROLLED=yes
        ONBOOT=yes
        HWADDR=Not Important
        TYPE=Ethernet
        BOOTPROTO=none
        IPADDR=66.*.*.2
        PREFIX=29
        DNS1=8.8.8.8
        DNS2=8.8.4.4
        DEFROUTE=yes
        IPV4_FAILURE_FATAL=yes
        IPV6INIT=no
        NAME="System em2"
        NETMASK=255.255.255.248
        USERCTL=no

        $: route -n
        Destination     Gateway         Genmask          Flags  Metric  Ref   Use  Iface
        66.*.*.0        0.0.0.0         255.255.255.248  U      0       0     0    em2
        169.254.0.0     0.0.0.0         255.255.0.0      U      0       1003  0    em2
        0.0.0.0         66.*.*.1        0.0.0.0          UG     0       0     0    em2

        $: route
        Destination     Gateway         Genmask          Flags  Metric  Ref   Use  Iface
        66.*.*.0        *               255.255.255.248  U      0       0     0    em2
        link-local      *               255.255.0.0      U      0       1003  0    em2
        default         66.*.*.1        0.0.0.0          UG     0       0     0    em2

        $: cat /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=excalibur.domain.com
        GATEWAY=66.*.*.1

    Keep in mind that I cannot even ping the gateway at the moment, which is quite confusing for me. My /etc/hosts is configured correctly with the *.2 address. I'm not concerned with getting all of the addresses on the /29 up and running yet, just one so I can at least ssh in. Thanks!

    Edit: Adding in ifconfig.

        $: ifconfig
        em2   Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
              inet addr:66.*.*.2  Bcast:66.*.*.7  Mask:255.255.255.248
              inet6 addr:
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:5536 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2599469 (2.4 MiB)  TX bytes:748 (748.0 b)
              Interrupt:48 Memory:dc000000-dc012800

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:34 errors:0 etc etc
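
    Since the interface is up and the routes look right, the next thing worth checking is usually layer 2. A short diagnostic pass might look like this (a sketch; it assumes the ethtool and iputils packages are installed and reuses the questioner's masked 66.*.*.1 gateway address):

        ethtool em2                      # confirm "Link detected: yes" and the negotiated speed/duplex
        ip neigh show dev em2            # see whether the gateway's MAC address is being learned at all
        arping -I em2 -c 4 66.*.*.1      # ARP the gateway directly, bypassing IP routing

    If arping gets no replies, the problem is below IP (wrong port, the /29 not actually provisioned yet, or the provider filtering by MAC) rather than anything in the ifcfg files.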

    Read the article

  • What to do when launchpad is down?

    - by Jon
    As I am writing this (Friday, November 8, 2013 at 9:59:18 PM EST) launchpad is down. Apparently there is a power failure (https://twitter.com/launchpadstatus/status/398980619880775680). I tried running sudo apt-get update on my Ubuntu install. However, I simply get stuck on this:

        Ign http://ppa.launchpad.net precise InRelease
        100% [Waiting for headers]

    Being an Ubuntu newbie, I tried to point my sources.list file to a different source. I backed up the original sources.list and then deleted the entire file to start afresh. I then added the following lines to it:

        deb http://mirror.anl.gov/pub/ubuntu/ precise main
        deb-src http://mirror.anl.gov/pub/ubuntu/ precise main

    I figured that since I have a different mirror, there would be no problem updating. I was wrong. I get stuck at the same place. I have several questions:

    1. Why do I need to hit launchpad? I do not reference it in my sources.list file at all. Is this something where the mirror redirects me to launchpad?
    2. Is there a good article out there that I can read on how exactly this whole apt-get update thing works, which will help me understand why it is hitting launchpad?
    3. Is there any way to get my Ubuntu to update while launchpad is down?
    4. Isn't there any redundancy for the launchpad servers?
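
    One detail that usually explains this: PPA entries normally live in separate files under /etc/apt/sources.list.d/ rather than in sources.list itself, so replacing sources.list does not remove the launchpad reference. A rough way to check and temporarily sideline them (a sketch; the PPA file name is an example, not from the question):

        ls /etc/apt/sources.list.d/                       # any *.list file here is also read by apt-get update
        grep -r launchpad /etc/apt/sources.list /etc/apt/sources.list.d/
        sudo mv /etc/apt/sources.list.d/some-ppa.list /etc/apt/sources.list.d/some-ppa.list.disabled   # hypothetical file name
        sudo apt-get update                               # should now skip ppa.launchpad.net

    Renaming the file back (or re-adding the PPA) restores things once launchpad is reachable again.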

    Read the article

  • Sharepoint Workflow "Failed on Start" only when powershell import script is called from task scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file in a local directory on our SharePoint server and imports it into a specific SharePoint form library. (A content-management-enabled library, if that makes any difference.) This script works flawlessly if I run it from the PowerShell command line manually. I call it like such: ".\script_name.ps1". It completes without error and the item is imported into the form library successfully. The workflow begins on the item and everything is happy dandy. However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's task manager. The task runs the script without error and it does actually import the XML into the form library. It looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action (Start a program). The "Program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script (C:\scripts\sharepoint_import.ps1). I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script I am using to import items into the form library.

    Read the article

  • Internet slowed down because of SQUID Server setup

    - by Ranjith Kumar
    Recently I set up a Squid server for our office. I have computer (A) with two ethernet cards, one for the internet and the second one for the local network. It has Ubuntu Server OS with squid-server and dhcp3-server installed. I have added a few iptables rules so it works like a router and redirects all http traffic to port 3128. This link is my reference. Everything worked fine for 2 days. All of a sudden the internet speed went down drastically. When I connected the internet cable to my laptop to test the internet speed it was fine. Again, when I reconnected it back to computer A everything was normal. This happened 4 times in a week. Could anyone here please help me understand why the internet speed is going down, and why it becomes normal when I reconnect the cable?

    EDIT: Rebooting the system (computer A) didn't make a difference. I have changed iptables so that http traffic doesn't redirect to port 3128 any further, and there is still no change in the internet speed. I think the problem is not with squid but with something else. Here are my iptables rules:

        SQUID_SERVER="10.1.1.1"
        INTERNET="eth1"
        LAN_IN="eth0"
        SQUID_PORT="3128"
        PROXYSERVERS=(Atlanta Baltimore Boston Chicago Dallas Denver Houston KansasCity LosAngeles Miami NewYork Philadelphia Phoenix SanAntonio SanDiego SanJose Seattle Washington)
        SERVERLEN=${#PROXYSERVERS[*]}
        I=0
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT
        while [ $I -lt $SERVERLEN ]; do
            iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d ${PROXYSERVERS[$I]}.wonderproxy.com --dport 80 -j ACCEPT
            let I++
        done
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
        iptables -A INPUT --protocol tcp --dport 80 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 443 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 22 -j ACCEPT
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP
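
    When the slowdown hits, it can help to see whether Squid itself is struggling before blaming the whole box. A rough check (a sketch; the log path is the usual Ubuntu default and may differ on this install):

        squid -k parse                          # parse squid.conf and report any configuration errors
        tail -f /var/log/squid/access.log       # watch whether requests are still flowing, and how slowly
        ethtool eth0; ethtool eth1              # check negotiated speed/duplex on both NICs of computer A

    If Squid looks idle while the LAN is still slow, the NIC, cabling, or duplex negotiation on computer A becomes the more likely suspect than the proxy rules.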

    Read the article

  • Shadow copy referencing invalid volume from symboliclink

    - by ccook
    I recently replaced my motherboard after the last one failed (it was shorting and causing random reboots). I'm sure this was not healthy for the machine, and a clean install would do wonders, but I'd like to fix the current install. That aside, I've been tracking down a pair of errors in the application log.

        Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details IVssSnapshotProvider::QueryVolumesSupportedForSnapshots(ProviderId,29,...) [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information. ].
        Operation: Query volumes supported by this provider
        Context: Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
        Snapshot Context: 29

    Followed by:

        Volume Shadow Copy Service error: Unexpected error calling routine Error calling CreateFile on volume '\\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\'. hr = 0x80070020, The process cannot access the file because it is being used by another process.

    This error is reproducible at the command line, creating the two event log entries:

        C:\Windows\system32>vssadmin list volumes
        vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
        (C) Copyright 2001-2005 Microsoft Corp.
        Error: The shadow copy provider had an unexpected error while trying to process the specified command.

    Using WinObj from Sysinternals, I have tracked down the global object:

        '\\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\' - SymbolicLink - '\Device\HarddiskVolume8'

    Running DISKPART and executing "list volume" within it lists volumes 0 through 6; there is no HarddiskVolume8. How can I remove this reference to HarddiskVolume8 and get shadow copy up and running?
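
    One low-risk way to inspect (and clean up) stale volume GUID links without editing the registry by hand is mountvol, which works on exactly the \\?\Volume{...} names VSS is complaining about. A sketch using built-in Windows commands, run from an elevated prompt:

        rem list every \\?\Volume{GUID}\ the mount manager knows about, with its mount point if any
        mountvol
        rem confirm which shadow copy provider owns {b5946137-...}
        vssadmin list providers
        rem remove volume mount point directories and registry settings for volumes that are no longer in the system
        mountvol /R

    After mountvol /R, re-running "vssadmin list volumes" shows whether the dangling HarddiskVolume8 reference has gone away; whether that alone satisfies VSS here is untested.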

    Read the article

  • How do I properly configure a ZipInstaller .zic file?

    - by Iszi Rory or Isznti
    As of version 1.20, ZipInstaller is supposed to support the use of a configuration file to customize its installation options. Generally, all the options I want to use are available through the dialog, so I really haven't bothered with the configuration file until now. The problem now is that certain tools, such as PsTools from Sysinternals, do not properly show their Product Name to ZipInstaller. ZipInstaller's dialog will let you customize the Start Menu folder and Program Files folder, but that still doesn't change the Product Name that it sees for the software. So, instead of having "PsTools" in my Add/Remove Programs, I get "Sysinternals Software". For some things, the situation is even more confusing. For example, the NIST SP 800-53 Reference Database Application gets installed as "FileMaker Pro Runtime". To rectify this, I've tried to use the aforementioned .zic configuration file. As I understand it, it's a basic INI file you create and put in the root of the ZIP file. ZipInstaller is supposed to read that file and adjust its parameters accordingly. Mine looks like this:

        [install]
        ProductName=NIST_SP_800-53
        ProductVersion=1.4.1
        CompanyName=NIST
        Description=NIST_SP_800-53
        InstallFolder=%zi.ProgramFiles%\%zi.ProductName%
        StartMenuFolder=%zi.CompanyName%\%zi.ProductName%

    I've named it ~zipinst~.zic and placed it in the root of the ZIP file, but when I run ZipInstaller it doesn't seem to recognize any of the information I've given it in the .zic file. What might I be doing wrong here?

    Read the article

  • Issues with creating a snapshot

    - by Andy Welcomer
    Hello everyone, We have a strange issue when attempting to create a snapshot in one of our regional environments. We have 4 VMs, and 2 of them have multiple VMDKs spread onto different datastores. When a snapshot is created, all the VMDKs (except for the first) seem to vanish. If you look at the properties of the VM, the path to the VMDKs points to the datastore where the primary VMDK is, and the file name is some random garbage. If the snapshot is deleted, everything returns to normal. Has anyone ever seen this? I'm using ESX 3.5. Thank you in advance. Andy

    ==============UPDATE==============

    Here is some more information. I just created a test machine with 7 VMDKs: 1 for the OS and 6 others for data. All of the VMDKs are in separate datastores. When I take a snapshot of the machine, all 6 data VMDKs lose their reference to the actual VMDK files. They all point to 64KB VMDK files in the datastore where the OS VMDK is located. These 64KB VMDKs didn't exist until the snapshot was taken. When the snapshot is deleted, everything goes back to normal.

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this as a reference to other people, as I have seen this error reported often enough online. I had to change the path in my /etc/postfix/main.cf to relative instead of absolute:

        smtpd_sasl_path = private/auth

    This is because on Debian, Postfix runs chrooted (and how does this affect the path structure?! Anyone?)

    -- I am trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to be able to host multiple e-mail accounts on my (Debian Lenny 64-bit) server, using virtual users. Whenever I test my current configuration by running

        telnet server-IP smtp

    I get the following error in mail.log:

        warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory

    Now, Dovecot is supposed to create the auth socket file, yet it doesn't. I have given the right privileges to the directory private, and even tried creating an auth file manually. The output of postconf -a is

        cyrus dovecot

    Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds

        client {
            path = /var/spool/postfix/private/auth
            mode = 0660
            user = postfix
            group = postfix
        }

    I have tried every solution out there, and am pretty much desperate after a full day of struggling with the issue. Can anybody help me, pretty please?
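
    For completeness, the Postfix side of a Dovecot-SASL setup on a chrooted Debian install usually boils down to a few main.cf lines like these (a sketch using standard Postfix parameters; the relative private/auth path is the point, since it is resolved inside /var/spool/postfix when smtpd runs chrooted):

        # /etc/postfix/main.cf (relevant lines only)
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes

    The matching Dovecot client {} socket block is the one already quoted above; Dovecot creates the socket at /var/spool/postfix/private/auth once both sides agree on that location.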

    Read the article

  • postfix (for sending mail only) multiple domain setup

    - by seanl
    I have the following problem: I have a CentOS 5.4 VPS hosting a few nginx sites (some static, some CakePHP), and I would like to be able to send email from each site's contact page through Postfix to my Google Apps hosted email (different accounts for each site), so that Apps can then send out an auto email to the person filling in the contact form, etc. I have a bare-bones Postfix installation with the following added into the main.cf config file, from using this guide:

        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps

    (Both of these files have been converted into db files using postmap.) I have configured DNS correctly for each site and set up SPF records. (I'm aware R-DNS will still reference my actual hostname, not the domain name, and cause a possible spam issue, but one thing at a time.) I can telnet localhost and helo localhost so that I can send a command-line email from an address in the virtual_alias_domains to an email in the virtual_alias_maps file, which seems to send without giving an error, but it is delivered to my local Linux account, not the email address specified. My question is: am I approaching this the wrong way in terms of the virtual alias mapping, or is this even possible to do in the manner I'm trying? Any help is greatly appreciated, thanks. My postconf -n output looks like this:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = myactual hostname
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps
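
    For reference, the two lookup tables typically look something like this (a sketch; domain and address values are placeholders, not taken from the question):

        # /etc/postfix/virtual_alias_domains (right-hand side is ignored for hash tables)
        site-one.example    anything

        # /etc/postfix/virtual_alias_maps
        contact@site-one.example    someuser@gmail.com

        # rebuild the .db files and reload after any change
        postmap /etc/postfix/virtual_alias_domains
        postmap /etc/postfix/virtual_alias_maps
        postfix reload

    One thing worth checking with this exact symptom: if the destination domain (here the Google Apps domain) also appears in mydestination or virtual_alias_domains, Postfix considers it locally hosted and delivers to the Linux account instead of relaying the mail out.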

    Read the article

  • Installing/enabling PHP Pecl Intl extension on CentOs 5

    - by Marijn Huizendveld
    Original question: I'm having trouble installing the PHP PECL intl extension on my CentOS 5 machine. After installing both icu and libicu with the following commands:

        $ yum install icu
        $ yum install libicu

    I tried to install the intl extension like so:

        $ /usr/bin/pecl install intl

    I selected to search for the default location for the ICU libraries and header files. It ends up crashing like this:

        checking whether to enable internationalization support... yes, shared
        checking for icu-config... no
        checking for location of ICU headers and libraries... not found
        configure: error: Unable to detect ICU prefix or no failed. Please verify ICU install prefix and make sure icu-config works.
        ERROR: `/tmp/pear/temp/intl/configure --with-icu-dir=DEFAULT' failed

    UPDATE: After successfully installing the development version of icu as suggested by RusAlex (thanks RusAlex), like so:

        $ yum install libicu-devel

    I ran into a new problem, which I also encountered locally. The following command:

        $ /usr/bin/pecl install intl

    now produces this error:

        /private/tmp/pear/temp/intl/collator/collator_class.c:92: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:96: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:101: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:107: error: duplicate 'static'
        make: *** [collator/collator_class.lo] Error 1
        ERROR: `make' failed

    It appears to have something to do with PHP 5.3 being bundled with intl already. But how can I enable this extension? If I look in my PHP info I cannot find any reference to it...
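
    Since PHP 5.3 ships intl in the core distribution, a quick check of whether the extension is already compiled in (or merely not enabled) can save a PECL build entirely. A sketch:

        php -m | grep -i intl                          # lists loaded modules; empty output means intl is not loaded
        php -i | grep -i icu                           # shows the ICU version if intl is compiled in
        php -r 'var_dump(class_exists("Collator"));'   # true only when intl is actually available

    If intl turns out to be compiled in but disabled, enabling it is an extension=intl.so line in php.ini (or the distribution's equivalent ini snippet) rather than a PECL compile.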

    Read the article

  • Set up linux box for secure local hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded)

    The requirements: SECURITY!! Secure file transfer, secure client access (SSL certs and CA), secure data storage, virtualhosts/multiple subdomains. Local email would be nice, but not critical.

    The Steps:

    Download latest CentOS DVD-iso (torrent worked great for me).

    Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: Set up users, networking/IP address etc. Yum update/upgrade.

    Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to our package manager:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.   # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

    Remove the old version of PHP and install the newer version from IUS:

        rpm -qa | grep php                                 # list all of the installed php packages we want to remove
        yum shell                                          # open an interactive yum shell
        remove php-common php-mysql php-cli                # remove installed PHP components
        install php53 php53-mysql php53-cli php53-common   # add the packages you want
        transaction solve                                  # important!! checks for dependencies
        transaction run                                    # important!! does the actual installation of packages
        [control+d]                                        # exit yum shell
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql                   # see installed mysql packages
        yum shell
        remove mysql mysql-server              # remove installed MySQL components
        install mysql51 mysql51-server mysql51-devel
        transaction solve                      # important!! checks for dependencies
        transaction run                        # important!! does the actual installation of packages
        [control+d]                            # exit yum shell
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users:

        vi /etc/rssh.conf
        # uncomment or add:
        allowscp
        allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    Set up virtual interfaces:

        ifconfig eth1:1 192.168.1.3 up   # start up the virtual interface
        cd /etc/sysconfig/network-scripts/
        cp ifcfg-eth1 ifcfg-eth1:1       # copy the default script and match its name to our virtual interface
        vi ifcfg-eth1:1                  # modify the eth1:1 script so it looks like this:

        DEVICE=eth1:1
        IPADDR=192.168.1.3
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or when the network starts/restarts.

        service network restart
        Shutting down interface eth0:        [ OK ]
        Shutting down interface eth1:        [ OK ]
        Shutting down loopback interface:    [ OK ]
        Bringing up loopback interface:      [ OK ]
        Bringing up interface eth0:          [ OK ]
        Bringing up interface eth1:          [ OK ]

        ping 192.168.1.3
        64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    Virtualhosts: In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local).

        vi /etc/http/conf/httpd.conf

    Add the following to the end of httpd.conf:

        <VirtualHost 192.168.1.3:80>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

        [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

    I tried chmod 777 et al., but to no avail. Turns out, I needed to chmod +x the https directory and its parent directories:

        chmod +x /home
        chmod +x /home/dev
        chmod +x /home/dev/https

    This solved that problem.

    DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

    SSL: To get SSL working, I changed the following in httpd.conf:

        NameVirtualHost 192.168.1.3:443   # make sure this line is in httpd.conf
        <VirtualHost 192.168.1.3:443>     # change port to 443
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and thus was getting the page thrown at the browser as the cert, making the browser balk. So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

        mkdir /home/CA
        cd /home/CA/
        mkdir newcerts private
        echo '01' > serial
        touch index.txt   # this and the above command are for the database that will keep track of certs

    Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

        openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf
        # this creates the cacert.pem which gets distributed and imported to the browser(s)

    Modified openssl.cnf again per the walkthrough instructions.

        openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf
        # generates the certificate request, and key.pem which I renamed dev.key.pem

    Modified openssl.cnf again per the walkthrough instructions.

        openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem
        # create and sign the certificate

        cp dev.cert.pem /home/dev/certs/cert.pem
        cp dev.key.pem /home/certs/key.pem

    I updated httpd.conf to reflect the certs and turn SSLEngine on:

        NameVirtualHost 192.168.1.3:443
        <VirtualHost 192.168.1.3:443>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            SSLEngine on
            SSLCertificateFile /home/dev/certs/cert.pem
            SSLCertificateKeyFile /home/dev/certs/key.pem
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Put the CA cert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings.

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.
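
    On the closing question (SSL for the local email): if Postfix ends up being the MTA here (the walkthrough does not say which mail server will be used, so this is an assumption), TLS can reuse the certificate files created above with a few main.cf lines. A sketch:

        # /etc/postfix/main.cf -- hypothetical additions, reusing the cert paths from the SSL section above
        smtpd_tls_cert_file = /home/dev/certs/cert.pem
        smtpd_tls_key_file = /home/dev/certs/key.pem
        smtpd_tls_CAfile = /home/CA/cacert.pem
        # offer STARTTLS to connecting clients, and use TLS opportunistically when sending
        smtpd_tls_security_level = may
        smtp_tls_security_level = may

    Dovecot (for IMAP/POP over SSL) has equivalent ssl_cert_file / ssl_key_file settings that can point at the same pair.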

    Read the article

  • No LPT port in Windows 7 virtual machines

    - by KeyboardMonkey
    Windows 7 has MS Virtual PC integrated, but the VM settings don't provide a parallel (LPT) port mapping to the physical machine. Where did it go? Has anyone else noticed this, and found a solution? Update: After much digging, I found the one and only reference to this issue, on the VPC Blog: "Parallel port devices are not supported, as they are relatively rare today." -More details- It's an XP VM I've been using since the VPC 2007 days, which did have this functionality. It is used to configure barcode printers via the LPT port. Since the (new) MS VM can't map to my physical LPT port, I'm having a hard time configuring printers. My physical ports are enabled in the BIOS. It has worked for the past 3 years, before switching to Win 7. Any help is appreciated. This screen shot of the VM settings shows COM ports, but LPT is no more. In contrast, here is a screen shot of VPC 2007 (before it got integrated into Win 7). Notice how it has LPT support.

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install on my Ubuntu 10.4 machine a CGI app, one built from the samples that come with the gSOAP toolkit. My intention was to access it from an ASP .NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file instead of the former to get his Apache running. Besides, I was able to access his CGI from my ASP .NET, but mysteriously I could not access mine; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock. cgid daemon failed to initialize

    I am pretty new to Ubuntu and I could not believe that Apache and Synaptic made a mistake in the installation process of the server, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution but I only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root) and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I have the doubt as to why the folder was not created in the first place. Best, Julen.

    Read the article

  • multiple ssh aliases selecting the wrong user when forwarding

    - by Chris Beck
    I'm following the dual identity procedure for Bitbucket: I have 2 Bitbucket accounts, ccmcbeck and chrisbeck. The former is personal, the latter is work. On my local Mac, I have this in my ~/.ssh/config:

        Host *.work.com
            User chris
            ForwardAgent yes
            IdentityFile ~/.ssh/work_dsa
        Host bitbucket-personal
            HostName bitbucket.org
            User ccmcbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa
        Host bitbucket-work
            HostName bitbucket.org
            User chrisbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    On my local Mac I ssh -T and all is good, I get:

        $ ssh -T git@bitbucket-personal
        logged in as ccmcbeck.
        $ ssh -T git@bitbucket-work
        logged in as chrisbeck.

    On my local Mac, the ssh version is OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011.

    When I ssh foo.work.com to my Linux box, I get:

        $ ssh-add -l
        1024 ... /Users/chris/.ssh/work_dsa (DSA)
        2048 ... /Users/chris/.ssh/bitbucket_ccmcbeck_rsa (RSA)
        2048 ... /Users/chris/.ssh/bitbucket_chrisbeck_rsa (RSA)

    On foo.work.com, I also have this in my ~/.ssh/config:

        Host bitbucket-personal
            HostName bitbucket.org
            User ccmcbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_ccmcbeck_rsa
        Host bitbucket-work
            HostName bitbucket.org
            User chrisbeck
            ForwardAgent no
            IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    However, on foo.work.com, when I ssh -T it references the wrong User for git@bitbucket-work:

        $ ssh -T git@bitbucket-personal
        logged in as ccmcbeck.
        $ ssh -T git@bitbucket-work
        logged in as ccmcbeck.

    On foo.work.com, the ssh version is OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008.

    Why is my configuration causing foo.work.com to reference the wrong User?
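
    A hedged note on a common culprit with forwarded agents: the ssh client on foo.work.com tries the forwarded agent's keys in whatever order the agent lists them, and Bitbucket logs you in as whichever account owns the first key it accepts, regardless of the IdentityFile line. One commonly suggested fix is IdentitiesOnly, which restricts each host alias to its configured key (a sketch; it assumes the key files, or at least their .pub halves, exist on foo.work.com so ssh knows which agent key to request):

        Host bitbucket-work
            HostName bitbucket.org
            User chrisbeck
            ForwardAgent no
            IdentitiesOnly yes
            IdentityFile ~/.ssh/bitbucket_chrisbeck_rsa

    Adding the same IdentitiesOnly yes line to the corresponding blocks on the local Mac keeps the behaviour consistent at both ends.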

    Read the article

  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022:

        usera@cmp$ touch somefile; ls -l
        total 0
        -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile

    The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created here by usern (any user) are automatically owned by the shared group. There is a cron job taking care of changing the owning user and owning group (of any moved files) once per day:

        usera@cmp$ cat /etc/cron.daily/sharedscript
        #!/bin/bash
        chown -R root:shared /home/shared/
        chmod -R 770 /home/shared/

    I was writing a really large file to the shared directory. It had me (usera) as owning user and the shared group as group owner. During the write operation the cron job was run, and I still had no problem completing the write process. You see, I thought this would happen:

    1. I am writing the file. The file permissions and ownership data for the file look like this: -rw-r--r-- usera shared
    2. The cron job kicks in! The chown line is processed and now the file is owned by the root user and the shared group.
    3. As the owning group only has read access to the file, I get a file write error! Boom! End of story.

    Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
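
    The short version of the answer being asked for: Unix permission checks happen at open() time, and an already-open file descriptor keeps the access mode it was opened with, so later chown/chmod calls do not interrupt a write in progress (the open(2) man page covers the permission check; in this particular case the cron job's chmod -R 770 also leaves the shared group with write access anyway). A small shell demonstration of the descriptor behaviour (a sketch; run as an unprivileged user with sudo available):

        exec 3>> /tmp/demo_file              # open a write descriptor while we still own the file
        sudo chown root:root /tmp/demo_file
        sudo chmod 400 /tmp/demo_file        # from now on, only root could open it for writing
        echo "still writable" >&3            # succeeds: permissions were checked at open(), not per write()
        exec 3>&-                            # close the descriptor; a fresh open for writing would now fail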

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it then it will complain that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in rpmforge or dag, and according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no-go:

        error: Failed dependencies:
            SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
            alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
            libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
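
    Worth noting before chasing SDL: SDL is only needed for ffplay, not for the ffmpeg binary or its libraries, so a source build can simply leave it out. A sketch of the configure step (the flag set is an example, not the questioner's final build; --enable-libx264 still requires the x264 headers and library to be installed first):

        ./configure --enable-gpl --enable-libx264 --disable-ffplay
        make && make install

    --disable-ffplay removes the one component that wants SDL, which sidesteps the whole dependency chain above.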

    Read the article

  • eXist-db: can't start webstart client on a closed port, reverse proxied via apache

    - by rvdb
    I am configuring an Apache HTTP server so it reverse proxies requests starting with /app/ to an eXist-db instance running in a Tomcat server, on port 8082. This port has been closed in the firewall and is inaccessible to the outer world. Following the eXist documentation, I have following rules in place in my httpd.conf file: ProxyPass /apps/ http://localhost:8082/ ProxyPassReverse /apps/ http://localhost:8082/ ProxyPassReverseCookiePath /apps/ / All goes well for requests to e.g. 'http://mydomain/apps/exist/index.xml'. Yet, the webstart client (accessible at 'http://localhost:8082/exist/webstart/exist.jnlp' on the web server) doesn't work behind the proxy. While 'http://mydomain/apps/exist/webstart/exist.jnlp' does generate a valid exist.jnlp file, that file can't be executed. The reason seems quite obvious: apparently, the eXist-db instance generating the exist.jnlp file only sees the proxied request as: 'http://localhost:8082/exist/webstart/exist.jnlp'. Yet, since the exist.jnlp file is executed on the client, that reference is meaningless (unless the client computer happens to have an eXist-db instance running on that port). Executing the exist.jnlp file hence fails with a 'connection refused' error. Yet, there's no problem at all connecting a local eXist-db Java client to the proxied eXist instance with the URL xmldb:exist://mydomain/apps/exist/xmlrpc. The problem lies in generating the webstart exist.jnlp file, which seems to need access to a publicly accessible URL. However, opening port 8082 and replacing the Proxy references to 'http://localhost:8082' with 'http://mydomain:8082' IMO rather destroys the point of reverse proxying. Do others have had success reverse proxying eXist-db on a closed port behind Apache? Are there perhaps some Proxy configuration settings I have overlooked (I'm no expert at all) that can make eXist see the original request instead of the proxied one? Kind regards, Ron
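
    One Apache-side knob that often matters for URL-generating backends like this is ProxyPreserveHost, which passes the original Host header ('mydomain') through to Tomcat instead of 'localhost:8082'; whether eXist's webstart servlet builds the jnlp URLs from that header is an assumption worth testing. A sketch of the adjusted block:

        ProxyPreserveHost On
        ProxyPass /apps/ http://localhost:8082/
        ProxyPassReverse /apps/ http://localhost:8082/

    If the servlet derives the port as well as the hostname, the Tomcat connector's proxyName/proxyPort attributes in server.xml are the other standard way to tell a proxied Tomcat what external name to advertise.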

    Read the article

  • Migrating from tomcat to tc server - receiving java.sql.SQLException on startup

    - by user470184
    I'm receiving below error when I start tcServer. I do not receive this error on standalone version of tomcat. Is there extra config I need to add for tcServer ? WARNING: Unexpected exception resolving reference java.sql.SQLException: Io exception: The Network Adapter could not establish the connection at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387) at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:441) at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:165) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801) at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:277) at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182) at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:699) at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:631) at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:485) at org.apache.tomcat.jdbc.pool.ConnectionPool.(ConnectionPool.java:143) at org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:116) at org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:103) at org.apache.tomcat.jdbc.pool.DataSourceFactory.createDataSource(DataSourceFactory.java:539) at org.apache.tomcat.jdbc.pool.DataSourceFactory.getObjectInstance(DataSourceFactory.java:237) at org.apache.naming.factory.ResourceFactory.getObjectInstance(ResourceFactory.java:140) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at org.apache.naming.NamingContext.lookup(NamingContext.java:793) at org.apache.naming.NamingContext.lookup(NamingContext.java:140) at org.apache.naming.NamingContext.lookup(NamingContext.java:781) at org.apache.naming.NamingContext.lookup(NamingContext.java:153) at org.apache.catalina.core.NamingContextListener.addResource(NamingContextListener.java:1028) at org.apache.catalina.core.NamingContextListener.createNamingContext(NamingContextListener.java:637) at org.apache.catalina.core.NamingContextListener.lifecycleEvent(NamingContextListener.java:238) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142) at org.apache.catalina.core.StandardServer.start(StandardServer.java:747) at org.apache.catalina.startup.Catalina.start(Catalina.java:595) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

    Read the article

  • Mac OS X - configuring ntpd server on LAN with D-Link DIR-655

    - by Mark C
    Hey all, This question is pretty specific, but I hope someone will have seen this error elsewhere. I am configuring a machine running OS X 10.5.8 to be an NTP server for machines connected to a LAN that is not connected to the Internet. I am not too worried about knowing the "right" time on all the machines, but rather about making sure everyone has the same notion of time. I configured the NTP daemon on the Mac by turning on Set date and time automatically in System Preferences, using the server's own clock, 127.127.1.0, as the reference clock. I figured I should see if the server can NTP-query itself before proceeding to the clients. The weird part is that when I run the ntpq -p command in a command prompt while connected to my D-Link DIR-655 (firmware: 1.33), it hangs for about a minute or so each time before finally giving me some output. I thought the problem might have to do with port forwarding, so I configured the router to forward port 123 for the IP of the server, but that did not improve the situation. When I run the ntpq -p command on my school's network, on a Linksys WRT54G router, or with the wireless AirPort card turned off, I have absolutely no problems - the command returns a response instantly. Is this normal? I can see why a query might take a minute or so, but I don't understand why one router does it faster than the other. I tried messing around with the ntp.conf file, adding the burst, minpoll, and maxpoll options:

        server 127.127.1.0 burst minpoll 4 maxpoll 5

    I figured that perhaps I am polling too often and the configuration file is slowing me down, but even with this, ntpq still hangs on the D-Link DIR-655 and does just fine on the other routers. Any thoughts on where the lag is coming from, or whether the lag is even a problem?
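
    For the "serve the same time to an isolated LAN" goal, the usual ntp.conf shape is the undisciplined local clock plus restrict lines for the LAN clients (a sketch; the 192.168.0.0/24 subnet is a placeholder for whatever the DIR-655 hands out):

        # /etc/ntp.conf on the server
        # the local clock driver, advertised at a low stratum so real sources would win if ever added
        server 127.127.1.0
        fudge 127.127.1.0 stratum 10
        # default: no remote configuration or status queries
        restrict default nomodify notrap noquery
        restrict 127.0.0.1
        # allow LAN clients (placeholder subnet) to sync, but not to modify the server
        restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap

    Clients then only need a 'server <server-ip>' line of their own. Note that ntpq -p talks to UDP port 123 on the LAN side only, so no router port forwarding should be needed for this setup.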

    Read the article

  • OpenWrt vs DDWrt

    - by Ioan Paul Pirau
    I have a TP-Link Wr1043ND router and I want to install one of these two firmwares: OpenWRT DD-WRT I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWRT and DD-WRT which he would recommend and why. And to give a few reference points I'm interested in: reliability – network stability both on cable and wireless and on the usb drive performance – network speed, very important also usb drive speed configurability – the possibility to add extensions such as a torrent client, FTP, SSH, WWW and SVN server directly ease of use – the ease of installation and configuration of the router support/docs – how much info there is if you stumble upon a problem and you have to find some documentation, or if there's any free support (but that's a longshot) Of course I don't imagine that I will find the perfect firmware and that one is vastly superior over the other. Also if there's anyone out there who uses one of these firmwares on a TP-Link Wr1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm open also for Tomato if it's the better one.

    Read the article

  • Millions of files in php's tmp error - how to delete?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; option, but that didn't work; neither did ls 'sess_0*' | xargs rm. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        $ find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP 5, ISPConfig, suEXEC and FastCGI.
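
    For reference, a deletion that avoids both the shell-glob "Argument list too long" problem and spawning one rm per file is find's built-in -delete (GNU find; a sketch, with the session directory path taken from the error messages above):

        cd /var/www/clients/client1/web1/tmp
        find . -maxdepth 1 -type f -name 'sess_*' -delete

        # or, on a find without -delete, batch the names through xargs instead of the shell:
        find . -maxdepth 1 -type f -name 'sess_*' -print0 | xargs -0 rm -f

    Either way it will take a long time on 14 million entries; the EXT3 "Directory index full" warnings stop once the directory empties, and recreating the directory afterwards shrinks its on-disk index, which ext3 never does on its own.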

    Read the article

  • What does this strange network/subnet mask mean?

    - by dunxd
    I'm configuring a new ASA 5505 for deployment as a VPN endpoint in a remote office. After configuring it and connecting the VPN, I get the following messages: WARNING: Pool (10.6.89.200) overlap with existing pool. ERROR: IP address,mask <10.10.0.0,93.137.70.9> doesn't pair 10.6.89.200 is the address I configured for the ASA. It has the subnet mask 255.255.255.0. The ip address 10.10.0.0 corresponds to one of our subnets, but it certainly wouldn't have a subnet mask of 93.137.70.9. That looks more like a public IP address (and resolves to an ADSL connection somewhere). I am sure if we had such a subnet configured, that it would indeed overlap with 10.6.89.200. There is no reference to 93.137.70.9 in the config of this ASA or our head office ASA. Can anyone shed light on what is going on here? The sudden appearance of a strange subnet mask is a bit alarming.

    Read the article

  • How can I find files added to the system within X minutes of a specific time?

    - by Jack W-H
    I have done a fresh install of Mac OS X Mountain Lion today on a new MacBook. Because this was a new install, when I finally got round to configuring some of my own developer things, I was surprised to find some app had installed a binary into /usr/local/bin - a single binary called galileod. Interestingly, I can't find anything online about galileod. I had only installed the bare minimum of software at this point. Looking in the file columns I can see Date Modified was 9th November 2012, but Date Added to the system was today at 17:01. It's now 10:20PM and I can't remember which software I was installing at that point. So how do I find out which other files were installed to the system within, say, 5 minutes either side of 17:01? EDIT: I found out what galileod was by running galileod --help - it is a binary used with Fitbit to communicate with the USB dongle. So that's the mystery solved - but it would still be interesting to know how to find files added within X minutes of a timeframe for future reference.

    Read the article

  • Windows PE network setup

    - by microchasm
    I'm walking through the following step-by-step guide for deploying Windows 7 via AIK: http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx On step 4 (Capturing the Installation onto a Network Share), I run into a bit of a snag: attempting to connect to a network drive repeatedly fails. I'm using/deploying Dell Optiplex 380 64 bit machines, and the network cards seem to be really wonky. On the machine that I'm using to run AIK etc, the network driver wasn't found automatically. I had to manually go in and install the driver (which was found on the OEM installation media). I've since copied this to the USB key that I'm using for the Autounattend.xml so its handy. I think that because of this, the PE environment doesn't or can't instantiate the network device. Is there a way to install/configure the network device through the command prompt in PE? If not, I read about adding in the answer file path(s) to drivers, but if I did it this way, would I have to start the process all over again (i.e. create new Autounattend.xml with the PnPcustomizations path included, re-run the installation on the reference machine, install all the applications, re-make the PE iso, reboot into new PE iso)? Any shortcuts, direction, or advice would be appreciated. Thanks!
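
    On the specific question of driving the NIC from the Windows PE command prompt: PE has built-in tools for exactly this, so the Dell OEM driver on the USB key can be loaded into the running session without rebuilding the image (a sketch; the .inf path and share details are placeholders, not from the guide):

        rem load the NIC driver into the running PE session (path is hypothetical)
        drvload E:\drivers\nic\e1000.inf
        rem (re)run PE initialization, which starts networking and applies DHCP
        wpeinit
        ipconfig
        net use Z: \\server\share /user:DOMAIN\user

    Injecting the driver into the boot image (for example with Dism /Add-Driver against the WinPE .wim) is the permanent fix, but it does mean regenerating the PE ISO rather than changing Autounattend.xml alone.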

    Read the article

  • LDAP: Extend database using referral

    - by ecapstone
    My company uses an off-site LDAP server to handle authentication. I'm currently working on a local VPN for my branch that needs to use the off-site LDAP to check user's usernames and passwords, but I don't want every employee to have access to the VPN - I need to be able to control whether users can authenticate with the off-site LDAP based on whether they're allowed to use the VPN. My current solution involves having our own local LDAP server, which has a referral to the off-site server (I got most of my information from here: http://www.zytrax.com/books/ldap/ch7/referrals.html). This means that when local users try to check their credentials with the local server, it redirects them to the off-site server, which checks the credentials. This works for authentication, but not for authorization. It would be easiest to add a vpn_users group or is_vpn_user attribute on the off-site server, but, well, that's above my pay grade. Is there any way I can use the local server to control whether users have access to the VPN without needing to change the off-site server? If I could somehow use it to have a local vpn_users group without the users in it having to be located on the local server, that would probably work, but I have no idea how to set that up or if LDAP even supports such a configuration. For reference, I'm using the openvpn-auth-ldap (https://code.google.com/p/openvpn-auth-ldap/) plugin.
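
    openvpn-auth-ldap itself can do this filtering without any change to the off-site directory: its config file takes an <Authorization> block with RequireGroup and a <Group> section, so only members of a chosen group get through even though the password check still happens upstream. A sketch (DNs, group name, and member attribute are placeholders for this directory's actual schema):

        <Authorization>
            BaseDN          "ou=people,dc=example,dc=com"
            SearchFilter    "(uid=%u)"
            RequireGroup    true
            <Group>
                BaseDN          "ou=groups,dc=example,dc=com"
                SearchFilter    "(cn=vpn_users)"
                MemberAttribute memberUid
            </Group>
        </Authorization>

    Whether the group object can live on the local referral server while the user entries stay off-site depends on how the plugin chases referrals, so that part would need testing.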

    Read the article
