Search Results

Search found 4783 results on 192 pages for 'bash completion'.

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
        issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
        subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
        err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration. Any ideas?
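
    A hedged diagnostic, not a fix: the failure is certificate verification inside nail, and the CI server runs the script as its own user with its own environment (note the Bitnami libssl on the load path). Assuming heirloom nail/mailx, temporarily relaxing verification in the CI user's ~/.mailrc isolates the problem:

        # ~/.mailrc of the user the CI server runs as (assumes heirloom nail/mailx)
        set ssl-verify=ignore    # diagnostic only: if mail now goes out, CA verification is what's broken

    If that works, the real fix is pointing nail at a CA store the CI user's environment can actually see, rather than leaving verification off.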

  • linux + create host file from CSV file by sed or awk or perl

    - by yael
    I have the following CSV file. It defines which Linux machines exist in the system and their IPs. My target is to create a hosts file from it. Please advise how to create a hosts file like example 1 from my CSV file (I need to match the IP address from the CSV file and put it in the first field of the hosts file, then match the Linux name and put it in the second field, as in example 1). Remark: this should be done with sed, awk, or perl, since I need to embed the solution in my bash script.

    CSV file:

        machine , VM-LINUX1 , SZ , Phy , 10.213.158.18 , PROXY ,
        VM-LINUX2 , SZ , 10.213.158.19 , OLD HW ,
        VM-LINUX3 , SZ , 10.213.158.20 , ,
        VM-LINUX4 , SZ , Phy , 10.213.158.21 , ,
        VM-LINUX5 , SZ , Phy , OUT , EXT , LAN3 , 10.213.158.22 , INTERNAL ,
        VM-LINUX6 , SZ , Phy , 10.213.158.23 , , server , new HW ,
        VM-LINUX7 , SZ , Phy , 10.213.158.24 , OUT, LAN3 ,
        VM-LINUX8 , SZ , 10.213.158.25 , OLD HW ,
        machine , VM-LINUX9 , SZ , Phy , INT , 10.213.158.26 , LAN2, AN45, ,
        VM-LINUX10 , SZ , Phy , 10.213.158.27 , ,
        VM-LINUX11 , SZ , Phy , LAN5 , 10.213.158.28 ,

    Example 1 (hosts file):

        10.213.158.18    VM-LINUX1
        10.213.158.19    VM-LINUX2
        10.213.158.20    VM-LINUX3
        10.213.158.21    VM-LINUX4
        10.213.158.22    VM-LINUX5
        10.213.158.23    VM-LINUX6
        10.213.158.24    VM-LINUX7
        10.213.158.25    VM-LINUX8
        10.213.158.26    VM-LINUX9
        10.213.158.27    VM-LINUX10
        10.213.158.25    VM-MACHINE8
        10.213.158.26    STAR9
        10.213.158.27    TOP10
        10.213.158.28    SERVER11
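
    A minimal awk sketch of one way to do this (the input file name is hypothetical, and it assumes one IP and one VM-LINUX<n> name per record): scan every comma-separated field, keep whichever looks like an IPv4 address and whichever matches the host-name pattern, then print the pair:

        awk -F',' '{
            name = ""; ip = ""
            for (i = 1; i <= NF; i++) {
                gsub(/^[ \t]+|[ \t]+$/, "", $i)                         # strip the padding around each field
                if ($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) ip = $i    # field looks like an IPv4 address
                else if ($i ~ /^VM-LINUX[0-9]+$/)            name = $i  # field looks like a machine name
            }
            if (ip != "" && name != "") print ip "\t" name
        }' machines.csv > hosts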

  • who has files open on a linux server

    - by Robert
    I have the fairly common task of finding who has files open on our Linux (Ubuntu) file server in our Windows environment. We use Samba on the network, and I use PuTTY from my workstation to establish a shell window to run bash scripts. I have been using something like this to find which files are open (it returns a list of process ids with each open file):

        Robert:$ sudo lsof | grep "/srv/office/some/folder"

    Then I follow up with something like this to show who owns the process (it returns the name of the machine on the network, via IPv4, that owns the process):

        Robert:$ sudo lsof -p 27295 | grep "IPv4"

    Now I know the Windows client that has a file open and can take action from there. As you can tell, this is not difficult, but it is time consuming. I would prefer a Windows application I can run that would just give me what I want. So I have been thinking about creating some process I can run on Linux that listens on a port and returns a clean list of all open files with the IP address of the host that has each file open, plus a small Windows client application that sends the request on that port. It seems like this should be a very common need, but I cannot find anything like this that has been done before. Any suggestions?
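
    A hedged shortcut for the manual half of this: Samba's own smbstatus already lists locked files and the client sessions holding them, and lsof can combine both steps in one pass (the folder below is the example above):

        # Samba's view: current locks, plus the sessions/machines section
        sudo smbstatus -L

        # lsof in one pass: -t prints only PIDs, then list each PID's IPv4 peers
        for pid in $(sudo lsof -t +D /srv/office/some/folder); do
            sudo lsof -a -p "$pid" -i 4
        done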

  • OpenBSD logins via SSH seem to be ignoring my configured radius server

    - by Steve Kemp
    I've installed and configured a radius server on my localhost - it is delegating auth to a remote LDAP server. Initially things look good: I can test via the console:

        # export user=skemp
        # export pass=xxx
        # radtest $user $pass localhost 1812 $secret
        Sending Access-Request of id 185 to 127.0.0.1 port 1812
                User-Name = "skemp"
                User-Password = "xxx"
                NAS-IP-Address = 192.168.1.168
                NAS-Port = 1812
        rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=185,

    Similarly I can use the login tool to do the same thing:

        bash-4.0# /usr/libexec/auth/login_radius -d -s login $user radius
        Password: $pass
        authorize

    However, remote logins via SSH are failing, and so are invocations of "login" started by root. Looking at /var/log/radiusd.log I see no actual log of success/failure, which I do see when using either of the previous tools. Instead sshd is just logging:

        sshd[23938]: Failed publickey for skemp from 192.168.1.9
        sshd[23938]: Failed keyboard-interactive for skemp from 192.168.1.9 port 36259 ssh2
        sshd[23938]: Failed password for skemp from 192.168.1.9 port 36259 ssh2

    In /etc/login.conf I have this:

        # Default allowed authentication styles
        auth-defaults:auth=radius:

        ...

        radius:\
                :auth=radius:\
                :radius-server=localhost:\
                :radius-port=1812:\
                :radius-timeout=1:\
                :radius-retries=5:
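
    One hedged guess to check against login.conf(5): OpenBSD supports per-service authentication capabilities of the form auth-<service>, and sshd authenticates as the "ssh" service, so an explicit style for it may be needed alongside the default - a sketch only, with the capability name to be verified in the man page:

        # in /etc/login.conf (assumption - confirm auth-ssh in login.conf(5))
        auth-defaults:auth=radius:auth-ssh=radius: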

  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 it is connected to has issues with the NIC in my server. The NIC appears not to have been configured fully and is coming up 100 half-duplex, while the 3548 ports are all set to 100 full. Ideally I'd like to have the server set for 100 full, and I have been attempting to configure it using ndd commands, with no results so far. The following command shows the state:

        -bash-3.00# dladm show-dev
        rtls0    link: unknown    speed: 100 Mbps    duplex: unknown

    The NIC shows up as:

        pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
        Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

    which should be configurable. I have modified the configuration file from auto config (5) to 100 fdx (4) to no avail. If there is no other choice, I could set the Cisco 3548 to 100 half-duplex, but that solution causes a huge performance loss: throughput is currently about 500 Kbps, when it should be around 40 Mbps.
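
    A hedged ndd sketch - link parameter names vary per driver, and the rtls driver may not expose them at all, which would also explain the "unknown" fields above. Listing the supported tunables first tells you whether forcing the link is even possible:

        # what does the driver actually expose?
        ndd /dev/rtls0 '?'

        # the classic Solaris sequence to force 100/full, where supported
        ndd -set /dev/rtls0 adv_autoneg_cap 0
        ndd -set /dev/rtls0 adv_100fdx_cap 1
        ndd -set /dev/rtls0 adv_100hdx_cap 0

    For what it's worth, ~500 Kbps on a 100 Mbps link is the classic signature of a duplex mismatch (one side forced full, the other autonegotiating down to half), so getting both sides to agree either way should recover most of the throughput.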

  • Why are Microsoft Windows updates taking so long to install?

    - by Mathieu Pagé
    Hi, I have a question that is not related to a problem I have; it's just something I'd like to understand. Why are Windows updates so slow? First, Windows Update needs to find which updates you need, and this takes about 5 minutes. What is happening behind the scenes during those 5 minutes? I would have thought it would be enough to compare the updates you already have against the complete list of updates, or to check the version numbers of a couple of files. Then, when it comes time to install the upgrades, they also take a long time: some 1 MB updates take 2, 3 or 5 minutes to install. What is taking so long? I would have thought it was simply a matter of backing up the old file, uncompressing the new files, and replacing the old file, which should be really fast. Is Windows doing something else? For comparison, under Linux you can find which updates you need in about 20 seconds, and installing them is usually pretty fast (the time to uncompress the files). I can do a complete upgrade of my Linux machine in about 25 minutes (download 600-800 MB of updates, hundreds of them, and install them), while under Windows 25 minutes is the time it needs just to find which updates are needed and install about 5-10 of them. I just updated a Windows XP Home machine from SP1a to SP3 plus all other updates, and it took me more than 3 hours; doing something like that in the Linux world takes about 30 minutes. I don't want to bash Microsoft here. I genuinely want to know what they do differently that makes it so long.

  • Advanced Linux file permission question (ownership change during write operation)

    - by Kent
    By default the umask is 0022:

        usera@cmp$ touch somefile; ls -l
        total 0
        -rw-r--r-- 1 usera usera 0 2009-09-22 22:30 somefile

    The directory /home/shared/ is meant for shared files and should be owned by root and the shared group. Files created there by any user are automatically owned by the shared group. There is a cron job taking care of changing the owning user and owning group (of any moved files) once per day:

        usera@cmp$ cat /etc/cron.daily/sharedscript
        #!/bin/bash
        chown -R root:shared /home/shared/
        chmod -R 770 /home/shared/

    I was writing a really large file to the shared directory. It had me (usera) as owning user and the shared group as group owner. During the write operation the cron job ran, and I still had no problem completing the write process. You see, I thought this would happen: I am writing the file, and its permissions and ownership look like this:

        -rw-r--r-- usera shared

    The cron job kicks in! The chown line is processed, and now the file is owned by the root user and the shared group. As the owning group only has read access to the file, I should get a write error! Boom! End of story. Why did the operation succeed? A link to some kind of reference documentation to back up the reason would be very welcome (as I could use it to study more details).
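
    The short answer is standard POSIX semantics: permissions are checked once, when the file is opened, and the file descriptor keeps its access mode no matter how ownership or mode change afterwards (open(2) is the reference to study). A quick shell demonstration, with a throwaway file name:

        exec 3>>bigfile                # open for append - the permission check happens here
        sudo chown root:root bigfile
        sudo chmod 000 bigfile         # nobody could open it now...
        echo "still writable" >&3      # ...but the already-open descriptor still writes fine
        exec 3>&-                      # close it; a fresh open would fail with EACCES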

  • Procmail Postfix issue

    - by Blucreation
    Our server runs CentOS and uses Postfix:

        Nov 1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected]
        Nov 1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]>
        Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active)
        Nov 1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir)
        Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed
        Nov 1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42]

    I have this in my /etc/postfix/main.cf:

        mailbox_command = /usr/bin/procmail -a "$EXTENSION"

    My /etc/procmailrc contains:

        PATH="/usr/bin"
        SHELL="/bin/bash"
        LOGFILE="/var/log/procmail.log"
        VERBOSE="YES"
        LOG="#TEST#"

    I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this:

        procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null

    I get entries in my log file, so I know procmail itself works. Am I doing something wrong? Am I missing something? I eventually want my rule to call a PHP script only if the subject contains "SUPPORT TICKET" and the recipient is "[email protected]", but that's for once this issue is solved.
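
    One hedged observation from the log itself: delivery is going through postfix/virtual (relay=virtual), and mailbox_command is only honored by the local(8) delivery agent, so procmail never runs for virtual-mailbox recipients - that alone would explain the empty log. For the eventual goal, the rule would look something like this sketch (domain and script path are hypothetical):

        :0
        * ^Subject:.*SUPPORT TICKET
        * ^To:.*support@yourdomain\.com
        | /usr/bin/php -f /path/to/ticket-handler.php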

  • Error when starting qpidd as a service

    - by Sparks
    I have recently swapped from CentOS 5 to Fedora 17. Previously I have created my own init.d scripts successfully (albeit not for qpidd); however, in Fedora I cannot get it to work. I have created the following script (called qpidd) in the init.d directory:

        #!/bin/bash
        #
        # /etc/rc.d/init.d/qpidd
        #
        # QPID/AMQP Broker scripts
        #
        # chkconfig: 2345 20 80
        # description: QPID/AMQP Broker service
        # processname: qpidd
        # pidfile: /var/lock/subsys/qpidd

        # Source function library.
        . /etc/init.d/functions

        SERVICENAME=qpidd

        start() {
            echo -n "Starting $SERVICENAME: "
            daemon qpidd -d &
            retval=$?
            touch /var/lock/subsys/$SERVICENAME
            return $retval
        }

        stop() {
            echo -n "Shutting down $SERVICENAME: "
            qpidd -q &
            retval=$?
            rm -f /var/lock/subsys/$SERVICENAME
            return $retval
        }

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            status)
                status qpidd
                ;;
            restart)
                stop
                start
                ;;
            condrestart)
                [ -f /var/lock/subsys/<service> ] && restart || :
                ;;
            *)
                echo "Usage: $SERVICENAME {start|stop|status|restart}"
                exit 1
                ;;
        esac
        exit $?

    After this, I ran chkconfig --add qpidd. However, now when I run sudo service qpidd start I get the following message:

        Starting qpidd (via systemctl): Job failed. See system journal and 'systemctl status' for details.

    If I then run systemctl status qpidd I get:

        Failed to issue method call: Unit name qpidd is not valid.

    I am now lost. I have searched the web and Stack Overflow but cannot find anybody with a similar problem. Any help or direction to a website that can help would be much appreciated. Sparks :)
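
    Fedora 17 is systemd-based, and the error ("Unit name qpidd is not valid") hints that systemd wants a unit name like qpidd.service. A native unit file is usually less fragile than a SysV script routed through systemctl - a minimal sketch, with the binary path as an assumption (check `which qpidd`):

        # /etc/systemd/system/qpidd.service
        [Unit]
        Description=QPID/AMQP Broker
        After=network.target

        [Service]
        # assumption: adjust the paths to your installation
        ExecStart=/usr/sbin/qpidd
        ExecStop=/usr/sbin/qpidd -q

        [Install]
        WantedBy=multi-user.target

    Then reload and start it: systemctl daemon-reload && systemctl start qpidd.service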

  • How to run RCU from the command line

    - by Kevin Smith
    When I was trying to figure out how to run RCU on 64-bit Linux, I found this post. It shows how to run RCU from the command line. It didn't actually work for me, so you can see my post on how to run RCU on 64-bit Linux. But seeing how to run RCU from the command line got me thinking about using it to create the schema for WebCenter Content. That post got me part of the way there, since it shows how to run RCU silently from the command line, but to do this you need to know the name of the RCU component for WebCenter Content. I poked around in the RCU files and found that the component name for WCC is CONTENTSERVER11. There is a contentserver11 directory in rcuHome/rcu/integration, and when you look at the contentserver11.xml file you will see:

        <RepositoryConfig COMP_ID="CONTENTSERVER11">

    With the component name for WCC in hand, I was able to use this command line to run RCU and create the schema for WCC:

        .../rcuHome/bin/rcu -silent -createRepository -databaseType ORACLE \
            -connectString localhost:1521:orcl1 -dbUser sys -dbRole sysdba \
            -schemaPrefix TEST -component CONTENTSERVER11 -f <rcu_passwords.txt

    To make the silent part work and not have it prompt you for the passwords needed (the sys password and a password for each schema), you use the -f option and specify a file containing the passwords, one per line, in the order the components are listed on the -component argument. Here is the output from RCU when I ran the above command:

        Processing command line ....
        Repository Creation Utility - Checking Prerequisites
        Checking Global Prerequisites
        Repository Creation Utility - Checking Prerequisites
        Checking Component Prerequisites
        Repository Creation Utility - Creating Tablespaces
        Validating and Creating Tablespaces
        Repository Creation Utility - Create
        Create in progress.
        Percent Complete: 0
        ...
        Percent Complete: 100
        Repository Creation Utility: Create - Completion Summary
        Database details:
        Host Name    : localhost
        Port         : 1521
        Service Name : ORCL1
        Connected As : sys
        Prefix for (prefixable) Schema Owners : TEST
        RCU Logfile  : /u01/app/oracle/logdir.2012-09-26_07-53/rcu.log
        Component schemas created:
        Component                              Status   Logfile
        Oracle Content Server 11g - Complete   Success  /u01/app/oracle/logdir.2012-09-26_07-53/contentserver11.log
        Repository Creation Utility - Create : Operation Completed

    This works fine if you want to use the default tablespace sizes and options, but there does not seem to be a way to specify the tablespace options on the command line. You can specify the name of the tablespace and temp tablespace, but they must already exist in the database before running RCU. I guess you can always create the tablespaces first using your desired sizes and options and then run RCU and specify the tablespaces you created. When looking up the command line options in the RCU doc, I found it has the list of components for each product it supports; see Appendix B in the RCU User's Guide.
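
    For the tablespace limitation, a hedged sketch consistent with the workaround above: create the tablespaces yourself first, with whatever sizes and options you want, then name them when RCU runs. The names, paths, and sizes below are examples only:

        sqlplus / as sysdba <<'SQL'
        CREATE TABLESPACE test_ocs DATAFILE
            '/u01/app/oracle/oradata/orcl1/test_ocs01.dbf'
            SIZE 500M AUTOEXTEND ON NEXT 100M;
        CREATE TEMPORARY TABLESPACE test_ocs_temp TEMPFILE
            '/u01/app/oracle/oradata/orcl1/test_ocs_temp01.dbf'
            SIZE 100M AUTOEXTEND ON NEXT 50M;
        SQL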

  • Optimize shell and awk script

    - by bryan
    I am using a combination of a shell script, an awk script and a find command to perform multiple text replacements in hundreds of files. The file sizes vary between a few hundred bytes and 20 kbytes. I am looking for a way to speed this up. I am using cygwin.

    The shell script:

        #!/bin/bash
        if [ $# = 0 ]; then
            echo "Argument expected"
            exit 1
        fi
        while [ $# -ge 1 ]
        do
            if [ ! -f $1 ]; then
                echo "No such file as $1"
                exit 1
            fi
            awk -f ~/scripts/parse.awk $1 > ${1}.$$
            if [ $? != 0 ]; then
                echo "Something went wrong with the script"
                rm ${1}.$$
                exit 1
            fi
            mv ${1}.$$ $1
            shift
        done

    The awk script (simplified):

        #! /usr/bin/awk -f
        /HHH.Web/{
            if ( index($0,"Email") == 0) {
                sub(/HHH.Web/,"HHH.Web.Email");
            }
            printf("%s\r\n",$0);
            next;
        }

    The command line:

        find . -type f | xargs ~/scripts/run_parser.sh
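
    On cygwin the expensive part is usually process creation: the pipeline above forks a shell plus an awk per file. A hedged sketch of the usual cure - one awk process visits every file and writes FILE.tmp results, which a single loop then swaps in. The awk body here approximates the simplified rule (it rewrites every line, substituting on the matching ones), so it would need adapting to the full script:

        #!/bin/bash
        # one awk pass over all files; FNR==1 bookkeeping keeps only one output open
        find . -type f ! -name '*.tmp' -exec awk '
            FNR == 1 { if (out != "") close(out); out = FILENAME ".tmp" }
            {
                if ($0 ~ /HHH\.Web/ && index($0, "Email") == 0) sub(/HHH\.Web/, "HHH.Web.Email")
                printf("%s\r\n", $0) > out
            }
        ' {} +
        find . -type f -name '*.tmp' | while IFS= read -r f; do
            mv -- "$f" "${f%.tmp}"
        done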

  • Millions of files in php's tmp error - how to delete?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I can ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command, but that didn't work. ls 'sess_0*' | xargs rm didn't work either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, php5, ISPConfig, SuEXEC and Fast-CGI.
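
    A hedged sketch of the standard approach for monster directories: let find unlink the files itself, with no rm, no xargs, and no argument list to overflow, restricted to plain files so it never touches "." or the directory itself:

        cd ~/tmp
        find . -maxdepth 1 -type f -name 'sess_*' -delete    # GNU find
        # if -delete is unavailable, batch through -exec ... + instead:
        find . -maxdepth 1 -type f -name 'sess_*' -exec rm -f {} +

    Expect it to run for a long time either way: the ext3 "Directory index full" warning means the directory itself is enormous, and every unlink has to update it.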

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution?

    FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason; I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file for the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get the same errors as above. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits of Duplicati/Duplicity but actually works across platforms?
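
    Since Duplicity is already part of the stack and runs natively on Linux, a hedged sketch of restoring with it directly (the backend URL and paths are hypothetical):

        # full restore of a backup set to a local directory
        duplicity restore sftp://backup@host/backups/snapshot /tmp/restore-target

        # or a single file out of the set
        duplicity restore --file-to-restore blahblah/2005-11-07.tar.gz \
            sftp://backup@host/backups/snapshot /tmp/2005-11-07.tar.gz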

  • Cannot connect to Solaris Server when Oracle GoldenGate process uses the port

    - by Abdallah Ghrb
    I'm trying to test Oracle GoldenGate replicating data between two databases on the same server, so I installed two databases and two GoldenGate homes on the same machine. The GoldenGate processes are started from each home and are responsible for the replication: the process from home 1 is configured on port 7809, and the process from home 2 on port 7810. For a successful replication, processes started from GoldenGate home 1 should communicate with processes started from GoldenGate home 2, but for some reason this is not happening. The GoldenGate log file has the following error:

        OGG-01223 Oracle GoldenGate Capture for Oracle, exthrr.prm: TCP/IP error 131 (Connection reset by peer).

    "Googling" for this error, it said that the connection occurred but the host terminated it. I tried to telnet the machine on the port in use, and it gave the following:

        bash-3.00$ telnet 10.10.3.124 7810
        Trying 10.10.3.124...
        Connected to 10.10.3.124.
        Escape character is '^]'.
        Connection to 10.10.3.124 closed by foreign host.

    Here the communication occurs, but it is closed by the host after only around 3 seconds, which matches the explanation of the error in the GoldenGate log file. I tried changing the port, but I get the same error. The problem happens only when the GoldenGate process is using the port; when another process uses the same port, I can telnet successfully.

  • Simultaneous process mysteriously ending

    - by Matt
    I'm trying to run a large air quality model, written in FORTRAN, set up with bash scripts, and run in a work queue (Slurm). The first part of the modeling is to run an "entry" model; this runs with MPI in the work queue, but only on one process. At one point in the logs there's a mysterious FORTRAN STOP, and later the model fails because something wasn't set up properly. This FORTRAN STOP isn't from the main process, which continues running. This is a huge model, but as far as I know there should not be any other processes running at the same time. It consistently fails at the exact same spot. (I can move it by adding debug, but the debug is in the main process.) How can I determine what this process is? I've tried adding a call to strace -feprocess $SHELL in the run script, but I'm new to this, so if it has offered any info, I haven't been able to use it yet. There is no trace output around the FORTRAN STOP. The whole thing happens so fast that I can't observe it with ps. Is there a way I can monitor all the processes being initiated from the time the work queue starts? Or some other way I can figure out what is failing? This is running on CentOS 6.4, with Slurm, compiled with PGI 13.
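
    A hedged strace recipe for exactly this situation: follow forks and write one trace file per PID, so even a short-lived process leaves a file behind (the wrapper script name is hypothetical):

        # -ff with -o prefix => one file per process: /tmp/model.<pid>
        strace -ff -e trace=process -o /tmp/model ./run_entry_model.sh

        # afterwards: which PIDs exec'd what, and how did each exit?
        grep -H execve /tmp/model.* | less

    -e trace=process limits the log to process-management calls (fork, execve, exit, wait), which keeps the per-PID files small enough to read.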

  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15    - start or stop process definition within the boot process
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment for the process to be started
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
        start)
            echo -n "Starting Conquest DICOM server: "
            cd $CONQ_DIR && daemon --user mruser ./dgate -v    # - Starts only one process of a given name.
            echo
            touch /var/lock/subsys/conquest
            ;;
        stop)
            echo -n "Shutting down Conquest DICOM server: "
            killproc conquest
            echo
            rm -f /var/lock/subsys/conquest
            rm -f /var/run/conquest.pid    # - Only if process generates this file
            ;;
        status)
            status conquest
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        reload)
            echo -n "Reloading process-name: "
            killproc conquest -HUP
            echo
            ;;
        *)
            echo "Usage: $0 {start|stop|restart|reload|status}"
            exit 1
            ;;
        esac
        exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
                                                                   [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system that uses start-stop-daemon with the --chdir option to point at where dgate lives, but I haven't found a way to do this with the Red Hat daemon function.
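
    A hedged workaround: with --user, the Red Hat daemon() helper ends up running the command through a fresh shell for that user, which is plausibly why the script's own cd doesn't stick. Passing the cd inside a single quoted command string usually does:

        start)
            echo -n "Starting Conquest DICOM server: "
            # assumption: the whole quoted string runs in mruser's shell, cd included
            daemon --user mruser "cd $CONQ_DIR && ./dgate -v"
            echo
            touch /var/lock/subsys/conquest
            ;;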

  • How to configure a shortcut for an SSH connection through a SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection with server A or B with my username JOHNDOE; then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So each time I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or when I need to quickly open multiple connections. I have configured an SSH key, and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration in order to type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems right now I can't use netcat because it's not installed on the gateway server.

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
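
    For reference, a sketch of the ProxyCommand setup under discussion: the first form needs nc installed on the gateway, while the second (OpenSSH 5.4 or newer on the client, so newer than the 5.1 shown above) needs nothing extra on the gateway:

        # ~/.ssh/config
        Host foo
            User webby
            ProxyCommand ssh johndoe@a nc -w 3 %h %p
            # with a 5.4+ client, use this instead - no netcat needed on the gateway:
            # ProxyCommand ssh johndoe@a -W %h:%p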

  • Login with Enterprise Principal Name using sssd AD backend in Ubuntu 14.04 LTS

    - by Vinícius Ferrão
    I’m running sssd version 1.11 with the AD backend in Ubuntu 14.04 LTS (1.11.5-1ubuntu3) to authenticate users against Active Directory running on Windows Server 2012 R2, and I’m trying to achieve logins with the User Principal Name for all users of the domain. But the UPNs are always Enterprise Principal Names. Let me illustrate the problem with my user account:

        Domain: local.example.com
        sAMAccountName: ferrao
        UPN: [email protected] (there’s no "local" in the UPN)

    I can successfully log in with the sAMAccountName attribute, which is fine, but I can’t log in with [email protected], which is my UPN. The optimal solution for me is to allow logins with both the sAMAccountName and the UPN (User Principal Name). If that’s not possible, the UPN should be the right way, rather than the sAMAccountName. Another annoyance is the homedir pattern with these options in sssd.conf:

        default_shell = /bin/bash
        fallback_homedir = /home/%d/%u

    What I would like to achieve is home directories separated by the EPN. For example:

        /home/example.com/user
        /home/whatever.example.com/user

    But with this pattern I can’t map the way I would like to. I’ve looked through man pages and was unable to find any answers for these issues. Thanks,

  • Columbus Regional Airport Authority Cuts Unbudgeted Carryover Costs for Capital Projects by 88% in One Year

    - by Melissa Centurio Lopes
    The Columbus Regional Airport Authority (CRAA) is a public entity that works to connect Central Ohio with the world. It oversees operations at three airports (Port Columbus International Airport, Rickenbacker International Airport, and Bolton Field Airport) and manages the Rickenbacker Inland Port and Foreign Trade Zone #138. It was created in 2002 through the merger of the Columbus Airport Authority and Rickenbacker Port Authority. CRAA manages approximately 100 projects annually, including initiatives as diverse as road and runway construction and maintenance, terminal improvements, construction of a new air traffic control tower, technology infrastructure development, customer service projects, and energy conservation programs.

    CRAA deployed Oracle's Primavera P6 Enterprise Project Portfolio Management to create a unified methodology for scheduling and capital cash flow management. Today, the organization manages schedules and costs for all of its capital projects by using Primavera to provide enterprise-wide visibility. As a result, CRAA cut unbudgeted carryover costs from US$24.4 million in 2010 to US$3.5 million in 2011, an 88% improvement.

    "Oracle's Primavera P6 and Primavera Contract Management are transforming project management at CRAA. We have enabled resource-loaded scheduling and expanded visibility into cash flow, which allowed us to reduce unbudgeted carryover by 88% in a single year." – Alex Beaver, Manager, Project Controls Office, Columbus Regional Airport Authority

    Challenges

      • Standardize project planning and management for the approximately 100 projects (from airport terminal upgrades to road and runway creation and rehabilitation) that the airport authority undertakes annually
      • Improve control over project scheduling and budgets to reduce unplanned carryover costs from one fiscal year to the next
      • Ensure on-time, on-budget completion of critical infrastructure projects that support the organization's mission to connect Central Ohio with the world through its three airports and inland port

    Solutions

      • Used Primavera P6 Enterprise Project Portfolio Management to develop a unified methodology for scheduling and managing capital projects for the airport authority, including the organization's largest capital project ever, a five-year runway construction project
      • Gained a single, consolidated view into the organization's capital projects and the ability to drill down into resource-loaded schedules and cash flow, enabling CRAA to take action earlier to avert the impact of emerging issues, including budget overages and project delays
      • Cut unbudgeted carryover costs from US$24.4 million in 2010 to US$3.5 million in 2011, an 88% improvement

    “Oracle’s Primavera solutions are the industry standard for project management. They provide robust and proven functionality that gives us the power to effectively schedule and manage budgets for a wide range of projects, from terminal maintenance, to runway work, to golf course redesign,” said Alex Beaver, manager, project controls office, Columbus Regional Airport Authority.

  • Insurers Pushed to Transform Their Business

    - by Calvin Glenn
    Everyone in the P&C industry has heard it: “We can’t do it.” “Nobody wants to do it.” “We can’t afford to do it.” Unfortunately, what they’re referencing are the reasons many insurers are still trying to maintain their business processing on legacy policy administration systems, attempting to bide time until there is no other recourse but to give in, bite the bullet, and take on the monumental task of replacing an entire policy administration system (PAS). Just the thought of that project sends IT, business users and management reeling. However, is that fear real? It is a bit daunting when one realizes that a complete policy administration system replacement will touch almost every function an insurer manages, from quoting and rating, to underwriting, distribution, and even customer service. On top of that, everyone has heard at least one horror story about a transformation initiative that far exceeded its budget and its promised implementation/go-live timeline.

    But does it have to be that hard? Surely, in an age where a person can voice-activate their DVR from a cell phone, there has to be someone somewhere who’s figured out how to simplify this process: how to help insurers of all sizes transform and grow their business while also delivering on their overall objectives of speed to market, straight-through processing for applications, quoting, underwriting, and simplified product development. Maybe we’re looking too hard and the answer is simple and straightforward. Why replace the entire machine when all it really needs is a new part: a single enterprise rating system? This core, modular piece of the policy administration system is the foundation of product development and rate management, enabling insurers to provide the right product at the right price to the right customer through the best channels at any given moment in time. The real benefit of a single enterprise rating system is the ability to deliver enhanced business capabilities, such as improved product management, streamlined underwriting, and speed to market. With these benefits, carriers accomplish a portion of their overall transformation goal, and lessons learned from the rating project can be applied to the bigger, down-the-road PAS project to support the successful completion of the overall transformation endeavor.

    At the recent Oracle OpenWorld Conference in San Francisco, attendees heard about a recent “go-live” project from an Oracle Insurance Tier 1 insurer that did what is proposed above: it replaced just the rating portion of its legacy policy administration system with Oracle Insurance Insbridge Rating and Underwriting. This change gave the insurer greater flexibility to set rates that better reflect risk while enabling the company to support its market-segment strategy. Using the Oracle Insurance Insbridge enterprise rating solution, the insurer reduced processing time for agents and underwriters, gained the ability to support proprietary rating models, and improved pricing accuracy. There is mounting pressure on P&C insurers to produce growth and show net profitability in the midst of modest overall industry growth, large weather-related losses and intensifying competition for market share. Insurers are also being asked to improve customer service, offer a differentiated value proposition and simplify insurance processes.

    While the demands are many, there is an easy answer: invest in and update the most mission-critical application in your arsenal, the single enterprise rating system. Download the podcast to listen to “Stand-Alone Rating Engine - Leading Force Behind Core Transformation Projects in the P&C Market,” originally recorded in October 2013.

    Related Resources:

      • White Paper: Stand-Alone Rating Engine: Leading Force Behind Core Transformation Projects in the P&C Market
      • Webcast On Demand: Stand-Alone Rating Engine and Core Transformation for P&C Insurers

    Don’t forget to keep up with us year-round:

      • Facebook: www.facebook.com/oracleinsurance
      • Twitter: www.twitter.com/oracleinsurance
      • YouTube: www.youtube.com/oracleinsurance

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl. Really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find - the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU components (and maybe the Remote Connectivity components), will it run nicely from a shared location? To head off some questions: Yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS, and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

  • mysql command line not working

    - by Sandeepan Nath
    I have MySQL running on my Fedora system. I have XAMPP set up on the system, and the PHP projects present in the webspace are working fine. phpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows mysql enabled. But running the mysql connect command:

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that MySQL is installed. What exactly do I have to do?

    Additional details: This system was someone else's, and he is not available here. Maybe PHP/MySQL was already set up on the system. I just freshly extracted XAMPP for Linux into /opt/lampp/ and put all the above mentioned things (PHP projects and phpMyAdmin) there. After doing that I had a socket problem (phpMyAdmin was not working and showing this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart, but the problem remained. Then, after turning on the system today, I started lampp and everything worked just fine: no project issues anymore, only command-line MySQL not working.
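
    A hedged guess given the layout described: XAMPP for Linux ships its own client binaries under /opt/lampp/bin, and the shell simply doesn't have them on its PATH. Something like this would confirm and then fix it:

        /opt/lampp/bin/mysql -u username -p      # works? then it is only a PATH problem
        echo 'export PATH=$PATH:/opt/lampp/bin' >> ~/.bash_profile
        source ~/.bash_profile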

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"

        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032

        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
            case $x in
                SERVERNAME*)
                    eval $x
                    echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
                    ;;
            esac;
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 \
            --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart \
            --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
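
    One hedged thing to test, not a confirmed fix: the %include is only parsed after the loader has already brought networking up for the url install method, and on RHEL/CentOS 6 the loader asks about TCP/IP unless the boot line answers for it. Adding the network answers to the kernel arguments may silence the prompt:

        --extra-args "ks=file:/vm_kickstart ksdevice=eth0 ip=dhcp SERVERNAME=zabbix"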

  • Ubuntu init.d script not being called on startup

    - by Mike
    I've got a script on Ubuntu 9.04 in init.d that I've set to run on startup with update-rc.d, using update-rc.d init_test defaults 99. All of the symlinks are there and the permissions appear to be correct:

        -rwxr-xr-x 1 root root 642 2010-10-28 16:44 init_test

        mike@xxxxxxxxxx:~$ find /etc -name S99* | grep init_test
        find: /etc/rc5.d/S99init_test
        find: /etc/rc4.d/S99init_test
        find: /etc/rc2.d/S99init_test
        find: /etc/rc3.d/S99init_test

    The script runs through source and ./ without issue and behaves correctly. Here is the source of the script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          init test script
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        start() {
            echo "hi"
            echo "start called" >> /tmp/test.log
            return
        }

        stop() {
            echo "Stopping"
        }

        echo "Script called" >> /tmp/test.log

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            *)
                echo "Usage: {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $?

    When the machine starts, I don't see "Script called" or "start called" in the test log at all. Is there anything obvious I'm messing up?
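
    A hedged way to find out whether the script runs at boot at all: make it trace itself before doing anything else, and prefer a path that survives boot-time cleanup, since some setups wipe /tmp on boot (which alone could explain the missing log lines):

        #!/bin/bash
        exec >> /var/tmp/init_test.trace 2>&1   # capture stdout+stderr of every boot run
        set -x                                  # log each command as it executes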

  • init.d script runs correctly but process doesn't live when booted fully up

    - by thetrompf
    I have a problem with an init.d script:

        #!/bin/bash
        ES_HOME="/var/es/current"
        PID=$(ps ax | grep elasticsearch | grep $ES_HOME | grep -v grep | awk '{print $1}')
        #echo $PID
        #exit 0

        case "$1" in
            start)
                if [ -z "$PID" ]; then
                    echo "Starting Elasticsearch"
                    echo "Starting Elasticsearch" >> /var/tmp/elasticsearch
                    su -m elasticsearch -c "${ES_HOME}/bin/elasticsearch"
                    exit 0;
                else
                    echo "Elasticsearch already running"
                    echo "Elasticsearch already running" >> /var/tmp/elasticsearch
                    exit 0;
                fi
                ;;
            stop)
                if [ -n "$PID" ]; then
                    echo "Stopping Elasticsearch"
                    kill ${PID}
                    echo "Stopped Elasticsearch"
                    exit 0;
                else
                    echo "Elasticsearch is not running"
                    exit 0;
                fi
                ;;
        esac

    The script runs just fine; as I can see in /var/tmp/elasticsearch, a new line is added after every boot. But if I run /etc/init.d/elasticsearch stop just after the server is booted, I get "Elasticsearch is not running" - ergo, somehow the process does not stay alive. My question is why, and what am I doing wrong? Thanks in advance.
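
    A hedged sketch of a sturdier start/stop: parsing ps output is fragile at boot time, and elasticsearch can record its own PID with -p, so stop can use the file instead. (If the daemon really does die right after boot, its own log directory - under $ES_HOME/logs in many setups, an assumption - is the place that should say why.)

        # start: let elasticsearch daemonize and write its own pidfile
        su -m elasticsearch -c "${ES_HOME}/bin/elasticsearch -p /var/run/elasticsearch.pid"

        # stop: kill the recorded PID instead of grepping ps
        [ -f /var/run/elasticsearch.pid ] && kill "$(cat /var/run/elasticsearch.pid)"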
