Search Results

Search found 37604 results on 1505 pages for 'build script'.

Page 245/1505 | < Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >

  • ODI 11g – Scripting Repository Creation

    - by David Allan
    Here’s a quick post on how to create both master and work repositories in one simple dialog; it uses the Groovy capabilities in ODI 11g and the Groovy Swing builder components. If you want more or less, take the Groovy script and change it; it's easy stuff. The Groovy script odi_create_repos.groovy is here; just open it in ODI before connecting and you will be able to create both master and work repositories with ease. Or check the Groovy out and script your own automation: you can construct the master, work and runtime repositories, so if you are embedding ODI as your DI engine this may be very useful. When you click 'Create Repository' you will see the following in the log as the master repository starts to be created:

        ======================================================
        Repository Creation Started....
        ======================================================
        Master Repository Creation Started....

    Then the completion message, followed by the work repository creation and the final completion message:

        Master Repository Creation Completed.
        Work Repository Creation Started.
        Work Repository Creation Completed.
        ======================================================
        Repository Creation Completed Successfully
        ======================================================
        Script exited.

    If any error is hit, the script just exits and prints the error to the log. For example, if I enter no passwords, I get this error:

        ======================================================
        Repository Creation Started....
        ======================================================
        Master Repository Creation Started....
        ======================================================
        Repository Creation Complete in Error
        ======================================================
        oracle.odi.setup.RepositorySetupException: oracle.odi.core.security.PasswordPolicyNotMatchedException: ODI-10189: Password policy MinPasswordLength is not matched.
        ======================================================
        Script exited.

    This is another example of using the ODI 11g SDK, showing how to automate the construction of your data integration environment. The main interfaces and classes used here are IMasterRepositorySetup / MasterRepositorySetupImpl and IWorkRepositorySetup / WorkRepositorySetupImpl.

    Read the article

  • postfix check warns me that some files differ

    - by Nicolas BADIA
    If I run postfix check on my Debian Squeeze server, I get this:

        postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_nisplus-2.11.3.so and /lib/libnss_nisplus-2.11.3.so differ
        postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_files-2.11.3.so and /lib/libnss_files-2.11.3.so differ
        postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_compat-2.11.3.so and /lib/libnss_compat-2.11.3.so differ
        postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_hesiod-2.11.3.so and /lib/libnss_hesiod-2.11.3.so differ
        postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_nis-2.11.3.so and /lib/libnss_nis-2.11.3.so differ
        postfix/postfix-script: warning: /var/spool/postfix/lib/libnss_dns-2.11.3.so and /lib/libnss_dns-2.11.3.so differ

    Does somebody know a solution to fix this?
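    A common cause (my assumption; the post itself doesn't say) is that a glibc update replaced these NSS libraries while the copies postfix keeps inside its chroot at /var/spool/postfix were left stale. A minimal sketch of a fix; verify the paths against the warnings before running:

        # refresh the chroot copies of the NSS libraries, then restart postfix
        cp /lib/libnss_*-2.11.3.so /var/spool/postfix/lib/
        /etc/init.d/postfix restart
        postfix check    # the warnings should now be gone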

    Read the article

  • Cygwin Python and Windows Ruby

    - by Cheezo
    I have a peculiar setup as follows: I have Cygwin installed on a Windows 7 machine, and I need to execute a Python script set up in Cygwin from the Windows CLI. This works fine: c:\cygwin\bin\python2.6.exe c:\cygwin\bin\python-script. This python-script accesses a file, ~/.some_config_file, which translates to /home/user-name when I execute it from Windows as above. So this works as expected. Now, the next step is to execute this Python script from Ruby (which is set up on Windows natively, without Cygwin). When I execute the script from Ruby, the ~/.some_config_file translates to /cygdrive/c/Users/user-name instead of the expected /home/user-name, leading to the script failing. I understand that something in the environment, PATH etc., needs to be set correctly, although I cannot seem to find what exactly.
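    The symptom suggests (my assumption; the thread doesn't confirm it) that Cygwin's Python expands ~ from the HOME environment variable, which a native Windows Ruby process doesn't set, so Cygwin falls back to the Windows profile path. A sketch of a wrapper, in the same Windows CLI style as the command quoted above, pinning HOME before launching the script:

        REM hypothetical wrapper; adjust the Cygwin home path to your user
        set HOME=c:\cygwin\home\user-name
        c:\cygwin\bin\python2.6.exe c:\cygwin\bin\python-script

    Setting ENV['HOME'] in the Ruby process before spawning the command should have the same effect.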

    Read the article

  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be:

        1. Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up — some of our apps are 500MB or more.)
        2. Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team.
        3. When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight.

    I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight stuff. So my questions:

        1. I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"?
        2. How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project…
        3. Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?
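    For the signing-and-upload step, one approach (a sketch under my own assumptions: the scheme, identity, profile and token names are placeholders, and the upload call uses the TestFlight HTTP API of that era) is a shell build step per Jenkins job, so the three kinds of build become three jobs that differ only in these parameters:

        # Hypothetical Jenkins "Execute shell" step; SCHEME, IDENTITY, PROFILE,
        # API_TOKEN and TEAM_TOKEN must be configured per job.
        xcodebuild -scheme "$SCHEME" -configuration Release -sdk iphoneos build \
            CONFIGURATION_BUILD_DIR="$WORKSPACE/build"
        xcrun -sdk iphoneos PackageApplication -v "$WORKSPACE/build/$SCHEME.app" \
            -o "$WORKSPACE/$SCHEME.ipa" --sign "$IDENTITY" --embed "$PROFILE"
        curl http://testflightapp.com/api/builds.json \
            -F file=@"$WORKSPACE/$SCHEME.ipa" \
            -F api_token="$API_TOKEN" -F team_token="$TEAM_TOKEN" \
            -F notes="Jenkins build $BUILD_NUMBER"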

    Read the article

  • Building Python 3.2.3 on Red Hat 5: missing _posixsubprocess

    - by Oz123
    I am trying to build Python 3 on a RHEL 5.7 machine. I successfully managed to build Python 3.2.2 with:

        # Install required build dependencies
        yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel
        # Fetch and extract source. Please refer to http://www.python.org/download/releases
        # to ensure the latest source is used.
        wget http://www.python.org/ftp/python/3.2/Python-3.2.tar.bz2
        tar -xjf Python-3.2.tar.bz2
        cd Python-3.2
        # Configure the build with a prefix (install dir) of /opt/python3, compile, and install.
        ./configure --prefix=/opt/python3
        make

    But I am failing (?) with Python 3.2.3:

        Failed to build these modules: _posixsubprocess

    Is this a problem that should bother me? How do I build it? I found this patch, but it's not included in the Python 3.2.3 sources I obtained from the website... Applying this patch to my sources didn't solve the problem. Here is the output from stderr:

        ~/tmp/Python-3.2.3 $ make > build.log
        ldd: warning: you do not have execution permission for `/usr/local/lib/libreadline.so'
        /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.so when searching for -lreadline
        /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.a when searching for -lreadline
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c: In function '_close_open_fd_range_safe':
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: 'O_CLOEXEC' undeclared (first use in this function)
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: (Each undeclared identifier is reported only once
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: for each function it appears in.)
        /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz
        /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz
        ~/tmp/Python-3.2.3 $ grep posix build.log
        gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/posixmodule.c -o Modules/posixmodule.o
        ar rc libpython3.2m.a Modules/_threadmodule.o Modules/signalmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/symtablemodule.o Modules/xxsubtype.o
        building '_posixsubprocess' extension
        gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -IInclude -I/home/oznahum/localroot/include -I. -I./Include -I/usr/local/include -I/home/oznahum/tmp/Python-3.2.3 -c /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c -o build/temp.linux-x86_64-3.2/home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.o
        _posixsubprocess
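    The undeclared O_CLOEXEC suggests RHEL 5's glibc headers predate that flag. A possible workaround (my assumption, untested on that exact box; the upstream patch is the cleaner route) is to supply the flag's Linux value before compiling:

        # Sketch: prepend a fallback O_CLOEXEC definition (02000000 is the
        # x86/x86_64 Linux value). Caveat: a 2.6.18-era kernel ignores unknown
        # open() flags, so the module will build but silently lose the
        # close-on-exec guarantee the flag normally provides.
        printf '#ifndef O_CLOEXEC\n#define O_CLOEXEC 02000000\n#endif\n' > /tmp/guard.c
        cat /tmp/guard.c Modules/_posixsubprocess.c > /tmp/_posixsubprocess.c
        mv /tmp/_posixsubprocess.c Modules/_posixsubprocess.c
        make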

    Read the article

  • SQL SERVER – Finding Size of a Columnstore Index Using DMVs

    - by pinaldave
    Columnstore Index is one of my favorite enhancements in SQL Server 2012. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. In the case of row store indexes, multiple pages will contain multiple rows of the columns spanning across multiple pages, whereas in the case of columnstore indexes, multiple pages will contain (multiple) single columns. Columnstore indexes are compressed by default and occupy much less space than a regular row store index. One very common question I often see is the need for a list of the columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with the columnstore index; however, I believe the following script is sufficient to start getting an idea of columnstore index size.

        SELECT OBJECT_SCHEMA_NAME(i.OBJECT_ID) SchemaName,
               OBJECT_NAME(i.OBJECT_ID) TableName,
               i.name IndexName,
               SUM(s.used_page_count) / 128.0 IndexSizeinMB
        FROM sys.indexes AS i
        INNER JOIN sys.dm_db_partition_stats AS S
            ON i.OBJECT_ID = S.OBJECT_ID AND i.index_id = S.index_id
        WHERE i.type_desc = 'NONCLUSTERED COLUMNSTORE'
        GROUP BY i.OBJECT_ID, i.name

    (used_page_count is a count of 8 KB pages, so dividing by 128 yields megabytes.) Here is my introductory article on SQL Server Fundamentals of Columnstore Index. Create a sample columnstore index based on the script described in the earlier article and it will give the following results. Please feel free to suggest improvements to the script so I can further modify it to accommodate enhancements. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ColumnStore Index

    Read the article

  • debian/rules error "No rule to make target"

    - by Hairo
    I'm having some problems creating a .deb file with debuild. After reading some tutorials I managed to make the file, but I always get this error:

        make: *** No rule to make target 'build'. Stop.
        dpkg-buildpackage: failure: debian/rules build gave error exit status 2
        debuild: fatal error at line 1329: dpkg-buildpackage -rfakeroot -D -us -uc -b failed

    Any help? This is my debian/rules file:

        #!/usr/bin/make -f
        # -*- makefile -*-
        # Sample debian/rules that uses debhelper.
        # This file was originally written by Joey Hess and Craig Small.
        # As a special exception, when this file is copied by dh-make into a
        # dh-make output file, you may use that output file without restriction.
        # This special exception was added by Craig Small in version 0.37 of dh-make.
        # Uncomment this to turn on verbose mode.
        #export DH_VERBOSE=1

        build-stamp: configure-stamp
        	dh_testdir
        	touch build-stamp

        clean:
        	dh_testdir
        	dh_testroot
        	rm -f build-stamp configure-stamp
        	dh_clean

        install: build
        	dh_testdir
        	dh_testroot
        	dh_clean -k
        	dh_installdirs
        	$(MAKE) install DESTDIR=$(CURDIR)/debian/pycounter
        	mkdir -p $(CURDIR)/debian/pycounter
        	# Copy .py files
        	cp pycounter.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/pycounter.py
        	cp prefs.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/prefs.py
        	# desktop copyright and others (not complete, check)
        	cp extras-pycounter.desktop $(CURDIR)/debian/pycounter/usr/share/applications/extras-pycounter.desktop
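    The error message itself points at the cause (my reading, not from the thread): dpkg-buildpackage invokes debian/rules with the targets build and binary, and this file defines neither; it also references configure-stamp without providing a rule for it. A minimal sketch of the missing targets, in the same makefile style (recipe lines must start with a tab):

        configure-stamp:
        	touch configure-stamp

        build: build-stamp

        binary: install
        	dh_testdir
        	dh_testroot
        	dh_installdeb
        	dh_gencontrol
        	dh_md5sums
        	dh_builddeb

        .PHONY: build clean binary install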

    Read the article

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: 2 projects are in version control The application The test(s) A significant number of checkins are made to the application daily. CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitive in time (an entire day to compile). The obvious side effect here, is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: If I were to sync with latest, and open up one of the test projects, it would not locally build until I built the application. This is the same when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally, so that the assemblies could be loaded - and then those assemblies wouldn't be latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

    Read the article

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after running the background change, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?
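    A likely culprit (my assumption; the post doesn't confirm it): jobs run by cron have no D-Bus session environment, so gsettings cannot reach the user's session and silently does nothing, while the mkdir still succeeds. A sketch of a preamble that can be added to the script before the gsettings calls:

        # Borrow the D-Bus session address from the running desktop session.
        # Assumes a GNOME session is active; LOGNAME is set by cron.
        PID=$(pgrep -u "$LOGNAME" gnome-session | head -n 1)
        export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS \
            /proc/"$PID"/environ | tr -d '\0' | cut -d= -f2-)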

    Read the article

  • rkhunter: what is the right way to handle warnings?

    - by zuba
    I googled a bit and checked out the first two links it found:

        http://www.skullbox.net/rkhunter.php
        http://www.techerator.com/2011/07/how-to-detect-rootkits-in-linux-with-rkhunter/

    They don't mention what I should do in case of warnings such as these:

        Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        Warning: The file properties have changed:
        File: /usr/bin/lynx
        Current hash: 95e81c36428c9d955e8915a7b551b1ffed2c3f28
        Stored hash : a46af7e4154a96d926a0f32790181eabf02c60a4

    Q1: Are there more extended HowTos which explain how to deal with the different kinds of warnings?

    And the second question: were my actions sufficient to resolve these warnings?

    a) Find the package which contains the suspicious file, e.g. debianutils for the file /bin/which:

        ~ > dpkg -S /bin/which
        debianutils: /bin/which

    b) Check the debianutils package checksums:

        ~ > debsums debianutils
        /bin/run-parts OK
        /bin/tempfile OK
        /bin/which OK
        /sbin/installkernel OK
        /usr/bin/savelog OK
        /usr/sbin/add-shell OK
        /usr/sbin/remove-shell OK
        /usr/share/man/man1/which.1.gz OK
        /usr/share/man/man1/tempfile.1.gz OK
        /usr/share/man/man8/savelog.8.gz OK
        /usr/share/man/man8/add-shell.8.gz OK
        /usr/share/man/man8/remove-shell.8.gz OK
        /usr/share/man/man8/run-parts.8.gz OK
        /usr/share/man/man8/installkernel.8.gz OK
        /usr/share/man/fr/man1/which.1.gz OK
        /usr/share/man/fr/man1/tempfile.1.gz OK
        /usr/share/man/fr/man8/remove-shell.8.gz OK
        /usr/share/man/fr/man8/run-parts.8.gz OK
        /usr/share/man/fr/man8/savelog.8.gz OK
        /usr/share/man/fr/man8/add-shell.8.gz OK
        /usr/share/man/fr/man8/installkernel.8.gz OK
        /usr/share/doc/debianutils/copyright OK
        /usr/share/doc/debianutils/changelog.gz OK
        /usr/share/doc/debianutils/README.shells.gz OK
        /usr/share/debianutils/shells OK

    c) Relax about /bin/which, as I see "/bin/which OK".

    d) Put the file /bin/which into /etc/rkhunter.conf as SCRIPTWHITELIST="/bin/which".

    e) For warnings like the one for the file /usr/bin/lynx, update the checksum with rkhunter --propupd /usr/bin/lynx.cur.

    Q2: Am I resolving such warnings the right way?

    Read the article

  • sshd running but no PID file

    - by dunxd
    I recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true: I am still able to log in to the server via ssh. However, I note the following:

        - There is no longer any PID file at /var/run/sshd.pid; after a reboot this file exists.
        - Once it is gone, restarting sshd via service sshd restart does not create the PID file.
        - sudo service sshd status reports openssh-daemon is stopped; again, restarting sshd does not change this, but a reboot does.
        - sudo service sshd stop reports failed, presumably because of the missing PID file.

    Any idea what is going on?

    Update: sudo netstat -lptun gives the following output relating to port 22:

        tcp        0      0 :::22          :::*           LISTEN      20735/sshd

    Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. Would still like to understand this better. RPM verify suggested by a couple of answerers shows this:

        sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5'
        S.5....T  c /etc/pam.d/sshd
        S.5....T  c /etc/ssh/sshd_config

    /etc/pam.d/sshd has the following contents:

        #%PAM-1.0
        auth       include      system-auth
        account    required     pam_nologin.so
        account    include      system-auth
        password   include      system-auth
        session    optional     pam_keyinit.so force revoke
        session    include      system-auth
        #session   required     pam_loginuid.so

    Should that last line be commented out?

    Update: Here's the output of @YannickGirouard's script:

        $ sudo ./sshd_test
        Searching for the process listening on port 22...
        Found the following PID: 21330
        Command line for PID 21330: /usr/sbin/sshd
        Listing process(es) relating to PID 21330:
        UID        PID  PPID  C STIME TTY          TIME CMD
        root     21330     1  0 14:04 ?        00:00:00 /usr/sbin/sshd
        Listing RPM information about openssh packages:
        Name        : openssh                      Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:50:57 AM GMT      Build Host: builder10.centos.org
        Group       : Applications/Internet        Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 745390                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH implementation of SSH protocol versions 1 and 2
        ------------------------------------------------------
        Name        : openssh-clients              Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT      Build Host: builder10.centos.org
        Group       : Applications/Internet        Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 871132                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH client applications
        ------------------------------------------------------
        Name        : openssh-server               Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT      Build Host: builder10.centos.org
        Group       : System Environment/Daemons   Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 492478                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH server daemon
        ------------------------------------------------------

    However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. Will try again if I am seeing the issue again after the next reboot.

    Update - 14 March: Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script:

        $ sudo ./sshd_test
        Searching for the process listening on port 22...
        Found the following PID: 2208
        Command line for PID 2208: /usr/sbin/sshd
        Listing process(es) relating to PID 2208:
        UID        PID  PPID  C STIME TTY          TIME CMD
        root      2208     1  0 Mar13 ?        00:00:00 /usr/sbin/sshd
        root      1885  2208  0 21:50 ?        00:00:00 sshd: dunx [priv]
        Listing RPM information about openssh packages:
        Name        : openssh                      Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:50:57 AM GMT      Build Host: builder10.centos.org
        Group       : Applications/Internet        Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 745390                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH implementation of SSH protocol versions 1 and 2
        ------------------------------------------------------
        Name        : openssh-clients              Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT      Build Host: builder10.centos.org
        Group       : Applications/Internet        Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 871132                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH client applications
        ------------------------------------------------------
        Name        : openssh-server               Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT      Build Host: builder10.centos.org
        Group       : System Environment/Daemons   Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 492478                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH server daemon
        ------------------------------------------------------

    Again, when I look for /var/run/sshd.pid I don't find it:

        $ cat /var/run/sshd.pid
        cat: /var/run/sshd.pid: No such file or directory
        $ sudo netstat -anp | grep sshd
        tcp        0      0 0.0.0.0:22     0.0.0.0:*      LISTEN      2208/sshd
        $ sudo kill 2208
        $ sudo service sshd start
        Starting sshd:                                             [  OK  ]
        $ cat /var/run/sshd.pid
        3794
        $ sudo service sshd status
        openssh-daemon (pid 3794) is running...

    Is it possible that sshd is restarting and not creating a pidfile for some reason?

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there, for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, and so on. We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC).

    The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, since 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

        - Linux async IO wasn't pretty
        - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
        - system resource usage, in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux?

    A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offered two components:

        - Define an IO interface: allow any IO to the devices to go through ASMLib
        - Define device discovery: implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years, called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access; you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are two components:

        - Oracle ASMLib: the userspace library with config tools (a shared object and some scripts)
        - oracleasm.ko: a kernel module that implements the ASM device under /dev/oracleasm/*

    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic: we only have one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do.

    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say: create an ASM disk label foo on, currently, /dev/sdf. So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, we discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/*, and any and all IO goes through ASMLib (/dev/oracleasm). This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:

        - a good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
        - a single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles
        - device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions, or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL. The driver didn't make sense to push into upstream Linux, because it's unique and specific to the Oracle database.

    As it takes a huge effort in terms of build infrastructure, QA, and release management to build kernel modules for every architecture, every Linux distribution, and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers, and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel. (Note to those that wonder: yes, it's all in mainline Linux and under the GPL.) This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media.

    So what was missing was the ability for a userspace application (read: the Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it. Thanks to UEK, and to us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided it was too much effort to also maintain all the build and test environments for RHEL: we did not have the ability to use the latest kernel features to introduce the Data Integrity work, and we didn't want to end up with multiple versions of ASMLib maintained by us. SuSE SLES still builds and ships the oracleasm module, and they do all the work; Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.

    And finally, to reiterate a few important things:

        - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
        - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE with SLES for every one of their builds.
        - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
        - ASMLib for Linux is/was a reference implementation, so any third-party vendor can offer, if they want to, their own version for their own OS or storage.
        - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of Tree Surgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4. Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question:

        Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree given that we have things like scaffolding systems, NuGet, and T4 templates? Or should we just give the project its rightful and respectful send-off, as it's had a good life and has outlived its usefulness?

    For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience I can tell you there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information that is embedded in the Tree Surgeon site floating around their conversation. Little things like:

        - keep all of the software needed to run the build, with the build, in the version control system
        - have your developers and the build system using the same build
        - have a one-touch build
        - separate your code from your interface
        - put unit tests in first, not last

    I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up by starting you off with a project that you can drop into your Continuous Integration system, right out of the box. Well, it used to be right out of the box. Today you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from. If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.

    Read the article

  • When module calling gets ugly

    - by Pete
    Has this ever happened to you? You've got a suite of well designed, single-responsibility modules, covered by unit tests. In any higher-level function you code, you are (95% of the code) simply taking output from one module and passing it as input to the next. Then, you notice this higher-level function has turned into a 100+ line script with multiple responsibilities. Here is the problem: it is difficult (impossible) to test that script. At least, it seems so. Do you agree? In my current project, all of the bugs came from this script. Further detail: each script represents a unique solution, or algorithm, formed by using different modules in different ways. Question: how can you remedy this situation? Knee-jerk answer: break the script up into single-responsibility modules. Comment on the knee-jerk answer: it already is! Best answer I can come up with so far: create higher-level connector objects which "wire" modules together in particular ways (take output from one module, feed it as input to another module). Thus if our script was:

        FooInput fooIn = new FooInput(1, 2);
        FooOutput fooOutput = fooModule(fooIn);
        Double runtimevalue = getsomething(fooOutput.whatever);
        BarInput barIn = new BarInput(runtimevalue, fooOutput.someOtherValue);
        BarOutput barOut = barModule(barIn);

    it would become, with a connector:

        FooBarConnectionAlgo fooBarConnector = new FooBarConnectionAlgo(fooModule, barModule);
        FooInput fooIn = new FooInput(1, 2);
        BarOutput barOut = fooBarConnector(fooIn);

    So the advantage is, besides hiding some code and making things clearer, we can test FooBarConnectionAlgo. I'm sure this situation comes up a lot. What do you do?
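    For what it's worth, a minimal sketch of such a connector in the same Java-flavoured style as the example above (the class name and the getsomething helper are the poster's; the module types, run methods and constructor body are my assumptions):

        class FooBarConnectionAlgo {
            private final FooModule fooModule;
            private final BarModule barModule;

            FooBarConnectionAlgo(FooModule fooModule, BarModule barModule) {
                this.fooModule = fooModule;
                this.barModule = barModule;
            }

            // The only logic here is the wiring, so a unit test with stub
            // modules can cover it without touching the real implementations.
            BarOutput apply(FooInput fooIn) {
                FooOutput fooOut = fooModule.run(fooIn);
                double runtimeValue = getsomething(fooOut.whatever);
                return barModule.run(new BarInput(runtimeValue, fooOut.someOtherValue));
            }
        }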

    Read the article

  • Why do I need a framework?

    - by lvictorino
    First of all, I know my question may sound stupid, but I am no "beginner". In fact, I have worked as a game developer for several years now and I know some things about code :) But as this is not related to game development, I am a bit lost. I am writing a big script (something about GIS) in Python, using a lot of data stored in a database. I made a first version of my script, and now I'd like to re-design the whole thing. I read some advice about using a framework (like Django) for database queries. But as my script only "SELECT"s information, I was wondering about the real benefits of using a framework. It seems that it adds a lot of complexity and useless embedded features (useless for my specific script) for the benefits it would bring. Am I wrong? EDIT: a few specs of this "script". Its purpose is to get GIS data from an enormous database (if you have ever worked with OpenStreetMap you know what I mean; ~200 GB) and to manipulate this data in order to produce nice map images. Queries are not numerous (select streets, select avenues, select waterways, select forests... and so on for a specific area) but query results may be more than 10,000 rows. I'd like to sell this script as a service, so yes, it's meant to stay.

    Read the article

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)
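    Two standard recovery steps are worth trying first (a generic suggestion, not taken from the thread); `triggered' postinst failures sometimes clear once dpkg is allowed to finish configuring everything:

        sudo dpkg --configure -a    # finish half-configured packages
        sudo apt-get install -f     # repair broken dependencies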

    Read the article

  • Can't install anything, getting error "Sub-process /usr/bin/dpkg returned an error code (1)"

    - by soum
    I am getting an error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". Please help me; I am completely stuck with my Ubuntu 11.10, with no installation or update possible. The rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Unity gizmos vs. referenced game objects

    - by DuckMaestro
    I'm designing a Unity script that I intend to be highly reusable and as easy as possible to set up within the editor. To this end, a number of properties of this script really need some kind of visual representation on screen. It is an unresolved question to me whether the design of the script should require references to placeholder game objects, OR just Vector3's and float's that have associated gizmos drawn for them. Normally a gizmo would be a natural choice, except that Unity gizmos are not directly manipulable (as far as I can tell). Because of this shortcoming, I'm having to consider whether depending on references to placeholder game objects is ultimately a more designer-friendly approach, in spite of the extra setup required, and despite the possible confusion when the placeholder game objects disappear at run-time (which my script would do). Is there a community standard or preference in this case? Can a Unity-experienced game programmer / designer say which approach they feel is more intuitive or more convenient to set up when using a third-party script? Or is this just splitting hairs, as long as I ship an example prefab with my script?

    Read the article

  • Connect to MySQL through the command line without needing the root password

    - by ReynierPM
    I'm building a Bash script for some tasks. One of those tasks is to create a MySQL DB from within the same bash script. What I'm doing right now is creating two vars: one to store the user name and the other to store the password. This is the relevant part of my script:

        MYSQL_USER=root
        MYSQL_PASS=mypass_goes_here
        touch /tmp/$PROY.sql && echo "CREATE DATABASE $DB_NAME;" > /tmp/script.sql
        mysql --user=$MYSQL_USER --password="$MYSQL_PASS" < /tmp/script.sql
        rm -rf /tmp/script.sql

    But I always get an error saying access denied for user root with NO PASSWORD. What am I doing wrong? I need to do the same for PostgreSQL; any help? Regards and thanks in advance.
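    One way to sidestep quoting and environment pitfalls (a sketch under my own assumptions: MySQL runs locally and the root password is actually correct; file names are placeholders) is to pass the credentials in an option file rather than on the command line. Note that --defaults-extra-file must be the first option given to mysql:

        cat > /tmp/my.cnf <<EOF
        [client]
        user=root
        password=mypass_goes_here
        EOF
        chmod 600 /tmp/my.cnf
        mysql --defaults-extra-file=/tmp/my.cnf -e "CREATE DATABASE $DB_NAME;"
        rm -f /tmp/my.cnf

        # PostgreSQL equivalent: a ~/.pgpass file plus createdb
        # (format is host:port:database:user:password)
        echo "localhost:5432:*:postgres:pg_pass_goes_here" > ~/.pgpass
        chmod 600 ~/.pgpass
        createdb -h localhost -U postgres "$DB_NAME"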

    Read the article

  • Google Analytics not tracking data correctly - an IP-address issue?

    - by PaperThick
    I have developed a small site for a client and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:

        <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(
            ['_setAccount', 'UA-XXXXXXXX-2'],          // My company's GA account
            ['_trackPageview'],
            ['b._setAccount', 'UA-XXXXXXXX-1'],        // Test GA account
            ['b._trackPageview'],
            ['th._setAccount', 'UA-XXXXXXX-3'],
            ['th._setDomainName', '.clientdomain.se'], // Client GA account
            ['th._trackPageview']
        );
        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>
        </head>

    As you can see, I report the GA pageviews to the client as well. The GA script is tracking visitors and pageviews at both ends. But the problem is that on my client's side the visitor count is more than double what it is on my end (20,000 vs 5,000). At first I thought that it was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10,000 visits and then stopped tracking, because that was the limit on my account. The page my site is on sits at an IP address (http://XXX.XXX.XX.X/campaign/) and not at a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance.

    Read the article

  • SQL Server Configuration Scripting Utility Release 9

    - by Bill Graziano
    There’s another update to my little utility to script a SQL Server’s configuration. I use this for two purposes. First, I use it to keep my database mirroring servers up to date. Second, I capture the output in a version control system and keep that for historical reference. In release 3.0.9 I made the following changes:

        - Rewrote the encrypted trigger scripting. It will now list the encrypted triggers in a comment in the table script but can’t actually script them.
        - It now scripts any server event notifications.
        - You can script a single database using the /scriptdb flag. Please note that it will also script the instance and system databases when it does this.
        - It will script any user-defined endpoints. This will capture your mirroring endpoints and, more importantly, any Service Broker endpoints.
        - It will gracefully skip Database Mail on the Express Edition.
        - It still doesn’t support SQL Server 2012. I think that’s the next feature to add, though.

    Read the article

  • cookie not being sent when requesting JS

    - by Mala
    I host a webservice, and provide my members with a JavaScript bookmarklet, which loads a JS script from my server. However, clients must be logged in in order to receive the JS script. This works for almost everybody. However, some users on setups (i.e. browser/OS) that are known to work for other people have the following problem: when they request the script via the JavaScript bookmarklet from my server, their cookie from my server does not get included with the request, and as such they are always "not authenticated". I'm making the request in the following way:

        var myScript = eltCreate('script');
        myScript.setAttribute('src', 'http://myserver.com/script');
        document.body.appendChild(myScript);

    In a fit of confused desperation, I changed the script page to simply output "My cookie has [x] elements", where [x] is count($_COOKIE). If this extremely small subset of users requests the script via the normal method, the message reads "My cookie has 0 elements". When they access the URL directly in their browser, the message reads "My cookie has 7 elements". What on earth could be going on?!

    Read the article

  • Running a cronjob

    - by Ed01
    I've been puzzling over cronjobs for the last few hours. I've read documentation and examples. I understand the basics and concepts, but haven't gotten anything to work. So I would appreciate some help with this total noob dilemma. The ultimate goal is to schedule the execution of a Django function every day. Before I get that far, I want to know that I can schedule any old script to run, first once, then on a regular basis. So I want to:

        1) Write a simple script (perhaps a bash script) that will allow me to determine that yes, it did indeed run successfully, or that it failed.
        2) Schedule this script to run at the top of the hour.

    I tried writing a bash script that simply outputs some text to the terminal:

        #!/bin/bash
        echo "The script ran"

    Then I dropped this into a .txt file:

        MAILTO = *****.******@gmail.com
        05 * * * * /home/vadmin/development/test.sh

    But nothing happened. I'm sure I did many things wrong. Where do I start to fix all of this?
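    A few likely fixes, based only on reading the snippet above: the schedule must be installed with the crontab command rather than saved in a .txt file; MAILTO takes no spaces around the equals sign; 05 * * * * fires at five minutes past the hour rather than on the hour; and the script must be executable. A sketch:

        chmod +x /home/vadmin/development/test.sh
        crontab -e    # then add the two crontab lines shown below
        # MAILTO=*****.******@gmail.com
        # 0 * * * * /home/vadmin/development/test.sh >> /tmp/test.log 2>&1

    Cron jobs have no terminal, so the echo output goes nowhere unless redirected; appending it to a log file as above, or relying on the MAILTO mail, makes success visible.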

    Read the article
