Search Results

Search found 3730 results on 150 pages for 'bash'.


  • PHP SASL (PECL) sasl_server_init(app) works with CLI but not with Apache module

    - by ZokRadonh
    I have written a simple auth script so that web users can type in their username and password and my PHP script verifies them via SASL. The SASL library is initialized by the PHP function sasl_server_init("phpfoo"), so phpfoo.conf in /etc/sasl2/ is used. phpfoo.conf:

        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN
        log_level: 9

    So the SASL library now tries to connect to the saslauthd process by socket. The saslauthd command line looks like this:

        /usr/sbin/saslauthd -r -V -a pam -n 5

    So saslauthd uses PAM to authenticate. In the PHP script I have created the SASL connection with sasl_server_new("php", null, "myRealm"); the first argument is the service name, so PAM uses the file /etc/pam.d/php for further authentication information. /etc/pam.d/php:

        auth     required pam_mysql.so try_first_pass=0 config_file=/etc/pam.d/mysqlconf.nss
        account  required pam_permit.so
        session  required pam_permit.so

    mysqlconf.nss has all the information needed for a useful MySQL query against the user table. All of this works perfectly when I run the script from the command line (php ssasl.php). But when I call the same script via web browser (PHP Apache module) I get a -20 return code (SASL_NOUSER), and in /var/log/messages there is:

        May 18 15:27:12 hostname httpd2-prefork: unable to open Berkeley db /etc/sasldb2: No such file or directory

    I am not using a Berkeley DB for authentication with SASL at all; I think authentication against /etc/sasldb2 is the default setting. In my opinion it does not read my phpfoo.conf file - for some reason the PHP Apache module ignores the parameter of sasl_server_init("phpfoo"). My first thought was that there is a permission issue, so back in the shell:

        su -s /bin/bash wwwrun
        php ssasl.php

    "Authentication successful." - so no file-permission issue. In the source of the SASL PHP extension we can find:

        PHP_FUNCTION(sasl_server_init)
        {
            char *name;
            int name_len;

            if (zend_parse_parameters(1 TSRMLS_CC, "s", &name, &name_len) == FAILURE) {
                return;
            }

            if (sasl_server_init(NULL, name) != SASL_OK) {
                RETURN_FALSE;
            }

            RETURN_TRUE;
        }

    This is a simple pass-through of the string. Are there any differences between the PHP CLI and PHP Apache module versions that I am not aware of? Anyway, there are some interesting log entries when I run PHP in CLI mode:

        May 18 15:44:48 hostname php: SQL engine 'mysql' not supported
        May 18 15:44:48 hostname php: auxpropfunc error no mechanism available
        May 18 15:44:48 hostname php: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: sqlite
        May 18 15:44:48 hostname php: sql_select option missing
        May 18 15:44:48 hostname php: auxpropfunc error no mechanism available
        May 18 15:44:48 hostname php: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: sql

    Those lines are followed by lines from saslauthd and PAM which result in authentication success (I do not get any of them in Apache module mode). It looks like it tries auxprop pwcheck before saslauthd. I have no other .conf file in /etc/sasl2. When I change the parameter of sasl_server_init to something other than "phpfoo", I get the same error in CLI mode as in Apache module mode.
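
    A quick way to split the problem in half is to exercise the saslauthd/PAM layer directly, bypassing PHP entirely. A minimal sketch (the test account is a placeholder; "php" is the service name used above, and running the check as wwwrun mimics the Apache environment):

        # verify saslauthd + PAM accept credentials for the "php" service
        testsaslauthd -u someuser -p secret -s php

        # repeat as the Apache user to rule out per-user environment differences
        su -s /bin/bash wwwrun -c 'testsaslauthd -u someuser -p secret -s php'

    If both succeed, the PAM side is fine and the fault is isolated to how the Apache-loaded SASL library selects its application config.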


  • Segmentation fault when creating a Phonon MediaObject

    - by Luke Hansford
    I have a music-playing program made using PySide which uses Phonon to play back audio. I updated to Mac OS X Mavericks a few days ago, which meant I needed to reinstall PySide. I'm not sure which of these actions caused this, but now whenever I try to create a Phonon MediaObject I get a "Segmentation Fault: 11" from Python. It's not just in my program; it happens when trying to create a MediaObject in Python without any other actions. I'm getting the following error message from my Mac whenever it crashes:

        Process:         Python [13711]
        Path:            /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
        Identifier:      org.python.python
        Version:         2.7.5 (2.7.5)
        Code Type:       X86-64 (Native)
        Parent Process:  bash [13707]
        Responsible:     Terminal [13704]
        User ID:         501

        Date/Time:       2013-11-01 19:47:53.164 +1000
        OS Version:      Mac OS X 10.9 (13A603)
        Report Version:  11
        Anonymous UUID:  C2686854-18CA-9D37-26E9-60050E3C4DA6
        Sleep/Wake UUID: BB983BF6-CCE2-44D1-82A0-1C73382DFFE4

        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008

        VM Regions Near 0x8:
        --> __TEXT  00000001082e8000-00000001082e9000 [ 4K] r-x/rwx SM=COW /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python

        Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
        0   QtCore             0x000000010a1b34cb QObject::moveToThread(QThread*) + 17
        1   QtDBus             0x000000010d55f98b QDBusDefaultConnection::QDBusDefaultConnection(QDBusConnection::BusType, char const*) + 171
        2   QtDBus             0x000000010d55ebdf QDBusConnection::sessionBus() + 71
        3   phonon             0x000000010d50228d Phonon::FactoryPrivate::FactoryPrivate() + 189
        4   phonon             0x000000010d5024d5 Phonon::$_249::operator->() + 99
        5   phonon             0x000000010d502991 Phonon::Factory::registerFrontendObject(Phonon::MediaNodePrivate*) + 17
        6   phonon             0x000000010d50b27e Phonon::MediaNodePrivate::MediaNodePrivate(Phonon::MediaNodePrivate::CastId) + 80
        7   phonon             0x000000010d50f570 Phonon::MediaObjectPrivate::MediaObjectPrivate() + 24
        8   phonon             0x000000010d50be9d Phonon::MediaObject::MediaObject(QObject*) + 45
        9   phonon.so          0x000000010d42f24a Sbk_Phonon_MediaObject_Init + 458
        10  org.python.python  0x0000000108338707 type_call + 189
        11  org.python.python  0x00000001082f74fd PyObject_Call + 101
        12  org.python.python  0x00000001083714f0 PyEval_EvalFrameEx + 15525
        13  org.python.python  0x0000000108373aaf fast_function + 182
        14  org.python.python  0x0000000108370919 PyEval_EvalFrameEx + 12494
        15  org.python.python  0x000000010836d721 PyEval_EvalCodeEx + 1638
        16  org.python.python  0x000000010836d0b5 PyEval_EvalCode + 54
        17  org.python.python  0x000000010838beb8 run_mod + 53
        18  org.python.python  0x000000010838bf5f PyRun_FileExFlags + 137
        19  org.python.python  0x000000010838baad PyRun_SimpleFileExFlags + 718
        20  org.python.python  0x000000010839c58b Py_Main + 3039
        21  libdyld.dylib      0x00007fff8e4fb5fd start + 1

        Thread 1:: Dispatch queue: com.apple.libdispatch-manager
        0   libsystem_kernel.dylib   0x00007fff8c938662 kevent64 + 10
        1   libdispatch.dylib        0x00007fff923e743d _dispatch_mgr_invoke + 239
        2   libdispatch.dylib        0x00007fff923e7152 _dispatch_mgr_thread + 52

        Thread 2:
        0   libsystem_kernel.dylib   0x00007fff8c937e6a __workq_kernreturn + 10
        1   libsystem_pthread.dylib  0x00007fff90bd8f08 _pthread_wqthread + 330
        2   libsystem_pthread.dylib  0x00007fff90bdbfb9 start_wqthread + 13

        Thread 3:
        0   libsystem_kernel.dylib   0x00007fff8c937e6a __workq_kernreturn + 10
        1   libsystem_pthread.dylib  0x00007fff90bd8f08 _pthread_wqthread + 330
        2   libsystem_pthread.dylib  0x00007fff90bdbfb9 start_wqthread + 13

        Thread 4:
        0   libsystem_kernel.dylib   0x00007fff8c937e6a __workq_kernreturn + 10
        1   libsystem_pthread.dylib  0x00007fff90bd8f08 _pthread_wqthread + 330
        2   libsystem_pthread.dylib  0x00007fff90bdbfb9 start_wqthread + 13

        Thread 0 crashed with X86 Thread State (64-bit):
          rax: 0x00007feba0d19700  rbx: 0x000000010d5b7098  rcx: 0x00000000002f4180  rdx: 0x000000000012c040
          rdi: 0x0000000000000000  rsi: 0x00007feba0d19700  rbp: 0x00007fff57917210  rsp: 0x00007fff579171d0
           r8: 0x00007feba0fd5d10   r9: 0x00007feba0ff5310  r10: 0x0000000019c04cbe  r11: 0x0000000070769b38
          r12: 0x00007fff57917220  r13: 0x00007feba0c07190  r14: 0x0000000000000000  r15: 0x00007feba0fe1430
          rip: 0x000000010a1b34cb  rfl: 0x0000000000010202  cr2: 0x0000000000000008

        Logical CPU: 0
        Error Code:  0x00000004
        Trap Number: 14

    Anyone have any ideas about what is happening?
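
    A crash inside QObject::moveToThread while QtDBus sets up the session bus often points at two different Qt builds being loaded into the same process after a reinstall. A hedged first check - the paths below are examples, substitute wherever your PySide modules actually live:

        # compare which Qt libraries the Phonon binding and QtCore binding link against
        otool -L /usr/local/lib/python2.7/site-packages/PySide/phonon.so | grep -i qt
        otool -L /usr/local/lib/python2.7/site-packages/PySide/QtCore.so | grep -i qt

    If the two lists resolve to different Qt installations (e.g. one Homebrew, one system), rebuilding or reinstalling PySide against the single Qt you intend to keep is the usual cure.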


  • R Install/Update on Mac OS X (Snow Leopard): where does R put files during install/config?

    - by doug
    In sum, there's a stray preference-like file or two (probably just one) that I can't find. Here's the whole story: I recently attempted to update my R install from 2.10 to 2.11. As I have done before, I installed from source. I know that all of the dependencies are correctly installed and made available to R, because my prior install worked fine. Since I upgraded to 2.11, I am unable to install packages (no exception is thrown; the install just never appears to complete, and R is unresponsive, so I have to quit and restart it). Given that I install from source, there are any number of points in the process at which I could have messed up. What I need to do now is "start over", which requires that I clear out my prior install, and I am having trouble doing that. There is still at least one preference file or something like it that I can't find, and one of these is causing the problem, so I need to find it and terminate it with extreme prejudice before I do a fresh install. Although I set a number of flags during the install, I have never opted out of the default install locations during the config step. There has to be one or more preference files still in my file structure (and accessible to the new install of R), because after I follow all of the steps below and then do a fresh install, when I start R for the first time some of my preferences have persisted (e.g., quartz settings, GUI background color, editor selection, etc.). Again, the problem is that I just cannot locate those files. Finally, the problem can't be that during my last install from source I inadvertently caused a preference file to be sent to an off-spec location--because again, a fresh R install (whether from source or from the OS X binaries) is finding those files. Here's what I've done prior to attempting a clean install of R:

        Removed files from these locations:
            ~/.RData
            ~/.RHistory
            /Applications/R64.app
            /Applications/R.app
            /Library/Frameworks/R.framework (I also removed all symlinks to these)

        Cleared all RAM and disk caches, in particular the directory where I know R caches:
            ~/Library/Caches/R* (in fact I've cleared this entire directory)

        Checked for all 'hidden' files in the OS X directories where login/startup files are often placed:
            /etc/
            ~/

    In addition, I've checked R-help, and I've also read through the relevant portions of 'R Installation and Administration'--no luck. I've also searched my file structure using the various bash utilities, which nearly always solves problems of this sort quite easily, but in this case searching by name or even pattern is obviously more problematic.
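
    One avenue a filename search will miss: the Mac R GUI stores its settings (quartz, background color, editor) through OS X's defaults system rather than in dotfiles. A hedged sweep - org.R-project is the likely domain name, but verify what actually exists on your machine:

        # look for R-related preference plists in both user and system locations
        find ~/Library/Preferences /Library/Preferences -iname '*R-project*' 2>/dev/null

        # list all defaults domains and filter for R
        defaults domains | tr ',' '\n' | grep -i 'r-project'

    Deleting the matching plist (or running defaults delete on that domain) should make the persisting GUI preferences disappear, confirming you found the right file.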


  • Postgres could not connect to server

    - by Gary Lai
    After I did brew update and brew upgrade, my postgres developed a problem. I tried to uninstall postgres and install it again, but that didn't work either. This is the error message (I also get it when I try to run rake db:migrate):

        $ psql
        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    How can I solve it? Mac version: Mountain Lion. Homebrew version: 0.9.3. Postgres version: psql (PostgreSQL) 9.2.1. And this is what I did:

        12:30 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.2.1...
        12:31 ~/D/works$ brew uninstall postgresql
        Uninstalling /usr/local/Cellar/postgresql/9.1.4...
        12:31 ~/D/works$ psql --version
        bash: /usr/local/bin/psql: No such file or directory
        12:33 ~/D/works$ brew install postgresql
        ==> Downloading http://ftp.postgresql.org/pub/source/v9.2.1/postgresql-9.2.1.tar.bz2
        Already downloaded: /Library/Caches/Homebrew/postgresql-9.2.1.tar.bz2
        ......
        ......
        ==> Summary
        /usr/local/Cellar/postgresql/9.2.1: 2814 files, 38M, built in 2.7 minutes
        12:37 ~/D/works$ initdb /usr/local/var/postgres -E utf8
        The files belonging to this database system will be owned by user "laigary".
        This user must also own the server process.
        The database cluster will be initialized with locale "en_US.UTF-8".
        The default text search configuration will be set to "english".
        initdb: directory "/usr/local/var/postgres" exists but is not empty
        If you want to create a new database system, either remove or empty
        the directory "/usr/local/var/postgres" or run initdb
        with an argument other than "/usr/local/var/postgres".
        12:39 ~/D/works$ mkdir -p ~/Library/LaunchAgents
        12:39 ~/D/works$ cp /usr/local/Cellar/postgresql/9.2.1/homebrew.mxcl.postgresql.plist ~/Library/LaunchAgents/
        12:39 ~/D/works$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
        homebrew.mxcl.postgresql: Already loaded
        12:39 ~/D/works$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
        server starting
        12:39 ~/D/works$ env ARCHFLAGS="-arch x86_64" gem install pg
        Building native extensions.  This could take a while...
        Successfully installed pg-0.14.1
        1 gem installed
        12:42 ~/D/works$ psql --version
        psql (PostgreSQL) 9.2.1
        12:42 ~/D/works$ psql
        psql: could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
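
    Given that initdb found a non-empty /usr/local/var/postgres left over from the 9.1 install, a likely culprit is 9.2 binaries refusing to start on a 9.1-format data directory - in which case "server starting" gets printed but the postmaster exits immediately. A hedged check (paths follow the Homebrew defaults above; note that re-initializing destroys the old cluster):

        # see why the server actually exited right after "server starting"
        tail -n 20 /usr/local/var/postgres/server.log

        # if it reports an incompatible database format, move the old cluster aside and re-init
        mv /usr/local/var/postgres /usr/local/var/postgres.old
        initdb /usr/local/var/postgres -E utf8
        pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start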


  • More GCC link time issues: undefined reference to main

    - by vikramtheone
    Hi guys, I'm writing software for a Cortex-A8 processor and I have to write some ARM assembly code to access specific registers. I'm making use of the GNU compilers and related toolchains; these tools are installed on the processor board (Freescale i.MX515) with Ubuntu. I connect to it from my host PC (Windows) using WinSCP and the PuTTY terminal. As usual I started with a simple C project containing main.c and functions.s. I compile main.c using GCC, assemble functions.s using as, and link the generated object files using, once again, GCC - but I get strange errors during this process. An important finding - meanwhile, I found out that my assembly code may have some issues, because when I individually assemble it using the command as -o functions.o functions.s and try running the generated functions.o with ./functions.o, the bash shell fails to recognize the file as an executable (on pressing tab, functions.o is not getting selected/PuTTY is not highlighting the file). Can anyone suggest what's happening here? Are there any specific options I have to pass to GCC during the linking process? The errors I see are strange and beyond my understanding; I don't understand what GCC is referring to. I'm pasting here the contents of main.c, functions.s, the Makefile and the list of errors. Help, please!!! Vikram

    main.c:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            puts("!!!Hello World!!!"); /* prints !!!Hello World!!! */
            return EXIT_SUCCESS;
        }

    functions.s:

        /* Main program */
        .equ STACK_TOP, 0x20000800
        .text
        .global _start
        .syntax unified
        _start:
        .word STACK_TOP, start
        .type start, function
        start:
        movs r0, #10
        movs r1, #0
        .end

    Makefile:

        all: hello

        hello: main.o functions.o
            gcc -o main.o functions.o

        main.o: main.c
            gcc -c -mcpu=cortex-a8 main.c

        functions.o: functions.s
            as -mcpu=cortex-a8 -o functions.o functions.s

    Errors:

        ubuntu@ubuntu-desktop:~/Documents/Project/Others/helloworld$ make
        gcc -c -mcpu=cortex-a8 main.c
        as -mcpu=cortex-a8 -o functions.o functions.s
        gcc -o main.o functions.o
        functions.o: In function `_start':
        (.text+0x0): multiple definition of `_start'
        /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o:init.c:(.text+0x0): first defined here
        /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o: In function `_start':
        init.c:(.text+0x30): undefined reference to `main'
        collect2: ld returned 1 exit status
        make: *** [hello] Error 1
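
    For what it's worth, two things stand out in the Makefile and errors above: the link rule gcc -o main.o functions.o names main.o as the *output* and never links it as an input (hence "undefined reference to main"), and functions.s defines _start, which the C runtime's crt1.o already provides (hence "multiple definition of _start"). A sketch of the corrected command sequence, assuming the assembly entry symbol is renamed (e.g. to asm_init) and called from C instead:

        gcc -c -mcpu=cortex-a8 main.c                   # -> main.o
        as  -mcpu=cortex-a8 -o functions.o functions.s  # after renaming _start in functions.s
        gcc -o hello main.o functions.o                 # link BOTH objects into "hello"
        ./hello

    (Also, functions.o is an ELF object file, not an executable - the shell declining to offer it for execution is expected, and ./functions.o would fail even with the assembly fixed.)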


  • Failed to obtain JDBC Driver for MySQL under Tomcat environment

    - by Michael Mao
    Hi all: I've been trying to obtain the Driver class for a JDBC connection to MySQL. The workstation is running Linux, Fedora 10. I have manually set up the classpath variable for Java via the CLI like this:

        bash-3.2$ echo $CLASSPATH
        /home/cmao/public_html/jsp/mysql-connector-java-5.1.12-bin.jar

    This shows that I've added the latest MySQL connector jar archive to my CLASSPATH variable. I've created a test JSP page, and its source code is:

        <%@page language="java"%>
        <%@page import="java.sql.*"%>
        <%@page import="java.util.*"%>
        <html>
        <head>
            <title>UTS JDBC MySQL connection test page</title>
        </head>
        <body>
        <%
            Connection con = null;
            out.print("Java version is    : " + System.getProperty("java.version") + "<br />");
            out.print("Tomcat version is  : " + application.getServerInfo() + "<br />");
            out.print("Servlet version is : " + application.getMajorVersion() + "<br />");
            out.print("JSP version is     : " + JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() +"<br />");
            //out.print("Java classpath is  : " + System.getProperty("java.class.path")+ "<br />");
            //out.print("JSP classpath is   : " + appliaction.getAttribute("org.apache.catalina.jsp_classpath") + "<br />");
            //out.print("Tomcat classpath is : " + System.getProperty("org.apache.tomcat.common.classpath") + "<br />");
            try {
                Class c = Class.forName("com.mysql.jdbc.Driver");
            } catch(Exception e) {
                out.println("Error! Failed to obtain JDBC driver for MySQL... Missing class \"com.mysql.jdbc.Driver\"<br />");
            }
        %>
        </body>
        </html>

    None of the commented-out lines work; various JasperExceptions are thrown (I have error pages for the classpath, catalina and tomcat variants). It seems, from my limited knowledge of JSP and Servlets, that the Tomcat environment "ignores" my Java CLASSPATH? In which case I cannot configure the MySQL JDBC package to let my Servlets (a JSP is but a Servlet anyway) work. I am not sure how to fix this issue. Would it be better to use an IDE like Eclipse or NetBeans and create a real Java "web app", so that everything can be "self-configured" via a web.xml configuration file? Could I bypass this Tomcat environment restriction that way? Many thanks for the suggestions in advance.
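
    Tomcat builds its own class loaders and deliberately ignores the shell's CLASSPATH, so the usual fix is to place the connector jar where those loaders look. Exact locations vary by Tomcat version (common/lib on 5.5, lib on 6 and later) - a sketch:

        # per-webapp (preferred): visible only to this application
        cp mysql-connector-java-5.1.12-bin.jar /path/to/webapp/WEB-INF/lib/

        # or server-wide (Tomcat 6+): visible to every webapp
        cp mysql-connector-java-5.1.12-bin.jar $CATALINA_HOME/lib/

        # restart Tomcat so the class loaders re-scan
        $CATALINA_HOME/bin/shutdown.sh && $CATALINA_HOME/bin/startup.sh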


  • Having trouble wrapping functions in the linux kernel

    - by Corey Henderson
    I've written an LKM that implements Trusted Path Execution (TPE) in your kernel: https://github.com/cormander/tpe-lkm I run into an occasional kernel OOPS (described at the end of this question) when I define WRAP_SYSCALLS to 1, and am at my wit's end trying to track it down. A little background: since the LSM framework doesn't export its symbols, I had to get creative with how I insert the TPE checking into the running kernel. I wrote a find_symbol_address() function that gives me the address of any function I need, and it works very well. I can call functions like this:

        int (*my_printk)(const char *fmt, ...);

        my_printk = find_symbol_address("printk");
        (*my_printk)("Hello, world!\n");

    And it works fine. I use this method to locate the security_file_mmap, security_file_mprotect, and security_bprm_check functions. I then overwrite those functions with an asm jump to my function to do the TPE check. The problem is, the currently loaded LSM will no longer execute the code for its hook to that function, because it's been totally hijacked. Here is an example of what I do:

        int tpe_security_bprm_check(struct linux_binprm *bprm) {
            int ret = 0;

            if (bprm->file) {
                ret = tpe_allow_file(bprm->file);
                if (IS_ERR(ret))
                    goto out;
            }

        #if WRAP_SYSCALLS
            stop_my_code(&cs_security_bprm_check);
            ret = cs_security_bprm_check.ptr(bprm);
            start_my_code(&cs_security_bprm_check);
        #endif

        out:
            return ret;
        }

    Notice the section guarded by #if WRAP_SYSCALLS (it's defined as 0 by default). If set to 1, the LSM's hook is called because I write the original code back over the asm jump and call that function, but I run into an occasional kernel OOPS with an "invalid opcode":

        invalid opcode: 0000 [#1] SMP
        RIP: 0010:[<ffffffff8117b006>] [<ffffffff8117b006>] security_bprm_check+0x6/0x310

    I don't know what the issue is. I've tried several different types of locking methods (see the inside of start/stop_my_code for details) to no avail. To trigger the kernel OOPS, write a simple bash while loop that endlessly starts a backgrounded "ls" command; after a minute or so, it'll happen. I'm testing this on a RHEL6 kernel; it also happens on Ubuntu 10.04 LTS (2.6.32 x86_64). While this method has been the most successful so far, I have tried another method of simply copying the kernel function to a pointer I created with kmalloc, but when I try to execute it, I get:

        kernel tried to execute NX-protected page - exploit attempt? (uid: 0)

    If anyone can tell me how to kmalloc space and have it marked as executable, that would also help me solve the above problem. Any help is appreciated!
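
    For anyone trying to reproduce the OOPS, the trigger described above is just a tight exec loop along these lines:

        # hammer the exec path until the race in the hook rewrite is hit
        while true; do
            ls > /dev/null 2>&1 &
        done

    Every exec passes through security_bprm_check, so on SMP another CPU is soon executing the hook while it is being rewritten - consistent with the "invalid opcode" landing at security_bprm_check+0x6, i.e. mid-patch.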


  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
    The command and error message:

        gtwy ~ # ip rule add from 64.251.23.186 table t1
        RTNETLINK answers: Operation not supported

    An older article about the same problem, which did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html I have looked on Google at great length to try to find a solution. It seems that my kernel configuration is missing something? Any help or ideas would be appreciated. My system/kernel is:

        2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux

    I am posting this on SuperUser since this system is used as a workstation and this problem is unrelated to specific tasks that are handled exclusively by servers. iproute2 is installed:

        gtwy etc # emerge --search iproute2
        Searching...
        [ Results for search key : iproute2 ]
        [ Applications found : 1 ]

        *  sys-apps/iproute2
              Latest version available: 2.6.35-r2
              Latest version installed: 2.6.35-r2
              Size of files: 378 kB
              Homepage:      http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2
              Description:   kernel routing and traffic control utilities
              License:       GPL-2

    A small snippet of my kernel .config (view entire .config):

        gtwy linux # cat .config | grep NETLINK
        CONFIG_NETFILTER_NETLINK=y
        CONFIG_NETFILTER_NETLINK_QUEUE=y
        CONFIG_NETFILTER_NETLINK_LOG=y
        CONFIG_NF_CT_NETLINK=y
        CONFIG_SCSI_NETLINK=y
        gtwy linux # cat .config | grep IP_ADVANCED_ROUTER
        CONFIG_IP_ADVANCED_ROUTER=y
        gtwy linux # cat .config | grep INGRESS
        CONFIG_NET_SCH_INGRESS=y
        gtwy linux # cat .config | grep NET_SCHED
        CONFIG_NET_SCHED=y

    emerge --info:

        Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64)
        =================================================================
        System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13
        Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000
        app-shells/bash:      4.0_p37
        dev-java/java-config: 1.3.7-r1, 2.1.10
        dev-lang/python:      2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3
        sys-apps/baselayout:  1.12.13
        sys-apps/sandbox:     1.6-r2
        sys-devel/autoconf:   2.13, 2.65
        sys-devel/automake:   1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1
        sys-devel/binutils:   2.20.1-r1
        sys-devel/gcc:        4.1.2, 4.3.4, 4.4.3-r2
        sys-devel/gcc-config: 1.4.1
        sys-devel/libtool:    2.2.6b
        sys-devel/make:       3.81
        virtual/os-headers:   2.6.30-r1 (sys-kernel/linux-headers)
        ACCEPT_KEYWORDS="amd64"
        ACCEPT_LICENSE="*"
        CBUILD="x86_64-pc-linux-gnu"
        CFLAGS="-march=nocona -O2 -pipe"
        CHOST="x86_64-pc-linux-gnu"
        CONFIG_PROTECT="/etc /var/bind"
        CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo"
        CXXFLAGS="-march=nocona -O2 -pipe"
        DISTDIR="/usr/portage/distfiles"
        FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch"
        GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo"
        LC_ALL="en_US.UTF-8"
        LDFLAGS="-Wl,-O1 -Wl,--as-needed"
        LINGUAS="en"
        MAKEOPTS="-j5"
        PKGDIR="/usr/portage/packages"
        PORTAGE_CONFIGROOT="/"
        PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
        PORTAGE_TMPDIR="/var/tmp"
        PORTDIR="/usr/portage"
        PORTDIR_OVERLAY="/usr/local/portage"
        SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage"
        USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib"
        ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci"
        ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol"
        APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias"
        COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog"
        ELIBC="glibc"
        GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx"
        INPUT_DEVICES="keyboard mouse evdev"
        KERNEL="linux"
        LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text"
        LINGUAS="en"
        PHP_TARGETS="php5-3"
        RUBY_TARGETS="ruby18"
        USERLAND="GNU"
        VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l"
        XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
        Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS


  • How to start dovecot?

    - by chudapati09
    I'm building a web server to host multiple websites. I got everything working except the mail server. I'm using Linode to host my VPS and I've been following their tutorials. FYI, I'm using Ubuntu 11.10. Here is the link I've been following: http://library.linode.com/email/postfix/dovecot-mysql-ubuntu-10.04-lucid. I got up to the part where it tells me to restart dovecot, so I tried "service dovecot restart", but then I get "restart: Unknown instance:". I'm logged in as root, so I'm not using sudo.

    Since that didn't work I tried "/etc/init.d/dovecot restart" and got "dovecot start/running, process 4760"; "/etc/init.d/dovecot status" then shows "dovecot stop/waiting". So I tried "service dovecot start" and got "dovecot start/running, process 4781", but "service dovecot status" again shows "dovecot stop/waiting". Then I tried "/etc/init.d/dovecot start" and got "dovecot start/running, process 4794"; "/etc/init.d/dovecot status" once more shows "dovecot stop/waiting". Just for kicks and giggles I tried to kill the process using the PID from "service dovecot start" ("kill -9 4444") and got "bash: kill: (4805) - No such process". Am I doing something wrong?

    --EDIT 1-- The following are logs found in /var/log/syslog that involve dovecot:

        dovecot: master: Dovecot v2.0.13 starting up (core dumps disabled)
        dovecot: ssl-params: Generating SSL parameters
        dovecot: ssl-params: SSL parameters regeneration completed
        dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        dovecot: config: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        kernel: init: dovecot main process (10276) terminated with status 89
        kernel: init: dovecot main process (10289) terminated with status 89
        kernel: init: dovecot main process (10452) terminated with status 89
        kernel: init: dovecot main process (2275) terminated with status 89
        kernel: init: dovecot main process (3028) terminated with status 89
        kernel: init: dovecot main process (3216) terminated with status 89
        kernel: init: dovecot main process (3230) terminated with status 89
        kernel: init: dovecot main process (3254) terminated with status 89
        kernel: init: dovecot main process (3813) terminated with status 89
        kernel: init: dovecot main process (3845) terminated with status 89
        kernel: init: dovecot main process (4664) terminated with status 89
        kernel: init: dovecot main process (4760) terminated with status 89
        kernel: init: dovecot main process (4781) terminated with status 89
        kernel: init: dovecot main process (4794) terminated with status 89
        kernel: init: dovecot main process (4805) terminated with status 89

    --EDIT 2 (/etc/dovecot/dovecot.conf)-- The following is the dovecot.conf file:

        protocols = imap imaps pop3 pop3s
        log_timestamp = "%Y-%m-%d %H:%M:%S "
        mail_location = maildir:/home/vmail/%d/%n/Maildir
        ssl_cert_file = /etc/ssl/certs/dovecot.pem
        ssl_key_file = /etc/ssl/private/dovecot.pem
        namespace private {
            separator = .
            prefix = INBOX.
            inbox = yes
        }
        protocol lda {
            log_path = /home/vmail/dovecot-deliver.log
            auth_socket_path = /var/run/dovecot/auth-master
            postmaster_address = postmaster@[mydomainname.com]
            mail_plugins = sieve
            global_script_path = /home/vmail/globalsieverc
        }
        protocol pop3 {
            pop3_uidl_format = %08Xu%08Xv
        }
        auth default {
            user = root
            passdb sql {
                args = /etc/dovecot/dovecot-sql.conf
            }
            userdb static {
                args = uid=5000 gid=5000 home=/home/vmail/%d/%n allow_all_users=yes
            }
            socket listen {
                master {
                    path = /var/run/dovecot/auth-master
                    mode = 0600
                    user = vmail
                }
                client {
                    path = /var/spool/postfix/private/auth
                    mode = 0660
                    user = postfix
                    group = postfix
                }
            }
        }

    --EDIT 3 (/var/log/mail.log)-- The following is what is in /var/log/mail.log:

        dovecot: master: Dovecot v2.0.13 starting up (core dumps disabled)
        dovecot: ssl-params: Generating SSL parameters
        postfix/master[9917]: daemon started -- version 2.8.5, configuration /etc/postfix
        dovecot: ssl-params: SSL parameters regeneration completed
        postfix/master[9917]: terminating on signal 15
        postfix/master[10196]: daemon started -- version 2.8.5, configuration /etc/postfix
        dovecot: master: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        dovecot: config: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        dovecot: anvil: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        dovecot: log: Warning: Killed with signal 15 (by pid=1 uid=0 code=kill)
        postfix/master[2435]: daemon started -- version 2.8.5, configuration /etc/postfix
        postfix/master[2435]: terminating on signal 15
        postfix/master[2965]: daemon started -- version 2.8.5, configuration /etc/postfix
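
    The syslog shows Dovecot v2.0.13, but the dovecot.conf above is written in 1.x syntax (ssl_cert_file, ssl_key_file, auth default { ... }), which Dovecot 2.x rejects at startup - consistent with every start being followed immediately by "terminated with status 89". A hedged way to confirm from the shell:

        # ask Dovecot 2.x to parse the config; it names the exact setting it rejects
        doveconf -n

        # or run the daemon in the foreground and read the error directly
        dovecot -F

    If the parser complains about the old settings, converting the file to 2.x syntax (e.g. ssl_cert = </etc/ssl/certs/dovecot.pem, with passdb/userdb blocks outside an auth section) should let the upstart job stay up.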


  • Does likewise-open > version 5.4 contain CIFS support?

    - by Ben Andken
    I'm trying to get the CIFS server working in likewise-open. I've found this set of instructions and everything seems to work until I try to connect (http://www.likewise.com/resources/documentation_library/manuals/cifs/likewise-cifs-smb-file-server-guide.html#id2765992):

    1.6. Build and Configure a Standalone Likewise-CIFS Server

    This section demonstrates how to build and configure a standalone instance of Likewise-CIFS from the command line. The following procedure assumes that you want to set up Likewise-CIFS on a Linux server to share files with Windows computers in a network without Active Directory. This procedure also assumes you know how to build Linux applications from their source code and then install them.

    Download Likewise-CIFS from its open source git location:

        $ git clone git://git.likewiseopen.org/

    Download, build, and install the following tools. The tools listed are known to work, but earlier or later versions might work as well. Also, instead of downloading the tools, you might be able to install them on your platform with apt-get or some other means.

        http://ftp.gnu.org/gnu/autoconf/autoconf-2.65.tar.gz
        http://ftp.gnu.org/gnu/automake/automake-1.9.6.tar.gz
        http://ftp.gnu.org/gnu/libtool/libtool-2.2.6a.tar.gz
        http://pkgconfig.freedesktop.org/releases/pkg-config-0.23.tar.gz
        gcc --version 3.x or greater

    Build Likewise-CIFS:

        $ cd likewise-open
        $ build/mkcomp --debug all

    Install Likewise-CIFS:

        $ sudo su
        $ cd staging/install-root
        $ tar cf - . | (cd / && tar xvf -)

    Make sure Samba is not running:

        $ /etc/init.d/smb stop

    Make sure SELinux is either disabled or set to permissive. Make sure the ports required by Likewise are open. For a list of ports that Likewise uses, see the Likewise Open Installation and Administration Guide. Configure Likewise Open:

        $ /etc/init.d/lwsmd start
        $ for i in /etc/likewise/*.reg; do /opt/likewise/bin/lwregshell upgrade $i; done
        $ /etc/init.d/lwsmd stop
        $ /etc/init.d/lwsmd start
        $ /opt/likewise/bin/lwsm start srvsvc
        $ /opt/likewise/bin/domainjoin-cli configure --enable nsswitch

    Add a user account to the local Likewise provider database. In the following example, substitute the account name that you want for newuser.

        $ /opt/likewise/bin/lw-add-user --home /home/newuser --shell /bin/bash newuser
        Successfully added user newuser

    Enable the user and set the password:

        $ /opt/likewise/bin/lw-mod-user --enable-user --set-password newuser
        New Password: **********
        Successfully modified user newuser

    Look up the new user's identity as follows. Substitute the value from the command hostname -s for the hostname. Keep in mind that Likewise truncates a hostname longer than 15 characters to the first 15 characters of the string.

        % id hostname\\newuser
        uid=2000(HOSTNAME\newuser) gid=1800(HOSTNAME\Likewise Users) groups=1800(HOSTNAME\Likewise Users) context=system_u:system_r:unconfined_t:s0

    Make a CIFS directory for the user:

        mkdir /lwcifs/newuser
        chown 2000:1800 /lwcifs/newuser

    From a Windows computer, map the Likewise-CIFS drive share:

        Computer->Map Network Drive...
        Folder: \\IP_hostname\c$
        Click "Finish"
        Username: hostname\newuser
        Password: user_password

    The last step fails when I try to connect. I've tried with Windows XP Pro and Windows 7 Pro. The rest of the directions only appear to work for version 5.4 (the one that shipped with 10.04). For 12.04, version 6.1 is the only one available, and it doesn't appear to have the srvsvc module mentioned in these instructions. Is CIFS support dropped in the 6.1 version of likewise-open?
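
    One way to see whether the 6.1 packages ship the file-server piece at all is to ask the Likewise service manager what services it knows about (lwsm is part of likewise-open; srvsvc is the module name the guide above refers to):

        # list every service the Likewise service manager can start
        /opt/likewise/bin/lwsm list

    If srvsvc is absent from that list, the packaged 6.1 build was compiled without the CIFS server, and no amount of configuration on the Ubuntu side will bring the share up.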


  • vim-powerline colors are out of whack in urxvt

    - by komidore64
    I have attached two images showing what my vim-powerline looks like. As you can see, something has happened to the colors and I cannot figure out how to fix it. I'm running Fedora 17 on a clean install with i3 (default config) and urxvt. Here is my bashrc:

        # .bashrc
        if [[ "$(uname)" != "Darwin" ]]; then # non mac os x
            # source global bashrc
            if [[ -f "/etc/bashrc" ]]; then
                . /etc/bashrc
            fi
            export TERM='xterm-256color' # probably shouldn't do this
        fi

        # bash prompt with colors
        # [ <user>@<hostname> <working directory> {current git branch (if you're in a repo)} ]
        # ==>
        PS1="\[\e[1;33m\][ \u\[\e[1;37m\]@\[\e[1;32m\]\h\[\e[1;33m\] \W\$(git branch 2> /dev/null | grep -e '\* ' | sed 's/^..\(.*\)/ {\[\e[1;36m\]\1\[\e[1;33m\]}/') ]\[\e[0m\]\n==> "

        # execute only in Mac OS X
        if [[ "$(uname)" == 'Darwin' ]]; then
            # if OS X has a $HOME/bin folder, then add it to PATH
            if [[ -d "$HOME/bin" ]]; then
                export PATH="$PATH:$HOME/bin"
            fi
            alias ls='ls -G' # ls with colors
        fi

        alias ll='ls -lah'     # long listing of all files with human readable file sizes
        alias tree='tree -C'   # turns on coloring for tree command
        alias mkdir='mkdir -p' # create parent directories as needed
        alias vim='vim -p'     # if more than one file, open files in tabs

        export EDITOR='vim'

        # super-secret work stuff
        if [[ -f "$HOME/.workbashrc" ]]; then
            . $HOME/.workbashrc
        fi

        # Add RVM to PATH for scripting
        if [[ -d "$HOME/.rvm/bin" ]]; then # if installed
            PATH=$PATH:$HOME/.rvm/bin
        fi

    and my Xdefaults:

        ! URxvt config

        ! colors
        URxvt.background:  #101010
        URxvt.foreground:  #ededed
        URxvt.cursorColor: #666666
        URxvt.color0:  #2E3436
        URxvt.color8:  #555753
        URxvt.color1:  #993C3C
        URxvt.color9:  #BF4141
        URxvt.color2:  #3C993C
        URxvt.color10: #41BF41
        URxvt.color3:  #99993C
        URxvt.color11: #BFBF41
        URxvt.color4:  #3C6199
        URxvt.color12: #4174FB
        URxvt.color5:  #993C99
        URxvt.color13: #BF41BF
        URxvt.color6:  #3C9999
        URxvt.color14: #41BFBF
        URxvt.color7:  #D3D7CF
        URxvt.color15: #E3E3E3

        ! options
        URxvt*loginShell: true
        URxvt*font: xft:DejaVu Sans Mono for Powerline:antialias=true:size=12
        URxvt*saveLines: 8192
        URxvt*scrollstyle: plain
        URxvt*scrollBar_right: true
        URxvt*scrollTtyOutput: true
        URxvt*scrollTtyKeypress: true
        URxvt*urlLauncher: google-chrome

    and finally my vimrc:

        set nocompatible
        set dir=~/.vim/ " set one place for vim swap files

        " vundler for vim plugins ----
        filetype off
        set rtp+=~/.vim/bundle/vundle
        call vundle#rc()
        Bundle 'gmarik/vundle'
        Bundle 'tpope/vim-surround'
        Bundle 'greyblake/vim-preview'
        Bundle 'Lokaltog/vim-powerline'
        Bundle 'tpope/vim-endwise'
        Bundle 'kien/ctrlp.vim'
        " ----------------------------

        syntax enable
        filetype plugin indent on

        " Powerline ------------------
        set noshowmode
        set laststatus=2
        let g:Powerline_symbols = 'fancy' " show fancy symbols (requires patched font)
        set encoding=utf-8
        " ----------------------------

        " ctrlp ----------------------
        let g:ctrlp_open_multiple_files = 'tj' " open multiple files in additional tabs
        let g:ctrlp_show_hidden = 1 " include dotfiles and dotdirs in ctrlp indexing
        let g:ctrlp_prompt_mappings = {
            \ 'AcceptSelection("e")': ['<c-t>'],
            \ 'AcceptSelection("t")': ['<cr>', '<2-LeftMouse>'],
            \ } " remap <cr> to open file in a new tab
        " ----------------------------

        set showcmd
        set tabpagemax=100
        set hlsearch
        set incsearch
        set nowrapscan
        set ignorecase
        set smartcase
        set ruler
        set tabstop=4
        set shiftwidth=4
        set expandtab
        set wildmode=list:longest

        autocmd BufWritePre * :%s/\s\+$//e "remove trailing whitespace

        " :REV to "revert" file to state of the most recent save
        command REV earlier 1f

        " disable netrw --------------
        let g:loaded_netrw = 1
        let g:loaded_netrwPlugin = 1
        " ----------------------------

    Any guidance as to fixing the statusline would be fantastic. I've found a github issue outlining almost the exact same problem, but the solution was never posted. Thank you.
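
    Before digging further into the plugin, it may be worth confirming the terminal really delivers 256 colors - the bashrc above forces TERM=xterm-256color, which can mask an urxvt built without 256-color support (Fedora packages a separate 256-color build of rxvt-unicode, if memory serves). A quick check:

        # how many colors does terminfo actually grant for this TERM?
        tput colors

        # paint all 256 background colors; wrong or repeating colors point
        # at the terminal, not at vim or the plugin
        for i in $(seq 0 255); do
            printf '\e[48;5;%dm %3d \e[0m' "$i" "$i"
            (( (i + 1) % 16 == 0 )) && echo
        done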


  • Sharing the same `ssh-agent` among multiple login sessions

    - by intuited
    Is there a convenient way to ensure that all logins from a given user (i.e. me) use the same ssh-agent? I hacked out a script to make this work most of the time, but I suspected all along that there was some way to do it that I had just missed. Additionally, since that time there have been amazing advances in computing technology, like for example this website. So the goal here is that whenever I log in to the box, regardless of whether it's via SSH, or in a graphical session started from gdm/kdm/etc, or at a console:

        if my username does not currently have an ssh-agent running, one is started, the environment variables are exported, and ssh-add is called;
        otherwise, the existing agent's coordinates are exported in the login session's environment variables.

    This facility is especially valuable when the box in question is used as a relay point when sshing into a third box. In this case it avoids having to type in the private key's passphrase every time you ssh in and then want to, for example, do a git push or something. The script given below does this mostly reliably, although it botched recently when X crashed and I then started another graphical session. There might have been other screwiness going on in that instance. Here's my bad-is-good script; I source this from my .bashrc.

        # ssh-agent-procure.bash
        # v0.6.4
        # ensures that all shells sourcing this file in profile/rc scripts use the same ssh-agent.
        # copyright me, now; licensed under the DWTFYWT license.
        mkdir -p "$HOME/etc/ssh";
        function ssh-procure-launch-agent {
            eval `ssh-agent -s -a ~/etc/ssh/ssh-agent-socket`;
            ssh-add;
        }
        if [ ! $SSH_AGENT_PID ]; then
            if [ -e ~/etc/ssh/ssh-agent-socket ] ; then
                SSH_AGENT_PID=`ps -fC ssh-agent |grep 'etc/ssh/ssh-agent-socket' |sed -r 's/^\S+\s+(\S+).*$/\1/'`;
                if [[ $SSH_AGENT_PID =~ [0-9]+ ]]; then
                    # in this case the agent has already been launched and we are just attaching to it.
                    ##++ It should check that this pid is actually active & belongs to an ssh instance
                    export SSH_AGENT_PID;
                    SSH_AUTH_SOCK=~/etc/ssh/ssh-agent-socket;
                    export SSH_AUTH_SOCK;
                else
                    # in this case there is no agent running, so the socket file is left over from a graceless agent termination.
                    rm ~/etc/ssh/ssh-agent-socket;
                    ssh-procure-launch-agent;
                fi;
            else
                ssh-procure-launch-agent;
            fi;
        fi;

    Please tell me there's a better way to do this. Also please don't nitpick the inconsistencies/gaffes (e.g. putting var stuff in etc); I wrote this a while ago and have since learned many things.
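
    For comparison, a sketch of the same idea that drops the ps/sed parsing by letting ssh-add probe the socket itself - per the ssh-add man page, exit status 2 means the agent could not be contacted, while 0 and 1 both mean a live agent answered:

        SOCK="$HOME/etc/ssh/ssh-agent-socket"
        export SSH_AUTH_SOCK="$SOCK"

        ssh-add -l >/dev/null 2>&1
        if [ "$?" -eq 2 ]; then               # 2 = no agent on that socket (stale or absent)
            rm -f "$SOCK"                     # clear any leftover socket file
            eval "$(ssh-agent -s -a "$SOCK")" # start a fresh agent bound to the fixed path
            ssh-add
        fi

    Because liveness is judged by actually talking to the socket, the "X crashed and left a stale socket" case is handled without pattern-matching process tables.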


  • Broken CUPS installation on a 64-bit Ubuntu server

    - by user67046
    Hi, I am having trouble with a CUPS installation. It seems to be in a broken state: when I try to reinstall it, it stalls, and the same happens if I try to remove it completely. I am running the 64-bit server version of Ubuntu 10.10 with kernel Linux 2.6.35-22-server. When I try to start the cups daemon with the following command:

        sudo service cups start

    it just sits there and nothing happens. I have tried to remove it, to be able to reinstall it, with the following command:

        sudo apt-get purge cups

    It finally stalls with the following message:

        Removing cups ...

    After that nothing happens. The process tree for the apt-get command looks like this:

         1404  1404  1404 ?     00:00:00 sshd
        26495 26495 26495 ?     00:00:00 sshd
        26581 26495 26495 ?     00:00:00 sshd
        26582 26582 26582 pts/4 00:00:00 bash
        27158 27158 26582 pts/4 00:00:00 apt-get
        27172 27172 27172 pts/2 00:00:00 dpkg
        27176 27172 27172 pts/2 00:00:00 cups.prerm
        27178 27172 27172 pts/2 00:00:00 stop

    I have tried to leave the process running for a while to see if I get any error messages, but without success. To get out of it I have to kill the processes.

        sudo dpkg --configure cups
        dpkg: error processing cups (--configure):
         package cups is already installed and configured
        Errors were encountered while processing:
         cups

        sudo dpkg --status cups
        Package: cups
        Status: purge ok installed
        Priority: optional
        Section: net
        Installed-Size: 8292
        Maintainer: Ubuntu Developers <[email protected]>
        Architecture: amd64
        Version: 1.4.4-6ubuntu2.3
        Replaces: cupsddk-drivers (<< 1.4.0)
        Provides: cupsddk-drivers
        Depends: libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16), libc6 (>= 2.7), libcups2 (>= 1.4.4-3~), libcupscgi1 (>= 1.4.2), libcupsdriver1 (>= 1.4.0), libcupsimage2 (>= 1.4.0), libcupsmime1 (>= 1.4.0), libcupsppdc1 (>= 1.4.0), libdbus-1-3 (>= 1.0.2), libgcc1 (>= 1:4.1.1), libgnutls26 (>= 2.7.14-0), libgssapi-krb5-2 (>= 1.8+dfsg), libijs-0.35, libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpaper1, libpoppler7, libslp1, libstdc++6 (>= 4.1.1), libusb-0.1-4 (>= 2:0.1.12), zlib1g (>= 1:1.1.4), debconf (>= 1.2.9) | debconf-2.0, upstart-job, poppler-utils (>= 0.12), procps, ghostscript, lsb-base (>= 3), cups-common (>= 1.4.4), cups-client (>= 1.4.4-6ubuntu2.3), ssl-cert (>= 1.0.11), adduser, bc, ttf-freefont, cups-ppdc
        Recommends: foomatic-filters (>= 4.0), cups-driver-gutenprint, ghostscript-cups
        Suggests: cups-bsd, foomatic-db-compressed-ppds | foomatic-db, hplip, xpdf-korean | xpdf-japanese | xpdf-chinese-traditional | xpdf-chinese-simplified, cups-pdf, smbclient (>= 3.0.9), udev
        Breaks: foomatic-filters (<< 4.0)
        Conflicts: cupsddk-drivers (<< 1.4.0)
        Conffiles:
         /etc/fonts/conf.d/99pdftoopvp.conf a5221cfad70a981c80864229ef56586d
         /etc/logrotate.d/cups 5bb41fa9900f0d1c565954405a2bd7c4
         /etc/default/cups 2b436fbb1a32b82b6aba45a76a1d7e40
         /etc/pam.d/cups ff2488324854f7b1e892bb0df062d5f0
         /etc/init/cups.conf 1a3cd022e8474e3d2b44640f33ce68e3
         /etc/ufw/applications.d/cups 29e98a6d850da251e180c3d68dec2bd3
         /etc/apparmor.d/usr.sbin.cupsd 60c4b26bfd5c033baa3dd48a3b2e9911
         /etc/cups/cupsd.conf e2c7ec15835ea0939e5e86f7c6efcc03
         /etc/cups/snmp.conf 2326a8af1e112676d55245bc5eb459ca
         /etc/cups/cupsd.conf.default a68d54d76021e857dd1d64edf57d36c5
        Description: Common UNIX Printing System(tm) - server
         The Common UNIX Printing System (or CUPS(tm)) is a printing system and
         general replacement for lpd and the like. It supports the Internet
         Printing Protocol (IPP), and has its own filtering driver model for
         handling various document types.
         .
         This package provides the CUPS scheduler/daemon and related files.
        Original-Maintainer: Debian CUPS Maintainers <[email protected]>

    I would be grateful if someone could provide some help on how to solve this issue.
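
    The process tree is the giveaway: dpkg is stuck inside cups.prerm, which is itself blocked on an upstart "stop" that never returns. A common (if blunt) way out is to neuter the hanging maintainer script so the purge can complete - a sketch, to be treated as a last resort:

        # replace the package's prerm with a no-op so removal can proceed
        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/cups.prerm'
        sudo chmod +x /var/lib/dpkg/info/cups.prerm

        # the stuck removal can now finish, after which a clean reinstall is possible
        sudo apt-get purge cups
        sudo apt-get install cups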


  • CloudFormation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launchConfig with Ubuntu you will need to install the cfn-init file yourself, which I have done:

        "Properties" : {
            "KeyName" : { "Ref" : "KeyName" },
            "SpotPrice" : "0.05",
            "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
            "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
            "InstanceType" : { "Ref" : "InstanceType" },
            "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
                "#!/bin/bash\n",
                "apt-get -y install python-setuptools\n",
                "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n",
                "cfn-init ",
                "    --stack ", { "Ref" : "AWS::StackName" },
                "    --resource LaunchConfig ",
                "    --configset ALL",
                "    --access-key ", { "Ref" : "WorkerKeys" },
                "    --secret-key ", { "Fn::GetAtt" : ["WorkerKeys", "SecretAccessKey"] },
                "    --region ", { "Ref" : "AWS::Region" },
                " || error_exit 'Failed to run cfn-init'\n"
            ]]}}
        }

    But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs:

        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):#012  File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules#012    cc.handle(name, run_args, freq=freq)#012  File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle#012    [name, self.cfg, self.cloud, cloudinit.log, args])#012  File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run#012    func(*args)#012  File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle#012    util.runparts(runparts_path)#012  File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts#012    raise RuntimeError('runparts: %i failures' % failed)#012RuntimeError: runparts: 1 failures
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[]
        Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user']

    I have absolutely no idea what scripts-user means, and Google is not helping much here either. When I ssh into the server, I can see that it ran the user-data script, since cfn-init is available as a command, whereas it is not in the original AMI the instance is made from.

    However, I have a launchConfig:

        "Comment" : "Install a simple PHP application",
        "AWS::CloudFormation::Init" : {
            "configSets" : { "ALL" : ["WorkerRole"] },
            "WorkerRole" : {
                "files" : {
                    "/etc/cron.d/worker.cron" : {
                        "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n",
                        "mode"  : "000644",
                        "owner" : "root",
                        "group" : "root"
                    },
                    "/home/ubuntu/worker_cron.php" : {
                        "content" : { "Fn::Join" : ["", [
                            "#!/usr/bin/env php",
                            "<?php",
                            "define('ROOT', dirname(__FILE__));",
                            "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";",
                            "const AWS_SECRET = \"", { "Fn::GetAtt" : ["WorkerKeys", "SecretAccessKey"] }, "\";",
                            "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";",
                            "exec('git clone x '.ROOT.'/worker');",
                            "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){",
                            "echo 'git not downloaded right';",
                            "exit();",
                            "}",
                            "echo 'git downloaded';",
                            "include_once ROOT.'/worker/worker_despatcher.php';"
                        ]]},
                        "mode"  : "000755",
                        "owner" : "ubuntu",
                        "group" : "ubuntu"
                    }
                }
            }
        }

    which does not seem to run at all. I have checked for the file's existence in my home directory and it's not there. I have checked for the cronjob entry and it's not there either. I cannot, after reading through the documentation, see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?
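
    When cfn-init runs but none of the declared files appear, its own log is usually more telling than the cloud-init noise in syslog. Separately, note that the second Fn::Join above joins its pieces with "" - the generated worker_cron.php would therefore contain no newlines, so its first line reads `#!/usr/bin/env php<?php...`, a broken shebang, once the file is finally written. Hedged checks on the instance:

        # cfn-init keeps its own log, separate from cloud-init's
        sudo cat /var/log/cfn-init.log

        # re-run the metadata processing by hand with the same arguments the
        # UserData used (stack name, keys and region as in your own template)
        cfn-init --stack mystack --resource LaunchConfig --configset ALL \
                 --access-key AKIA... --secret-key ... --region us-east-1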


  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question.

    I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf:

        description "app"

        start on runlevel [2]
        stop on runlevel [016]

        console owner

        # expect daemon

        script
            APP_ROOT=/home/deploy/app
            PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH
            $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1
        end script

        # respawn

    That works just fine - the Unicorns start up great. What's not great is that the PID detected is not of the Unicorn master; it's of an sh process. That in and of itself isn't so bad, either - if I wasn't using the automagical Unicorn zero-downtime deployment strategy. Because shortly after I send -USR2 to my Unicorn master, a new master spawns up, the old one dies... and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want.

    I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it.

    I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv + the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is:

        #!/usr/bin/env ruby-local-exec
        #
        # This file was generated by Bundler.
        #
        # The application 'unicorn' is installed as part of a gem, and
        # this file is here to facilitate running it.
        #

        require 'pathname'
        ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile",
            Pathname.new(__FILE__).realpath)

        require 'rubygems'
        require 'bundler/setup'

        load Gem.bin_path('unicorn', 'unicorn')

    Additionally, the ruby-local-exec script looks like this:

        #!/usr/bin/env bash
        #
        # `ruby-local-exec` is a drop-in replacement for the standard Ruby
        # shebang line:
        #
        #   #!/usr/bin/env ruby-local-exec
        #
        # Use it for scripts inside a project with an `.rbenv-version`
        # file. When you run the scripts, they'll use the project-specified
        # Ruby version, regardless of what directory they're run from. Useful
        # for e.g. running project tasks in cron scripts without needing to
        # `cd` into the project first.

        set -e
        export RBENV_DIR="${1%/*}"
        exec ruby "$@"

    So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, which all happens from an sh process in the first place... which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.
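
    One detail worth fixing regardless of the fork-counting question: the script stanza launches unicorn as a child of the /bin/sh that Upstart spawns for the stanza, which is exactly why the tracked PID belongs to sh. Prefixing the final command with exec makes the shell replace itself, so in the non-daemonized case (no expect stanza) Upstart tracks the Unicorn master directly - a sketch:

        script
            APP_ROOT=/home/deploy/app
            PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH
            # exec replaces the sh wrapper; Upstart's tracked PID becomes Unicorn itself
            exec $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production
        end script

    This doesn't resolve the deeper conflict - the USR2 zero-downtime respawn still re-execs a new master whose PID Upstart won't follow - but it removes the sh middleman and makes plain start/stop/restart behave.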


  • multi-user rvm gem install failure when called from CloudFormation::Init

    - by Peter Mounce
    I've taken an Amazon Linux AMI (based on CentOS) and installed RVM (1.10.3) on it in multi-user fashion (see {1} below). I used that to install ruby 1.9.3-p125, rubygems 1.8.17, and bundler 1.1 as the baseline requirements for most things I'm going to be using the instances for. I've captured that instance to an AMI, and am now launching it via CloudFormation, with some CloudFormation::Init commands. One of them uses s3cmd to pull down a private gem from S3, and the next one - the one that fails - is to install that gem. It fails with this error message:

        2012-03-15 16:53:20,201 [ERROR] Command 20_install_gems (/usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem install ./*.gem) failed
        2012-03-15 16:53:20,202 [DEBUG] Command 20_install_gems output: /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12:in `require': no such file to load -- rubygems (LoadError)
        from /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12

    Now, that happens during the cfn-init execution - I assume, but haven't checked yet, that cfn-init is being run with an environment different from that of ec2-user (there are no other users on the instance). If I run gem install mygem.gem in an interactive session, it works fine. So my question really is: what should I do to make this work for cfn-init? Have I correctly set up rvm as multi-user? I've confirmed that cfn-init is being run as the root user, with its restricted environment. How should I source /etc/profile.d/rvm.sh into root's sessions?

    {1} My semi-automated rvm installation steps (run in an interactive session as ec2-user):

        sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer )
        sudo gpasswd -a ec2-user rvm
        # iconv-devel is baked into centos' glibc
        sudo yum install -y autoconf automake bison bzip2 gcc-c++ git libffi-devel libtool libxml2-devel libxslt-devel libyaml-devel make openssl-devel patch readline readline-devel zlib zlib-devel
        source /etc/profile.d/rvm.sh
        rvm list known
        # in a new session:
        rvm install ruby-1.9.3-p125
        rvm use 1.9.3 --default
        gem update --system
        # gems required by public_web-awareness
        gem install aws-sdk bundler cocaine sinatra
        echo -e "gem: --no-ri --no-rdoc\n" > /home/ec2-user/.gemrc
        # delete unnecessary documentation files
        rm -rf `gem env gemdir`/doc
        sudo -s
        sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/skel/.gemrc
        sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/gemrc
        # ctrl + d out of the sudo session

    Some environment information:

        [ec2-user@ip ~]$ echo $PATH
        /usr/local/rvm/gems/ruby-1.9.3-p125/bin:/usr/local/rvm/gems/ruby-1.9.3-p125@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p125/bin:/usr/local/rvm/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/bin
        [ec2-user@ip ~]$ echo $GEM_HOME
        /usr/local/rvm/gems/ruby-1.9.3-p125
        [ec2-user@ip ~]$ echo $GEM_PATH
        /usr/local/rvm/gems/ruby-1.9.3-p125:/usr/local/rvm/gems/ruby-1.9.3-p125@global
        [ec2-user@ip ~]$ echo $BUNDLE_PATH

        [ec2-user@ip ~]$ gem list

        *** LOCAL GEMS ***

        aws-sdk (1.3.6)
        bundler (1.1.0)
        cocaine (0.2.1)
        httparty (0.8.1)
        json (1.6.5)
        multi_json (1.1.0)
        multi_xml (0.4.1)
        nokogiri (1.5.1, 1.5.0)
        rack (1.4.1)
        rack-protection (1.2.0)
        rake (0.9.2)
        sinatra (1.3.2)
        tilt (1.3.3)
        uuidtools (2.1.2)
        yamler (0.1.0)
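
    Since cfn-init runs its commands through a bare, non-login shell, one workable pattern is to make each command responsible for loading RVM's environment itself - a hedged sketch of what the 20_install_gems command could run instead:

        # run through a login shell so /etc/profile.d/rvm.sh gets sourced
        bash -l -c 'gem install ./*.gem'

        # or source RVM explicitly, independent of login-shell behaviour
        bash -c 'source /etc/profile.d/rvm.sh && gem install ./*.gem'

    The LoadError above suggests the gem wrapper was invoked without the GEM_HOME/GEM_PATH environment the rvm ruby expects, which fits the restricted root environment cfn-init runs under.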


  • Cannot get official CentOS 5.4 BIND package to start

    - by Brian Cline
    Yesterday I installed CentOS 5.4 on one of my servers, and it appears that the official BIND/named package has trouble starting for reasons I cannot deduce. Here is what happens: [root@hal init.d]# service named start Starting named: Error in named configuration: /etc/named.conf:57: open: named.root.hints: permission denied [FAILED] The line in question, with the directory option for context: // further up in the file: directory "/var/named"; // line 57: include "named.root.hints"; Like you, my first reaction was to check permissions on /var/named/named.root.hints, /var/named, and /var to make sure the named user would be able to read it. Here are the permissions at each level: drwxr-xr-x 19 root root 4096 Nov 3 02:05 var drwxr-x--- 5 root named 4096 Nov 3 02:36 named -rw-r--r-- 1 named named 524 Mar 29 2006 named.root.hints Everything appears to be fine permission-wise. The same error occurs if the /var/named directory is writable by the named user. I've even temporarily allowed the named user to log in via bash, su'ed from root to named, and checked that I was, in fact, able to cat /var/named/named.root.hints successfully. (Yes, don't worry: I changed the shell back to nologin). My last endeavor showed that BIND is able to run under the named user account and start up just fine, if done so manually: [root@hal ~]# named -u named -g 03-Nov-2009 16:31:02.021 starting BIND 9.3.6-P1-RedHat-9.3.6-4.P1.el5 -u named -g 03-Nov-2009 16:31:02.021 adjusted limit on open files from 1024 to 1048576 03-Nov-2009 16:31:02.021 found 2 CPUs, using 2 worker threads 03-Nov-2009 16:31:02.021 using up to 4096 sockets 03-Nov-2009 16:31:02.028 loading configuration from '/etc/named.conf' 03-Nov-2009 16:31:02.030 using default UDP/IPv4 port range: [1024, 65535] 03-Nov-2009 16:31:02.031 using default UDP/IPv6 port range: [1024, 65535] 03-Nov-2009 16:31:02.034 listening on IPv4 interface lo, 127.0.0.1#53 03-Nov-2009 16:31:02.034 listening on IPv4 interface eth0, 10.0.0.5#53 03-Nov-2009 16:31:02.034 listening on IPv4 interface eth1, ww.xx.yy.zz#53 03-Nov-2009 16:31:02.040 command channel listening on 127.0.0.1#953 03-Nov-2009 16:31:02.040 command channel listening on ::1#953 03-Nov-2009 16:31:02.040 ignoring config file logging statement due to -g option 03-Nov-2009 16:31:02.041 zone 0.in-addr.arpa/IN/localhost_resolver: loaded serial 42 03-Nov-2009 16:31:02.042 zone 0.0.127.in-addr.arpa/IN/localhost_resolver: loaded serial 1997022700 03-Nov-2009 16:31:02.042 zone 255.in-addr.arpa/IN/localhost_resolver: loaded serial 42 03-Nov-2009 16:31:02.042 zone 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN/localhost_resolver: loaded serial 1997022700 03-Nov-2009 16:31:02.043 zone localdomain/IN/localhost_resolver: loaded serial 42 03-Nov-2009 16:31:02.043 zone localhost/IN/localhost_resolver: loaded serial 42 03-Nov-2009 16:31:02.043 zone x.y.z.in-addr.arpa/IN/internal: loaded serial 1 03-Nov-2009 16:31:02.044 zone x.y.z/IN/internal: loaded serial 2 03-Nov-2009 16:31:02.045 running What type and size of firearm should I use to resolve this? I'd prefer something with automatic ammunition, and, at worst, it should be able to fit on my shoulder. Of course I am open to suggestions.
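Since the POSIX permissions check out, one hedged suspect on CentOS is SELinux: a hints file copied (rather than created) into /var/named can carry the wrong security context, which only bites when named opens it under its confined domain, not when you read it from an interactive shell. The diagnosis is an assumption, but the test is quick:

    ls -Z /var/named/named.root.hints            # show the SELinux context
    restorecon -v /var/named/named.root.hints    # reset it to the policy default
    # or temporarily drop to permissive mode and retry:
    setenforce 0 && service named start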

    Read the article

  • Installing rtorrent on my ubuntu server

    - by Shishant
Hello, I am trying to install rtorrent on my Ubuntu server. I ran these commands and they worked fine. ./autogen.sh ./configure --with-xmlrpc-c make and then when I tried to use make install I guess it didn't get installed, because no .rtorrent.rc was created in the home directory and running rtorrent returned this error: rtorrent: error while loading shared libraries: libtorrent.so.11: cannot open shared object file: No such file or directory Below is the log of my make install. root@ubuntu:~/rtorrent-0.8.6# make install Making install in doc make[1]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Entering directory `/root/rtorrent-0.8.6/doc' make[2]: Nothing to be done for `install-exec-am'. test -z "/usr/local/share/man/man1" || /bin/mkdir -p "/usr/local/share/man/man1" /usr/bin/install -c -m 644 './rtorrent.1' '/usr/local/share/man/man1/rtorrent.1' make[2]: Leaving directory `/root/rtorrent-0.8.6/doc' make[1]: Leaving directory `/root/rtorrent-0.8.6/doc' Making install in src make[1]: Entering directory `/root/rtorrent-0.8.6/src' Making install in core make[2]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Entering directory `/root/rtorrent-0.8.6/src/core' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/core' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/core' Making install in display make[2]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Entering directory `/root/rtorrent-0.8.6/src/display' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/display' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/display' Making install in input make[2]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Entering directory `/root/rtorrent-0.8.6/src/input' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/input' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/input' Making install in rpc make[2]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Entering directory `/root/rtorrent-0.8.6/src/rpc' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/rpc' Making install in ui make[2]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Entering directory `/root/rtorrent-0.8.6/src/ui' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src/ui' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/ui' Making install in utils make[2]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Entering directory `/root/rtorrent-0.8.6/src/utils' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Leaving directory `/root/rtorrent-0.8.6/src/utils' make[2]: Entering directory `/root/rtorrent-0.8.6/src' make[3]: Entering directory `/root/rtorrent-0.8.6/src' test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin" /bin/bash ../libtool --mode=install /usr/bin/install -c 'rtorrent' '/usr/local/bin/rtorrent' libtool: install: /usr/bin/install -c rtorrent /usr/local/bin/rtorrent make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/root/rtorrent-0.8.6/src' make[2]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Leaving directory `/root/rtorrent-0.8.6/src' make[1]: Entering directory `/root/rtorrent-0.8.6' make[2]: Entering directory `/root/rtorrent-0.8.6' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/root/rtorrent-0.8.6' make[1]: Leaving directory `/root/rtorrent-0.8.6' Thank You.
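The log shows that make install did copy rtorrent to /usr/local/bin, so the failure happens at run time: libtorrent presumably landed in /usr/local/lib, which the dynamic linker only searches after its cache has been rebuilt. Also note that make install never creates ~/.rtorrent.rc; that file is user-supplied. A hedged sketch of the usual fix (the doc/rtorrent.rc path assumes the example config shipped with the 0.8.6 source tree):

    sudo ldconfig                                  # rebuild the runtime linker cache
    ldd /usr/local/bin/rtorrent | grep libtorrent  # libtorrent.so.11 should now resolve
    cp ~/rtorrent-0.8.6/doc/rtorrent.rc ~/.rtorrent.rc   # seed a config by hand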

    Read the article

  • Wear and tear on server hard drive from filesystem polling by PHP script

    - by jackie
So I'm working on a discussion platform, and various clients will visit http://host/thread.php, which will render the discussion thread to date in addition to a form to submit a new post. When a new post is submitted, I would like all of the other clients with browser windows open to have it appear in near-real-time. One of the constraints of my script is that it may not use a DBMS and it must stay in the filesystem. Additionally, I can't use any PECL/PEAR extensions like inotify or anything like that for IPC. The flow will look like this: Client A requests thread.php and the thread is so far empty, but nonetheless it opens a Server-Side Event at eventPusher.php. Client B does the same. Client A fills out a post in the form and submits (POSTs) it to subHandler.php. ??? (subHandler stores the new submission into the main thread storefile which gets read from when a fresh, new client requests thread.php, in addition to somehow signalling to the continually-running eventPusher event-source that a new comment was posted and that it should echo the event-json to the client. How, exactly, it will send this signal I'm yet unsure of, but there are a few options that I've thought of -- this is the crux of the question, so see below for more clarification) eventPusher.php happily pushes the new event to the client, and it shows up, soon after it was originally submitted, on the screens of all clients who have the page open. Now for the #4 missing-link mystery-step, I see a few problems. I mean, either way, eventPusher is gonna be doing a while loop of some sort -- it's gonna be polling something, I think that much is clear. (If that's a bad assumption please do let me know.) Now, the simplest way would be: subHandler gets invoked on the form submission, writes it to the main store in addition to newComments.xml, then exits without doing anything else. Then eventPusher checks newComments.xml every X seconds (by the way, what would be a reasonable time interval here?) and if it finds something then it emits an event to the client. Now, my fear with this is that the server's hard drive will have to constantly start spinning up. Maybe this isn't the case; perhaps it would just get cached in RAM and the Linux kernel would take care of this transparently, such that filesystem access doesn't actually engage the device because the kernel knows that that particular file hasn't changed since the last read. Idea #2: I have no idea how to go about this, but perhaps there is a variable scope that gets stored in general RAM on the system which can be read by any process. Like if we mega-exported a bash variable so that $new_post is normally false but it gets toggled to true by subHandler, and then back to false once it's pushed to the client. I doubt there's such a variable scope in PHP directly, but I struggle with the concept of variable scope, I just can't seem to understand it no matter what I read on it. Idea #3: eventPusher queries ps in its while loop for another instance of itself. If there's not another eventPusher active then it's highly unlikely that new comments will be getting submitted. It's okay if this only works 90% or more of the time; it doesn't need to be completely foolproof. Idea #4: eventPusher queries dmesg to see if that file's been written to recently.
So to sum everything up, I need to have inter-PHP-script communication in near-real-time that will work on a standard mod_php shared hosting setup without any elevated privileges, PHP addon modules, or other system adjustments that can't be done from the PHP script itself at runtime. Without spinning up the drive more than a few times. No SQL servers either. Apologies if my English isn't the best, I'm still trying to improve it.
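On the spin-up fear in the simplest approach: after the first stat() the file's inode sits in the kernel's cache, so an mtime poll every second or two should not touch the disk at all while nothing changes. A minimal shell sketch of the polling loop eventPusher would run (file path and interval are illustrative assumptions, not from the post; the PHP version would do the same with filemtime(), remembering that PHP caches stat results, so clearstatcache() belongs at the top of each iteration):

    #!/bin/bash
    # Poll a flag file's mtime; repeated stat() calls are served from the
    # kernel's inode cache, so the disk is not engaged on each iteration.
    last=0
    while true; do
        now=$(stat -c %Y /path/to/newComments.xml 2>/dev/null || echo 0)
        if [ "$now" -gt "$last" ]; then
            echo "new comment detected"   # eventPusher would emit the SSE JSON here
            last=$now
        fi
        sleep 2
    done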

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
I've set up a Samba server that seems to work; all shares are seemingly exported as read-only, however. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got Kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectively. I noted that domain accounts did NOT include a domain and they are in German (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output, not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the Linux box to the AD domain. After (manually) creating a home directory on the Linux box I can log in using the Administrator user with the password taken from AD.
Now I run Samba with the following setup: [global] netbios name = LX realm = CUSTOMER.LOCAL workgroup = CUSTOMER security = ADS encrypt passwords = yes password server = 192.168.20.244 # IP of the domain controller os level = 0 socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384 idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = Yes winbind enum groups = Yes preferred master = no winbind separator = + dns proxy = no wins proxy = no # client NTLMv2 auth = Yes log level = 2 logfile = /var/log/samba/log.smbd.%U template homedir = /home/%U template shell = /bin/bash [export] path = /mnt/sdc1/export read only = No public = Yes Currently I don't care whether export is exported to everyone or just one user; I want to see somebody WRITING to that directory before I start fiddling with the authentication settings (who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED. Accessing it from Windows shows ACLs that look correct (the user may write) - but it does not work, I can only read files, not write. The directory to be exported looks like this: root@lx:/etc/pam.d# ls -ld /mnt/ drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/ drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/ drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/ root@lx:/etc/pam.d# getfacl /mnt/ getfacl: Removing leading '/' from absolute path names # file: mnt/ # owner: root # group: root user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/ getfacl: Removing leading '/' from absolute path names # file: mnt/sdc1/ # owner: froh # group: froh user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/ getfacl: Removing leading '/' from absolute path names # file: mnt/sdc1/export/ # owner: administrator # group: domänen-admins user::rwx group::rwx group:domänen-admins:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:group:domänen-admins:rwx default:mask::rwx default:other::rwx My, oh my, what am I overlooking? What am I too blind to see?
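Not from the original post, but a hedged set of next steps for a share that stays read-only despite read only = No: dump the configuration smbd actually parsed, then look at the NT-level security descriptor the connecting user is evaluated against (smbcacls needs an existing object in the share, so the foo file from the listing is used here):

    testparm -s                                # effective [global]/[export] after defaults
    smbcacls //lx/export foo -UAdministrator   # NT ACL as seen by the client
    smbcontrol smbd debug 10                   # raise the log level, retry the mkdir,
                                               # then read /var/log/samba/log.smbd.%U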

    Read the article

  • MySQL reserves too much RAM

    - by Buddy
I have a cheap VPS with 128Mb RAM and 256Mb burst. MySQL starts and reserves about 110Mb, but uses no more than 20Mb of it. My VPS control panel shows that I use 127Mb (I'm also running nginx and sphinx); I know that it shows reserved RAM, but when I reach over 128Mb, my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks in my.cnf but it didn't help much. top output: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2156 668 572 S 0.0 0.3 0:00.03 init 11311 root 15 0 11212 356 228 S 0.0 0.1 0:00.00 vzctl 11312 root 18 0 3712 1484 1248 S 0.0 0.6 0:00.01 bash 11347 root 18 0 2284 916 732 R 0.0 0.3 0:00.00 top 13978 root 17 -4 2248 552 344 S 0.0 0.2 0:00.00 udevd 14262 root 15 0 1812 564 472 S 0.0 0.2 0:00.03 syslogd 14293 sphinx 15 0 11816 1172 672 S 0.0 0.4 0:00.07 searchd 14305 root 25 0 7192 1036 636 S 0.0 0.4 0:00.00 sshd 14321 root 25 0 2832 836 668 S 0.0 0.3 0:00.00 xinetd 15389 root 18 0 3708 1300 1132 S 0.0 0.5 0:00.00 mysqld_safe 15441 mysql 15 0 113m 16m 4440 S 0.0 6.4 0:00.15 mysqld 15489 root 21 0 13056 1456 340 S 0.0 0.6 0:00.00 nginx 15490 nginx 18 0 13328 2388 992 S 0.0 0.9 0:00.06 nginx 15507 nginx 25 0 19520 5888 4244 S 0.0 2.2 0:00.00 php-cgi 15508 nginx 18 0 19636 4876 2748 S 0.0 1.9 0:00.12 php-cgi 15509 nginx 15 0 19668 4872 2716 S 0.0 1.9 0:00.11 php-cgi 15518 root 18 0 4492 1116 568 S 0.0 0.4 0:00.01 crond MySQL tuner: >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering Please enter your MySQL administrative login: root Please enter your MySQL administrative password: -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.0.77 [OK] Operating on 32-bit architecture with less than 2GB RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in InnoDB tables: 1M (Tables: 1) [OK] Total fragmented tables: 0 -------- Performance Metrics ------------------------------------------------- [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K) [--] Reads / Writes: 100% / 0% [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads) [OK] Maximum possible memory usage: 109.4M (42% of installed RAM) [OK] Slow queries: 0% (0/37) [OK] Highest usage of available connections: 1% (1/100) [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K [OK] Query cache efficiency: 42.1% (8 cached / 19 selects) [OK] Query cache prunes per day: 0 [!!] Temporary tables created on disk: 27% (3 on disk / 11 total) [!!] Thread cache is disabled [OK] Table cache hit rate: 57% (8 open / 14 opened) [OK] Open file limit used: 1% (12/1K) [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks) [!!]
Connections aborted: 10% [OK] InnoDB data size / buffer pool: 1.5M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries When making adjustments, make tmp_table_size/max_heap_table_size equal Reduce your SELECT DISTINCT queries without LIMIT clauses Set thread_cache_size to 4 as a starting value Your applications are not closing MySQL connections properly Variables to adjust: tmp_table_size (> 32M) max_heap_table_size (> 16M) thread_cache_size (start at 4) I think if I do what MySQLtuner says, MySQL will use more RAM.
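A hedged sketch of my.cnf settings that would shrink the reserved footprint. The values are illustrative guesses for a 128Mb VPS, not recommendations taken from the tuner output; the biggest lever is max_connections, since the tuner reports 832K per thread across 100 max threads, which accounts for most of the 109.4M maximum:

    # /etc/my.cnf -- hypothetical low-memory fragment for MySQL 5.0
    [mysqld]
    key_buffer_size         = 1M
    innodb_buffer_pool_size = 8M
    query_cache_size        = 4M
    max_connections         = 20     # cuts the 832K-per-thread reservation
    thread_stack            = 128K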

    Read the article

  • Adaptive ADF/WebCenter template for the iPad

    - by Maiko Rocha
One of my WebCenter Portal customers was asking about adaptive design with ADF/WebCenter Portal and how they could go about creating an adaptive iPad template for their WebCenter Portal application. They were looking not only for the out-of-the-box support for mobile Safari which is certified against PS5+ (11.1.1.6) for ADF/WebCenter - but also to create a specific template to streamline their workflow on the iPad. Seems like they wanted something along the lines of what Yahoo! Mail provides for the iPad - so the example I will use is shamelessly inspired by Y! Mail's iPad UI. But first, let's quickly understand how we can bake some adaptive goodness into ADF Faces. First thing we need to understand is, yes, there are a couple of constraints that we will need to work around, namely, the use of layout managers and skins. Please also keep in mind that I'm not and I don't pretend to be a web designer, much less a UX specialist, so feel free to leave your thoughts on the matter in the comments section. Now, back to the limitations. Layout Managers ADF Faces layout managers create an abstraction on top of the generated HTML code for a page so a developer doesn't need to be worried about how to size and dimension the UI layout (e.g., af:panelStretchLayout). Although layout managers are very helpful, in this specific situation we will need to know a little bit more about how the final HTML is being rendered so we can apply the CSS class accordingly and create transition containers where the media queries will be applied - now, if you're using 11gR2 (11.1.2.2.3) there's the new component af:panelGridLayout (here and here) that will greatly improve creating responsive templates and pages because it is based on the grid/fluid systems and will generate straight out to DIVs on your final page. For now, I'm limited to PS5 and the af:panelStretchLayout component as a starting point because that's the release my customer is on. Skins You won't be able to use media queries, or use anything with "@" notation in the skin CSS file - the skin pre-processor will remove all extraneous "@" from the CSS file. The solution is to split your CSS into two separate files: a skin CSS file and plain CSS where you will add the media queries. The issue here is that you won't be able to use media queries for any faces components. We can, though, still apply the media queries to components like af:panelGroupLayout and af:panelBorderLayout through their styleClass property to enable these components to be responsive to the iPad orientation, by changing their dimensions, font sizes, hide/show areas, etc. Difference between responsive and adaptive design The best definition of adaptive vs responsive web design I could find is this: “Responsive web design,” as coined by Ethan Marcotte, means “fluid grids, fluid images/media & media queries.” “Adaptive web design,” as I use it, is about creating interfaces that adapt to the user's capabilities (in terms of both form and function). To me, “adaptive web design” is just another term for “progressive enhancement” of which responsive web design can (and often should) be an integral part, but is a more holistic approach to web design in that it also takes into account varying levels of markup, CSS, JavaScript and assistive technology support. Responsive/adaptive web design is much more than slapping an HTML template with CSS around your content or application.
The content and application themselves are part of your web design - in other words, a responsive template is just an afterthought if it does not originate from a responsive design that involves the whole web application(s). Tips on responsive / adaptive design with ADF/WebCenter Some of the tips listed below were already mentioned in multiple blog posts about ADF layout and skinning, but they are still worth remembering: a simple guideline for ADF/WebCenter apps would be to first create high-level groups of devices, for example: smartphones, tablets, and desktop. For each of these large groups, create the basic structure to provide responsiveness: a page template, a skin, and an external CSS: pagetemplate_smartphone.jspx, smartphone_skin.css, smartphone-responsive.css pagetemplate_tablet.jspx, tablet_skin.css, tablet-responsive.css pagetemplate_desktop.jspx, desktop_skin.css, desktop-responsive.css These three assets can be changed on the fly through a user-agent check on the server side, delivering the right UI to the right device. Within each of the assets, you can make fine adjustments for each subgroup of devices with media queries - for example, smartphones with different screen dimensions and pixel density. Having these three groups and the corresponding assets per group seems to be a good compromise between trying to put everything on a single set of assets - especially considering the constraints above - and going to the other side of the spectrum to create assets per discrete device (iPhone4, iPhone5, Nexus, S3, etc.). Keep in mind that these are my rules and are not in any shape or form a best practice - this is what fits best for the scenarios I've been working with. If you need to use HTML tags on your page, surround them with af:group to protect the DOM structure For stretchable/fluid layouts: Use non-stretching containers: panelGroupLayout, panelBorderLayout, … panelBorderLayout can be used to approximate the HTML table component To avoid multiple scroll bars, do not nest scrolling PanelGroupLayout components. Consider layout="vertical" For stretchable/fluid layouts: Most stretchable ADF components also work in flowing context with dimensionsFrom="auto" To stretch a component horizontally, use styleClass="AFStretchWidth" instead of "width:100%" Skinning Don't use CSS3 @media, @import, animations, etc. in skin CSS files. They will be removed. CSS3 properties within a class (box-shadow, transition, etc.) work just fine. Consider resetting some skin classes to better control their rendering: body {color: inherit;font: inherit;} af|document {-tr-inhibit: all;} af|commandLink {-tr-inhibit: all;} af|goLink {-tr-inhibit: all;} af|inputText::content {font: inherit;} Specific meta tags and CSS properties: Use <meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0"/> to avoid zooming (if you want) Use -webkit-overflow-scrolling: touch to enable native momentum scrolling within overflown areas (here) Use text-rendering: optimizeLegibility to improve readability. (here) Use text-overflow: ellipsis to gracefully crop overflown text. (here) The meta tags are included in each and every page in the metaContainer facet of the af:document tag. You can also use JavaScript to inject the meta tags from the template. For the purpose of the example, I wanted to use as few workarounds as possible.
The iPad template and sample application This sample application has been built as a WebCenter Portal application, but you will also be able to reuse the template and techniques on your vanilla ADF application. Keep in mind that I'm neither a designer nor a CSS specialist, so please don't bash me too much on the messy CSS file you'll find in the application. I've extended the provided PreferencesBean class that comes with WebCenter Portal and added code to dynamically change the template and skin on the fly. This is the sample application in landscape orientation: This is the sample application in portrait orientation - the left side menu hides automatically based on a CSS media query: Another screenshot with a skinned popup opened: This is a sample application for you to play with - ideally you shouldn't use it as a starting point. On the left side bar you will find links rendered from a WebCenter Portal navigation model - the link triggers a full request through an af:goLink, while the light blue PPR button triggers a PPR navigation. The dark blue toolbar buttons at the top don't have any function, while the Approve and Reject buttons show a skinned popup. The search box of course doesn't have any behavior attached to it either. There's a known issue right now with some PPR calls that are randomly generating a 403 error redirecting to the login page - I didn't have time to investigate if this is iOS6 specific or not - if you have any insights please let me know your findings. You can download the sample here.
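To make the orientation behavior concrete, here is a hedged sketch of what the plain (non-skin) CSS could look like. The class names are invented for illustration, not taken from the sample, and would be referenced from the styleClass property of af:panelGroupLayout components in the template, per the skinning constraints described above:

    /* tablet-responsive.css (illustrative; class names are hypothetical) */
    @media only screen and (max-device-width: 1024px) and (orientation: portrait) {
      .sideMenu { display: none; }        /* hide the left menu in portrait */
      .mainArea { margin-left: 0; }
    }
    @media only screen and (max-device-width: 1024px) and (orientation: landscape) {
      .sideMenu { display: block; width: 220px; }
      .mainArea { margin-left: 220px; }
    }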

    Read the article

  • Command line mode only -- successful login only brings me back to login screen

    - by seth
whenever I log in the screen goes black, I see a glimpse of terminal-esque text, and then it brings me back to the login screen (Ubuntu 12.04). I can enter and log in via the command line. The guest account works fine. I think this happened because I edited some Xorg-related file trying to make an external monitor work with my laptop. I copy-pasted from a forum post, so I don't recall the file or what I put in it. I can't find the forum post again, and my bash history wasn't recorded from that session. I tried reinstalling Xorg and ubuntu-desktop, nvidia, resetting any configs I could find... I'm really at a loss as to what to do. Here's my ~/.xsession-errors: /usr/sbin/lightdm-session: 11: /home/seth/.profile: -s: not found Backend : gconf Integration : true Profile : unity Adding plugins Initializing core options...done Initializing composite options...done Initializing opengl options...done Initializing decor options...done Initializing vpswitch options...done Initializing snap options...done Initializing mousepoll options...done Initializing resize options...done Initializing place options...done Initializing move options...done Initializing wall options...done Initializing grid options...done I/O warning : failed to load external entity "/home/seth/.compiz/session/108fa6ea48f8a973b9133850948930576700000017740033" Initializing session options...done Initializing gnomecompat options...done ** Message: applet now removed from the notification area Initializing animation options...done Initializing fade options...done Initializing unitymtgrabhandles options...done Initializing workarounds options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing ezoom options...done ** Message: using fallback from indicator to GtkStatusIcon (compiz:1846): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed Initializing unityshell options...done Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory Please ask your system administrator to enable user sharing. Setting Update "main_menu_key" Setting Update "run_key" Setting Update "launcher_hide_mode" Setting Update "edge_responsiveness" Setting Update "launcher_capture_mouse" ** Message: moving back from GtkStatusIcon to indicator compiz (decor) - Warn: failed to bind pixmap to texture ** (zeitgeist-datahub:2128): WARNING **: zeitgeist-datahub.vala:227: Unable to get name "org.gnome.zeitgeist.datahub" on the bus! failed to create drawable compiz (core) - Warn: glXCreatePixmap failed compiz (core) - Warn: Couldn't bind background pixmap 0x1e00001 to texture compiz (decor) - Warn: failed to bind pixmap to texture ** Message: No keyring secrets found for Sonic.net_356/802-11-wireless-security; asking user. compiz (decor) - Warn: failed to bind pixmap to texture [... the "No keyring secrets found" message repeats a dozen more times ...] [2348:2352:12678840568:ERROR:gpu_watchdog_thread.cc(231)] The GPU process hung. Terminating after 10000 ms. [2256:2283:14450711755:ERROR:ssl_client_socket_nss.cc(1542)] handshake with server mail.google.com:443 failed; NSS error code -5938, net_error -107 [... the same ssl_client_socket_nss.cc handshake error repeats dozens of times with varying timestamps ...] If anyone can help me out, I'd be forever grateful
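For what it's worth, the very first line of that log - /home/seth/.profile: -s: not found - points at an edited file, and a root-owned ~/.Xauthority is the other classic cause of a login loop. A hedged set of checks from the console login (the chown assumes the username seth; the diagnosis itself is a guess, not something stated in the question):

    head -n 20 ~/.profile                  # look for the stray line containing "-s"
    ls -l ~/.Xauthority ~/.ICEauthority    # both should be owned by seth, not root
    sudo chown seth:seth ~/.Xauthority ~/.ICEauthority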

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the "references", below, for some examples. Nonetheless, here is a summary of my configuration and results. (1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made: (1.1) Which virtualization option? There are several virtualization options and isolation levels that are available. Options include: Hard partitions: Dynamic Domains on Sun SPARC Enterprise M-Series Servers Hypervisor-based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers OS Virtualization using Oracle Solaris Containers Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth. Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones" (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources, and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors. (1.3) Cluster Topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution. (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day. (2.0) Configuration tested. (2.1) I was using a SPARC T4-2 Server, which has 2 CPUs and 128 virtual processors.
To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl: test# ./psrinfo.pl -pv The physical processor has 8 cores and 64 virtual processors (0-63) The core has 8 virtual processors (0-7) The core has 8 virtual processors (8-15) The core has 8 virtual processors (16-23) The core has 8 virtual processors (24-31) The core has 8 virtual processors (32-39) The core has 8 virtual processors (40-47) The core has 8 virtual processors (48-55) The core has 8 virtual processors (56-63) SPARC-T4 (chipid 0, clock 2848 MHz) The physical processor has 8 cores and 64 virtual processors (64-127) The core has 8 virtual processors (64-71) The core has 8 virtual processors (72-79) The core has 8 virtual processors (80-87) The core has 8 virtual processors (88-95) The core has 8 virtual processors (96-103) The core has 8 virtual processors (104-111) The core has 8 virtual processors (112-119) The core has 8 virtual processors (120-127) SPARC-T4 (chipid 1, clock 2848 MHz) (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic. (2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ): (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers. (3.1) Stop AppServers from the BUI. (3.2) Stop the NGZ. test# ssh test-z2 init 5 (3.3) Enable resource pools: test# svcadm enable pools (3.4) Create the resource pool: test# poolcfg -dc 'create pool pool-test-z2' (3.5) Create the processor set: test# poolcfg -dc 'create pset pset-test-z2' (3.6) Specify the maximum number of CPUs that may be added to the processor set: test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)' (3.7) bash syntax to add virtual CPUs to the processor set: test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done (3.8) Associate the resource pool with the processor set: test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)' (3.9) Tell the zone to use the resource pool that has been created: test# zonecfg -z test-z2 set pool=pool-test-z2 (3.10) Boot the Oracle Solaris Container: test# zoneadm -z test-z2 boot (3.11) Save the configuration to /etc/pooladm.conf: test# pooladm -s (4.0) Results. Using the resource pools improves both throughput and response time: (5.0) References: System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones Capitalizing on large numbers of processors with WebSphere Portal on Solaris WebSphere Application Server and T5440 (Dileep Kumar's Weblog) http://www.brendangregg.com/zones.html Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.
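As a hedged follow-up, not part of the original write-up: before re-running the load test, these standard Solaris resource-pool commands can confirm the binding took effect and show per-pset utilization while the cluster is under load.

    poolcfg -dc info            # dump the configured pools and processor sets
    poolstat -r pset 5          # live per-pset utilization, sampled every 5 seconds
    zonecfg -z test-z2 info pool   # confirm the zone is attached to the intended pool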

    Read the article
