Search Results

Search found 974 results on 39 pages for 'oct'.


  • Java Embedded @ JavaOne coming soon...

    - by hinkmond
    The "Internet of Things" is coming your way to the Java Embedded sub-conference at JavaOne 2012 next week: Oct. 3 - Oct. 4 in San Francisco. Get ready to learn how Java Embedded technologies and solutions offer compelling value. See: Java Embedded @ JavaOne Here's a quote: The conference is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, with a unique opportunity to meet together and learn about how they can use Java embedded technologies to enable new business strategies. It's the place to be for Java Embedded techies. Hinkmond

    Read the article

  • filling in the holes in the result of a query

    - by ????? ????????
    My query is returning:

        | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Bla |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 13 |
        | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 14 |
        | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 8 | 37 | 29 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 374 | 30 |
        | 0 | 0 | 1 | 0 | 78 | 2 | 4 | 8 | 57 | 169 | 116 | 602 | 31 |
        | 156 | 255 | 79 | 75 | 684 | 325 | 289 | 194 | 407 | 171 | 584 | 443 | 32 |
        | 1561 | 2852 | 2056 | 796 | 2004 | 1755 | 879 | 1052 | 1490 | 1683 | 2532 | 2381 | 33 |
        | 4167 | 3841 | 4798 | 3399 | 4132 | 5849 | 3157 | 4381 | 4424 | 4487 | 4178 | 5343 | 34 |
        | 5472 | 5939 | 5768 | 4150 | 7483 | 6836 | 6346 | 6288 | 6850 | 7155 | 5706 | 5231 | 35 |
        | 5749 | 4741 | 5264 | 4045 | 6544 | 7405 | 7524 | 6625 | 6344 | 5508 | 6513 | 3854 | 36 |
        | 5464 | 6323 | 7074 | 4861 | 7244 | 6768 | 6632 | 7389 | 8077 | 8745 | 6738 | 5039 | 37 |
        | 5731 | 7205 | 7476 | 5734 | 9103 | 9244 | 7339 | 8970 | 9726 | 9089 | 6328 | 5512 | 38 |
        | 7262 | 6149 | 8231 | 6654 | 9886 | 9834 | 9306 | 10065 | 9983 | 9984 | 6738 | 5806 | 39 |
        | 5886 | 6934 | 7137 | 6978 | 9034 | 9155 | 7389 | 9437 | 9711 | 8665 | 6593 | 5337 | 40 |

    As you can see, the Bla column starts at 13. I want it to start at 1, then 2, then 3, and so on, with no gaps in the data. The gaps are there because all of the months are 0 for that specific Bla value. How do I get the result set to include ALL values of Bla, even the ones that yield 0 for every month?
    Here are the desired results:

        | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Bla |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
        | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 14 |
        | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 |
        | … | … | … | … | … | … | … | … | … | … | … | … | … |

    Here's my query:

        WITH CTE AS (
            select
                sum(case when datepart(month,[datetime entered]) = 1  then 1 end) as Jan,
                sum(case when datepart(month,[datetime entered]) = 2  then 1 end) as Feb,
                sum(case when datepart(month,[datetime entered]) = 3  then 1 end) as Mar,
                sum(case when datepart(month,[datetime entered]) = 4  then 1 end) as Apr,
                sum(case when datepart(month,[datetime entered]) = 5  then 1 end) as May,
                sum(case when datepart(month,[datetime entered]) = 6  then 1 end) as Jun,
                sum(case when datepart(month,[datetime entered]) = 7  then 1 end) as Jul,
                sum(case when datepart(month,[datetime entered]) = 8  then 1 end) as Aug,
                sum(case when datepart(month,[datetime entered]) = 9  then 1 end) as Sep,
                sum(case when datepart(month,[datetime entered]) = 10 then 1 end) as Oct,
                sum(case when datepart(month,[datetime entered]) = 11 then 1 end) as Nov,
                sum(case when datepart(month,[datetime entered]) = 12 then 1 end) as Dec,
                DATEPART(yyyy,[datetime entered]) as [Year],
                bla = CASE WHEN datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24
                                + CONVERT(CHAR(2),[datetime completed],108) > 191
                           THEN 192
                           ELSE datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24
                                + CONVERT(CHAR(2),[datetime completed],108)
                      END
                --,datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE)) AS Sort_Days,
                --DATEPART(hour, [datetime completed]) AS Sort_Hours
            from [TurnAround]
            group by
                datediff(d, CAST([datetime entered] as DATE), CAST([datetime completed] as DATE))*24
                    + CONVERT(CHAR(2),[datetime completed],108),
                DATEPART(yyyy,[datetime entered]),
                [datetime entered]
                --[DateTime Completed]
        )
        SELECT
            ISNULL(SUM(Jan),0) Jan,
            ISNULL(SUM(Feb),0) Feb,
            ISNULL(SUM(Mar),0) Mar,
            ISNULL(SUM(Apr),0) Apr,
            ISNULL(SUM(May),0) May,
            ISNULL(SUM(Jun),0) Jun,
            ISNULL(SUM(Jul),0) Jul,
            ISNULL(SUM(Aug),0) Aug,
            ISNULL(SUM(Sep),0) Sep,
            ISNULL(SUM(Oct),0) Oct,
            ISNULL(SUM(Nov),0) Nov,
            ISNULL(SUM(Dec),0) Dec,
            [year],
            --,Sort_Hours,
            --Sort_Days,
            A.RN Bla
        FROM (
            SELECT *, RN=ROW_NUMBER() OVER(ORDER BY object_id) FROM sys.all_objects
        ) A
        LEFT JOIN CTE B
            ON A.RN = CASE WHEN B.Bla > 191 THEN 192 ELSE B.Bla END
        WHERE A.RN BETWEEN 1 AND 192
        GROUP BY A.RN, [year]
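
    A language-agnostic sketch of the usual fix: build the complete 1..192 key range first, then overlay the aggregated counts so missing Bla values come back as all-zero rows. The month_counts data below is hypothetical, not the poster's schema; in the posted SQL the same effect is what the LEFT JOIN against the sys.all_objects row numbers plus ISNULL(..., 0) is reaching for.

        # Zero-filling sketch: build the complete key range up front, then
        # overlay whatever the aggregation actually returned.
        month_counts = {                     # hypothetical aggregated result: {bla: [Jan..Dec]}
            13: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0],
            14: [1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0],
        }

        full_range = range(1, 193)           # every Bla value you want to see, 1..192
        filled = {bla: month_counts.get(bla, [0] * 12) for bla in full_range}

        for bla in sorted(filled):
            print(filled[bla] + [bla])       # 12 month counts followed by the Bla value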

    Read the article

  • Email with extra '.com' behind sender email address

    - by CHT
    Currently I have a situation where I sent an email to [email protected], but when I receive mail from [email protected], it shows as [email protected], with an extra '.com' behind the email address. This just started happening this week; I didn't change any settings before this. I am currently using Outlook 2010. When I checked the email in webmail, it also showed as [email protected], so it seems to have nothing to do with Outlook. I also tried Thunderbird 16.0.1, but the problem is the same. Has anyone experienced this before? Is the problem caused by the sender or the receiver? The message headers are below:

        Return-Path: [email protected]
        Received: from colo4.roaringpenguin.com (not-assigned.privatedns.com [174.142.115.36] (may be forged))
            by pioneerpos.com (8.12.11/8.12.11) with ESMTP id q9V6OsKU032650
            for [email protected]; Wed, 31 Oct 2012 01:24:55 -0500
        Received: from mail.pointsoft.com.tw (pointsoft.com.tw [59.124.242.126])
            by colo4.roaringpenguin.com (8.14.3/8.14.3/Debian-9.4) with ESMTP id q9V6OmN0026374
            for [email protected]; Wed, 31 Oct 2012 02:24:50 -0400
        X-MimeOLE: Produced By Microsoft Exchange V6.5
        Content-class: urn:content-classes:message
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CDB730.6B3D5A51"
        Subject: =?big5?B?scTByrPmLblzpfM=?=
        Date: Wed, 31 Oct 2012 14:25:16 +0800
        Message-ID:
        X-MS-Has-Attach: yes
        X-MS-TNEF-Correlator:
        thread-topic: =?big5?B?scTByrPmLblzpfM=?=
        thread-index: Ac23MH3YpZuLx2ejTYqR5PfoZ+IoBw==
        X-Priority: 1
        Priority: Urgent
        Importance: high
        From: "Alice" [email protected]
        To: "Bob" [email protected]
        X-Spam-Score: undef - pointsoft.com.tw is whitelisted.
        X-CanIt-Geo: ip=59.124.242.126; country=TW; region=03; city=Taipei; latitude=25.0392; longitude=121.5250; http://maps.google.com/maps?q=25.0392,121.5250&z=6
        X-CanItPRO-Stream: pioneerpos-com:default (inherits from rp-customers:default,base:default)
        X-Canit-Stats-ID: 02IhGoMJb - 2e7fa924443e - 20121031
        X-CanIt-Archive-Cluster: irqpXI7aJGyo4Ewta7qVH399FOg
        X-Scanned-By: CanIt (www . roaringpenguin . com) on 174.142.115.36

    Read the article

  • postfix sasl "cannot connect to saslauthd server: No such file or directory"

    - by innotune
    I am trying to set up Postfix with SMTP authentication, using /etc/shadow as my realm. Unfortunately I get a "generic failure" when I try to authenticate:

        # nc localhost 25
        220 mail.foo ESMTP Postfix
        AUTH PLAIN _base_64_encoded_user_name_and_password_
        535 5.7.8 Error: authentication failed: generic failure

    In the mail.warn logfile I get the following entries:

        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: SASL authentication failure: Password verification failed
        Oct 8 10:43:40 mail postfix/smtpd[1060]: warning: _ip_: SASL PLAIN authentication failed: generic failure

    However, the SASL setup itself seems to be fine:

        $ testsaslauthd -u _user_ -p _pass_
        0: OK "Success."

    I added smtpd_sasl_auth_enable = yes to main.cf. This is my smtpd.conf:

        $ cat /etc/postfix/sasl/smtpd.conf
        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN
        saslauthd_path: /var/run/saslauthd/mux
        autotransition:true

    I tried this configuration with and without the last two lines. I'm running Debian stable. How can Postfix find and connect to the saslauthd server?

    Edit: I'm not sure whether Postfix runs in a chroot. The master.cf looks like this: http://pastebin.com/Fz38TcUP

    saslauthd is located in sbin:

        $ which saslauthd
        /usr/sbin/saslauthd

    The EHLO response is:

        EHLO _server_name_
        250-_server_name_
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
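
    One check worth making first (a hedged guess, but a very common cause of this exact "No such file or directory" message on Debian): if smtpd runs chrooted to /var/spool/postfix, it looks for the saslauthd socket underneath the chroot rather than at the literal /var/run/saslauthd/mux. A minimal sketch of that check; the paths are the Debian defaults and may need adjusting to your system:

        import os

        candidates = [
            "/var/run/saslauthd/mux",                      # path smtpd.conf points at
            "/var/spool/postfix/var/run/saslauthd/mux",    # same path as seen from inside the chroot
        ]

        for path in candidates:
            print(path, "->", "exists" if os.path.exists(path) else "MISSING")

    If only the first path exists, the usual Debian remedy is to make saslauthd place its socket under the chroot (OPTIONS="-m /var/spool/postfix/var/run/saslauthd" in /etc/default/saslauthd) and add the postfix user to the sasl group; confirm the chroot column in your own master.cf first.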

    Read the article

  • Why apache doesn't restart after configuring SSL?

    - by poz2k4444
    I've installed apache2 and then configured it to work with SSL following this and this tutorial. The problem comes when I try to restart the service; the following error is thrown:

        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs

    The output of netstat -anp | grep 443 just displays firefox listening and nothing else. How can I solve this and get the service running? The output of ps -Af | grep <firefox PID> is:

        root 1949    1 11 18:42 tty1 00:20:55 /opt/firefox/firefox-bin
        root 2025 1949  4 18:43 tty1 00:08:39 /opt/firefox/plugin-container /root/.mozilla/plugins/libflashplayer.so -greomni /opt/firefox/omni.ja 1949 true plugin

    After closing firefox and then checking again for port 443, the output is:

        tcp  0 0 10.32.208.179:38923 74.125.139.155:443 TIME_WAIT -
        tcp  0 0 10.32.208.179:45706 74.125.139.113:443 TIME_WAIT -
        tcp  0 0 10.32.208.179:40456 74.125.139.156:443 TIME_WAIT -
        tcp  0 0 10.32.208.179:56823 69.171.227.62:443  FIN_WAIT2 -
        unix 3 [ ] STREAM CONNECTED 12443 1721/dbus-daemon @/tmp/dbus-8ee35rmOOS

    Looking at the error logs (which are not from the time when I'm doing this), the last errors are:

        [Tue Oct 02 18:41:54 2012] [error] Init: Unable to read server certificate from file /etc/apache2/ssl/sever.crt
        [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
        [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application, and when I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and to start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq I see that the wrong Ruby appears to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root 1236 1235 8 20:54 pts/0 00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
        </Location>

    This works great: any user in our Active Directory can access our Subversion repository. Now I want to limit this to only people in the Active Directory group Development:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
            Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752] vauth_ldap authenticate: user dweintraub
            authentication failed; URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter]

    And I get this in my access_log:

        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or, at least, how can I confirm that, just to make sure it's not the issue? I have the Sysinternals Suite ADExplorer; it's where I'm getting all of my info.)
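
    On the "how can I confirm I'm in that group" part: a hedged way to check from the server itself is to reuse the bind account from the config and ask AD directly. A minimal sketch with the python-ldap package (assumed to be installed); the DNs and password are copied from the posted config:

        import ldap

        conn = ldap.initialize("ldap://ldap.vegibanc.com")
        conn.simple_bind_s(
            "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com",
            "swordfish",
        )

        # Look up the user and print the groups AD says they belong to.
        results = conn.search_s(
            "dc=vegibanc,dc=com",
            ldap.SCOPE_SUBTREE,
            "(sAMAccountName=dweintraub)",
            ["memberOf"],
        )
        for dn, attrs in results:
            if not dn:                     # skip referral entries AD may return
                continue
            print(dn)
            for group in attrs.get("memberOf", []):
                print("  member of:", group)

    If the Development group shows up in memberOf, the Require line itself is the likely problem: an AD distinguished name is normally comma-separated (CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com), while the posted directive separates the RDNs with spaces, which fits the "Bad search filter" error.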

    Read the article

  • How to check that all ZFS snapshots within a pool are without holds before destroying that pool

    - by Graham Perrin
    Question: I can already check each snapshot of a filesystem individually, manually. I would prefer to check them all at once, with a single command or script. Can that be done with a script?

    Background: from the man page for zfs(8):

        zfs holds [-H] [-r] snapshot…
        …
        -r   Specifies that a hold with the given tag is applied recursively to the snapshots of all descendent file systems.

    I wondered whether recent snapshots are treated as descendants of older snapshots. No:

        Last login: Sat Dec  8 09:02:26 on ttys003
        macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-12-08-081957
        NAME  TAG  TIMESTAMP
        macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255
        NAME                     TAG                                            TIMESTAMP
        gjp22@2012-10-28-212255  problem with LocalStorage for WOT for Safari   Mon Oct 29  6:44 2012
        macbookpro08-centrim:~ gjp22$ zfs hold experiment gjp22@2012-12-08-081957
        macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255
        NAME                     TAG                                            TIMESTAMP
        gjp22@2012-10-28-212255  problem with LocalStorage for WOT for Safari   Mon Oct 29  6:44 2012
        macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-12-08-081957
        NAME                     TAG         TIMESTAMP
        gjp22@2012-12-08-081957  experiment  Sat Dec  8  9:04 2012
        macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255
        NAME                     TAG                                            TIMESTAMP
        gjp22@2012-10-28-212255  problem with LocalStorage for WOT for Safari   Mon Oct 29  6:44 2012
        macbookpro08-centrim:~ gjp22$
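
    To the "can that be done with a script?" question: yes, one straightforward approach is to list every snapshot under the pool and run zfs holds on each, flagging any that report a hold. A minimal sketch (the pool/filesystem name is an assumption; point it at whatever you intend to destroy):

        import subprocess

        POOL = "gjp22"   # hypothetical pool/filesystem root to check

        # All snapshots under the pool, one name per line.
        names = subprocess.check_output(
            ["zfs", "list", "-H", "-r", "-t", "snapshot", "-o", "name", POOL],
            universal_newlines=True,
        ).split()

        held = []
        for snap in names:
            # -H gives parseable output: one line per hold, empty if there are none.
            out = subprocess.check_output(["zfs", "holds", "-H", snap],
                                          universal_newlines=True).strip()
            if out:
                held.append((snap, out))

        if held:
            for snap, out in held:
                print("HOLD:", snap, "->", out)
        else:
            print("no holds on any snapshot under", POOL)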

    Read the article

  • Nagios command not transmitting all arguments

    - by markus
    I'm using the following service to monitor our Postgres DB from Nagios:

        define service{
            use                     test-service    ; Name of servi$
            host_name               DEMOCGN002
            service_description     Postgres State
            check_command           check_nrpe!check_pgsql!192.168.1.135!test!test!test
            notifications_enabled   1
        }

    On the remote machine I've configured the command:

        command[check_pgsql]=/usr/lib/nagios/plugins/check_pgsql -H $ARG1$ -d $ARG2$ -l $ARG3$ -p $ARG4$

    In the syslog I can see that the command is executed, but only one argument is transmitted:

        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Running command: /usr/lib/nagios/plugins/check_pgsql -H 192.168.1.134 -d -l -p
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Command completed with return code 3 and output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Return Code: 3, Output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]

    Why are arguments 2, 3 and 4 missing?
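
    For context, this is how Nagios carves the check_command string into $ARGn$ macros, which is worth keeping in mind when reading the log above (a small illustration, not the fix itself):

        # How Nagios expands the check_command string into $ARGn$ macros:
        check_command = "check_nrpe!check_pgsql!192.168.1.135!test!test!test"

        parts = check_command.split("!")
        command, args = parts[0], parts[1:]

        for i, value in enumerate(args, start=1):
            print("$ARG%d$ = %s" % (i, value))
        # $ARG1$ = check_pgsql, $ARG2$ = 192.168.1.135, $ARG3$..$ARG5$ = test

    A common cause of the missing values is a local check_nrpe command definition that does not forward all of these after -a (for example check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$ $ARG3$ $ARG4$ $ARG5$); anything not forwarded expands to an empty string on the remote side, which is consistent with the "-H 192.168.1.134 -d -l -p" command in the log. The NRPE daemon must also accept arguments at all (dont_blame_nrpe=1).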

    Read the article

  • mysqld refusing connections from localhost

    - by Dennis Rardin
    My mail server (Ubuntu 10.04) uses MySQL for virtual domains and virtual users. For some reason, mysqld has started refusing connections from localhost. I see these in the mail server log:

        Oct 6 00:31:14 apollo postfix/trivial-rewrite[16888]: fatal: proxy:mysql:/etc/postfix/mysql-virtual_domains.cf(0,lock|fold_fix): table lookup problem

    and:

        Oct 7 13:39:15 apollo postfix/proxymap[25839]: warning: connect to mysql server 127.0.0.1: Lost connection to MySQL server at 'reading initial communication packet', system error: 0

    I also get the following in auth.log:

        Oct 6 22:33:31 apollo mysqld[31775]: refused connect from 127.0.0.1

    Telnet to the local port:

        root@apollo:/var/log/mysql# telnet localhost 3306
        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.
        root@apollo:/var/log/mysql#

    I am not sure why this started happening, but there was a disk failure in a RAID 1 pair a bit earlier that day, so it's possible I have a damaged config file or something. But mail was working for at least an hour after the drive event, so who knows for sure? phpMyAdmin works fine, and the databases themselves look like they're intact. I think/believe that selinux and iptables are disabled and not running.

    So... why is mysqld refusing connections from localhost? What should I check? What processes might cause this if a .conf file or possibly a binary was damaged? Which other log files might contain clues? I've enabled "general logging" in /etc/mysql/my.cnf, but I get no interesting or informative entries there.

    Thanks, m00tpoint
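
    A hedged, low-level probe that can narrow this down: connect to port 3306 directly and print whatever mysqld sends first. A healthy server replies immediately with its handshake greeting; a refusal usually shows up as a short error packet or an instant close, which is what the telnet transcript suggests:

        import socket

        s = socket.create_connection(("127.0.0.1", 3306), timeout=5)
        try:
            try:
                data = s.recv(256)          # handshake or error packet, if any
            except socket.timeout:
                data = None
                print("no data within 5 seconds")
            if data:
                print("server sent %d bytes:" % len(data))
                print(repr(data))           # a healthy server starts with a protocol-10 greeting
            elif data == b"":
                print("server closed the connection without sending anything")
        finally:
            s.close()

    If the reply mentions blocked hosts, look at max_connect_errors and flush the host cache; an instant silent close points more toward an access-control layer. In particular, the "refused connect from 127.0.0.1" wording in auth.log looks like the classic TCP-wrappers message, so /etc/hosts.allow and /etc/hosts.deny are worth checking alongside bind-address and skip-networking in my.cnf.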

    Read the article

  • Immediate logout after login with PAM, Kerberos, and LDAP

    - by Dylan Klomparens
    I've set up remote login on a computer using Kerberos and LDAP. I've also configured NFS to mount onto /home so that a user's home directory is the same wherever they log in. Kerberos authentication seems to work fine: I can get a ticket using kinit user1 (assuming user1 is a remote user) and see the ticket with klist. I'm pretty sure LDAP is working, because I see the proper output from getent passwd, which lists all the remote users. The contents of /home are present when I list the files. The problem is: when I try to log in as a remote user, the session is immediately ended. Why is it not letting me stay logged in?

    Here is the output from /var/log/messages after a login attempt:

        # /var/log/messages:
        Oct 9 10:57:53 tophat login[6472]: pam_krb5[6472]: authentication succeeds for 'user1' ([email protected])
        Oct 9 10:57:53 tophat login[6472]: pam_krb5[6472]: pam_setcred (establish credential) called
        Oct 9 10:57:53 tophat login[6472]: pam_krb5[6472]: pam_setcred (delete credential) called

    EDIT: The distro is openSUSE. Here are the common-* files in /etc/pam.d:

        # /etc/pam.d/common-account
        account required pam_unix.so

        # /etc/pam.d/common-auth
        auth sufficient pam_krb5.so minimum_uid=1000
        auth required pam_unix.so nullok_secure try_first_pass

        # /etc/pam.d/common-session
        session optional pam_umask.so umask=002
        session sufficient pam_krb5.so minimum_uid=1000
        session required pam_unix.so

    There doesn't appear to be a /var/log/auth.log file nor a /var/log/secure file.

    Read the article

  • ActiveMQ Configuration with KahaDB

    - by xeraa
    We are using ActiveMQ 5.6.0 with KahaDB. It has produced quite a few log files, which is to be expected with our infrastructure, looking like this:

        $ ll -h /opt/activemq/data/kahadb/
        total 969M
        drwxr-xr-x 2 root     root     4.0K Nov  3 12:47 ./
        drwxr-xr-x 3 activemq activemq 4.0K Sep 24 12:12 ../
        -rw-r--r-- 1 root     root      39M Oct 16 07:57 db-202.log
        -rw-r--r-- 1 root     root      38M Oct 16 07:57 db-203.log
        -rw-r--r-- 1 root     root      33M Oct 17 08:12 db-238.log
        ...

    No more messages were processed when we ran into the 1 GB temp usage limit. Or at least that is what we are assuming; is that correct? The configuration looks like this:

        <systemUsage>
          <systemUsage>
            <memoryUsage>
              <memoryUsage limit="512mb"/>
            </memoryUsage>
            <storeUsage>
              <storeUsage limit="3 gb"/>
            </storeUsage>
            <tempUsage>
              <tempUsage limit="1 gb"/>
            </tempUsage>
          </systemUsage>
        </systemUsage>

    After cleaning up the log files and being well below the limits, still no messages were consumed by AMQ. Only when we manually purged a route did messages start to be delivered again. So we need to ensure that the KahaDB log size always stays below the temp usage, right? And is the fact that delivery did not resume after fixing that a bug, or are there other steps to be taken?
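
    A quick way to see how close the KahaDB journal is to the configured ceilings is simply to total the db-*.log files. A small sketch using the path and limits from the post (whether temp usage really is what stalled the broker still needs the broker log or JMX to confirm; as a hedged note, the persistent KahaDB journal normally counts against storeUsage, while tempUsage covers non-persistent messages spooled to disk):

        import glob
        import os

        KAHADB_DIR = "/opt/activemq/data/kahadb"
        STORE_LIMIT = 3 * 1024 ** 3      # storeUsage limit from activemq.xml
        TEMP_LIMIT = 1 * 1024 ** 3       # tempUsage limit from activemq.xml

        journal_bytes = sum(os.path.getsize(p)
                            for p in glob.glob(os.path.join(KAHADB_DIR, "db-*.log")))

        print("journal size: %.2f GB" % (journal_bytes / float(1024 ** 3)))
        print("store limit : %.2f GB" % (STORE_LIMIT / float(1024 ** 3)))
        print("temp limit  : %.2f GB" % (TEMP_LIMIT / float(1024 ** 3)))
        if journal_bytes >= STORE_LIMIT:
            print("journal has reached the storeUsage limit; producers will be throttled")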

    Read the article

  • Symlink path can be followed manually, but `cd` returns Permission denied

    - by Ricket
    I am trying to access the directory /usr/software/test/agnostic. There are several symlinks involved in this path. As you can see from the transcript below, I am unable to cd directly to the path, but I can check each step of the way and cd to the symlinked directories until I reach the destination. Why is this? (And how do I fix it?) Ubuntu 12.10, bash.

        > ls /usr/software/test/agnostic
        ls: cannot access /usr/software/test/agnostic: Permission denied
        > cd /usr/software/test
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > pwd -P
        /x/eng/localtest/arch/x86_64-redhat-rhel5
        > ls -al | grep agnostic
        lrwxrwxrwx 1 root root 15 Oct 23  2007 agnostic -> noarch/agnostic
        > ls -al | grep noarch
        ...
        lrwxrwxrwx 1 root root 23 Oct 23  2007 noarch -> /x/eng/localtest/noarch
        > cd noarch
        > cd agnostic
        bash: cd: agnostic: Permission denied
        > ls -al | grep agnostic
        lrwxrwxrwx 1 5808 dip 4 Oct  5  2010 agnostic -> main
        > cd main
        > ls
        (correct output of `ls`)
        > pwd
        /usr/software/test/noarch/main
        > pwd -P
        /x/eng/localtest/noarch/main
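
    A hedged way to find the component that actually denies access is to walk both the literal path and its fully resolved target, printing owner, group and mode for each step; cd needs the execute (search) bit on every directory in the resolved chain, and that chain differs depending on which symlinks you happen to go through:

        import os
        import stat

        PATH = "/usr/software/test/agnostic"

        def walk(path):
            # Print ownership and mode for every component of the path.
            parts = path.strip("/").split("/")
            current = "/"
            for part in parts:
                current = os.path.join(current, part)
                try:
                    st = os.lstat(current)
                except OSError as err:
                    print(current, "->", err)
                    return
                kind = "link -> %s" % os.readlink(current) if stat.S_ISLNK(st.st_mode) else "dir/file"
                print("%s  mode=%o uid=%d gid=%d  (%s)" % (
                    current, stat.S_IMODE(st.st_mode), st.st_uid, st.st_gid, kind))

        walk(PATH)
        walk(os.path.realpath(PATH))   # repeat for the fully resolved target

    Wherever the modes or group ownership change along the resolved chain (here, somewhere under /x/eng/localtest/...) is the place to look for a directory your user cannot search, which would produce exactly this "Permission denied".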

    Read the article

  • Convert GMT time to local time

    - by Dibish
    I am getting a GMT time from my server in this format:

        Fri, 18 Oct 2013 11:38:23 GMT

    My requirement is to convert this time to local time using JavaScript. For example, if the user is from India, I first need to take the time zone offset (+5.30), add it to my server time, and convert the time string to the following format:

        2013-10-18 16:37:06

    I tried the following code, but it is not working:

        var date = new Date('Fri, 18 Oct 2013 11:38:23 GMT');
        date.toString();

    Please help me to solve this issue. Thanks in advance.
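
    The underlying recipe is: parse the string as UTC, convert it to the local zone, then reformat. Sketched here in Python for illustration only (the asker wants JavaScript; the steps are the same):

        from datetime import datetime
        from email.utils import parsedate_to_datetime   # understands RFC 1123 strings (Python 3.3+)

        server_time = "Fri, 18 Oct 2013 11:38:23 GMT"

        utc_dt = parsedate_to_datetime(server_time)      # timezone-aware, UTC
        local_dt = utc_dt.astimezone()                   # convert to the machine's local zone
        print(local_dt.strftime("%Y-%m-%d %H:%M:%S"))    # e.g. 2013-10-18 17:08:23 for IST (+5:30)

    In the browser, new Date('Fri, 18 Oct 2013 11:38:23 GMT') already represents the right instant; the missing piece is formatting it in local time with the local getters (getFullYear(), getMonth(), ...) or toLocaleString() rather than relying on toString().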

    Read the article

  • SQL Oracle Combining Multiple Results Rows

    - by Stuav
    I have the below query:

        Select
            case upper(device_model)
                when 'IPHONE' then 'iOS - iPhone'
                when 'IPAD' then 'iOS - iPad'
                when 'IPOD TOUCH' then 'iOS - iPod Touch'
                Else 'Android'
            End As Device_Model,
            count(create_dtime) as Installs_Oct17_Oct30
        From Player
        Where Create_Dtime >= To_Date('2012-Oct-17','yyyy-mon-dd')
          And Create_Dtime <= To_Date('2012-Oct-30','yyyy-mon-dd')
        Group By Device_Model
        Order By Device_Model

    This spits out multiple result rows that read "Android". I would like there to be only 4 result rows, one for each case, so it comes out like this:

        Device_Model       Installs_Oct17_Oct30
        Android            987
        iOS - iPad         12003
        iOS - iPhone       8563
        iOS - iPod Touch   3482
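
    A hedged reading of what happens: GROUP BY Device_Model binds to the raw column, not to the CASE alias, so every distinct Android model keeps its own group and only its label is rewritten afterwards; grouping by the same CASE expression (or wrapping the query and grouping in an outer select) collapses them to four rows. The map-then-group idea, sketched with made-up device_model values:

        from collections import Counter

        installs = ["IPHONE", "GT-I9300", "IPAD", "NEXUS 4", "IPHONE"]   # hypothetical device_model values

        def bucket(model):
            return {
                "IPHONE": "iOS - iPhone",
                "IPAD": "iOS - iPad",
                "IPOD TOUCH": "iOS - iPod Touch",
            }.get(model.upper(), "Android")

        # group after mapping: one row per bucket (the desired result)
        print(Counter(bucket(m) for m in installs))
        # group before mapping (what the posted query does): one row per raw model,
        # several of which are then all labelled "Android"
        print(Counter(installs))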

    Read the article

  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled including gcc,g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use.  The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc is not part of this example code. Let's start by looking at the most basic implementation of our plugin code as seen below: /*    Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.    Author:  Jonathon Coombes    Licence: GPL    Description: An auditing plugin that logs to syslog and                 can adjust the loglevel via the system variables. */ #include <stdio.h> #include <string.h> #include <mysql/plugin_audit.h> #include <syslog.h> There is a commented header detailing copyright/licencing and meta-data information and then the include headers. The two important include statements for our plugin are the syslog.h plugin, which gives us the structures for syslog, and the plugin_audit.h include which has details regarding the audit specific plugin api. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current implementation we need to add it into our source code and compile. > cd /usr/local/src/mysql-5.5.28/plugin > mkdir audit_syslog > cd audit_syslog A simple CMakeLists.txt file is created to manage the plugin compilation: MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY) Run the cmake  command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as there is no level of api defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin and be able to install/uninstall it and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.  
/*   Plugin library descriptor */ mysql_declare_plugin(audit_syslog) {   MYSQL_AUDIT_PLUGIN,           /* plugin type                    */   &audit_syslog_descriptor,     /* descriptor handle               */   "audit_syslog",               /* plugin name                     */   "Author Name",                /* author                          */   "Simple Syslog Audit",        /* description                     */   PLUGIN_LICENSE_GPL,           /* licence                         */   audit_syslog_init,            /* init function     */   audit_syslog_deinit,          /* deinit function */   0x0001,                       /* plugin version                  */   NULL,                         /* status variables        */   NULL,                         /* system variables                */   NULL,                         /* no reserves                     */   0,                            /* no flags                        */ } mysql_declare_plugin_end; The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata information. The descriptors have an internally recognised version number so that plugins can be matched against the api on the running server. The other details are usually related to the type-specific methods and structures to implement the plugin. Each plugin has a type-specific descriptor as well which details how the plugin is implemented for the specific purpose of that plugin type. /*   Plugin type-specific descriptor */ static struct st_mysql_audit audit_syslog_descriptor= {   MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */   NULL,                                                 /* release_thd function */   audit_syslog_notify,                                  /* notify function      */   { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |                     MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */ }; In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function which is activated when an event occurs on the system. The notify function is designed to activate on an event and the implementation will determine how it is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event: 1. General Events: This includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database. 2. Connection Events: This group is based around user logins. It monitors connections and disconnections, but also if somebody changes user while connected. With most audit plugins, the principle behind the plugin is to track changes to the system over time and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin - the # of general events, the # of connection events and the total number of events.  
static volatile int total_number_of_calls; /* Count MYSQL_AUDIT_GENERAL_CLASS event instances */ static volatile int number_of_calls_general; /* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */ static volatile int number_of_calls_connection; The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin: /*  Initialize the plugin at server start or plugin installation. */ static int audit_syslog_init(void *arg __attribute__((unused))) {     openlog("mysql_audit:",LOG_PID|LOG_PERROR|LOG_CONS,LOG_USER);     total_number_of_calls= 0;     number_of_calls_general= 0;     number_of_calls_connection= 0;     return(0); } The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags and the facility for the logging. Then each of the counters are initialised to zero and a success is returned. If the init function is not defined, it will return success by default. /*  Terminate the plugin at server shutdown or plugin deinstallation. */ static int audit_syslog_deinit(void *arg __attribute__((unused))) {     closelog();     return(0); } The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external factors.  The function names are what we define in the general plugin structure, so these have to match otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type specific descriptor (audit_syslog_descriptor) which is audit_syslog_notify. /* Event notifier function */ static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)), unsigned int event_class, const void *event) { total_number_of_calls++; if (event_class == MYSQL_AUDIT_GENERAL_CLASS) { const struct mysql_event_general *event_general= (const struct mysql_event_general *) event; number_of_calls_general++; syslog(audit_loglevel,"%lu: User: %s Command: %s Query: %s\n", event_general->general_thread_id, event_general->general_user, event_general->general_command, event_general->general_query ); } else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS) { const struct mysql_event_connection *event_connection= (const struct mysql_event_connection *) event; number_of_calls_connection++; syslog(audit_loglevel,"%lu: User: %s@%s[%s] Event: %d Status: %d\n", event_connection->thread_id, event_connection->user, event_connection->host, event_connection->ip, event_connection->event_subclass, event_connection->status ); } }   In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database.The event argument is then cast into the appropriate event structure depending on the class type, of general event or connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc. On compiling the code now, there should be no errors and the resulting audit_syslog.so can be loaded into the server and ready to use. 
Log into the server and type: mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'; This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNISTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following: Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1 Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1 It appears that two of each event is being shown, but in actuality, these are two separate event types - the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.  So far, it seems that the logging is working with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible when the plugin is being run. Instead there needs to be a way to expose the plugin specific information to the service and vice versa. This could be done via the information_schema plugin api, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration: /*  Plugin status variables for SHOW STATUS */ static struct st_mysql_show_var audit_syslog_status[]= {   { "Audit_syslog_total_calls",     (char *) &total_number_of_calls,     SHOW_INT },   { "Audit_syslog_general_events",     (char *) &number_of_calls_general,     SHOW_INT },   { "Audit_syslog_connection_events",     (char *) &number_of_calls_connection,     SHOW_INT },   { 0, 0, SHOW_INT } };   The structure is simply the name that will be displaying in the mysql service, the address of the associated variables, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name for variables from other plugin, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following: mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 2     | | Audit_syslog_total_calls       | 3     | +--------------------------------+-------+ 3 rows in set (0.00 sec) The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system. 
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface and defining a single variable to keep track of the active logging level for the facility. /* Plugin system variables for SHOW VARIABLES */ static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,                         PLUGIN_VAR_RQCMDARG,                         "User can specify the log level for auditing",                         audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE"); static struct st_mysql_sys_var* audit_syslog_sysvars[] = {     MYSQL_SYSVAR(loglevel),     NULL }; So now the system variable 'loglevel' is defined for the plugin and associated to the global variable 'audit_loglevel'. The check or validation function is defined to make sure that no garbage values are attempted in the update of the variable. The update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is defined in the general plugin descriptor to associate the link between the plugin and the system and how much they interact. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric such as integers for the variable types, the validate function is often not required as MySQL will handle the automatic check and validation of simple types. /* longest valid value */ #define MAX_LOGLEVEL_SIZE 100 /* hold the valid values */ static const char *possible_modes[]= { "LOG_ERROR", "LOG_WARNING", "LOG_NOTICE", NULL };  static int audit_loglevel_check(     THD*                        thd,    /*!< in: thread handle */     struct st_mysql_sys_var*    var,    /*!< in: pointer to system                                         variable */     void*                       save,   /*!< out: immediate result                                         for update function */     struct st_mysql_value*      value)  /*!< in: incoming string */ {     char buff[MAX_LOGLEVEL_SIZE];     const char *str;     const char **found;     int length;     length= sizeof(buff);     if (!(str= value->val_str(value, buff, &length)))         return 1;     /*         We need to return a pointer to a locally allocated value in "save".         Here we pick to search for the supplied value in an global array of         constant strings and return a pointer to one of them.         The other possiblity is to use the thd_alloc() function to allocate         a thread local buffer instead of the global constants.     */     for (found= possible_modes; *found; found++)     {         if (!strcmp(*found, str))         {             *(const char**)save= *found;             return 0;         }     }     return 1; } The validation function is simply to take the value being passed in via the SET GLOBAL VARIABLE command and check if it is one of the pre-defined values allowed  in our possible_values array. If it is found to be valid, then the value is assigned to the save variable ready for passing through to the update function. 
static void audit_loglevel_update(     THD*                        thd,        /*!< in: thread handle */     struct st_mysql_sys_var*    var,        /*!< in: system variable                                             being altered */     void*                       var_ptr,    /*!< out: pointer to                                             dynamic variable */     const void*                 save)       /*!< in: pointer to                                             temporary storage */ {     /* assign the new value so that the server can read it */     *(char **) var_ptr= *(char **) save;     /* assign the new value to the internal variable */     audit_loglevel= *(char **) save; } Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows: mysql> show global variables like "audit%"; +-----------------------+------------+ | Variable_name         | Value      | +-----------------------+------------+ | audit_syslog_loglevel | LOG_NOTICE | +-----------------------+------------+ 1 row in set (0.00 sec) mysql> set global audit_syslog_loglevel="LOG_ERROR"; Query OK, 0 rows affected (0.00 sec) mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 11    | | Audit_syslog_total_calls       | 12    | +--------------------------------+-------+ 3 rows in set (0.00 sec) mysql> show global variables like "audit%"; +-----------------------+-----------+ | Variable_name         | Value     | +-----------------------+-----------+ | audit_syslog_loglevel | LOG_ERROR | +-----------------------+-----------+ 1 row in set (0.00 sec)   So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details and provides a mechanism to change the logging level interactively via the standard system methods of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved and simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.

    Read the article

  • Proactive Support Sessions at OUG London and OUG Ireland

    - by THE
    Oracle Proactive Support Technology is proud to announce that two members of its team will be speaking at the UK and Ireland User Group Conferences this year. Maurice and Greg plan to run the following sessions (may be subject to change): Maurice Bauhahn OUG Ireland BI & EPM and Technology Joint SIG Meeting 20 November 2012 BI&EPM SIG event in Ireland (09:00-17:00) and OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 Profit from Oracle Diagnostic Tools Embedded in EPM Oracle bundles in many of its software suites valuable toolsets to collect logs and settings, slice/dice error messages, track performance, and trace activities across services. Become familiar with several enterprise-level diagnostic tools embedded in Enterprise Performance Management (Enterprise Manager Fusion Middleware Control, Remote Diagnostic Agent, Dynamic Monitoring Service, and Oracle Diagnostic Framework). Expedite resolution of Service Requests as you learn to upload output from these tools to My Oracle Support. Who will benefit from attending the session? Geeks will find this most beneficial, but anyone who raises Oracle technical service requests will learn valuable pointers that may speed resolution. The focus is on the EPM stack, but this session will benefit almost everyone who needs to drill deeper into Oracle software environments. What will delegates learn from the session? Delegates who participate in this session will learn: How to access and run Remote Diagnostic Agent, Enterprise Manager Fusion Middleware Control, Dynamic Monitoring Service, and Oracle Diagnostic Framework. How to exploit the strengths of each tool. How to pass the outputs to My Oracle Support. How to restrict exposure of sensitive information. OUG Ireland BI & EPM and Technology Joint SIG Meeting 20 November 2012 BI&EPM SIG event in Ireland (09:00-17:00) and OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 Using EPM-Specific Troubleshooting Tools EPM developers have created a number of EPM-specific tools to collect logs and configuration files, centralize configuration information, and validate a configured installation (Ziplogs, EPM Registry Editor, [Deployment Report, Registry Cleanup Utility, Reset Configuration Tool, EPMSYS Hostname Check] and Validate [EPM System Diagnostic]). Learn how to use these tools on your own or to expedite Service Request resolution. Who will benefit from attending the session? Anyone who monitors Oracle EPM environments or raises service requests will learn valuable lessons that could speed resolution of those requests. Anyone from novices to experts will benefit from this review of custom troubleshooting EPM tools. What will delegates learn from the session? Learn where to locate and start EPM troubleshooting tools created by EPM developers Learn how to collect and upload outputs of EPM troubleshooting tools. Adapt to history of changes in these tools across time and version. Learn how to make critical changes in configurations. Grzegorz Reizer OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 EPM 11.1.2.2: Detailed overview of new features and improvements in Financial Management products. 
This presentation is a detailed overview of new features and improvements introduced in Enterprise Performance Management 11.1.2.2 for Financial Management products (Hyperion Financial Managment, Hyperion Planning, Financial Close Management). The presentation will cover a number of new product features from recently introduced configurable dimensionality in HFM to new functionality enhancements in Planning. We'll close the session with an overview of upgrade options from earlier product releases.

    Read the article

  • Oracle OpenWorld 2012: The Best Just Gets Better

    - by kellsey.ruppel
    For almost 30 years, Oracle OpenWorld has been the world's premier learning event for Oracle customers, developers, and partners. With more than 2,000 sessions providing best practices; demos; tips and tricks; and product insight from Oracle, customers, partners, and industry experts, Oracle OpenWorld provides more educational and networking opportunities than any other event in the world. 2011 Facts Attendees from 117 Countries Used Filtered Tap Water to Eliminate 22 Tons of Plastic Bottles Diverted Enough Trash to Fill 37 Dump Trucks 45,000+ Total Registered Attendees Oracle OpenWorld 2012: The Best Just Gets Better What's New? What's Different?  This year Oracle OpenWorld will include the Executive Edge @ OpenWorld (replacing Leaders Circle), the Customer Experience Summit @ OpenWorld, JavaOne, MySQL Connect, and the expanded Oracle PartnerNetwork Exchange @ OpenWorld. More than 50,000 customers and partners will attend OpenWorld to see Oracle's newest hardware and software products at work, and learn more about our server and storage, database, middleware, industry, and applications solutions.  New This Year: The Executive Edge @ Oracle OpenWorld (Oct 1 - 2) New at Oracle OpenWorld this year, the Executive Edge @ OpenWorld (replacing Leaders Circle) will bring together customer, partner and Oracle executives for two days of keynote presentations, summits targeted to customer industries and organizational roles, roundtable discussions, and great new networking opportunities. The Customer Experience Revolution Is Here!Customer Experience Summit @ Oracle OpenWorld (Oct 3 - 5) This dynamic new program offers more than 60 keynotes, roundtables and networking sessions exploring trends, innovations and best practices to help companies succeed with a customer experience-driven business strategy.  All Things Java -- JavaOne (Sep 30 - Oct 4) JavaOne is the world's most important event for the Java developer community. Technical sessions cover topics that span the breadth of the Java universe, with keynotes from the foremost Java visionaries and expert-led hands-on learning opportunities.  Are you innovating with Oracle Fusion Middleware?  If you are, then you need to know that the Call for Nominations for the 2012 Oracle Fusion Middleware Innovation Awards is open now through July 17, 2012. Jointly sponsored by Oracle, AUSOUG, IOUG, OAUG, ODTUG, QUEST, and UKOUG, the Oracle Fusion Middleware Innovation Awards honor organizations creatively using Oracle Fusion Middleware to deliver unique value to their enterprise.  Winning customers and partners will be hosted at Oracle OpenWorld 2012, where they can connect with Oracle executives, network with peers, and be featured in an upcoming edition of Oracle Magazine. Be sure to submit your WebCenter use case today! Oracle Music Festival his year, the first-ever Oracle Music Festival will debut, running from September 30 to October 4. In the tradition of great live music events like Coachella and SXSW, the streets of San Francisco—from 7:00 p.m. to 1:00 a.m. for five nights-into-days—will vibrate with the music of some of today’s hottest name acts, emerging and local bands, and scratching DJs. Outdoor venues and clubs near Moscone Center and the Zone (including 111 Minna, DNA, Mezzanine, Roe, Ruby Skye, Slim’s, the Taylor Street Café, Temple, Union Square, and Yerba Buena Gardens) will showcase acts that range from reggae to rock, punk to ska, R&B to country, indie to honky-tonk. 
After a full day of sessions and networking, you'll be primed for some late-night relaxation and rocking out at one or more of these sets.  Please note that with awesome acts, thousands of music devotees, and a limited number of venues each night, access to Festival events is on a first-come, first-served basis. Join us at the Oracle Music Festival--it's going to be epic! Save $500 on Registration with Early Bird Pricing Early Bird pricing ends July 13! Save up to $500 on registration fees by registering by Friday. Will you be attending Oracle OpenWorld 2012? We hope to see you there! Be sure to follow @oraclewebcenter on Twitter for more information and use hashtags #webcenter and #oow!

    Read the article

  • ??AMDU?????MOUNT?DISKGROUP???????

    - by Liu Maclean(???)
    AMDU?ORACLE??ASM??????????,????ASM Metadata Dump Utility(AMDU) AMDU??????????: 1. ?ASM DISK?????????????????2. ?ASM?????????????OS????,Diskgroup??mount??3. ????????,???C?????16????? ?????????AMDU??ASM DISKGROUP??????; ASM???????????????, ?????????????,?????????ASM????? ??DISKGROUP??MOUNT????????????????????????? AMDU???????, ????????ASM DISKGROUP ??MOUNT???????,???RDBMS?????ASM??????? ?? AMDU???11g??????,?????10g?ASM ???? ???????????, ORACLE DATABASE?SPFILE?CONTROLFILE?DATAFILE????ASM DISKGROUP?,?????ASM ORA-600??????MOUNT?DISKGROUP, ???????AMDU??????ASM DISK?????? ?? 1 ??? ??SPFILE?CONTROLFILE?DATAFILE ????: ???????SPFILE ,????SPFILE??PFILE???,?????????????control_files??? SQL> show parameter control_files NAME TYPE VALUE———————————— ———– ——————————control_files string +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955 ??control_files ?????ASM???????????,+DATA/prodb/controlfile/current.260.794687955 ?? 260????????+DATA ??DISKGROUP??FILE NUMBER ???????ASM DISK?DISCOVERY PATH??,??????ASM?SPFILE??asm_diskstring ???? [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zipArchive: amdu_X86-64.zipinflating: libskgxp11.soinflating: amduinflating: libnnz11.soinflating: libclntsh.so.11.1 [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./ [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring ‘/dev/asm*’ -extract data.260amdu_2009_10_10_20_19_17/AMDU-00204: Disk N0006 is in currently mounted diskgroup DATAAMDU-00201: Disk N0006: ‘/dev/asm-disk10'AMDU-00204: Disk N0003 is in currently mounted diskgroup DATAAMDU-00201: Disk N0003: ‘/dev/asm-disk5'AMDU-00204: Disk N0002 is in currently mounted diskgroup DATAAMDU-00201: Disk N0002: ‘/dev/asm-disk6' [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/[oracle@mlab2 amdu_2009_10_10_20_19_17]$ lsDATA_260.f report.txt[oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -ltotal 9548-rw-r–r– 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f-rw-r–r– 1 oracle oinstall 9441 Oct 10 20:19 report.txt ???????DATA_260.f ??????,?????????startup mount RDBMS??: SQL> alter system set control_files=’/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f’ scope=spfile; System altered. SQL> startup force mount;ORACLE instance started. Total System Global Area 1870647296 bytesFixed Size 2229424 bytesVariable Size 452987728 bytesDatabase Buffers 1409286144 bytesRedo Buffers 6144000 bytesDatabase mounted. SQL> select name from v$datafile; NAME——————————————————————————–+DATA/prodb/datafile/system.256.794687873+DATA/prodb/datafile/sysaux.257.794687875+DATA/prodb/datafile/undotbs1.258.794687875+DATA/prodb/datafile/users.259.794687875+DATA/prodb/datafile/example.265.794687995+DATA/prodb/datafile/mactbs.267.794688457 6 rows selected. startup mount???,???v$datafile????????,????????DISKGROUP??FILE NUMBER ???./amdu -diskstring ‘/dev/asm*’ -extract ???? ??????????? 
[oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring ‘/dev/asm*’ -extract data.256amdu_2009_10_10_20_22_21/AMDU-00204: Disk N0006 is in currently mounted diskgroup DATAAMDU-00201: Disk N0006: ‘/dev/asm-disk10'AMDU-00204: Disk N0003 is in currently mounted diskgroup DATAAMDU-00201: Disk N0003: ‘/dev/asm-disk5'AMDU-00204: Disk N0002 is in currently mounted diskgroup DATAAMDU-00201: Disk N0002: ‘/dev/asm-disk6' [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/[oracle@mlab2 amdu_2009_10_10_20_22_21]$ lsDATA_256.f report.txt[oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f DBVERIFY: Release 11.2.0.3.0 – Production on Sat Oct 10 20:23:12 2009 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. DBVERIFY – Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f DBVERIFY – Verification complete Total Pages Examined : 90880Total Pages Processed (Data) : 59817Total Pages Failing (Data) : 0Total Pages Processed (Index): 12609Total Pages Failing (Index): 0Total Pages Processed (Other): 3637Total Pages Processed (Seg) : 1Total Pages Failing (Seg) : 0Total Pages Empty : 14817Total Pages Marked Corrupt : 0Total Pages Influx : 0Total Pages Encrypted : 0Highest block SCN : 1125305 (0.1125305)

    Read the article

  • How to execute python script on the BaseHTTPSERVER created by python?

    - by user1731699
    I have simply created a Python server with: python -m SimpleHTTPServer I had a .htaccess (I don't know if it is useful with a Python server) with: AddHandler cgi-script .py Options +ExecCGI Now I am writing a simple Python script: #!/usr/bin/python import cgitb cgitb.enable() print 'Content-type: text/html' print ''' <html> <head> <title>My website</title> </head> <body> <p>Here I am</p> </body> </html> ''' I make test.py (the name of my script) executable with: chmod +x test.py I open it in Firefox at this address: (http://) 0.0.0.0:8000/test.py Problem: the script is not executed... I see the code in the web page... And the server log is: localhost - - [25/Oct/2012 10:47:12] "GET / HTTP/1.1" 200 - localhost - - [25/Oct/2012 10:47:13] code 404, message File not found localhost - - [25/Oct/2012 10:47:13] "GET /favicon.ico HTTP/1.1" 404 - How can I get the Python code executed, simply? Is it possible to write a Python server that executes the Python script, with something like this: import BaseHTTPServer import CGIHTTPServer httpd = BaseHTTPServer.HTTPServer(\ ('localhost', 8123), \ CGIHTTPServer.CGIHTTPRequestHandler) ### here some code to say, hey please execute the python script on the webserver... ;-) httpd.serve_forever() Or something else...
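    SimpleHTTPServer only serves files, so it sends test.py back as source, and the .htaccess file has no effect on Python's built-in servers. CGIHTTPServer (the module behind the handler in the question) executes scripts placed under /cgi-bin by default, so it is usually enough to move the script there. A minimal sketch, assuming Python 2 as in the question and the test.py from above:

    # Minimal sketch, assuming Python 2 and the test.py script from the question.
    mkdir -p cgi-bin
    mv test.py cgi-bin/
    chmod +x cgi-bin/test.py

    # CGIHTTPServer treats /cgi-bin (and /htbin) as CGI directories by default,
    # so scripts there are executed instead of being sent back as source code.
    python -m CGIHTTPServer 8000 &
    sleep 1

    # The script should now run server-side:
    curl http://localhost:8000/cgi-bin/test.py

    The BaseHTTPServer/CGIHTTPRequestHandler snippet from the question works the same way once the script sits in a cgi-bin directory next to the server, because CGIHTTPRequestHandler's default cgi_directories are /cgi-bin and /htbin.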

    Read the article

  • Stable reverse port forwarding in SSH and stale sessions

    - by Vi
    Using a VPS to forward ports behind NAT: for((;;)) { ssh -R 2222:127.0.0.1:22 [email protected]; sleep 10; } When the connection breaks somehow, it reconnects, but then I get: Warning: remote port forwarding failed for listen port 2222 Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 So I type: vi@vi-server:~$ killall sshd Connection to vi-server.org closed by remote host. Connection to vi-server.org closed. Linux vi-server.no-ip.org 2.6.18-92.1.13.el5.028stab059.3 #1 SMP Wed Oct 15 13:33:44 MSD 2008 i686 vi@vi-server:~$ Now it's OK. What is the simplest way to make this automatic?
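    The stale session is the old sshd still holding port 2222 on the server, which is why the fresh connection prints the forwarding warning. One common approach is to make the client exit as soon as the forwarding fails or the connection goes quiet, so the loop retries cleanly instead of sitting behind a dead tunnel. A sketch assuming the host and port from the question and a reasonably recent OpenSSH:

    #!/bin/bash
    # Sketch: keep the reverse tunnel alive without piling up stale sessions.
    # ExitOnForwardFailure makes ssh give up immediately when port 2222 on the
    # server is still occupied, and the ServerAlive options make it notice a
    # broken connection within about 90 seconds instead of hanging forever.
    while true; do
        ssh -N \
            -o ExitOnForwardFailure=yes \
            -o ServerAliveInterval=30 \
            -o ServerAliveCountMax=3 \
            -R 2222:127.0.0.1:22 vi@vi-server.org
        sleep 10
    done

    On the server side, setting ClientAliveInterval/ClientAliveCountMax in sshd_config (or wrapping the client in autossh) makes the dead listener on 2222 disappear on its own, without the manual killall sshd.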

    Read the article

  • svn 503 error when commit new files

    - by philipp
    I am struggling with a strange error when I try to commit to my repository. I have a V-server with Webmin installed on it. Via Webmin I installed an svn module, created repositories, and everything worked fine until three days ago. Trying to commit brings the following error: Commit failed (details follow): Server sent unexpected return value (503 Service unavailable) in response to PROPFIND request for '/svn/rle/!svn/wrk/a1f963a7-0a33-fa48-bfde-183ea06ab958/RLE/.htaccess' Server sent unexpected return value (503 Service unavailable) in response to PROPFIND request for '/svn/rle/RLE/.htaccess' I googled everywhere and found only a very few solutions. One indicated that a wrong error document was set, another dealt with the possibility that file names might cause this error, and last but not least a wrong proxy configuration in the local svn config could be the reason. After trying all of the suggested solutions I got nowhere. Only after a server reboot was there a small difference in the error message, telling me that the server was not able to move a temp file because the operation was not permitted. So I also checked the permissions of the svn directory, but again with no success. An svn update then restored the "normal" error from above, and nothing has changed since then. The only change I made on the server (and I guess this could be the reason why svn does not work anymore) was to install the php5_mysql module for Apache via apt-get install php5_mysql. At the moment I have absolutely no idea where to look. I don't know if the problem is on my server or in my repository, and I would be glad to get any hint to solve this. Thanks in advance. Greetings, philipp. Error log: [Tue Oct 25 19:23:02 2011] [error] [client 217.50.254.18] Could not create activity /svn/rle/!svn/act/d8dd436f-d014-f047-8e87-01baac46a593. [500, #0] [Tue Oct 25 19:23:02 2011] [error] [client 217.50.254.18] could not begin a transaction [500, #1] [Tue Oct 25 19:24:21 2011] [error] [client 217.50.254.18] Could not create activity /svn/rle/!svn/act/adac52c2-6f46-f540-b218-2f2ff03b51a4. [500, #0] http.conf: <Location /svn> DAV svn SVNParentPath /home/xxx/svn AuthType Basic AuthName xxx.de AuthUserFile /home/xxx/etc/svn.basic.passwd Require valid-user AuthzSVNAccessFile /home/xxx/etc/svn-access.conf Satisfy Any ErrorDocument 404 default RewriteEngine off </Location> The permissions for the repository directory are rwxrwxrwx (0777). The directory /svn/rle/!svn/act/adac52c2-6f46-f540-b218-2f2ff03b51a4 does not exist on the server; I think this is part of the repository. I also want to add that I tried to reach the repository via browser and it worked, I could see everything, so the error only occurs when I try to commit new files. I also created a second repository and tried to commit files in there, which gave me the same error.
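    Given that the suspects are a proxy setting in the local Subversion config, repository permissions, or the Apache setup being disturbed by the php5_mysql install, a quick hedged checklist like the one below rules them out one by one. It assumes a Debian-style Apache running as www-data and the repository path implied by the Webmin setup; both are assumptions and need adjusting to the real system.

    # --- on the client: make sure no stray proxy answers the PROPFIND requests ---
    grep -n "http-proxy" ~/.subversion/servers    # comment out any http-proxy-* lines found

    # --- on the server: the repository must be writable by the Apache user ---
    # (assumed user www-data and path /home/xxx/svn/rle; adjust to the real ones)
    sudo chown -R www-data:www-data /home/xxx/svn/rle
    sudo chmod -R u+rwX /home/xxx/svn/rle

    # sanity-check the repository itself
    sudo svnadmin verify /home/xxx/svn/rle

    # and confirm mod_dav_svn is still loaded after the php5_mysql install
    apache2ctl -M | grep dav_svn
    sudo tail -n 50 /var/log/apache2/error.log
    sudo service apache2 restart

    If svnadmin verify is clean and the commit still fails, the Apache error log entry written at the moment of the commit usually names the file or directory it could not create, which narrows it down to either permissions or the module configuration.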

    Read the article

  • httpd not starting with systemd on F17

    - by malfy
    Title says it all. This is a fresh Fedora 17 system running on a Xen hypervisor. No idea why it won't start [root@box~]  uname -a Linux box.localhost 3.5.4-2.fc17.i686.PAE #1 SMP Wed Sep 26 22:10:23 UTC 2012 i686 i686 i386 GNU/Linux [root@box~]  cat /etc/redhat-release Fedora release 17 (Beefy Miracle) [root@box~]  systemctl enable httpd.service ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service' [root@box~]  systemctl start httpd.service Job failed. See system journal and 'systemctl status' for details. [root@box~]  systemctl status httpd.service httpd.service - The Apache HTTP Server (prefork MPM) Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled) Active: failed (Result: exit-code) since Fri, 19 Oct 2012 22:43:37 -0500; 3s ago Process: 18225 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=226/NAMESPACE) CGroup: name=systemd:/system/httpd.service Oct 19 22:43:37 box.localhost systemd[18225]: Failed at step NAMESPACE spawning /usr/sbin/httpd: No such file or directory [root@box~]  ls -al /usr/sbin/httpd -rwxr-xr-x 1 root root 343496 Apr 30 04:56 /usr/sbin/httpd
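    status=226/NAMESPACE means systemd failed while setting up the unit's private namespace, not that /usr/sbin/httpd is missing; Fedora's httpd.service typically runs with PrivateTmp=true, and that setup can fail on some virtualized guests when /tmp or /var/tmp is missing or mounted oddly. A hedged way to confirm this (the unit path matches the status output above; everything else is an assumption to test, not a definitive fix):

    # Do /tmp and /var/tmp look like ordinary writable directories?
    ls -ld /tmp /var/tmp
    mount | grep -E ' /tmp | /var/tmp '

    # Test whether PrivateTmp is the culprit: copy the unit into /etc (older
    # systemd releases such as F17's lack drop-in override directories, and a
    # copy in /etc/systemd/system takes precedence over /usr/lib) and switch
    # PrivateTmp off in the copy.
    cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
    sed -i 's/^PrivateTmp=true/PrivateTmp=false/' /etc/systemd/system/httpd.service

    systemctl daemon-reload
    systemctl start httpd.service
    systemctl status httpd.service

    If httpd starts with the copy in place, the real problem is the /tmp setup on the guest; if it still fails the same way, remove the copy again (rm /etc/systemd/system/httpd.service; systemctl daemon-reload) and look further.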

    Read the article

  • List files with last access date in linux

    - by kayaker243
    I'd like to clean up a server that my webmaster let turn into a mess. I know how to list all files not accessed within the last x days using find and -atime, but what I'm looking for is to come up with a listing of the last access date for files one level down in directory /foo: /foo/bar1.txt Dec 11, 2001 /foo/bar2.txt Nov 12, 2008 /foo/bar3.txt Jan 12, 2004 For folders one level down in directory /foo, list the date of the most recently accessed file within the directory (no limit on depth for identifying last access date) /foo/bar1/ Feb 13, 2012 /foo/bar2/ Oct 11, 2008 Where /foo/bar1/ has a file modified Jan 1, 1998 and Feb 13, 2012 and /foo/bar2/ has 30 files, most recent of which was accessed Oct 11, 2008. This question is similar to: http://stackoverflow.com/questions/5566310/how-to-recursively-find-and-list-the-latest-modified-files-in-a-directory-with-s but rather than the modification date, the date of interest is the last accessed date.
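    With GNU find the access timestamp can be printed directly through -printf and the %A format directives, which keeps this to two short loops. A sketch assuming GNU findutils (standard on Linux); note that on file systems mounted with noatime or relatime the recorded access times may lag behind reality.

    #!/bin/bash
    # Files directly under /foo, each with its last access date.
    find /foo -maxdepth 1 -type f -printf '%p\t%Ab %Ad, %AY\n'

    # Directories directly under /foo: report the date of the most recently
    # accessed file anywhere below each one (no depth limit).
    find /foo -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
        newest=$(find "$dir" -type f -printf '%A@ %Ab %Ad, %AY\n' | sort -n | tail -n 1)
        printf '%s\t%s\n' "$dir" "${newest#* }"
    done

    The second loop prefixes each date with the epoch access time (%A@) so that sort -n can pick the newest one, then strips the prefix before printing.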

    Read the article

  • ubuntu mail server settings and /etc/hosts file

    - by mbrc
    This is my /etc/hosts file: 127.0.0.1 localhost.localdomain localhost 127.0.1.1 ubuntu-server.xx.com ubuntu-server 193.77.xx.xx mail.xx.com mail # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Is this the correct configuration for my mail server? I am behind a router, so I don't know if it is OK to use my public IP for mail.xx.com and 127.0.0.1 for localhost. The problem is that I can receive mail, but when I send it I get: Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: SASL authentication failure: Password verification failed Oct 17 21:29:32 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL PLAIN authentication failed: authentication failure Oct 17 21:29:34 ubuntu-server postfix/smtpd[2453]: warning: my.router[192.168.1.1]: SASL LOGIN authentication failed: authentication failure EDIT: Maybe the problem is one of the ports. I forward these ports: POP3 - port 110 IMAP - port 143 SMTP - port 25 HTTP - port 80 Secure SMTP (SSMTP) - port 465 Secure IMAP (IMAP4-SSL) - port 585 StartTLS - port 587 IMAP4 over SSL (IMAPS) - port 993 Secure POP3 (SSL-POP) - port 995 postconf -n: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = amavis:[127.0.0.1]:10024 delay_warning_time = 4h disable_vrfy_command = yes inet_interfaces = all inet_protocols = all mailbox_size_limit = 0 maximal_backoff_time = 8000s maximal_queue_lifetime = 7d message_size_limit = 0 minimal_backoff_time = 1000s mydestination = myhostname = mail.xx.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mynetworks_style = host myorigin = /etc/mailname readme_directory = no receive_override_options = no_address_mappings recipient_delimiter = + relayhost = smtp_helo_timeout = 60s smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org smtpd_data_restrictions = reject_unauth_pipelining smtpd_delay_reject = yes smtpd_hard_error_limit = 12 smtpd_helo_required = yes smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit smtpd_recipient_limit = 16 smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = smtpd_sasl_security_options = noanonymous smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit smtpd_soft_error_limit = 3 smtpd_tls_cert_file = /etc/ssl/private/mail.xx.com.crt smtpd_tls_key_file = /etc/ssl/private/mail.xx.com.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes unknown_local_recipient_reject_code = 450 virtual_alias_maps = mysql:/etc/postfix/maps/alias.cf virtual_gid_maps = static:5000 virtual_mailbox_base = /var/spool/mail/virtual virtual_mailbox_domains = mysql:/etc/postfix/maps/domain.cf virtual_mailbox_limit = 0 virtual_mailbox_maps = mysql:/etc/postfix/maps/user.cf virtual_uid_maps = static:5000
    saslfinger -c (version 1.0.4) output: postfix Cyrus sasl configuration, mode: client-side SMTP AUTH -- basics -- Postfix: 2.9.3 System: Ubuntu 12.04.1 LTS \n \l -- smtp is linked to -- libsasl2.so.2 => /usr/lib/i386-linux-gnu/libsasl2.so.2 (0x00d3a000) -- active SMTP AUTH and TLS parameters for smtp -- relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes -- listing of /usr/lib/sasl2 -- total 28 drwxr-xr-x 2 root root 4096 okt 14 15:18 . drwxr-xr-x 72 root root 12288 okt 14 15:03 .. -rw-r--r-- 1 root root 1 maj 4 06:17 berkeley_db.txt -rw-r----- 1 root root 701 okt 14 15:18 saslpasswd.conf -rw-r----- 1 smmta smmsp 885 okt 14 15:18 Sendmail.conf -- listing of /etc/postfix/sasl -- total 12 drwxr-xr-x 2 root root 4096 okt 11 18:55 . drwxr-xr-x 4 root root 4096 okt 12 06:59 .. -rwx------ 1 root root 241 okt 11 18:55 smtpd.conf Cannot find the smtp_sasl_password_maps parameter in main.cf. Client-side SMTP AUTH cannot work without this parameter!
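    The "Password verification failed" warnings come from the SASL layer itself rather than from Postfix's restrictions, so before touching ports it is worth checking what /etc/postfix/sasl/smtpd.conf hands authentication off to and whether that backend accepts the credentials on its own. A hedged sketch (package and service names as on Ubuntu 12.04; the test account and password are hypothetical):

    # What does smtpd hand authentication to?
    sudo cat /etc/postfix/sasl/smtpd.conf        # look at pwcheck_method / auxprop_plugin

    # If it says pwcheck_method: saslauthd, the daemon must be running and its
    # socket reachable from Postfix's chroot (sasl2-bin package on Ubuntu):
    sudo service saslauthd status

    # Test the backend directly, bypassing Postfix (hypothetical credentials):
    sudo testsaslauthd -u someuser@xx.com -p 'secret' -s smtp

    # Then watch the mail log while retrying from the mail client:
    sudo tail -f /var/log/mail.log

    The saslfinger warning about smtp_sasl_password_maps only concerns relaying outbound mail through another smarthost as a client, which this setup does not do (relayhost is empty), so it is likely a red herring for this particular failure.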

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >