Search Results

Search found 2936 results on 118 pages for 'logfile analysis'.


  • How do I start the postgreSQL service upon boot?

    - by Homunculus Reticulli
    I am running PostgreSQL (v 8.4) on Ubuntu 10.04. The PG service currently starts on reboot (it has done so since I installed PG on this machine); however, I want the service to use a new data directory. Currently, after every reboot, I have to stop the running PG service and then manually type: /usr/local/pgsql/bin/pg_ctl start -D /my/preffered/data/directory -l /usr/local/pgsql/data/logfile Which file do I need to edit so that the service always uses the correct data directory?
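
    A minimal sketch of an init-style wrapper that always starts the server against the preferred data directory might look like the following (it assumes the source-built layout shown in the commands above; the script name, the postgres runtime user and the stop mode are assumptions, not part of the original setup):

      #!/bin/sh
      # Hypothetical /etc/init.d wrapper for a source-built PostgreSQL 8.4.
      # PGDATA is the only value that needs editing to switch data directories.
      PGDATA=/my/preffered/data/directory
      PGCTL=/usr/local/pgsql/bin/pg_ctl
      PGLOG=/usr/local/pgsql/data/logfile

      case "$1" in
        start)
          su - postgres -c "$PGCTL start -D $PGDATA -l $PGLOG"
          ;;
        stop)
          su - postgres -c "$PGCTL stop -D $PGDATA -m fast"
          ;;
        restart)
          $0 stop
          $0 start
          ;;
        *)
          echo "Usage: $0 {start|stop|restart}"
          exit 1
          ;;
      esac

    On a packaged Ubuntu install, by contrast, the data directory is normally set via data_directory in the cluster's postgresql.conf rather than in an init script.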

    Read the article

  • Server 2003 SP2 BSOD caused by fltmgr.sys

    - by MasterMax1313
    I'm running into a problem where a Server 2003 SP2 box has started crashing roughly once an hour, BSODing out with the message that fltmgr.sys is probably the cause. I ran dumpchk.exe on the memory.dmp file, indicating the same thing. Any thoughts on typical root causes? The following is the error code I'm seeing: Error code 0000007e, parameter1 c0000005, parameter2 f723e087, parameter3 f78cea8c, parameter4 f78ce788. After running dumpchk on the memory.dmp file, I get the following note: Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) The full log is here: Microsoft (R) Windows Debugger Version 6.12.0002.633 X86 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [c:\windows\memory.dmp] Kernel Complete Dump File: Full address space is available Symbol search path is: *** Invalid *** **************************************************************************** * Symbol loading may be unreliable without a symbol search path. * * Use .symfix to have the debugger choose a symbol path. * * After setting your symbol path, use .reload to refresh symbol locations. * **************************************************************************** Executable search path is: ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name: Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Loading Kernel Symbols ............................................................... ................................................. Loading User Symbols Loading unloaded module list ... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. *** ERROR: Symbol file could not be found. 
Defaulted to export symbols for fltmgr.sys - --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- ----- 32 bit Kernel Full Dump Analysis DUMP_HEADER32: MajorVersion 0000000f MinorVersion 00000ece KdSecondaryVersion 00000000 DirectoryTableBase 004e7000 PfnDataBase 81600000 PsLoadedModuleList 8089ffa8 PsActiveProcessHead 808a61c8 MachineImageType 0000014c NumberProcessors 00000001 BugCheckCode 0000007e BugCheckParameter1 c0000005 BugCheckParameter2 f723e087 BugCheckParameter3 f78dea8c BugCheckParameter4 f78de788 PaeEnabled 00000001 KdDebuggerDataBlock 8088e3e0 SecondaryDataState 00000000 ProductType 00000003 SuiteMask 00000110 Physical Memory Description: Number of runs: 3 (limited to 3) FileOffset Start Address Length 00001000 0000000000001000 0009e000 0009f000 0000000000100000 bfdf0000 bfe8f000 00000000bff00000 00100000 Last Page: 00000000bff8e000 00000000bffff000 KiProcessorBlock at 8089f300 1 KiProcessorBlock entries: ffdff120 Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name:*** ERROR: Module load completed but symbols could not be loaded for srv.sys Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 start end module name 80800000 80a50000 nt Tue Oct 19 10:00:49 2010 (4CBDA491) 80a50000 80a6f000 hal Sat Feb 17 00:48:25 2007 (45D69729) b83d4000 b83fe000 Fastfat Sat Feb 17 01:27:55 2007 (45D6A06B) b8476000 b84a1000 RDPWD Sat Feb 17 00:44:38 2007 (45D69646) b8549000 b8554000 TDTCP Sat Feb 17 00:44:32 2007 (45D69640) b8fe1000 b9045000 srv Thu Feb 17 11:58:17 2011 (4D5D53A9) b956d000 b95be000 HTTP Fri Nov 06 07:51:22 2009 (4AF41BCA) b9816000 b982d780 hgfs Tue Aug 12 20:36:54 2008 (48A22CA6) b9b16000 b9b20000 ndisuio Sat Feb 17 00:58:25 2007 (45D69981) b9cf6000 b9d1ac60 iwfsd Wed Sep 29 01:43:59 2004 (415A4B9F) b9e5b000 b9e62000 parvdm Tue Mar 25 03:03:49 2003 (3E7FFF55) b9e63000 b9e67860 lgtosync Fri Sep 12 04:38:13 2003 (3F6185F5) b9ed3000 b9ee8000 Cdfs Sat Feb 17 01:27:08 2007 (45D6A03C) b9f10000 b9f2e000 EraserUtilRebootDrv Thu Jul 07 21:45:11 2011 (4E166127) b9f2e000 b9f8c000 eeCtrl Thu Jul 07 21:45:11 2011 (4E166127) b9f8c000 b9f9d000 Fips Sat Feb 17 01:26:33 2007 (45D6A019) b9f9d000 ba013000 mrxsmb Fri Feb 18 10:22:23 2011 (4D5E8EAF) ba013000 ba043000 rdbss Wed Feb 24 10:54:03 2010 (4B854B9B) ba043000 ba0ad000 SPBBCDrv Mon Dec 14 23:39:00 2009 (4B2712E4) ba0ad000 ba0d7000 afd Thu Feb 10 08:42:18 2011 (4D53EB3A) ba0d7000 ba108000 netbt Sat Feb 17 01:28:57 2007 (45D6A0A9) ba108000 ba19c000 tcpip Sat Aug 15 05:53:38 2009 (4A8685A2) ba19c000 ba1b5000 ipsec Sat Feb 17 01:29:28 2007 (45D6A0C8) ba275000 ba288600 NAVENG Fri Jul 29 08:10:02 2011 (4E32A31A) ba289000 ba2ae000 SYMEVENT Thu Apr 15 21:31:23 2010 (4BC7BDEB) ba2ae000 ba42d300 NAVEX15 Fri Jul 29 08:07:28 2011 (4E32A280) ba42e000 ba479000 SRTSP Fri Mar 04 15:31:08 2011 (4D714C0C) ba485000 ba487b00 dump_vmscsi Wed Apr 11 13:55:32 2007 (461D2114) ba4e1000 ba540000 update Mon May 28 08:15:16 2007 (465AC7D4) ba568000 ba59f000 rdpdr Sat Feb 17 00:51:00 2007 (45D697C4) ba59f000 ba5b1000 raspptp Sat Feb 17 01:29:20 2007 (45D6A0C0) ba5b1000 ba5ca000 ndiswan Sat Feb 17 01:29:22 2007 (45D6A0C2) ba5da000 ba5e4000 dump_diskdump Sat Feb 17 01:07:44 2007 (45D69BB0) ba66a000 ba67e000 rasl2tp Sat Feb 17 01:29:02 2007 (45D6A0AE) ba67e000 ba69a000 VIDEOPRT Sat Feb 17 
01:10:30 2007 (45D69C56) ba69a000 ba6c1000 ks Sat Feb 17 01:30:40 2007 (45D6A110) ba6c1000 ba6d5000 redbook Sat Feb 17 01:07:26 2007 (45D69B9E) ba6d5000 ba6ea000 cdrom Sat Feb 17 01:07:48 2007 (45D69BB4) ba6ea000 ba6ff000 serial Sat Feb 17 01:06:46 2007 (45D69B76) ba6ff000 ba717000 parport Sat Feb 17 01:06:42 2007 (45D69B72) ba717000 ba72a000 i8042prt Sat Feb 17 01:30:40 2007 (45D6A110) baff0000 baff3700 CmBatt Sat Feb 17 00:58:51 2007 (45D6999B) bf800000 bf9d3000 win32k Thu Mar 03 08:55:02 2011 (4D6F9DB6) bf9d3000 bf9ea000 dxg Sat Feb 17 01:14:39 2007 (45D69D4F) bf9ea000 bf9fec80 vmx_fb Sat Aug 16 07:23:10 2008 (48A6B89E) bf9ff000 bfa4a000 ATMFD Tue Feb 15 08:19:22 2011 (4D5A7D5A) bff60000 bff7e000 RDPDD Sat Feb 17 09:01:19 2007 (45D70AAF) f7214000 f723a000 KSecDD Mon Jun 15 13:45:11 2009 (4A3688A7) f723a000 f725f000 fltmgr Sat Feb 17 00:51:08 2007 (45D697CC) f725f000 f7272000 CLASSPNP Sat Feb 17 01:28:16 2007 (45D6A080) f7272000 f7283000 symmpi Mon Dec 13 16:03:14 2004 (41BE0392) f7283000 f72a2000 SCSIPORT Sat Feb 17 01:28:41 2007 (45D6A099) f72a2000 f72bf000 atapi Sat Feb 17 01:07:34 2007 (45D69BA6) f72bf000 f72e9000 volsnap Sat Feb 17 01:08:23 2007 (45D69BD7) f72e9000 f7315000 dmio Sat Feb 17 01:10:44 2007 (45D69C64) f7315000 f733c000 ftdisk Sat Feb 17 01:08:05 2007 (45D69BC5) f733c000 f7352000 pci Sat Feb 17 00:59:03 2007 (45D699A7) f7352000 f7386000 ACPI Sat Feb 17 00:58:47 2007 (45D69997) f7487000 f7490000 WMILIB Tue Mar 25 03:13:00 2003 (3E80017C) f7497000 f74a6000 isapnp Sat Feb 17 00:58:57 2007 (45D699A1) f74a7000 f74b4000 PCIIDEX Sat Feb 17 01:07:32 2007 (45D69BA4) f74b7000 f74c7000 MountMgr Sat Feb 17 01:05:35 2007 (45D69B2F) f74c7000 f74d2000 PartMgr Sat Feb 17 01:29:25 2007 (45D6A0C5) f74d7000 f74e7000 disk Sat Feb 17 01:07:51 2007 (45D69BB7) f74e7000 f74f3000 Dfs Sat Feb 17 00:51:17 2007 (45D697D5) f74f7000 f7501000 crcdisk Sat Feb 17 01:09:50 2007 (45D69C2E) f7507000 f7517000 agp440 Sat Feb 17 00:58:53 2007 (45D6999D) f7517000 f7522000 TDI Sat Feb 17 01:01:19 2007 (45D69A2F) f7527000 f7532000 ptilink Sat Feb 17 01:06:38 2007 (45D69B6E) f7537000 f7540000 raspti Sat Feb 17 00:59:23 2007 (45D699BB) f7547000 f7556000 termdd Sat Feb 17 00:44:32 2007 (45D69640) f7557000 f7561000 Dxapi Tue Mar 25 03:06:01 2003 (3E7FFFD9) f7577000 f7580000 mssmbios Sat Feb 17 00:59:12 2007 (45D699B0) f7587000 f7595000 NDProxy Wed Nov 03 09:25:59 2010 (4CD162E7) f75a7000 f75b1000 flpydisk Tue Mar 25 03:04:32 2003 (3E7FFF80) f75b7000 f75c0080 SRTSPX Fri Mar 04 15:31:24 2011 (4D714C1C) f75d7000 f75e3000 vga Sat Feb 17 01:10:30 2007 (45D69C56) f75e7000 f75f2000 Msfs Sat Feb 17 00:50:33 2007 (45D697A9) f75f7000 f7604000 Npfs Sat Feb 17 00:50:36 2007 (45D697AC) f7607000 f7615000 msgpc Sat Feb 17 00:58:37 2007 (45D6998D) f7617000 f7624000 netbios Sat Feb 17 00:58:29 2007 (45D69985) f7627000 f7634000 wanarp Sat Feb 17 00:59:17 2007 (45D699B5) f7637000 f7646000 intelppm Sat Feb 17 00:48:30 2007 (45D6972E) f7647000 f7652000 kbdclass Sat Feb 17 01:05:39 2007 (45D69B33) f7657000 f7661000 mouclass Tue Mar 25 03:03:09 2003 (3E7FFF2D) f7667000 f7671000 serenum Sat Feb 17 01:06:44 2007 (45D69B74) f7677000 f7682000 fdc Sat Feb 17 01:07:16 2007 (45D69B94) f7687000 f7694b00 vmx_svga Sat Aug 16 07:22:07 2008 (48A6B85F) f7697000 f76a0000 watchdog Sat Feb 17 01:11:45 2007 (45D69CA1) f76a7000 f76b0000 ndistapi Sat Feb 17 00:59:19 2007 (45D699B7) f76b7000 f76c6000 raspppoe Sat Feb 17 00:59:23 2007 (45D699BB) f76c8000 f7707000 NDIS Sat Feb 17 01:28:49 2007 (45D6A0A1) f7707000 f770f000 kdcom Tue Mar 25 03:08:00 2003 
(3E800050) f770f000 f7717000 BOOTVID Tue Mar 25 03:07:58 2003 (3E80004E) f7717000 f771e000 intelide Sat Feb 17 01:07:32 2007 (45D69BA4) f771f000 f7726000 dmload Tue Mar 25 03:08:08 2003 (3E800058) f777f000 f7786000 dxgthk Tue Mar 25 03:05:52 2003 (3E7FFFD0) f7787000 f778e000 vmmemctl Tue Aug 12 20:37:25 2008 (48A22CC5) f77cf000 f77d6280 vmxnet Mon Sep 08 21:17:10 2008 (48C5CE96) f77d7000 f77df000 audstub Tue Mar 25 03:09:12 2003 (3E800098) f77ef000 f77f7000 Fs_Rec Tue Mar 25 03:08:36 2003 (3E800074) f77f7000 f77fe000 Null Tue Mar 25 03:03:05 2003 (3E7FFF29) f77ff000 f7806000 Beep Tue Mar 25 03:03:04 2003 (3E7FFF28) f7807000 f780f000 mnmdd Tue Mar 25 03:07:53 2003 (3E800049) f780f000 f7817000 RDPCDD Tue Mar 25 03:03:05 2003 (3E7FFF29) f7817000 f781f000 rasacd Tue Mar 25 03:11:50 2003 (3E800136) f7878000 f7897000 Mup Tue Apr 12 15:05:46 2011 (4DA4A28A) f7897000 f7899980 compbatt Sat Feb 17 00:58:51 2007 (45D6999B) f789b000 f789e900 BATTC Sat Feb 17 00:58:46 2007 (45D69996) f789f000 f78a1b00 vmscsi Wed Apr 11 13:55:32 2007 (461D2114) f79af000 f79b0280 vmmouse Mon Aug 11 07:16:51 2008 (48A01FA3) f79b1000 f79b2280 swenum Sat Feb 17 01:05:56 2007 (45D69B44) f7b4a000 f7bdf000 Ntfs Sat Feb 17 01:27:23 2007 (45D6A04B) Unloaded modules: ba65a000 ba668000 imapi.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 0000E000 ba1c4000 ba1d5000 vpc-8042.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00011000 f77df000 f77e7000 Sfloppy.SYS Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00008000 ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- Finished dump check
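
    Since the analysis above ran without symbols ("Kernel symbols are WRONG"), the "Probably caused by" verdict is only provisional; a hedged sketch of the standard WinDbg steps to redo the analysis against Microsoft's public symbol server (the local cache path is just an example):

      $$ Point the debugger at the Microsoft public symbol server, then reload symbols
      .symfix C:\symbols
      .reload
      $$ Re-run the bugcheck analysis and inspect the faulting stack and the suspect driver
      !analyze -v
      kb
      lmvm fltmgr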

    Read the article

  • snmpd dead but subsys locked

    - by Hina NMS
    Hi folks, I have an NMS and a client machine, and I want the client to send traps to the NMS. I have been configuring the snmpd.conf file, testing whether I receive an alert when I disable a process. For the changes made in the conf file to take effect, I restarted the snmpd daemon each time, and the testing was going fine. All of a sudden, when I restarted snmpd I received the error message "snmpd dead but subsys locked". I googled what it actually means and found out that when a service is started, a lock file is created in /var/lock/subsys; sometimes, if the service is not stopped properly, that file is left behind. Although I had started/stopped the snmpd service properly, it didn't go away, so I removed the file manually (via rm). When I checked the status, the error "snmpd dead but subsys locked" was gone, and on my NMS I received the snmpd coldstart alert. I started the snmpd service and everything went fine, BUT after 5 minutes I received the same error message again, and this keeps happening. What do I need to do now?
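
    Before clearing the lock again it is worth checking whether snmpd is really gone and why it keeps dying; a generic sketch for a Red Hat-style layout (paths are the usual defaults and may differ on your system):

      # Is snmpd actually running, and is a stale lock/pid file left behind?
      service snmpd status
      ls -l /var/lock/subsys/snmpd /var/run/snmpd.pid

      # Remove the lock only when no snmpd process exists, then restart cleanly
      pgrep snmpd || rm -f /var/lock/subsys/snmpd
      service snmpd restart

      # Watch syslog for the reason snmpd exits a few minutes later
      tail -f /var/log/messages | grep -i snmpd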

    Read the article

  • Best Practical RT, sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to automatically go directly into whichever queue/ticket they are related to or create a new one if none exist and the right queue e-mail setup in the web interface is used. I will have too many queues to have two line items within mailgate per queue. A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging on what appeared to be an answer to that person's query. I'm able to send and receive e-mail (via PostFix) to the default rt user and this user successfully accepts all e-mail for the relative domain. I have no idea where the e-mail goes - it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new. Here's and example of my ./procmail.log: procmail: [23048] Mon Aug 23 14:26:01 2010 procmail: Assigning "MAILDOMAIN=rt.mydomain.com " procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate " procmail: Assigning "RT_URL=http://rt.mydomain.com/ " procmail: Assigning "LOGABSTRACT=all " procmail: Skipped " " procmail: Skipped " " procmail: Assigning "LASTFOLDER={ " procmail: Opening "{ " procmail: Acquiring kernel-lock procmail: Notified comsat: "rt@18337:./{ " From [email protected] Mon Aug 23 14:26:01 2010 Subject: RE: [RT.mydomain.com #1] Test Ticket Folder: { 1616 Does the notified comsat portion mean that it notified RT? The contents of my ./procmailrc: #Preliminaries SHELL=/bin/sh #Use the Bourne shell (check your path!) #MAILDIR=${HOME} #First check what your mail directory is! MAILDIR="/var/mail/rt/" LOGFILE="home/rt//procmail.log" LOG="--- Logging ${LOGFILE} for ${LOGNAME}, " VERBOSE=yes MAILDOMAIN="rt.mydomain.com" RT_MAILGATE="/opt/rt3/bin/rt-mailgate" #RT_MAILGATE="/usr/local/bin/rt-mailgate" RT_URL="http://rt.mydomain.com/" LOGABSTRACT=all :0 { # the following line extracts the recipient from Received-headers. # Simply using the To: does not work, as tickets are often created # by sending a CC/BCC to RT TO=`formail -c -xReceived: |grep $MAILDOMAIN |sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'` QUEUE=`echo $TO| $HOME/get_queue.pl` ACTION=`echo $TO| $HOME/get_action.pl` :0 h b w |/usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL } I know that my get_queue.pl and get_action.pl scripts work, as those have been previously tested. Any help and/or guidance you can give would be greatly appreciated. Nicôle
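
    One way to narrow down where the mail goes is to replay a saved raw message through the same pipeline by hand and check what each stage reports; a sketch assuming a saved message test.msg, helper-script paths based on the procmailrc above, and a placeholder queue name:

      # Extract the recipient the same way the procmail recipe does
      TO=$(formail -c -xReceived: < test.msg | grep rt.mydomain.com | \
           sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/')
      echo "recipient: $TO"
      echo "$TO" | /home/rt/get_queue.pl
      echo "$TO" | /home/rt/get_action.pl

      # Feed the message to rt-mailgate directly and check its output and exit status
      /usr/bin/perl /opt/rt3/bin/rt-mailgate --queue General --action correspond \
          --url http://rt.mydomain.com/ < test.msg
      echo "rt-mailgate exit status: $?"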

    Read the article

  • how do i write an init script for django-supervisor

    - by amateur
    pardon me as this is my first time attempting to write a init script for centos 5. I am using django + supervisor to manage my celery workers, scheduler. Now, this is my naive simple attempt /etc/init.d/supervisor #!/bin/sh # # /etc/rc.d/init.d/supervisord # # Supervisor is a client/server system that # allows its users to monitor and control a # number of processes on UNIX-like operating # systems. # # chkconfig: - 64 36 # description: Supervisor Server # processname: supervisord # Source init functions /home/foo/virtualenv/property_env/bin/python /home/foo/bar/manage.py supervisor --daemonize inside my supervisor.conf: [program:celerybeat] command=/home/property/virtualenv/property_env/bin/python manage.py celerybeat --loglevel=INFO --logfile=/home/property/property_buyer/logfiles/celerybeat.log [program:celeryd] command=/home/foo/virtualenv/property_env/bin/python manage.py celeryd --loglevel=DEBUG --logfile=/home/foo/bar/logfiles/celeryd.log --concurrency=1 -E [program:celerycam] command=/home/foo/virtualenv/property_env/bin/python manage.py celerycam I couldn't get it to work. 2013-08-06 00:21:03,108 INFO exited: celerybeat (exit status 2; not expected) 2013-08-06 00:21:06,114 INFO spawned: 'celeryd' with pid 11772 2013-08-06 00:21:06,116 INFO spawned: 'celerycam' with pid 11773 2013-08-06 00:21:06,119 INFO spawned: 'celerybeat' with pid 11774 2013-08-06 00:21:06,146 INFO exited: celerycam (exit status 2; not expected) 2013-08-06 00:21:06,147 INFO gave up: celerycam entered FATAL state, too many start retries too quickly 2013-08-06 00:21:06,147 INFO exited: celeryd (exit status 2; not expected) 2013-08-06 00:21:06,152 INFO gave up: celeryd entered FATAL state, too many start retries too quickly 2013-08-06 00:21:06,152 INFO exited: celerybeat (exit status 2; not expected) 2013-08-06 00:21:07,153 INFO gave up: celerybeat entered FATAL state, too many start retries too quickly I believe it is the init script, but please help me understand what is wrong.
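
    For comparison, a minimal chkconfig-style skeleton that actually provides start/stop actions (everything below is a sketch: the paths and run-as user are taken from the question, the pass-through shutdown subcommand is an assumption about django-supervisor, and manage.py is run from the project directory so that the relative log paths in supervisor.conf resolve):

      #!/bin/sh
      # chkconfig: - 64 36
      # description: django-supervisor wrapper (sketch)

      PYTHON=/home/foo/virtualenv/property_env/bin/python
      PROJECT_DIR=/home/foo/bar
      RUNAS=foo

      start() {
          echo -n "Starting supervisord: "
          su - "$RUNAS" -c "cd $PROJECT_DIR && $PYTHON manage.py supervisor --daemonize"
          echo "done"
      }

      stop() {
          echo -n "Stopping supervisord: "
          # django-supervisor passes extra arguments through to supervisorctl (assumption)
          su - "$RUNAS" -c "cd $PROJECT_DIR && $PYTHON manage.py supervisor shutdown"
          echo "done"
      }

      case "$1" in
          start)   start ;;
          stop)    stop ;;
          restart) stop; start ;;
          *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
      esac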

    Read the article

  • Invalid Parameter on node puppet

    - by chandank
    I am getting this error: err: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet My node declaration (line 652 of node.pp): node 'test-puppet' { class { 'syslog_ng': host => "newhost", ip => "192.168.1.10", port => "1999", logfile => "/var/log/test.log", } } On the module side: class syslog_ng::config ( $host, $ip, $port, $logfile) { file {'/etc/syslog-ng/syslog-ng.conf': ensure => present, owner => 'root', group => 'root', content => template('syslog-ng/syslog-ng.conf.erb'), notify => Service['syslog-ng'], require => Class['syslog_ng::install'], } file {"/etc/syslog-ng/conf/${host}.conf": ensure => present, owner => 'root', group => 'root', notify => Service['syslog-ng'], content => template("syslog-ng/${host}.conf.erb"), require => Class['syslog_ng::install'], } } I think I am doing this as per the Puppet documentation.
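
    For reference, the node above declares class syslog_ng with parameters, while the parameters shown are defined on syslog_ng::config; a hedged sketch of a top-level class that accepts those parameters and forwards them (the surrounding install/service classes are assumed to exist elsewhere in the module):

      # Sketch only: accept the parameters used in the node declaration and
      # pass them through to syslog_ng::config.
      class syslog_ng ($host, $ip, $port, $logfile) {
        class { 'syslog_ng::config':
          host    => $host,
          ip      => $ip,
          port    => $port,
          logfile => $logfile,
        }
      }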

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Untitled Document table { border: thin solid; } Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.   TIP It is not necessary to highlight and copy the row or column descriptions   Next from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View;from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. =After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh: From now on for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again. As you might realize, trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren’t any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (but which is taken from another context).   So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you have the chance now to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:   By selecting the Manage POV option from the Smart View meny in Word or Power Point…   … the following POV Manager – Queries window opens:   You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window. 
After confirming your changes you need to refresh your document again. Be aware, that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to miss mentioning: Using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document. Because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] . About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.  

    Read the article

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following command: mailbox_command = /usr/bin/procmail -f -a "$USER" I have nothing in my procmail config but the following: LOGFILE=/var/procmailrc/log And I send an email to a recipient that previously worked (before I attached procmail). Now it fails with error: Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active) Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" ) Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed It seems like there is some sort of permissions issue but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail+postfix work?
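
    The two "Couldn't create"/"Couldn't read" lines suggest the message (addressed to postmaster, typically an alias for root) is being delivered as a user that can write neither a spool file nor a home directory; a generic sketch for checking that by hand (the test-message path is a placeholder):

      # Who do the local aliases expand postmaster and root to?
      grep -E '^(postmaster|root):' /etc/aliases

      # Can the delivery user write its spool file and the procmail log location?
      ls -ld /var/spool/mail /var/procmailrc
      id nobody

      # Run procmail by hand as that user with verbose logging and a throwaway mailbox
      su -s /bin/sh nobody -c '/usr/bin/procmail VERBOSE=yes LOGFILE=/tmp/procmail.test.log DEFAULT=/tmp/test.mbox' < /tmp/test.msg
      cat /tmp/procmail.test.log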

    Read the article

  • Procmail Postfix issue

    - by Blucreation
    Our server runs CentOS and uses Postfix: Nov 1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected] Nov 1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]> Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active) Nov 1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir) Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed Nov 1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42] I have this in my /etc/postfix/main.cf: mailbox_command = /usr/bin/procmail -a "$EXTENSION" My /etc/procmailrc contains: PATH="/usr/bin" SHELL="/bin/bash" LOGFILE="/var/log/procmail.log" VERBOSE="YES" LOG="#TEST#" I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this: procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null I get entries in my log file, so I know procmail is working. Am I doing something wrong? Am I missing something? I eventually want my rule to call a PHP script only if the subject contains "SUPPORT TICKET" and the To address is "[email protected]", but that's once I have this issue solved.
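
    Note that the log above shows the message being handled by the virtual delivery agent (relay=virtual, "delivered to maildir"); mailbox_command only applies to deliveries made by the local agent, which is one reason a procmailrc may never be consulted for these recipients. For the eventual goal, a hedged sketch of a procmail recipe (the address and script path are placeholders):

      # Hand messages for the support address whose subject contains "SUPPORT TICKET"
      # to a PHP script; everything else falls through to normal delivery.
      :0
      * ^TO_support@example\.com
      * ^Subject:.*SUPPORT TICKET
      | /usr/bin/php /path/to/ticket_handler.php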

    Read the article

  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we’re heading home, so it is time to lean back and reflecting about the impressions we have from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article: It was the biggest OOW ever. Parallel to the conference the America’s Cup took place in San Francisco and the Oracle Team America won. Amazing job by the team and again congratulations from our side Back to the conference. The main topics for us are: Oracle SOA / BPM Suite 12c Adaptive Case management (ACM) Big Data Fast Data Cloud Mobile Below we will go a little more into detail, what are the key takeaways regarding the mentioned points: Oracle SOA / BPM Suite 12c During the five days at OOW, first details of the upcoming major release of Oracle SOA Suite 12c and Oracle BPM Suite 12c have been introduced. Some new key features are: Managed File Transfer (MFT) for transferring big files from a source to a target location Enhanced REST support by introducing a new REST binding Introduction of a generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce Enhanced analytics with BAM, which has been totally reengineered (BAM Console now also runs in Firefox!) Introduction of templates (OSB pipelines, component templates, BPEL activities templates) EM as a single monitoring console OSB design-time integration into JDeveloper (Really great!) Enterprise modeling capabilities in BPM Composer These are only a few points from what is coming with 12c. We are really looking forward for the new realese to come out, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues it’s way – very impressive. Adaptive Case Management Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job especially at the demo grounds in showing the upcoming Case Management UI (will be available in 11g with the next BPM Suite MLR Patch), the roadmap and the differences between traditional business process modeling. They have been very busy during the conference because a lot of partners and customers have been interested Big Data Big Data is one of the current hype themes. Because of huge data amounts from different internal or external sources, the handling of these data becomes more and more challenging. Companies have a need for analyzing the data to optimize their business. The challenge is here: the amount of data is growing daily! To store and analyze the data efficiently, it is necessary to have a scalable and flexible infrastructure. Here it is important that hardware and software are engineered to work together. Therefore several new features of the Oracle Database 12c, like the new in-memory option, have been presented by Larry Ellison himself. From a hardware side new server machines like Fujitsu M10 or new processors, such as Oracle’s new M6-32 have been announced. The performance improvements, when using one of these hardware components in connection with the improved software solutions were really impressive. For more details about this, please take look at our previous blog post. 
Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of: Oracle Big Data Appliance that is preconfigured with Hadoop Oracle Exdata which stores a huge amount of data efficently, to achieve optimal query performance Oracle Exalytics as a fast and scalable Business analytics system Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively the analysis can be directly implemented in Hadoop using “R”. In addition Oracle BI Tools can be used to analyze the data. Fast Data Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels with a high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be analyzed regarding business-relevant patterns in real-time. Therefore these patterns must be identified efficiently and performant. To do so, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressive how efficient real-time data processing can be. One example for Fast Data solutions that was shown during the OOW was the analysis of twitter streams regarding customer satisfaction. The feeds with negative words like “bad” or “worse” have been filtered and after a defined treshold has been reached in a certain timeframe, a business event was triggered. Cloud Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision – companies can focus on their real business while all of the applications are available via Cloud. This also includes Oracle Database or Oracle Weblogic, so that companies can also build, deploy and run their own applications within the cloud. Three different approaches have been introduced: Infrastructure as a Service (IaaS) Platform as a Service (PaaS) Software as a Service (SaaS) Using the IaaS approach only the infrastructure components will be managed in the Cloud. Customers will be very flexible regarding memory, storage or number of CPUs because those parameters can be adjusted elastically. The PaaS approach means that besides the infrastructure also the platforms (such as databases or application servers) necessary for running applications will be provided within the Cloud. Here customers can also decide, if installation and management of these infrastructure components should be done by Oracle. The SaaS approach describes the most complete one, hence all applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services. In conclusion this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days have been very very interresting. We collected many helpful informations for our projects. The new innovations presented at the confernce are great and being part of this was even greater! We are looking forward to next years’ conference! 
Links: http://www.oracle.com/openworld/index.html http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cattleCrew,Sven Bernhard,OOW2013,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Design advice for avoiding change in several classes

    - by Anders Svensson
    Hi, I'm trying to figure out how to design a small application more elegantly, and make it more resistant to change. Basically it is a sort of project price calculator, and the problem is that there are many parameters that can affect the pricing. I'm trying to avoid cluttering the code with a lot of if-clauses for each parameter, but still I have e.g. if-clauses in two places checking for the value of the size parameter. I have the Head First Design Patterns book, and have tried to find ideas there, but the closest I got was the decorator pattern, which has an example where starbuzz coffee sets prices depending first on condiments added, and then later in an exercise by adding a size parameter (Tall, Grande, Venti). But that didn't seem to help, because adding that parameter still seemed to add if-clause complexity in a lot of places (and this being an exercise they didn't explain that further). What I am trying to avoid is having to change several classes if a parameter were to change or a new parameter added, or at least change in as few places as possible (there's some fancy design principle word for this that I don't rememeber :-)). Here below is the code. Basically it calculates the price for a project that has the tasks "Writing" and "Analysis" with a size parameter and different pricing models. There will be other parameters coming in later too, like "How new is the product?" (New, 1-5 years old, 6-10 years old), etc. Any advice on the best design would be greatly appreciated, whether a "design pattern" or just good object oriented principles that would make it resistant to change (e.g. adding another size, or changing one of the size values, and only have to change in one place rather than in several if-clauses): public class Project { private readonly int _numberOfProducts; protected Size _size; public Task Analysis { get; set; } public Task Writing { get; set; } public Project(int numberOfProducts) { _numberOfProducts = numberOfProducts; _size = GetSize(); Analysis = new AnalysisTask(numberOfProducts, _size); Writing = new WritingTask(numberOfProducts, _size); } private Size GetSize() { if (_numberOfProducts <= 2) return Size.small; if (_numberOfProducts <= 8) return Size.medium; return Size.large; } public double GetPrice() { return Analysis.GetPrice() + Writing.GetPrice(); } } public abstract class Task { protected readonly int _numberOfProducts; protected Size _size; protected double _pricePerHour; protected Dictionary<Size, int> _hours; public abstract int TotalHours { get; } public double Price { get; set; } protected Task(int numberOfProducts, Size size) { _numberOfProducts = numberOfProducts; _size = size; } public double GetPrice() { return _pricePerHour * TotalHours; } } public class AnalysisTask : Task { public AnalysisTask(int numberOfProducts, Size size) : base(numberOfProducts, size) { _pricePerHour = 850; _hours = new Dictionary<Size, int>() { { Size.small, 56 }, { Size.medium, 104 }, { Size.large, 200 } }; } public override int TotalHours { get { return _hours[_size]; } } } public class WritingTask : Task { public WritingTask(int numberOfProducts, Size size) : base(numberOfProducts, size) { _pricePerHour = 650; _hours = new Dictionary<Size, int>() { { Size.small, 125 }, { Size.medium, 100 }, { Size.large, 60 } }; } public override int TotalHours { get { if (_size == Size.small) return _hours[_size] * _numberOfProducts; if (_size == Size.medium) return (_hours[Size.small] * 2) + (_hours[Size.medium] * (_numberOfProducts - 2)); return (_hours[Size.small] * 2) + 
(_hours[Size.medium] * (8 - 2)) + (_hours[Size.large] * (_numberOfProducts - 8)); } } } public enum Size { small, medium, large } public partial class Form1 : Form { public Form1() { InitializeComponent(); List<int> quantities = new List<int>(); for (int i = 0; i < 100; i++) { quantities.Add(i); } comboBoxNumberOfProducts.DataSource = quantities; } private void comboBoxNumberOfProducts_SelectedIndexChanged(object sender, EventArgs e) { Project project = new Project((int)comboBoxNumberOfProducts.SelectedItem); labelPrice.Text = project.GetPrice().ToString(); labelWriterHours.Text = project.Writing.TotalHours.ToString(); labelAnalysisHours.Text = project.Analysis.TotalHours.ToString(); } } At the end is a simple current calling code in the change event for a combobox that set size... (BTW, I don't like the fact that I have to use several dots to get to the TotalHours at the end here either, as far as I can recall, that violates the "principle of least knowledge" or "the law of demeter", so input on that would be appreciated too, but it's not the main point of the question) Regards, Anders

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios sever. This send_nsca script also prints a single status line to STDOUT, indicating success or failure. 0 * * * * root /usr/local/nagios/sbin/nsca_check_disk This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam. forwarded nsca_check_disk: 1 data packet(s) sent to host successfully. I'm looking for a log method which: Doesn't spam the messages to email or the console Don't create yet another krufty logfile which requires cleanup months or years later. Capture the log information somewhere, so it can be viewed later if desired. Works on most unixes Fits into an existing log infrastructure. Uses common syslog conventions like 'facility' Some of these are third party scripts, and don't always do logging internally. UPDATE 2010-04-30 In the process of writing this question, I think I have answered myself. So I'll answer myself "Jeopardy-style". Is there any problem with this method? The following will send any Cron output to /usr/bin//logger, which will send to syslog, with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation. */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 |/usr/bin/logger -t nsca_check_disk /var/log/messages now has one additional message which says this: Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully. I like /usr/bin/logger , because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do logrotation, and do it well.
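
    logger can also tag the message with a syslog facility and priority, so existing syslog rules can route or filter the cron output like any other log source; a small variation on the crontab line above (the local0.notice choice is just an example):

      */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk -p local0.notice

    A matching syslog rule (for example sending local0.* to its own file) then keeps these messages out of the catch-all log if desired.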

    Read the article

  • Installing and running two postgresql versions on different ports (or two instances of same server)

    - by Andrius
    I have PostgreSQL 9.1 installed on my machine (Ubuntu). I need another PostgreSQL server running next to the old one. The exact version does not matter, but I'm thinking of using 9.2. How can I properly install and run another PostgreSQL version without breaking the old one (the way an upgrade would), so that both versions run independently on different ports - the old one on 5432 and the new one on 5433, for example? The reason I need this is to host the databases of two OpenERP versions. If I run two OpenERP servers (with different versions) against a single PostgreSQL port, the newer OpenERP version detects the older version's database and tries to run it, but crashes because it uses a different schema. P.S. Or maybe I could just run the same PostgreSQL server on two ports? Update: So far I tried this: /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2 It created a new cluster. I changed the port to 5433 in the new cluster's postgresql.conf file. Then I ran: /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start and got the response server starting. But when I tried to enter the new cluster's template database with: psql template1 -p 5433 I got this error: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"? Also, when I now try to stop the server with: /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start I get this error: pg_ctl: PID file "main2/postmaster.pid" does not exist Is server running? So I don't understand whether the server is running and what I'm missing here. Update: Found what was wrong. Stupid me. I didn't notice that when I changed the port in the .conf file, that line was still commented out. So I actually didn't change anything the first time, although I thought I did, and the server used the default 5432 port.
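
    On Debian/Ubuntu the packaged wrapper scripts can also create and run a second cluster on its own port without touching the first one; a sketch using the stock cluster tools (the cluster name is just an example):

      # Create a second 9.1 cluster listening on 5433 next to the existing "main" on 5432
      sudo pg_createcluster --port 5433 9.1 main2
      sudo pg_ctlcluster 9.1 main2 start
      pg_lsclusters                              # lists both clusters with port and data directory
      sudo -u postgres psql -p 5433 template1    # connect to the new cluster
      # (With the hand-rolled cluster above, the port setting in its postgresql.conf
      #  must actually be uncommented, which was the problem described in the update.)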

    Read the article

  • batch copy files with error log on missing permissions

    - by sc911
    Hi *, I'm searching for a tool to batch-copy files that should support the following points: copy files from a network share; report any errors; show errors only, or allow filtering the log on errors; don't stop on an error; also report when a file or a folder could not be copied due to missing permissions; and, if possible, have a queue where new jobs can be added while copying. I tried the following tools: TeraCopy: takes a long time just to calculate the time and the size of the job, and does not report errors due to missing permissions (it doesn't even add those files to the copy queue). Karen's Replicator: does not report errors due to missing permissions. xcopy: does a great job when using the right parameters and piping the output to a file (in the German localization xcopy /k /r /e /i /s /c /h SOURCE TARGET>LOGFILE 2>&1 will do the job; opening the logfile in IE will give you a great monitor), but queuing jobs is not possible (OK, you can join them all in a batch file, but you cannot queue jobs while another one is running - hm, thinking of a batch script that loops through a file with the source-target config...). To be continued. Which tools do you use? Tell me! Thx sc911
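
    robocopy (bundled with Vista/Server 2008 and later, available earlier via the resource kit) covers most of that wish list: it keeps going after errors, logs them, and records access-denied items; a hedged example invocation (paths and switches are only a starting point):

      :: Copy a share, keep going on errors, and write a full log including failures
      robocopy \\server\share D:\backup /E /COPY:DAT /R:1 /W:1 /NP /TEE /LOG:C:\logs\copyjob.log
      :: /E          copy subdirectories, including empty ones
      :: /R:1 /W:1   retry once and wait 1 second instead of the very high defaults
      :: /NP         no per-file progress, keeps the log readable
      :: /TEE        write to the console as well as to the log file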

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet): /usr/local/nginx/logs/*.log { #rotate the logfile(s) daily daily # adds extension like YYYYMMDD instead of simply adding a number dateext # If log file is missing, go on to next one without issuing an error msg missingok # Save logfiles for the last 49 days rotate 49 # Old versions of log files are compressed with gzip compress # Postpone compression of the previous log file to the next rotation cycle delaycompress # Do not rotate the log if it is empty notifempty # create mode owner group create 644 nginx nginx #after logfile is rotated and nginx.pid exists, send the USR1 signal postrotate [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid` endscript } I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
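
    logrotate only runs when cron invokes it (normally from /etc/cron.daily), so rotating at a fixed time such as 11 pm simply means calling it from a crontab entry at that time; a sketch using the config path from the question:

      # /etc/crontab entry: run the nginx rotation every day at 23:00 using only this config
      # (move the file out of /etc/logrotate.d if the nightly cron.daily run should not also pick it up)
      0 23 * * * root /usr/sbin/logrotate -f /etc/logrotate.d/nginx

      # Dry-run to verify the config (including dateext) without touching any files
      /usr/sbin/logrotate -d /etc/logrotate.d/nginx

    Note that dateext on its own appends -YYYYMMDD; newer logrotate versions also accept a dateformat directive if a dashed date is preferred.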

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04 /etc/suphp/suphp.conf: [global] ;Path to logfile logfile=/var/log/suphp/suphp.log ;Loglevel loglevel=info ;User Apache is running as webserver_user=www-data ;Path all scripts have to be in docroot=/home ;Path to chroot() to before executing script ;chroot=/mychroot ; Security options allow_file_group_writeable=false allow_file_others_writeable=false allow_directory_group_writeable=false allow_directory_others_writeable=false ;Check wheter script is within DOCUMENT_ROOT check_vhost_docroot=true ;Send minor error messages to browser errors_to_browser=false ;PATH environment variable env_path=/bin:/usr/bin ;Umask to set, specify in octal notation umask=0077 ; Minimum UID min_uid=100 ; Minimum GID min_gid=100 [handlers] ;Handler for php-scripts application/x-httpd-suphp="php:/usr/bin/php-cgi" ;Handler for CGI-scripts x-suphp-cgi="execute:!self" One of my vhosts in sites-enabled: NameVirtualHost *:8080 <VirtualHost *:8080> ServerAdmin ... ServerName ... ServerAlias ... AddType application/x-httpd-php .php AddHandler application/x-httpd-php .php suPHP_Engine on suPHP_UserGroup user user suPHP_ConfigPath "/home/user/etc" suPHP_PHPPath /usr/bin DocumentRoot /home/user/web/site.com/ ErrorLog /var/log/apache2/site.com-error_log CustomLog /var/log/apache2/site.com-access_log common <Directory /home/user/web/site.com/> Order Deny,Allow Allow from all Options +Indexes </Directory> </VirtualHost> But when I created /home/user/web/id.php containing <?php system('id'); ?>, the result I get is: uid=33(www-data) gid=33(www-data) groups=33(www-data) I have no idea what to do, so I was hoping the community could help. Thanks.
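
    For reference, the handler name registered under [handlers] above is application/x-httpd-suphp, so a vhost that is meant to hand .php files to suPHP usually maps the extension to that same type; a hedged sketch of the relevant vhost lines (directive names are from mod_suphp, everything else mirrors the config above):

      <VirtualHost *:8080>
          ServerName ...
          DocumentRoot /home/user/web/site.com/

          suPHP_Engine on
          suPHP_UserGroup user user
          suPHP_ConfigPath /home/user/etc
          # Map .php to the handler defined in [handlers] of suphp.conf
          suPHP_AddHandler application/x-httpd-suphp
          AddHandler application/x-httpd-suphp .php
      </VirtualHost>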

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers   We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers.  Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence, preconcieved and implemented inside an applications.  Read on for more information! TM Forum Management World, Nice, France - 18 May 2010 News Facts To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Product Details Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using: More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity. Embedded OLAP cubes for extremely fast dimensional analysis of business information. Embedded data mining models for sophisticated trending and predictive analysis. Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements. With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more. Supporting Quotes "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation. "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. 
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • SEO: Nested List vs List, Split Over Divs vs Definition List

    - by Jon P
    From an SEO perspective which, if any, is better: Option 1: Nested lists with h2 tags <ul id="mainpoints"> <li><h2>Powerful Analysis</h2> <ul> <li>Charting and indicators</li> <li>Daily trading signals</li> <li>Company health checks</li> </ul> </li> <li><h2>World Market Data</h2> <ul> [List Items removed for brevity] </ul> </li> <li><h2>Daily Market Data</h2> <ul> [List Items removed for brevity] </ul> </li> </ul> Option 2: Divs with h2 and lists <div id="mainpoints"> <div> <h2>Powerful Analysis</h2> <ul> <li>Charting and indicators</li> <li>Daily trading signals</li> <li>Company health checks</li> </ul> </div> <div> <h2>World Market Data</h2> <ul> [List Items removed for brevity] </ul> </div> <div> <h2>Daily Market Data</h2> <ul> [List Items removed for brevity] </ul> </div> </div> Option 3: Definition List <dl id="mainpoints"> <dt>Powerful Analysis</dt> <dd>- Charting and indicators</dd> <dd>- Daily trading signals</dd> <dd>- Company health checks</dd> <dt>World Market Data</dt> [List Items removed for brevity] <dt>Daily Market Data</dt> [List Items removed for brevity] </dl> My instincts tell me that semanticaly the pure list options (1 & 3) are the best and that h2 may be more SEO friendly (1 & 2) which would point to option 1 as being the best option. I do love the lean makeup of the definition list but will I take an SEO hit by losing the h2 tags? Before anyone asks, h2 is not valid markup in a dt tag. Are my instincts right with a nested list being the way to go?

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
    Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process.
    Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows:
    Requirements
    Analysis
    Design and Development
    Implementation
    Testing
    Deployment to Production
    Maintenance
    In an on-premise environment, this often equates to the following process map:
    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise.
    Implementation: Code checked into main branch. Code forked as needed.
    Testing: Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed.
    Deployment to Production: Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place.
    Maintenance: Code changes evaluated and altered according to need.
    In a distributed computing environment like Windows Azure, the process maps a bit differently:
    Requirements: Business requirements formed by Business Analysts, Developers and Data Professionals.
    Analysis: Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved.
    Design and Development: Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise.
    Implementation: Code checked into main branch. Code forked as needed.
    Testing: Code deployed to Azure. Manual and automated functional, load, security, etc. performed.
    Deployment to Production: Code deployed to Azure. Point in time backup and recovery plans configured and put into place. (HA/DR and automated backups already present in Azure fabric)
    Maintenance: Code changes evaluated and altered according to need.
    This means that several steps can be removed or expedited.
    It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the "Azure Marketplace." In effect, this becomes an app store for enterprises to select pre-defined code and data applications to mesh or bolt in to their current code, possibly saving development time.
    Resources:
    Whitepaper download - What is ALM?  http://go.microsoft.com/?linkid=9743693
    Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690
    LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)  Accenture & Oracle Study: The Challenges of Corporate Financial Reporting  (link) Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link) Flash Demo: OBIEE & Exalytics in Retail (link) Customers Italian Partner Alfa Sistemi implemented at Autovie Venete S.p.A. Integrates Business Intelligence and Performance Management to Improve Efficiency and Speed for Managing Public Works Projects (English version)  / Autovie Venete implementa un sistema integrato di Business Intelligence e Performance Management per migliorare l’efficienza e la tempestività dell’attività di Controlling di Commessa (Italian version). FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75% Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability Children’s Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link) Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)   Events June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link) June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (performance architects) (link) June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link) June 21: Comprehensive and Accurate Forecasting for Healthcare (link) June 26: What Exactly is Exalytics? (KPI Partners) (link) Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)  Webcast Replay: Is Hope and Email The Core of Your Reconciliation Process? (link) Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x  (link) Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability?  (link) Enterprise Performance Management Final Oracle EPM  Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link) New on EPM Blog: What's Going on With IFRS? (link) How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory!  (link) New Youtube Video: Business Case Analysis with Oracle Crystal Ball (link) Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions and Parameter Edits When Fitting Distributions to name a few. Get full details from the online New Features Guide (link) New DRM Oracle-by-Examples now available (link) Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don’t  (link) Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link) Business  Intelligence Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store.  
How Oracle enabled real-time operational reporting for its $20B services contract business with Golden Gate & OBIEE (link) KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link) “Getting Started with Oracle Endeca Information Discovery” video tutorials now available (link) Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link) Oracle Fusion Applications: Monthly Partner Updates Webcast Replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together: More on Fusion CRM: Fusion Marketing More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners Introducing the Oracle Fusion Accounting Hub Implementing Fusion Applications using Oracle's Composers Oracle Fusion Applications Co-Existence

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.
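    To make that analysis step slightly more concrete, here is a deliberately toy sketch - not Erl's methodology, not the FBI's tooling, and with entirely invented service names - of tagging candidate services pulled out of a modeled process as entity, task or utility services and flagging utility concerns an ESB may already cover, which is the kind of normalization the inventory blueprint is after:

        # Toy sketch of one slice of service-oriented analysis: classifying
        # candidate services extracted from a modeled business process.
        # All service names here are invented for illustration.
        from collections import defaultdict

        candidates = [
            ("CustomerRecord",  "entity"),   # agnostic, data-centric service
            ("OrderHistory",    "entity"),
            ("CreditCheck",     "task"),     # process-specific composition
            ("NotifyWarehouse", "task"),
            ("AuditLogging",    "utility"),  # cross-cutting technical concern
            ("MessageRouting",  "utility"),
        ]

        inventory = defaultdict(list)
        for name, kind in candidates:
            inventory[kind].append(name)

        # Utility concerns an enterprise service bus might already provide,
        # so they should not be rebuilt inside each project.
        platform_provided = {"AuditLogging", "MessageRouting"}

        for kind in ("entity", "task", "utility"):
            for service in inventory[kind]:
                note = " <- check against platform services" if service in platform_provided else ""
                print(f"{kind:8} {service}{note}")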

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been establishing its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience: it combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance, bringing together the best of both worlds in a solution that enables SOA Governance projects, initiatives and programs.
    Enterprise Architecture Management System
    Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts.
    Oracle Enterprise Repository
    Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line (a toy sketch of this kind of impact analysis appears at the end of this entry). It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include:
    Asset Management
    Asset Lifecycle Management
    Usage Tracking
    Service Discovery
    Version Management
    Dependency Analysis
    Portfolio Management
    EAMS - OER Edition
    The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.
    The SOA blueprints
    The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organization's needs in terms of detail, audience and metadata.
    Charts & Dashboards
    The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.
    Time Based Visualization
    All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future. The solution delivers Gap Analysis, a project-oriented approach that takes into consideration the As-Was, As-Is and To-Be. Time based visualization differentiating factors:
    Extensive automation and maintenance of architectural representations
    Organization wide solution
    Easy access and navigation to and between all architectural artifacts and representations
    Flexible meta-model, customization and extensibility capabilities
    Lifecycle management and enforcement of the time dimension over all the repository content
    Profile based customization
    Comprehensive visibility
    Architectural alignment
    Friendly and striking user interfaces
    For more information on EAMS visit us here. For more information on SOA visit us here.
    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum
    Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress
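    The dependency and impact analysis capabilities listed above amount to traversing the graph of asset relationships to see what is affected when an asset changes. The following is only an invented, minimal illustration of that idea - the asset names are made up and this is not the OER API or data model:

        # Toy sketch of dependency/impact analysis over an asset graph.
        # Asset names and relationships are invented; this is not the OER API.
        deps = {
            "OrderEntryUI":    ["OrderService"],
            "OrderService":    ["CustomerEntityService", "PricingService"],
            "PricingService":  ["DiscountRules"],
            "BillingBatchJob": ["CustomerEntityService"],
        }

        def impacted_by(asset, graph):
            """Return every asset that directly or indirectly depends on `asset`."""
            impacted, frontier = set(), {asset}
            while frontier:
                dependents = {consumer for consumer, used in graph.items()
                              if frontier & set(used)}
                frontier = dependents - impacted
                impacted |= dependents
            return impacted

        # Which assets must be re-examined if CustomerEntityService changes?
        print(sorted(impacted_by("CustomerEntityService", deps)))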

    Read the article

  • TechEd North America 2012 – Day 1 #msTechEd

    - by Marco Russo (SQLBI)
    Yesterday Alberto and I delivered the PreCon day about BISM Tabular in Analysis Services 2012. We received very good feedback and now I am looking forward to meeting people who read our blogs and our books! Ping me on Twitter at @marcorus if you want to contact me during the conference. This is my schedule for the next few days:
    Monday, June 11, 2012
    - 10:30am-12:30pm I will be in the Technical Learning Center area, at the Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    - I will try to watch some sessions in the afternoon
    - 6:30pm-7:00pm I will be at the O’Reilly booth meeting book readers and doing some book signing
    Tuesday, June 12, 2012
    - 12:30pm-3:30pm I will be in the Technical Learning Center area, at the Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    - 5:00pm-6:15pm I will attend Alberto's session DBI413 Many-to-Many Relationships in BISM Tabular (room S330E)
    - 6:15pm-9:00pm Community Night & Ask the Experts, where we'll discuss Analysis Services, Tabular and Multidimensional!
    Wednesday, June 13, 2012
    - 11:15am-11:30am Don't miss this special demo session at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Alberto and I will present Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity), and you're invited to guess the response time of DAX queries on a 4-billion-row table with many-to-many relationships before we run them! We'll give away some 8GB USB keys if you guess the right answer!
    - 12:30pm-1:00pm Alberto and I will have a book signing session at the TechEd Bookstore
    - 3:00pm-5:00pm I will be in the Technical Learning Center area, at the Breakthrough Insights (station #8) in the Database & Business Intelligence area (dedicated to SQL Server 2012)
    Thursday, June 14, 2012
    - 2:45pm-4:00pm I will deliver my DBI319 BISM: Multidimensional vs. Tabular breakthrough session in room S320A. I expect many questions here!
    And if you want to learn more about Analysis Services Tabular, we announced two more online sessions of our SSAS Tabular Workshop:
    - July 2-3, 2012 - SSAS Workshop Online - America's time zone
    - September 3-4, 2012 - SSAS Workshop Online - America's time zone
    Register now if you are interested; the early bird for the July session expires on June 19, 2012! I will also deliver an SSAS Workshop in Oslo (Norway) on August 27-28, 2012.

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted around the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success shows that 72% of users hadn’t used PFM and worse, 58% of them were not kicked about using it. Of the rest who had used it, only half did on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expense, it is simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and Cash Flow is important but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget over runs and spend analysis are post facto and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight based decisions. So what can be done to resuscitate PFM? Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to Spendable Balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions. Personal Financial Guidance - Banks need to think ground level and see if their PFM offering is really helping customers achieve self actualisation. Banks need to recognise that most customers out there are non-proficient about making the best value of their money. Customers return when they know that they are being guided rather than being just informed on their finance. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending thereby encouraging sound savings habits. Mobile PFM – Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM on to devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device in pushing real time notifications to make pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals real time and push them pre-event on to the device. So next time, I should know that I have over run my eating out budget before walking into that burger joint and not after. Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. 
Integrating social media into PFM engagement will let customers share and solicit financial management experiences with their peer group. Peer comparisons help benchmark one’s savings and spending habits against those of the peer group and increase stickiness. While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into the same trap. Best practices lie in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look to the likes of Simple and Movenbank for inspiration.
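    The "Spendable Balance" idea above is straightforward to make concrete. Purely as a hypothetical sketch - the categories, figures and importance weights below are invented and do not describe any particular bank's PFM engine - a pre-purchase check could subtract committed budgets and upcoming bills from the available balance and warn before the transaction rather than after:

        # Hypothetical sketch of a "spendable balance" pre-purchase check.
        # Figures and category weights are invented for illustration only.
        from dataclasses import dataclass

        @dataclass
        class BudgetLine:
            category: str
            monthly_budget: float
            spent_so_far: float
            importance: float  # 1.0 = essential, lower = more discretionary

        available_balance = 1450.00
        upcoming_bills = 820.00  # rent and utilities already scheduled this month
        budgets = [
            BudgetLine("groceries",  400.0, 310.0, 1.0),
            BudgetLine("eating_out", 150.0, 140.0, 0.4),
            BudgetLine("books",       60.0,  65.0, 0.8),
        ]

        committed = sum(max(b.monthly_budget - b.spent_so_far, 0) for b in budgets)
        spendable = available_balance - upcoming_bills - committed

        def check_purchase(amount: float, category: str) -> str:
            line = next(b for b in budgets if b.category == category)
            over = (line.spent_so_far + amount) - line.monthly_budget
            if amount > spendable:
                return "Stop: this would eat into committed budgets or scheduled bills."
            if over > 0 and line.importance < 0.5:
                return f"Caution: {category} would go {over:.2f} over a low-priority budget."
            return "OK: within your spendable balance."

        # Pre-purchase notification, before walking into the burger joint
        print(f"Spendable balance: {spendable:.2f}")
        print(check_purchase(25.0, "eating_out"))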

    Read the article
