Search Results

Search found 5086 results on 204 pages for 'smtp permission'.


  • Having problems with high CPU usage and apparent memory leak of Exim

    - by Dancrumb
    I'm having problems with my server and am hoping you can help. The culprit appears to be exim. The CPU usage is consistently high and the memory usage trends up and up and up for no apparent reason (this is not a heavily used server). To demonstrate the issue, I ran the following: root@server [/var/log]# service exim restart; for iter in `seq 0 9`; do date; top -n1 | grep exim; sleep 10; done Shutting down exim: [ OK ] Shutting down spamd: [ OK ] Starting exim: [ OK ] Sun Jun 6 18:12:07 CDT 2010 62592 root 25 0 11400 6572 2356 R 51.5 1.3 0:00.92 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim Sun Jun 6 18:12:18 CDT 2010 62592 root 25 0 28768 23m 2356 R 57.4 4.6 0:06.75 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:12:28 CDT 2010 62592 root 25 0 36408 30m 2356 R 55.5 6.0 0:12.59 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:12:39 CDT 2010 62592 root 25 0 41396 35m 2356 R 53.5 7.0 0:18.35 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:12:49 CDT 2010 62592 root 25 0 45868 40m 2356 R 47.5 7.8 0:24.06 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:13:00 CDT 2010 62592 root 25 0 50056 44m 2356 R 55.3 8.6 0:29.84 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:13:10 CDT 2010 62592 root 25 0 53888 47m 2356 R 55.2 9.4 0:35.63 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:13:21 CDT 2010 62592 root 20 0 56920 50m 2356 R 55.3 9.9 0:41.15 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:13:31 CDT 2010 62592 root 25 0 60380 54m 2356 R 53.4 10.6 0:46.98 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim Sun Jun 6 18:13:42 CDT 2010 62592 root 22 0 63400 57m 2356 R 49.5 11.2 0:52.74 exim 62587 mailnull 18 0 7548 1212 792 S 0.0 0.2 0:00.00 exim 62588 root 18 0 7536 2052 1648 S 0.0 0.4 0:00.00 exim After some time, it gets to a rate of picking up an extra MB every 10s. I've checked the exim logs and there are no messages coming in there. exim -bV shows: Exim version 4.69 #1 built 16-Mar-2009 14:44:43 Copyright (c) University of Cambridge 2006 Berkeley DB: Sleepycat Software: Berkeley DB 4.2.52: (February 22, 2005) Support for: crypteq iconv() IPv6 PAM Perl OpenSSL Content_Scanning Old_Demime Experimental_SPF Experimental_SRS Experimental_DomainKeys Lookups: lsearch wildlsearch nwildlsearch iplsearch dbm dbmnz passwd Authenticators: cram_md5 dovecot plaintext spa Routers: accept dnslookup ipliteral manualroute queryprogram redirect Transports: appendfile/maildir autoreply pipe smtp Size of off_t: 8 Configuration file is /etc/exim.conf I'm at something of a loss as to how to proceed. Any recommendations would be well received!
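    A lighter way to watch this than eyeballing top is a small sampling loop; the sketch below is only a diagnostic aid and assumes a GNU ps, that the processes are simply named exim, and that Exim's standard companion tools (exiwhat, exim -bp) are installed.

        # Sample the memory use (RSS, in KB) of every exim process every 10 seconds
        while true; do
            date
            ps -C exim -o pid,user,rss,etime,args --sort=-rss
            sleep 10
        done

        # Ask each running exim process what it is currently doing
        exiwhat

        # Show the queue; a stuck or looping message can keep a delivery process busy
        exim -bp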


  • Cannot SSH to ubuntu server - openssh server owner changed

    - by Kshitiz Shankar
    I am using suPHP with Apache for virtual hosting but somewhere down the line my root ssh access is getting screwed up. I haven't been able to figure out why it is happening but eventually, my root user is not able to ssh to the server. I get this error: *** invalid open call: O_CREAT without mode ***: sshd: root@pts/3 terminated ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f12fe871817] /lib/x86_64-linux-gnu/libc.so.6(+0xeb7e1)[0x7f12fe8527e1] sshd: root@pts/3[0x41a542] sshd: root@pts/3[0x41a9eb] sshd: root@pts/3[0x41aeb8] sshd: root@pts/3[0x409630] sshd: root@pts/3[0x40f9ed] sshd: root@pts/3[0x410dd6] sshd: root@pts/3[0x411994] sshd: root@pts/3[0x411f16] sshd: root@pts/3[0x40b253] sshd: root@pts/3[0x42be24] sshd: root@pts/3[0x40c9cb] sshd: root@pts/3[0x412199] sshd: root@pts/3[0x4061a2] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f12fe78876d] sshd: root@pts/3[0x407635] ======= Memory map: ======== 00400000-00448000 r-xp 00000000 ca:02 4554758 /usr/sbin/sshd 00647000-00648000 r--p 00047000 ca:02 4554758 /usr/sbin/sshd 00648000-00649000 rw-p 00048000 ca:02 4554758 /usr/sbin/sshd 00649000-00750000 rw-p 00000000 00:00 0 01794000-017b5000 rw-p 00000000 00:00 0 [heap] 7f12fd5ad000-7f12fd5c2000 r-xp 00000000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f12fd5c2000-7f12fd7c1000 ---p 00015000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f12fd7c1000-7f12fd7c2000 r--p 00014000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f12fd7c2000-7f12fd7c3000 rw-p 00015000 ca:02 3489844 /lib/x86_64-linux-gnu/libgcc_s.so.1 7f12fd7c3000-7f12fd7db000 r-xp 00000000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so 7f12fd7db000-7f12fd9db000 ---p 00018000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so 7f12fd9db000-7f12fd9dc000 r--p 00018000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so 7f12fd9dc000-7f12fd9dd000 rw-p 00019000 ca:02 3489977 /lib/x86_64-linux-gnu/libresolv-2.15.so 7f12fd9dd000-7f12fd9df000 rw-p 00000000 00:00 0 7f12fd9df000-7f12fd9e6000 r-xp 00000000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so 7f12fd9e6000-7f12fdbe5000 ---p 00007000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so 7f12fdbe5000-7f12fdbe6000 r--p 00006000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so 7f12fdbe6000-7f12fdbe7000 rw-p 00007000 ca:02 3489994 /lib/x86_64-linux-gnu/libnss_dns-2.15.so 7f12fdbe7000-7f12fdd27000 rw-s 00000000 00:04 6167294 /dev/zero (deleted) 7f12fdd27000-7f12fdd33000 r-xp 00000000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7f12fdd33000-7f12fdf32000 ---p 0000c000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7f12fdf32000-7f12fdf33000 r--p 0000b000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7f12fdf33000-7f12fdf34000 rw-p 0000c000 ca:02 3489984 /lib/x86_64-linux-gnu/libnss_files-2.15.so 7f12fdf34000-7f12fdf3e000 r-xp 00000000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so 7f12fdf3e000-7f12fe13e000 ---p 0000a000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so 7f12fe13e000-7f12fe13f000 r--p 0000a000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so 7f12fe13f000-7f12fe140000 rw-p 0000b000 ca:02 3489979 /lib/x86_64-linux-gnu/libnss_nis-2.15.so 7f12fe140000-7f12fe157000 r-xp 00000000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so 7f12fe157000-7f12fe356000 ---p 00017000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so 7f12fe356000-7f12fe357000 r--p 00016000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so 7f12fe357000-7f12fe358000 rw-p 
00017000 ca:02 3489996 /lib/x86_64-linux-gnu/libnsl-2.15.so 7f12fe358000-7f12fe35a000 rw-p 00000000 00:00 0 7f12fe35a000-7f12fe362000 r-xp 00000000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so 7f12fe362000-7f12fe561000 ---p 00008000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so 7f12fe561000-7f12fe562000 r--p 00007000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so 7f12fe562000-7f12fe563000 rw-p 00008000 ca:02 3489985 /lib/x86_64-linux-gnu/libnss_compat-2.15.so 7f12fe563000-7f12fe565000 r-xp 00000000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so 7f12fe565000-7f12fe765000 ---p 00002000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so 7f12fe765000-7f12fe766000 r--p 00002000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so 7f12fe766000-7f12fe767000 rw-p 00003000 ca:02 3489886 /lib/x86_64-linux-gnu/libdl-2.15.so 7f12fe767000-7f12fe91c000 r-xp 00000000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so 7f12fe91c000-7f12feb1b000 ---p 001b5000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so 7f12feb1b000-7f12feb1f000 r--p 001b4000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so 7f12feb1f000-7f12feb21000 rw-p 001b8000 ca:02 3489888 /lib/x86_64-linux-gnu/libc-2.15.so 7f12feb21000-7f12feb26000 rw-p 00000000 00:00 0 7f12feb26000-7f12feb2f000 r-xp 00000000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so 7f12feb2f000-7f12fed2f000 ---p 00009000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so 7f12fed2f000-7f12fed30000 r--p 00009000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so 7f12fed30000-7f12fed31000 rw-p 0000a000 ca:02 3489983 /lib/x86_64-linux-gnu/libcrypt-2.15.so 7f12fed31000-7f12fed5f000 rw-p 00000000 00:00 0 7f12fed5f000-7f12fef10000 r-xp 00000000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f12fef10000-7f12ff110000 ---p 001b1000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f12ff110000-7f12ff12b000 r--p 001b1000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f12ff12b000-7f12ff136000 rw-p 001cc000 ca:02 3489831 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 7f12ff136000-7f12ff13a000 rw-p 00000000 00:00 0 7f12ff13a000-7f12ff150000 r-xp 00000000 ca:02 3490020 /lib/x86_64-linux-gnu/libz.so.1.2.3.4 7f12ff150000-7f12ff34f000 ---p 00016000 ca:02 3490020 /lib/x86_64-linux-gnu/libz.so.1.2.3.4 7f12ff34f000-7f12ff350000 r--p 00015000 ca:02 3490020 /lib/x86_64-linux-gnu/libz.so.1.2.3.4Connection to stageserver.dockphp.com closed. After some debugging, I was able to narrow it down to a few things. For some reason the sshd daemon is running as root:www-data (apache user) instead of root. My ftp connection works but ssh over terminal fails. I have no idea whether it is getting caused due to suPHP or not (because that is the only place where user permission's etc. change). I really need to narrow it down and fix it asap. Thanks a lot!
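    A hedged starting point for narrowing this down (the paths are the stock Debian/Ubuntu locations, and dpkg -V or debsums may or may not be available on this release, so treat them as assumptions):

        # Who owns the sshd binary and its privilege-separation directory?
        ls -l /usr/sbin/sshd
        ls -ld /var/run/sshd /etc/ssh

        # Compare installed files against the package database, if the tools exist
        dpkg -V openssh-server 2>/dev/null || debsums -c openssh-server

        # Restore the expected ownership and permissions (e.g. after a stray
        # recursive chown from a vhost setup script) and restart the daemon
        chown root:root /usr/sbin/sshd
        chmod 755 /usr/sbin/sshd
        service ssh restart

    If the ownership keeps reverting, and auditd happens to be installed, a watch such as auditctl -w /usr/sbin/sshd -p wa can reveal which process is changing it.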


  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners and for anyone who wants to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the book starts at that basic a level, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of focusing only on what will be in the exam, the series focuses on learning the important concepts thoroughly. The books take no shortcuts in explaining any concept and, at times, go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015: The XML data type was first introduced with SQL Server 2005. This data type continues in SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008: query() – used to extract XML fragments from an XML data type; value() – used to extract a single value from an XML document; exist() – used to determine whether a specified node exists (returns 1 if yes and 0 if no); modify() – updates XML data in an XML data type; nodes() – shreds XML data into multiple rows (not covered in this blog post). [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014: Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16: Statement Termination – the statement with the procedure fails but the code keeps on running to the next statement; transactions are not affected. Scope Abortion – the current procedure, function or batch is aborted and the next calling scope keeps running; that is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value. Batch Termination – the entire client call is terminated. XACT_ABORT – (ON = the entire client call is terminated) or (OFF = SQL Server will choose how to handle all errors). [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013: Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause. Cautionary note: because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and only by experienced developers and database administrators, to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012: A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table, while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly: an anchor query (runs once and its results ‘seed’ the recursive query); a recursive query (runs multiple times and is the criteria for the remaining results); a UNION ALL statement to bind the anchor and recursive queries together; and an INNER JOIN statement to bind the recursive query to the results of the CTE. [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011: Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read). [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video


  • How should I ask for help in getting my emails to stop bouncing?

    - by Gregg Williams
    For several months, people have been telling me that emails they sent to me have been bouncing back, marked as undeliverable. The bounce message would contain portions like this:

    Final-Recipient: rfc822;[email protected]
    Action: failed
    Status: 5.7.1
    Diagnostic-Code: smtp;550 5.7.1 <[email protected]>... Recipient declines email from 69.64.159.2, <spamhaus-xbl>, Ref: http://www.spamhaus.org/query/bl?ip=69.64.159.2

    When I clicked the link on the last line, the destination page told me that "this IP address is infected with/emitting spamware/spamtrojan traffic and needs to be fixed." I could temporarily de-list this node by clicking a link on that page, but it would get back on the list and more emails to me would bounce.

    I own a domain, innerpaths.net, and I normally use [email protected] for my email. I have my domain registrar, namecheap.com, forward all email from innerpaths.net to the email account [email protected]. (BTW, I had this same problem at a former registrar. I changed registrars, hoping that would fix the problem. It didn't.)

    Trying to isolate the problem, I asked namecheap.com what I should do. Their answer, though substantial, left me scratching my head: "We have received feedback from our upstream provider which informed us that the mail server that you are trying to email subscribes to a 3rd party blacklist service which they appear to be listed on at the present time and is causing destination mail server to reject the messages. Being blocked with one of these services can happen to anyone for many reasons and is something that is beyond our control. 3rd party blacklist services require companies whose mail servers they have blacklisted, pay fees in order to be removed from their lists. As we cannot pay fees to blacklist services which require them for removal, you should contact your email provider and have them whitelist our mail server IP address: 69.64.157.73."

    My best guess is that I should email my ISP, sonic.net, tell them what is going on, and ask them to whitelist the IP address 69.64.157.73. (If not, please let me know.) But I want to know what is going on and how email works. I understand that there's a device at location 69.64.159.2 that is doing something bad that causes the "destination mail server [sonic.net's, I assume --gw] to reject the messages." I know that email is sent through multiple devices in a way that eventually gets it to its destination. Beyond that, here are my questions:

    1) I thought the Internet "routed around damage." Why does email starting at namecheap.com always (or is it 'sometimes'?) go through 69.64.159.2?
    2) Who is the "upstream provider" that the namecheap.com representative mentions, and what is their role?
    3) How does having sonic.net whitelist namecheap.com's mail server prevent my email from being bounced by 69.64.159.2?

    I've searched the Internet for answers but have found nothing useful. Thanks for whatever answers you can provide.
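    One way to see whether the sending IP is still listed is to query the DNSBL directly from a shell; the sketch below assumes only the IP address and the Spamhaus zones named in the bounce message, plus the dig utility from the standard DNS tools.

        # DNSBLs are queried by reversing the IP's octets and appending the zone.
        # 69.64.159.2 becomes 2.159.64.69; any 127.0.0.x answer means "listed".
        dig +short 2.159.64.69.zen.spamhaus.org
        dig +short 2.159.64.69.xbl.spamhaus.org

        # The TXT record usually carries a link to the listing details
        dig +short TXT 2.159.64.69.xbl.spamhaus.org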


  • CodePlex Daily Summary for Monday, July 01, 2013

    CodePlex Daily Summary for Monday, July 01, 2013Popular ReleasesQuickMon: Version 2.10.3: Mainly just a service release - no major changes. Toolbar buttons on main and config window can now be re-arrange (using ALT key) Added property to disable corrective scriptsDotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.Roadkill - .NET Wiki engine: Roadkill v1.7: New features in 1.7: New file manager: Multiple file uploads Drag and drop uploads Delete folders (admins only) Delete files (admins only) (Experimental) Syntaxhighlighting custom variable (using https://github.com/alexgorbatchev/SyntaxHighlighter) - use [[[code lang=c#|your code here]]] (Experimental) MathJax custom variable - use [[[Mathjax]]] and $$your tex$$ on the page. Improved black bar theme Site speed improvements for Javascript/CSS files - now just two files files ea...Download Sharepoint Solution package: Release 4: version updated for SP2013WinRT XAML Toolkit: WinRT XAML Toolkit - 1.5: WinRT XAML Toolkit based on the Windows 8.0 and 8.1 Preview SDKs. Do not download the source code from here if you are looking for latest updates! You can download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Composition library for visual tree rende...Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configTwitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. 
You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsNew ProjectsAerCloud.net Client - Java, Linux & Windows: This project source code provides a step by step guide for using AerCloud.net Framework as a Service API. 
For more information please visit http://www.aercloudAmiClient – Asterisk Manager Interface (AMI) client based on the Rx Framework: Asterisk Manager Interface (AMI) client based on the Rx Frameworkbaidupan: cdcddddC#??????: C#??????ImageHelper: imagehelperIP switcher: IP switcher is a simple tool for switching settings, and store presets, on networkadapters.MastersProject: A MS project with a goal of creating a fully Code Contracts verified physics engine and a relatively simple game that uses it.Multiplatform card game: Example multipatform project.PhoneTools: A collection of tools designed to help developers create beautiful Windows Phone 8 apps.rodidexter: lllSharePoint 2013 List Item Encryption: This coding exercise project enables you to encrypt/decrypt list item text field in the browser using industry standard algorithms.tvaSoft: simulation, rotor dynamics, Finite Element Analisys, FEM, ODE, torsional vibration, flexural vibrationX3DML Project: X3DML is an xml-based markup language that defines rules for modeling 3D scenes from a tag-based document. It may be usefull in 3D web design and VR.zhuang-tfs: zhuang tfs


  • Cannot read status from the monit daemon, even with allowed group

    - by jefflunt
    I cannot seem to get monit status or other CLI commands to work. I've built monit v5.8 to run on a Raspberry Pi. I'm able to add services to be monitored, and the web interface can be accessed just fine, as I've set it up for public read-only access (it's a test server, not my final production setup, so not a big deal right now). Problem is, when I run monit status while logged in as root I get: # monit status monit: cannot read status from the monit daemon I also have monit started on boot via this /etc/inittab file entry: mo:2345:respawn:/usr/local/bin/monit -Ic /etc/monitrc I've verified that monit is running, and I'm getting email alerts anytime I either kill the monit process manually, or reboot my raspberry pi. So, next I check my monitrc file permissions to see which group is allowed access. # ls -al /etc/monitrc -rw------- 1 root root 2359 Aug 24 14:48 /etc/monitrc Here's my relevant allow section of the control file. set httpd port 80 allow [omitted] readonly allow @root allow localhost allow 0.0.0.0/0.0.0.0 Also tried setting permissions on this file to 640 to allow group read permissions, but no matter what I try I either get the same error as noted above, or when the permissions are set to 640 I get: # monit status monit: The control file '/etc/monitrc' must have permissions no more than -rwx------ (0700); right now permissions are -rw-r----- (0640). What am I missing here? I know that the httpd must be enabled, as that's the interface that the CLI uses to get information (or so I've read), so I've done that. And in terms of monit doing its monitoring job and sending email alerts, that's all working as well. Here's my entire monitrc file - again, this is version v5.8, and it was build with both PAM and SSL support. The process runs under the root user: # Global settings set daemon 300 with start delay 5 set logfile /var/log/monit.log set pidfile /var/run/monit.pid set idfile /var/run/.monit.id set statefile /var/run/.monit.state # Mail alerts ## Set the list of mail servers for alert delivery. Multiple servers may be ## specified using a comma separator. If the first mail server fails, Monit # will use the second mail server in the list and so on. By default Monit uses # port 25 - it is possible to override this with the PORT option. # set mailserver smtp.gmail.com port 587 username [omitted] password [omitted] using tlsv1 ## Send status and events to M/Monit (for more informations about M/Monit ## see http://mmonit.com/). By default Monit registers credentials with ## M/Monit so M/Monit can smoothly communicate back to Monit and you don't ## have to register Monit credentials manually in M/Monit. It is possible to ## disable credential registration using the commented out option below. ## Though, if safety is a concern we recommend instead using https when ## communicating with M/Monit and send credentials encrypted. 
# # set mmonit http://monit:[email protected]:8080/collector # # and register without credentials # Don't register credentials # # ## Monit by default uses the following format for alerts if the the mail-format ## statement is missing:: set mail-format { from: [email protected] subject: $SERVICE $DESCRIPTION message: $EVENT Service: $SERVICE Date: $DATE Action: $ACTION Host: $HOST Description: $DESCRIPTION Monit instance provided by chicagomeshnet.com } # Web status page set httpd port 80 allow [omitted] readonly allow @root allow localhost allow 0.0.0.0/0.0.0.0 ## You can set alert recipients whom will receive alerts if/when a ## service defined in this file has errors. Alerts may be restricted on ## events by using a filter as in the second example below.
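    For reference, a minimal sketch of the usual checks when the CLI cannot reach the daemon; it assumes the same control-file path as the inittab entry above and that the embedded httpd is reachable on the port configured in the 'set httpd' block.

        # monit refuses a control file that is readable by anyone but its owner
        chmod 600 /etc/monitrc

        # Syntax-check the control file, then ask the running daemon to re-read it
        monit -t -c /etc/monitrc
        monit -c /etc/monitrc reload

        # The CLI talks to the embedded httpd, so the 'set httpd' block must allow
        # the connection (e.g. 'allow localhost' or an 'allow user:password' entry)
        monit -c /etc/monitrc status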


  • Network Access: I can't access 192.168.1.101 from 192.168.1.102.

    - by takpar
    Hi, I'm running Ubuntu 10.04 on my PC with IP 192.168.1.101. every thing work fine, e.g. my web server is running and I can see http://localhost/ or http://192.168.1.101 properly. But the problem is that I cannot see my PC from my laptop at 192.168.1.102 e.g. at my laptop http://192.168.1.101 gives Connection timed out in browser. or trying to telnet on any port leads to: telnet: Unable to connect to remote host: Connection timed out laptop is running a fresh install of Ubuntu as well and there is no setup for firewall stuff in both computers. PS: Both computers can ping each other well. The router is a cicso linksys wireless ADSL modem. Currently, I can connect to FTP server on the Windows running on 192.168.1.102 from 192.168.1.101 without problem. Theses are commands ran on my PC, 192.168.1.101: ifconfig: adp@adp-desktop:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:26:18:e1:8e:cf inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe70::226:18ff:fee1:8ecf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1831935 errors:0 dropped:0 overruns:0 frame:0 TX packets:1493786 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1996855925 (1.9 GB) TX bytes:215288238 (215.2 MB) Interrupt:27 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:951742 errors:0 dropped:0 overruns:0 frame:0 TX packets:951742 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:494351095 (494.3 MB) TX bytes:494351095 (494.3 MB) vmnet1 Link encap:Ethernet HWaddr 00:50:46:c0:00:01 inet addr:192.168.91.1 Bcast:192.168.91.255 Mask:255.255.255.0 inet6 addr: fe70::250:56ff:fec0:1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:50 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vmnet8 Link encap:Ethernet HWaddr 00:50:46:c0:00:08 inet addr:192.168.156.1 Bcast:192.168.156.255 Mask:255.255.255.0 inet6 addr: fe70::250:56ff:fec0:8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:51 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) port 80 is set to 0.0.0.0 well: adp@adp-desktop:~$ netstat -ln | grep 'LISTEN ' tcp 0 0 127.0.0.1:52815 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4559 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:7634 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5269 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN tcp 0 0 127.0.1.1:7777 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:33601 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp6 0 0 :::139 :::* LISTEN tcp6 0 0 ::1:631 :::* LISTEN tcp6 0 0 :::445 :::* LISTEN /etc/hosts.deny is empty: adp@adp-desktop:~$ cat /etc/hosts.deny # /etc/hosts.deny: list of hosts that are _not_ allowed to access the system. # See the manual pages hosts_access(5) and hosts_options(5). # # Example: ALL: some.host.name, .some.domain # ALL EXCEPT in.fingerd: other.host.name, .other.domain # # If you're going to protect the portmapper use the name "portmap" for the # daemon name. 
Remember that you can only use the keyword "ALL" and IP # addresses (NOT host or domain names) for the portmapper, as well as for # rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8) # for further information. # # The PARANOID wildcard matches any host whose name does not match its # address. # # You may wish to enable this to ensure any programs that don't # validate looked up hostnames still leave understandable logs. In past # versions of Debian this has been the default. # ALL: PARANOID netstat -l: adp@adp-desktop:~$ netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 localhost:52815 *:* LISTEN tcp 0 0 *:hylafax *:* LISTEN tcp 0 0 *:www *:* LISTEN tcp 0 0 *:4369 *:* LISTEN tcp 0 0 localhost:7634 *:* LISTEN tcp 0 0 *:ftp *:* LISTEN tcp 0 0 *:xmpp-server *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp 0 0 *:5280 *:* LISTEN tcp 0 0 adp-desktop:7777 *:* LISTEN tcp 0 0 *:33601 *:* LISTEN tcp 0 0 *:xmpp-client *:* LISTEN tcp 0 0 localhost:mysql *:* LISTEN tcp6 0 0 [::]:netbios-ssn [::]:* LISTEN tcp6 0 0 localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:microsoft-ds [::]:* LISTEN udp 0 0 *:bootpc *:* udp 0 0 *:mdns *:* udp 0 0 *:47467 *:* udp 0 0 192.168.1.10:netbios-ns *:* udp 0 0 192.168.91.1:netbios-ns *:* udp 0 0 192.168.156.:netbios-ns *:* udp 0 0 *:netbios-ns *:* udp 0 0 192.168.1.1:netbios-dgm *:* udp 0 0 192.168.91.:netbios-dgm *:* udp 0 0 192.168.156:netbios-dgm *:* udp 0 0 *:netbios-dgm *:* raw 0 0 *:icmp *:* 7 netstat -rn: adp@adp-desktop:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.91.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet1 192.168.156.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet8 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 commands on the laptop, 192.168.1.102: ifconfig: root@fakeuser-laptop:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:1c:33:a2:31:15 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:21 eth1 Link encap:Ethernet HWaddr 00:2d:d9:3e:1f:6c inet addr:192.168.1.102 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe70::21d:d9ff:fe3e:1f6c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5681 errors:0 dropped:0 overruns:0 frame:10313 TX packets:6717 errors:6 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4055251 (4.0 MB) TX bytes:779308 (779.3 KB) Interrupt:18 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:206 errors:0 dropped:0 overruns:0 frame:0 TX packets:206 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:15172 (15.1 KB) TX bytes:15172 (15.1 KB) netstat -rn: root@fakeuser-laptop:~# netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth1
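    Since ICMP works but TCP connections time out, the usual suspects are a packet filter on the PC or the access point isolating wireless clients from wired ones; a hedged diagnostic sketch follows, with interface names taken from the ifconfig output above.

        # On the PC (192.168.1.101): is anything filtering inbound TCP?
        sudo iptables -L -n -v

        # Watch whether the laptop's SYN packets arrive at all while retrying
        sudo tcpdump -i eth0 -n host 192.168.1.102 and tcp port 80

        # From the laptop (192.168.1.102): probe the port directly
        nc -zv 192.168.1.101 80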


  • Why does a bash-zenity script have that title on the Unity Panel and that icon on the Unity Launcher?

    - by Sadi
    I have this small bash script which helps use Infinality font rendering options via a more user-friendly Zenity window. But whenever I launch it I have this "Color Picker" title on Unity Panel together with the icon assigned for "Color Picker" utility. I wonder why and how this is happening and how I can change it? #!/bin/bash # A simple script to provide a basic, zenity-based GUI to change Infinality Style. # v.1.2 # infinality_current=`cat /etc/profile.d/infinality-settings.sh | grep "USE_STYLE=" | awk -F'"' '{print $2}'` sudo_password="$( gksudo --print-pass --message 'Provide permission to make system changes: Enter your password to start or press Cancel to quit.' -- : 2>/dev/null )" # Check for null entry or cancellation. if [[ ${?} != 0 || -z ${sudo_password} ]] then # Add a zenity message here if you want. exit 4 fi # Check that the password is valid. if ! sudo -kSp '' [ 1 ] <<<"${sudo_password}" 2>/dev/null then # Add a zenity message here if you want. exit 4 fi # menu(){ im="zenity --width=500 --height=490 --list --radiolist --title=\"Change Infinality Style\" --text=\"Current <i>Infinality Style</i> is\: <b>$infinality_current</b>\n? To <i>change</i> it, select any other option below and press <b>OK</b>\n? To <i>quit without changing</i>, press <b>Cancel</b>\" " im=$im" --column=\" \" --column \"Options\" --column \"Description\" " im=$im"FALSE \"DEFAULT\" \"Use default settings - a compromise that should please most people\" " im=$im"FALSE \"OSX\" \"Simulate OSX rendering\" " im=$im"FALSE \"IPAD\" \"Simulate iPad rendering\" " im=$im"FALSE \"UBUNTU\" \"Simulate Ubuntu rendering\" " im=$im"FALSE \"LINUX\" \"Generic Linux style - no snapping or certain other tweaks\" " im=$im"FALSE \"WINDOWS\" \"Simulate Windows rendering\" " im=$im"FALSE \"WIN7\" \"Simulate Windows 7 rendering with normal glyphs\" " im=$im"FALSE \"WINLIGHT\" \"Simulate Windows 7 rendering with lighter glyphs\" " im=$im"FALSE \"VANILLA\" \"Just subpixel hinting\" " im=$im"FALSE \"CLASSIC\" \"Infinality rendering circa 2010 - No snapping.\" " im=$im"FALSE \"NUDGE\" \"Infinality - Classic with lightly stem snapping and tweaks\" " im=$im"FALSE \"PUSH\" \"Infinality - Classic with medium stem snapping and tweaks\" " im=$im"FALSE \"SHOVE\" \"Infinality - Full stem snapping and tweaks without sharpening\" " im=$im"FALSE \"SHARPENED\" \"Infinality - Full stem snapping, tweaks, and Windows-style sharpening\" " im=$im"FALSE \"INFINALITY\" \"Infinality - Standard\" " im=$im"FALSE \"DISABLED\" \"Act without extra infinality enhancements - just subpixel hinting\" " } # option(){ choice=`echo $im | sh -` # if echo $choice | grep "DEFAULT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DEFAULT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "OSX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"OSX\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "IPAD" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"IPAD\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "UBUNTU" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"UBUNTU\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "LINUX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"LINUX\"/g" 
'/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINDOWS" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WIN7" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINLIGHT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7LIGHT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "VANILLA" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"VANILLA\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "CLASSIC" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"CLASSIC\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "NUDGE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"NUDGE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "PUSH" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"PUSH\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHOVE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHOVE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHARPENED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHARPENED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "INFINALITY" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"INFINALITY\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "DISABLED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DISABLED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # } # menu option # if test ${#choice} -gt 0; then echo "Operation completed" fi # exit 0
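    Unity derives the panel title and launcher icon from the window's title and WM_CLASS and from whatever .desktop file it matches against them, and every zenity dialog shares zenity's WM_CLASS, so the window can be matched to an unrelated launcher such as Color Picker. Below is a hedged sketch of one workaround; the icon name, launcher filename and script path are illustrative assumptions, not part of the original script.

        # Give each dialog an explicit title and icon (both are standard zenity options)
        zenity --question --title="Change Infinality Style" \
               --window-icon=/usr/share/icons/hicolor/48x48/apps/preferences-desktop-font.png \
               --text="Proceed?"

        # And give Unity a launcher to match the window against. Note that
        # StartupWMClass=Zenity will claim *all* zenity windows, which is the
        # flip side of the same matching behaviour.
        printf '%s\n' \
            '[Desktop Entry]' \
            'Type=Application' \
            'Name=Infinality Style' \
            'Icon=preferences-desktop-font' \
            'Exec=/home/sadi/bin/infinality-style.sh' \
            'StartupWMClass=Zenity' \
            > ~/.local/share/applications/infinality-style.desktop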


  • Disable error_log. Error_log flooding

    - by user36646
    Hello, i got an webserver running and old version of gambio (xt:commerce fork). The error_log in the dir over the public_html is flooding with errors. About 30mb in 15min. How can I disable this log? I can't fix all the errors. Here are a few examples of the errors: [warn] mod_fcgid: stderr: PHP Notice: Undefined variable: key in /usr/www/users/foo//includes/classes/class.inputfilter.php on line 98 [warn] mod_fcgid: stderr: PHP Notice: Undefined index: in /usr/www/users/foo/templ [warn] mod_fcgid: stderr: in /usr/www/users/foo/templates/gambio/source/inc/xtc_show_category_sectionc.inc.php on line 47 They are all errors of: "mod_fcgid: stderr". I tried to grep "error_log" and "error_report" in the public html dir, but i did not find anything. Here is a part from the phpinfo(): PHP Version 4.4.9 System Linux foobar.com 2.6.26-2-686-bigmem #1 SMP Sat Dec 26 09:26:36 UTC 2009 i686 Build Date Feb 11 2010 13:00:33 Configure Command './configure' '--prefix=/usr/local/php4' '--with-config-file-path=/etc/php4/cgi' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-tiff-dir' '--with-ttf' '--enable-force-cgi-redirect' '--enable-safe-mode' '--with-zlib' '--enable-ftp' '--enable-url-includes' '--enable-gd-native-ttf' '--enable-trans-sid' '--enable-dbase' '--with-db4' '--with-ldap' '--enable-bcmath' '--enable-calendar' '--enable-memory-limit' '--with-mcal=/usr' '--with-bz2' '--with-mod-dav' '--enable-sockets' '--with-kerberos' '--with-imap-ssl' '--enable-gd-imgstrttf' '--with-freetype-dir' '--with-curl' '--with-mysql' '--with-mhash' '--with-gdbm' '--with-pgsql' '--with-gettext' '--with-xml' '--with-mcrypt' '--with-openssl' '--with-dom' '--without-pear' '--enable-exif' '--with-zip' '--enable-wddx' '--disable-cli' '--enable-fastcgi' '--with-imap' '--enable-xslt' '--with-xslt-sablot=/usr/local/lib' '--enable-mbstring' '--with-dom-xslt' '--with-dom-exslt' Server API CGI/FastCGI Virtual Directory Support disabled Configuration File (php.ini) Path /home/httpd/php-ini/foo/php.ini PHP API 20020918 PHP Extension 20020429 Zend Extension 20050606 Debug Build no Zend Memory Manager enabled Thread Safety disabled Registered PHP Streams php, http, ftp, https, ftps, compress.bzip2, compress.zlib **Configuration PHP Core** Directive Local Value Master Value allow_call_time_pass_reference On On allow_url_fopen Off Off always_populate_raw_post_data Off Off arg_separator.input & & arg_separator.output & & asp_tags Off Off auto_append_file no value no value auto_prepend_file no value no value browscap no value no value default_charset no value no value default_mimetype text/html text/html define_syslog_variables Off Off disable_classes no value no value disable_functions no value no value display_errors On On display_startup_errors Off Off doc_root no value no value docref_ext no value no value docref_root no value no value enable_dl On On error_append_string no value no value error_log no value no value error_prepend_string no value no value error_reporting 2039 2039 expose_php On On extension_dir /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429 /usr/local/php4/lib/php/extensions/no-debug-non-zts-20020429 file_uploads On On gpc_order GPC GPC highlight.bg #FFFFFF #FFFFFF highlight.comment #FF8000 #FF8000 highlight.default #0000BB #0000BB highlight.html #000000 #000000 highlight.keyword #007700 #007700 highlight.string #DD0000 #DD0000 html_errors On On ignore_repeated_errors Off Off ignore_repeated_source Off Off ignore_user_abort Off Off implicit_flush Off Off include_path .:/usr/local/lib/php/ 
.:/usr/local/lib/php/ log_errors Off Off log_errors_max_len 1024 1024 magic_quotes_gpc On On magic_quotes_runtime Off Off magic_quotes_sybase Off Off max_execution_time 120 120 max_input_nesting_level 500 500 max_input_time -1 -1 memory_limit 128000000 128000000 open_basedir /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_ /usr/www/users/foo:/usr/home/foo:/tmp:/usr/local/lib/php:/usr/local/rmagic:/usr/www/users/he/_system_ output_buffering no value no value output_handler no value no value post_max_size 128000000 128000000 precision 14 14 register_argc_argv On On register_globals Off Off report_memleaks On On safe_mode Off Off safe_mode_exec_dir no value no value safe_mode_gid Off Off safe_mode_include_dir no value no value sendmail_from no value no value sendmail_path /usr/sbin/sendmail -t /usr/sbin/sendmail -t serialize_precision 100 100 short_open_tag On On SMTP localhost localhost smtp_port 25 25 sql.safe_mode Off Off track_errors Off Off unserialize_callback_func no value no value upload_max_filesize 128000000 128000000 upload_tmp_dir /usr/foo/foo/.tmp /usr/foo/.tmp user_dir no value no value variables_order EGPCS EGPCS xmlrpc_error_number 0 0 xmlrpc_errors Off Off y2k_compliance Off Off
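    Since fixing every notice is not an option, the practical route is to stop PHP from reporting E_NOTICE at all. The sketch below is only one way to do that; the php.ini path is the one phpinfo() reports above, and whether you may edit it (or reload Apache) on this shared host is an assumption. If the shop software calls error_reporting() itself, the override has to happen inside the application instead.

        # php.ini path taken from the phpinfo() output above
        PHP_INI=/home/httpd/php-ini/foo/php.ini

        # Later directives win in php.ini, so appending overrides earlier values:
        # report everything except notices and do not log errors at all
        printf '\nerror_reporting = E_ALL & ~E_NOTICE\nlog_errors = Off\n' >> "$PHP_INI"

        # mod_fcgid only picks this up once its PHP processes are recycled
        apache2ctl graceful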


  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspxSetting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here’s a step-by-step which gets you from 0 to fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up). First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.) While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We’ll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it’s the same from step 3 onwards): 1. Setup Mongo Log into the first node, download mongo and unzip it to C:. Rename the folder to remove the version – so you have c:\MongoDB\bin etc. – and create a new folder for the logs, c:\MongoDB\logs. 2. Setup your data disk When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a defsault 120Gb disk and host your data on C:, then Mongo will allocate 6Gb for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10Gb, and then creating a new logical disk for your data from that spare 10Gb, which will be allocated as E:. Create a new folder, e:\data. 3. Start Mongo When that’s done, start a command line, point to the mongo binaries folder, install Mongo as a Windows Service, running in replica set mode, and start the service: cd c:\mongodb\bin mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet –install net start mongodb 4. Open the ports Mongo uses port 27017 by default, so you need to allow access in the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab add a new rule, again to allow port 27017. 5. Initialise the replica set Start up your local mongo shell, connecting to your Azure VM, and initiate the replica set: c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net rs.initiate() This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you’re using the default C: drive with 120Gb, it may take so long that rs.initiate() never responds. If you’re sat waiting more than 20 minutes, start another instance of the mongo shell pointing to the same machine to check on it). Run rs.conf() and you should see one node configured. 6. Fix the host name for the primary – *don’t miss this one* For the first node in the replica set, Mongo on Windows doesn’t populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn’t accessible to the outside world. 
The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this: cfg = rs.conf() cfg.members[0].host = ‘sc-xyz-db1.cloudapp.net:27017’ rs.reconfig(cfg) When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there’s no replication yet. 7. Add more nodes For the next two VMs, follow steps 1 through to 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you’re using: rs.add(‘sc-xyz-db2.cloudapp.net:27017’) Run rs.status() and you’ll see your new node in STARTUP2 state, which means its initializing and replicating from the PRIMARY. Repeat for your third node: rs.add(‘sc-xyz-db3.cloudapp.net:27017’) When all nodes are finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service. Note – the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.
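    The same sequence can also be driven non-interactively from any machine with the mongo shell installed; here is a condensed sketch using the example host names from the post (substitute your own), with each step passed via --eval.

        # Initialise the set on the first node, then fix its host entry to the
        # public DNS name so the other members can reach it
        mongo sc-xyz-db1.cloudapp.net --eval "rs.initiate()"
        mongo sc-xyz-db1.cloudapp.net --eval "
            cfg = rs.conf();
            cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017';
            rs.reconfig(cfg);"

        # Add the remaining members and watch them move from STARTUP2 to SECONDARY
        mongo sc-xyz-db1.cloudapp.net --eval "rs.add('sc-xyz-db2.cloudapp.net:27017')"
        mongo sc-xyz-db1.cloudapp.net --eval "rs.add('sc-xyz-db3.cloudapp.net:27017')"
        mongo sc-xyz-db1.cloudapp.net --eval "printjson(rs.status())"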


  • CodePlex Daily Summary for Tuesday, July 02, 2013

    CodePlex Daily Summary for Tuesday, July 02, 2013Popular ReleasesMastersign.Expressions: Mastersign.Expressions v0.4.2: added support for if(<cond>, <true-part>, <false-part>) fixed multithreading issue with rand() improved demo applicationNB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.6 Rel0: v2.3.6 Is now DNN6 and DNN7 compatible Important : During update this install with overwrite the menu.xml setting, if you have changed this then make a backup before you upgrade and reapply your changes after the upgrade. Please view the following documentation if you are installing and configuring this module for the first time System Requirements Skill requirements Downloads and documents Step by step guide to a working store Please ask all questions in the Discussions tab. Document.Editor: 2013.26: What's new for Document.Editor 2013.26: New Insert Chart Improved User Interface Minor Bug Fix's, improvements and speed upsWsus Package Publisher: Release V1.2.1307.01: Fix an issue in the UI, approvals are not shown correctly in the 'Report' tabDirectX Tool Kit: July 2013: July 1, 2013 VS 2013 Preview projects added and updates for DirectXMath 3.05 vectorcall Added use of sRGB WIC metadata for JPEG, PNG, and TIFF SaveToWIC functions updated with new optional setCustomProps parameter and error check with optional targetFormatCore Server 2012 Powershell Script Hyper-v Manager: new_root.zip: Verison 1.0JSON Toolkit: JSON Toolkit 4.1.736: Improved strinfigy performance New serializing feature New anonymous type support in constructorsDotNetNuke® IFrame: IFrame 04.05.00: New DNN6/7 Manifest file and Azure Compatibility.VidCoder: 1.5.2 Beta: Fixed crash on presets with an invalid bitrate.Gardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configTwitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. 
You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. 
The server pack is ready for both Windows and Linux, but you might need to compi...New ProjectsALM Rangers DevOps Tooling and Guidance: Practical tooling and guidance that will enable teams to realize a faster deployment based on continuous feedback.Core Server 2012 Powershell Script Hyper-v Manager: Free core Server 2012 powershell scripts and batch files that replace the non-existent hyper-v manager, vmconnect and mstsc.Enhanced Deployment Service (EDS): EDS is a web service based utility designed to extend the deployment capabilities of administrators with the Microsoft Deployment Toolkit.ExtendedDialogBox: Libreria DialogBoxJazdy: This project is here only because we wanted to take advantage of a public git server.Mon Examen: This web interface is meant to make examinationsneet: summaryOrchard Multi-Choice Voting: A multiple choice voting Orchard module.Particle Swarm Optimization Solving Quadratic Assignment Problem: This project is submitted for the solving of QAP using PSO algorithms with addition of some modification Porjects: 23123123PPL Power Pack: PPL Power PackProperty Builder: Visual Studio tool for speeding up process of coding class properties getters and setters.RedRuler for Redline: I tried some on-screen rulers, none of them help me measure the UI element quickly based on the Redline. So I decided to created this handy RedRuler tool. Royale Living: Mahindra Royale Community PortalSearch and booking Hotel or Tours: Ð? án nghiên c?u c?a sinh viên tdt theo mô hình mvc 4SystemBuilder.Show: This tool is a helper after you create your project in visual studio to create the respective objects and interface. TalentDesk: new ptojectTcmplex: The Training Center teaches many different kind of course such as English, French, Computer hardware and computer softwareTFS Reporting Guide: Provides guidance and samples to enable TFS users to generate reports based on WIT data.Umbraco AdaptiveImages: Adaptive Images Package for UmbracoVirtualNet - A ILcode interpreter/emulator written in C++/Assembly: VirtualNet is a interpreter/emulator for running .net code in native without having to install the .Net FrameWorkVisual Blocks: Visual Blocks ????IDE ????? ??????? ????? ????/?? Visual Studio and Cloud Based Mobile Device Testing: Practical guidance enabling field to remove blockers to adoption and to use and extend the Perfecto Mobile Cloud Device testing within the context of VS.Windows 8 Time Picker for Windows Phone: A Windows Phone implementation of the Time Picker control found in the Windows 8.1 Alarms app.???? - SmallBasic?: ?????????

    Read the article

  • Adding a DLL to the GAC in Windows 7

    - by Jim Giercyk
    I recently created a DLL and I wanted to reference it from a project I was developing in Visual Studio.  In previous versions of Windows, doing so was simply a matter of dropping the DLL file in the C:\Windows\assembly folder.  That would add the DLL to the Global Assembly Cache (GAC) and make it accessible in Visual Studio.  However, as is often the case, Windows 7 is different.  Even if you have Administrator privileges on your machine, you still do not have permission to drop a file in the assembly folder.  Undaunted, I thought about using the old DOS command line utility gacutil.exe.  Microsoft developed the tool as part of the .NET Framework, and it is available in the Windows SDK Framework Tools.  If you have never used gacutil.exe before, you can find out everything you ever wanted to know but were afraid to ask here: http://msdn.microsoft.com/en-us/library/ex0ss12c(v=vs.80).aspx .  Unfortunately, if you do not have the Windows SDK loaded on your development machine, you will need to install it to use gacutil, but it is relatively quick and painless, and the framework tools are very useful.  Look here for your latest SDK: http://www.microsoft.com/download/en/search.aspx?q=Windows%20SDK .   After installing the SDK, I tried installing my DLL to the GAC by running gacutil from a DOS command line, only to be denied access.  That’s odd.  Microsoft is shipping a tool that cannot be executed even with Administrator rights?  Let me stop here and say that I am by no means a Windows security expert, so I actually did contact my system administrators, and they were not sure how to fix the problem….there must be a super administrator access level, but it isn’t available to your average developer in my company.  The solution outlined here is working within the boundaries of a normal Windows Administrator. So, now the hacker in me bubbles to the surface.  What if I were to create a simple BAT file containing the gacutil command?  It’s so crazy it just might work!  Ugh!  I was starting to think this would never work, but then I realized that simply executing a batch program did not change my level of access.  Typically in Windows 7, you would select the “Run As Administrator” option to temporarily act as an administrator for the purpose of executing a process.  However, that option is not available for BAT files run from the command line.  SOLUTION: Create a desktop shortcut to execute the BAT file, which in turn will execute the command-line utility…..are you still with me?  I created a shortcut and pointed it to my batch file.  Theoretically, all I need to do now is right-click on the shortcut and select “Run As Administrator” and we’re good, right?  Well, kinda.  If you notice the syntax of my BAT file, the name of the DLL is passed in as a parameter.  Therefore, I either have to hard-code the file name in the BAT program (YUCK!!), or I can leave the parameter and drag the DLL file to the shortcut and drop it.  Sweet, drag-and-drop works for me…..but if I use the drag-and-drop method, there is no way for me to right-click and select “Run As Administrator”.  That is not a problem…..I simply have to adjust the properties of the shortcut I created and I am in business.  I right-clicked on the shortcut and selected “Properties”.  Under the “Shortcut” tab there is an “Advanced” button…..I clicked it. All I needed to do was check the “Run As Administrator” box.  In summary, what I have done is create a BAT file to execute a command line utility, gacutil.exe.  
Then, rather than executing the BAT file from the command line, I created a desktop shortcut to run it and set the shortcut properties to “Run As Administrator”.  This will effectively mean I am executing the command line utility with Administrator privileges.  Pretty sneaky. Now, when I drag the DLL file over to the shortcut, it starts the BAT file and adds the DLL to the assembly cache.  I created another BAT file to remove a DLL from the GAC in case the need should arise.  The code for that is very similar, just using gacutil’s uninstall switch; a sketch of both scripts is shown below.  Give it a try.  I can’t imagine why updating the GAC has been made into such a chore in Windows 7.  Hopefully there is a service pack in the works that will give developers the functionality they had in Windows XP, but in the meantime, this workaround is extremely useful.
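The original post showed the two BAT files as screenshots rather than text, so here is a minimal sketch of what they might look like.  The gacutil.exe path is an assumption that depends on which Windows SDK version is installed, and the script names are hypothetical; adjust both to match your machine.

    @echo off
    rem add_to_gac.bat - hypothetical sketch: install the DLL passed (or dragged) in as the first argument
    rem The gacutil.exe path below is an assumption; point it at your installed Windows SDK bin folder.
    "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\gacutil.exe" /i "%~1"
    pause

    @echo off
    rem remove_from_gac.bat - hypothetical sketch: uninstall an assembly from the GAC
    rem Usage: remove_from_gac.bat MyLibrary   (gacutil /u expects the assembly name, not the path to the .dll file)
    "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\gacutil.exe" /u "%~1"
    pause

The pause at the end simply keeps the console window open long enough to read gacutil’s output before the shortcut closes it.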

    Read the article

  • what's wrong with my Ubuntu 11.10 bind9 configuration?

    - by John Bowlinger
    I've followed several tutorials on installing your own nameservers and I'm pretty much at my wit's end, because I cannot get them to resolve. Note, the actual domain and ip address has been changed for privacy to example.com and 192.168.0.1. My named.conf.local file: zone "example.com" { type master; file "/var/cache/bind/example.com.db"; }; zone "0.168.192.in_addr.arpa" { type master; file "/var/cache/bind/192.168.0.db"; }; My named.conf.options file: options { forwarders { 192.168.0.1; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; My resolv.conf file: search example.com. nameserver 192.168.0.1 My Forward DNS file: ORIGIN example.com. $TTL 86400 @ IN SOA ns1.example.com. root.example.com. ( 2012083101 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 3600 ) ; Negative Cache TTL example.com. NS ns1.example.com. example.com. NS ns2.example.com. example.com. MX 10 mail.example.com. @ IN A 192.168.0.1 ns1.example.com IN A 192.168.0.1 ns2.example.com IN A 192.168.0.2 mail IN A 192.168.0.1 server1 IN A 192.168.0.1 gateway IN CNAME ns1.example.com. headoffice IN CNAME server1.example.com. smtp IN CNAME mail.example.com. pop IN CNAME mail.example.com. imap IN CNAME mail.example.com. www IN CNAME server1.example.com. sql IN CNAME server1.example.com. And my reverse DNS: $ORIGIN 0.168.192.in-addr.arpa. $TTL 86400 @ IN SOA ns1.example.com. root.example.com. ( 2009013101 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 3600 ) ; Negative Cache TTL 1 PTR mail.example.com. 1 PTR server1.example.com. 2 PTR ns1.example.com. Yet, when I restart bind9 and do: host ns1.example.com localhost I get: Using domain server: Name: localhost Address: 127.0.0.1#53 Aliases: Host ns1.example.com.example.com not found: 2(SERVFAIL) Similarly, for: host 192.168.0.1 localhost I get: ;; connection timed out; no servers could be reached Anybody know what's going on? Btw, my domain name "www.example.com" that I've used in this question is being forwarded to my ISP's nameservers. Would that affect my bind9 configuration? I want to learn how to do set up nameservers on my own for learning, so that is why I'm going through all this trouble.
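For anyone debugging a similar SERVFAIL, it usually pays to have bind validate its own configuration and zone files before querying it, and then to query the daemon directly while watching the logs.  This is a general sketch rather than a diagnosis of this exact setup; note that named-checkzone expects the conventional hyphenated reverse zone name (0.168.192.in-addr.arpa), whereas the named.conf.local above declares it with underscores.

    # check named.conf and the files it includes for syntax errors
    named-checkconf

    # check each zone file against the zone it is meant to serve
    named-checkzone example.com /var/cache/bind/example.com.db
    named-checkzone 0.168.192.in-addr.arpa /var/cache/bind/192.168.0.db

    # query the local server directly and watch /var/log/syslog for the reason behind any SERVFAIL
    dig @127.0.0.1 ns1.example.com A
    dig @127.0.0.1 -x 192.168.0.1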

    Read the article

  • Highlights from the Oracle Customer Experience Summit @ OpenWorld

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development. The Oracle Customer Experience Summit was the first-ever event covering the full breadth of Oracle's CX portfolio -- Marketing, Sales, Commerce, and Service. The purpose of the Summit was to articulate the customer experience imperative and to showcase the suite of Oracle products that can help our customers create the best possible customer experience. This topic has always been a very important one, but now that there are so many alternative companies to do business with and because people have such public ways to voice their displeasure, it's necessary for vendors to have multiple listening posts in place to gauge consumer sentiment. They need to know what is going on in real time and be able to react quickly to turn negative situations into positive ones. Those can then be shared in a social manner to enhance the brand and turn the customer into a repeat customer. The Summit was focused on Oracle's portfolio of products and entirely dedicated to customers who are committed to building great customer experiences within their businesses. Rather than DBAs, the attendees were business people looking to collaborate with other like-minded experts and find out how Oracle can help in terms of technology, best practices, and expertise. The event was at the Westin St. Francis Hotel in San Francisco as part of Oracle OpenWorld. We had eight hundred people attend, which was great for the first year. Next year, there's no doubt in my mind, we can raise that number to 5,000. Alignment and Logic Oracle's Customer Experience portfolio is made up of a combination of acquired and organic products owned by many people who are new to Oracle. We include homegrown Fusion CRM, as well as RightNow, Inquira, OPA, Vitrue, ATG, Endeca, and many others. The attendees knew of the acquisitions, so naturally they wanted to see how the products all fit together and hear the logic behind the portfolio. To tell them about our alignment, we needed to be aligned. To accomplish that, a cross-functional team at Oracle agreed on the messaging so that every single Oracle presenter could cover the big picture before going deep into a product or topic. Talking about the full suite of products in one session produced overflow value for other products. And even though this internal coordination was a huge effort, everyone saw the value for our customers and for our long-term cooperation and success. Keynotes, Workshops, and Tents of Innovation We scored by having Seth Godin as our keynote speaker - always provocative and popular. The opening keynote was a session orchestrated by Mark Hurd, Anthony Lye, and me. Mark set the stage by giving real-world examples of bad customer experiences, Anthony clearly articulated the business imperative for addressing these experiences, and I brought it all to life by taking the audience around the Customer Lifecycle and showing demos and videos, with partners included at each of the stops around the lifecycle. Brian Curran, a VP for RightNow Product Strategy, presented a session that was in high demand called The Economics of Customer Experience. People loved hearing how to build a business case and justify the cost of building a better customer experience. John Kembel, another VP for RightNow Product Strategy, held a workshop that customers raved about. 
It was based on the journey mapping methodology he created, which is a way to talk to customers about where they want to make improvements to their customers' experiences. He divided the audience into groups led by facilitators. Each person had the opportunity to engage with experts and peers and construct some real takeaways. (Photo caption: From left to right: Brian Curran, John Kembel, Seth Godin, and George Kembel.) The conference hotel was across from Union Square so we used that space to set up Innovation Tents. During the day we served lunch in the tents and partners showed their different innovative ideas. It was very interesting to see all the technologies and advancements. It also gave people a place to mix and mingle and to think about the fringe of where we could all take these ideas. Product Portfolio Plus Thought Leadership Of course there is always room for improvement, but the feedback on the format of the conference was positive. Ninety percent of the sessions had either a partner or a customer teamed with an Oracle presenter. The presentations weren't dry, one-way information dumps, but more interactive. I just followed up with a CEO who attended the conference with his Head of Marketing. He told me that they are using John Kembel's journey mapping methodology across the organization to pull people together. This sort of thought leadership in these highly competitive areas gives Oracle permission to engage around the technology. We have to differentiate ourselves and it's harder to do on the product side because everyone looks the same on paper. But on thought leadership - we can, and did, take some really big steps. David Vap, Group Vice President, Oracle Applications Product Development

    Read the article

  • postfix: Temporary lookup failure for FQDN

    - by Thufir
    I'm using the FQDN of dur.bounceme.net which I want to resolve(?) to localhost. That is, I want mail to [email protected] to get delivered to user@localhost. I've tried following the Ubuntu guide on this and seem to be going in circles a bit. root@dur:~# root@dur:~# postfix stop postfix/postfix-script: stopping the Postfix mail system root@dur:~# postfix start postfix/postfix-script: starting the Postfix mail system root@dur:~# telnet dur.bounceme.net 25 Trying 127.0.1.1... telnet: Unable to connect to remote host: Connection refused root@dur:~# root@dur:~# telnet localhost 25 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 dur.bounceme.net ESMTP Postfix (Ubuntu) ehlo dur 250-dur.bounceme.net 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from:[email protected] 250 2.1.0 Ok rcpt to:[email protected] 451 4.3.0 <[email protected]>: Temporary lookup failure rcpt to:thufir@localhost 451 4.3.0 <thufir@localhost>: Temporary lookup failure quit 221 2.0.0 Bye Connection closed by foreign host. root@dur:~# root@dur:~# grep telnet /var/log/mail.log Aug 28 00:24:45 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur> Aug 28 00:24:58 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur> Aug 28 00:54:55 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur> Aug 28 00:55:08 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur> root@dur:~# root@dur:~# postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 
smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~#
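As background for anyone hitting the same error: "Temporary lookup failure" generally means Postfix could not open or query one of the lookup tables it consults for the recipient, rather than that the address itself is unknown.  A rough way to narrow it down - a sketch only, not a diagnosis of this particular box - is to run the same lookups by hand against the maps named in postconf -n above and rebuild any whose .db files are missing or stale:

    # query each alias map directly; a missing or unreadable .db file is a common culprit
    postalias -q thufir hash:/etc/aliases
    postalias -q thufir hash:/var/lib/mailman/data/aliases

    # query the transport map the same way
    postmap -q lists.dur.bounceme.net hash:/etc/postfix/transport

    # rebuild the databases if the source files have changed or the .db files are absent
    newaliases
    postmap /etc/postfix/transport

    # then retry the RCPT TO and watch the log for the underlying error
    tail -f /var/log/mail.log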

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies.  And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services).  In short, it has been a fortnight of Microsoft cloud computing. On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles.  But I’ve also looked at Azure Storage’s REST API (including coding to it directly), I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel.  Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials.  And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and LiveMeeting. I have to say I like a lot of what I’ve been seeing.  Azure’s not perfect, and BPOS certainly isn’t either.  But there’s good stuff in all these products, and there’s a lot of value. Azure Goes Deep Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now.  The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes.  It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.  Whether with Azure Storage or SQL Azure, Azure does data.  And OData is everywhere.  Azure Table Storage supports an OData Interface.  So does SQL Azure and so does DataMarket (the former project “Dallas”).  That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner.  And for more .NET-centric implementations, Azure AppFabric caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more. Snapping in Place Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand.  I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now.  The products belie this and so do my observations of the product teams’ motivation and high morale.  It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so.  With Azure, both requirements are in place.   BPOS: Bad Acronym, Easy Setup BPOS is about products you already know; Exchange, SharePoint, Live Meeting and Office Communications Server.  As such, it’s hard not to be underwhelmed by BPOS.  Until you realize how easy it makes it to get all that stuff set up.  
I would say that getting from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my SmartPhone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.   What I want from my Azure Christmas Next Year Not everything about Microsoft’s cloud is good.  I close this post with a list of things I’d like to see addressed: BPOS offerings are still based on the 2007 Wave of Microsoft server technologies.  We need to get to 2010, and fast.  Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones.  Office 365 can’t come fast enough. Azure’s Internet tooling and domain naming is scattered and confusing.  Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure storage work off windows.net.  The Azure portal and Project Houston are at azure.com.  Then there’s appfabriclabs.com and sqlazurelabs.com.  There is a new Silverlight portal that replaces most, but not all of the HTML ones.  And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling. Microsoft is the king of tooling.  They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work.  I’d like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup.  Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast. Beta programs, if they’re open, should onboard me quickly.  I know the process is difficult and everyone’s going as fast as they can.  But I don’t know why it’s so difficult or why it takes so long.  Getting developers up to speed on new features quickly helps popularize the platform.  Make this a priority. Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch.  Support .NET 4 now.  Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier.  Have LightSwitch work with SQL Azure and not require SQL Express.  LightSwitch has some promising Azure integration now.  But we need more.  WebMatrix has none and that’s just silly, now that the Extra Small Instance is being introduced. The Windows Azure Platform Training Kit is great.  But I want Microsoft to make it even better and I want them to evangelize it much more aggressively.  There’s a lot of good material on Azure development out there, but it’s scattered in the same way that the platform is.   The Training Kit ties a lot of disparate stuff together nicely.  Make it known. Should Old Acquaintance Be Forgot All in all, diving deep into Azure was a good way to end the year.  Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • Oracle RightNow CX for Good Customer Experiences

    - by Andreea Vaduva
    Oracle RightNow CX is all about the customer experience, it’s about understanding what drives a good interaction and it’s about delivering a solution which works for our customers and by extension, their customers. One of the early guiding principles of Oracle RightNow was an 8-point strategy to providing good customer experiences. Establish a knowledge foundation Empowering the customer Empower employees Offer multi-channel choice Listen to the customer Design seamless experiences Engage proactively Measure and improve continuously The application suite provides all of the tools necessary to deliver a rewarding, repeatable and measurable relationship between business and customer. The Knowledge Authoring tool provides gap analysis, WYSIWIG editing (and includes HTML rich content for non-developers), multi-level categorisation, permission based publishing and Web self-service publishing. Oracle RightNow Customer Portal, is a complete web application framework that enables businesses to control their own end-user page branding experience, which in turn will allow customers to self-serve. The Contact Centre Experience Designer builds a combination of workspaces, agent scripting and guided assistances into a Desktop Workflow. These present an agent with the tools they need, at the time they need them, providing even the newest and least experienced advisors with consistently accurate and efficient information, whilst guiding them through the complexities of internal business processes. Oracle RightNow provides access points for customers to feedback about specific knowledge articles or about the support site in general. The system will generate ‘incidents’ based on the scoring of the comments submitted. This makes it easy to view and respond to customer feedback. It is vital, more now than ever, not to under-estimate the power of the social web – Facebook, Twitter, YouTube – they have the ability to cause untold amounts of damage to businesses with a single post – witness musician Dave Carroll and his protest song on YouTube, posted in response to poor customer services from an American airline. The first day saw 150,000 views and is currently at 12,011,375. The Times reported that within 4 days of the post, the airline’s stock price fell by 10 percent, which represented a cost to shareholders of $180 million dollars. It is a universally acknowledged fact, that when customers are unhappy, they will not come back, and, generally speaking, it only takes one bad experience to lose a customer. The idea that customer loyalty can be regained by using social media channels was the subject of a 2011 Survey commissioned by RightNow and conducted by Harris Interactive. The survey discovered that 68% of customers who posted a negative review about a holiday on a social networking site received a response from the business. It further found that 33% subsequently posted a positive review and 34% removed the original negative review. Cloud Monitor provides the perfect mechanism for seeing what is being said about a business on public Facebook pages, Twitter or YouTube posts; it allows agents to respond proactively – either by creating an Oracle RightNow incident or by using the same channel as the original post. This leaves step 8 – Measuring and Improving: How does a business know whether it’s doing the right thing? How does it know if its customers are happy? How does it know if its staff are being productive? How does it know if its staff are being effective? 
Cue Oracle RightNow Analytics – fully integrated across the entire platform – Service, Marketing and Sales – there are in excess of 800 standard reports. If this were not enough, a large proportion of the database has been made available via the administration console, allowing users without any prior database experience to write their own reports, format them and schedule them for e-mail delivery to a distribution list. It handles the complexities of table joins, and allows for the manipulation of data with ease. Oracle RightNow believes strongly in the customer owning their solution, and to provide the best foundation for success, Oracle University can give you the RightNow knowledge and skills you need. This is a selection of the courses offered: RightNow Customer Service Administration Rel 12.02 (3 days) Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand) This course familiarises users with the tasks and concepts needed to configure and maintain their system. RightNow Customer Portal Designer and Contact Center Experience Designer Administration Rel 12.02 (2 days) Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand) This course introduces basic CP structure and how to make changes to the look, feel and behaviour of their self-service pages RightNow Analytics Rel 12.02 (2 days) Available as In Class, Live Virtual Class and Training On Demand (Release 11.11 is available as In Class and Live Virtual Class) This course equips users with the skills necessary to understand data supplied by standard reports and to create custom reports RightNow Integration and Customization For Developers Rel 12.02 (5-days) Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand) This course is for experienced web developers and offers an introduction to Add-In development using the Desktop Add-In Framework and introduces the core knowledge that developers need to begin integrating Oracle RightNow CX with other systems A full list of courses offered can be found on the Oracle University website. For more information and course dates please get in contact with your local Oracle University team. On top of the Service components, the suite also provides marketing tools, complex survey creation and tracking and sales functionality. I’m a fan of the application, and I think I’ve made that clear: It’s completely geared up to providing customers with support at point of need. It can be configured to meet even the most stringent of business requirements. Oracle RightNow is passionate about, and committed to, providing the best customer experience possible. Oracle RightNow CX is the application that makes it possible. About the Author: Sarah Anderson worked for RightNow for 4 years in both in both a consulting and training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses: RightNow Customer Service Administration RightNow Analytics RightNow Customer Portal Designer and Contact Center Experience Designer Administration RightNow Marketing and Feedback

    Read the article

  • Yahoo is sending our server's transactional email to the Spam folder, even though we have set up SPF and DKIM

    - by Derrick Miller
    Yahoo Mail is sending our server's transactional emails to the Spam folder, even though we have taken quite a few anti-spam steps. By contrast, Gmail allows the messages through to the inbox just fine. Here are the things which are in place: SPF is set up for the domain holsteinplaza.com. Yahoo reports spf=pass in the message headers. DKIM is set up for the domain holsteinplaza.com. Yahoo reports dkim=pass in the message headers. We have a proper reverse DNS entry for the sending mail server. Name - IP matches IP - Name. Neither Domainkeys nor SenderID are set up. From what I can tell, DKIM is the way of the future, and there is not much to be gained from adding Domainkeys or SenderID. Following are the headers. Any ideas what more I should do to get Yahoo to stop flagging the emails as spam? From Holstein Plaza Auctions Sat Jun 25 18:30:08 2011 X-Apparently-To: [email protected] via 98.138.90.132; Sat, 25 Jun 2011 18:30:11 -0700 Return-Path: <[email protected]> X-YahooFilteredBulk: 70.32.113.42 Received-SPF: pass (domain of holsteinplaza.com designates 70.32.113.42 as permitted sender) X-YMailISG: i_vaA_QWLDuLOmXhDjUv3aBKJl5Un6EiP6Yk2m4yn3jeEuYK MkhpqIt9zDUbHARCwXrhl9pqjTANurGVca7gytSs.mryWVQcbWBx.DaItWRb VcyrIzwMzXKCSeu06H2a.cJ7HG5vJLJaKmHUUI_1ttXKn_Aegiu5yHvFX83R Lpth0witO9zfaKvOMaJV3LAxpIpFOydwvq1cqjZ8nURxQbxM3Cl.QW7MxxrC 09qLVn_D_xSdU94QdU22IsVmlaRHv.uU5dnIazu.KSkhKpYykDoZA2SH0SY4 JmTZj3LP8N926xXVDzYQ5K6QvKuJL5g0d9pYZx3KC59sgIu5oHlJ3Q15RdKb f3OJw0PR6oIyJ2yStVr8vfbDgOfj3qig03.Tw6g6MMNpv1G7Cuol4oJeUaYP xELxX6dHgBgCSuWMcbsrxbK4BIXcS2qhpMqYQ4Isk.XXyA8uvmFXyvgc1ds5 8jo0rW.Wsw.55Z.KTPaQ0gHXj0T3OGppYMELSJv1iuhPyyAnZpmq01CU0Qd5 CcRgdyW3HaqhmpXqJCS0Clo16zXA4HmAjR0tgIQrHRLc3D9N02AOzvmDgCb1 vCh0p00QeKVq8UNkcShPRxZFKi9khtkLhPBlXEKkhJ76zyDmHUxTY.dQHVVD 8D2hx7BxbqI9DINI8x5oR5Q8hYkZqHYQsmGNkaU77O2BnsEv5WxMEmzrBJ4Z h8zGCidgYPiZycZfnfaBp0Xb4tya2WMTN45W02JFcO1qq_UMJ9xPeqZhPEj. 
j9YvBAC8324GGF.c8eWcNB2VB34QHgTcVUl3.c0XUCuncls9Cyg4L7AoIdCi HvAklSzDDu9nW6732VEipV9FJ_JkDupDNQU2hfiPG.3OeF8GwTnVYnEn0EiZ aO0NCnZhXuLDcN3K7ml3846yRdASvzPFs9s4aJkzR0FkhVvptiMBEOdRkKdG wHWmvWpK4GTZpW4yU7CnKpW2MiWWn1MP0h_CCZFKs5.3mfmfPjPVIABN_RuU Q8ex5hdKnKlQiqK56LzcPRnYmNtrwdsUX9CYn9d6cPpXR_Bi5jrNJMNzdFvq lGO0CBT4QPe2V45U8PtpMitttuDA1cCvmyBPFswxNlL0jyX0a_W.vl0YW5.d HhDItpHhDxKRUscM28IR.exetq4QCzyM X-Originating-IP: [70.32.113.42] Authentication-Results: mta1267.mail.ac4.yahoo.com from=holsteinplaza.com; domainkeys=neutral (no sig); from=holsteinplaza.com; dkim=pass (ok) Received: from 127.0.0.1 (EHLO predator.axis80.com) (70.32.113.42) by mta1267.mail.ac4.yahoo.com with SMTP; Sat, 25 Jun 2011 18:30:11 -0700 Received: (qmail 1440 invoked by uid 48); 25 Jun 2011 21:30:09 -0400 To: [email protected] Subject: this is a test X-PHPMAILER-DKIM: phpmailer.worxware.com DKIM-Signature: v=1; a=rsa-sha1; q=dns/txt; l=203; s=auction; t=1309051808; c=relaxed/simple; h=From:To:Subject; d=holsteinplaza.com; [email protected]; z=From:=20Holstein=20Plaza=20Auctions=20<[email protected]> |To:[email protected] |Subject:=20this=20is=20a=20test; bh=B3Tw5AQb1va627KEoazuFEBZ0fg=; b=oQ5uFq+oekPTGhszyIritjuuIAi3qPNyeitu+aWMhdx3oC6O2j5hJsDFpK0sS5fms7QdnBkBcEzT0iekEvn9EfAdCkGZ2KrtEC0yv7QKQcrjXxy07GJpj9nq0LYbgOuPdw8mGvKxlRZ+jFBX0DRJm0xXFLkr+MEaILw7adHTCCM= Date: Sat, 25 Jun 2011 21:30:08 -0400 From: Holstein Plaza Auctions <[email protected]> Reply-to: Holstein Plaza Auctions <[email protected]> Message-ID: <[email protected]> X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="iso-8859-1" Content-Length: 195
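For anyone checking a comparable setup from the outside, the records Yahoo evaluates can be queried with dig; the selector below (auction) comes from the s= tag in the DKIM-Signature header shown above, and the IP from the Received line:

    # SPF record published for the sending domain
    dig +short TXT holsteinplaza.com

    # DKIM public key for the selector used in the signature (s=auction)
    dig +short TXT auction._domainkey.holsteinplaza.com

    # reverse DNS of the sending IP, which should match the forward name
    dig +short -x 70.32.113.42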

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC in failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the taking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore. Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
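The scripted backup check described above can be approximated with a simple query against msdb on each instance; this is only a sketch of the idea, not the author's actual PowerShell, and the 24-hour threshold is an assumption you would adjust to your own RPO:

    -- databases with no successful full backup in the last 24 hours (sketch only)
    SELECT d.name,
           MAX(b.backup_finish_date) AS last_full_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
           ON b.database_name = d.name
          AND b.type = 'D'                 -- 'D' = full database backup
    WHERE d.name <> 'tempdb'
    GROUP BY d.name
    HAVING MAX(b.backup_finish_date) IS NULL
        OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());

Run from a central server against each instance (or wrapped in the kind of PowerShell loop the post mentions), this gives you the "interrogate the servers" check independently of database mail.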

    Read the article

  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're about nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic, and Oracle 11g database - all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation is comprised of two procedures, the first covers the installation, which is running the installer and answering some questions. This will lay down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we? Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next. Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default, however you can put this wherever you want. Click Next. Select the database type, database connection type – note that the database name should match the value used for the connection type (e.g. if using SID, then the name should be IDMAKER; if using ServiceName, the name should be “idmaker.us.oracle.com”). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for Oracle 11g Advanced Compression option do not use this option! Terrible, terrible calamities will befall you if you do! Click Next. Enter the Documaker Admin user name (default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next. Enter the Assembly Line user name (default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don’t – see ODEE Installation Guide for specifics. Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria on password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password. 
Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer.  You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to AdminServer). You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user, however this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next. Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory/directories that is/are monitored by ODEE for jobs to process. Click Next. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here so if the recipient clicks "reply", this is where it will go). Click Next. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next. If you will be using Oracle UMS for sending MMS/text messages, you can enable and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next. Verify the installation details and click Install. You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion). Allow the installation to progress... Click Next. You can save the response file if needed (e.g. in case you forgot to save it earlier!) Click Finish. That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation. 

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

      Morning Coffee

      When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs that back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. So to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one whether an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar third-party products, that would track all these things for you. But at the time, we had no choice but to write our own PowerShell scripts to do it.

      Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

      "But, we have a cluster...we don't need backups"

      Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating-system level, and from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

      Backup, fine. How often do I take a backup?

      The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data-loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO.

      Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

      A backup is nothing more than an untested restore

      Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of them? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

      Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space in your tlog, or even on your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

      Where to back up to? Network share? Locally? SAN volume?

      This is another topic where everybody has a favorite choice, so I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that; it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a copy mechanism together. That right there is the first step towards a practical disaster recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
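      Going back to the "script run from a central server" check mentioned under Morning Coffee: the post used PowerShell, but the idea is easy to sketch in Python as well. The fragment below is a hypothetical equivalent (the server names, connection string, and 24-hour threshold are illustrative assumptions, not from the post) that asks each instance for its most recent full backup via msdb.dbo.backupset and flags anything stale or missing:

        # Hypothetical check script (Python stand-in for the PowerShell scripts the post mentions).
        # Assumes pyodbc and a SQL Server ODBC driver are installed; server list and threshold are examples.
        import datetime
        import pyodbc

        SERVERS = ["sqlprod01", "sqlprod02"]          # assumed instance names
        THRESHOLD = datetime.timedelta(hours=24)      # alert if the last full backup is older than this

        def last_full_backups(server):
            conn = pyodbc.connect(
                "DRIVER={ODBC Driver 17 for SQL Server};SERVER=%s;Trusted_Connection=yes" % server)
            sql = """
                SELECT d.name, MAX(b.backup_finish_date)
                FROM sys.databases d
                LEFT JOIN msdb.dbo.backupset b
                    ON b.database_name = d.name AND b.type = 'D'   -- 'D' = full backup
                WHERE d.name <> 'tempdb'
                GROUP BY d.name"""
            with conn:
                return conn.cursor().execute(sql).fetchall()

        now = datetime.datetime.now()
        for server in SERVERS:
            for name, finished in last_full_backups(server):
                if finished is None or now - finished > THRESHOLD:
                    print("ALERT: %s on %s - last full backup: %s" % (name, server, finished))

      A variant of the same query can drive log and differential checks; the point is that the check runs from a central box and does not depend on Database Mail working on the monitored servers.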

    Read the article

  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6.x (the default distribution) with five public IPv4 addresses bound to it. Each IP has DNS and rDNS set separately, and each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example:

        192.168.34.104   red.example.com      /etc/postfix
        192.168.36.48    green.example.net    /etc/postfix-green
        192.168.36.49    pink.example.org     /etc/postfix-pink
        192.168.36.50    orange.example.info  /etc/postfix-orange
        192.168.36.51    blue.example.us      /etc/postfix-blue

    I've tested each IP by telnetting to port 25. Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need.

    Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, and so on.

    The email that will be sent from blue.example.us is a feedback loop sent by a single shell account on the server (account5), an account that is dedicated to sending this email. The account receives the feedback-loop emails from servers on other networks, saves the bodies of those emails, then generates a new outbound email header, appends the saved body, and sends the email. It sends by piping each email to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly.

    However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script sends email through the main Postfix instance in /etc/postfix without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the procmail script that handles this email, though, I get this error:

        sendmail: fatal: User account5(###) is not allowed to submit mail

    I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com:

        - Postfix - specify interface to deliver outbound mail on
        - Postfix user is not allowed to submit mail
        - Postfix rejects php mails

    I have been careful to stop and restart Postfix after each configuration change, and tested the results. Nothing has worked. The main Postfix instance happily accepts outbound email from account5. The postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it.

    =-=-=-=-=-=-=-=-=-=

    At the request of the responder, here are main.cf and master.cf for a) the main Postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us"). [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings.]

    MAIN: master.cf

        smtp      inet  n  -  n  -  -  smtpd

    MAIN: main.cf

        myhostname = red.example.com
        mydomain = example.com
        inet_interfaces = $myhostname, localhost
        inet_protocols = all
        lmtp_host_lookup = native
        smtp_host_lookup = native
        ignore_mx_lookup_error = yes
        mydestination = $myhostname, localhost.$mydomain, localhost
        local_recipient_maps =
        mynetworks = 192.168.34.104/32
        relay_domains = example.com, example.info, example.net, example.org, example.us
        relayhost = [192.168.34.102]   # Separate physical server, main mailserver.
        relay_recipient_maps = hash:/etc/postfix/relay_recipients
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        smtpd_banner = $myhostname ESMTP $mail_name
        multi_instance_wrapper = ${command_directory}/postmulti -p --
        multi_instance_enable = yes
        multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue

    FBL: master.cf

        184.173.119.103:25      inet  n  -  n  -  -  smtpd

    FBL: main.cf

        myhostname = blue.example.us
        mydomain = blue.example.us   <= Deliberately set to subdomain only.
        myorigin = $mydomain
        inet_interfaces = $myhostname
        lmtp_host_lookup = native
        smtp_host_lookup = native
        ignore_mx_lookup_error = yes
        mydestination = $myhostname
        local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps
        mynetworks = 192.168.36.51/32, 192.168.35.20/31   <= Second IP is backup MX servers
        relay_domains = $mydestination
        recipient_canonical_maps = hash:/etc/postfix-blue/canonical
        virtual_alias_maps = hash:/etc/postfix-fbl/virtual
        alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
        alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical
        mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail
        smtpd_banner = $myhostname ESMTP $mail_name
        authorized_submit_users =
        multi_instance_name = postfix-blue
        multi_instance_enable = yes
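    For context, what the procmail recipe ultimately does boils down to the sketch below - a hypothetical Python stand-in (the message text and addresses are placeholders), showing that the only thing selecting the postfix-blue instance is the MAIL_CONFIG environment variable seen by the sendmail binary:

        # Illustrative only: mimics what the procmail recipe does when it pipes the rebuilt
        # FBL message to the non-default Postfix instance. Paths follow the question; the
        # message itself is a placeholder.
        import os
        import subprocess

        message = (
            "From: fbl@blue.example.us\n"
            "To: reports@example.us\n"
            "Subject: feedback loop report\n"
            "\n"
            "(saved FBL body goes here)\n"
        )

        env = dict(os.environ)
        env["MAIL_CONFIG"] = "/etc/postfix-blue"   # selects the instance, same as in .bash_profile

        # Equivalent of: sendmail -oi -t  with MAIL_CONFIG pointing at the blue instance
        subprocess.run(
            ["/usr/sbin/sendmail", "-oi", "-t"],
            input=message.encode(),
            env=env,
            check=True,
        )

    Run as account5, this is the invocation the blue instance rejects with the fatal error quoted above, while the same call without MAIL_CONFIG set goes through the main instance - which suggests the rejection comes from the instance's submission policy rather than from procmail itself.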

    Read the article

  • The design of a generic data synchronizer, or, an [object] that does [actions] with the aid of [helpers]

    - by acheong87
    I'd like to create a generic data-source "synchronizer," where data-source "types" may include MySQL databases, Google Spreadsheets documents, and CSV files, among others. I've been trying to figure out how to structure this in terms of classes and interfaces, keeping in mind (what I've read about) composition vs. inheritance and is-a vs. has-a, but each route I go down seems to violate some principle. For simplicity, assume that all data sources have a header-row-plus-data-rows format. For example, assume that the first rows of Google Spreadsheets documents and CSV files will have column headers, a.k.a. "fields" (to parallel database fields). Also, eventually I would like to implement this in PHP, but avoiding language-specific discussion would probably be more productive. Here's an overview of what I've tried.

    Part 1/4: ISyncable

        class CMySQL implements ISyncable
            GetFields()      // sql query, pdo statement, whatever
            AddFields()
            RemFields()
            ...
            _dbh

        class CGoogleSpreadsheets implements ISyncable
            GetFields()      // zend gdata api
            AddFields()
            RemFields()
            ...
            _spreadsheetKey
            _worksheetId

        class CCsvFile implements ISyncable
            GetFields()      // read from buffer
            AddFields()
            RemFields()
            ...
            _buffer

        interface ISyncable
            GetFields()
            AddFields($field1, $field2, ...)
            RemFields($field1, $field2, ...)
            ...
            CanAddFields()   // maybe the spreadsheet is locked for write, or
            CanRemFields()   // maybe no permission to alter a database table
            ...
            AddRow()
            ModRow()
            RemRow()
            ...
            Open()
            Close()
            ...

    First Question: Does it make sense to use an interface, as above?

    Part 2/4: CSyncer

    Next, the thing that does the syncing.

        class CSyncer
            __construct(ISyncable $A, ISyncable $B)
            Push()           // sync A to B
            Pull()           // sync B to A
            Sync()           // Push() and Pull() only differ in direction; factor.
                             // Sync()'s job is to make sure that the fields on each side
                             // match, to add fields where appropriate and possible, to
                             // account for different column-orderings, etc., and of
                             // course, to add and remove rows as necessary to sync.
            ...
            _A
            _B

    Second Question: Does it make sense to define such a class, or am I treading dangerously close to the "Kingdom of Nouns"?

    Part 3/4: CTranslator? ITranslator?

    Now, here's where I actually get lost, assuming the above is passable. Sometimes, two ISyncables speak different "dialects." For example, believe it or not, Google Spreadsheets (accessed through the Google Data API "list feed") returns column headers lower-cased and stripped of all spaces and symbols! That is, sys_TIMESTAMP comes back as systimestamp, as far as my code can tell. (Yes, I am aware that the "cell feed" does not strip the name like that; however, cell-by-cell manipulation is too slow for what I'm doing.) One can imagine other hypothetical examples. Perhaps even the data itself can be in different "dialects." But let's take it as given for now, and not argue the point if possible.

    Third Question: How would you implement "translation"? Note: Taking all this as an exercise, I'm more interested in the "idealized" design than the practical one. (God knows that ship sailed when I began this project.)

    Part 4/4: Further Thought

    Here's my train of thought, to demonstrate I've thunk, albeit unfruitfully. First, I thought, primitively, "I'll just modify CMySQL::GetFields() to lower-case and strip field names so they're compatible with Google Spreadsheets." But of course, then my class should really be called CMySQLForGoogleSpreadsheets, and that can't be right. So, the thing which translates must exist outside of an ISyncable implementor.
And surely it can't be right to make each translation a method in CSyncer. If it exists outside of both ISyncable and CSyncer, then what is it? (Is it even an "it"?) Is it an abstract class, i.e., abstract CTranslator? Is it an interface, since a translator only does, not has, i.e., interface ITranslator? Does it even require instantiation? For example, if it's an ITranslator, should its translation methods be static? (I learned what "late static binding" meant today.) And, dear God, whatever it is, how should a CSyncer use it? Does it "have" it? Is it, "it"? Who am I? ...am I, "I"?

    I've attempted to break the question up into sub-questions, but essentially my question is singular: how does one implement an object A that conceptually "links" (has) two objects b1 and b2 that share a common interface B, where certain pairs of b1 and b2 require a helper, e.g. a translator, to be handled by A? Something tells me that I've overcomplicated this design, or violated a principle much higher up. Thank you all very much for your time and any advice you can provide.
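    To make the composition option concrete (without claiming it's the right answer), here is a hypothetical sketch - in Python rather than the PHP the question targets, with all names invented - of an arrangement where the syncer has two ISyncables and is handed an optional translator, so that neither data source needs to know about the other's dialect:

        # Hypothetical sketch only - Python used in place of PHP to keep it short.
        from abc import ABC, abstractmethod

        class ISyncable(ABC):
            @abstractmethod
            def get_fields(self): ...
            @abstractmethod
            def add_fields(self, *fields): ...

        class ITranslator(ABC):
            """Maps one side's field 'dialect' onto the other's (e.g. sys_TIMESTAMP <-> systimestamp)."""
            @abstractmethod
            def to_b(self, field): ...
            @abstractmethod
            def to_a(self, field): ...

        class IdentityTranslator(ITranslator):
            # Used when both sides already speak the same dialect.
            def to_b(self, field): return field
            def to_a(self, field): return field

        class CSyncer:
            def __init__(self, a: ISyncable, b: ISyncable, translator: ITranslator = None):
                self._a = a
                self._b = b
                self._translator = translator or IdentityTranslator()

            def push(self):
                """Make B's fields match A's, translating field names on the way over."""
                wanted = [self._translator.to_b(f) for f in self._a.get_fields()]
                missing = [f for f in wanted if f not in self._b.get_fields()]
                if missing:
                    self._b.add_fields(*missing)

    In this shape, the Google Spreadsheets quirk (systimestamp) lives entirely inside one small translator object, and CSyncer only ever talks to the ITranslator interface.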

    Read the article

  • Configuring a PIX 506e for Asterisk

    - by orthogonal3
    Hi all! I'm having problems configuring an old Cisco PIX running 6.3 and wondered if anyone can lend a hand. Simply put, I have a PIX 506e that I want to put in my VoIP data path. I can't upgrade it, and getting a compatible version of Java for that version of PIX is tough, so I can't log onto the web interface.

    The PIX straddles two networks: 192.168.5.0 on the inside and 192.168.50.0 on the outside; both netmasks are 255.255.255.0. I have a local Asterisk server cluster with a single service IP (<local asterisk>). SIP is on UDP 5060 and RTP (the VoIP media) is on UDP 18000-18999. I know that's a big range, but hey, may as well. The 192.168.5.0 net needs web and FTP access outbound for updates and the like. DHCP, DNS, and NTP are already provided on that network, so I don't need external DNS access.

    So I think I want the following rules:

        - SIP or RTP from <my itsp> arriving at <outside voip ip>, NATed to <local asterisk>
        - SIP or RTP able to take the reverse route (should be covered by high sec to low sec??)
        - HTTP and FTP access outbound for software updates for the servers, etc.

    I have the following config at the minute - and I think I'm almost there (I hope)...

        interface ethernet0 auto
        interface ethernet1 auto
        nameif ethernet0 outside security0
        nameif ethernet1 inside security100
        enable password wouldyouliketobeapeppertoo encrypted
        passwd wouldyouliketobeapeppertoo encrypted
        hostname afirewall
        domain-name adomain
        fixup protocol dns maximum-length 512
        fixup protocol ftp 21
        fixup protocol h323 h225 1720
        fixup protocol h323 ras 1718-1719
        fixup protocol http 80
        fixup protocol rsh 514
        fixup protocol rtsp 554
        fixup protocol sip 5060
        fixup protocol sip udp 5060
        fixup protocol skinny 2000
        fixup protocol smtp 25
        fixup protocol sqlnet 1521
        fixup protocol tftp 69
        access-list acl_ping permit icmp any any
        access-list voip permit ip host <my itsp> host <local asterisk>
        mtu outside 1500
        mtu inside 1500
        ip address outside <outside pix ip> 255.255.255.0
        ip address inside <inside pix ip> 255.255.255.0
        arp timeout 14400
        global (outside) 1 <outside generic ip>
        nat (inside) 1 192.168.5.0 255.255.255.0 0 0
        static (inside,outside) <outside voip ip> <local asterisk> netmask 255.255.255.255 0 0
        static (outside,inside) <local asterisk> <outside voip ip> netmask 255.255.255.255 0 0
        access-group acl_ping in interface outside
        access-group acl_ping in interface inside
        route outside 0.0.0.0 0.0.0.0 <my next hop router> 1
        route outside <my itsp> 255.255.255.255 <my next hop router> 1

    I think I just need a hand with the access lists and NAT/static rules. Would anyone be able to help? I've RTFM'd the Cisco docs a few times and they're heavy. Wishing I'd completed my CCNA now! Thanks all for any help, Phil
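    For what it's worth, the general shape of the inbound rules usually looks something like the sketch below. This is an unverified illustration only: the ACL name outside_in is made up, the placeholders are reused from the question, and it assumes SIP on UDP 5060 and RTP on UDP 18000-18999 as described above. On a pre-8.3 PIX, inbound ACLs on the outside interface match the translated address, which is why the entries point at <outside voip ip> rather than <local asterisk>. Only one access-group can be bound per interface, so these lines would merge with (or replace) the existing acl_ping group on the outside; the acl_ping group applied on the inside interface would likely also need revisiting, since an applied ACL ends in an implicit deny and would block the outbound HTTP/FTP mentioned above.

        access-list outside_in permit icmp any any                                                   <= keeps the existing ping tests working
        access-list outside_in permit udp host <my itsp> host <outside voip ip> eq 5060              <= SIP signalling from the ITSP
        access-list outside_in permit udp host <my itsp> host <outside voip ip> range 18000 18999    <= RTP media range
        access-group outside_in in interface outside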

    Read the article
