Search Results

Search found 21309 results on 853 pages for 'electronic leasing process'.


  • Windows Server 2008 Task Scheduler Tasks Not Executing

    - by omatase
    I've been having an intermittent problem for some time now with the Windows Task Scheduler that I can't work out. I use the Task Scheduler to run a C# app I've written that has various plugins used to ensure production systems are working. This Task Scheduler is itself a production system, so I have one simple task that executes every 8 minutes to notify an external monitoring system that the Task Scheduler is still up. If this external service fails to receive an "all-clear" at least once every 15 minutes (or so; I don't remember the exact number right now), it will message us that the monitoring system is down. In the past we've had intermittent "down" messages from time to time, and each time I've investigated the cause I was unable to find any problems. So this time I wanted to ask the StackOverflow community, since it doesn't look like I'll have any luck on my own.

    This morning at 2:32 AM the task fired (exactly 8 minutes after the previous firing); however, the task didn't fire again until 3:28. There are no errors that I can see in the Event Viewer at this time. When I look at the Task Scheduler log there are no errors there either. Here is what the log looks like, though:

      Information 6/11/2011 3:28:56 AM 102 Task completed (2) d6cf2412-269e-48bf-9f40-4a863347baad
      Information 6/11/2011 3:28:56 AM 201 Action completed (2) d6cf2412-269e-48bf-9f40-4a863347baad
      Information 6/11/2011 3:28:55 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:55 AM 200 Action started (1) d6cf2412-269e-48bf-9f40-4a863347baad
      Information 6/11/2011 3:28:55 AM 100 Task Started (1) d6cf2412-269e-48bf-9f40-4a863347baad
      Information 6/11/2011 3:28:55 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:55 AM 107 Task triggered on scheduler Info d6cf2412-269e-48bf-9f40-4a863347baad
      Information 6/11/2011 3:28:15 AM 102 Task completed (2) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
      Information 6/11/2011 3:28:15 AM 201 Action completed (2) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
      Information 6/11/2011 3:28:15 AM 102 Task completed (2) 556c07dc-2724-4a21-a97e-dc4abd56f94d
      Information 6/11/2011 3:28:15 AM 201 Action completed (2) 556c07dc-2724-4a21-a97e-dc4abd56f94d
      Information 6/11/2011 3:28:15 AM 102 Task completed (2) 79328289-f742-49dd-aa0d-c3d05db50895
      Information 6/11/2011 3:28:15 AM 201 Action completed (2) 79328289-f742-49dd-aa0d-c3d05db50895
      Information 6/11/2011 3:28:15 AM 102 Task completed (2) 19743755-47b6-4b98-9bec-052193be9496
      Information 6/11/2011 3:28:15 AM 201 Action completed (2) 19743755-47b6-4b98-9bec-052193be9496
      Information 6/11/2011 3:28:15 AM 102 Task completed (2) c165754f-e3e6-4176-a327-11f9c06c39a5
      Information 6/11/2011 3:28:15 AM 201 Action completed (2) c165754f-e3e6-4176-a327-11f9c06c39a5
      Information 6/11/2011 3:28:15 AM 102 Task completed (2) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
      Information 6/11/2011 3:28:15 AM 201 Action completed (2) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
      Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:10 AM 200 Action started (1) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
      Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:10 AM 200 Action started (1) c165754f-e3e6-4176-a327-11f9c06c39a5
      Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:10 AM 200 Action started (1) 19743755-47b6-4b98-9bec-052193be9496
      Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:10 AM 200 Action started (1) 79328289-f742-49dd-aa0d-c3d05db50895
      Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:10 AM 200 Action started (1) 556c07dc-2724-4a21-a97e-dc4abd56f94d
      Information 6/11/2011 3:28:10 AM 129 Created Task Process Info
      Information 6/11/2011 3:28:10 AM 200 Action started (1) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
      Information 6/11/2011 3:28:10 AM 100 Task Started (1) 0e62ad3e-1f6e-40c0-9155-19f0108dee22
      Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:10 AM 100 Task Started (1) c165754f-e3e6-4176-a327-11f9c06c39a5
      Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:10 AM 100 Task Started (1) 19743755-47b6-4b98-9bec-052193be9496
      Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:10 AM 100 Task Started (1) 79328289-f742-49dd-aa0d-c3d05db50895
      Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:10 AM 100 Task Started (1) 556c07dc-2724-4a21-a97e-dc4abd56f94d
      Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:10 AM 100 Task Started (1) b91fe5ce-39ef-42fb-adbe-bd8be012c00a
      Information 6/11/2011 3:28:10 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 0e62ad3e-1f6e-40c0-9155-19f0108dee22
      Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info c165754f-e3e6-4176-a327-11f9c06c39a5
      Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 19743755-47b6-4b98-9bec-052193be9496
      Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 79328289-f742-49dd-aa0d-c3d05db50895
      Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info 556c07dc-2724-4a21-a97e-dc4abd56f94d
      Information 6/11/2011 3:28:10 AM 107 Task triggered on scheduler Info b91fe5ce-39ef-42fb-adbe-bd8be012c00a
      Information 6/11/2011 2:32:56 AM 102 Task completed (2) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
      Information 6/11/2011 2:32:56 AM 201 Action completed (2) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
      Information 6/11/2011 2:32:55 AM 129 Created Task Process Info
      Information 6/11/2011 2:32:55 AM 200 Action started (1) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
      Information 6/11/2011 2:32:55 AM 100 Task Started (1) 16e4f2c3-a340-410a-9c14-4bfe0861fdd5
      Information 6/11/2011 2:32:55 AM 319 Task Engine received message to start task (1)
      Information 6/11/2011 2:32:55 AM 107 Task triggered on scheduler Info 16e4f2c3-a340-410a-9c14-4bfe0861fdd5

    Seems kind of strange. I also have two other C# apps that run and check something each hour, on the hour, using Task Scheduler. If I look at the history for those, I can see that they didn't execute at 3 AM either! They all waited until 3:28 to start as well. If I look at "tasks completed" in the Event Viewer, it shows that only one task was able to run between 2:32 AM and 3:28 AM. That task was "\Microsoft\Windows\RAC\RACAgent", and here's what it looked like:

      Information 6/11/2011 3:18:09 AM 102 Task completed (2) 00c53a85-ba20-4666-80db-fbbe2492c0ad
      Information 6/11/2011 3:18:09 AM 201 Action completed (2) 00c53a85-ba20-4666-80db-fbbe2492c0ad
      Information 6/11/2011 3:18:08 AM 129 Created Task Process Info
      Information 6/11/2011 3:18:08 AM 200 Action started (1) 00c53a85-ba20-4666-80db-fbbe2492c0ad
      Information 6/11/2011 3:18:08 AM 100 Task Started (1) 00c53a85-ba20-4666-80db-fbbe2492c0ad
      Information 6/11/2011 3:18:08 AM 319 Task Engine received message to start task (1)

    I appreciate any ideas anyone may have.
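
    One way to spot gaps like the 2:32 to 3:28 one programmatically is to read the Task Scheduler operational channel with EventLogReader. The sketch below is mine, not from the original question, and is only illustrative: the channel name and event IDs match the log above, but the 10-minute gap threshold is an assumption derived from the 8-minute schedule, and the operational channel must be enabled for events to be present.

      using System;
      using System.Diagnostics.Eventing.Reader;

      class TaskHistoryGaps
      {
          static void Main()
          {
              // Lifecycle events seen in the log above:
              // 100 = started, 102 = completed, 107 = triggered, 129 = process created.
              var query = new EventLogQuery(
                  "Microsoft-Windows-TaskScheduler/Operational",
                  PathType.LogName,
                  "*[System[(EventID=100 or EventID=102 or EventID=107 or EventID=129)]]");

              using (var reader = new EventLogReader(query))
              {
                  DateTime? previous = null;

                  // Events come back oldest first, so adjacent timestamps reveal gaps.
                  for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
                  {
                      DateTime created = record.TimeCreated ?? DateTime.MinValue;

                      // Flag any silence longer than the 8-minute schedule plus slack (assumed threshold).
                      if (previous.HasValue && created - previous.Value > TimeSpan.FromMinutes(10))
                      {
                          Console.WriteLine("Gap: {0:u} -> {1:u}", previous.Value, created);
                      }
                      previous = created;
                  }
              }
          }
      }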

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments (UCM)

    As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a cool new feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM attachments. But even with this other feature, one question might arise when using UCM attachments: how can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that. If you invoke such a function against a UCM attachment, you'll get a null content response (bug #13907552), even if the attachment was correctly uploaded. Until this bug gets fixed, I will show a workaround that lets me retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with the WCC API from within a BPM process.

    Aside note: I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure.

    Sample Process

    I will be using the following sample process: a dummy UserTask using the "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments' payload. In this case, and here's the key point of the sample, we will retrieve such payload using the WebCenter Content WebService API (IDC), and once retrieved, we will write each attachment back to a file on the server using a File Adapter service.

    In detail:

    - We will use the same attachmentCollection XSD structure and the same BusinessObject definition as in the previous blog entry. However, we create a separate variable, named attachmentUCM, based on that BusinessObject.
    - We will still need to keep a copy of the HumanTask output's execData structure, so we create a new variable of type TaskExecutionData (a different one than that used for non-UCM attachments).
    - As in the non-UCM attachments flow, in the output tab of the UserTask mapping, we'll keep a copy of the execData structure.

    Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentUCM variable with the following information:

    - The name of each attachment (from the execData/attachment/name element).
    - The WebCenter Content ID of the uploaded attachment. This info is stored in the execData/attachment/URI element with the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"). We do so with some XPath functions as detailed below, with these two functions being invoked, respectively.
    - We, again, set the target payload element with an empty string, to get the <payload></payload> tag created.

    The complete XSLT transformation is shown below. Remember that we're using the XSLT for-each node to create as many target structures as necessary. Once we have fed the attachmentsUCM structure, so that it now contains the name of each of the attachments along with each WCC unique id (dID), it is time to iterate through it and get the payload.

    Therefore we will use a new embedded subprocess of type MultiInstance that will iterate over the attachmentsUCM/attachment[] element. In each iteration we will use a Service activity that invokes the WCC API through a WebService. Follow these steps to create and configure the Partner Link needed:

    - Log in to the WCC console with an administrator user (i.e. weblogic).
    - Go to the Administration menu and click on the "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID, so we'll need that service's WSDL definition, which can be downloaded by clicking the GetFile link.
    - Save the WSDL file in your JDev project folder.
    - In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference, based on the just-added GetFile.wsdl. Name it UCM_GetFile.
    - WCC services are secured through basic HTTP authentication, so we need to enable the just-created reference for that: right-click the reference and click on Configure WS Policies. Under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy.
    - The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view. Search for the UCM_GetFile entry and add the following highlighted elements into it:

      <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">
        <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>
        <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"
                    location="GetFile.wsdl" soapVersion="1.1">
          <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                               orawsp:category="security" orawsp:status="enabled"/>
          <property name="weblogic.wsee.wsat.transaction.flowOption"
                    type="xs:string" many="false">WSDLDriven</property>
          <property name="oracle.webservices.auth.username"
                    type="xs:string">weblogic</property>
          <property name="oracle.webservices.auth.password"
                    type="xs:string">welcome1</property>
        </binding.ws>
      </reference>

    Now the new external reference is ready, and we should be able to use it from our BPM process. However, here we find a problem. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional:

      <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
        <get:dID>?</get:dID>
        <get:rendition>?</get:rendition>
        <get:extraProps>
          <get:property>
            <get:name>?</get:name>
            <get:value>?</get:value>
          </get:property>
        </get:extraProps>
      </get:GetFileByID>

    We need to fill up just the <get:dID> tag element. Due to some kind of restriction or bug in WCC, the rest of the tag elements must NOT be sent, not even empty (i.e.: <get:rendition></get:rendition> or <get:rendition/>). A sample request that performs the query just by the dID must be in the following format:

      <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
        <get:dID>12345</get:dID>
      </get:GetFileByID>

    The issue here is that simple mapping in BPM does create empty tags, a sample result being as follows:

      <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
        <get:dID>12345</get:dID>
        <get:rendition/>
        <get:extraProps/>
      </get:GetFileByID>

    Although the above structure is perfectly valid, it is not accepted by WCC, so we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM but gets rid of the empty tags. Follow these steps to configure the Mediator:

    - Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template and select the existing GetFile.wsdl.
    - Double-click the Mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.
    - Create the request and reply XSLT mappers. Make sure you map only the dID element in the request, and do an auto-map for the whole response.

    Finally, we can now add and configure the Service activity in the BPM process. Drag & drop it to the embedded subprocess, select the NormalizedGetFile service and getFileByID operation, and map both the input and the output.

    Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsUCM/attachment[] element. On each iteration we use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself: the File Adapter expects binary data in base64 format (string), and we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. It will come soon. Again, once I finish with all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • AIX Checklist for stable obiee deployment

    - by user554629
    Common AIX configuration issues ( last updated 27 Aug 2012 )

    OBIEE is a complicated system with many moving parts and connection points. The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new issues or details.

    What makes OBIEE different?

    When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back". "Why is this necessary? Why aren't we seeing issues with other software?" It's a fair question that I have often struggled to answer; here are the talking points:

    - OBIEE is memory intensive. It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network.
    - OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly.
    - OBIEE is aggressively thread efficient; if atomic operations on a particular architecture do not work correctly, the software crashes.
    - OBIEE dynamically loads third-party database client libraries directly into the nqsserver process. If a library is not thread-safe, or corrupts process memory, the OBIEE crash happens in an unrelated part of the code. These are extremely difficult bugs to find.
    - OBIEE software uses 99% common source across multiple platforms: Windows, Linux, AIX, Solaris and HPUX. If a crash happens on only one platform, we begin to suspect other factors: load intensity, system differences, configuration choices, hardware failures.

    It is rare to have a single product require so many diverse technical skills. My role in support is to understand system configurations, performance issues, and crashes. An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices. Here are some guidelines.

    1. The AIX C++ Runtime must be at version 11.1.0.4

      $ lslpp -L | grep xlC.aix

    obiee software will crash if xlC.aix.rte is downlevel; this is not a "try it" suggestion. The Nov 2011 11.1.0.4 version is appropriate for all AIX versions ( 5, 6, 7 ). Download from here: https://www-304.ibm.com/support/docview.wss?uid=swg24031426
    No reboot is necessary to install; it can even be installed while applications are using the current version. Restart the apps, and they will pick up the latest version.

    2. AIX 5.3 Technology Level 12 is required when running on Power5, 6, 7 processors

    AIX 6.1 was introduced with the newer Power chips, and we have seen no issues with the 6.1 or 7.1 versions. Customers with an unstable deployment and dozens of unexplained crashes became stable after the upgrade. If your AIX system is 5.3, the minimum TL level should be at or higher than this:

      $ oslevel -s
      5300-12-03-1107

    IBM typically supports only the two latest versions of AIX ( 6.1 and 7.1, for example ). AIX 5.3 is still supported and popular running in an LPAR.

    3. obiee userid limits

      $ ulimit -Ha   ( hard limits )
      $ ulimit -a    ( default limits )
      core file size (blocks)     unlimited
      data seg size (kbytes)      unlimited
      file size (blocks)          unlimited
      max memory size (kbytes)    unlimited
      open files                  10240
      cpu time (seconds)          unlimited
      virtual memory (kbytes)     unlimited

    It is best to establish the values in /etc/security/limits. The root user is needed to observe and modify this file. If you modify a limit, you will need to log in again to change it again. For example:

      $ ulimit -c 0
      $ ulimit -c 2097151
      cannot modify limit: Operation not permitted
      $ ulimit -c unlimited
      $ ulimit -c
      0

    There are only two meaningful values for ulimit -c: zero or unlimited. Anything else is likely to produce a truncated core file that cannot be analyzed.

    4. Deploy 32-bit or 64-bit?

    Early versions of OBIEE offered a 32-bit or 64-bit choice to AIX customers. The 32-bit choice was needed if a database vendor did not supply a 64-bit client library. That's no longer an issue, and beginning with OBIEE 11, 32-bit code is no longer shipped. A common error that leads to "out of memory" conditions is to accept the 32-bit memory configuration choices on 64-bit deployments. The significant configuration choices are:

    - Maximum process data (heap) size, set in an AIX environment variable:
      LDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x...
    - Two thread stack sizes, set in obiee NQSConfig.INI:
      [ SERVER ]
      SERVER_THREAD_STACK_SIZE = 0;
      DB_GATEWAY_THREAD_STACK_SIZE = 0;
    - Sort memory, set in NQSConfig.INI:
      [ GENERAL ]
      SORT_MEMORY_SIZE = 4 MB ;
      SORT_BUFFER_INCREMENT_SIZE = 256 KB ;

    Choosing a value for MAXDATA:

      0x080000000   2GB  Default maximum 32-bit heap size ( 8 with 7 zeros )
      0x100000000   4GB  64-bit breaking even with 32-bit ( 1 with 8 zeros )
      0x200000000   8GB  64-bit double the 32-bit max
      0x400000000  16GB  64-bit safety

    Using a 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation: registers are twice as big and consume twice as much memory in the heap. Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit. A 32-bit process is constrained by the 32-bit virtual addressing limits; heap memory is used for the dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way; extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage. If the storage is not available, such a query might crash the whole server and disrupt existing users. There is no performance penalty on AIX for configuring more memory than required, so extra memory can be configured for safety. If there are no other considerations, start with 8GB.

    Choosing a value for thread stack size: zero is the value documented to select an appropriate default. My preference is to change this to an absolute value, even if you intend to use the documented default; it provides better documentation and removes the "surprise" factor. There are two thread types that can be configured:

    - GATEWAY is used by a thread pool to call a database client library to establish a DB connection. The default size is 256KB; many customers raise this to 512KB ( no performance penalty for over-configuring ). This value must be set to 1 MB if Teradata connections are used.
    - SERVER threads are used to run queries. OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage. It's difficult to provide guidance on a value that depends on data and complexity. The general notion is to provide more space than you think you need: "double down" and increase the value if you run out; otherwise inspect the query to understand why it is too complex for the thread stack. There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations.

      256 KB  The default 32-bit stack size. Many customers increased this to 512KB on 32-bit. A 64-bit server is very likely to crash with this value; the stack contains mostly register values, which are twice as big.
      512 KB  The documented 64-bit default. Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.
      1 MB    The recommended 64-bit setting. If your system only ever uses 512KB of stack space, there is no performance penalty for using a 1MB stack size.
      2 MB    Many large customers use this value for safety. No performance penalty.

    nqscheduler does not use the NQSConfig.INI file to set thread stack size. If this process crashes because the thread stack is too small, use this to set 2MB:

      export OBI_BACKGROUND_STACK_SIZE=2048

    5. Shared libraries are not (shared)

    When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment. If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:

    - The libraries reduce the heap storage available. This might be significant in 32-bit processes; it is irrelevant in 64-bit processes.
    - Library code is loaded into multiple real pages for execution; one copy for each process.

    Multiple execution images is a significant issue for both 32- and 64-bit processes. The "real memory pages" saved by using public memory segments is a minor concern; today's machines typically have plenty of real memory. The real problem with private copies of libraries is that they consume processor cache blocks, which are limited. The same library instructions executing in different real pages will cause memory delays as the i-cache ( instruction cache, 128KB blocks ) is refreshed from real memory. Performance loss because instructions are delayed is difficult to measure without access to low-level cache fault data; the machine just appears to be running slowly for no observable reason. This is an easy problem to detect, and an easy problem to correct.

    Detection: the "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded. The 32-bit public segment is 13 ( "dxxxxxxx" ); private segments are 2-a. The 64-bit public segment is 9 ( "9xxxxxxxxxxxxxxx" ); the private segment is 8. This provides a list of privately loaded libraries:

      genld -l | grep -v ' d| 9' | sort +2

    Repair: chmod o+r <libname>. AIX shared libraries will have a suffix of ".so" or ".a". Another technique is to change all libraries in a selected directory, to repair those that might not be currently loaded. The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java:

      chmod o+r /shr/dir/*.a /shr/dir/*.so

    6. Configure your system for diagnostics

    Production systems shouldn't crash, and yet bad things happen to good software. If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support. Here's what we need to be able to diagnose a core file from your system:

    - fullcore enabled: chdev -l sys0 -a fullcore=true
    - core naming enabled: chcore -n on -d
    - ulimit must not truncate the core; see item 3.
    - pstack.sh is used to capture core documentation.
    - obidoc is used to capture the current AIX configuration.
    - snapcore, an AIX utility, captures the core and libraries. Use the proper syntax; /tmp/snapcore will contain the .pax.Z output file. It is compressed.
      $ snapcore -r corename executable-fullpath
    - If cores are directed to a common directory, ensure the obiee userid can write to the directory. ( chcore -p /cores -d ; chmod 777 /cores )

    The filesystem must have sufficient space to hold a crashing obiee application. Use df -k and check the "Free" column ( not "% Used" ); 8388608 is 8GB.

    7. Disable Oracle Client Library signal handling

    The Oracle DB Client Library is frequently distributed with the sqlplus development kit. By default, the library enables a signal handler, which will document a call stack if the application crashes. The signal handler is not needed, and is definitely disruptive to obiee diagnostics. It needs to be disabled. sqlnet.ora is typically located at $ORACLE_HOME/network/admin/sqlnet.ora. Add this line at the top of the file:

      DIAG_SIGHANDLER_ENABLED=FALSE

    8. Disable async query in the RPD connection pool

    This might be an obiee 10.1.3.4 issue only ( still checking ). "Async query" must be disabled in the connection pools. It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes. Please ensure it is turned off in the RPD.

    9. Check the AIX error report (errpt)

    Errors external to obiee applications can trigger crashes:

      $ /bin/errpt -a

    Hardware errors ( firmware, adapters, disks ) should be reported to IBM support. All application core files are recorded by AIX; the most recent ones are listed first.

    10. Reserved for something important to say.

    Read the article

  • PASS: Bylaw Changes

    - by Bill Graziano
    While you're reading this, a post should be going up on the PASS blog on the plans to change our bylaws. You should be able to find our old bylaws, our proposed bylaws and a red-lined version of the changes. We plan to listen to feedback until March 31st. At that point we'll decide whether to vote on these changes or take other action.

    The executive summary is that we're adding a restriction to prevent more than two people from the same company on the Board and eliminating the Board's Officer Appointment Committee to have Officers directly elected by the Board. This second change better matches how officer elections have been conducted in the past.

    The Gritty Details

    Our scope was to change bylaws to match how PASS actually works and tackle a limited set of issues. Changing the bylaws is hard. We've been working on these changes since the March board meeting last year. At that meeting we met and talked through the issues we wanted to address. In years past the Board has tried to come up with language and then we've discussed and negotiated to get to the result. In March, we gave HQ guidance on what we wanted and asked them to come up with a starting point. Hannes worked on building us an initial set of changes that we could work our way through. Discussing changes like this over email is difficult and wasn't very productive. We do a much better job on this at the in-person Board meetings. Unfortunately there are only 2 or 3 of those a year.

    In August we met in Nashville and spent time discussing the changes. That was also the day after we released the slate for the 2010 election, and the discussion around that colored what we talked about in terms of these changes. We talked very briefly at the Summit and again reviewed and revised the changes at the Board meeting in January. This is the result of those changes and discussions. We made numerous small changes to clean up language and make wording more clear. We also made two big changes.

    Director Employment Restrictions

    The first is that only two people from the same company can serve on the Board at the same time. The actual language in section VI.3 reads:

      A maximum of two (2) Directors who are employed by, or who are joint owners or partners in, the same for-profit venture, company, organization, or other legal entity, may concurrently serve on the PASS Board of Directors at any time. The definition of "employed" is at the sole discretion of the Board.

    And what a mess this turns out to be in practice. Our membership is a hodgepodge of interlocking relationships. Let's say three Board members get together and start a blog service for SQL Server bloggers. It's technically for-profit. Let's assume it makes $8 in the first year. Does that trigger this clause? (Technically yes.) We had a horrible time trying to write language that covered everything. All the sample bylaws that we found were just as vague as this.

    That led to the third clause in this section. The first sentence reads:

      The Board of Directors reserves the right, strictly on a case-by-case basis, to overrule the requirements of Section VI.3 by majority decision for any single Director's conflict of employment.

    We needed some way to handle the trivial issues and exercise some judgment. It seems like a public vote is the best way. This discloses the relationship and gets each Board member on record on the issue. In practice I think this clause will rarely be used. I think this entire section will only be invoked for actual employment issues and not for small side projects. In either case we have the mechanisms in place to handle it in a public, transparent way.

    That's the first and third clauses. The second clause says that if your situation changes and you fall afoul of this restriction you need to notify the Board. The clause further states that if this new job means a Board member violates the "two-per-company" rule the Board may request their resignation. The Board can also allow the person to continue serving with a majority vote. I think this will also take some judgment. Consider a person switching jobs that leads to three people from the same company. I'm very likely to ask for someone to resign if all three are two weeks into a two-year term. I'm unlikely to ask anyone to resign if one is two weeks away from ending their term. In either case, the decision will be a public vote that we can be held accountable for.

    One concern that was raised was whether this would affect someone choosing to accept a job. I think that's a choice for them to make. PASS is clearly stating its intent that only two directors from any one organization should serve at any time. Once these bylaws are approved, this policy should not come as a surprise to any potential or current Board members considering a job change.

    This clause isn't perfect. The biggest hole is business relationships that aren't defined above. Let's say that two employees from company "X" serve on the Board. What happens if I accept a full-time consulting contract with that company? Let's assume I'm working directly for one of the two existing Board members. That doesn't violate section VI.3. But I think it's clearly the kind of relationship we'd like to prevent. Unfortunately that was even harder to write than what we have now. I fully expect that in the next revision of the bylaws we'll address this. It just didn't make it into this one.

    Officer Elections

    The officer election process received a slightly different rewrite. Our goal was to codify in the bylaws the actual process we used to elect the officers. The officers are the President, Executive Vice-President (EVP) and Vice-President of Marketing. The Immediate Past President (IPP) is also an officer but isn't elected. The IPP serves in that role for two years after completing their term as President. We do that for continuity's sake. Some organizations have a President-elect that serves for one or two years. The group that founded PASS chose to have an IPP.

    When I started on the Board, the Nominating Committee (NomCom) selected the slate for the at-large directors and the slate for the officers. There was always one candidate for each officer position. It wasn't really an election so much as the NomCom deciding who the next person would be for each officer position. Behind the scenes the Board worked to select the best people for the role.

    In June 2009 that process was changed to bring it in line with what actually happens. An Officer Appointment Committee was created that was a subset of the Board. That committee would take time to interview the candidates and present a slate to the Board for approval. The majority vote of the Board would determine the officers for the next two years. In practice the Board itself interviewed the candidates and conducted the elections. That means it was time to change the bylaws again.

    Sections VII.2 and VII.3 spell out the process used to select the officers. We use the phrase "Officer Appointment" to separate it from the Director election, but the end result is that the Board elects the officers. Section VII.3 starts:

      Officers shall be appointed bi-annually by a majority of all the voting members of the Board of Directors.

    Everything else revolves around that sentence. We use the word appoint but they truly are elected. There are details in the bylaws for term limits, minimum requirements for President (1 prior term as an officer), tie breakers and filling vacancies.

    In practice we will have an election for President, then an election for EVP and then an election for VP Marketing. That means that losing candidates will be able to fall down the ladder and run for the next open position. Another point to note is that officers aren't at-large directors. That means if a current sitting officer loses all three elections they are off the Board. Having Board member votes public will help with the transparency of this approach.

    This process has a number of positives and negatives. The biggest concern I expect to hear is that our members don't directly choose the officers. I'm going to try and list all the positives and negatives of this approach.

    Many non-profits value continuity and are slower to change than a business. On the plus side this promotes that. On the negative side this promotes that. If we change too slowly the members complain that we aren't responsive. If we change too quickly we make mistakes and fail at various things. We've been criticized for both of those lately so I'm not entirely sure where to draw the line. My rough assumption to this point is that we're going too slow on governance and too quickly on becoming "more than a Summit."

    This approach creates competition in the officer elections. If you are an at-large director there is no consequence to losing an election. If you are an officer the only way to stay on the Board is to win an officer election or an at-large election. If you are an officer and lose an election you can always run for the next office down. This makes it very easy for multiple people to contest an election.

    There is value in a person moving through the officer positions up to the Presidency. Having the Board select the officers promotes this. The downside is that it takes a LOT of time to get to the Presidency. We've had good people struggle with burnout. We've had lots of discussion around this. The process as we've described it here makes it possible for someone to move quickly through the ranks but doesn't prevent people from working their way up through each role.

    We talked long and hard about having the officers elected by the members. We had a self-imposed deadline to complete these changes prior to elections this summer. The other challenge was that our original goal was to make the bylaws reflect our actual process rather than create a new one. I believe we accomplished this goal. We ran out of time to consider this option in the detail it needs. Having member elections for officers needs a number of problems solved. We would need a way for candidates to fall through the election. This is what promotes competition. Without this few people would risk an election and we'll be back to one candidate per slot. We need to do this without having multiple elections. We may be able to copy what other organizations are doing but I was surprised at how little I could find on other organizations. We also need a way for people that lose an officer election to win an at-large election. Otherwise we'll have very little competition for officers.

    This brings me to an area where I think we as a Board haven't done a good job. We haven't built a strong process to tell you who is doing a good job and who isn't. This is a double-edged sword. I don't want to highlight Board members that are failing. That's not a good way to get people to volunteer and run for the Board. But I also need a way to let the members make an informed choice about who is doing a good job and would make a good officer. Encouraging Board members to blog, publishing minutes and making votes public helps in that regard but isn't the final answer. I don't know what the final answer is yet. I do know that the Board members themselves are uniquely positioned to know which other Board members are doing good work. They know who speaks up in meetings, who works to build consensus, who has good ideas and who works with the members.

    What I Could Do Better

    I've learned a lot writing this about how we communicated with our members. The next time we revise the bylaws I'd do a few things differently. The biggest change would be to provide better documentation. The March 2009 minutes provide a very detailed look into what changes we wanted to make to the bylaws. Looking back, I'm a little surprised at how closely they matched our final changes and covered the various arguments. If you just read those you'd get 90% of what we eventually changed. Nearly everything else was just details around implementation. I'd also consider publishing a scope document defining exactly what we were doing and why. I think it really helped that we had a limited, defined goal in mind. I don't think we did a good job communicating that goal outside the meeting minutes though.

    That said, I wish I'd blogged more after the August and January meetings. I think it would have helped more people to know that this change was coming and to be ready for it.

    Conclusion

    These changes address two big concerns that the Board had. First, they prevent a single organization from dominating the Board. Second, they codify and clearly spell out how officers are elected. This is the process that was previously followed, but it was somewhat murky. These changes bring clarity to it and clearly explain the process the Board will follow.

    We're going to listen to feedback until March 31st. At that time we'll decide whether to approve these changes. I'm also assuming that we'll start another round of changes in the next year or two. Are there other issues in the bylaws that we should tackle in the future?

    Read the article

  • Logging Output of Azure Startup Tasks to the Event Log

    - by Your DisplayName here!
    This can come in handy when troubleshooting:

      using System;
      using System.Diagnostics;
      using System.Text;

      namespace Thinktecture.Azure
      {
          class Program
          {
              static EventLog _eventLog = new EventLog("Application", ".", "StartupTaskShell");
              static StringBuilder _out = new StringBuilder(64);
              static StringBuilder _err = new StringBuilder(64);

              static int Main(string[] args)
              {
                  if (args.Length != 1)
                  {
                      Console.WriteLine("Invalid arguments: " + String.Join(", ", args));
                      _eventLog.WriteEntry("Invalid arguments: " + String.Join(", ", args));
                      return -1;
                  }

                  var task = args[0];

                  ProcessStartInfo info = new ProcessStartInfo()
                  {
                      FileName = task,
                      WorkingDirectory = Environment.CurrentDirectory,
                      UseShellExecute = false,
                      ErrorDialog = false,
                      CreateNoWindow = true,
                      RedirectStandardOutput = true,
                      RedirectStandardError = true
                  };

                  var process = new Process();
                  process.StartInfo = info;

                  process.OutputDataReceived += (s, e) =>
                  {
                      if (e.Data != null)
                      {
                          _out.AppendLine(e.Data);
                      }
                  };
                  process.ErrorDataReceived += (s, e) =>
                  {
                      if (e.Data != null)
                      {
                          _err.AppendLine(e.Data);
                      }
                  };

                  process.Start();
                  process.BeginOutputReadLine();
                  process.BeginErrorReadLine();
                  process.WaitForExit();

                  var outString = _out.ToString();
                  var errString = _err.ToString();

                  if (!string.IsNullOrWhiteSpace(outString))
                  {
                      outString = String.Format("Standard Out for {0}\n\n{1}", task, outString);
                      _eventLog.WriteEntry(outString, EventLogEntryType.Information);
                  }

                  if (!string.IsNullOrWhiteSpace(errString))
                  {
                      errString = String.Format("Standard Err for {0}\n\n{1}", task, errString);
                      _eventLog.WriteEntry(errString, EventLogEntryType.Error);
                  }

                  return 0;
              }
          }
      }

    You then wrap your startup tasks with the StartupTaskShell and you'll be able to see stdout and stderr in the application event log.

    Read the article

  • W3WP crashes when initializing a collection

    - by asbjornu
    I've created an ASP.NET MVC application that has an initializer attached to the PreApplicationStartMethodAttribute. When initializing, a collection is instantiated that implements an interface I've defined. When I instantiate this collection, w3wp.exe crashes with the following two incomprehensible entries in the event log:

      Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
      Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
      Exception code: 0xc00000fd
      Fault offset: 0x0000000000001177
      Faulting process id: 0x1348
      Faulting application start time: 0x01cb0224882f4723
      Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
      Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
      Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb

    And:

      Fault bucket , type 0
      Event Name: APPCRASH
      Response: Not available
      Cab Id: 0
      Problem signature:
      P1: w3wp.exe
      P2: 7.5.7600.16385
      P3: 4a5bd0eb
      P4: clr.dll
      P5: 4.0.30319.1
      P6: 4ba21eeb
      P7: c00000fd
      P8: 0000000000001177
      P9:
      P10:
      Attached files:
      These files may be available here:
      Analysis symbol:
      Rechecking for solution: 0
      Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb
      Report Status: 0

    If I remove the instantiation of the collection, the application starts normally. If I leave the instantiation, w3wp crashes. If I modify the interface, w3wp still crashes. I've tried every variation I could come up with on the theme of keeping the instantiation but doing everything else differently, and w3wp still crashes. My biggest issue here is that I have absolutely no idea why w3wp is crashing. It's not a StackOverflowException or anything concrete like that; all I get is the unintelligent junk cited above.

    I've tried to use DebugDiag and IISState to debug the w3wp process, but DebugDiag is only available for post-dump analysis in x64 (I'm running on Windows 7 x64, so the w3wp process is thus 64-bit), and IISState says the following when I try to run it:

      D:\Programs\iisstate>IISState.exe -p 9204 -d
      Symbol search path is: SRV*D:\Programs\iisstate\symbols*http://msdl.microsoft.com/download/symbols
      IISState is limited to processes associated with IIS. If you require a generic
      debugger, please use WinDBG or CDB. They are available for download from
      http://www.microsoft.com/ddk/debugging.
      This error may also occur if a debugger is already attached to the process
      being checked.
      Incorrect Process Attachment

    I've double-checked 10 times that the process ID of my w3wp process is correct. I suspect that IISState, too, can only debug x86 processes. Setting a breakpoint anywhere in the application does absolutely nothing; the breakpoint isn't hit and w3wp crashes as soon as the request comes through to IIS from the browser. Starting the application with F5 in Visual Studio 2010, or starting another application to get the w3wp process up and running, attaching the VS2010 debugger to it and then visiting the faulting application, doesn't help either. I've also tried to add an HTTP module as described in KB-911816, as well as add this to my web.config file:

      <configuration>
        <runtime>
          <legacyUnhandledExceptionPolicy enabled="true" />
        </runtime>
      </configuration>

    Needless to say, it makes absolutely no difference. So I'm left with no way to debug the w3wp process, no way to extract any information from it and complete garbage dumped in my event log. If anybody has any idea on how to debug this problem, please let me know!

    Update

    My collection was initialized based on RouteTable.Routes, which might have thrown an exception (perhaps by not being initialized itself yet at such an early stage of the ASP.NET lifecycle). Postponing the communication with RouteTable.Routes until a later stage solved the problem. While I don't really need an answer to this question anymore, I still find it so obscure that I'll leave it for anyone to comment on and answer, because I found no existing posts on this problem anywhere, so it might be a good reference in the future.
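
    Worth noting for future readers: exception code 0xc00000fd in the first crash entry is the native STATUS_STACK_OVERFLOW status, which fits the update's finding that touching RouteTable.Routes inside a PreApplicationStartMethod initializer was the culprit. Below is a minimal sketch (mine, not from the original post) of the deferral the update describes; MyCollection is a hypothetical stand-in for the type that consumes the route table:

      using System.Web;
      using System.Web.Routing;

      [assembly: PreApplicationStartMethod(typeof(MyApp.Initializer), "PreStart")]

      namespace MyApp
      {
          // Hypothetical collection that consumes the route table.
          public class MyCollection
          {
              private readonly RouteCollection _routes;
              public MyCollection(RouteCollection routes) { _routes = routes; }
          }

          public static class Initializer
          {
              public static MyCollection Collection { get; private set; }

              // Runs before the application starts; too early to touch RouteTable.Routes.
              public static void PreStart()
              {
                  // Register only cheap, self-contained things here.
              }

              // Call this from Application_Start in Global.asax instead,
              // once the routing infrastructure exists.
              public static void PostStart()
              {
                  Collection = new MyCollection(RouteTable.Routes);
              }
          }
      }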

    Read the article

  • C# How to output to GUI when data is coming via an interface via MarshalByRefObject?

    - by Tom
    Hey, can someone please show me how I can write the output of OnCreateFile to a GUI? I thought the GUI would have to be declared at the bottom in the Main function, so how do I then refer to it within OnCreateFile?

      using System;
      using System.Collections.Generic;
      using System.Runtime.Remoting;
      using System.Text;
      using System.Diagnostics;
      using System.IO;
      using EasyHook;
      using System.Drawing;
      using System.Windows.Forms;

      namespace FileMon
      {
          public class FileMonInterface : MarshalByRefObject
          {
              public void IsInstalled(Int32 InClientPID)
              {
                  //Console.WriteLine("FileMon has been installed in target {0}.\r\n", InClientPID);
              }

              public void OnCreateFile(Int32 InClientPID, String[] InFileNames)
              {
                  for (int i = 0; i < InFileNames.Length; i++)
                  {
                      String[] s = InFileNames[i].ToString().Split('\t');

                      if (s[0].ToString().Contains("ROpen"))
                      {
                          //Console.WriteLine(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond
                          //    + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + getRootHive(s[2]));
                          Program.ff.enterText(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond
                              + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + getRootHive(s[2]));
                      }
                      else if (s[0].ToString().Contains("RQuery"))
                      {
                          Console.WriteLine(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond
                              + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + getRootHive(s[2]));
                      }
                      else if (s[0].ToString().Contains("RDelete"))
                      {
                          Console.WriteLine(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond
                              + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[0])) + "\t" + getRootHive(s[1]));
                      }
                      else if (s[0].ToString().Contains("FCreate"))
                      {
                          //Console.WriteLine(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond
                          //    + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + s[2]);
                      }
                  }
              }

              public void ReportException(Exception InInfo)
              {
                  Console.WriteLine("The target process has reported an error:\r\n" + InInfo.ToString());
              }

              public void Ping()
              {
              }

              public String getProcessName(int ID)
              {
                  String name = "";
                  Process[] process = Process.GetProcesses();

                  for (int i = 0; i < process.Length; i++)
                  {
                      if (process[i].Id == ID)
                      {
                          name = process[i].ProcessName;
                      }
                  }

                  return name;
              }

              public String getRootHive(String hKey)
              {
                  int r = hKey.CompareTo("2147483648");
                  int r1 = hKey.CompareTo("2147483649");
                  int r2 = hKey.CompareTo("2147483650");
                  int r3 = hKey.CompareTo("2147483651");
                  int r4 = hKey.CompareTo("2147483653");

                  if (r == 0) { return "HKEY_CLASSES_ROOT"; }
                  else if (r1 == 0) { return "HKEY_CURRENT_USER"; }
                  else if (r2 == 0) { return "HKEY_LOCAL_MACHINE"; }
                  else if (r3 == 0) { return "HKEY_USERS"; }
                  else if (r4 == 0) { return "HKEY_CURRENT_CONFIG"; }
                  else return hKey.ToString();
              }
          }

          class Program : System.Windows.Forms.Form
          {
              static String ChannelName = null;
              public static Form1 ff;

              Program() // ADD THIS CONSTRUCTOR
              {
                  InitializeComponent();
              }

              static void Main()
              {
                  try
                  {
                      Config.Register("A FileMon like demo application.", "FileMon.exe", "FileMonInject.dll");
                      RemoteHooking.IpcCreateServer<FileMonInterface>(ref ChannelName, WellKnownObjectMode.SingleCall);

                      Process[] p = Process.GetProcesses();
                      for (int i = 0; i < p.Length; i++)
                      {
                          try
                          {
                              RemoteHooking.Inject(p[i].Id, "FileMonInject.dll", "FileMonInject.dll", ChannelName);
                          }
                          catch (Exception e) { }
                      }
                  }
                  catch (Exception ExtInfo)
                  {
                      Console.WriteLine("There was an error while connecting to target:\r\n{0}", ExtInfo.ToString());
                  }
              }
          }
      }
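
    Since OnCreateFile is called back on a non-UI thread, one common approach (a sketch, not the only way) is to hold a reference to the form and marshal each line onto the UI thread with Control.BeginInvoke. The Form1 and enterText names below match the hypothetical ones already referenced in the question; Program.ff would be assigned before Application.Run(ff) so the callback has something to write to:

      using System;
      using System.Windows.Forms;

      namespace FileMon
      {
          // Sketch of the form the interface writes to; enterText re-enters
          // on the UI thread so cross-thread calls from the callback are safe.
          public class Form1 : Form
          {
              private readonly TextBox _output = new TextBox
              {
                  Multiline = true,
                  Dock = DockStyle.Fill,
                  ScrollBars = ScrollBars.Vertical
              };

              public Form1()
              {
                  Controls.Add(_output);
              }

              public void enterText(string line)
              {
                  if (InvokeRequired)
                  {
                      // Called from the remoting callback thread; hop to the UI thread.
                      BeginInvoke(new Action<string>(enterText), line);
                      return;
                  }
                  _output.AppendText(line + Environment.NewLine);
              }
          }
      }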

    Read the article

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I need to construct a complex type from a source of a NameValueCollection.  A little while back I submitted a patch for the Agatha Project to include REST (JSON and XML) support for the service contract.  I realized today that as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly meaning you can query cross domain services.  My original implementation for POX and JSON used the POST method and this immediately rules out JSONP as from reading, JSONP only works with GET Requests. This then raised another issue.  The current operation contract of Agatha and one of its main benefits is that you can supply an array of Request objects in a single request, limiting the about of server requests you need to make.  Now, at the present time I am thinking that this will not be the case for the REST imlementation but will yield the benefits of the fact that : The same Request objects can be used for SOAP and RST (POX, JSON) The construct of the JavaScript functions will be simpler and more readable It will enable the use of JSONP for cross domain REST Services The current contract for the Agatha WcfRequestProcessor is at time of writing the following: [ServiceContract] public interface IWcfRequestProcessor { [OperationContract(Name = "ProcessRequests")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] Response[] Process(params Request[] requests); [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] void ProcessOneWayRequests(params OneWayRequest[] requests); }   My current proposed solution, and at the very early stages of my concept is as follows: [ServiceContract] public interface IWcfRestJsonRequestProcessor { [OperationContract(Name="process")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] [WebGet(UriTemplate = "process/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] Response[] Process(string name, NameValueCollection parameters); [OperationContract(Name="processoneway",IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] void ProcessOneWayRequests(string name, NameValueCollection parameters); }   Now this part I have not yet implemented, it is the preliminart step which I have developed which will allow me to take the name of the Request Type and the NameValueCollection and construct the complex type which is that of the Request which I can then supply to a nested instance of the original IWcfRequestProcessor  and work as it should normally.  To give an example of some of the urls which you I envisage with this method are: http://www.url.com/service.svc/json/process/getweather/?location=london http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1 http://www.url.om/service.svc/json/process/sayhello/?name=andy Another reason why my direction has gone to a single request for the REST implementation is because of restrictions which are imposed by browsers on the length of the url.  From what I have read this is on average 2000 characters.  
I think that this is a very acceptable usage limit in the context of using 1 request, but I do not think this is acceptable for accommodating multiple requests chained together.  I would love to be corrected on that one, I really would but unfortunately from what I have read I have come to the conclusion that this is not the case. The mapping function So, as I say this is just the first pass I have made at this, and I am not overly happy with the try catch for detecting types without default constructors.  I know there is a better way but for the minute, it escapes me.  I would also like to know the correct way for adding mapping functions and not using the anonymous way that I have used.  To achieve this I have used recursion which I am sure is what other mapping function use. As you do have to go as deep as the complex type is. public static object RecurseType(NameValueCollection collection, Type type, string prefix) { try { var returnObject = Activator.CreateInstance(type); foreach (var property in type.GetProperties()) { foreach (var key in collection.AllKeys) { if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length) { var propertyNameToMatch = String.IsNullOrEmpty(prefix) ? key : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1); if (property.Name == propertyNameToMatch) { property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null); } else if(property.GetValue(returnObject,null) == null) { property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null); } } } } return returnObject; } catch (MissingMethodException) { //Quite a blunt way of dealing with Types without default constructor return null; } }   Another thing is performance, I have not measured this in anyway, it is as I say the first pass, so I hope this can be the start of a more perfected implementation.  I tested this out with a complex type of three levels, there is no intended logical meaning to the properties, they are simply for the purposes of example.  You could call this a spiking session, as from here on in, now I know what I am building I would take a more TDD approach.  OK, purists, why did I not do this from the start, well I didn’t, this was a brain dump and now I know what I am building I can. The console test and how I used with AutoMapper is as follows: static void Main(string[] args) { var collection = new NameValueCollection(); collection.Add("Name", "Andrew Rea"); collection.Add("Number", "1"); collection.Add("AddressLine1", "123 Street"); collection.Add("AddressNumber", "2"); collection.Add("AddressPostCodeCountry", "United Kingdom"); collection.Add("AddressPostCodeNumber", "3"); AutoMapper.Mapper.CreateMap<NameValueCollection, Person>() .ConvertUsing(x => { return(Person) RecurseType(x, typeof(Person), null); }); var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection); Console.WriteLine(person.Name); Console.WriteLine(person.Number); Console.WriteLine(person.Address.Line1); Console.WriteLine(person.Address.Number); Console.WriteLine(person.Address.PostCode.Country); Console.WriteLine(person.Address.PostCode.Number); Console.ReadLine(); }   Notice the convention that I am using and that this method requires you do use.  Each property is prefixed with the constructed name of its parents combined.  This is the convention used by AutoMapper and it makes sense. 
I can also think of other uses for this, including with ASP.NET MVC ModelBinders, for creating a complex type from the QueryString, which is itself a NameValueCollection (a rough sketch of that idea follows at the end of this post). Hope this is of some help to people and I would welcome any code reviews you could give me.

References:

Agatha : http://code.google.com/p/agatha-rrsl/
AutoMapper : http://automapper.codeplex.com/

Cheers for now,

Andrew

P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon.
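As that footnote, here is a rough, untested sketch of what such a binder might look like in ASP.NET MVC. RecursingModelBinder is a hypothetical name, and it assumes RecurseType is exposed on a static TypeMapper class:

using System.Collections.Specialized;
using System.Web.Mvc;

// Hypothetical binder: builds the action's model type straight from the
// query string (itself a NameValueCollection) using the RecurseType method
// from this post, assumed here to live on a static TypeMapper class.
public class RecursingModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        NameValueCollection queryString = controllerContext.HttpContext.Request.QueryString;
        return TypeMapper.RecurseType(queryString, bindingContext.ModelType, null);
    }
}

It would then be registered per model type, for example: ModelBinders.Binders.Add(typeof(Person), new RecursingModelBinder());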

    Read the article

  • Convert a DVD Movie Directly to AVI with FairUse Wizard 2.9

    - by DigitalGeekery
Are you looking for a way to back up your DVD movie collection to AVI? Today we'll show you how to rip a DVD movie directly to AVI with FairUse Wizard.

About FairUse Wizard

FairUse Wizard 2.9 uses the DivX, Xvid, or h.264 codec to convert a DVD to an AVI file. It comes in both a free version and a commercial version. The free, or "Light" version, can create files up to 700MB, while the commercial version can output a 1400MB file. This will allow you to back up your movies to CD, or even put multiple movies on a single DVD. FairUse Wizard states that it does not work on copy-protected discs, but we've seen it work on all but some of the most recent copy protection. For this tutorial we're using the free Light Edition to convert a DVD to AVI. They also offer a commercial version that you can get for $29.99, and it offers even more encoding possibilities for converting video to your portable digital devices.

Installation and Configuration

Download and install FairUse Wizard. (Download link below.) Once the install is complete, open FairUse Wizard by going to Start > All Programs > FairUse Wizard 2 > FairUse Wizard 2.

FairUse Wizard will open on the new project screen. Select "Create a new project" and type a project name into the text box. This will be used as the output file name. Ex: a project name of Simpsons Movie will give you an output file of Simpsons Movie.avi.

Next, browse for a destination folder for the output file and temp files. Note that you will need a minimum of 6 GB of free disk space for the conversion process. Note: much of that 6 GB will be used for temporary files that we will delete after the conversion process.

Click on the Options button at the bottom.

Under Preferences, choose your preferred video codec and file output size. XviD and x264 are installed by default. If you prefer to use DivX, you will have to install it separately. Also note the "Two pass" option. Checking the "Two pass" box will encode your video twice for higher quality, but will take more time. Un-checking the box will speed up the conversion process.

Under Audio track, note that English subtitles are enabled by default, so to remove the subtitles, you will need to change the dropdown list so it shows only a dash (-). You can also select "Use TV Mode" if your primary playback will be on a 4:3 TV screen. Click "Next."

Full Auto Mode vs. Manual Mode

You should now be back at the initial screen. Next, we'll need to determine whether or not we can use "Full Auto Mode" to convert the movie. The difference is that "Full Auto Mode" will automatically perform a few steps that you will otherwise have to do manually. If you choose the "Full Auto Mode" option, FairUse Wizard will look for the video on the DVD with the longest duration and assume it is the chain that it should convert to AVI. It's possible, however, that your disc may contain a few chains of similar size, such as a theatrical cut and a director's cut, and the longest chain may not be the one you wish to convert. Make sure that "Full auto mode" is not checked yet, and click "Next."

FairUse Wizard will parse the IFO files and display all video chains longer than 60 seconds. In most cases, you will find that only the largest chain closely matches the duration of the movie. In these instances, you can use "Full Auto Mode." If you find more than one chain close in duration to the length of the movie, consult the literature on the DVD case, or search online, to find the actual running time of the movie.
If the proper file chain is not the longest chain, you won't be able to use "Full Auto Mode."

Full Auto Mode

To use "Full Auto Mode," simply click the "Back" button to return to the initial screen. Now, place a check in the "Full auto mode" check box. Click "Next." You will then be prompted to choose your DVD drive, then click "OK." FairUse Wizard will parse the IFO files and then prompt you to select the drive that contains the DVD one more time before beginning the conversion process. Click "OK."

Manual Mode

If you cannot (or don't wish to) use Full Auto Mode, choose the appropriate video chain and click "Next." FairUse Wizard will first go through the process of indexing the video. Note: if you get a runtime error during this portion of the process, it likely means that FairUse Wizard cannot handle the copy protection, and thus cannot convert the DVD. FairUse Wizard will automatically detect a cropping region. If necessary, you can edit the cropping region by adjusting the cropping region settings to the left. Click "Next." Next, click "Auto Detect" to choose the proper field combination. Click "OK" on the pop-up window that displays your Field Mode, then click "Next." This next screen is mainly comprised of settings from the Options screen. You can make changes at this point, such as codec or output size. Click "Next" when ready.

Video Conversion

Now the video conversion process will begin. This may take a few hours depending on your system's hardware. Note: there is a check box to "Shutdown computer when done" if you choose to run the conversion overnight or before leaving for work. The first phase will be video encoding, then the audio. If you chose the "Two Pass" option, your video will be encoded again on the second pass. Then you're finished. Unfortunately, FairUse Wizard doesn't clean up after itself very well. After the process is complete, you'll want to browse to your output directory and delete all the temporary files, as they take up a considerable amount of hard drive space. Now you're ready to enjoy your movie.

Conclusion

FairUse Wizard is a nice way to back up your DVD movies to good quality .avi files. You can store them on your hard drive, watch them on a media PC, or burn them to disc. Many DVD players even allow for playback of DivX or XviD encoded video from a CD or DVD. For those of you with children, you can burn that AVI file to CD for your kids, and keep your original DVDs stored safely out of harm's way.

Download

Download FairUse Wizard 2.9 LE

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
After having been almost unchanged for several years, Process Management has, starting with the 11.1.2 release of Oracle's Hyperion Planning, not only got a new name: "Approvals" now offers the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so-called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called a Promotional Path. I'd like to introduce you to the changes and enhancements in this new process management and arouse your curiosity to check out more details on it.

One reason for using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn't responsible for all data being entered into that entity, but only for part of it, it was not possible to split the ownership along an additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so-called Planning Unit Hierarchy (PUH) in Approvals, this gap is now closed.

Complementing this, new Shared Services roles for Planning have been created in order to manage the set-up and use of Approvals. The Approvals Administrator consists of the following roles:

Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities).
Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned.
Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies, and can edit validation rules on data forms for which access is assigned (this includes Planner and Ownership Assigner responsibilities as well).

Set-up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUHs or edit existing ones. The following window displays:

After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are:

All, which pre-selects all entities to be included for the definitions on the subsequent tabs.
None; manual entity selections will be made subsequently.
Custom, which offers the selection of an ancestor and the relative generations that should be included for further definitions.

Finally a pattern needs to be selected, which will determine the general flow of ownership:

Free-form uses the flow/assignment of ownership according to Planning releases prior to 11.1.2.
In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by additional reviewers defined in the promotional path.
Distributed uses data input at the leaf level, while ownership starts at the top level and is then distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process.
Proceeding to the next step, a secondary dimension and the respective members from that dimension can now be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you can click on the icon right beside the selected member field and use the check-boxes in the appearing list to deselect members.

--------------------------------------------------------------------------------------------------------
TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed), you should consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the shown list of include or exclude options (children, descendants, etc.). In any case, in order to apply dimension changes impacting the PUH, a synchronization must be run. Whether this is really necessary is shown on the first screen after selecting, from the menu, Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is currently changing or synchronizing the PUH. Select one of the non-synchronized PUHs (status No) and click the Synchronize option in order to execute.
--------------------------------------------------------------------------------------------------------

In the next step, owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right beside the columns for Owner and Reviewer, the respective assignments can be made in the order that you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity + member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users can be defined to be notified about promotions for a planning unit.

--------------------------------------------------------------------------------------------------------
TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units.
--------------------------------------------------------------------------------------------------------

In order to complete your PUH definitions, click Finish; this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions, click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set-up is complete for starting the approvals process.
Start, stop and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here:

Export/Import of PUHs
Out of Office agent
Validation Rules changing the promotional/approval path if violated (including the use of User-Defined Attributes (UDAs))
Various new and helpful reviewer actions with corresponding approval states

About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Convert DVD to MP4 / H.264 with HD Decrypter and Handbrake

    - by DigitalGeekery
Are you looking for a way to convert your DVD collection to high quality MP4 files? Today we are going to take a look at using DVDFab HD Decrypter along with Handbrake to convert DVDs to MP4 using the H.264 codec.

Process Overview

Handbrake is a great file conversion application, but it unfortunately can't handle DVD copy protection. For that we will use DVDFab's HD Decrypter. HD Decrypter is the always-free portion of the DVDFab application. What HD Decrypter will do is remove the copy protection from your DVD and copy the Video-TS and Audio-TS folders to your hard drive. Once the copy protection is gone, we will use Handbrake to convert the files to MP4 format with H.264 compression. Note: you'll get full access to all the options in DVDFab during the 30-day trial period. However, the HD Decrypter is free and will continue to work.

Ripping the DVD

Install both Handbrake and DVDFab HD Decrypter. (Download links below.) Once the applications are installed, place your DVD into your DVD drive and open DVDFab. On the welcome screen, click "Start DVDFab."

You'll be prompted to choose your region. Click "OK." The disc is analyzed and opened, and you'll be brought to the main interface. Make sure you have the Full Disc option selected at the left panel and "Copy DVD-Video (VIDEO_TS folder)" is selected. Click "Start." Don't be confused by the "DVD to DVD" option pop-up. We won't actually be burning to DVD; the HD Decrypter portion of the DVDFab suite is part of the DVD to DVD option. Click "OK." The DVD will be ripped to your hard drive. When the copy process is complete, you'll be prompted to insert media to start the write process. We aren't going to be burning to disc, so just click Cancel, then close out of DVDFab.

Converting to MP4

Now we are ready to convert. Open Handbrake and click on the "Source" button at the top left. Select DVD / VIDEO_TS folder from the drop-down list. Now we need to browse for the location where DVDFab HD Decrypter copied your movie. By default, that location will be the \DVDFab\Temp\FullDisc directory in your Documents folder. For example, in Windows 7, it would be: C:\Users\%username%\Documents\DVDFab\Temp\FullDisc\[Name of Your DVD] Select the folder, and click "OK." You may be prompted to set a default path in Handbrake. This is an optional step. Click "OK." If you'd like to set a default destination folder, go to Tools on the top menu and select Options. On the General tab, click "Browse" to select a destination output folder. Click "Close" when finished.

Next, click the dropdown list next to "Title." Select the title that matches the length of the movie. It's possible you may see more than one title with a similar length. If so, consult the DVD information, or a site like IMDB.com, to find the proper movie title length. Select your container under Output Settings. This will be your final output file extension. We will be using MP4 for this example; you also have the option of MKV.

If you didn't set up a default destination folder, you'll need to select one by clicking the "Browse" button. You can manually customize the output file name and change the output file extension to .mp4 (unless you prefer the iPod-friendly .m4v extension).

Settings

There are a variety of custom settings that can be changed either through the tabs listed under Output Settings, or by selecting one of the Presets to the right.
If converting exclusively for any of the devices listed in the preset list, simply click on that device and the settings will be automatically applied in the Output Settings tabs. For more universal (non-Apple) devices or output, select the Normal profile.

For the most part, the presets will suit quite nicely. However, you can further customize settings if you'd like. The Picture tab allows you to tweak the size or cropping region. You must change Anamorphic to Loose or Custom to change the size.

The Video tab allows you to choose your codec; H.264 is the default. You also have the option to choose a target (output) size. The Constant Quality is recommended to be set between 59% and 63%. Anything over 70% will likely result in an output file larger than the input without any improved quality. On the Subtitles tab, you can select an available subtitle from the dropdown list and click "Add" to add it to the output file. When you've finished any customizations you are ready to begin the conversion process. Click "Start." A command window will open and you can follow the process. You'll probably want to find something to do in the meantime, as the process could take a couple of hours. When the process completes, you're ready to watch your video.

Although it's a time-consuming process that involves a couple of steps, this method will give you high quality H.264 video files. If you want to rip and burn your DVDs to ISO, check out our article on how to rip and convert DVDs to an ISO image.

Links

Download DVDFab HD Decrypter (Part of the DVDFab suite)
Download Handbrake

    Read the article

  • Why does Akonadi on KDE 4.6.0 refuse to start?

    - by Patches
Akonadi refuses to start on my fresh installation of KDE 4.6.0 from the kubuntu-backports PPA on Ubuntu 10.10 Maverick Meerkat, preventing me from using KMail. Here is the full error output: patches@pleistocene:~/.local/share$ akonadictl start Starting Akonadi Server... done. patches@pleistocene:~/.local/share$ Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb772e400] 3: [0xb772e416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6e9f941] 5: /lib/libc.so.6(abort+0x182) [0xb6ea2e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb74d62dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb757168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb7581425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb758295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6e8bce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb77ae400] 3: [0xb77ae416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6f1f941] 5: /lib/libc.so.6(abort+0x182) [0xb6f22e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75562dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb75f168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb7601425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb760295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6f0bce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb778b400] 3: [0xb778b416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6efc941] 5: /lib/libc.so.6(abort+0x182) [0xb6effe42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75332dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb75ce68e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb75de425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb75df95d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6ee8ce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) search paths: ("/home/patches/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/usr/games", "/usr/sbin", "/usr/local/sbin", "/usr/local/libexec", "/usr/libexec", "/opt/mysql/libexec", "/opt/local/lib/mysql5/bin", "/opt/mysql/sbin") Found mysql_install_db: "/usr/bin/mysql_install_db" Found mysqlcheck: "/usr/bin/mysqlcheck" Database process exited unexpectedly during initial connection! executable: "/usr/sbin/mysqld-akonadi" arguments: ("--defaults-file=/home/patches/.local/share/akonadi//mysql.conf", "--datadir=/home/patches/.local/share/akonadi/db_data/", "--socket=/home/patches/.local/share/akonadi/socket-pleistocene/mysql.socket") stdout: "" stderr: "Could not open required defaults file: /home/patches/.local/share/akonadi//mysql.conf Fatal error in defaults handling. Program aborted 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Warning] Can't create test file /home/patches/.local/share/akonadi/db_data/pleistocene.lower-test 110209 16:41:12 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld-akonadi: Can't find file: './mysql/plugin.frm' (errno: 13) 110209 16:41:12 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110209 16:41:12 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name ./ibdata1 InnoDB: File operation call: 'create'. InnoDB: Cannot continue operation. 
" exit code: 1 process error: "Unknown error" "[ 0: akonadiserver(_Z11akBacktracev+0x35) [0x8086055] 1: akonadiserver() [0x8086516] 2: [0xb784e400] 3: [0xb784e416] 4: /lib/libc.so.6(gsignal+0x51) [0xb6fbf941] 5: /lib/libc.so.6(abort+0x182) [0xb6fc2e42] 6: /usr/lib/libQtCore.so.4(_Z17qt_message_output9QtMsgTypePKc+0x8c) [0xb75f62dc] 7: akonadiserver(_ZN15FileDebugStream9writeDataEPKcx+0xc4) [0x8087574] 8: /usr/lib/libQtCore.so.4(_ZN9QIODevice5writeEPKcx+0x8e) [0xb769168e] 9: /usr/lib/libQtCore.so.4(+0x103425) [0xb76a1425] 10: /usr/lib/libQtCore.so.4(_ZN11QTextStreamD1Ev+0x3d) [0xb76a295d] 11: akonadiserver(_ZN6QDebugD1Ev+0x43) [0x8081b73] 12: akonadiserver(_ZN13DbConfigMysql19startInternalServerEv+0x1c27) [0x810c177] 13: akonadiserver(_ZN7Akonadi13AkonadiServer20startDatabaseProcessEv+0xe3) [0x8087a23] 14: akonadiserver(_ZN7Akonadi13AkonadiServerC1EP7QObject+0xca) [0x8088b6a] 15: akonadiserver(_ZN7Akonadi13AkonadiServer8instanceEv+0x48) [0x808a1d8] 16: akonadiserver(main+0x364) [0x8080fb4] 17: /lib/libc.so.6(__libc_start_main+0xe7) [0xb6fabce7] 18: akonadiserver() [0x8080b81] ] " ProcessControl: Application 'akonadiserver' returned with exit code 255 (Unknown error) "akonadiserver" crashed too often and will not be restarted! I tried moving the ~/.local/share/akonadi folder and running it fresh, and I also tried starting Akonadi from a brand new user, all to no avail. Requested by @djeikyb: patches@pleistocene:~$ ls -ld ~/.local drwxrwx--- 3 patches patches 4096 2011-02-07 03:15 /home/patches/.local patches@pleistocene:~$ mysql_upgrade Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect FATAL ERROR: Upgrade failed patches@pleistocene:~$ mysql_upgrade -S ~/.local/share/akonadi/socket-pleistocene/ Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--socket=/home/patches/.local/share/akonadi/socket-pleistocene/' mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/home/patches/.local/share/akonadi/socket-pleistocene/' (111) when trying to connect FATAL ERROR: Upgrade failed

    Read the article

  • The Business of Winning Innovation: An Exclusive Blog Series

    - by Kerrie Foy
    "The Business of Winning Innovation” is a series of articles authored by Oracle Agile PLM experts on what it takes to make innovation a successful and lucrative competitive advantage. Our customers have proven Agile PLM applications to be enormously flexible and comprehensive, so we’ve launched this article series to showcase some of the most fascinating, value-packed use cases. In this article by Keith Colonna, we kick-off the series by taking a look at the science side of innovation within the Consumer Products industry and how PLM can help companies innovate faster, cheaper, smarter. This article will review how innovation has become the lifeline for growth within consumer products companies and how certain companies are “winning” by creating a competitive advantage for themselves by taking a more enterprise-wide,systematic approach to “innovation”.   Managing the Science of Innovation within the Consumer Products Industry By: Keith Colonna, Value Chain Solution Manager, Oracle The consumer products (CP) industry is very mature and competitive. Most companies within this industry have saturated North America (NA) with their products thus maximizing their NA growth potential. Future growth is expected to come from either expansion outside of North America and/or by way of new ideas and products. Innovation plays an integral role in both of these strategies, whether you’re innovating business processes or the products themselves, and may cause several challenges for the typical CP company, Becoming more innovative is both an art and a science. Most CP companies are very good at the art of coming up with new innovative ideas, but many struggle with perfecting the science aspect that involves the best practice processes that help companies quickly turn ideas into sellable products and services. Symptoms and Causes of Business Pain Struggles associated with the science of innovation show up in a variety of ways, like: · Establishing and storing innovative product ideas and data · Funneling these ideas to the chosen few · Time to market cycle time and on-time launch rates · Success rates, or how often the best idea gets chosen · Imperfect decision making (i.e. the ability to kill projects that are not projected to be winners) · Achieving financial goals · Return on R&D investment · Communicating internally and externally as more outsource partners are added globally · Knowing your new product pipeline and project status These challenges (and others) can be consolidated into three root causes: A lack of visibility Poor data with limited access The inability to truly collaborate enterprise-wide throughout your extended value chain Choose the Right Remedy Product Lifecycle Management (PLM) solutions are uniquely designed to help companies solve these types challenges and their root causes. However, PLM solutions can vary widely in terms of configurability, functionality, time-to-value, etc. Business leaders should evaluate PLM solution in terms of their own business drivers and long-term vision to determine the right fit. Many of these solutions are point solutions that can help you cure only one or two business pains in the short term. Others have been designed to serve other industries with different needs. Then there are those solutions that demo well but are owned by companies that are either unable or unwilling to continuously improve their solution to stay abreast of the ever changing needs of the CP industry to grow through innovation. 
What the Right PLM Solution Should Do for You

Based on more than twenty years working in the CP industry, I recommend investing in a single solution that can help you solve all of the issues associated with the science of innovation in a totally integrated fashion. By integration I mean (1) the integration of all of the processes associated with the development, maintenance and delivery of your product data, and (2) the integration, or harmonization, of this product data with other downstream sources, like ERP, product catalogues and the GS1 Global Data Synchronization Network (or GDSN, which is now a CP industry requirement for doing business with most retailers).

The right PLM solution should help you:

Increase Revenue. A best-practice PLM solution should help a company grow its revenues by shortening the product development cycle time and helping companies get new and improved products to market sooner. PLM should also eliminate many of the root causes for a product being returned, refused and/or reclaimed (which takes away from top-line growth) by creating an enterprise-wide, collaborative, workflow-driven environment.

Reduce Costs. A strong PLM solution should help shave many unnecessary costs that companies typically take for granted. Rationalizing SKUs, components (ingredients and packaging) and suppliers is a major opportunity at most companies that PLM should help address. A natural outcome of this rationalization is lower direct material spend and a reduction of inventory. Another cost-cutting opportunity comes with PLM when it helps companies avoid certain costs associated with process inefficiencies that lead to scrap, rework, excess and obsolete inventory, poor end-of-life administration, higher costs of quality and regulatory compliance, and increased expediting.

Mitigate Risk. Risks are the hardest to quantify but can be the most costly to a company. Food safety, recalls, line shutdowns, customer dissatisfaction and, worst of all, the potential tarnishing of your brands are a few of the debilitating risks that CP companies deal with on a daily basis. These risks are so uniquely severe that they require an enterprise PLM solution specifically designed for the CP industry, one that safeguards product information and processes while still allowing the art of innovation to flourish.

Many CP companies have already created a winning advantage by leveraging a single, best-practice PLM solution to establish an enterprise-wide, systematic approach to innovation.

Oracle's Answer for the Consumer Products Industry

Oracle is dedicated to solving the growth and innovation challenges facing the CP industry. Oracle's Agile Product Lifecycle Management for Process solution was originally developed with and for CP companies and is driven by a specialized development staff solely focused on maintaining and continuously improving the solution per the latest industry requirements. Agile PLM for Process helps CP companies handle all of the processes associated with managing the science of the innovation process, including specification management, new product development/project and portfolio management, formulation optimization, supplier management, and quality and regulatory compliance, to name a few. And as I mentioned earlier, integration is absolutely critical. Many Oracle CP customers, both with Oracle ERP systems and non-Oracle ERP systems, report benefits from Oracle's Agile PLM for Process.
In future articles we will explain in greater detail how both existing Oracle customers (like Gallo, Smuckers, Land-O-Lakes and Starbucks) and new Oracle customers (like ConAgra, Tyson, McDonalds and Heinz) have all realized the benefits of Agile PLM for Process and its integration with their ERP systems.

More to Come

Stay tuned for more articles in our blog series "The Business of Winning Innovation." While we will also feature articles focused on other industries, look forward to more on how Agile PLM for Process addresses innovation challenges facing the CP industry. Additional topics include: Innovation Data Management (IDM), New Product Development (NPD), Product Quality Management (PQM), Menu Management, Private Label Management, and more!

Watch this video for more info about Agile PLM for Process

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
Background

Designing stored procedures that are safe for multiple subscribers to call simultaneously can be challenging. For example, let's say that you want multiple worker processes to poll a shared work queue that's encapsulated as a SQL table. This is a common scenario, and through experience you'll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table.

There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both the NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will skip any records that are locked due to being updated/inserted (or otherwise "dirty"), whereas a SELECT with NOLOCK ignores all locks and will return dirty reads.

For the initial update of the flag (that marks the record as available for subscription) I don't use the NOLOCK table hint, because I want to be sensitive to the "active" records in the table and I want to exclude them. I use an update lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST table hint, in order to explicitly lock the records I'm updating (UPDLOCK) but not place a lock on the table when selecting the records that I'm going to update (READPAST).

UPDATEs should be allowed to lock the rows affected, because we're probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK hint to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process holds a shared read lock in place. Thus, without the UPDLOCK hint the result would be a lock escalation deadlock; with the UPDLOCK hint this condition is mitigated. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to READ COMMITTED (rather than the default of SERIALIZABLE).

Guidance

In the stored procedure that returns records to the multiple subscribers:

Perform the UPDATE first. Change the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that "got stuck" in an intermediate state, or for other auditing purposes.
In the UPDATE statement, use the (UPDLOCK) table hint to prevent lock escalation.
In the UPDATE statement, also use a WHERE clause that uses a sub-select with a (READPAST) table hint to select the records that you're going to update.
In the UPDATE statement, use the OUTPUT clause in conjunction with a temporary table to isolate the record(s) that you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records' identifiers within the same operation.
Finally, do a set-based SELECT on the main table (using the temporary table to identify the records in the set) with either a READPAST or NOLOCK table hint.
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario. See the following as an example (a C# sketch of a subscriber calling this procedure appears at the end of this post):

CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
AS
BEGIN
    -- OVERRIDE THE DEFAULT ISOLATION LEVEL
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    -- SET NOCOUNT ON
    SET NOCOUNT ON

    -- DECLARE TEMP TABLE
    -- Note that this example uses CustomerId as an identifier;
    -- you could just use the Identity column Id if that's all you need.
    DECLARE @CustomersTempTable TABLE
    (
        CustomerId NVARCHAR(255)
    )

    -- PERFORM UPDATE FIRST
    -- [Customers] is the name of the table
    -- [Id] is the Identity column on the table
    -- [CustomerId] is the business document key used to identify the
    -- record globally, i.e. in other systems or across SQL tables
    -- [Status] is an INT or BIT field (if the status is a binary state)
    -- [LastUpdated] is a datetime field used to record the time of the
    -- last update
    UPDATE [Customers] WITH (UPDLOCK)
    SET [Status] = 1,
        [LastUpdated] = GETDATE()
    OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
    WHERE ([Id] IN (SELECT TOP 100 [Id]
                    FROM [Customers] WITH (READPAST)
                    WHERE ([Status] = 0)
                    ORDER BY [Id] ASC))

    -- PERFORM SELECT FROM ENTITY TABLE
    SELECT [C].[CustomerId],
           [C].[FirstName],
           [C].[LastName],
           [C].[Address1],
           [C].[Address2],
           [C].[City],
           [C].[State],
           [C].[Zip],
           [C].[ShippingMethod],
           [C].[Id]
    FROM [Customers] AS [C] WITH (NOLOCK),
         @CustomersTempTable AS [TEMP]
    WHERE ([C].[CustomerId] = [TEMP].[CustomerId])
END

In a system that has been designed to have multiple status values for records that need to be processed in the work queue, it is necessary to have a "watch dog" process by which "stale" records in intermediate states (such as "In Progress") are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the work queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled/stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records, I would recommend using the same kind of lock semantics as suggested above.
The example below looks for records that have been in the "In Progress" state ([Status] = 1) for greater than 60 seconds:

CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
AS
BEGIN
    -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    -- SET NOCOUNT ON
    SET NOCOUNT ON

    DECLARE @MaxWait int;
    SET @MaxWait = 60

    IF EXISTS (SELECT 1
               FROM [dbo].[Customers] WITH (READPAST)
               WHERE ([Status] = 1)
               AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait))
    BEGIN
        SELECT 1 AS [IsWatchDogError]
    END
    ELSE
    BEGIN
        SELECT 0 AS [IsWatchDogError]
    END
END

Downloads

The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records. I am very grateful to Red-Gate software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.

References

http://msdn.microsoft.com/en-us/library/ms187373.aspx
http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
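To round this off, here is a minimal C# sketch of what one of the polling subscribers might look like. This is not part of the original design above; the connection string, the 5-second poll interval and the console output are assumptions, and error handling and cancellation are omitted for brevity.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

// Minimal polling subscriber: each worker process runs this loop, and the
// UPDLOCK/READPAST semantics inside usp_NewCustomersSelect make it safe for
// several such processes to poll the same table at once.
class WorkQueueSubscriber
{
    static void Main()
    {
        // Hypothetical connection string; adjust for your environment.
        const string connectionString =
            "Server=.;Database=SampleDb;Integrated Security=true";

        while (true)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.usp_NewCustomersSelect", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each row returned has already been claimed ([Status] = 1)
                        // by this call, so it is safe to process it here.
                        Console.WriteLine("Processing customer {0}", reader["CustomerId"]);
                    }
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(5)); // assumed poll interval
        }
    }
}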

    Read the article

  • PostgreSQL 8.4 won't start after blackout

    - by RiZe
I have a problem with starting PostgreSQL 8.4 on Ubuntu 9.10 Server after a blackout. When I try to connect to the database it says: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. When I try to start it by using the command sudo -u postgres /etc/init.d/postgresql-8.4 start * Starting PostgreSQL 8.4 database server [ OK ] Netstat output netstat -tulp (No info could be read for "-p": geteuid()=1000 but you should be root.) Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 localhost:postgresql *:* LISTEN - tcp 0 0 192.168.1.35:svn *:* LISTEN - tcp 0 0 192.168.1.35:http-alt *:* LISTEN - tcp 0 0 *:ssh *:* LISTEN - tcp6 0 0 localhost:postgresql [::]:* LISTEN - tcp6 0 0 [::]:ssh [::]:* LISTEN - udp 0 0 *:bootpc *:* - But it still doesn't work, so let's restart it sudo -u postgres /etc/init.d/postgresql-8.4 restart * Restarting PostgreSQL 8.4 database server * The PostgreSQL server failed to start. Please check the log output: 2009-11-30 13:39:37 CET LOG: database system was shut down at 2009-11-30 13:39:33 CET 2009-11-30 13:39:37 CET LOG: autovacuum launcher started 2009-11-30 13:39:37 CET LOG: database system is ready to accept connections 2009-11-30 13:39:37 CET LOG: incomplete startup packet 2009-11-30 13:39:38 CET LOG: server process (PID 2240) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:38 CET LOG: terminating any other active server processes 2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:37 CET 2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D464C 2009-11-30 13:39:38 CET LOG: redo is not required 2009-11-30 13:39:38 CET LOG: autovacuum launcher started 2009-11-30 13:39:38 CET LOG: database system is ready to accept connections 2009-11-30 13:39:38 CET LOG: server process (PID 2248) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:38 CET LOG: terminating any other active server processes 2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:38 CET 2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D4690 2009-11-30 13:39:38 CET LOG: redo is not required 2009-11-30 13:39:39 CET LOG: autovacuum launcher started 2009-11-30 13:39:39 CET LOG: database system is ready to accept connections 2009-11-30 13:39:39 CET LOG: server process (PID 2256) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:39 CET LOG: terminating any other active server processes 2009-11-30 13:39:39 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:39 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:38 CET 2009-11-30 13:39:39 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:39 CET LOG: record with zero length at 0/11D46D4 2009-11-30 13:39:39 CET LOG: redo is not required 2009-11-30 13:39:39 CET LOG: autovacuum launcher started 2009-11-30 13:39:39 CET LOG: database system is ready to accept connections 2009-11-30 13:39:39 CET LOG: server process (PID 2264) 
was terminated by signal 11: Segmentation fault
2009-11-30 13:39:39 CET LOG: terminating any other active server processes
2009-11-30 13:39:39 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:39 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:39 CET
2009-11-30 13:39:39 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:40 CET LOG: record with zero length at 0/11D4718
2009-11-30 13:39:40 CET LOG: redo is not required
2009-11-30 13:39:40 CET LOG: autovacuum launcher started
2009-11-30 13:39:40 CET LOG: database system is ready to accept connections
2009-11-30 13:39:40 CET LOG: server process (PID 2272) was terminated by signal 11: Segmentation fault
2009-11-30 13:39:40 CET LOG: terminating any other active server processes
2009-11-30 13:39:40 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:40 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:40 CET
2009-11-30 13:39:40 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:40 CET LOG: record with zero length at 0/11D475C
2009-11-30 13:39:40 CET LOG: redo is not required
2009-11-30 13:39:40 CET LOG: autovacuum launcher started
2009-11-30 13:39:40 CET LOG: database system is ready to accept connections
2009-11-30 13:39:41 CET LOG: server process (PID 2280) was terminated by signal 11: Segmentation fault
2009-11-30 13:39:41 CET LOG: terminating any other active server processes
2009-11-30 13:39:41 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:41 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:40 CET
2009-11-30 13:39:41 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:41 CET LOG: record with zero length at 0/11D47A0
2009-11-30 13:39:41 CET LOG: redo is not required
2009-11-30 13:39:41 CET LOG: autovacuum launcher started
2009-11-30 13:39:41 CET LOG: database system is ready to accept connections
2009-11-30 13:39:41 CET LOG: server process (PID 2288) was terminated by signal 11: Segmentation fault
2009-11-30 13:39:41 CET LOG: terminating any other active server processes
2009-11-30 13:39:41 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:41 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:41 CET
2009-11-30 13:39:41 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:41 CET LOG: record with zero length at 0/11D47E4
2009-11-30 13:39:41 CET LOG: redo is not required
2009-11-30 13:39:41 CET LOG: autovacuum launcher started
2009-11-30 13:39:41 CET LOG: database system is ready to accept connections
2009-11-30 13:39:42 CET LOG: server process (PID 2296) was terminated by signal 11: Segmentation fault
2009-11-30 13:39:42 CET LOG: terminating any other active server processes
2009-11-30 13:39:42 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:42 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:41 CET
2009-11-30 13:39:42 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:42 CET LOG: record with zero length at 0/11D4828
2009-11-30 13:39:42 CET LOG: redo is not required
2009-11-30 13:39:42 CET LOG: autovacuum launcher started
2009-11-30 13:39:42 CET LOG: database system is ready to accept connections
2009-11-30 13:39:42 CET LOG: server process (PID 2304) was terminated by signal 11: Segmentation fault
2009-11-30 13:39:42 CET LOG: terminating any other active server processes
2009-11-30 13:39:42 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:42 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:42 CET
2009-11-30 13:39:42 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:42 CET LOG: record with zero length at 0/11D486C
2009-11-30 13:39:42 CET LOG: redo is not required
2009-11-30 13:39:43 CET LOG: autovacuum launcher started
2009-11-30 13:39:43 CET LOG: database system is ready to accept connections
2009-11-30 13:39:43 CET LOG: server process (PID 2312) was terminated by signal 11: Segmentation fault
2009-11-30 13:39:43 CET LOG: terminating any other active server processes
2009-11-30 13:39:43 CET LOG: all server processes terminated; reinitializing
2009-11-30 13:39:43 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:42 CET
2009-11-30 13:39:43 CET LOG: database system was not properly shut down; automatic recovery in progress
2009-11-30 13:39:43 CET LOG: record with zero length at 0/11D48B0
2009-11-30 13:39:43 CET LOG: redo is not required
2009-11-30 13:39:43 CET LOG: autovacuum launcher started
2009-11-30 13:39:43 CET LOG: database system is ready to accept connections
[fail]
So what happened, and what can I do to solve this? Thanks for any replies.

    Read the article

  • WSAECONNRESET (10054) error using WebDrive to map to a Subversion/Apache WebDAV share

    - by Dylan Beattie
    Hello, I'm using WebDrive to map a drive letter to a WebDAV share running on Subversion with the SVNAutoversioning flag enabled. The Subversion server is running CollabNet Subversion Edge with LDAP authentication. When trying to connect using WebDrive, I get:

    Connecting to site myserver
    Connecting to http://myserver/webdrive/
    Resolving url myserver to an IP address
    Url resolved to IP address 192.168.0.12
    Connecting to 192.168.0.12 on port 80
    Connected successfully to the server on port 80
    Testing directory listing ...
    Connecting to 192.168.0.12 on port 80
    Connected successfully to the server on port 80
    Unable to connect to server, error information below
    Error: Socket receive failure (4507)
    Operation: Connecting to server
    Winsock Error: WSAECONNRESET (10054)

    The httpd.conf file running on the server contains the following section:

    <Location /webdrive/>
      DAV svn
      SVNParentPath "C:\Program Files\Subversion\data\repositories"
      SVNReposName "My Subversion WebDrive"
      AuthzSVNAccessFile "C:\Program Files\Subversion\data/conf/svn_access_file"
      SVNListParentPath On
      Allow from all
      AuthType Basic
      AuthName "My Subversion Repository"
      AuthBasicProvider csvn-file-users ldap-users
      Require valid-user
      ModMimeUsePathInfo on
      SVNAutoversioning on
    </Location>

    and in the Apache error_yyyy_mm_dd.log file on the server, I'm seeing this when I try to connect via WebDAV:

    [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub
    [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie
    [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' OPTIONS webdrive:/
    [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [5572] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub
    [Mon Jan 10 14:53:22 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [5572] auth_ldap authenticate: accepting dylan.beattie
    [Mon Jan 10 14:53:22 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/
    [Mon Jan 10 14:53:25 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting.
    [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd0f18 rmm=0xcd0f48 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
    [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
    [Mon Jan 10 14:53:25 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations
    [Mon Jan 10 14:53:25 2011] [notice] Server built: Oct 4 2010 19:55:36
    [Mon Jan 10 14:53:25 2011] [notice] Parent: Created child process 4368
    [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child
    [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:25 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xca2bb0 rmm=0xca2be0 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:25 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
    [Mon Jan 10 14:53:25 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
    [Mon Jan 10 14:53:25 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'.
    [Mon Jan 10 14:53:25 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'.
    [Mon Jan 10 14:53:25 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'.
    [Mon Jan 10 14:53:25 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads.
    [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Child process is running
    [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(408): Child 4368: Retrieved our scoreboard from the parent.
    [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 288 and sending it to child process 4368
    [Mon Jan 10 14:53:25 2011] [info] Parent: Duplicating socket 276 and sending it to child process 4368
    [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(564): Child 4368: retrieved 2 listeners from parent
    [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Acquired the start mutex.
    [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting 64 worker threads.
    [Mon Jan 10 14:53:25 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 4368
    [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 49159.
    [Mon Jan 10 14:53:25 2011] [notice] Child 4368: Starting thread to listen on port 80.
    [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(379): [client 192.168.0.50] [4368] auth_ldap authenticate: using URL ldap://mydc/dc=mydomain,dc=com?sAMAccountName?sub
    [Mon Jan 10 14:53:25 2011] [debug] mod_authnz_ldap.c(484): [client 192.168.0.50] [4368] auth_ldap authenticate: accepting dylan.beattie
    [Mon Jan 10 14:53:25 2011] [info] [client 192.168.0.50] Access granted: 'dylan.beattie' PROPFIND webdrive:/
    [Mon Jan 10 14:53:28 2011] [notice] Parent: child process exited with status 3221225477 -- Restarting.
    [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xcd4f90 rmm=0xcd4fc0 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
    [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
    [Mon Jan 10 14:53:28 2011] [notice] Apache/2.2.16 (Win32) DAV/2 SVN/1.6.13 configured -- resuming normal operations
    [Mon Jan 10 14:53:28 2011] [notice] Server built: Oct 4 2010 19:55:36
    [Mon Jan 10 14:53:28 2011] [notice] Parent: Created child process 5440
    [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(487): Parent: Sent the scoreboard to the child
    [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:28 2011] [debug] util_ldap.c(1990): LDAP merging Shared Cache conf: shm=0xda2bb0 rmm=0xda2be0 for VHOST: myserver.mydomain.com
    [Mon Jan 10 14:53:28 2011] [info] APR LDAP: Built with Microsoft Corporation. LDAP SDK
    [Mon Jan 10 14:53:28 2011] [info] LDAP: SSL support unavailable: LDAP: CA certificates cannot be set using this method, as they are stored in the registry instead.
    [Mon Jan 10 14:53:28 2011] [error] python_init: Python version mismatch, expected '2.5', found '2.5.4'.
    [Mon Jan 10 14:53:28 2011] [error] python_init: Python executable found 'C:\\Program Files\\Subversion\\bin\\httpd.exe'.
    [Mon Jan 10 14:53:28 2011] [error] python_init: Python path being used 'C:\\Program Files\\Subversion\\Python25\\python25.zip;C:\\Program Files\\Subversion\\Python25\\\\DLLs;C:\\Program Files\\Subversion\\Python25\\\\lib;C:\\Program Files\\Subversion\\Python25\\\\lib\\plat-win;C:\\Program Files\\Subversion\\Python25\\\\lib\\lib-tk;C:\\Program Files\\Subversion\\bin'.
    [Mon Jan 10 14:53:28 2011] [notice] mod_python: Creating 8 session mutexes based on 0 max processes and 64 max threads.
    [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Child process is running
    [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(408): Child 5440: Retrieved our scoreboard from the parent.
    [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 288 and sending it to child process 5440
    [Mon Jan 10 14:53:28 2011] [info] Parent: Duplicating socket 276 and sending it to child process 5440
    [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(564): Child 5440: retrieved 2 listeners from parent
    [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Acquired the start mutex.
    [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting 64 worker threads.
    [Mon Jan 10 14:53:28 2011] [debug] mpm_winnt.c(605): Parent: Sent 2 listeners to child 5440
    [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 49159.
    [Mon Jan 10 14:53:28 2011] [notice] Child 5440: Starting thread to listen on port 80.

    Browsing http://myserver/webdrive/ from a web browser is working fine, and I have a similar set-up working perfectly on a different SVN server that isn't running CollabNet but has had Subversion and Apache installed and configured separately. Any ideas? The Python version error might be a red herring; I've seen it in a couple of places in the log files, and in other scenarios it doesn't appear to be breaking anything...

    Read the article

  • Either, nginx+php-fpm bad config or nginx+php-fpm cannot handle high query?

    - by The Wolf
    I have WordPress installed on my server (configured, hopefully correctly, with nginx + php-fpm + MariaDB). I am trying to import a 1.5MB XML file using the WordPress importer. Every time I try to upload it, the request gets cut off, resulting in just a blank screen. Here is my error log (I have posted only 2 of the errors):

    [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
    [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"

    I don't know why it can't process the WordPress export XML. I have already increased max_file_upload and related settings, but nothing changes. I hope somebody can help me. Here are my configs:

    nginx.conf

    user nginx;
    worker_processes 8;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        server_tokens off;
        keepalive_timeout 65;
        fastcgi_read_timeout 500;
        #gzip on;
        client_max_body_size 2M;

    php-fpm.conf

    ;;;;;;;;;;;;;;;;;;;;;
    ; FPM Configuration ;
    ;;;;;;;;;;;;;;;;;;;;;
    ; All relative paths in this configuration file are relative to PHP's install
    ; prefix.
    ; Include one or more files. If glob(3) exists, it is used to include a bunch of
    ; files from a glob(3) pattern. This directive can be used everywhere in the
    ; file.
    include=/etc/php-fpm.d/*.conf
    ;;;;;;;;;;;;;;;;;;
    ; Global Options ;
    ;;;;;;;;;;;;;;;;;;
    [global]
    ; Pid file
    ; Default Value: none
    pid = /var/run/php-fpm/php-fpm.pid
    ; Error log file
    ; Default Value: /var/log/php-fpm.log
    error_log = /var/log/php-fpm/error.log
    ; Log level
    ; Possible Values: alert, error, warning, notice, debug
    ; Default Value: notice
    ;log_level = notice
    ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
    ; interval set by emergency_restart_interval then FPM will restart. A value
    ; of '0' means 'Off'.
    ; Default Value: 0
    ;emergency_restart_threshold = 0
    ; Interval of time used by emergency_restart_interval to determine when
    ; a graceful restart will be initiated. This can be useful to work around
    ; accidental corruptions in an accelerator's shared memory.
    ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
    ; Default Unit: seconds
    ; Default Value: 0
    ;emergency_restart_interval = 0
    ; Time limit for child processes to wait for a reaction on signals from master.
    ; Available units: s(econds), m(inutes), h(ours), or d(ays)
    ; Default Unit: seconds
    ; Default Value: 0
    ;process_control_timeout = 0
    ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
    ; Default Value: yes
    daemonize = no
    ;;;;;;;;;;;;;;;;;;;;
    ; Pool Definitions ;
    ;;;;;;;;;;;;;;;;;;;;
    ; See /etc/php-fpm.d/*.conf

    ps aux

    [root@host etc]# ps aux
    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init
    root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308]
    root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308]
    root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d
    root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
    root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd
    root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
    root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
    mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
    root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0
    root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
    root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
    root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections
    smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
    root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond
    root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash
    root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
    nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process
    nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process
    nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
    nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process
    root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf)
    apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www
    apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www
    apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www
    apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www
    apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www
    apache 1040 0.0 1.8 39292 19120 ? S 00:30 0:00 php-fpm: pool www
    root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux

    netstat -l

    [root@host etc]# netstat -l
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 0 *:ssh *:* LISTEN
    tcp 0 0 localhost.localdomain:smtp *:* LISTEN
    tcp 0 0 localhost.locald:cslistener *:* LISTEN
    tcp 0 0 *:mysql *:* LISTEN
    tcp 0 0 *:http *:* LISTEN
    tcp 0 0 *:ssh *:* LISTEN
    Active UNIX domain sockets (only servers)
    Proto RefCnt Flags Type State I-Node Path
    unix 2 [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux
    unix 2 [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart
    unix 2 [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock

    I hope somebody can help me figure out what the problem is.

    Read the article

  • Globe Trotters: Asian Healthcare CIOs need ‘Security Inside Out’ Approach

    - by Tanu Sood
    In our second edition of Globe Trotters, we wanted to share a feature article that was recently published in Enterprise Innovation. EnterpriseInnovation.net, part of Questex Media Group, is Asia's premier business and technology publication. The article featured MOH Holdings (a holding company of Singapore's Public Healthcare Institutions) and highlighted the National Electronic Health Record (NEHR) system currently being deployed in Singapore.

    According to the feature, the NEHR system was built to facilitate seamless exchange of medical information as patients move across different healthcare settings, and to give healthcare providers more timely access to patients' healthcare records in Singapore. The NEHR consolidates all clinically relevant information from patients' visits across the healthcare system throughout their lives and pulls it into a single record. It allows for data sharing, making the record accessible to authorized healthcare providers across the continuum of care throughout the country.

    In healthcare, patient data privacy is critical, as is the need to avoid unauthorized access to electronic medical records. As Alan Dawson, director for infrastructure and operations at MOH Holdings, is quoted in the feature: "Protecting the perimeter is no longer enough. Healthcare CIOs today need to adopt a 'security inside out' approach that protects information assets all the way from databases to end points."

    Oracle has long advocated the 'Security Inside Out' approach. From operating systems and infrastructure to databases, middleware, and applications, organizations need to build in security at every layer and between these layers. This comprehensive approach to security has never been as important as it is today in the social, mobile, cloud (SoMoClo) world.

    To learn more about Oracle's Security Inside Out approach, visit our Security page. And for more information on how to prevent unauthorized access, streamline user administration, bolster security, and enforce compliance in healthcare, learn more about Oracle Identity Management.

    Read the article

  • Convert Excel File 'xls' to CSV, CAUTION: Bumps Ahead

    - by faizanahmad
    The task was to provide users with an interface where they could upload 'csv' files; these files were to be processed and loaded into the database by a console application. The code in the console application could not handle 'xls' files, so we thought, OK, let's convert 'xls' to 'csv' in the code. Seemed like fun. The idea was to convert the file to 'csv' right after uploading. As Microsoft does not recommend using the Excel objects in ASP.NET, we decided to use the Jet engine to open the xls (the ACE driver is used for xlsx). The code is pretty straightforward and can be found at the following links:

    http://www.c-sharpcorner.com/uploadfile/yuanwang200409/102242008174401pm/1.aspx
    http://www.devasp.net/net/articles/display/141.html

    FIRST BUMP, 'OleDbException (0x80004005): Unspecified error' (impersonation): The above code ran fine in my test web site and test console application, but it gave an 'OleDbException (0x80004005): Unspecified error' in the main web site. It turns out impersonation was set to True, and as soon as I changed it to False, it did work. On my XP box, the web site was running under:

    - user 'ASPNET' with impersonation set to FALSE
    - user 'IUSR_*' (the IIS guest user) with impersonation set to TRUE

    The weird part was that both users had the same rights on the folders I was saving files to and on the Excel app in DCOM Config. We decided to give it a try on Windows Server 2003 with the web site set to Windows authentication (impersonation = true), and yes, it did work.

    SECOND BUMP, 'External table not in correct format': I got this error with some files, and it appeared that the files from the client had some metadata issues (when I opened such a file in Excel and tried to save it, Excel would give me an error saying the file cannot be saved in the current format); the error was caused by that. Some people were able to resolve the error by using "Extended Properties=HTML Import;" in the connection string, but it did not work for me. We decided to detour from here and use the Excel object :( as we had no control over the client setting the metadata of the Excel files.

    Before the third bump there were a couple of small thingies, like 'Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80070005'. The fix can be found at http://blog.crowe.co.nz/archive/2006/03/02/589.aspx

    THIRD BUMP (could not get rid of the EXCEL process): I had all the code in place to quit Excel, but it just did not work. The workaround was to kill the process, as we knew no other application on the server was using Excel. The normal steps to quit the Excel application worked just fine in the console application, though.

    FOURTH BUMP: The code worked with file 1 on my machine, but with file 2 it would break; the same code worked perfectly fine with file 2 on some other machine. We moved it to QA (Windows Server 2003) and it worked with every file just fine. But then there was another problem: one user could upload and a second could not, even though folder permissions and DCOM Config had been checked. Another detour: upload the xls as-is and convert it in the console application.

    Lessons learnt:

    - If it's 'xlsx', use the ACE driver, or read the XML within the Excel file as recommended by MS.
    - If it's xls and you know it's always going to be properly formatted, use the Jet engine (a sketch follows after the code below).

    Code:

    Imports System.IO
    Imports Microsoft.Office.Interop

    Private Function ConvertFile(ByVal SourceFolder As String, ByVal FileName As String, ByVal FileExtension As String) As Boolean
        Dim appExcel As New Excel.Application
        Dim workBooks As Excel.Workbooks = appExcel.Workbooks
        Dim objWorkbook As Excel.Workbook
        ' Build the full path to the uploaded workbook
        Dim CompleteFilePath As String = SourceFolder & FileName & FileExtension
        Try
            objWorkbook = workBooks.Open(CompleteFilePath)
            objWorkbook.SaveAs(Filename:=CObj(SourceFolder & FileName & ".csv"), FileFormat:=Excel.XlFileFormat.xlCSV)
        Catch ex As Exception
            GenerateAlert(ex.Message().Replace("'", "") & " Error Converting File to CSV.")
            LogError(ex)
            Return False
        Finally
            If Not (objWorkbook Is Nothing) Then
                objWorkbook.Close(SaveChanges:=CObj(False))
            End If
            ReleaseObj(objWorkbook)
            ReleaseObj(workBooks)
            appExcel.Quit()
            ReleaseObj(appExcel)
            ' Last resort: kill any remaining Excel processes
            ' (only safe because nothing else on this server uses Excel)
            Dim proc As System.Diagnostics.Process
            For Each proc In System.Diagnostics.Process.GetProcessesByName("EXCEL")
                proc.Kill()
            Next
            DeleteSourceFile(SourceFolder & FileName & FileExtension)
        End Try
        Return True
    End Function

    Private Sub ReleaseObj(ByVal o As Object)
        Try
            System.Runtime.InteropServices.Marshal.ReleaseComObject(o)
        Catch ex As Exception
            LogError(ex)
        Finally
            o = Nothing
        End Try
    End Sub

    Protected Sub DeleteSourceFile(ByVal CompleteFilePath As String)
        Try
            Dim MyFile As FileInfo = New FileInfo(CompleteFilePath)
            If MyFile.Exists Then
                File.Delete(CompleteFilePath)
            Else
                Throw New FileNotFoundException()
            End If
        Catch ex As Exception
            GenerateAlert("Source File could not be deleted.")
            LogError(ex)
        End Try
    End Sub

    The code to kill the process (avoid it if you can):

    Dim proc As System.Diagnostics.Process
    For Each proc In System.Diagnostics.Process.GetProcessesByName("EXCEL")
        proc.Kill()
    Next
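    As a rough illustration of the 'Jet engine' route (reading the xls with no Excel automation at all), here is a minimal PowerShell sketch using the Jet OLE DB provider. The file path and sheet name are assumptions for illustration, and the 32-bit Jet 4.0 provider must be installed on the machine:

    # Read an .xls via the Jet OLE DB provider and write it out as CSV,
    # with no Excel process involved. Paths and sheet name are hypothetical.
    $xls = 'C:\Uploads\report.xls'
    $csv = 'C:\Uploads\report.csv'
    $connectionString = 'Provider=Microsoft.Jet.OLEDB.4.0;' +
        "Data Source=$xls;" +
        'Extended Properties="Excel 8.0;HDR=Yes;IMEX=1"'
    $connection = New-Object System.Data.OleDb.OleDbConnection($connectionString)
    $adapter = New-Object System.Data.OleDb.OleDbDataAdapter('SELECT * FROM [Sheet1$]', $connection)
    $table = New-Object System.Data.DataTable
    [void]$adapter.Fill($table)    # Fill opens and closes the connection itself
    $table | Export-Csv $csv -NoTypeInformation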

    Read the article

  • Oracle OpenWorld Series: Amit Zavery’s General Session

    - by Michelle Kimihira
    Join Amit Zavery, Vice President of Fusion Middleware Product Management, in this strategy and roadmap session for Fusion Middleware, Innovation Platform for Oracle Apps, including Oracle Fusion Applications (GEN9504), on Monday, October 1st, 10:45 AM - 11:45 AM in Moscone West, 3002/3003.

    Learn the value of Oracle Fusion Applications' architecture and the role of Oracle Application Development Framework, Oracle SOA Suite, Oracle Business Intelligence, Oracle WebCenter, and Oracle Identity Management. Hear how customers like Boeing and Electronic Arts have implemented Oracle Fusion Middleware to improve productivity and lower IT costs today with Oracle Applications, and lay a foundation for business innovation.

    Boeing, the world's largest aerospace company, will talk about their need to automate, streamline, and standardize a common process for Order Capture through Orchestration and Financial/Contract Closeout activities, while dramatically reducing costs. Electronic Arts, a leading global interactive entertainment software company, will talk about their challenge with an overwhelming amount of data arriving in different formats and their need to rationalize their architecture to handle this transformation.

    Additional Information

    - Relevant Blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Oracle OpenWorld Blog
    - Focus On Docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Subscribe to our regular Fusion Middleware Newsletter
    - Follow us on Twitter and Facebook

    Read the article

  • Part 14: Execute a PowerShell script

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    With PowerShell you can add powerful scripting to your build, for example to execute a deployment. If you want more information on PowerShell, please refer to http://technet.microsoft.com/en-us/library/aa973757.aspx

    For this example we will create a simple PowerShell script that prints "Hello world!". To create the script, create a new text file and name it "HelloWorld.ps1". Add to the contents of the script:

    Write-Host "Hello World!"

    To test the script, do the following:

    1. Open the command prompt.
    2. To run the script you must change the execution policy. To do this, execute in the command prompt: powershell set-executionpolicy remotesigned
    3. Go to the directory where you have saved the PowerShell script.
    4. Execute the following command: powershell .\HelloWorld.ps1

    In this example I use a relative path, but when the path to the PowerShell script contains spaces, you need to change the syntax to powershell "& '<full path to script>' ", for example:

    powershell "& 'C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HelloWorld.ps1' "

    In this blog post, I create a new solution, and that solution also includes this PowerShell script. I want to create an argument on the Build Process Template that holds the path to the PowerShell script. In the Build Process Template I will add an InvokeProcess activity to execute the PowerShell command. This InvokeProcess activity needs the location of the script as an argument for the PowerShell command. Since you don't know the full path of this script on the build server, you could specify the relative path of the script in the argument, but it is hard to find out what the relative path is. I prefer to specify the location of the script in source control and then convert that server path to a local path. To do this conversion you can use the ConvertWorkspaceItem activity.

    So to complete the task, open the Build Process Template CustomTemplate.xaml that we created in earlier parts and follow these steps:

    1. Add a new argument called "DeploymentScript" and set the appropriate settings in the metadata. See Part 2: Add arguments and variables for more information.
    2. Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
    3. Add a new If activity and set the condition to "Not String.IsNullOrEmpty(DeploymentScript)" to ensure it will only run when the argument is passed.
    4. Add in the Then branch of the If activity a new Sequence activity and rename it to "Start deployment".
    5. Click on the activity and add a new variable called DeploymentScriptFilename (scoped to the "Start deployment" Sequence).
    6. Add a ConvertWorkspaceItem activity on the "Start deployment" Sequence.
    7. Add an InvokeProcess activity beneath the ConvertWorkspaceItem activity in the "Start deployment" Sequence.
    8. Click on the ConvertWorkspaceItem activity and change the properties:
       DisplayName = Convert deployment script filename
       Input = DeploymentScript
       Result = DeploymentScriptFilename
       Workspace = Workspace
    9. Click on the InvokeProcess activity and change the properties:
       Arguments = String.Format(" ""& '{0}' "" ", DeploymentScriptFilename)
       DisplayName = Execute deployment script
       FileName = "PowerShell"
    10. To see results from the PowerShell command, drop a WriteBuildMessage activity on the "Handle Standard Output" and pass the stdOutput variable to the Message property. Do the same for a WriteBuildError activity on the "Handle Error Output".
    11. To publish it, check in the Build Process Template.

    This leads to the following result. We now go to the build definition that depends on the template and set the path of the deployment script to the server path to HelloWorld.ps1. (If you want to see the result of the PowerShell script, change the Logging verbosity to Detailed or Diagnostic.) Save and run the build.

    A lot of the deployment scripts you have will have some kind of arguments (like username/password or environment variables) that you want to define in the Build Definition. To make the PowerShell script configurable, follow these steps:

    1. Create a new script and give it the name "HelloWho.ps1". Add the following lines as the contents of the file:

       param (
           $person
       )
       $message = [System.String]::Format("Hello {0}!", $person)
       Write-Host $message

       When you now run the script at the command prompt, you will see the output of the script.

    2. So let's change the Build Process Template to accept one parameter for the deployment script. You can of course add a for-loop that reads through a collection of parameters, but that is out of scope for this blog post. Add a new argument called DeploymentScriptParameter.
    3. In the InvokeProcess activity where the PowerShell command is executed, modify the Arguments property to: String.Format(" ""& '{0}' '{1}' "" ", DeploymentScriptFilename, DeploymentScriptParameter) (a worked example of the resulting command line follows at the end of this post).
    4. Check in the Build Process Template.

    Now modify the build definition, set the parameter of the deployment script to any value, and run the build.

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
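    To make the quoting in the Arguments property concrete, here is a minimal sketch of the parameterized script together with the command line the InvokeProcess activity ends up generating; the local path and the parameter value are made up for illustration:

    # HelloWho.ps1 - the parameterized script from this post
    param (
        [string]$person
    )
    $message = [System.String]::Format("Hello {0}!", $person)
    Write-Host $message

    # Assuming DeploymentScriptFilename resolves to 'C:\Builds\1\Scripts\HelloWho.ps1'
    # and DeploymentScriptParameter is 'World', the expression
    # String.Format(" ""& '{0}' '{1}' "" ", ...) produces this command line:
    #
    #   PowerShell "& 'C:\Builds\1\Scripts\HelloWho.ps1' 'World' "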

    Read the article

  • H12 timeout error on Heroku

    - by snowangel
    Can anyone shed some light on what's causing this timeout error on Heroku (at 2012-07-08T08:58:33+00:00)? The docs say that it's because of some long-running process. I've set config.assets.initialize_on_precompile = false in config/application.rb.

    EmBP-2:bc Emma$ heroku restart
    Restarting processes... done
    EmBP-2:bc Emma$ heroku logs --tail
    2012-07-08T08:47:21+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.1" 200 311723 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:47:21+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:47:21 +0000] "GET /assets/application.js HTTP/1.0" 200 1311615 "https://codicology.co.uk/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:51:32+00:00 heroku[slugc]: Slug compilation started
    2012-07-08T08:54:05+00:00 heroku[api]: Release v145 created by [email protected]
    2012-07-08T08:54:05+00:00 heroku[api]: Deploy 8814b2f by [email protected]
    2012-07-08T08:54:05+00:00 heroku[web.1]: State changed from up to starting
    2012-07-08T08:54:06+00:00 heroku[slugc]: Slug compilation finished
    2012-07-08T08:54:09+00:00 heroku[web.1]: Stopping all processes with SIGTERM
    2012-07-08T08:54:09+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
    2012-07-08T08:54:09+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 22429 -c ./config/unicorn.rb`
    2012-07-08T08:54:10+00:00 app[worker.1]: [Worker(host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1)] Exiting...
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.320616 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376765 #1] INFO -- : master complete
    2012-07-08T08:54:11+00:00 app[web.1]: I, [2012-07-08T08:54:11.376272 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011695 #1] INFO -- : worker=0 spawning...
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.011386 #1] INFO -- : listening on addr=0.0.0.0:22429 fd=3
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.017917 #5] INFO -- : worker=0 spawned pid=5
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.019309 #1] INFO -- : master process ready
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.018250 #5] INFO -- : Refreshing Gem list
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.016768 #1] INFO -- : worker=1 spawning...
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020863 #8] INFO -- : Refreshing Gem list
    2012-07-08T08:54:12+00:00 app[web.1]: I, [2012-07-08T08:54:12.020617 #8] INFO -- : worker=1 spawned pid=8
    2012-07-08T08:54:12+00:00 app[worker.1]: SQL (2.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:2046e0bf-e109-40f2-abdb-10f69d224483 pid:1')
    2012-07-08T08:54:12+00:00 heroku[web.1]: Process exited with status 0
    2012-07-08T08:54:13+00:00 heroku[web.1]: State changed from starting to up
    2012-07-08T08:54:14+00:00 heroku[worker.1]: Process exited with status 0
    2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from up to down
    2012-07-08T08:54:14+00:00 heroku[worker.1]: State changed from down to starting
    2012-07-08T08:54:20+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work`
    2012-07-08T08:54:20+00:00 heroku[worker.1]: State changed from starting to up
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:28+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:54:33+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:33+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:54:34+00:00 app[web.1]:
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:34+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:54:41+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:54:45+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:54:46+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:54:47+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.467693 #8] INFO -- : worker=1 ready
    2012-07-08T08:54:48+00:00 app[web.1]: I, [2012-07-08T08:54:48.823800 #5] INFO -- : worker=0 ready
    2012-07-08T08:54:48+00:00 app[worker.1]: Starting the New Relic Agent.
    2012-07-08T08:54:48+00:00 app[worker.1]: New Relic Agent not running.
    2012-07-08T08:54:48+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1
    2012-07-08T08:54:48+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:54:49+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Starting job worker
    2012-07-08T08:57:54+00:00 heroku[web.1]: State changed from up to starting
    2012-07-08T08:57:56+00:00 heroku[web.1]: Stopping all processes with SIGTERM
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047386 #1] INFO -- : reaped #<Process::Status: pid 5 exit 0> worker=0
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047753 #1] INFO -- : reaped #<Process::Status: pid 8 exit 0> worker=1
    2012-07-08T08:57:57+00:00 app[web.1]: I, [2012-07-08T08:57:57.047999 #1] INFO -- : master complete
    2012-07-08T08:57:57+00:00 heroku[worker.1]: Stopping all processes with SIGTERM
    2012-07-08T08:57:58+00:00 heroku[web.1]: Process exited with status 0
    2012-07-08T08:57:58+00:00 app[worker.1]: [Worker(host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1)] Exiting...
    2012-07-08T08:57:59+00:00 heroku[web.1]: Starting process with command `bundle exec unicorn -p 29766 -c ./config/unicorn.rb`
    2012-07-08T08:58:01+00:00 app[worker.1]: SQL (27.9ms) UPDATE "delayed_jobs" SET locked_by = null, locked_at = null WHERE (locked_by = 'host:1eabe514-7ec9-43b0-835b-ff3bd23bc266 pid:1')
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070527 #1] INFO -- : listening on addr=0.0.0.0:29766 fd=3
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.070782 #1] INFO -- : worker=0 spawning...
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.074498 #1] INFO -- : worker=1 spawning...
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.075702 #1] INFO -- : master process ready
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076732 #5] INFO -- : worker=0 spawned pid=5
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.076957 #5] INFO -- : Refreshing Gem list
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089022 #8] INFO -- : worker=1 spawned pid=8
    2012-07-08T08:58:02+00:00 app[web.1]: I, [2012-07-08T08:58:02.089299 #8] INFO -- : Refreshing Gem list
    2012-07-08T08:58:02+00:00 heroku[worker.1]: Process exited with status 0
    2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from up to down
    2012-07-08T08:58:02+00:00 heroku[worker.1]: State changed from down to starting
    2012-07-08T08:58:02+00:00 heroku[web.1]: State changed from starting to up
    2012-07-08T08:58:10+00:00 heroku[worker.1]: Starting process with command `bundle exec rake jobs:work`
    2012-07-08T08:58:11+00:00 heroku[worker.1]: State changed from starting to up
    2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:58:28+00:00 app[worker.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/Rakefile:10)
    2012-07-08T08:58:33+00:00 heroku[router]: Error H12 (Request timeout) -> GET codicology.co.uk/ dyno=web.1 queue= wait= service=30000ms status=503 bytes=0
    2012-07-08T08:58:33+00:00 heroku[nginx]: 127.0.0.1 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.0" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:58:33+00:00 heroku[nginx]: 82.69.50.215 - - [08/Jul/2012:08:58:33 +0000] "GET / HTTP/1.1" 503 601 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7" codicology.co.uk
    2012-07-08T08:58:42+00:00 app[worker.1]: New Relic Agent not running.
    2012-07-08T08:58:42+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] New Relic Ruby Agent Monitoring DJ worker host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1
    2012-07-08T08:58:42+00:00 app[worker.1]: Starting the New Relic Agent.
    2012-07-08T08:58:42+00:00 app[worker.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:58:43+00:00 app[worker.1]: [Worker(host:b5fa9243-6f9b-4de4-8f64-adab767fe4b0 pid:1)] Starting job worker
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:58:56+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:6)
    2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:59:02+00:00 app[web.1]: Starting the New Relic Agent.
    2012-07-08T08:59:02+00:00 app[web.1]: Installed New Relic Browser Monitoring middleware
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:03+00:00 app[web.1]: [DEVISE] Devise.use_salt_as_remember_token is deprecated and has no effect. Please remove it.
    2012-07-08T08:59:03+00:00 app[web.1]:
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant XLSX
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:04+00:00 app[web.1]: /app/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.3/lib/action_dispatch/http/mime_type.rb:102: warning: already initialized constant PDF
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:59:22+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importadvancecsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpaymentcsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importpurchasecsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Importsalecsv class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for csv with :s3_eu_url. This will clash with attachment defined in Profitarchive class
    2012-07-08T08:59:23+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:59:24+00:00 app[web.1]: [paperclip] Duplicate URL for xml with :s3_eu_url. This will clash with attachment defined in Onixarchive class
    2012-07-08T08:59:25+00:00 app[web.1]: I, [2012-07-08T08:59:25.555052 #5] INFO -- : worker=0 ready
    2012-07-08T08:59:25+00:00 app[web.1]:
    2012-07-08T08:59:25+00:00 app[web.1]:
    2012-07-08T08:59:25+00:00 app[web.1]: Started GET "/" for 82.69.50.215 at 2012-07-08 08:59:25 +0000
    2012-07-08T08:59:26+00:00 app[web.1]: Processing by PagesController#home as HTML
    2012-07-08T08:59:26+00:00 app[web.1]: I, [2012-07-08T08:59:26.043501 #8] INFO -- : worker=1 ready
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered pages/home.html.haml within layouts/application (5.7ms)
    2012-07-08T08:59:26+00:00 app[web.1]: (1.1ms) SELECT COUNT(*) FROM "delayed_jobs"
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_header.html.erb (4.2ms)
    2012-07-08T08:59:26+00:00 app[web.1]: Rendered layouts/_footer.html.haml (1.4ms)
    2012-07-08T08:59:26+00:00 app[web.1]: Completed 200 OK in 326ms (Views: 258.4ms | ActiveRecord: 65.2ms)

    Read the article

  • Upgrading to Oracle Enterprise Manager 12c Release 2: Top Tips One Must Know

    - by AnkurGupta
    Recently Oracle announced incremental release of Enterprise Manager 12c called Enterprise Manager 12c Release 2 (EM12c R2) which includes several new exciting features (Press announcement). Right before the official release, we upgraded an internal production site from EM 12c R1 to EM 12c R2 and had an extremely pleasant experience. Let me share few key takeaways as well as few tips from this upgrade exercise. I - Why Should You Upgrade To Enterprise Manager 12c Release 2 While an upgrade is usually recommended primarily to take benefit of the latest features (which is valid for this upgrade as well), I found several other compelling reasons purely from deployment perspective. Standardize your EM deployment:  Enterprise Manager comprises of several different components (OMS, agents, plug-ins, etc) and it might be possible that these are at varied patch levels in your environment. For instance, in case of an environment containing Bundle Patch 1 (customer announcement), there is a good chance that you may not have all the components up-to-date. There are two possible reasons. Bundle Patch 1 involved patching different components (OMS, agents, plug-ins) with multiple one-off patches which may not have been applied to all components yet. Bundle Patch 1 for different platforms were not released together. Which means you may not have got the chance to patch all the components on different platforms. Note: BP1 patches are not mandatory to upgrade to EM12c R2 release EM 12c R2 provides an excellent opportunity to standardize your Cloud Control environment (OMS, repository and agents) and plug-ins to latest versions in single shot. All platform releases are made available simultaneously: For the very first time in the history of EM release, all the platforms were released on day one itself, which means you do not need to wait for platform specific binaries for EM OMS or Agent to perform install or upgrades in a heterogeneous environment. Highly refined and automated process – Upgrade process is by far the smoothest and the cleanest as compared to previous releases of Enterprise manager. Following are the ones that stand out. Automatic Plug-in management – Plug-in upgrade along with new plug-in deployment is supported in upgrade installer wizard which means bulk of the updates to OMS and repository can be done in the same workflow. Saves time and minimizes user inputs. Plug-in Upgrade or Migrate Auto Update: While doing the OMS and repository upgrade, you can use Auto Update screen in Oracle Universal Installer to check for any updates/patches. That will help you to avoid the know issues and will make sure that your upgrade is successful. Allows mass upgrade of EM Agents – A new dedicated menu has been added in the EM console for agent upgrade. Agent upgrade workflow is extremely simple that requires agent name as the only input. ADM / JVMD Manager/Agent upgrade – complete process is supported via UI screens. EM12c R2 Upgrade Guide is much simpler to follow as compared to those for earlier releases. This is attributed to the simpler upgrade process. Robust and Performing Platform: EM12c R2 release not only includes several new features, but also provides a more stable platform which incorporates several fixes and enhancements in the Enterprise Manager framework. II - Few Tips To Remember In my last post (blog link) I shared few tips and tricks from my experience applying the Bundle Patch. 
    Recently I upgraded the same site to EM 12c R2 and found a few points that you must take note of while planning this upgrade. The tips below are also applicable to EM 12c R1 environments that do not have the Bundle Patch 1 patches applied.

    - Verify the monitored application certification: Specific targets like E-Business Suite have not yet been certified as managed targets in EM 12c R2. Therefore make sure to recheck the Enterprise Manager Certification Matrix on My Oracle Support before planning the upgrade.
    - Plan downtime: Because EM 12c R2 is an incremental release of EM 12c, the EM 12c R1 to EM 12c R2 upgrade supports only the 1-system upgrade approach, which means there will be downtime.
    - OMS name change after upgrade: In multi-OMS environments, the additional OMS is renamed after the upgrade, which has a few implications when you upgrade the JVMD and ADP agents on the OMS. This is well documented in the upgrade guide, but make sure you read through all the notes.
    - Upgrading BI Publisher: EM12c R2 is certified with BI Publisher 11.1.1.6.0 only. Therefore, if you are using EM 12c R1 integrated with BI Publisher 11.1.1.5.0, you must upgrade BI Publisher to 11.1.1.6.0. Follow the steps from the Advanced Installation and Configuration Guide here.
    - Perform post-upgrade tasks: Make sure to perform the post-upgrade steps mentioned in the documentation here. These include critical changes that must be made right after the upgrade to get the right configuration. For instance, the Database plug-in should be upgraded to Revision 3 (12.1.0.2.0 [u120804]).
    - Delete the old OMS home: EM12c R1 to EM12c R2 is an out-of-place upgrade, which means it creates a new Oracle home for the OMS, plug-ins, etc. Therefore please ensure that you have sufficient extra space for the new OMS before starting the upgrade process (see the sketch after this list), and that you clean up the old OMS home after the upgrade process; steps are available here. DO NOT remove the agent home on the OMS host, because the agent is upgraded in place.
    - If you have a standby OMS setup, do look into the steps to upgrade the standby OMS in the upgrade guide before going ahead.
    - Read the right documentation: Make sure to follow the Upgrade Guide, which provides the most comprehensive information on the EM12c R2 upgrade process. Additionally, you can refer to other resources to get familiar with the upgrade concepts:
      - Recorded webcast: Oracle Enterprise Manager Cloud Control 12c Release 2 Installation and Upgrade Overview
      - Presentation: Understanding Enterprise Manager 12.1.0.2 Upgrade

    We are very excited about this latest release and look forward to hearing any feedback from your upgrade experience!
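    As a quick pre-flight check for the space tip above, here is a minimal Python sketch that verifies free disk space under the parent directory of the new (out-of-place) OMS home before you launch the installer. The path and the headroom figure are illustrative assumptions only; take the actual space requirement from the EM12c R2 documentation.

        import shutil

        # Hypothetical values: substitute your own middleware home parent and
        # the space requirement from the EM12c R2 documentation.
        NEW_OMS_PARENT = "/u01/app/oracle"   # parent dir for the new OMS home (assumed path)
        REQUIRED_FREE_GB = 15                # assumed headroom, not an official figure

        usage = shutil.disk_usage(NEW_OMS_PARENT)
        free_gb = usage.free / (1024 ** 3)
        print(f"Free space under {NEW_OMS_PARENT}: {free_gb:.1f} GB")
        if free_gb < REQUIRED_FREE_GB:
            raise SystemExit(f"Insufficient space: at least {REQUIRED_FREE_GB} GB needed for the new OMS home")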

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever-popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT… er, CRAP. No, not fecal material. But crap code. Crap SQL. Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time.

    The challenge for me is to look back on my SQL Server career and find something that WASN'T crap. Well, there's a lot that wasn't, but for some reason I don't remember those that well. So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out. Let's see if this outline fits the bill:
    - An ETL process on text files;
    - That had to interface between SQL Server and an AS/400 system;
    - That didn't use SSIS (should have) or BizTalk (ummm, no), but command-line scripting using Unix utilities(!) via xp_cmdshell;
    - That had to email reports and financial data, some of it sensitive.

    Yep, the smell is coming back to me now, as if it were yesterday…

    As to why SSIS and BizTalk were not options: basically I didn't know either of them well enough to get the job done (and I still don't). I also had a strict deadline of 3 days, in addition to all my other responsibilities, so there was no time to learn them. And see how screwed up the rest of the process was:
    - Payment files from multiple vendors in multiple formats;
    - Sent via FTP, PGP-encrypted email, or some other wizardry;
    - Manually opened/downloaded and saved to a particular set of folders (couldn't change this);
    - Once processed, had to be placed BACK in the same folders with the originals archived;
    - Two divisions that had to run separately;
    - Plus an additional vendor file in another format on a completely different schedule;
    - So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible).

    I didn't feel so bad about the solution I came up with, which was naturally:
    - Copy the payment files to the local SQL Server drives, using xp_cmdshell
    - Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before PowerShell)
    - Use other Unix utilities (join, split, grep, wc) to process the parsed files and generate metadata (size, date, checksum, line count); a sketch of this step appears at the end of the post
    - Run sqlcmd to execute a stored procedure that was passed the parsed file names so it would bulk load the data to do a comparison
    - bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file
    - Run another stored procedure to import the matched data into SQL Server so it could process the payments, including the file metadata
    - Process payment batches and log which division and vendor they belong to
    - Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare against the emailed file…which always matched, surprisingly)
    - Email another report showing unmatched payments so they could manually void them…about 3 months afterward
    - All in "Excel" format, using xp_sendmail (SQL 2000 system)
    - Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked)

    If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command. Like passionately. Like fairy-tale love. So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.
    I think there was one section that had 4 or 5 nested for commands. It was wrong, disturbed, and completely unmaintainable by anyone, even myself. Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code. (for you Star Trek TOS fans)

    The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years. Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or files saved to the wrong folders. I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with. Fortunately, as far as I know, it's no longer in use and someone has written a proper replacement. Today I would knuckle down and do it in SSIS or PowerShell, even if it took me weeks to get it right.

    The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE. sed scripting with regular expressions doesn't fit that criterion in any way. If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT. Stop and consider long-term maintainability, not just for yourself but for others on your team. If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed. And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.

    P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma and was deemed the best possible way. Phew! Talk about stink!
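    For the metadata-generation step listed above, here is a minimal sketch in Python of gathering size, date, checksum and line count for one payment file. It is an illustration, not the original sed/join/wc pipeline; the file name and the choice of MD5 as the checksum are assumptions.

        import hashlib
        import os
        from datetime import datetime, timezone

        def file_metadata(path):
            """Collect size, modification date, checksum, and line count for one file."""
            digest = hashlib.md5()   # checksum algorithm is an assumption
            line_count = 0
            with open(path, "rb") as f:
                for line in f:       # iterating a binary file yields it line by line
                    digest.update(line)
                    line_count += 1
            st = os.stat(path)
            return {
                "name": os.path.basename(path),
                "size_bytes": st.st_size,
                "modified_utc": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
                "md5": digest.hexdigest(),
                "line_count": line_count,
            }

        if __name__ == "__main__":
            print(file_metadata("payments.txt"))   # hypothetical payment file name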

    Read the article

  • SSAS Maestro Training in July 2012 #ssasmaestro #ssas

    - by Marco Russo (SQLBI)
    A few hours ago Chris Webb blogged about SSAS Maestro, and I'd like to propagate the news, adding some background info. SSAS Maestro is the premier certification on Analysis Services, selecting the best Analysis Services experts around the world. In 2011 Microsoft organized two rounds of training/exams for SSAS Maestros, and up to now only 11 people from the first wave have been announced, around 10% of the attendees of the course! In the next few days the new Maestros from the second round should be announced, and this long process is caused by many factors that I'm going to explain.

    First, the course is just one step in the process. Before the course you receive a list of topics to study, including the slides of the course. During the course, students receive a lot of information that might not have been included in the slides, and the best part of the course is class interaction. Students are expected to bring their experience to the table, and comparing case studies and experiences and having long debates is an important part of the learning process. It is also part of the evaluation: good questions might be even more important than good answers! Finally, after the course, students have their homework, and this may require one or two months to complete. After that, a long (very long) evaluation process begins, taking into account homework, labs, participation… For this reason the final evaluation may arrive months after the course. We are going to improve and shorten this process with the next courses.

    The first wave of SSAS Maestro was by invitation only; now the program is opening up, requiring a fee to participate in order to cover the cost of preparation, training and the exam. The number of attendees will be limited, and candidates will have to send their CV in order to be admitted to the course. Only experienced Analysis Services developers will be able to participate in this challenging program.

    So why should you do that? Well, only 10% of students have passed the exam so far. So if you need a 100% guarantee of passing the exam, you need to study a lot, before, during and after the course. But the course by itself is a precious opportunity to share experience, build a network and learn mission-critical, enterprise-level best practices that are hard to find written in books. Oh, and many existing white papers are required reading *before* the course!

    The course is now 5 days long, and every day can be *very* long. We'll have lectures and discussions in the morning and labs in the afternoon/evening, plus some more lectures on one or two afternoons. A heavy part of the course is about performance optimization, capacity planning and monitoring. This edition will also introduce Tabular models, but don't expect something you might find in the SSAS Tabular Workshop: only performance, scalability, monitoring and optimization will be covered, and knowing Analysis Services is a requirement just to be accepted! Chris Webb and I will be the teachers for this edition.

    The course is expensive. Applying for SSAS Maestro will cost around 7000€ plus taxes (reduced to 5000€ for students of a previous SSAS Maestro edition), and you will be locked in a training room for the larger part of the week. So why should you do that? Well, as I said, this is a challenging course. You will not find the time to check your email; the content is just too interesting to think you can be distracted by something else. Another good reason is that this course will take place in Italy.
    Well, the course will take place in the brand new Microsoft Innovation Campus, but in general we'll be able to give you hints for getting great food and, if you are willing to attach a weekend to your trip, there are plenty of places to visit (and I'm not talking about the classic Rome-Florence-Venice); you might really need to relax after such a week! Finally, the marking process after the course will be faster: we'd like to complete the evaluation within three months of the course, considering that 1-2 months might be required to complete the homework.

    If at this point you are not scared: registration will open in mid-April, but you can already write to [email protected], sending your CV/resume and a short description of your level of SSAS knowledge and experience. The selection process will start early, and you may want to put your admission form at the top of the FIFO queue!

    Read the article
