Search Results


  • Write, Read and Update Oracle CLOBs with PL/SQL

    - by robertphyatt
    Fun with CLOBs! If you are using Oracle and have to deal with text over 4,000 bytes, you will probably find yourself dealing with CLOBs, which can hold up to 4 GB. They are pretty tricky, and it took me a long time to figure out these lessons learned. I hope they will help some downtrodden developer out there somehow. Here is my original code, which worked great on my Oracle Express Edition. (For all examples, the first procedure writes a new CLOB, the next one updates an existing CLOB and the final one reads a CLOB back.)

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN VARCHAR2,
            p_id       OUT NUMBER) IS
          lob_loc CLOB;
        BEGIN
          INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)
            VALUES (empty_CLOB())
            RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;
          DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1,
                         UTL_RAW.CAST_TO_RAW(p_document));
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;
          DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1,
                         UTL_RAW.CAST_TO_RAW(p_document));
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN NUMBER,
            p_clob OUT VARCHAR2) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDDOC INTO lob_loc
            FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id;
          p_clob := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1));
        END;
        /

    As you can see, I had originally been casting everything back and forth between RAW formats using the UTL_RAW.CAST_TO_VARCHAR2() and UTL_RAW.CAST_TO_RAW() functions all over the place. That had the nasty side effect of working great on the Oracle Express Edition on my developer box, but on the Oracle test database server every CLOB above a certain size came back as garbage when read. So I kept working at it and came up with the following, which ALSO worked on the Oracle Express Edition on my developer box:

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN VARCHAR2,
            p_id       OUT NUMBER) IS
          lob_loc CLOB;
        BEGIN
          INSERT INTO TBL_CLOBHOLDERDOC (CLOBHOLDERDOC)
            VALUES (empty_CLOB())
            RETURNING CLOBHOLDERDOC, CLOBHOLDERDOCID INTO lob_loc, p_id;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDOC INTO lob_loc FROM TBL_CLOBHOLDERDOC
            WHERE CLOBHOLDERDOCID = p_id FOR UPDATE;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN NUMBER,
            p_clob OUT VARCHAR2) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDOC INTO lob_loc
            FROM TBL_CLOBHOLDERDOC
            WHERE CLOBHOLDERDOCID = p_id;
          p_clob := DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1);
        END;
        /

    Unfortunately, after changing my code to what you see above, even though it kept working on my Oracle Express Edition, everything over a certain size started truncating after about 7,950 characters on the test server!
    Here is what I came up with in the end, which is actually the simplest solution, and this time it worked on both my Express Edition and the database server. (Note that only the read procedure changed to fix the truncation issue; I had Oracle worry about converting the CLOB into a VARCHAR2 internally by selecting it straight into the OUT parameter.)

        CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (
            p_document IN VARCHAR2,
            p_id       OUT NUMBER) IS
          lob_loc CLOB;
        BEGIN
          INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)
            VALUES (empty_CLOB())
            RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (
            p_document IN VARCHAR2,
            p_id       IN NUMBER) IS
          lob_loc CLOB;
        BEGIN
          SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;
          DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);
        END;
        /

        CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (
            p_id   IN NUMBER,
            p_clob OUT VARCHAR2) IS
        BEGIN
          SELECT CLOBHOLDERDDOC INTO p_clob
            FROM TBL_CLOBHOLDERDDOC
            WHERE CLOBHOLDERDDOCID = p_id;
        END;
        /

    I hope that is useful to someone!
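
    For completeness, here is a minimal sketch of calling these procedures from client code. This is my own assumption about the client side (the post doesn't show one): it uses C# with the ODP.NET managed driver, and the connection string and test values are placeholders.

        using System;
        using System.Data;
        using Oracle.ManagedDataAccess.Client;

        class ClobDemo
        {
            static void Main()
            {
                // Placeholder credentials and data source
                using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=XE"))
                {
                    conn.Open();

                    // Write a new CLOB and capture the generated ID
                    using (var write = new OracleCommand("PRC_WR_CLOB", conn))
                    {
                        write.CommandType = CommandType.StoredProcedure;
                        write.Parameters.Add("p_document", OracleDbType.Varchar2).Value = new string('x', 10000);
                        var idParam = write.Parameters.Add("p_id", OracleDbType.Int32);
                        idParam.Direction = ParameterDirection.Output;
                        write.ExecuteNonQuery();

                        // Read it back through the fixed read procedure
                        using (var read = new OracleCommand("PRC_RD_CLOB", conn))
                        {
                            read.CommandType = CommandType.StoredProcedure;
                            read.Parameters.Add("p_id", OracleDbType.Int32).Value =
                                Convert.ToInt32(idParam.Value.ToString());
                            // PL/SQL OUT VARCHAR2 parameters top out at 32,767 bytes
                            var doc = read.Parameters.Add("p_clob", OracleDbType.Varchar2, 32767);
                            doc.Direction = ParameterDirection.Output;
                            read.ExecuteNonQuery();
                            Console.WriteLine(doc.Value);
                        }
                    }
                }
            }
        }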

    Read the article

  • Can notes/to-dos in code comments sent to code review result in an effective refactoring process?

    - by dukeofgaming
    I want to start/improve a culture of collective code ownership at my company, but at a geographically distributed level... I'd say there is some collective code-ownership mentality currently, but only at single geographical sites. This is a follow-up to this question: What is the politically correct way of refactoring other's code? I'm just wondering if submitting *just code comments* for code reviews (we have ReviewBoard, possibly upgrading to Crucible) could actually be an effective mechanism to get the conversation started on improving code, without having others feel territorial about their code. For example, if I add:

        //ToDo: Refactor this code and that code because of reasons X and Y

    Then, if I submit it for code review and it gets accepted... it could be considered an agreement (which I think is sometimes harder to get with new code up front). At the same time, the author (and others) might have an easier time digesting and accepting the proposal; rejecting it because it might break things will no longer be a valid reason, so the fear of making a change is removed... and likewise, nobody invests 10 hours optimizing something that no one thinks is worth it and that people oppose just out of fear. This is all conjecture, but I feel something like this (submitting refactoring notes as code comments through the code-review process) would work. Has anyone done something like this in practice? If so, what were the results?

    Read the article

  • Executing a batch file from a remote AX client on an AOS server

    - by Anisha
    Is it possible to execute a batch file on an AOS server from a remote AX client? The answer is yes, provided you have the necessary permissions for this execution on the server. First, create a batch file on your AOS server, something like the one below, which creates a directory on the server. Insert a command like this in a .BAT file and place it anywhere on the server:

        mkdir "c:\test"

    Then copy the following code into a server static method of your class and call it from a button click on an AX form. Execute the button click from a remote AX client and see the result: it should run the batch file on the server and create a directory called 'test' in the root directory of the server.

        server static void AOS_batch_file_create()
        {
            boolean b;
            System.Diagnostics.Process process;
            System.Diagnostics.ProcessStartInfo processStartInfo;
            ;
            b = Global::isRunningOnServer();
            infolog.add(0, int2str(b));

            new InteropPermission(InteropKind::ClrInterop).assert();

            process = new System.Diagnostics.Process();
            processStartInfo = new System.Diagnostics.ProcessStartInfo();
            processStartInfo.set_FileName("C:\\create_dir.bat"); // batch file path on the AOS server
            process.set_StartInfo(processStartInfo);
            process.Start();
            //process.Refresh();
            //process.Close();
            //process.WaitForExit();
            info("Finished");
        }
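
    For comparison, here is the same server-side pattern sketched in plain C# (my own sketch, using the same hypothetical .bat path as above), including the WaitForExit call that the X++ example leaves commented out, in case you need to block until the script completes:

        using System;
        using System.Diagnostics;

        class RunServerBatch
        {
            static void Main()
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = @"C:\create_dir.bat", // hypothetical path, as in the X++ example
                    UseShellExecute = false,         // run directly, without a shell window
                    CreateNoWindow = true
                };

                using (var process = Process.Start(startInfo))
                {
                    process.WaitForExit();           // block until the batch file finishes
                    Console.WriteLine("Exit code: " + process.ExitCode);
                }
            }
        }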

    Read the article

  • Advantages of Hudson and Sonar over a manual process or homegrown scripts

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like us to integrate with Hudson and Sonar or something similar. My reasons are that it will provide a 'zero-click' build step that supplies testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway. I contend that having the accurate metrics that Sonar (or your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type, and I'm more of the "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • How do I install Red5 using apt-get? Getting sub-process error

    - by Dalen
    This is a copy of a question some guy asked on another forum that never got a satisfactory answer. I encountered the same error a few days ago on Ubuntu 13.04 Desktop. It seems Red5 is installed, but it cannot be started for some reason. Can anyone explain what is going on here? Why should dpkg fail? I mean, this is a checked repo; it should work fine.

        apt-get install red5-server
        Selecting previously deselected package red5-server.
        (Reading database ... 53491 files and directories currently installed.)
        Unpacking red5-server (from .../red5-server_0.9.1-4squeeze1_all.deb) ...
        Setting up red5-server (0.9.1-4squeeze1) ...
        Starting Flash streaming server : red5-server failed!
        invoke-rc.d: initscript red5-server, action "start" failed.
        dpkg: error processing red5-server (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         red5-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The logfile error.log in /usr/share/red5/log was completely empty. The other logs were not, but according to them there were no problems at all.

    Read the article

  • How should I describe the process of learning someone else's code? (In an invoicing situation.)

    - by MattyG
    I have a contract to upgrade some in-house software for a large company. The company has requested multiple feature additions and a few bug fixes. This is my first freelance-style job. First, I needed to become familiar with how the application worked; I learned it as if I were a user. Next, I had to learn how the software worked. I started with broad concepts, then narrowed down into the necessary detail before working on each bug fix and feature. At least at the start of the project, it took me a lot longer to learn the existing code than it did to write the additional features. How can I describe the process of learning the existing code on the invoice? (This part of the company usually does things in-house, so it doesn't have much experience dealing with software contractors like me, and I fear they may not understand the overhead of learning someone else's code.) I don't want to just tack the learning time onto the actual feature upgrade, because in some cases this would make a 'simple task' look like it took me way too long. I want to break the invoice into relevant steps and communicate that I'm charging for the large overhead of learning someone else's code before being able to add my own to it. Is there a standard way of describing this sort of activity when billing for a job?

    Read the article

  • How to identify process that's sending error messages to terminal?

    - by kjo
    The following error message occasionally appears in my terminal:

        Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory

    ...which is pretty annoying. I have searched online for solutions to this error, without success. Is there any way to at least identify the process responsible for sending these error messages to my terminal? EDIT: Let me clarify that, as far as I can tell, these error messages appear "out of the blue". In fact, they appear asynchronously with respect to my interactions with the terminal (more often than not, I see them for the first time when I return to a terminal window that has been unattended for some time). I'm sure there's a definite, deterministic cause for these messages, but it is not one that I can readily identify. In short, I have not noticed any pattern or regularity to their occurrence. In particular, in my case their occurrence has nothing to do with running mplayer, or any other video playback program. (Please see my earlier post about it here.) For one thing, the machine in question is a work machine, and I rarely watch any videos on it. In the very few instances in which I've watched a video on this machine I've used VLC, not mplayer, and these errors never appeared on those rare occasions that I used VLC.

    Read the article

  • How to lay out a class definition when inheriting from multiple interfaces

    - by gabr
    Given two interface definitions ...

        IOmniWorkItem = interface ['{3CE2762F-B7A3-4490-BF22-2109C042EAD1}']
          function  GetData: TOmniValue;
          function  GetResult: TOmniValue;
          function  GetUniqueID: int64;
          procedure SetResult(const value: TOmniValue);
          //
          procedure Cancel;
          function  DetachException: Exception;
          function  FatalException: Exception;
          function  IsCanceled: boolean;
          function  IsExceptional: boolean;
          property Data: TOmniValue read GetData;
          property Result: TOmniValue read GetResult write SetResult;
          property UniqueID: int64 read GetUniqueID;
        end;

        IOmniWorkItemEx = interface ['{3B48D012-CF1C-4B47-A4A0-3072A9067A3E}']
          function  GetOnWorkItemDone: TOmniWorkItemDoneDelegate;
          function  GetOnWorkItemDone_Asy: TOmniWorkItemDoneDelegate;
          procedure SetOnWorkItemDone(const Value: TOmniWorkItemDoneDelegate);
          procedure SetOnWorkItemDone_Asy(const Value: TOmniWorkItemDoneDelegate);
          //
          property OnWorkItemDone: TOmniWorkItemDoneDelegate read GetOnWorkItemDone write SetOnWorkItemDone;
          property OnWorkItemDone_Asy: TOmniWorkItemDoneDelegate read GetOnWorkItemDone_Asy write SetOnWorkItemDone_Asy;
        end;

    ... what are your ideas for laying out a class declaration that inherits from both of them? My current idea (but I don't know if I'm happy with it):

        TOmniWorkItem = class(TInterfacedObject, IOmniWorkItem, IOmniWorkItemEx)
        strict private
          FData              : TOmniValue;
          FOnWorkItemDone    : TOmniWorkItemDoneDelegate;
          FOnWorkItemDone_Asy: TOmniWorkItemDoneDelegate;
          FResult            : TOmniValue;
          FUniqueID          : int64;
        strict protected
          procedure FreeException;
        protected //IOmniWorkItem
          function  GetData: TOmniValue;
          function  GetResult: TOmniValue;
          function  GetUniqueID: int64;
          procedure SetResult(const value: TOmniValue);
        protected //IOmniWorkItemEx
          function  GetOnWorkItemDone: TOmniWorkItemDoneDelegate;
          function  GetOnWorkItemDone_Asy: TOmniWorkItemDoneDelegate;
          procedure SetOnWorkItemDone(const Value: TOmniWorkItemDoneDelegate);
          procedure SetOnWorkItemDone_Asy(const Value: TOmniWorkItemDoneDelegate);
        public
          constructor Create(const data: TOmniValue; uniqueID: int64);
          destructor  Destroy; override;
        public //IOmniWorkItem
          procedure Cancel;
          function  DetachException: Exception;
          function  FatalException: Exception;
          function  IsCanceled: boolean;
          function  IsExceptional: boolean;
          property Data: TOmniValue read GetData;
          property Result: TOmniValue read GetResult write SetResult;
          property UniqueID: int64 read GetUniqueID;
        public //IOmniWorkItemEx
          property OnWorkItemDone: TOmniWorkItemDoneDelegate read GetOnWorkItemDone write SetOnWorkItemDone;
          property OnWorkItemDone_Asy: TOmniWorkItemDoneDelegate read GetOnWorkItemDone_Asy write SetOnWorkItemDone_Asy;
        end;

    As noted in the answers, composition is a good approach for this example, but I'm not sure it applies in all cases. Sometimes I'm using multiple inheritance just to split read and write access to some property into a public (typically read-only) part and a private (typically write-only) part. Does composition still apply here? I'm not really sure, as I would have to move the property in question out of the main class, and I'm not sure that's the correct way to do it.
    Example:

        // public part of the interface
        IOmniWorkItemConfig = interface
          function OnExecute(const aTask: TOmniBackgroundWorkerDelegate): IOmniWorkItemConfig;
          function OnRequestDone(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
          function OnRequestDone_Asy(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
        end;

        // private part of the interface
        IOmniWorkItemConfigEx = interface ['{42CEC5CB-404F-4868-AE81-6A13AD7E3C6B}']
          function GetOnExecute: TOmniBackgroundWorkerDelegate;
          function GetOnRequestDone: TOmniWorkItemDoneDelegate;
          function GetOnRequestDone_Asy: TOmniWorkItemDoneDelegate;
        end;

        // implementing class
        TOmniWorkItemConfig = class(TInterfacedObject, IOmniWorkItemConfig, IOmniWorkItemConfigEx)
        strict private
          FOnExecute        : TOmniBackgroundWorkerDelegate;
          FOnRequestDone    : TOmniWorkItemDoneDelegate;
          FOnRequestDone_Asy: TOmniWorkItemDoneDelegate;
        public
          constructor Create(defaults: IOmniWorkItemConfig = nil);
        public //IOmniWorkItemConfig
          function OnExecute(const aTask: TOmniBackgroundWorkerDelegate): IOmniWorkItemConfig;
          function OnRequestDone(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
          function OnRequestDone_Asy(const aTask: TOmniWorkItemDoneDelegate): IOmniWorkItemConfig;
        public //IOmniWorkItemConfigEx
          function GetOnExecute: TOmniBackgroundWorkerDelegate;
          function GetOnRequestDone: TOmniWorkItemDoneDelegate;
          function GetOnRequestDone_Asy: TOmniWorkItemDoneDelegate;
        end;
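
    For what it's worth, the same public-read/private-write split can be sketched in C# terms with one public read-only interface and one internal writer interface implemented explicitly. This is a hypothetical analogue of the pattern, not OmniThreadLibrary code:

        public interface IWorkItem          // public, read-only view
        {
            string Result { get; }
        }

        internal interface IWorkItemWriter  // private, write-only view
        {
            void SetResult(string value);
        }

        internal sealed class WorkItem : IWorkItem, IWorkItemWriter
        {
            public string Result { get; private set; }

            // explicit implementation keeps the setter off the public surface
            void IWorkItemWriter.SetResult(string value) => Result = value;
        }

        class Demo
        {
            static void Main()
            {
                var item = new WorkItem();
                ((IWorkItemWriter)item).SetResult("done"); // writer side, assembly-internal
                System.Console.WriteLine(item.Result);     // read-only public side
            }
        }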

    Read the article

  • What is the procedure to replace a failing hard drive in a RAID array?

    - by slayton
    Three years ago a co-worker set up a software RAID-6 array on Ubuntu 9.04, and I'm now getting messages from the OS that one of the drives has bad sectors and should be replaced. I'd like to remove this drive and replace it with a new one; however, I have never done this before, and I'm terrified that in the process of fixing the array I'm going to end up ruining it. I know the device ID of the array and I know the device IDs of the individual drives in the array. Additionally, I physically have the bad drive. What are the steps to replace the bad drive with a new drive and get the array running again?

    Read the article

  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL Server. In this project, I need a CLR stored procedure (written in C#) that uses a remote web service. So, when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get: "System.Net.WebException: The request failed with HTTP status 403: Forbidden." The database user has full permissions, the deployed CLR assembly and SP are even marked "unsafe", I tried signing the assembly, etc., so none of that is causing the problem. When I execute the very same C# code from a simple console application instead of as a SP, it all works fine. So I started to suspect a network-related problem and had a packet sniffer running while executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP, while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed, which explains the "403 Forbidden" exception. So my question boils down to this: how can I configure the SP / MS SQL Server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app does (again, the C# code is the same, so it's not a programming matter). I've disabled all proxy settings in Internet Explorer in case the SQL server inherits these settings or something. However, no luck. Any help would be greatly appreciated! Best regards, Peter
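
    One avenue worth checking here (an assumption, not a confirmed fix): code running inside SQL Server may pick up a machine-wide defaultProxy setting from machine.config rather than your per-user Internet Explorer settings. You can rule the proxy out in code by forcing the request to go direct; in this sketch the URL is a placeholder:

        using System.IO;
        using System.Net;

        class DirectRequest
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://service.example.com/endpoint"); // placeholder URL
                request.Proxy = null; // bypass any default/system proxy for this request

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string body = reader.ReadToEnd();
                    System.Console.WriteLine(body);
                }
            }
        }

    If the web service client is a generated proxy class deriving from SoapHttpClientProtocol, it exposes the same Proxy property, which can likewise be set to null before the call.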

    Read the article

  • Stored Procedure To Search the AccessRights given to the Users.

    - by thevan
    Hi, I want to display the access rights given to users for a particular module. I have seven tables: RoleAccess, Roles, Functions, Module, SubModule, Company and Unit. RoleAccess is the main table; the access rights granted are stored in the RoleAccess table only. RoleAccess has the columns RoleID, CompanyID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f and Approve_f, where Create_f, Update_f, Delete_f, Read_f and Approve_f are flags. Company has two columns: CompanyID and CompanyName. Unit has three columns: UnitID, UnitName and CompanyID. Roles has four columns: RoleID, RoleName, CompanyID and UnitID. Module has two columns: ModuleID and ModuleName. SubModule has three columns: ModuleID, SubModuleID and SubModuleName. Functions has five columns, including FunctionID, FunctionName, ModuleID and SubModuleID. Initially, the RoleAccess table does not contain any records, so I want to display the ModuleName, SubModuleName, FunctionName, CompanyID, RoleID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f and Approve_f. If access rights have been assigned to a particular RoleID, the flags in the search results should be 1; otherwise they should be 0. I have written one stored procedure, but it displays only the records based on the RoleIDs stored in the RoleAccess table; I also want to display the flags as 0 for the roles not stored in the RoleAccess table. I want a stored procedure for this. Can anyone please help me?

    Read the article

  • .NET framework execution aborted while executing CLR stored procedure?

    - by Sean Ochoa
    I constructed a stored procedure that does the equivalent of FOR XML AUTO in SQL Server 2008. Now that I'm testing it, it gives me a really unhelpful error message. What does this error mean?

        Msg 10329, Level 16, State 49, Procedure ForXML, Line 0
        .NET Framework execution was aborted. System.Threading.ThreadAbortException: Thread was being aborted.
        System.Threading.ThreadAbortException:
           at System.Runtime.InteropServices.Marshal.PtrToStringUni(IntPtr ptr, Int32 len)
           at System.Data.SqlServer.Internal.CXVariantBase.WSTRToString()
           at System.Data.SqlServer.Internal.SqlWSTRLimitedBuffer.GetString(SmiEventSink sink)
           at System.Data.SqlServer.Internal.RowData.GetString(SmiEventSink sink, Int32 i)
           at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue(SmiEventSink_Default sink, ITypedGettersV3 getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
           at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue200(SmiEventSink_Default sink, SmiTypedGetterSetter getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
           at System.Data.SqlClient.SqlDataReaderSmi.GetValue(Int32 ordinal)
           at System.Data.SqlClient.SqlDataReaderSmi.GetValues(Object[] values)
           at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
           at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
           at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
           at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
           at System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
           at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataTable dataTable)
           at ForXML.GetXML...

    Read the article

  • How to make a stored procedure that takes in XML and uses that XML as an update + call this stored procedure

    - by chobo2
    Hi, I am using MS SQL Server 2005 and I want to do a mass update. I am thinking that I might be able to do it by sending an XML document to a stored procedure. I have seen many examples of how to do it for an insert:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData XML)
        AS
        INSERT INTO dbo.UserTable(CreateDate)
        SELECT @UpdatedProdData.value('(/ArrayOfUserTable/UserTable/CreateDate)[1]', 'DATETIME')

    But I am not sure what it would look like for an update. I am also unsure how to pass the XML in through ADO.NET. Do I pass it as a string through a parameter, or what? I know SqlDataAdapter has a batch update method, but I am using LINQ to SQL, so I would rather keep using it. If this works, I would be able to grab all the records with LINQ to SQL and have them as objects, then manipulate the objects and use XML serialization, and finally use plain ADO.NET to send the XML to the server. This might be slower than the SqlDataAdapter, but I am willing to take that hit if I can keep using objects.
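
    On the ADO.NET side of this (one half of the question), a minimal sketch that should work is to pass the serialized XML as a parameter typed SqlDbType.Xml. The connection string, procedure name and payload below are placeholders, not code from the post:

        using System.Data;
        using System.Data.SqlClient;

        class PassXmlToProc
        {
            static void Main()
            {
                // Hypothetical payload matching the shape used in the insert example
                string xml =
                    "<ArrayOfUserTable><UserTable>" +
                    "<CreateDate>2010-05-01T00:00:00</CreateDate>" +
                    "</UserTable></ArrayOfUserTable>";

                using (var conn = new SqlConnection("Server=.;Database=TestDb;Integrated Security=True")) // placeholder
                using (var cmd = new SqlCommand("spTEST_UpdateXMLTEST_TEST", conn)) // hypothetical update proc
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add("@UpdatedProdData", SqlDbType.Xml).Value = xml;
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }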

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the script for this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions: "What is the difference between a DELETE and a TRUNCATE?" Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul on this excellent solution, and as encouragement to User Group members I am publishing his article here. Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and actively contributes to the community by participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: the opinions expressed herein are Nakul's own personal opinions and do not represent his employer's view in any way.

    All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE, or simply CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

    DELETE:
    - DELETE supports a WHERE clause
    - DELETE removes rows from a table, row by row
    - Because DELETE moves row by row, it acquires a row-level lock
    - Depending upon the recovery model of the database, DELETE is a fully-logged operation
    - Because DELETE moves row by row, it can fire off triggers

    TRUNCATE:
    - TRUNCATE does not support a WHERE clause
    - TRUNCATE works by directly removing the individual data pages of a table
    - TRUNCATE directly takes a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
    - TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
    - Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

    Finally, Vinod popped the big homework question that must be critically analyzed: "We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?" After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
    Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: "The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause."

    Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user or set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a "securable", which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books Online).

    SETTING UP THE ENVIRONMENT (01A_Truncate Table Permissions.sql; script provided at the end of the article)

    By the end of this demo, one user will be able to do all the CRUD operations except the TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:

    1. A test database
    2. Two database roles, with associated logins and users
    3. A test table in the test database, populated with some data (I am using row constructors, which are new to SQL 2008)

    Creating the modules that will be used to enforce permissions:

    1. We have already created one of the modules that we will be assigning permissions to: the table TruncatePermissionsTest.
    2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same: all data from the table TruncatePermissionsTest is removed.

    Assigning the permissions

    Now comes the most important part of the demonstration: assigning permissions. A permissions matrix can be worked out as shown in the script below. To apply the security rights, we use the GRANT and DENY clauses. That's it! We are now ready for our big test!

    THE TEST (01B_Truncate Table Test Queries.sql; script provided at the end of the article)

    I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that's required is to run through the script 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here; I will leave the reader the right to explore the behavior of the RestrictedTruncate user and these additional scenarios as a form of self-study.

    1. Testing SELECT permissions
    2. Testing TRUNCATE permissions (Remember, "deny by default"?)
    3. Trying to circumvent security by trying to TRUNCATE the table using the stored procedure

    Hence, we have now proved that a user can indeed be assigned permissions specifically for TRUNCATE. I also hope that the above has sparked curiosity towards putting some security around the potentially "destructive" operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find the scripts, 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql, that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader's production environments.)

    01A_Truncate Table Permissions.sql

        /* *****************************************************************************************************************
        Developed By  : Nakul Vachhrajani
        Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
        How to Use    : 1. Run through, step-by-step, the sequence till Step 08 to create a test database
                        2. Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different
                           SSMS windows, one where you have logged in as 'RestrictedTruncate', and the other as 'AllowedTruncate'
                        3. Come back to "Truncate Table Permissions.sql"
                        4. Execute Step 10 to cleanup!
        Modifications : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security
                        December 12, 2010 - NAV - Created
        ***************************************************************************************************************** */

        -- Step 01: Create a new test database
        CREATE DATABASE TruncateTestDB
        GO
        USE TruncateTestDB
        GO

        -- Step 02: Add roles and users to demonstrate the security of the Truncate operation
        -- 2a. Create the new roles
        CREATE ROLE AllowedTruncateRole;
        GO
        CREATE ROLE RestrictedTruncateRole;
        GO

        -- 2b. Create new logins
        CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
        GO
        CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
        GO

        -- 2c. Create new users using the roles and logins created above
        CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
        GO
        CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
        GO

        -- 2d. Add the newly created logins to the newly created roles
        sp_addrolemember 'AllowedTruncateRole','TruncateUser'
        GO
        sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
        GO

        -- Step 03: Change over to the test database
        USE TruncateTestDB
        GO

        -- Step 04: Create a test table within the test database
        CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
        GO

        -- Step 05: Populate the required data
        INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
        GO

        -- Step 06: Encapsulate the DELETE within another module
        CREATE PROCEDURE proc_DeleteMyTable
        WITH EXECUTE AS SELF
        AS
        DELETE FROM TruncateTestDB..TruncatePermissionsTest
        GO

        -- Step 07: Encapsulate the TRUNCATE within another module
        CREATE PROCEDURE proc_TruncateMyTable
        WITH EXECUTE AS SELF
        AS
        TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
        GO

        -- Step 08: Apply security
        /* ***************************** SECURITY MATRIX ***************************************
           Object                  | Permissions     | RestrictedTruncate     | AllowedTruncate
                                   |                 | (User: NoTruncateUser) | (User: TruncateUser)
           ------------------------+-----------------+------------------------+---------------------
           TruncatePermissionsTest | SELECT, INSERT, | GRANT                  | (Default)
                                   | UPDATE, DELETE  |                        |
           TruncatePermissionsTest | ALTER           | DENY                   | (Default)
           proc_DeleteMyTable      | EXECUTE         | GRANT                  | DENY
           proc_TruncateMyTable    | EXECUTE         | DENY                   | GRANT
           ***************************** SECURITY MATRIX *************************************** */

        /* Table: TruncatePermissionsTest */
        GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
        GO
        DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
        GO

        /* Procedure: proc_DeleteMyTable */
        GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
        GO
        DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
        GO

        /* Procedure: proc_TruncateMyTable */
        DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
        GO
        GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
        GO

        -- Step 09: Test
        -- Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
        --    1. one where you have logged in as 'RestrictedTruncate', and
        --    2. the other as 'AllowedTruncate'
        -- Step 10: Cleanup
        sp_droprolemember 'AllowedTruncateRole','TruncateUser'
        GO
        sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
        GO
        DROP USER TruncateUser
        GO
        DROP USER NoTruncateUser
        GO
        DROP LOGIN AllowedTruncate
        GO
        DROP LOGIN RestrictedTruncate
        GO
        DROP ROLE AllowedTruncateRole
        GO
        DROP ROLE RestrictedTruncateRole
        GO
        USE MASTER
        GO
        DROP DATABASE TruncateTestDB
        GO

    01B_Truncate Table Test Queries.sql

        /* *****************************************************************************************************************
        Developed By  : Nakul Vachhrajani
        Functionality : This demo is focused on how to allow only TRUNCATE permissions to a particular user
        How to Use    : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09
                        2. Execute this step-by-step in two different SSMS windows
                           a. One where you have logged in as 'RestrictedTruncate', and
                           b. The other as 'AllowedTruncate'
                        3. Return back to "Truncate Table Permissions.sql"
                        4. Execute Step 10 to cleanup!
        Modifications : December 12, 2010 - NAV - Created
        ***************************************************************************************************************** */

        -- Step 09A: Switch to the test database
        USE TruncateTestDB
        GO

        -- Step 09B: Ensure that we have valid data
        SELECT * FROM TruncatePermissionsTest
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Line 1
        -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09C: Attempt to truncate data from the table without using the stored procedure
        TRUNCATE TABLE TruncatePermissionsTest
        GO
        -- (Expected: The following error will occur)
        -- Msg 1088, Level 16, State 7, Line 2
        -- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

        -- Step 09D: Regenerate test data
        INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Line 1
        -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09E: Attempt to truncate data from the table using the stored procedure
        EXEC proc_TruncateMyTable
        GO
        -- (Expected: Will execute successfully with the 'AllowedTruncate' user; will error out as under with 'RestrictedTruncate')
        -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
        -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09F: Regenerate test data
        INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
        GO

        -- Step 09G: Attempt to delete data from the table without using the stored procedure
        DELETE FROM TruncatePermissionsTest
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Line 2
        -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.
        -- Step 09H: Regenerate test data
        INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
        GO

        -- Step 09I: Attempt to delete data from the table using the stored procedure
        EXEC proc_DeleteMyTable
        GO
        -- (Expected: The following error will occur if logged in as "AllowedTruncate")
        -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
        -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

        -- Step 09J: Close this SSMS window and return back to "Truncate Table Permissions.sql"

    Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Groups have the talent to solve difficult problems.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
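
    As a small client-side footnote (my own sketch, not part of Nakul's scripts): the same behavior can be verified from ADO.NET by executing the wrapper procedure under each login. It succeeds for AllowedTruncate and raises a SqlException with error 229 for RestrictedTruncate; the server name "." below is a placeholder.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class TruncateViaProc
        {
            static void Main()
            {
                // Logins and database come from 01A_Truncate Table Permissions.sql
                var connStr = "Server=.;Database=TruncateTestDB;User Id=AllowedTruncate;Password=truncate@2010";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("proc_TruncateMyTable", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    conn.Open();
                    try
                    {
                        cmd.ExecuteNonQuery(); // succeeds as AllowedTruncate; error 229 as RestrictedTruncate
                        Console.WriteLine("Table truncated.");
                    }
                    catch (SqlException ex)
                    {
                        Console.WriteLine("Denied: " + ex.Message);
                    }
                }
            }
        }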

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works; when it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"

        $Connection = New-Object System.Data.SQLClient.SQLConnection
        $Table = New-Object "System.Data.DataTable"
        $Connection.ConnectionString = $ConnectionStr

        try {
            $Connection.Open()
            $Command = $Connection.CreateCommand()
            $Command.CommandText = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch {
            # Show error
            $error[0] | Format-List -Force
        }

        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.Close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error-free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me (I have added a few parameters to control the output and give me more specific details). But then I saw a script that uses the $SMOServer object itself to provide the process information, and saves having to define the connection object and query specifications.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different, though. Some columns are the same and we can see the same basic information, so my first thought was to test which runs faster – so that I can get my results more quickly, and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ:

    sp_who2 column | EnumProcesses column | Description
    - | Urn | What looks like an XML or JSON representation of the server name and the process ID
    SPID | Spid | The process ID
    Status | Status | The status of the process
    Login | Login | The login name of the user executing the command
    HostName | Host | The name of the computer where the process originated
    BlkBy | BlockingSpid | The SPID of a process that is blocking this one
    DBName | Database | The database that this process is connected to
    Command | Command | The type of command that is executing
    CPUTime | Cpu | The CPU activity related to this process
    DiskIO | - | The disk IO activity related to this process
    LastBatch | - | The time the last batch was executed from this process
    ProgramName | Program | The application that is facilitating the process connection to the SQL Server
    SPID1 | - | In my experience this is always the same value as SPID
    REQUESTID | - | In my experience this is always 0
    - | Name | In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
    - | MemUsage | An indication of the memory used by this process, but I don't know what it is measured in (bytes, KB, MB...)
    - | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
    - | ExecutionContextID | In my experience this is always 0, so it could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail on the columns in the sp_who results can be found on MSDN here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault-diagnosis methods too. So which is better?
    On reflection, I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful, and I'm sure I'll use it regularly. I'm OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option, and the margin really isn't large enough to matter.

    Read the article

  • Forensic Analysis of the OOM-Killer

    - by Oddthinking
    Ubuntu's Out-Of-Memory Killer wreaked havoc on my server, quietly assassinating my applications: sendmail, apache and others. I've managed to learn what the OOM Killer is, and about its "badness" rules. While my machine is small, my applications are even smaller, and typically only half of my physical memory is in use, let alone swap space, so I was surprised. I am trying to work out the culprit, but I don't know how to read the OOM-Killer logs. Can anyone please point me to a tutorial on how to read the data in the logs (what are ve, free and gen?), or help me parse these logs?

        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 1, exc 2326 0 goal 2326 0...
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 1, exc 2326 0 red 61795 745
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 2, exc 122 0 goal 383 0...
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 2, exc 383 0 red 61795 745
        Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 2
        Apr 20 20:03:27 EL135 kernel: OOM killed process watchdog (pid=14490, ve=13516) exited, free=43104 gen=24501.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=4457, ve=13516) exited, free=43104 gen=24502.
        Apr 20 20:03:27 EL135 kernel: OOM killed process ntpd (pid=10816, ve=13516) exited, free=43104 gen=24503.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=27401, ve=13516) exited, free=43104 gen=24504.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29009, ve=13516) exited, free=43104 gen=24505.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=10557, ve=13516) exited, free=49552 gen=24506.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=24983, ve=13516) exited, free=53117 gen=24507.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=29129, ve=13516) exited, free=68493 gen=24508.
        Apr 20 20:03:27 EL135 kernel: OOM killed process sendmail-mta (pid=941, ve=13516) exited, free=68803 gen=24509.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=12418, ve=13516) exited, free=69330 gen=24510.
        Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=22953, ve=13516) exited, free=72275 gen=24511.
        Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=6624, ve=13516) exited, free=76398 gen=24512.
        Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=23317, ve=13516) exited, free=94285 gen=24513.
        Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29030, ve=13516) exited, free=95339 gen=24514.
        Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=20583, ve=13516) exited, free=101663 gen=24515.
        Apr 20 20:03:28 EL135 kernel: OOM killed process logger (pid=12894, ve=13516) exited, free=101694 gen=24516.
        Apr 20 20:03:28 EL135 kernel: OOM killed process bash (pid=21119, ve=13516) exited, free=101849 gen=24517.
        Apr 20 20:03:28 EL135 kernel: OOM killed process atd (pid=991, ve=13516) exited, free=101880 gen=24518.
        Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=14649, ve=13516) exited, free=102748 gen=24519.
        Apr 20 20:03:28 EL135 kernel: OOM killed process grep (pid=21375, ve=13516) exited, free=132167 gen=24520.
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 4, exc 4215 0 goal 4826 0...
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 1
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 4, exc 4826 0 red 189481 331
        Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 2
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 5, exc 3564 0 goal 3564 0...
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 1
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 5, exc 3564 0 red 189481 331
        Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 2
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 6, exc 8071 0 goal 8071 0...
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 1
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 6, exc 8071 0 red 189481 331
        Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 2

    Watchdog is a watchdog task that was idle; nothing in the logs suggests it had done anything for days. Its job is to restart one of the applications if it dies, so it's a bit ironic that it was the first to get killed. Tail was monitoring a few log files, and is unlikely to be consuming memory madly. The apache web-server only serves pages to a little old lady who only uses it to get to church on Sundays, and to a couple of developers who were in bed asleep and hadn't visited a page on the site for a few weeks. The only traffic it might have had is from port-scanners; all the content is password-protected and not linked from anywhere, so no spiders are interested. Python is running two separate custom applications, and nothing in the logs suggests they weren't humming along as normal. One of them is a relatively recent implementation, which makes it suspect #1; it doesn't have any data structures of any significance, and normally uses only about 8% of the total physical RAM. It hasn't misbehaved since. The grep is suspect #2, and the one I want to be guilty, because it was a once-off command. The command (which piped the output of a grep -r to another grep) had been started at least 30 minutes earlier, and the fact that it was still running is suspicious. However, I wouldn't have thought grep would ever use a significant amount of memory. It took a while for the OOM killer to get to it, which suggests it wasn't going mad, but the OOM killer stopped once it was killed, suggesting it may have been a memory hog that finally satisfied the OOM killer's blood-lust.

    Read the article

  • Xubuntu 13.10 64bit - Slow and buggy "log out" process?

    - by MrKatSwordfish
    I'm a Windows convert who has done only a little dabbling in Ubuntu in the past (back in Dapper Drake a few years back). A lot has changed since then, and I've been yearning to jump back into Linux again! So, having just bought a new SSD, I felt that this would be as good a time as any to set up a dual-boot system again. I've messed around with Ubuntu 13.10 a bit, and while Unity has its issues, I think it still needs some time to develop. I looked into XFCE and liked it a lot, so I went with Xubuntu. I've installed Xubuntu, and for the most part it's running smoothly and is a pleasure to work with. The customization is great, and the minimalistic look and feel is really nice! But here's my problem: whenever I select the "Log Out" option from either the application menu or the user profile menu, my PC slows to a crawl, and the dialog box with all the options (shut down, restart, log out, etc.) takes maybe a minute or more to appear. I click the log out button, my PC is brought to a snail's pace, and I have to wait for what seems like an eternity for the logout options to appear! If I try to open something else (even a terminal window) while it's loading the logout options, that other program won't finish loading until the logout screen finally appears. Keep in mind, this is a pretty much vanilla install of Xubuntu 13.10 64-bit, on a PC with an Intel i7, an SSD, 6 GB of DDR3 RAM, and a new AMD 7770 GPU (drivers haven't been installed yet, though). Everything else runs fast; most applications open near-instantly! It must be an issue with how the logout options screen initializes or something, but I'm not sure exactly how I can fix it. Edit - extra info: this problem is very consistent when using the "Log Out" buttons in Xubuntu. However, I've found that I'm able to reboot and shut down much more quickly by going through the "Switch User" screen and using the reboot or shutdown buttons there. I'm nearly certain that it has something to do with the little log out options screen that appears when I select Log Out from the menu, and not with the actual process of shutting down. So what should I do? I really like XFCE so far, and I've never tried a non-Ubuntu-based distro before, but should I just switch to something else? Is there any known fix for this issue? Are there any work-arounds for logging out/shutting down/rebooting via the terminal so that I can avoid this irritating bug? Is there any way I can monitor the progress of the log out via the terminal, allowing me to see which parts are causing the slow-down? What is the best way to report this bug to someone?

    Read the article

  • Tech Talk: Can we put "Simple" in Application Business Process Management?

    - by Tanu Sood
    Customers are challenged with looking for answers to basic questions: Why can't it be simpler to connect applications in cloud to applications on-premise? How do I make my business more responsive and agile? How do I create end-to-end processes joining multiple applications? Tune in to this Tech Talk session with Amit Zavery, Vice President of Product Management for Oracle Fusion Middleware, as he discusses the relevance and importance of business process management and how Oracle BPM Suite is benefiting customers across all industries. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com

    Read the article

  • CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

    - by ewwhite
    Hello, I'm having a difficult time with a remote CentOS 5.5 kickstart installation on an HP ProLiant DL360 G6. This is in an environment where I maintain an internal CentOS yum repository. The kickstart installation and post scripts have been tested and normally work. This hardware is also common in this environment, so I do not believe that it is a factor. Unfortunately, I'm having problems with a specific server install. The system is remote to the yum repository at a distance of 500 miles. They are connected over a private low-latency 100-megabit layer 2 connection (26ms round-trip). I'm mounting the 10MB CentOS 5 netinstall ISO image via an HP ILO remote console. The initial boot parameters are: linux ks=http://yum.abctrading.com/prop.cfg ksdevice=eth0 ip=x.x.x.x dns=x.x.x.x netmask=255.255.255.0 gateway=x.x.x.x I'm using the url --url http://ks.abctrading.com/5.5/os/x86_64/ method of installation. This quickly boots into the anaconda installer, pulls the kickstart config and formats the drives. The process eventually halts at the screen below, reading "Starting install process.". Switching to the other virtual consoles gives the second image below. The process stalls at this point and cannot proceed with the rest of the installation. Running the same kickstart config locally works just fine. I've tried mounting the boot ISO from the console as well as from the ILO2 command line pointing to a locally-hosted boot ISO via http. How can I debug this? Are there any options I've overlooked?
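    A hedged starting point for debugging from inside the installer itself (the log paths are the usual ones for CentOS 5 anaconda; whether wget is present in the installer image is an assumption):

        # Switch to the installer shell with Ctrl-Alt-F2, then inspect the logs:
        cat /tmp/anaconda.log
        cat /tmp/syslog
        # Confirm the repository is actually reachable from the target network:
        wget -O - http://ks.abctrading.com/5.5/os/x86_64/repodata/repomd.xml | head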

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic php sites on Snow Leopard. Usually I just go to my browser and type anything like:

    localhost
    http://localhost
    127.0.0.1
    mycomputername.local

    But suddenly, after installing a gem (compass), none of this is working. I tried

    sudo apachectl restart

    thinking that I just needed to restart apache, but no luck. My error log looks like:

    [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
    [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down

    I also tried

    sudo apachectl -k start

    and I got the error:

    Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option

    When I look at the code around that line, I see:

    <Directory />
        Options Indexes MultiViews + FollowSymLinks
        AllowOverride All
        Order allow, deny
        Allow from all
    </Directory>
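    The "Illegal option" is the stray "+" that the space in "+ FollowSymLinks" leaves standing on its own, and two more problems lurk behind it: Apache refuses to mix bare options with +/- prefixed ones in a single Options line, and "Order allow, deny" must not contain a space after the comma. A corrected stanza, preserving the original intent, would be:

        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>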

    Read the article

  • How can I confirm that the burn process completed successfully?

    - by infant programmer
    Just now I tried to make a duplicate copy of my Windows XP Pro SP3 CD. I had set 32X speed while the max speed allowed is 48X, and checked "Verify the Data after burning". I didn't observe what percentage it had completed, but after 15 mins (which is a sufficient period to burn a disc), I heard the "Fatal Error" sound. When I looked at the monitor, there was a message box showing "Error while burning Disc"; I saved the error log, which you can find here: click_me. How do I confirm that the burning process is done? Well, my PC is able to read the CD, and the CD is bootable too. I am not sure whether the burning process failed or the verify process did! Please let me know if you have come across or are aware of such a situation. As it is an XP installation CD, I don't want to resort to trial and error. regards,
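    One hedged way to settle it, assuming access to a machine with a Linux shell (isosize ships with util-linux, and the ISO file name below is a placeholder for the source image): hash exactly as many 2048-byte blocks as the disc's filesystem claims and compare against the source image's hash; matching sums mean the burn completed intact.

        dd if=/dev/cdrom bs=2048 count=$(( $(isosize /dev/cdrom) / 2048 )) | md5sum
        md5sum WinXP-Pro-SP3.iso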

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote ftp server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is mac os x; the eventual deployment environment is linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download, and to run a script that will process the downloaded file, at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried and true existing tools, and if I do have to write code, to try to write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems such a common task, there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
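    As a minimal sketch of the cron-plus-curl route (the host, paths and email address below are placeholders, not part of the original question):

        #!/bin/sh
        # fetch-forecast.sh -- pull one data file over FTP, then process it.
        set -e
        URL="ftp://ftp.example.com/data/forecast.dat"
        OUT="/var/data/forecast.dat"
        ADMIN="ops@example.com"

        # --retry absorbs transient network hiccups; any remaining failure
        # leaves curl with a non-zero exit, which routes us to the email branch.
        if curl --silent --show-error --retry 3 --retry-delay 60 -o "$OUT" "$URL"; then
            /usr/local/bin/process-forecast "$OUT"
        else
            echo "forecast download failed at $(date)" | mail -s "forecast fetch failed" "$ADMIN"
        fi

    scheduled with a crontab entry such as 0 */6 * * * /usr/local/bin/fetch-forecast.sh to match the six-hour cadence.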

    Read the article

< Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >