Search Results

Search found 11697 results on 468 pages for 'requires sense of humor'.


  • VB6 ImageList FRM Code Generation

    - by DAC
    Note: This is probably a shot in the dark, and its purely out of curiosity that I'm asking. When using the ImageList control from the Microsoft Common Control lib (mscomctl.ocx) I have found that VB6 generates FRM code that doesn't resolve to real property/method names and I am curious as to how the resolution is made. An example of the generated FRM code is given below with an ImageList containing 3 images: Begin MSComctlLib.ImageList ImageList1 BackColor = -2147483643 ImageWidth = 100 ImageHeight = 45 MaskColor = 12632256 BeginProperty Images {2C247F25-8591-11D1-B16A-00C0F0283628} NumListImages = 3 BeginProperty ListImage1 {2C247F27-8591-11D1-B16A-00C0F0283628} Picture = "Form1.frx":0054 Key = "" EndProperty BeginProperty ListImage2 {2C247F27-8591-11D1-B16A-00C0F0283628} Picture = "Form1.frx":3562 Key = "" EndProperty BeginProperty ListImage3 {2C247F27-8591-11D1-B16A-00C0F0283628} Picture = "Form1.frx":6A70 Key = "" EndProperty EndProperty End From my experience, a BeginProperty tag typically means a compound property (an object) is being assigned to, such as the Font object of most controls, for example: Begin VB.Form Form1 Caption = "Form1" ClientHeight = 10950 ClientLeft = 60 ClientTop = 450 ClientWidth = 7215 BeginProperty Font Name = "MS Serif" Size = 8.25 Charset = 0 Weight = 400 Underline = 0 'False Italic = -1 'True Strikethrough = 0 'False EndProperty End Which can be easily seen to resolve to VB.Form.Font.<Property Name>. With ImageList, there is no property called Images. The GUID associated with property Images indicates type ListImages which implements interface IImages. This type makes sense, as the ImageList control has a property called ListImages which is of type IImages. Secondly, properties ListImage1, ListImage2 and ListImage3 don't exist on type IImages, but the GUID associated with these properties indicates type ListImage which implements interface IImage. This type also makes sense, as IImages is in fact a collection of IImage. What doesn't make sense to me is how VB6 makes these associations. How does VB6 know to make the association between the name Images - ListImages purely because of an associated type (provided by the GUID) - perhaps because it's the only property of that type? Secondly, how does it resolve ListImage1, ListImage2 and ListImage3 into additions to the collection IImages, and does it use the Add method? Or perhaps the ControlDefault property? Perhaps VB6 has specific knowledge of this control and no logical resolution exists?

    Read the article

  • Dojo addOnLoad, but is Dojo loaded?

    - by adamrice32
    I've encountered what seems like a chicken & egg problem, and have what I think is a logical solution. However, it occurred to me that others must have encountered something similar, so I figured I'd float it out there for the masses. The situation is that I want to use dojo's addOnLoad function to queue up a number of callbacks which should be executed after the DOM has completed rendering on the client side. So what I'm doing is as follows: <html> <head> <script type="text/javascript" src="dojo.xd.js"></script> ... </head> <body> ... <script type="text/javascript"> dojo.addOnLoad( ... ); dojo.addOnLoad( ... ); ... </script> </body> </html> Now, the issue is that I seem to be calling dojo.addOnLoad before the entire Dojo library has been downloaded by the browser. This makes sense in a way, because the inline SCRIPT contents should be executed before the entire DOM is loaded (and the normal body onload callback is triggered). My question is this - is my approach sound, or would it make more sense to register a normal/standard body onload JavaScript callback to call a function which does the same work that each of the dojo.addOnLoads is doing in the SCRIPT block? Of course, this raises the question: why would you then ever use dojo.addOnLoad if you're not guaranteed that the Dojo library will be loaded prior to using the library? Hopefully this situation makes sense to someone other than me. Seems like someone else may have encountered this situation. Thoughts? Best Regards, Adam Rice

    Read the article

  • Uncatchable AccessViolationException

    - by Roy
    Hi all, I'm getting close to desperate. I am developing a field service application for Windows Mobile 6.1 using C# and quite a bit of P/Invoking (I think I'm referencing about 50 native functions). Under normal circumstances this works without any problem, but when I start stressing the GC I get a nasty 0xC0000005 error which seems uncatchable. In my test I'm rapidly closing and opening a dialog form (the form did make use of native functions, but for testing I commented these out) and after a while the Windows Mobile error reporter comes around to tell me that there was a fatal error in my application. My code uses a try-catch around the Application.Run(masterForm); and hooks into the CurrentDomain.UnhandledException event, but the application still crashes. Even when I attach the debugger, Visual Studio just tells me "The remote connection to the device has been lost" when the exception occurs. Since I didn't succeed in catching the exception in the managed environment, I tried to make sense out of the Error Reporter log file. But this doesn't make any sense either; the only consistent thing about the error is the application it occurs in. The thread where the error occurs is unknown to me, the module where the error occurs differs from time to time (I've seen my application.exe, WS2.dll, netcfagl3_5.dll and mscoree3_5.dll), and even the error code is not always the same (most of the time it's 0xC0000005, but I've also seen a 0x80000002 error, which is a warning judging by the first byte?). I tried debugging through BugTrap, but strangely enough this crashes with the same error code (0xC0000005). I tried to open the kdmp file with Visual Studio, but I can't seem to make any sense out of this because it only shows me disassembler code when I step into the error (unless I have the right .pdb files, which I don't). Same goes for WinDbg. To make a long story short: I frankly don't have a single clue where to look for this error, and I'm hoping some bright soul on Stack Overflow does. I'm happy to provide some code, but at this moment I don't know which piece to provide. Any help is greatly appreciated!

    Read the article

  • Iterator performance contract (and use on non-collections)

    - by polygenelubricants
    If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java Collection which cannot be iterated in linear time? java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder? Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense). Similarly, would it have made sense for java.util.Random implements Iterator<Double>? Should a type really implement Iterator if it's effectively only using one-third of its API? (i.e. no remove(), always hasNext())
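
    To make the kind of iterator the question describes concrete: hasNext() would always return true, next() would compute the next value on demand, and remove() would have no sensible meaning. The sketch below shows that lazy, infinite-sequence idea as a Python generator purely for illustration (the question itself is about Java's Iterator interface, so treat the names here as hypothetical):

```python
from itertools import count

def primes():
    """Infinite, compute-on-demand prime sequence: the analogue of a
    hypothetical PrimeGenerator whose hasNext() is always true and whose
    next() does the actual work (remove() has no sensible meaning)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):  # trial division against earlier primes
            found.append(n)
            yield n

gen = primes()
print([next(gen) for _ in range(10)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```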

    Read the article

  • Multiple dynamic timers

    - by Rickard
    I am working on a project where I need to let the user create one (or more) timers to fire off an event. The user is supposed to define variables such as whether the timer should be active and how often the timer will fire, along with some other variables for what will happen when the timer is firing. All these variables are stored in a dictionary (Dictionary) where the object holds all the variables the user has set and the string is the name that the user has chosen for this timer. I then want my program to loop through this dictionary and search for all objects which have the variable t_Active set to true (this I have already achieved). What I need help with figuring out is the following: When it detects the variable, and if it's set to true, I need the program to see if there is already a timer created for this. If there isn't, it should create one and set the relevant parameters for the timer. The two variables t_num and t_period should decide the interval of the timer. t_num is an int and t_period is a string which will be set to either minutes, hours or days. Combining t_num with 60000 (minutes), 3600000 (hours) or 86400000 (days) should give the correct interval. But how would I go about programmatically creating a timer for each user-defined active object? And how do I get the program to detect whether or not a timer has already been created? I have been searching both here and on Google, but so far I haven't come across something that makes sense to me. I am still learning C#, so what makes sense to you guys may not necessarily make sense to me yet. :) I hope I have explained what I need well enough; please do ask me to clarify if you don't get me. Edit: Maybe I should also mention that the mentioned dictionary will also be saved to an XML file so that the user can pick up all the settings they made at any time.
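
    The pattern being asked about is essentially a registry keyed by the same timer name: walk the settings dictionary, and for every active entry that has no live timer yet, compute the interval from t_num and t_period and create one, storing it under that name so the "does a timer already exist?" check is just a key lookup. Here is a minimal sketch of that pattern in Python (an illustrative analog only, not C# code; the settings values and function names are made up, and in C# the same idea would map to a second dictionary of timer objects keyed by the same names):

```python
import threading

# Milliseconds per unit, matching the values given in the question.
MS_PER_UNIT = {"minutes": 60_000, "hours": 3_600_000, "days": 86_400_000}

# Hypothetical user-defined settings: name -> timer variables.
settings = {
    "backup": {"t_Active": True, "t_num": 5, "t_period": "minutes"},
    "report": {"t_Active": False, "t_num": 1, "t_period": "hours"},
}

active_timers = {}  # name -> timer; "already created?" is just a key lookup


def fire(name):
    print(f"timer {name} fired")


def sync_timers():
    """Create a timer for every active settings entry that doesn't have one yet."""
    for name, cfg in settings.items():
        if cfg["t_Active"] and name not in active_timers:
            interval_ms = cfg["t_num"] * MS_PER_UNIT[cfg["t_period"]]
            timer = threading.Timer(interval_ms / 1000.0, fire, args=(name,))
            timer.start()
            active_timers[name] = timer


sync_timers()
```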

    Read the article

  • Alerting for incorrect cell values

    - by Allan
    My problem: I have two ranges, R16 and R01. These ranges were set up by selecting each range and then renaming it in the upper-left panel of the sheet. Each range requires users to fill in every cell with a value. R16 requires users to enter a number from 0 through 5. The range R01 requires a value of 0 or 1 to be entered. No cell within these two ranges can be left blank. These ranges and requirements are specific to this sheet only. It would be nice if, at the time the user enters a number, an error message such as [invalid entry] appeared when the value entered is outside the parameters set. For example, in R16, if someone entered 12 or -1 they would be alerted. Finally, when the user presses a button on the page to use these values in a separate process, it is essential to check that no cell is left blank. I am trying to find a way to halt the running of the macro (via the button) if the parameters above are not met. Thank you

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server is running Ejabberd as an XMPP server, and I configured it to use postgresql instead of mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running Ejabberd again I get a crash: =CRASH REPORT==== 3-Dec-2010::22:05:00 === crasher: pid: <0.36.0> registered_name: [] exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]}, {'EXIT',"Error reading Mnesia database"}}} in function application_master:init/4 Here I was thinking that my system is running on PostgreSQL, while it seems I was still using Mnesia. I have several questions: How can I make sure mnesia is not being used? How can I divert all the ejabberd activities to PGSQL? This is the modules part in my ejabberd.cfg file: {modules, [ {mod_adhoc, []}, {mod_announce, [{access, announce}]}, % requires mod_adhoc {mod_caps, []}, {mod_configure,[]}, % requires mod_adhoc {mod_ctlextra, []}, {mod_disco, []}, {mod_irc, []}, {mod_last_odbc, []}, {mod_muc, [ {access, muc}, {access_create, muc}, {access_persistent, muc}, {access_admin, muc_admin}, {max_users, 500} ]}, {mod_offline_odbc, []}, {mod_privacy_odbc, []}, {mod_private_odbc, []}, {mod_pubsub, [ % requires mod_caps {access_createnode, pubsub_createnode}, {plugins, ["default", "pep"]} ]}, {mod_register, [ {welcome_message, none}, {access, register} ]}, {mod_roster_odbc, []}, {mod_stats, []}, {mod_time, []}, {mod_vcard_odbc, []}, {mod_version, []} ]}. What am I missing? I am assuming the crash is due to the mnesia DB being used by Ejabberd, and since it's out of sync with the PGSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and would love some direction. EDIT: One problem solved. Since I'm using amazon cloud, I needed to hardcode the ERLANG_NODE so it won't be defined by the hostname (which changes on reboot). This got my ejabberd running, but still I wish to stop using mnesia, and I wonder what part of ejabberd is still using it and how can I found it.

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing and I'd like some more opinions. The situation: The program should update at near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. There are some events that occur at regular intervals (e.g. creation of the objects). Movements can change at any time through user input. My solution was: Build a server that runs continuously and stores the data internally. The server dumps a state of the program at regular intervals to protect against power failures and/or crashes. He argued that the program requires a database and that I should use cron jobs to update the data. I can store movement information by storing start point, end point and speed, and update the position in the cron job (and calculate collisions with other objects there) by calculating direction and speed. His reasons: Requires more CPU & memory because it runs constantly. Power failures/crashes might destroy data. Databases are faster. My reasons against this are mostly: Not very precise, as events can only occur at full minutes (wouldn't be that bad though). Requires (possibly costly) transformation of data on every run from relational data to objects. An RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient. Power failures (or other crashes) can leave the data in an undefined state with only partially updated data unless (possibly costly) precautions (like transactions) are taken. What are your opinions about that? Which arguments can you add for either side?

    Read the article

  • What is the relation between programming and mathematics?

    - by Math Grad
    Programmers seem to think that their work is quite mathematical. I understand this when you try to optimize something for performance, find the most efficient algorithm, etc. But it seems patently false when you look at a billing application for a shop, or system software riddled with I/O calls. So what is it exactly? Is computation and the associated programming really mathematical? Here I particularly have in mind the words of the philosopher Schopenhauer: That arithmetic is the basest of all mental activities is proved by the fact that it is the only one that can be accomplished by means of a machine. Take, for instance, the reckoning machines that are so commonly used in England at the present time, and solely for the sake of convenience. But all analysis finitorum et infinitorum is fundamentally based on calculation. Therefore we may gauge the “profound sense of the mathematician,” of whom Lichtenberg has made fun, in that he says: “These so-called professors of mathematics have taken advantage of the ingenuousness of other people, have attained the credit of possessing profound sense, which strongly resembles the theologians’ profound sense of their own holiness.” I lifted the above quote from here. It seems that programmers are doing precisely the sort of mechanized, base mental activity the grand old man is contemptuous of. So what exactly is the deal? Is programming really the "good" kind of mathematics, or just the baser type, or altogether something else, just meant for business and not to be confused with a pure discipline?

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is: ([id] is an identity) [id], [parent_id], [path] 1, NULL, 1 2, 1, 1-2 3, 1, 1-3 4, 3, 1-3-4 My goal is to query quickly for multiple rows of this table and view the full path of the node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on inserts and maintain it in its own column, or generate this path on query to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path": AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity. INSTEAD OF INSERT TRIGGER - does not require the insert to have a NULL path passed, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY(). STORED PROCEDURE - requires all inserts into this table to be done through the stored procedure implementing the trigger logic. VIEW - requires building the path in the view. 1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but requires the view to perform the path logic and requires use of the view if a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
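
    Whichever mechanism ends up materialising it, the path itself is just the chain of parent links from the row up to the root, reversed and joined with dashes. A tiny sketch of that logic (Python, purely to illustrate the computation; in SQL Server a recursive query or any of the trigger/view options above would perform the same walk):

```python
rows = {1: None, 2: 1, 3: 1, 4: 3}  # id -> parent_id, as in the example table

def path(node_id):
    """Build the root-to-node display path, e.g. path(4) == '1-3-4'."""
    chain = []
    while node_id is not None:       # walk up the parent links to the root
        chain.append(node_id)
        node_id = rows[node_id]
    return "-".join(str(n) for n in reversed(chain))

print(path(4))  # 1-3-4
```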

    Read the article

  • Should I define a single "DataContext" and pass references to it around, or define multiple "DataContexts"?

    - by Nate Bross
    I have a Silverlight application that consists of a MainWindow and several classes which update and draw images on the MainWindow. I'm now expanding this to keep track of everything in a database. Without going into specifics, lets say I have a structure like this: MainWindow Drawing-Surface Class1 -- Supports Drawing DataContext + DataServiceCollection<T> w/events Class2 -- Manages "transactions" (add/delete objects from drawing) Class3 Each "Class" is passed a reference to the Drawing Surface so they can interact with it independently. I'm starting to use WCF Data Services in Class1 and its working well; however, the other classes are also going to need access to the WCF Data Services. (Should I define my "DataContext" in MainWindow and pass a reference to each child class?) Class1 will need READ access to the "transactions" data, and Class2 will need READ access to some of the drawing data. So my question is, where does it make the most sense to define my DataContext? Does it make sense to: Define a "global" WCF Data Service "Context" object and pass references to that in all of my subsequent classes? Define an instance of the "Context" for each Class1, Class2, etc Have each method that requires access to data define its own instance of the "Context" and use closures handle the async load/complete events? Would a structure like this make more sense? Is there any danger in keeping an active "DataContext" open for an extended period of time? Typical usecase of this application could range from 1 minute to 40+ minutes. MainWindow Drawing-Surface DataContext Class1 -- Supports Drawing DataServiceCollection<DrawingType> w/events Class2 -- Manages "transactions" (add/delete objects from drawing) DataServiceCollection<TransactionType> w/events Class3 DataServiceCollection<T> w/events

    Read the article

  • Large flags enumerations in C#

    - by LorenVS
    Hey everyone, got a quick question that I can't seem to find anything about... I'm working on a project that requires flag enumerations with a large number of flags (up to 40-ish), and I don't really feel like typing in the exact mask for each enumeration value: public enum MyEnumeration : ulong { Flag1 = 1, Flag2 = 2, Flag3 = 4, Flag4 = 8, Flag5 = 16, // ... Flag16 = 65536, Flag17 = 65536 * 2, Flag18 = 65536 * 4, Flag19 = 65536 * 8, // ... Flag32 = 65536 * 65536, Flag33 = 65536 * 65536 * 2 // right about here I start to get really pissed off } Moreover, I'm also hoping that there is an easy(ier) way for me to control the actual arrangement of bits on different endian machines, since these values will eventually be serialized over a network: public enum MyEnumeration : uint { Flag1 = 1, // BIG: 0x00000001, LITTLE:0x01000000 Flag2 = 2, // BIG: 0x00000002, LITTLE:0x02000000 Flag3 = 4, // BIG: 0x00000004, LITTLE:0x03000000 // ... Flag9 = 256, // BIG: 0x00000010, LITTLE:0x10000000 Flag10 = 512, // BIG: 0x00000011, LITTLE:0x11000000 Flag11 = 1024 // BIG: 0x00000012, LITTLE:0x12000000 } So, I'm kind of wondering if there is some cool way I can set my enumerations up like: public enum MyEnumeration : uint { Flag1 = flag(1), // BOTH: 0x80000000 Flag2 = flag(2), // BOTH: 0x40000000 Flag3 = flag(3), // BOTH: 0x20000000 // ... Flag9 = flag(9), // BOTH: 0x00800000 } What I've Tried: // this won't work because Math.Pow returns double // and because C# requires constants for enum values public enum MyEnumeration : uint { Flag1 = Math.Pow(2, 0), Flag2 = Math.Pow(2, 1) } // this won't work because C# requires constants for enum values public enum MyEnumeration : uint { Flag1 = Masks.MyCustomerBitmaskGeneratingFunction(0) } // this is my best solution so far, but is definitely // quite clunkie public struct EnumWrapper<TEnum> where TEnum { private BitVector32 vector; public bool this[TEnum index] { // returns whether the index-th bit is set in vector } // all sorts of overriding using TEnum as args } Just wondering if anyone has any cool ideas, thanks!
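
    The tedium described above — typing out ever-larger powers of two — is just 1 shifted left by n places (Flag18 is 1 << 17, and so on), and constant shift expressions of that form are usable in C# enum initializers. As a language-neutral illustration of the same arithmetic, here is a small Python sketch that generates the 40 flag values instead of writing them out by hand (illustrative only; the enum and flag names simply mirror the question):

```python
from enum import IntFlag

# Build the 40 flag values programmatically: Flag(n) == 1 << (n - 1).
MyEnumeration = IntFlag(
    "MyEnumeration",
    {f"Flag{i}": 1 << (i - 1) for i in range(1, 41)},
)

print(hex(MyEnumeration.Flag17.value))                           # 0x10000
print(hex((MyEnumeration.Flag3 | MyEnumeration.Flag19).value))   # 0x40004
```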

    Read the article

  • git rebase without changing commit timestamps

    - by Olivier
    Would it make sense to perform a git rebase while preserving the commit timestamps? I believe a consequence would be that the new branch will not necessarily have its commit dates in chronological order. Is that theoretically possible at all? (e.g. using plumbing commands; just curious here) If it is theoretically possible, then is it possible in practice with rebase not to change the timestamps? For example, assume I have the following tree: master <jun 2010> | : : : oldbranch <feb 1984> : / oldcommit <jan 1984> Now, if I rebase oldbranch on master, the date of the commit changes from feb 1984 to jun 2010. Is it possible to change that behaviour so that the commit timestamp is not changed? In the end I would thus obtain: oldbranch <feb 1984> / master <jun 2010> | : Would that make sense at all? Is it even allowed in git to have a history where an old commit has a more recent commit as a parent? Edit: A crucial question from Von C helped me understand what is going on: when you rebase, the committer's timestamp changes, but not the author's timestamp, which suddenly makes it all make sense. So my question was actually not precise enough. The answer is that rebase actually doesn't change the author's timestamps (you don't need to do anything for that), which suits me perfectly.

    Read the article

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code (the pure/const attributes in GCC). However, I am curious how strict I should be about it and what could possibly break. The most obvious case is debug output (in whatever form; it could be on cout, in some file, or in some custom debug class). I will probably have a lot of functions which don't have any side effects apart from this sort of debug output. Whether the debug output is made or not has absolutely no effect on the rest of my application. Another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has some global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory will probably be different) which should not be real side effects though (in the sense that the behaviour is not different in any way). Similarly for mutexes and other things. I can think of many complex cases where a function has some side effects (in the sense that some memory will be different, maybe even some threads are created, some filesystem manipulation happens, etc.) but makes no computational difference (all those side effects could very well be left out, and I would even prefer that). How does it work out in practice? If I mark such functions as pure/const, could it break anything (assuming the code is all correct)?

    Read the article

  • Python lists/arrays: disable negative indexing wrap-around

    - by wim
    While I find the negative number wraparound (i.e. A[-2] indexing the second-to-last element) extremely useful in many cases, there are often use cases I come across where it is more of an annoyance than helpful, and I find myself wishing for an alternate syntax to use when I would rather disable that particular behaviour. Here is a canned 2D example below, but I have had the same peeve a few times with other data structures and in other numbers of dimensions. import numpy as np A = np.random.randint(0, 2, (5, 10)) def foo(i, j, r=2): '''sum of neighbours within r steps of A[i,j]''' return A[i-r:i+r+1, j-r:j+r+1].sum() In the slice above I would rather that any negative number to the slice would be treated the same as None is, rather than wrapping to the other end of the array. Because of the wrapping, the otherwise nice implementation above gives incorrect results at boundary conditions and requires some sort of patch like: def ugly_foo(i, j, r=2): def thing(n): return None if n < 0 else n return A[thing(i-r):i+r+1, thing(j-r):j+r+1].sum() I have also tried zero-padding the array or list, but it is still inelegant (requires adjusting the lookup locations indices accordingly) and inefficient (requires copying the array). Am I missing some standard trick or elegant solution for slicing like this? I noticed that python and numpy already handle the case where you specify too large a number nicely - that is, if the index is greater than the shape of the array it behaves the same as if it were None.
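
    One simple fix for the example above is to clamp only the lower slice bounds at zero before indexing: a start index below 0 is what triggers the wrap-around, while an end index past the edge of the axis is already handled gracefully. A minimal sketch of that, assuming the same setup as in the question:

```python
import numpy as np

A = np.random.randint(0, 2, (5, 10))

def foo(i, j, r=2):
    '''Sum of neighbours within r steps of A[i, j], without negative wrap-around.'''
    # Clamp only the lower bounds: a negative start would wrap to the other
    # end of the axis, whereas an end index past the edge is harmless.
    return A[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1].sum()

print(foo(0, 0))  # neighbourhood truncated at the top-left corner, not wrapped
```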

    Read the article

  • Data conversion from accelerometer

    - by mrigendra
    Hi all, I am working on an accelerometer (BMA220), and its datasheet says that the data is in 2's complement form. So what I had to do was get that 8-bit data into an 8-bit signed char and be done. The BMA220 has an 8-bit register of which the first 6 bits are data and the last two are zero. void properdata(int16_t *msgData) { printf("\nin proper data\n"); int16_t temp, i; for(i=0; i<3; i++) { temp = *(msgData + i); printf("temp = %d sense = %d\n", temp, sense); temp = temp >> 2; // only 6 bits data temp = temp / sense; //decimal value * .0625 = value in g printf("temp = %d\n", temp); } } In this program I am taking the data in an unsigned variable msgData and doing all the calculations on a signed variable. I just need to know if this is the correct way to convert the data?
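
    For reference, the conversion being described — drop the two unused low bits, sign-extend the remaining 6-bit two's-complement value, then scale to g — can be sketched language-neutrally like this (Python, purely for illustration; the function name is made up, and the 0.0625 g-per-count scale is taken from the comment in the question's own code, so check it against the datasheet for the configured range):

```python
def decode_bma220_byte(raw):
    """Decode one raw BMA220 acceleration byte into g (illustrative sketch)."""
    value = (raw & 0xFF) >> 2   # keep the 6 data bits (now 0..63)
    if value & 0x20:            # sign bit of a 6-bit two's-complement value
        value -= 64             # sign-extend: 32..63 becomes -32..-1
    return value * 0.0625       # counts -> g (scale from the question's comment)

print(decode_bma220_byte(0b01000000))  # +16 counts -> 1.0 g
print(decode_bma220_byte(0b11000000))  # -16 counts -> -1.0 g
```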

    Read the article

  • I don't know how to solve this error

    - by wide
    It works locally, but when I load it on the server I get this error: Using themed css files requires a header control on the page. (e.g. <head runat="server" />). Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />). Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).] System.Web.UI.PageTheme.SetStyleSheet() +2458406 System.Web.UI.Page.OnInit(EventArgs e) +8699420 System.Web.UI.Control.InitRecursive(Control namingContainer) +333 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • YUM Update Failed - Error in POSTIN scriptlet in rpm package

    - by Tiffany Walker
    Running "yum update" and it gets to installing and then breaks. Not sure what the problem is. Google shows nothing. Error in POSTIN scriptlet in rpm package gtk2-2.18.9-10.el6.x86_64 error: error creating temporary file /var/tmp/rpm-tmp.NB84HC: Invalid argument error: Couldn't create temporary file for %post(gtk2-2.18.9-10.el6.x86_64): Invalid argument Updating : e2fsprogs-libs-1.41.12-12.el6.x86_64 44/378 Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 387, in callback self._instCloseFile( bytes, total, h ) File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 463, in _instCloseFile self.base.history.trans_data_pid_end(pid, state) File "/usr/lib/python2.6/site-packages/yum/history.py", line 858, in trans_data_pid_end """, ('TRUE', self._tid, pid, state)) File "/usr/lib/python2.6/site-packages/yum/sqlutils.py", line 168, in executeSQLQmark return cursor.execute(query, params) sqlite3.OperationalError: unable to open database file error: python callback <bound method RPMTransaction.callback of <yum.rpmtrans.RPMTransaction instance at 0x45c2290>> failed, aborting! With a check all: yum check Loaded plugins: fastestmirror, rhnplugin, security MySQL-client-5.5.27-1.cp.1132.x86_64 is obsoleted by MySQL-client-5.5.27-1.cp.1132.x86_64 MySQL-server-5.5.27-1.cp.1132.x86_64 is obsoleted by MySQL-server-5.5.27-1.cp.1132.x86_64 abrt-libs-2.0.8-6.el6.x86_64 is a duplicate with abrt-libs-2.0.4-14.el6.centos.x86_64 audit-libs-2.2-2.el6.x86_64 is a duplicate with audit-libs-2.1.3-3.el6.x86_64 bandmin-1.6.1-5.noarch has missing requires of perl(bandmin.conf) bandmin-1.6.1-5.noarch has missing requires of perl(bmversion.pl) bandmin-1.6.1-5.noarch has missing requires of perl(services.conf) 32:bind-libs-9.8.2-0.10.rc1.el6_3.3.x86_64 is a duplicate with 32:bind-libs-9.7.3-8.P3.el6_2.2.x86_64 cagefs-safebin-3.6-6.el6.cloudlinux.x86_64 is a duplicate with cagefs-safebin-3.5-1.el6.cloudlinux.x86_64 chkconfig-1.3.49.3-2.el6.x86_64 is a duplicate with chkconfig-1.3.49.3-1.el6_2.x86_64 cloudlinux-release-6-6.3.0.x86_64 is a duplicate with cloudlinux-release-6-6.2.2.x86_64 coreutils-8.4-19.el6.x86_64 is a duplicate with coreutils-8.4-16.el6.x86_64 coreutils-libs-8.4-19.el6.x86_64 is a duplicate with coreutils-libs-8.4-16.el6.x86_64 1:cups-libs-1.4.2-48.el6_3.1.x86_64 is a duplicate with 1:cups-libs-1.4.2-44.el6_2.3.x86_64 1:dbus-libs-1.2.24-7.el6_3.x86_64 is a duplicate with 1:dbus-libs-1.2.24-5.el6_1.x86_64 12:dhcp-common-4.1.1-31.P1.el6_3.1.x86_64 is a duplicate with 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 e2fsprogs-libs-1.41.12-12.el6.x86_64 is a duplicate with e2fsprogs-libs-1.41.12-11.el6.x86_64 exim-4.80-0.x86_64 has missing requires of perl(SafeFile) expat-2.0.1-11.el6_2.x86_64 is a duplicate with expat-2.0.1-9.1.el6.x86_64 frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0 gawk-3.1.7-10.el6.x86_64 is a duplicate with gawk-3.1.7-9.el6.x86_64 glib2-2.22.5-7.el6.x86_64 is a duplicate with glib2-2.22.5-6.el6.x86_64 glibc-2.12-1.80.el6_3.5.x86_64 is a duplicate with glibc-2.12-1.47.el6_2.12.x86_64 glibc-common-2.12-1.80.el6_3.5.x86_64 is a duplicate with glibc-common-2.12-1.47.el6_2.12.x86_64 gtk2-2.18.9-10.el6.x86_64 is a duplicate with gtk2-2.18.9-6.el6.centos.x86_64 kernel-firmware-2.6.32-320.4.1.lve1.1.4.el6.noarch is obsoleted by kernel-firmware-2.6.32-320.4.1.lve1.1.4.el6.noarch kernel-firmware-2.6.32-320.4.1.lve1.1.4.el6.noarch is obsoleted by kernel-firmware-2.6.32-379.5.1.lve1.1.9.6.1.el6.noarch 
kernel-firmware-2.6.32-379.5.1.lve1.1.9.6.1.el6.noarch is a duplicate with kernel-firmware-2.6.32-320.4.1.lve1.1.4.el6.noarch kernel-firmware-2.6.32-379.5.1.lve1.1.9.6.1.el6.noarch is obsoleted by kernel-firmware-2.6.32-320.4.1.lve1.1.4.el6.noarch kernel-firmware-2.6.32-379.5.1.lve1.1.9.6.1.el6.noarch is obsoleted by kernel-firmware-2.6.32-379.5.1.lve1.1.9.6.1.el6.noarch kernel-headers-2.6.32-379.5.1.lve1.1.9.6.1.el6.x86_64 is a duplicate with kernel-headers-2.6.32-320.4.1.lve1.1.4.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64 is a duplicate with keyutils-libs-1.4-3.el6.x86_64 krb5-libs-1.9-33.el6_3.3.x86_64 is a duplicate with krb5-libs-1.9-22.el6_2.1.x86_64 libblkid-2.17.2-12.7.el6.x86_64 is a duplicate with libblkid-2.17.2-12.4.el6.x86_64 libcom_err-1.41.12-12.el6.x86_64 is a duplicate with libcom_err-1.41.12-11.el6.x86_64 libgcc-4.4.6-4.el6.x86_64 is a duplicate with libgcc-4.4.6-3.el6.x86_64 libselinux-2.0.94-5.3.el6.x86_64 is a duplicate with libselinux-2.0.94-5.2.el6.x86_64 libstdc++-4.4.6-4.el6.x86_64 is a duplicate with libstdc++-4.4.6-3.el6.x86_64 libtiff-3.9.4-6.el6_3.x86_64 is a duplicate with libtiff-3.9.4-5.el6_2.x86_64 libudev-147-2.42.el6.x86_64 is a duplicate with libudev-147-2.40.el6.x86_64 libuuid-2.17.2-12.7.el6.x86_64 is a duplicate with libuuid-2.17.2-12.4.el6.x86_64 libxml2-2.7.6-8.el6_3.3.x86_64 is a duplicate with libxml2-2.7.6-4.el6_2.4.x86_64 nspr-4.9.1-2.el6_3.x86_64 is a duplicate with nspr-4.8.9-3.el6_2.x86_64 nss-util-3.13.5-1.el6_3.x86_64 is a duplicate with nss-util-3.13.1-3.el6_2.x86_64 openssl-1.0.0-25.el6_3.1.x86_64 is a duplicate with openssl-1.0.0-20.el6_2.5.x86_64 python-2.6.6-29.el6_3.3.x86_64 is a duplicate with python-2.6.6-29.el6.x86_64 python-libs-2.6.6-29.el6_3.3.x86_64 is a duplicate with python-libs-2.6.6-29.el6.x86_64 readline-6.0-4.el6.x86_64 is a duplicate with readline-6.0-3.el6.x86_64 sed-4.2.1-10.el6.x86_64 is a duplicate with sed-4.2.1-7.el6.x86_64 tzdata-2012c-3.el6.noarch is a duplicate with tzdata-2012c-1.el6.noarch xmlrpc-c-1.16.24-1209.1840.el6.x86_64 is a duplicate with xmlrpc-c-1.16.24-1200.1840.el6_1.4.x86_64 xmlrpc-c-client-1.16.24-1209.1840.el6.x86_64 is a duplicate with xmlrpc-c-client-1.16.24-1200.1840.el6_1.4.x86_64 Error: check all Tried: #rm /var/lib/rpm/__db* #rpm --rebuilddb #yum clean all Tried also running yum-complete-transaction still won't finish the update. ls -ld /var/tmp/ drwxrwxrwt. 
20 root root 12288 Oct 3 18:44 /var/tmp/ df -h /var/tmp/ Filesystem Size Used Avail Use% Mounted on /tmp 3.9G 1.2G 2.6G 32% /var/tmp Latest errors: Error: Protected multilib versions: libgcc-4.4.6-4.el6.i686 != libgcc-4.4.6-3.el6.x86_64 Error: Protected multilib versions: glibc-2.12-1.80.el6_3.5.i686 != glibc-2.12-1.47.el6_2.12.x86_64 EDITED: yum repolist Loaded plugins: fastestmirror, rhnplugin, security Loading mirror speeds from cached hostfile * cloudlinux-x86_64-server-6: cl.banahosting.com repo id repo name status cloudlinux-x86_64-server-6 CloudLinux Server 6 x86_64 10,948+725 repolist: 10,948 [~]# package-cleanup --dupes Loaded plugins: fastestmirror, rhnplugin xmlrpc-c-client-1.16.24-1209.1840.el6.x86_64 xmlrpc-c-client-1.16.24-1200.1840.el6_1.4.x86_64 bind-libs-9.7.3-8.P3.el6_2.2.x86_64 bind-libs-9.8.2-0.10.rc1.el6_3.3.x86_64 libblkid-2.17.2-12.4.el6.x86_64 libblkid-2.17.2-12.7.el6.x86_64 libtiff-3.9.4-5.el6_2.x86_64 libtiff-3.9.4-6.el6_3.x86_64 audit-libs-2.1.3-3.el6.x86_64 audit-libs-2.2-2.el6.x86_64 libstdc++-4.4.6-3.el6.x86_64 libstdc++-4.4.6-4.el6.x86_64 sed-4.2.1-10.el6.x86_64 sed-4.2.1-7.el6.x86_64 python-libs-2.6.6-29.el6_3.3.x86_64 python-libs-2.6.6-29.el6.x86_64 coreutils-libs-8.4-16.el6.x86_64 coreutils-libs-8.4-19.el6.x86_64 libudev-147-2.40.el6.x86_64 libudev-147-2.42.el6.x86_64 chkconfig-1.3.49.3-2.el6.x86_64 chkconfig-1.3.49.3-1.el6_2.x86_64 keyutils-libs-1.4-4.el6.x86_64 keyutils-libs-1.4-3.el6.x86_64 glibc-2.12-1.47.el6_2.12.x86_64 glibc-2.12-1.80.el6_3.5.x86_64 tzdata-2012c-3.el6.noarch tzdata-2012c-1.el6.noarch coreutils-8.4-19.el6.x86_64 coreutils-8.4-16.el6.x86_64 dbus-libs-1.2.24-7.el6_3.x86_64 dbus-libs-1.2.24-5.el6_1.x86_64 libxml2-2.7.6-4.el6_2.4.x86_64 libxml2-2.7.6-8.el6_3.3.x86_64 abrt-libs-2.0.8-6.el6.x86_64 abrt-libs-2.0.4-14.el6.centos.x86_64 expat-2.0.1-9.1.el6.x86_64 expat-2.0.1-11.el6_2.x86_64 python-2.6.6-29.el6.x86_64 python-2.6.6-29.el6_3.3.x86_64 gtk2-2.18.9-6.el6.centos.x86_64 gtk2-2.18.9-10.el6.x86_64 libcom_err-1.41.12-12.el6.x86_64 libcom_err-1.41.12-11.el6.x86_64 gawk-3.1.7-10.el6.x86_64 gawk-3.1.7-9.el6.x86_64 readline-6.0-4.el6.x86_64 readline-6.0-3.el6.x86_64 glibc-common-2.12-1.80.el6_3.5.x86_64 glibc-common-2.12-1.47.el6_2.12.x86_64 libselinux-2.0.94-5.2.el6.x86_64 libselinux-2.0.94-5.3.el6.x86_64 cups-libs-1.4.2-48.el6_3.1.x86_64 cups-libs-1.4.2-44.el6_2.3.x86_64 nspr-4.9.1-2.el6_3.x86_64 nspr-4.8.9-3.el6_2.x86_64 cagefs-safebin-3.5-1.el6.cloudlinux.x86_64 cagefs-safebin-3.6-6.el6.cloudlinux.x86_64 libuuid-2.17.2-12.4.el6.x86_64 libuuid-2.17.2-12.7.el6.x86_64 xmlrpc-c-1.16.24-1209.1840.el6.x86_64 xmlrpc-c-1.16.24-1200.1840.el6_1.4.x86_64 openssl-1.0.0-20.el6_2.5.x86_64 openssl-1.0.0-25.el6_3.1.x86_64 dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 dhcp-common-4.1.1-31.P1.el6_3.1.x86_64 krb5-libs-1.9-33.el6_3.3.x86_64 krb5-libs-1.9-22.el6_2.1.x86_64 nss-util-3.13.5-1.el6_3.x86_64 nss-util-3.13.1-3.el6_2.x86_64 cloudlinux-release-6-6.2.2.x86_64 cloudlinux-release-6-6.3.0.x86_64 e2fsprogs-libs-1.41.12-11.el6.x86_64 e2fsprogs-libs-1.41.12-12.el6.x86_64 glib2-2.22.5-6.el6.x86_64 glib2-2.22.5-7.el6.x86_64 UPDATE 2 I removed all the dupes and then did update and got this: Updating : sudo-1.7.4p5-13.el6_3.x86_64 79/361 Error in POSTIN scriptlet in rpm package sudo-1.7.4p5-13.el6_3.x86_64 warning: /etc/sudoers created as /etc/sudoers.rpmnew error: error creating temporary file /var/tmp/rpm-tmp.hjTOqJ: Invalid argument error: Couldn't create temporary file for %post(sudo-1.7.4p5-13.el6_3.x86_64): Invalid argument Updating : pcre-7.8-6.el6.x86_64 
80/361 Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 399, in callback self._instCloseFile( bytes, total, h ) File "/usr/lib/python2.6/site-packages/yum/rpmtrans.py", line 475, in _instCloseFile self.base.history.trans_data_pid_end(pid, state) File "/usr/lib/python2.6/site-packages/yum/history.py", line 858, in trans_data_pid_end """, ('TRUE', self._tid, pid, state)) File "/usr/lib/python2.6/site-packages/yum/sqlutils.py", line 168, in executeSQLQmark return cursor.execute(query, params) sqlite3.OperationalError: unable to open database file error: python callback <bound method RPMTransaction.callback of <yum.rpmtrans.RPMTransaction instance at 0x5c7cfc8>> failed, aborting! - [~]# lsattr /var/tmp/ -------------e- /var/tmp/cache_5b07945563e03aec1c44917886fd99a6 -------------e- /var/tmp/sess_6edfafda1a191f6986bd020ed945eea0 -------------e- /var/tmp/sess_1b837feecdd4c9e6aa6ecd81d41fda75 -------------e- /var/tmp/sess_70bec5f392b4f5f75ac444f5c82db2dc -------------e- /var/tmp/sess_24cd226ba0a370a6d3838a37745b2e15 -------------e- /var/tmp/nginx_proxy -------------e- /var/tmp/sess_19fb1dd060e42c9de8786ef34d7fcf6e -------------e- /var/tmp/sess_b4ac777076c5122a6e27d776de0a2fcb -------------e- /var/tmp/sess_5077441775ef8d07a2185e8fd48a4aa8 -------------e- /var/tmp/cache_4e71d930fe8250e222ae4d1dc39646ff -------------e- /var/tmp/sess_eb6eb29b38b55b85303c3137611f0a2faa15c21d -------------e- /var/tmp/sess_81e7e8d93b395f2c8d7e3fe12cc59e56 -------------e- /var/tmp/sess_05c7f305bdbf9a4c7af251d33ac59766 -------------e- /var/tmp/sess_0ad9369063a37b6b399688a835d69ed2 -------------e- /var/tmp/cache_c780deda617678faeea8f8a34395ac27 -------------e- /var/tmp/sess_9773332e3c99ee18dca0b05e8f02a41e -------------e- /var/tmp/sess_1d9b02b068ea81a3975599ddc12bcfb1 -------------e- /var/tmp/sess_1ffeff444123e924834dc5e80d07571e -------------e- /var/tmp/sess_aa56725471c84d9a06745c56dc499db7 -------------e- /var/tmp/sess_51e19964d7e1a164c63f4c72fa43475c33debbc0 -------------e- /var/tmp/sess_a83c7a05bb189a465b8813ff9e566aa8f9124079 -------------e- /var/tmp/sess_2f506ba5b77c61107871e8cf80393cdb -------------e- /var/tmp/sess_7bfe1578605b259ec5e4fd2200df4cd0 -------------e- /var/tmp/sess_f6e47011789d8d48d56dd78a398d98d5719414a7 -------------e- /var/tmp/sess_b7c43a90a8b8d8f02b0fffca77796ce5 -------------e- /var/tmp/sess_6c3e7103453ad4daba815bd96a903785 -------------e- /var/tmp/sess_86f32a22507d8410b3f0fc7d71a135d5 -------------e- /var/tmp/sess_aaf72d3e8cfb2f27ffdff61323f97e7553855a05 -------------e- /var/tmp/sess_5de4488e2ee03ac0f99ab9494573ccb1 -------------e- /var/tmp/sess_716d97bba4abdb38704a9e4212f6fddc -------------e- /var/tmp/sess_534908a9510a32eda13a5dc95ac022cc -------------e- /var/tmp/sess_626a58203d93427c79621ea4fec0906d -------------e- /var/tmp/sess_827ca92d10d3797f2c187c41764a7036 -------------e- /var/tmp/sess_6282962d77f7bead20e785fbdb9a3d8f -------------e- /var/tmp/cache_b012c8a729fc54a296a700ed92930a0e -------------e- /var/tmp/sess_631e5ba769773da056108d3fbd143963 -------------e- /var/tmp/cache_30bb7f1333ba5f96a229c91a3385d8b5 -------------e- /var/tmp/sess_93e085706b29c3e4e3593bfe39b1079e -------------e- /var/tmp/sess_abd78bd6c285d681c90de8c617747ab3 -------------e- /var/tmp/sess_e144544ed925569018e6607b05f43f253f75e2aa -------------e- /var/tmp/sess_5d3d036c772847a4508d3e100b173d84 -------------e- /var/tmp/sess_f35243d1f40bd8d9ce08940fafc00d93 -------------e- /var/tmp/sess_761c3ffa811b959638ed0b266741eaa4 -------------e- /var/tmp/mm.sem.sNdxjf -------------e- 
/var/tmp/sess_006d45dbd807291f7bffbd1db3707ed6 -------------e- /var/tmp/cache_2d0162aac9f87c1978ac644923a5e2fe -------------e- /var/tmp/sess_22c534418c380b72d105935b59713dd1 -------------e- /var/tmp/sess_94f72ef408567a15f6287c518e93898e -------------e- /var/tmp/cache_6fe03c83bb87489f3921db1c974dfc0e -------------e- /var/tmp/sess_48bbfa2a2a8793a62c7fd6a389a2763e -------------e- /var/tmp/mm.sem.ERERMV -------------e- /var/tmp/sess_20aba82c03a69b2dc6af66c499c38ee67e27368f -------------e- /var/tmp/sess_f94fe0589a79c934815ef359bcb0a16c7080d937 -------------e- /var/tmp/sess_460390801eb004593b4dee83779f414e -------------e- /var/tmp/spamd-52811-init -------------e- /var/tmp/cache_6427fdb235d59b0b2fbd105bf23d2e87 -------------e- /var/tmp/cache_4ce12d8350d7c0361dc1bf15d552a2d8 -------------e- /var/tmp/sess_039fec2a643340f118b6355e4c836ae8 -------------e- /var/tmp/sess_fa46fa80b26e6cf3d9c7de942d5dbcff -------------e- /var/tmp/cache_664858e614367812148716536e22d030 -------------e- /var/tmp/sess_4c8d4c44fbd828dc17415ce6aa213115 -------------e- /var/tmp/sess_d231a6c0e5dd4d7bacbf9de3d8bb298f -------------e- /var/tmp/sess_a82f8a088a8e37d375f6a9fede4a54d2 -------------e- /var/tmp/sess_604697227ae5359e5783dc9407845338 -------------e- /var/tmp/sess_5b4e623536640abe671b40563d03817d -------------e- /var/tmp/sess_2aba0aff64f3c18f22e0b79d591259e2 -------------e- /var/tmp/sess_bfd52a2d2d80880f8e26ad460739a0494f0d1e9e -------------e- /var/tmp/sess_ba9f3e3a7c7111930d6b801aaa833b46 -------------e- /var/tmp/sess_5cc8c5b620015a465359359a0805fbdd -------------e- /var/tmp/sess_84945c41d604b4653a1bf45d83a1917c -------------e- /var/tmp/sess_5f52569b27430780c07d25cfb8177e5c1ef647f0 -------------e- /var/tmp/sess_45896aef9e77f16be1b3e94b3edb2599 -------------e- /var/tmp/sess_5a67d0ef8f826a2f103b429c8464bdd5f75d6218 -------------e- /var/tmp/sess_1fce98bb32e5b34c79fd5a313de32980 -------------e- /var/tmp/sess_f7ea772ff3fbb1eb2ad8712dd2c49ed8 -------------e- /var/tmp/sess_a9dc16bc5c1eb2768bb2600f0d102fde -------------e- /var/tmp/mm.sem.3zwRTu -------------e- /var/tmp/sess_e2cad140703338a4b8c9254ec6b0a1a2 -------------e- /var/tmp/sess_e7c8e85daf9c5424aecb83e066decf31 -------------e- /var/tmp/sess_800f878fa944370f42e76057e7c033e19520bd41 -------------e- /var/tmp/sess_4fdae64eb18599521ace18679795568b -------------e- /var/tmp/sess_958fb886b97de2e767b059376c4724b5 -------------e- /var/tmp/sess_3c832a31f17744a8bb3c59dde02e561aefbc2e48 -------------e- /var/tmp/sess_6d9d7bf04f34e0d82b101f882196a905 -------------e- /var/tmp/sess_7231c75ae4fad2ca5fbcb6de430a7b13 -------------e- /var/tmp/sess_2eadffa2285def9673ce784395d272d8 -------------e- /var/tmp/cache_2ff353b664d8028df967f807ac18593a -------------e- /var/tmp/sess_4138a267f1f5e3ad93c1d64547c63134ae7c0db3 -------------e- /var/tmp/sess_64cd9fa0d6af8e8041aafffbe3db986a -------------e- /var/tmp/tmpg3ycIG -------------e- /var/tmp/cache_b633ac8283d6de8e39d81160d63fc8cd -------------e- /var/tmp/sess_2cee03cf5eafd3ef55d8efa1b0390436 -------------e- /var/tmp/sess_608066c609e28621f2a29ac04a3a6441 -------------e- /var/tmp/sess_46dfb35cf8266699ba9304e5d8c6869d -------------e- /var/tmp/sess_fb202a0ed54cee8832c5f6e0ca7fc1b3 -------------e- /var/tmp/sess_8fe3c5fd8cdda02855e5f9b5a1ea85a4 -------------e- /var/tmp/sess_941376d5cb51e0ba73f9a27ee259c159 -------------e- /var/tmp/sess_4fa17b1eac1d18341d20d0d8d4991ceb -------------e- /var/tmp/cache_de647c956ca6a1b75744ad194aceaa82 -------------e- /var/tmp/mm.sem.Ugu7Be -------------e- /var/tmp/sess_656e8a50759d5b36b963e7eb85e0bb0d -------------e- 
/var/tmp/sess_983f77b607bbffa1748d6c49557381e9 -------------e- /var/tmp/sess_632860d092e5e374da522ed2f88e83ce -------------e- /var/tmp/sess_030f900b81cc2a4ad095d53ef3ee0791 -------------e- /var/tmp/yum.log -------------e- /var/tmp/cache_810174993c6a2c0efe2edbe4c39a4a81 -------------e- /var/tmp/sess_29e2c781643434e81d189fc41f47fd34 -------------e- /var/tmp/tmpE12ahd -------------e- /var/tmp/sess_935da512fb077e04610266748b3b77f3 - cat /etc/fstab /tmp as: loop,rw,noexec,nosuid,nodev
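    The "Invalid argument" failures on /var/tmp/rpm-tmp.* files, taken together with the fstab entry showing /tmp (and therefore /var/tmp) as a loop device mounted noexec,nosuid,nodev, point at the restricted temp filesystem rather than at yum itself. A minimal sketch of how one might test that theory — the /root/rpmtmp path is only an example, and relaxing the mount options should be temporary:

        # Confirm how /var/tmp is actually mounted
        mount | grep /var/tmp
        df -h /var /var/tmp

        # Option 1: point RPM's scriptlet temp directory at an ordinary location
        mkdir -p /root/rpmtmp
        echo '%_tmppath /root/rpmtmp' >> /root/.rpmmacros

        # Option 2: temporarily relax the mount options while updating
        mount -o remount,exec,suid /var/tmp

        yum clean all && yum update

        # Restore the restrictive options afterwards
        mount -o remount,noexec,nosuid /var/tmp

    If the sqlite3.OperationalError on the yum history database persists after that, checking free space and permissions under /var/lib/yum/history would be a sensible next step.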

    Read the article

  • High load average, high wait, dmesg RAID error messages (Debian NFS server)

    - by John Stumbles
    Debian 6 on HP proliant (2 CPU) with raid (2*1.5T RAID1 + 2*2T RAID1 joined RAID0 to make 3.5T) running mainly nfs & imapd (plus samba for windows share & local www for previewing web pages); with local ubuntu desktop client mounting $HOME, laptops accessing imap & odd files (e.g. videos) via nfs/smb; boxes connected 100baseT or wifi via home router/switch uname -a Linux prole 2.6.32-5-686 #1 SMP Wed Jan 11 12:29:30 UTC 2012 i686 GNU/Linux Setup has been working for months but prone to intermittently going very slow (user experience on desktop mounting $HOME from server, or laptop playing videos) and now consistently so bad I've had to delve into it to try to find what's wrong(!) Server seems OK at low load e.g. (laptop) client (with $HOME on local disk) connecting to server's imapd and nfs mounting RAID to access 1 file: top shows load ~ 0.1 or less, 0 wait but when (desktop) client mounts $HOME and starts user KDE session (all accessing server) then top shows e.g. top - 13:41:17 up 3:43, 3 users, load average: 9.29, 9.55, 8.27 Tasks: 158 total, 1 running, 157 sleeping, 0 stopped, 0 zombie Cpu(s): 0.4%us, 0.4%sy, 0.0%ni, 49.0%id, 49.7%wa, 0.0%hi, 0.5%si, 0.0%st Mem: 903856k total, 851784k used, 52072k free, 171152k buffers Swap: 0k total, 0k used, 0k free, 476896k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3935 root 20 0 2456 1088 784 R 2 0.1 0:00.02 top 1 root 20 0 2028 680 584 S 0 0.1 0:01.14 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.12 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 7 root 20 0 0 0 0 S 0 0.0 0:00.16 ksoftirqd/1 8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root 20 0 0 0 0 S 0 0.0 0:00.42 events/0 10 root 20 0 0 0 0 S 0 0.0 0:02.26 events/1 11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset 12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper 13 root 20 0 0 0 0 S 0 0.0 0:00.00 netns 14 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr 15 root 20 0 0 0 0 S 0 0.0 0:00.00 pm 16 root 20 0 0 0 0 S 0 0.0 0:00.02 sync_supers 17 root 20 0 0 0 0 S 0 0.0 0:00.02 bdi-default 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0 19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1 20 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/0 21 root 20 0 0 0 0 S 0 0.0 0:00.08 kblockd/1 22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify 24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug 25 root 20 0 0 0 0 S 0 0.0 0:00.00 kseriod 28 root 20 0 0 0 0 S 0 0.0 0:04.19 kondemand/0 29 root 20 0 0 0 0 S 0 0.0 0:02.93 kondemand/1 30 root 20 0 0 0 0 S 0 0.0 0:00.00 khungtaskd 31 root 20 0 0 0 0 S 0 0.0 0:00.18 kswapd0 32 root 25 5 0 0 0 S 0 0.0 0:00.00 ksmd 33 root 20 0 0 0 0 S 0 0.0 0:00.00 aio/0 34 root 20 0 0 0 0 S 0 0.0 0:00.00 aio/1 35 root 20 0 0 0 0 S 0 0.0 0:00.00 crypto/0 36 root 20 0 0 0 0 S 0 0.0 0:00.00 crypto/1 203 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd 204 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd 205 root 20 0 0 0 0 S 0 0.0 0:00.00 ata/0 206 root 20 0 0 0 0 S 0 0.0 0:00.00 ata/1 207 root 20 0 0 0 0 S 0 0.0 0:00.14 ata_aux 208 root 20 0 0 0 0 S 0 0.0 0:00.01 scsi_eh_0 dmesg suggests there's a disk problem: .............. 
(previous episode) [13276.966004] raid1:md0: read error corrected (8 sectors at 489900360 on sdc7) [13276.966043] raid1: sdb7: redirecting sector 489898312 to another mirror [13279.569186] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13279.569211] ata4.00: irq_stat 0x40000008 [13279.569230] ata4.00: failed command: READ FPDMA QUEUED [13279.569257] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13279.569262] res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13279.569306] ata4.00: status: { DRDY ERR } [13279.569321] ata4.00: error: { UNC } [13279.575362] ata4.00: configured for UDMA/133 [13279.575388] ata4: EH complete [13283.169224] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13283.169246] ata4.00: irq_stat 0x40000008 [13283.169263] ata4.00: failed command: READ FPDMA QUEUED [13283.169289] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13283.169294] res 41/40:00:07:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13283.169331] ata4.00: status: { DRDY ERR } [13283.169345] ata4.00: error: { UNC } [13283.176071] ata4.00: configured for UDMA/133 [13283.176104] ata4: EH complete [13286.224814] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13286.224837] ata4.00: irq_stat 0x40000008 [13286.224853] ata4.00: failed command: READ FPDMA QUEUED [13286.224879] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13286.224884] res 41/40:00:06:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13286.224922] ata4.00: status: { DRDY ERR } [13286.224935] ata4.00: error: { UNC } [13286.231277] ata4.00: configured for UDMA/133 [13286.231303] ata4: EH complete [13288.802623] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13288.802646] ata4.00: irq_stat 0x40000008 [13288.802662] ata4.00: failed command: READ FPDMA QUEUED [13288.802688] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13288.802693] res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13288.802731] ata4.00: status: { DRDY ERR } [13288.802745] ata4.00: error: { UNC } [13288.808901] ata4.00: configured for UDMA/133 [13288.808927] ata4: EH complete [13291.380430] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13291.380453] ata4.00: irq_stat 0x40000008 [13291.380470] ata4.00: failed command: READ FPDMA QUEUED [13291.380496] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13291.380501] res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13291.380577] ata4.00: status: { DRDY ERR } [13291.380594] ata4.00: error: { UNC } [13291.386517] ata4.00: configured for UDMA/133 [13291.386543] ata4: EH complete [13294.347147] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13294.347169] ata4.00: irq_stat 0x40000008 [13294.347186] ata4.00: failed command: READ FPDMA QUEUED [13294.347211] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13294.347217] res 41/40:00:06:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13294.347254] ata4.00: status: { DRDY ERR } [13294.347268] ata4.00: error: { UNC } [13294.353556] ata4.00: configured for UDMA/133 [13294.353583] sd 3:0:0:0: [sdc] Unhandled sense code [13294.353590] sd 3:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [13294.353599] sd 3:0:0:0: [sdc] Sense Key : Medium Error [current] [descriptor] [13294.353610] Descriptor sense data with sense descriptors (in hex): [13294.353616] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [13294.353635] 23 05 
6a 06 [13294.353644] sd 3:0:0:0: [sdc] Add. Sense: Unrecovered read error - auto reallocate failed [13294.353657] sd 3:0:0:0: [sdc] CDB: Read(10): 28 00 23 05 6a 00 00 00 08 00 [13294.353675] end_request: I/O error, dev sdc, sector 587557382 [13294.353726] ata4: EH complete [13294.366953] raid1:md0: read error corrected (8 sectors at 489900544 on sdc7) [13294.366992] raid1: sdc7: redirecting sector 489898496 to another mirror and they're happening quite frequently, which I guess is liable to account for the performance problem(?) # dmesg | grep mirror [12433.561822] raid1: sdc7: redirecting sector 489900464 to another mirror [12449.428933] raid1: sdb7: redirecting sector 489900504 to another mirror [12464.807016] raid1: sdb7: redirecting sector 489900512 to another mirror [12480.196222] raid1: sdb7: redirecting sector 489900520 to another mirror [12495.585413] raid1: sdb7: redirecting sector 489900528 to another mirror [12510.974424] raid1: sdb7: redirecting sector 489900536 to another mirror [12526.374933] raid1: sdb7: redirecting sector 489900544 to another mirror [12542.619938] raid1: sdc7: redirecting sector 489900608 to another mirror [12559.431328] raid1: sdc7: redirecting sector 489900616 to another mirror [12576.553866] raid1: sdc7: redirecting sector 489900624 to another mirror [12592.065265] raid1: sdc7: redirecting sector 489900632 to another mirror [12607.621121] raid1: sdc7: redirecting sector 489900640 to another mirror [12623.165856] raid1: sdc7: redirecting sector 489900648 to another mirror [12638.699474] raid1: sdc7: redirecting sector 489900656 to another mirror [12655.610881] raid1: sdc7: redirecting sector 489900664 to another mirror [12672.255617] raid1: sdc7: redirecting sector 489900672 to another mirror [12672.288746] raid1: sdc7: redirecting sector 489900680 to another mirror [12672.332376] raid1: sdc7: redirecting sector 489900688 to another mirror [12672.362935] raid1: sdc7: redirecting sector 489900696 to another mirror [12674.201177] raid1: sdc7: redirecting sector 489900704 to another mirror [12698.045050] raid1: sdc7: redirecting sector 489900712 to another mirror [12698.089309] raid1: sdc7: redirecting sector 489900720 to another mirror [12698.111999] raid1: sdc7: redirecting sector 489900728 to another mirror [12698.134006] raid1: sdc7: redirecting sector 489900736 to another mirror [12719.034376] raid1: sdc7: redirecting sector 489900744 to another mirror [12734.545775] raid1: sdc7: redirecting sector 489900752 to another mirror [12734.590014] raid1: sdc7: redirecting sector 489900760 to another mirror [12734.624050] raid1: sdc7: redirecting sector 489900768 to another mirror [12734.647308] raid1: sdc7: redirecting sector 489900776 to another mirror [12734.664657] raid1: sdc7: redirecting sector 489900784 to another mirror [12734.710642] raid1: sdc7: redirecting sector 489900792 to another mirror [12734.721919] raid1: sdc7: redirecting sector 489900800 to another mirror [12734.744732] raid1: sdc7: redirecting sector 489900808 to another mirror [12734.779330] raid1: sdc7: redirecting sector 489900816 to another mirror [12782.604564] raid1: sdb7: redirecting sector 1242934216 to another mirror [12798.264153] raid1: sdc7: redirecting sector 1242935080 to another mirror [13245.832193] raid1: sdb7: redirecting sector 489898296 to another mirror [13261.376929] raid1: sdb7: redirecting sector 489898304 to another mirror [13276.966043] raid1: sdb7: redirecting sector 489898312 to another mirror [13294.366992] raid1: sdc7: redirecting sector 489898496 to another 
mirror although the arrays are still running on all disks - they haven't given up on any yet: # cat /proc/mdstat Personalities : [raid1] [raid0] md10 : active raid0 md0[0] md1[1] 3368770048 blocks super 1.2 512k chunks md1 : active raid1 sde2[2] sdd2[1] 1464087824 blocks super 1.2 [2/2] [UU] md0 : active raid1 sdb7[0] sdc7[2] 1904684920 blocks super 1.2 [2/2] [UU] unused devices: <none> So I think I have some idea what the problem is but I am not a linux sysadmin expert by the remotest stretch of the imagination and would really appreciate some clue checking here with my diagnosis and what do I need to do: obviously I need to source another drive for sdc. (I'm guessing I could buy a larger drive if the price is right: I'm thinking that one day I'll need to grow the size of the array and that would be one less drive to replace with a larger one) then use mdadm to fail out the existing sdc, remove it and fit the new drive fdisk the new drive with the same size partition for the array as the old one had use mdadm to add the new drive into the array that sound OK?
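    The replacement plan at the end is essentially the standard mdadm procedure. A hedged sketch, assuming SMART confirms that sdc is the ailing disk, that the disks use MBR partitioning, and that the new partition ends up as sdc7 again:

        # Confirm which physical disk is failing before pulling anything
        smartctl -a /dev/sdc

        # Fail and remove the bad member from the RAID1 array
        mdadm /dev/md0 --fail /dev/sdc7
        mdadm /dev/md0 --remove /dev/sdc7

        # After fitting the new (equal or larger) disk, copy the partition
        # layout from the surviving mirror (sgdisk would be the GPT equivalent)
        sfdisk -d /dev/sdb | sfdisk /dev/sdc

        # Re-add the partition and watch the resync
        mdadm /dev/md0 --add /dev/sdc7
        watch cat /proc/mdstat

    Buying a larger drive is fine for this step; the extra space simply stays unused until every member of the array has been replaced and the array is grown.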

    Read the article

  • How do I obtain a valid DNS resolution given just an IP address?

    - by Dee Newcum
    Is there a publicly-available DNS server somewhere that will respond to requests like 74_125_225_50.anyip.com and will return 74.125.225.50 for the above request? That is, every single possible IP address can be queried by name instead of number. http://ipq.co/ is close to what I'm looking for, but it requires you to first register an IP address before you can query its DNS name. I want a service that does a straightforward mapping from domain name to IP address. Why do I want to do this? I have a program that we use at work that requires a DNS lookup, but I need to be able to give it bare IP addresses. (long story... it's a server that I don't control, so I can't work around it using /etc/hosts)
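    Public wildcard DNS services that embed the address in the hostname do exist — nip.io and sslip.io are two commonly used ones (whether an external dependency like that is acceptable at work is a separate question, so treat this as a suggestion rather than the answer). A quick check with dig:

        # nip.io accepts the dotted form of the address
        dig +short 74.125.225.50.nip.io

        # sslip.io accepts a dash-separated form
        dig +short 74-125-225-50.sslip.io

        # Both queries should come back with 74.125.225.50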

    Read the article

  • Mint Linux: DVD drive keeps randomly being accessed; unsure how to find the culprit

    - by juicebox
    I have a workstation with mint linux 12. It seems like the DVD drive on the machine keeps randomly "activating". By activating it makes noise, the light turns on, and it seems like it is checking if a disk is in it. At first I thought I was being hacked and someone/something was trying to check if I had media in the DVDRom drive. I ruled that out with netstat and rkhunter. I checked my logs and the only thing I can find that might help point out the problem are these repeated chunks in syslog: Mar 24 17:47:31 rich-MINT kernel: [ 9846.551422] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in Mar 24 17:47:31 rich-MINT kernel: [ 9846.551424] res 51/40:01:00:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error) Mar 24 17:47:31 rich-MINT kernel: [ 9846.551427] ata2.00: status: { DRDY ERR } Mar 24 17:47:31 rich-MINT kernel: [ 9846.551433] ata2.00: hard resetting link Mar 24 17:47:32 rich-MINT kernel: [ 9846.868012] ata2.01: hard resetting link Mar 24 17:47:32 rich-MINT kernel: [ 9847.344054] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310) Mar 24 17:47:32 rich-MINT kernel: [ 9847.344067] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 24 17:47:32 rich-MINT kernel: [ 9847.376118] ata2.00: configured for PIO0 Mar 24 17:47:32 rich-MINT kernel: [ 9847.393047] ata2.01: configured for UDMA/133 Mar 24 17:47:32 rich-MINT kernel: [ 9847.397046] ata2: EH complete and again Mar 24 17:55:28 rich-MINT kernel: [10323.633268] sr 1:0:0:0: ioctl_internal_command return code = 8000002 Mar 24 17:55:28 rich-MINT kernel: [10323.633270] : Sense Key : Aborted Command [current] [descriptor] Mar 24 17:55:28 rich-MINT kernel: [10323.633275] : Add. Sense: No additional sense information Mar 24 17:55:11 rich-MINT kernel: [10306.640009] ata2.00: link is slow to respond, please be patient (ready=0) Mar 24 17:55:16 rich-MINT kernel: [10310.840009] ata2.00: SRST failed (errno=-16) Mar 24 17:55:16 rich-MINT kernel: [10310.840016] ata2.00: hard resetting link Mar 24 17:55:16 rich-MINT kernel: [10311.160013] ata2.01: hard resetting link Mar 24 17:55:16 rich-MINT kernel: [10311.636061] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310) Mar 24 17:55:16 rich-MINT kernel: [10311.636075] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 24 17:55:16 rich-MINT kernel: [10311.668122] ata2.00: configured for PIO0 Mar 24 17:55:16 rich-MINT kernel: [10311.684854] ata2.01: configured for UDMA/133 Mar 24 17:55:17 rich-MINT kernel: [10312.105473] ata2: EH complete (Copied from Pastebin - http://pastebin.com/YNDrnyzH) If any linux masters could take a quick look at these log outputs and help me understand what is going on , much appreciated.
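    Before assuming anything malicious, it is worth finding out which local process keeps polling the drive; on a distribution of this vintage the udisks daemon's removable-media polling is a frequent culprit, though that is only a guess here. A minimal sketch, assuming the drive shows up as /dev/sr0:

        # See whether any process currently has the drive open
        lsof /dev/sr0
        fuser -v /dev/sr0

        # udisks 1.x can report and temporarily suspend its own polling
        udisks --show-info /dev/sr0 | grep -i poll
        udisks --inhibit-polling /dev/sr0   # polling stays off until you press Ctrl+C

    If the drive goes quiet while polling is inhibited, udisks rather than an intruder is the likely cause, and the ATA bus errors in syslog would then point at a drive or cable problem surfaced by that polling.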

    Read the article

  • SSH service will not start on fresh Cygwin 1.7.15 install

    - by Coder6841
    OS: Windows 7 x64 Cygwin: 1.7.15-1 OpenSSH: 6.0p1-1 I'm attempting to install an SSH server on Windows 7. The tutorial that I'm following to do this is here: http://www.howtogeek.com/howto/41560/how-to-get-ssh-command-line-access-to-windows-7-using-cygwin/ The issue is that upon executing the net start sshd command I get the following output:The CYGWIN sshd service is starting. The CYGWIN sshd service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. Here is the full output of the setup: AdminUser@ThisComputer ~ $ ssh-host-config *** Info: Generating /etc/ssh_host_key *** Info: Generating /etc/ssh_host_rsa_key *** Info: Generating /etc/ssh_host_dsa_key *** Info: Generating /etc/ssh_host_ecdsa_key *** Info: Creating default /etc/ssh_config file *** Info: Creating default /etc/sshd_config file *** Info: Privilege separation is set to yes by default since OpenSSH 3.3. *** Info: However, this requires a non-privileged account called 'sshd'. *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep. *** Query: Should privilege separation be used? (yes/no) yes *** Info: Note that creating a new user requires that the current account have *** Info: Administrator privileges. Should this script attempt to create a *** Query: new local account 'sshd'? (yes/no) yes *** Info: Updating /etc/sshd_config file *** Query: Do you want to install sshd as a service? *** Query: (Say "no" if it is already installed as a service) (yes/no) yes *** Query: Enter the value of CYGWIN for the daemon: [] *** Info: On Windows Server 2003, Windows Vista, and above, the *** Info: SYSTEM account cannot setuid to other users -- a capability *** Info: sshd requires. You need to have or to create a privileged *** Info: account. This script will help you do so. *** Info: You appear to be running Windows XP 64bit, Windows 2003 Server, *** Info: or later. On these systems, it's not possible to use the LocalSystem *** Info: account for services that can change the user id without an *** Info: explicit password (such as passwordless logins [e.g. public key *** Info: authentication] via sshd). *** Info: If you want to enable that functionality, it's required to create *** Info: a new account with special privileges (unless a similar account *** Info: already exists). This account is then used to run these special *** Info: servers. *** Info: Note that creating a new user requires that the current account *** Info: have Administrator privileges itself. *** Info: No privileged account could be found. *** Info: This script plans to use 'cyg_server'. *** Info: 'cyg_server' will only be used by registered services. *** Query: Do you want to use a different name? (yes/no) no *** Query: Create new privileged user account 'cyg_server'? (yes/no) yes *** Info: Please enter a password for new user cyg_server. Please be sure *** Info: that this password matches the password rules given on your system. *** Info: Entering no password will exit the configuration. *** Query: Please enter the password: *** Query: Reenter: *** Info: User 'cyg_server' has been created with password '[CENSORED]'. *** Info: If you change the password, please remember also to change the *** Info: password for the installed services which use (or will soon use) *** Info: the 'cyg_server' account. *** Info: Also keep in mind that the user 'cyg_server' needs read permissions *** Info: on all users' relevant files for the services running as 'cyg_server'. 
*** Info: In particular, for the sshd server all users' .ssh/authorized_keys *** Info: files must have appropriate permissions to allow public key *** Info: authentication. (Re-)running ssh-user-config for each user will set *** Info: these permissions correctly. [Similar restrictions apply, for *** Info: instance, for .rhosts files if the rshd server is running, etc]. *** Info: The sshd service has been installed under the 'cyg_server' *** Info: account. To start the service now, call `net start sshd' or *** Info: `cygrunsrv -S sshd'. Otherwise, it will start automatically *** Info: after the next reboot. *** Info: Host configuration finished. Have fun! AdminUser@ThisComputer ~ $ net start sshd The CYGWIN sshd service is starting. The CYGWIN sshd service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. Note that on the line *** Query: Enter the value of CYGWIN for the daemon: [] I haven't entered anything. Tutorials often say to use ntsec or ntsec tty here but those options are removed from the latest version of OpenSSH. I've tried using them anyway and the result is the same. The file /var/log/sshd.log is empty. If I try just running the command /usr/sbin/sshd I get the output /var/empty must be owned by root and not group or world-writable.. The /var/empty directory has the following permissions: drwxr-xr-x+ 1 cyg_server root 0 May 29 15:28 empty. Google searches on this error did not turn up any working fixes. One person seems to have solved it by using the command chown SYSTEM /var/empty but that did not fix it in my case.
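    The "/var/empty must be owned by root" message from running sshd by hand is usually the real problem: under Cygwin, "root" effectively means the privileged account, so the privilege-separation directory and the host keys need tighter ownership and permissions before the service will start. A hedged sketch from an elevated Cygwin terminal — whether cyg_server or SYSTEM is the right owner can vary between installs:

        # Ownership and permissions sshd expects for privilege separation
        chown cyg_server /var/empty     # on some installs: chown SYSTEM /var/empty
        chmod 700 /var/empty

        # Host keys must not be group- or world-readable
        chmod 600 /etc/ssh_host_*key
        chmod 644 /etc/ssh_host_*key.pub

        # Run sshd in the foreground with debug output to see any remaining error
        /usr/sbin/sshd -D -d -e

        # If that starts cleanly, try the service again
        net start sshd        # or: cygrunsrv -S sshd
        cygrunsrv -Q sshd     # query the service state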

    Read the article
