Search Results

Search found 3987 results on 160 pages for 'captain obvious'.


  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint, there should always be one global naming definition for an implementation! Let me discuss some related questions:

    What is the best convention for the customization reference?
    How to name database objects (tables, packages etc.)?
    How to name functional objects like Value Sets, Concurrent Programs, etc.?
    How to best separate customizations from standard objects?

    What is the best convention for the customization reference? The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU = customer abbreviation, CONV = Conversion, object #22) or XXFA_DEPRN_RPT_02 (FA = Fixed Assets, DEPRN = short object group, here depreciation, RPT = Report, 02 = 2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option: XX = customization, FA = main EBS module linked (you may sometimes have more, but FA is the main one), DEPRN_RPT = short name to specify the customization, 02 = a unique number. Important here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd and 4th characters can then be used for the EBS module short name.

    How to name database objects (tables, packages etc.)? I have led different developer teams, and I know that one common way is to take the customization reference and append characters to classify the object (like _V for a view and _T1 for a trigger etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will choose to reuse this nice view, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reuse by other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.)

    How to name Value Sets, Concurrent Programs, etc.? For Value Sets I would go with the same convention as for database objects, starting with XX<Module>…. For Concurrent Programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ: if you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I", it starts with "X". Would you like typing an X when you are looking for an invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". We still know it is custom (because of the XXAP), but the end user will type the key "i" to find it (and will see similar reports also starting with "i"). I hope the general scheme behind this has now become obvious.

    How to best separate customizations from standard objects? I would not have this section here if naming did not play an important role. Unfortunately, we cannot always link a custom application to our own objects, therefore the naming is really important.
    In the file system structure we use our $XXyy_TOP; in JAVA_TOP it is perhaps also an "xx" in front. But in the database itself? Although there are different concepts in place, many implementations still use the standard "apps" approach, meaning custom objects are stored in the apps schema (which should not cause any trouble).

    Final advice: review the naming conventions regularly, once a month. You may have to add more! And publish them!

    To summarize: technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and only one place should be used to maintain it (such as a wiki). If the name is for the end user, rather put the customization identifier at the end; if it is an internal name, start with XX…
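    As a concrete illustration of the database-object convention (a sketch only; the table and columns referenced here are assumptions for illustration, not a tested definition), the reusable view from the discussion above might be created like this:

        -- Hypothetical sketch: a reusable custom view, named for its content
        -- rather than for the CEMLI (XXFA_DEPRN_RPT_02) that ships it.
        CREATE OR REPLACE VIEW xxfa_deprn_periods_v AS
        SELECT ds.book_type_code,     -- columns assumed for illustration
               ds.period_counter,
               ds.deprn_amount
        FROM   fa_deprn_summary ds;   -- standard FA table, used here as an example

    Because the name describes the data (depreciation periods) rather than the report that first needed it, a later CEMLI can reuse the view without hesitation.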

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the last part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle. That is, type A refers to type B and type B also refers to type A.

    But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see.

    Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font.

    One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependents. It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding: you can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you like with it, including saving it to an image file with the color-coding.

    CQL

    NDepend uses a code query language (CQL) to work with your code just as if it were a database. CQL is not to be confused with the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL.
    The CQL Query Explorer allows you to define which queries (rules) are run as part of a report (I immediately unselected rules that I don't want in my results). The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further, other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I":

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio, I expect to be able to scroll and press Tab to select an item in the list (like Intellisense); you have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive; I don't see a really good reason to enforce that. CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.
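    For another taste of the syntax, here is a second rule in the same vein (recalled from memory as a variation on one of the stock rules, so treat the exact metric name as an assumption rather than gospel); it warns on methods that have grown too large:

        WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30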

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate).

    So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state).
    The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
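    To make the earlier point about empirical sizing concrete, here is a back-of-the-envelope sketch of the linear extrapolation described above (all numbers are invented for illustration; a real exercise would use measured values and add headroom for GC, rebalancing, and failover):

        public class SizingEstimate {
            public static void main(String[] args) {
                // Measured at a known load (invented figure): CPU milliseconds per request.
                double cpuMsPerRequest = 0.8;
                // Target load to extrapolate to.
                double targetRequestsPerSec = 50_000;
                // Linear model: CPU-seconds consumed per wall-clock second = cores needed.
                double coresNeeded = cpuMsPerRequest / 1000.0 * targetRequestsPerSec;
                System.out.printf("~%.0f cores, before headroom for GC and rebalancing%n",
                        coresNeeded);
            }
        }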

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading.

    According to the recent order (File No. 3-15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used some software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', at which point the parent order had been completed.

    Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed.

    In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused, and someone made the fateful decision to reuse it and replace the old 'Power Peg' code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote…

    "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

    "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading.

    Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster.

    The report reads like a Greek tragedy. All the way along, one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate.

    All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • Managing text-maps in a 2D array to be painted on HTML5 Canvas

    - by weka
    So, I'm making an HTML5 RPG just for fun. The map is a <canvas> (512px width, 352px height | 16 tiles across, 11 tiles top to bottom). I want to know if there's a more efficient way to paint the <canvas>. Here's how I have it right now.

    How tiles are loaded and painted on the map

    The map is painted with 32x32 tiles using Image() objects. The image files are loaded through a simple for loop and put into an array called tiles[], to be PAINTED using drawImage(). First, we load the tiles... and here's how it's being done:

        // SET UP & DRAW THE MAP TILES
        tiles = [];
        var loadedImagesCount = 0;
        for (x = 0; x <= NUM_OF_TILES; x++) {
            var imageObj = new Image(); // new instance for each image
            imageObj.src = "js/tiles/t" + x + ".png";
            imageObj.onload = function () {
                console.log("Added tile ... " + loadedImagesCount);
                loadedImagesCount++;
                if (loadedImagesCount == NUM_OF_TILES) {
                    // Once all tiles are loaded ...
                    // ... we paint the map
                    for (y = 0; y <= 15; y++) {
                        for (x = 0; x <= 10; x++) {
                            theX = x * 32;
                            theY = y * 32;
                            context.drawImage(tiles[5], theY, theX, 32, 32);
                        }
                    }
                }
            };
            tiles.push(imageObj);
        }

    Naturally, when a player starts a game it loads the map they last left off. But for here, it's an all-grass map. Right now, the maps use 2D arrays. Here's an example map:

        [[4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 1, 1, 1, 1],
         [1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 13, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 13, 13, 11, 11, 11, 13, 13, 13, 13, 13, 13, 13, 1],
         [13, 13, 13, 1, 1, 1, 1, 1, 1, 1, 13, 13, 13, 13, 13, 1],
         [1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1, 1, 1]];

    I get different maps using a simple if structure. Once the 2D array above is returned, the corresponding number in each array is painted according to the Image() stored inside tiles[]. Then drawImage() paints at the correct x-y coordinate, multiplying by 32.

    How multiple map switching occurs

    With my game, maps have five things to keep track of: currentID, leftID, rightID, upID, and downID.

    currentID: the ID of the map you are on.
    leftID: which map ID to load when you exit on the left of the current map.
    rightID: which map ID to load when you exit on the right of the current map.
    downID: which map ID to load when you exit on the bottom of the current map.
    upID: which map ID to load when you exit on the top of the current map.

    Something to note: if leftID, rightID, upID, or downID is NOT specified, it is 0, which means you cannot leave that side of the map. It is merely an invisible blockade. So, once a person exits a side of the map, depending on where they exited... for example, if they exited on the bottom, downID will be the number of the map to load and thus be painted on the canvas. Here's a representational .GIF to help you better visualize:

    As you can see, sooner or later, with many maps I will be dealing with many IDs. And that can possibly get a little confusing and hectic. The obvious pros are that it loads 176 tiles at a time, refreshes a small 512x352 canvas, and handles one map at a time. The con is that the map IDs, when dealing with many maps, may get confusing at times.
    My question

    Is this an efficient way to store maps (given the usage of tiles), or is there a better way to handle maps? I was thinking along the lines of a giant map. The map size is big and it's all one 2D array. The viewport, however, is still 512x352 pixels. Here's another .gif I made (for this question) to help visualize (and see the rough sketch after my sign-off):

    Sorry if you cannot understand my English. Please ask about anything you have trouble understanding. Hopefully, I made it clear. Thanks.
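    To be concrete about what I mean by a giant map, here's a rough sketch (names like WORLD, the camera variables, and the constants are made up for illustration; this isn't code from my project yet): one large 2D array, with only the visible 16x11 window drawn each frame. "Changing maps" then becomes nothing more than moving the camera, so the per-edge ID bookkeeping disappears.

        // Minimal sketch: draw only the visible window of one large tile map.
        var TILE_SIZE = 32, VIEW_COLS = 16, VIEW_ROWS = 11;

        function drawViewport(context, tiles, WORLD, cameraX, cameraY) {
            // cameraX/cameraY: world pixel coordinates of the viewport's top-left corner
            var startCol = Math.floor(cameraX / TILE_SIZE);
            var startRow = Math.floor(cameraY / TILE_SIZE);
            // draw one extra row/column so partially visible tiles are covered
            for (var row = 0; row <= VIEW_ROWS; row++) {
                for (var col = 0; col <= VIEW_COLS; col++) {
                    var worldRow = WORLD[startRow + row];
                    var tileId = worldRow ? worldRow[startCol + col] : undefined;
                    if (tileId === undefined) continue; // outside the world: draw nothing
                    context.drawImage(tiles[tileId],
                        col * TILE_SIZE - (cameraX % TILE_SIZE),
                        row * TILE_SIZE - (cameraY % TILE_SIZE),
                        TILE_SIZE, TILE_SIZE);
                }
            }
        }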

    Read the article

  • Trouble with AABB collision response and physics

    - by WCM
    I have been racking my brain trying to figure out a problem I am having with physics and basic AABB collision response. I am fairly close, as the physics are mostly right. Gravity feels good and movement is solid. The issue I am running into is that when I land on the test block in my project, I can jump off of it most of the time. If I repeatedly jump in place, I will eventually get stuck one or two pixels below the surface of the test block. If I try to jump, I can become free of the other block, but it will happen again a few jumps later. I feel like I am missing something really obvious with this.

    I have two functions that support the detection, and a function to return a vector for the overlap of the two rectangle bounding boxes. I have a single update method that is processing the physics and collision for the entity. I feel like I am missing something very simple, like an ordering of the physics vs. collision response handling. Any thoughts or help would be appreciated. I apologize for the format of the code, 'tis prototype code mostly.

    The collision detection function:

        public static bool Collides(Rectangle source, Rectangle target)
        {
            if (source.Right < target.Left || source.Bottom < target.Top
                || source.Left > target.Right || source.Top > target.Bottom)
            {
                return false;
            }
            return true;
        }

    The overlap function:

        public static Vector2 GetMinimumTranslation(Rectangle source, Rectangle target)
        {
            Vector2 mtd = new Vector2();

            Vector2 amin = source.Min();
            Vector2 amax = source.Max();
            Vector2 bmin = target.Min();
            Vector2 bmax = target.Max();

            float left = (bmin.X - amax.X);
            float right = (bmax.X - amin.X);
            float top = (bmin.Y - amax.Y);
            float bottom = (bmax.Y - amin.Y);

            if (left > 0 || right < 0) return Vector2.Zero;
            if (top > 0 || bottom < 0) return Vector2.Zero;

            if (Math.Abs(left) < right)
                mtd.X = left;
            else
                mtd.X = right;

            if (Math.Abs(top) < bottom)
                mtd.Y = top;
            else
                mtd.Y = bottom;

            // 0 the axis with the largest mtd value.
            if (Math.Abs(mtd.X) < Math.Abs(mtd.Y))
                mtd.Y = 0;
            else
                mtd.X = 0;

            return mtd;
        }

    The update routine (gravity = 0.001f, jumpHeight = 0.35f, moveAmount = 0.15f):

        public void Update(GameTime gameTime)
        {
            Acceleration.Y = gravity;
            Position += new Vector2(
                (float)(movement * moveAmount * gameTime.ElapsedGameTime.TotalMilliseconds),
                (float)(Velocity.Y * gameTime.ElapsedGameTime.TotalMilliseconds));
            Velocity.Y += Acceleration.Y;

            Vector2 previousPosition = new Vector2((int)Position.X, (int)Position.Y);

            KeyboardState keyboard = Keyboard.GetState();
            movement = 0;
            if (keyboard.IsKeyDown(Keys.Left))
            {
                movement -= 1;
            }
            if (keyboard.IsKeyDown(Keys.Right))
            {
                movement += 1;
            }

            if (Position.Y + 16 > GameClass.Instance.GraphicsDevice.Viewport.Height)
            {
                Velocity.Y = 0;
                Position = new Vector2(Position.X, GameClass.Instance.GraphicsDevice.Viewport.Height - 16);
                IsOnSurface = true;
            }

            if (Collision.Collides(BoundingBox, GameClass.Instance.block.BoundingBox))
            {
                Vector2 mtd = Collision.GetMinimumTranslation(BoundingBox, GameClass.Instance.block.BoundingBox);
                Position += mtd;
                Velocity.Y = 0;
                IsOnSurface = true;
            }

            if (keyboard.IsKeyDown(Keys.Space) && !previousKeyboard.IsKeyDown(Keys.Space))
            {
                if (IsOnSurface)
                {
                    Velocity.Y = -jumpHeight;
                    IsOnSurface = false;
                }
            }

            previousKeyboard = keyboard;
        }

    This is also a full download of the project: https://www.box.com/s/3rkdtbso3xgfgc2asawy

    P.S. I know that I could do this with the XNA Platformer Starter Kit algo, but it has some deep flaws that I am going to try to live without. I'd rather go the route of collision response via an overlap function.
Thanks for any and all insight!
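    In case it helps to see the kind of reordering I've been experimenting with on paper (a sketch only, not verified to fix the issue): integrate, move, then resolve the overlap once per frame, and only treat an upward push as landing.

        public void Update(float elapsedMs)
        {
            // 1. integrate acceleration into velocity
            Velocity.Y += Acceleration.Y;
            // 2. move by the integrated velocity
            Position += Velocity * elapsedMs;

            // 3. resolve any overlap after movement
            Vector2 mtd = Collision.GetMinimumTranslation(BoundingBox, GameClass.Instance.block.BoundingBox);
            if (mtd != Vector2.Zero)
            {
                Position += mtd;
                if (mtd.Y < 0)   // pushed upward (y points down): we landed on top
                {
                    Velocity.Y = 0;
                    IsOnSurface = true;
                }
            }
        }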

    Read the article

  • Separating text strings into a table of individual words in SQL via XML.

    - by Phil Factor
    Nearly nine years ago, Mike Rorke of the SQL Server 2005 XML team blogged 'Querying Over Constructed XML Using Sub-queries'. I remember reading it at the time without being able to think of a use for what he was demonstrating. Just a few weeks ago, whilst preparing my article on searching strings, I got out my trusty function for splitting strings into words, and something reminded me of the old blog. I'd been trying to think of a way of using XML to split strings reliably into words. The routine I devised turned out to be slightly slower than the iterative word chop I've always used in the past, so I didn't publish it. It was then I suddenly remembered the old routine. Here is my version of it. I've unwrapped it from its obvious home in a function or procedure just so it is easy to appreciate.

    What it does is to chop a text string into individual words using XQuery and the good old nodes() method. I've benchmarked it and it is quicker than any of the SQL ways of doing it that I know about. Obviously, you can't use the trick I described here to do it, because it is awkward to use REPLACE() on 1…n characters of whitespace. I'll carry on using my iterative function, since it is able to tell me the location of each word as a character offset from the start, and also because this method leaves punctuation in (removing it takes time!). However, I can see other uses for this in passing lists as input or output parameters, or as return values.

        if exists (Select * from sys.xml_schema_collections where name like 'WordList')
          drop XML SCHEMA COLLECTION WordList
        go
        create xml schema collection WordList as '
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="words">
               <xs:simpleType>
                      <xs:list itemType="xs:string" />
               </xs:simpleType>
        </xs:element>
        </xs:schema>'
        go

        DECLARE @string VARCHAR(MAX) -- we'll get some sample data from the great Ogden Nash
        Select @String='This is a song to celebrate banks,
        Because they are full of money and you go into them and all you hear is clinks and clanks,
        Or maybe a sound like the wind in the trees on the hills,
        Which is the rustling of the thousand dollar bills.
        Most bankers dwell in marble halls,
        Which they get to dwell in because they encourage deposits and discourage withdrawals,
        And particularly because they all observe one rule which woe betides the banker who fails to heed it,
        Which is you must never lend any money to anybody unless they don''t need it.
        I know you, you cautious conservative banks!
        If people are worried about their rent it is your duty to deny them the loan of one nickel, yes, even one copper engraving of the martyred son of the late Nancy Hanks;
        Yes, if they request fifty dollars to pay for a baby you must look at them like Tarzan looking at an uppity ape in the jungle,
        And tell them what do they think a bank is, anyhow, they had better go get the money from their wife''s aunt or ungle.
        But suppose people come in and they have a million and they want another million to pile on top of it,
        Why, you brim with the milk of human kindness and you urge them to accept every drop of it,
        And you lend them the million so then they have two million and this gives them the idea that they would be better off with four,
        So they already have two million as security so you have no hesitation in lending them two more,
        And all the vice-presidents nod their heads in rhythm,
        And the only question asked is do the borrowers want the money sent or do they want to take it withm.
        Because I think they deserve our appreciation and thanks, the jackasses who go around saying that health and happiness are everything and money isn''t essential,
        Because as soon as they have to borrow some unimportant money to maintain their health and happiness they starve to death so they can''t go around any more sneering at good old money, which is nothing short of providential.
        '

        -- we now turn it into XML
        declare @xml_data xml(WordList)
        set @xml_data = '<words>' + replace(@string, '&', '&amp;') + '</words>'

        select T.ref.value('.', 'nvarchar(100)')
        from (Select @xml_data.query('
                             for $i in data(/words) return
                             element li { $i }
                      ')) A(list)
        cross apply A.List.nodes('/li') T(ref)

    …which gives (truncated, of course)…
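    If you do want to give it a home in a function, a minimal sketch might look like this (untested here, and the cast to the typed xml is the part to verify on your own version; it assumes the WordList schema collection above already exists):

        -- Sketch: the same trick wrapped in an inline table-valued function.
        CREATE FUNCTION dbo.WordsOf (@string VARCHAR(MAX))
        RETURNS TABLE
        AS RETURN
        (
          SELECT T.ref.value('.', 'nvarchar(100)') AS word
          FROM (SELECT CAST('<words>' + REPLACE(@string, '&', '&amp;') + '</words>'
                            AS xml(WordList)).query('
                              for $i in data(/words) return element li { $i }
                            ')) A(list)
          CROSS APPLY A.list.nodes('/li') T(ref)
        );
        GO
        -- e.g. SELECT word FROM dbo.WordsOf('the quick brown fox');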

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
    Not every time will you actually find the problem, and it is part of the process to occasionally admit that the problem is random and everything works fine now. Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: you can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary 10 = decimal 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we've picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We'll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn't.

    If you've got any questions about this (or what we're working on right now), feel free to ask me in the comments below. I've had a few people express an interest in the process we're going through, and I'm more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here's a quick flashback to bring you all up to speed.

    A Brief Retrospective

    As you may already know, we're creating a new publishing asset specifically focused on providing great content for web developers. We don't yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is usefully different. For my part, I'm seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing that just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that'll get better as we run more trials.

    Of course, there are a few things we've been pondering at this early conceptual stage:

    Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we're familiar with) and branch out later? There are challenges with either approach.

    What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it.

    Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and on how we manage the different types of content we (will) have.

    Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to?

    What kind of users do we want to provide for? This actually deserves a little bit of unpacking…

    Why are you here?

    We currently divide readers into (broadly) these categories:

    Category 1: I know nothing about X, and I'd like to learn about it.
    Category 2: I know something about X, but I'd like to learn how to do something specific with it.
    Category 3: Ah man, I have a problem with X, and I need to fix it now.

    Now that I think about it, I might also include a 4th class of reader:

    Category 4: I'm looking for something interesting to engage my brain.

    These are clearly task-based categorizations, and depending on which task you're performing when you arrive here, you're going to need different types of content, or will have specific discovery needs.
    One of the questions that's at the back of my mind whenever I consider a new idea is "How many of the categories will this satisfy?" As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it's not necessary to satisfy every category's need to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately.

    < / Flashback >

    We don't have clean answers to most of these considerations – they're things we're aware of, and each idea we look at is going to be best suited to a different mix of the options I've described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I've just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or if you just have an idea you'd like to share, I'd love to hear from you.

    Read the article

  • How to Hashtag (Without Being #Annoying)

    - by Mike Stiles
    The right tool in the wrong hands can be a dangerous thing. Giving a chimpanzee a chain saw would not be a pretty picture. And putting Twitter hashtags in the hands of social marketers who were never really sure how to use them can be equally unattractive. Boiled down, hashtags are for search and organization of tweets. A notch up from that, they can also be used as part of a marketing strategy.

    In terms of search, if you're in the organic apple business, you want anyone who searches "organic" on Twitter to see your posts about your apples. It's keyword tactics not unlike web site keyword search tactics. So get a clear idea of what keywords are relevant for your tweet. It's reasonable to include #organic in your tweet. Is it fatal if you don't hashtag the word? It depends on the person searching. If they search "organic," your tweet's going to come up even if you didn't put the hashtag in front of it. If the searcher enters "#organic," your tweet needs the hashtag. Err on the side of caution and hashtag it so it comes up no matter how the searcher enters it.

    You'll also want to hashtag it for the second big reason people hashtag: organization. You can follow a hashtag. So can the rest of the Twitterverse. If you're that into organic munchies, you can set up a stream populated only with tweets hashtagged #organic. If you've established a hashtag for your brand, like #nobugsprayapples, you (and everyone else) can watch what people are tweeting about your company.

    So what kind of hashtags should you include? They should be directly related to the core message of your tweet. Ancillary or very loosely-related hashtags = annoying. Hashtagging your brand makes sense. Hashtagging your core area of interest makes sense. Creating a specific event or campaign hashtag you want others to include and spread makes sense (the burden is on you to promote it and get it going). Hashtagging nearly every word in the tweet is highly annoying. Far and away, the majority of hashtagged words in such tweets have no relevance, are not terms that would be searched, and are not terms needed for categorization. It looks desperate and spammy. Two is fine. One is better. And it is possible to tweet with --gasp-- no hashtags!

    Make your hashtags as short as you can. In fact, if your brand's name really is #nobugsprayapples, you're burning up valuable, limited characters and risking the inability of others to retweet with added comments. Also try to narrow your topic hashtag down. You'll find a lot of relevant users with #organic, but a lot of totally uninterested users with #food.

    Just as you can join online forums and gain credibility and a reputation by contributing regularly to that forum, you can follow hashtagged topics and gain the same kind of credibility in your area of expertise. Don't just parachute in for the occasional marketing message. And if you're constantly retweeting one particular person, stop it. It's kissing up and it's obvious.

    Which brings us to the king of hashtag annoyances: "hashjacking." This is when you see what terms are hot and include them in your marketing tweet as a hashtag, even though they're unrelated to your content. Justify it all you want, but #justinbieber has nothing to do with your organic apples. Equally annoying: piggybacking on a popular event's hashtag to tweet something not connected to the event. You're only fostering ill will and mistrust toward your account from the people you've tricked into seeing your tweet.
    Lastly, don't @ mention people just to make sure they see your tweet. If the tweet's not for them or about them, it's spammy. What I haven't covered is use of the hashtag for comedy's sake. You'll see this a lot, and it's a matter of personal taste. No one will search these hashtagged terms or need to categorize them; they're just there for self-expression and laughs. Twitter is, after all, supposed to be fun. What are some of your biggest Twitter pet peeves? #blogsovernow

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here.

    There were a few challenges that arose during design and implementation:

    Naive algorithms had an unreasonable upper bound of computational cost.
    There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

    A custom PAS may need to solve several problems simultaneously, such as:

    Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, ensuring that each partition is collocated with its corresponding message queue)
    Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    Achieving the above goals while ensuring that partition movement is minimized

    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members each (respectively). This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway from this is that algorithms will tend to either have exorbitant worst-case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms.

    In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
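    To see why the exhaustive-scoring approach collapses, consider this toy sketch (invented types and a trivial balance metric, not the actual PAS interface): it scores every assignment of partitions to members, which is only tractable at toy sizes.

        import java.util.Arrays;

        public class BruteForceAssignment {
            // Invented balance metric: spread between the most and least loaded member.
            static int score(int[] partitionToMember, int memberCount) {
                int[] load = new int[memberCount];
                for (int m : partitionToMember) load[m]++;
                int max = Integer.MIN_VALUE, min = Integer.MAX_VALUE;
                for (int l : load) { max = Math.max(max, l); min = Math.min(min, l); }
                return max - min;
            }

            // Exhaustive search over all members^partitions assignments; a permutation
            // search is N! -- either way the cost explodes beyond trivial cluster sizes.
            static int[] best(int partitions, int members) {
                int[] current = new int[partitions];
                int[] best = null;
                int bestScore = Integer.MAX_VALUE;
                while (true) {
                    int s = score(current, members);
                    if (s < bestScore) { bestScore = s; best = current.clone(); }
                    int i = 0; // odometer-style increment to the next candidate
                    while (i < partitions && ++current[i] == members) current[i++] = 0;
                    if (i == partitions) return best;
                }
            }

            public static void main(String[] args) {
                // Fine for 4 partitions and 2 members; hopeless for realistic counts.
                System.out.println(Arrays.toString(best(4, 2)));
            }
        }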

    Read the article

  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question will feature a fair chunk of Java code, but it can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea

    Make unit testing less cumbersome by adding in some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better than first creating code and then immediately thereafter the tests. This may even be adapted to fit into TDD, but that is not my current goal.

    To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests.

        public class PutMonsterOnFieldAction implements PlayerAction {
            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests:

    Constructor test with valid input
    Constructor test with invalid inputs
    isActionAllowed test with valid input
    isActionAllowed test with invalid inputs
    performAction test with valid input
    performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to ensure a number of conditions and check whether it really returns false. This can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch.

    The implementation by example

    The user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that condition holds. (I haven't added the relevant code, as that may clutter the post.) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the capacity private int of Hand needs to be at least 1. It searches for ways to set it to 1. Fortunately, it finds a constructor that takes the capacity as an argument. It uses 1 for this.
    Some more work needs to be done to successfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now, to invalidate the test, it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch). A sketch of what such a generated test might look like follows at the end of this post.

    What does the tool need to work?

    In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally, this could be done through the Javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct test case. Hence it will need to list all possible ways to change it, without using reflection, obviously.

    What this tool will not replace is the need to create tailored unit tests that test all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints.

    My questions:

    Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
    Would such a tool be useful? Is it even useful to automatically generate these test cases at all?
    Could it be extended to do even more useful things?
    Does, by chance, such a project already exist, and would I be reinventing the wheel?

    If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it, depending on time.

    For people searching for more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
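    To make this concrete, here is a sketch of what the generated test for that branch might look like (the Player, Hand and Field constructors shown are assumptions, since I haven't included that code here):

        import static org.junit.Assert.assertFalse;

        import org.junit.Test;

        public class PutMonsterOnFieldActionGeneratedTest {
            @Test
            public void isActionAllowedFalseWhenHandCardIndexExceedsCapacity() {
                // Assumed constructors: the tool found a Hand of capacity 1 and a
                // Field of capacity 1, then chose handCardIndex = 1 to invalidate
                // the branch "if (handCardIndex >= hand.getCapacity())".
                Player player = new Player(new Hand(1), new Field(1));
                PlayerAction action = new PutMonsterOnFieldAction(1, 0);
                assertFalse(action.isActionAllowed(player));
            }
        }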

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For actual questions, skip to the lists down below. I understand that this is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application (I don't know if application is the correct word for it, but I will proceed w/ that for now) that one day might need to be everything mentioned in the title. I am bound by nothing. That means that every language, OS and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that wouldn't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you, StackOverflow, to give me directions. I know you all must be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the 12-factor app, though; if you have any similar guidelines to suggest, then please go ahead. For the sake of keeping your answers as open as possible, I'm not gonna provide my experience regarding anything written in this question.

^ Skippers, start here

First off, the weights of the requirements are probably something like this (on a scale of 10):
- Security - 10
- Speed - 5
- Reliability (concurrency) - 7.5
- Scalability - 10

Speed and concurrency are not a top priority, in the sense that the program can be CPU-intensive, and therefore slow, and only accept a not-that-high number of concurrent users, but both of these factors must be improvable by scaling the system.

Anyway, here are my questions:

1. How many layers should the application have, so that it would be future-proof and could best fulfill the aforementioned requirements? For now, what I have in mind is the most common version:
- A completely separated front end, which might be a web page or an MMI application, or even both.
- Some middleware handling communication between the front and the back end. This is probably a server that communicates w/ the front end via HTTP. How the communication w/ the back end should be handled probably depends on the back end.
- The back end: something that handles data through resources like a DB and does various computations w/ the data. This, as the highest-priority part of the software, must be easy to spread across multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back-end processes takes the request, chops it up into smaller parts, and puts these parts back onto the same queue as the initial request, after which the parts are handled by other back-end processes. Something *map-reduce*y, so to say (see the sketch after this list).

2. What frameworks, languages and so on should these layers use? The technologies used here are not that important at this moment, so you can ignore this part for now. I've been pointed to node.js for this part. Do you guys know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? I actually have no good idea of what to use here; there are too many options out there, so please direct me. This part (and the second one too, I think) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers (or one, for starters) hosting the back end are going to be virtual machines.
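Here is a minimal, single-process sketch of that queue idea, just to pin down the mechanics: workers pull requests from a shared queue, split composite requests into parts, and push the parts back onto the same queue. Everything here (the Request interface, the method names) is an illustrative assumption, not a recommendation for any concrete middleware product.

    import java.util.List;
    import java.util.concurrent.BlockingQueue;

    interface Request {
        boolean isComposite();     // can this request be chopped into smaller parts?
        List<Request> split();     // the smaller parts
        void process();            // leaf work: the actual computation
    }

    class BackEndWorker implements Runnable {

        private final BlockingQueue<Request> queue;

        BackEndWorker(BlockingQueue<Request> queue) {
            this.queue = queue;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    Request request = queue.take();            // blocks until work arrives
                    if (request.isComposite()) {
                        for (Request part : request.split()) {
                            queue.put(part);                   // re-enqueue parts for other workers
                        }
                    } else {
                        request.process();
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();            // exit cleanly on shutdown
            }
        }
    }

In a real deployment the in-memory queue would be replaced by a network-visible one (a message broker), so the workers can live on different machines.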
Please do give suggestions on any part of the question that you feel you have comprehensive knowledge and/or experience of. And also, point out if you feel that any part of the current set-up means an instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer on how to achieve my goals, because there certainly isn't one, as I haven't provided you w/ all the required information. I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something that I have to get done quickly, to sell and let it be rewritten by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world and I really just want to learn by building something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Anyway, thanks in advance to anyone who takes the time to help me out here. PS. When I do come up w/ a good architecture/design for this project, I will certainly make it an open project and keep you guys up to date w/ its development, including what you could have told me earlier and so on. For obvious reasons the very same question got closed on SO, but could you guys still help me?

    Read the article

  • sudo in Debian squeeze inside linux-vserver always wants password

    - by mark
    Ever since I upgraded all my linux-vserver Debian guests from Lenny to Squeeze, I've had the problem that whenever I want to use sudo it asks me for my password. Every time. I've configured sudo to have a timeout of 30 minutes: Defaults timestamp_timeout=30. This was configured back when the system was still running Lenny (note: as suggested by EightBitTony, I've also tried without this setting - no change). I'm having a hard time figuring out what the problem is, since I think my configuration is right. I thought it might be a problem with the file used to record the timestamp, maybe a permission issue, but had no luck finding any hard evidence. I've compared the contents of /var/lib/sudo/ between a working and a non-working system but couldn't spot any difference. The version of sudo used in both environments is 1.7.4p4-2.squeeze.3.

My non-working system(s):

    find /var/lib/sudo/ -ls
    17319289 4 drwx------ 4 root root 4096 Jan 1 1985 /var/lib/sudo/
    17319286 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark
    17319312 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6
    17319361 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9
    17319490 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10
    17319326 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/4
    17319491 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/2

A working system:

    find /var/lib/sudo -ls
    2598921 4 drwx------ 5 root root 4096 Jan 1 1985 /var/lib/sudo
    1999522 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark
    2000781 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/8
    1998998 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/17
    1999459 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/26
    1998930 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/24
    2000771 4 -rw------- 1 root mark 40 Jun 25 11:39 /var/lib/sudo/mark/4
    2000773 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/5
    1999223 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/0
    1998908 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/14
    2000769 4 -rw------- 1 root mark 40 Jul 9 13:30 /var/lib/sudo/mark/2
    2000770 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/3
    2000782 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9
    2000778 4 -rw------- 1 root mark 40 Jul 8 00:11 /var/lib/sudo/mark/7
    1998892 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/19
    1999264 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/23
    2000789 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/12
    1999093 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/25
    1998880 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/18
    1998853 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/20
    2000790 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/15
    1998878 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/16
    1998874 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/13
    2000774 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6
    2000786 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/11
    1998893 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/22
    2000783 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10
    1998949 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/1

Despite the obvious (some up-to-date timestamps on the working system) I don't see anything wrong here, so it could just as well be the wrong track.
Here's my current /etc/sudoers:

    # /etc/sudoers
    #
    # This file MUST be edited with the 'visudo' command as root.
    #
    # See the man page for details on how to write a sudoers file.
    #
    Defaults env_reset

    # Host alias specification

    # User alias specification
    User_Alias FULLADMIN = user1, user2, user3

    # Cmnd alias specification

    # User privilege specification
    root ALL=(ALL) ALL
    FULLADMIN ALL = (ALL) ALL

    # Allow members of group sudo to execute any command
    # (Note that later entries override this, so you might need to move
    # it further down)
    %sudo ALL=(ALL) ALL
    #
    #includedir /etc/sudoers.d
    #Defaults always_set_home,timestamp_timeout=30
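As a quick sanity check (a sketch using the standard sudo tooling; note that in the file above the timestamp_timeout line only appears in commented-out form at the very end), the effective settings can be inspected with:

    sudo -l | grep -i defaults     # show the Defaults entries that actually match the invoking user
    visudo -c                      # parse-check /etc/sudoers and any included files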

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this seven-year-old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba

As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch:

1. "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there were reports that this solution is prohibitively expensive, with labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn't find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the cost of the repair can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash).

2. Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-S1065 (as well as many other models of similar age). -pwcrack

3. Disconnect the laptop battery for an extended period of time. Doesn't work: the laptop sat in a closet for several years without the battery connected, and I forgot about the whole thing for a while. The poor thing.

4. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks -various sources for various Satellite models:
- Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap
- Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020
- Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar
- Satellite A215: "Short the B500 solder pads on the system board." -fixya

Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper.
Here are pictures of the board. Where is the jumper (or solder pads) to short-circuit and wipe the CMOS on this board? Possibly related questions: Remove Toshiba laptop BIOS password? Password Problem Toshiba Satellite.

    Read the article

  • Setting up apache to view https pages

    - by zac
    I am trying to set up a site using VMware Workstation, Ubuntu 11.10, and Apache2. The site works fine, but the https pages are not showing up. For example, if I try to go to https://www.mysite.com/checkout I just see the message:

    Not Found
    The requested URL /checkout/ was not found on this server.

I don't really know what I am doing and have tried a lot of things to get the SSL certificates in there right. A few things I have: in my httpd.conf I just have:

    ServerName localhost

In my ports.conf I have:

    NameVirtualHost *:80
    Listen 80

    <IfModule mod_ssl.c>
        # If you add NameVirtualHost *:443 here, you will also have to change
        # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
        # to <VirtualHost *:443>
        # Server Name Indication for SSL named virtual hosts is currently not
        # supported by MSIE on Windows XP.
        Listen 443 http
    </IfModule>

    <IfModule mod_gnutls.c>
        Listen 443 http
    </IfModule>

In /etc/apache2/sites-available/default-ssl:

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    .... truncated

In sites-available/default I have:

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www
        #DocumentRoot /home/magento/site/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
        #<Directory /home/magento/site/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
    </VirtualHost>

    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/apache2/ssl/server.crt
        SSLCertificateKeyFile /etc/apache2/ssl/server.key
        ServerAdmin webmaster@localhost
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
        #<Directory /home/magento/site/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    </VirtualHost>

I also have a file in sites-available set up for my site URL, www.mysite.com, so in /etc/apache2/sites-available/mysite.com:

    <VirtualHost *:80>
        ServerName mysite.com
        DocumentRoot /home/magento/mysite.com
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /home/magento/mysite.com/ >
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ErrorLog /home/magento/logs/apache.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
    </VirtualHost>

    <VirtualHost *:443>
        ServerName mysite.com
        DocumentRoot /home/magento/mysite.com
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /home/magento/mysite.com/ >
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ErrorLog /home/magento/logs/apache.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
    </VirtualHost>

Thanks for any help getting this set up! As is probably obvious from this post, I am pretty lost at this point.
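One detail worth noting, purely as a sketch of what the comment in ports.conf above is hinting at: with several <VirtualHost *:443> blocks but no NameVirtualHost *:443, Apache 2.2 serves every https request from the first 443 vhost it loads (here default-ssl, whose DocumentRoot is /var/www). Name-based SSL vhosts would look roughly like this - paths and names taken from the poster's own config, not verified:

    # ports.conf
    <IfModule mod_ssl.c>
        NameVirtualHost *:443
        Listen 443
    </IfModule>

    # sites-available/mysite.com
    <VirtualHost *:443>
        ServerName mysite.com
        SSLEngine on
        SSLCertificateFile    /etc/apache2/ssl/server.crt
        SSLCertificateKeyFile /etc/apache2/ssl/server.key
        DocumentRoot /home/magento/mysite.com
    </VirtualHost>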

    Read the article

  • Port forwarding DD-WRT

    - by Pawel
    Hi, I'm running a service locally on port 81 (192.168.1.101) and would like to access the server from outside via MY.WAN.IP.ADDR:81. Everything is working fine on my local network; however, I can't access it from outside. Below are the iptables rules on the router. I am using DD-WRT on an Asus RT-N16 (everything is set up through standard port range forwarding in DD-WRT). It might be something obvious, but I don't have any experience with routing. Any help will be really appreciated. Thanks.

    # iptables -t nat -vnL
    Chain PREROUTING (policy ACCEPT 1285 packets, 148K bytes)
    pkts bytes target prot opt in out source destination
    3 252 DNAT icmp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR to:192.168.1.1
    5 300 DNAT tcp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR tcp dpt:81 to:192.168.1.101
    0 0 DNAT udp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR udp dpt:81 to:192.168.1.101
    298 39375 TRIGGER 0 -- * * 0.0.0.0/0 MY.WAN.IP.ADDR TRIGGER type:dnat match:0 relate:0

    Chain POSTROUTING (policy ACCEPT 7 packets, 433 bytes)
    pkts bytes target prot opt in out source destination
    747 91318 SNAT 0 -- * vlan2 0.0.0.0/0 0.0.0.0/0 to:MY.WAN.IP.ADDR
    0 0 RETURN 0 -- * br0 0.0.0.0/0 0.0.0.0/0 PKTTYPE = broadcast

    Chain OUTPUT (policy ACCEPT 86 packets, 5673 bytes)
    pkts bytes target prot opt in out source destination

    # iptables -L
    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    DROP tcp -- anywhere anywhere tcp dpt:webcache
    DROP tcp -- anywhere anywhere tcp dpt:www
    DROP tcp -- anywhere anywhere tcp dpt:https
    DROP tcp -- anywhere anywhere tcp dpt:69
    DROP tcp -- anywhere anywhere tcp dpt:ssh
    DROP tcp -- anywhere anywhere tcp dpt:ssh
    DROP tcp -- anywhere anywhere tcp dpt:telnet
    DROP tcp -- anywhere anywhere tcp dpt:telnet

    Chain FORWARD (policy ACCEPT)
    target prot opt source destination
    ACCEPT 0 -- anywhere anywhere
    TCPMSS tcp -- anywhere anywhere tcp flags:SYN,RST/SYN TCPMSS clamp to PMTU
    lan2wan 0 -- anywhere anywhere
    ACCEPT 0 -- anywhere anywhere state RELATED,ESTABLISHED
    logaccept tcp -- anywhere pawel-ubuntu tcp dpt:81
    logaccept udp -- anywhere pawel-ubuntu udp dpt:81
    TRIGGER 0 -- anywhere anywhere TRIGGER type:in match:0 relate:0
    trigger_out 0 -- anywhere anywhere
    logaccept 0 -- anywhere anywhere state NEW

    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination

    Chain advgrp_1 (0 references)
    target prot opt source destination
    Chain advgrp_10 (0 references)
    target prot opt source destination
    Chain advgrp_2 (0 references)
    target prot opt source destination
    Chain advgrp_3 (0 references)
    target prot opt source destination
    Chain advgrp_4 (0 references)
    target prot opt source destination
    Chain advgrp_5 (0 references)
    target prot opt source destination
    Chain advgrp_6 (0 references)
    target prot opt source destination
    Chain advgrp_7 (0 references)
    target prot opt source destination
    Chain advgrp_8 (0 references)
    target prot opt source destination
    Chain advgrp_9 (0 references)
    target prot opt source destination
    Chain grp_1 (0 references)
    target prot opt source destination
    Chain grp_10 (0 references)
    target prot opt source destination
    Chain grp_2 (0 references)
    target prot opt source destination
    Chain grp_3 (0 references)
    target prot opt source destination
    Chain grp_4 (0 references)
    target prot opt source destination
    Chain grp_5 (0 references)
    target prot opt source destination
    Chain grp_6 (0 references)
    target prot opt source destination
    Chain grp_7 (0 references)
    target prot opt source destination
    Chain grp_8 (0 references)
    target prot opt source destination
    Chain grp_9 (0 references)
    target prot opt source destination
    Chain lan2wan (1 references)
    target prot opt source destination

    Chain logaccept (3 references)
    target prot opt source destination
    ACCEPT 0 -- anywhere anywhere

    Chain logdrop (0 references)
    target prot opt source destination
    DROP 0 -- anywhere anywhere

    Chain logreject (0 references)
    target prot opt source destination
    REJECT tcp -- anywhere anywhere tcp reject-with tcp-reset

    Chain trigger_out (1 references)
    target prot opt source destination

    # iptables -vnL FORWARD
    Chain FORWARD (policy ACCEPT 130 packets, 5327 bytes)
    pkts bytes target prot opt in out source destination
    15 900 ACCEPT 0 -- br0 br0 0.0.0.0/0 0.0.0.0/0
    390 20708 TCPMSS tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU
    182K 130M lan2wan 0 -- * * 0.0.0.0/0 0.0.0.0/0
    179K 129M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
    0 0 logaccept tcp -- * * 0.0.0.0/0 192.168.1.101 tcp dpt:81
    0 0 logaccept udp -- * * 0.0.0.0/0 192.168.1.101 udp dpt:81
    0 0 TRIGGER 0 -- vlan2 br0 0.0.0.0/0 0.0.0.0/0 TRIGGER type:in match:0 relate:0
    2612 768K trigger_out 0 -- br0 * 0.0.0.0/0 0.0.0.0/0
    2482 762K logaccept 0 -- br0 * 0.0.0.0/0 0.0.0.0/0 state NEW

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experimenting for quite some time with a "typical" Django setup on nginx + apache2 + mod_wsgi + memcached (+ postgresql) (reading the docs and some questions on SO and SF; see comments). Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part), I would like to know what a good configuration would look like under these constraints:

- Quad-core Xeon 2.8GHz
- 8 gigs of memory
- several Django projects (anything special related to this?)

These are excerpts from my current confs:

apache2:

    SetEnv VHOST null
    #WSGIPythonOptimize 2
    <VirtualHost *:8082>
        ServerName subdomain.domain.com
        ServerAlias www.domain.com
        SetEnv VHOST subdomain.domain
        AddDefaultCharset UTF-8
        ServerSignature Off
        LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom
        ErrorLog /home/project1/var/logs/apache_error.log
        CustomLog /home/project1/var/logs/apache_access.log custom
        AllowEncodedSlashes On
        WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25
        WSGIScriptAlias / /home/project1/project/wsgi.py
        WSGIProcessGroup %{ENV:VHOST}
    </VirtualHost>

wsgi.py:

    import os
    import sys

    # setting all the right paths....
    _realpath = os.path.realpath(os.path.dirname(__file__))
    _public_html = os.path.normpath(os.path.join(_realpath, '../'))
    sys.path.append(_realpath)
    sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps')))
    sys.path.append(os.path.normpath(_public_html))
    sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs')))
    sys.path.append(os.path.normpath(os.path.join(_public_html, 'django')))

    os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

    import django.core.handlers.wsgi
    _application = django.core.handlers.wsgi.WSGIHandler()

    def application(environ, start_response):
        """
        Launches django, passing over some environment (domain name) settings.

        The wsgi application group is required. It is also used to generate
        the HOST.DOMAIN.TLD:PORT parameters to pass over.
        """
        application_group = environ['mod_wsgi.application_group']
        assert application_group

        fields = application_group.replace('|', '').split(':')
        server_name = fields[0]
        os.environ['WSGI_APPLICATION_GROUP'] = application_group
        os.environ['WSGI_SERVER_NAME'] = server_name
        if len(fields) > 1:
            os.environ['WSGI_PORT'] = fields[1]

        splitted = server_name.rsplit('.', 2)
        assert len(splitted) >= 2
        splitted.reverse()
        if len(splitted) > 0:
            os.environ['WSGI_TLD'] = splitted[0]
        if len(splitted) > 1:
            os.environ['WSGI_DOMAIN'] = splitted[1]
        if len(splitted) > 2:
            os.environ['WSGI_HOST'] = splitted[2]

        return _application(environ, start_response)

Folder structure, in case it matters (slightly shortened, actually):

    /home/www-data/projectN/var/logs
    /project (contains manage.py, wsgi.py, settings.py)
    /project/apps (all the project apps are here)
    /django
    /libs

Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings:

- Are those fine? Is 25 threads an /ok/ number with a quad core for a single Django project? Is it still ok with several Django projects on different virtual hosts? Should I specify 'processes'? Any other directive I should add?
- Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file; should I switch to that?
- Or... should this conf just be running fine, so that I should look for issues somewhere else?

So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse is that relatively often apache2 gets stuck.
It just does not answer anymore and has to be restarted. I have set up monit to take care of that, but it ain't a real solution. I have been wondering if it's an issue with database access (postgresql) under heavy load, but even if it were, why would the apache2 processes get stuck? Besides these two issues, performance is overall great. I even tried New Relic and got very good average results.
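For what it's worth, the mod_wsgi daemon-mode directives that control this are processes, threads and maximum-requests; a starting point for a quad-core box might be a sketch like the following (the numbers are illustrative assumptions to tune from, not figures verified against this workload):

    WSGIDaemonProcess subdomain.domain user=www-data group=www-data \
        processes=4 threads=8 maximum-requests=1000 display-name=%{GROUP}

With several Django projects, each vhost would get its own WSGIDaemonProcess line (and its own WSGIProcessGroup), so that one stuck project cannot hang the others.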

    Read the article

  • Cannot join Win7 workstations to Win2k8 domain

    - by wfaulk
    I am trying to connect a Windows 7 Ultimate machine to a Windows 2k8 domain and it's not working. I get this error:

    Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.
    DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain "example.local":
    The query was for the SRV record for _ldap._tcp.dc._msdcs.example.local
    The following domain controllers were identified by the query:
    dc1.example.local
    dc2.example.local
    However no domain controllers could be contacted.
    Common causes of this error include:
    - Host (A) or (AAAA) records that map the names of the domain controllers to their IP addresses are missing or contain incorrect addresses.
    - Domain controllers registered in DNS are not connected to the network or are not running.

The client is in an office connected remotely via MPLS to the data center where our domain controllers exist. I don't seem to have anything blocking connectivity to the DCs, but I don't have total control over the MPLS circuit, so it's possible that there's something blocking connectivity. I have tried multiple clients (Win7 Ultimate and WinXP SP3) in the one office and get the same symptoms on all of them. I have no trouble connecting to either of the domain controllers, though I have, admittedly, not tried every possible port. ICMP, LDAP, DNS, and SMB connections all work fine. Client DNS is pointing to the DCs, and "example.local" resolves to the two IP addresses of the DCs. I get this output from the NetLogon Test command-line utility:

    C:\Windows\System32>nltest /dsgetdc:example.local
    Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN

I have also created a separate network to emulate that office's configuration, connected to the DC network via LAN-to-LAN VPN instead of MPLS. Joining Windows 7 computers from that remote network works fine. The only difference I can find between the two environments is the intermediate connectivity, but I'm out of ideas as to what to test or how to do it. What further steps should I take? (Note that this isn't actually my client workstation and I have no direct access to it; I'm forced to do remote-hands access to it, which makes some of the obvious troubleshooting methods, like packet sniffing, more difficult. If I could just set up a system there that I could remote into, I would, but requests to that effect have gone unanswered.)

2011-08-25 update: I had DCDIAG.EXE run on a client attempting to join the domain:

    C:\Windows\System32>dcdiag /u:example\adminuser /p:********* /s:dc2.example.local
    Directory Server Diagnosis
    Performing initial setup:
    Ldap search capabality attribute search failed on server dc2.example.local, return value = 81

This sounds like it was able to connect via LDAP, but the thing it was trying to do failed. I don't quite follow what it was trying to do, though, much less how to reproduce or resolve it.

2011-08-26 update: Using LDP.EXE to try to make an LDAP connection directly to the DCs results in these errors:

    ld = ldap_open("10.0.0.1", 389);
    Error <0x51>: Fail to connect to 10.0.0.1.
    ld = ldap_open("10.0.0.2", 389);
    Error <0x51>: Fail to connect to 10.0.0.2.
    ld = ldap_open("10.0.0.1", 3268);
    Error <0x51>: Fail to connect to 10.0.0.1.
    ld = ldap_open("10.0.0.2", 3268);
    Error <0x51>: Fail to connect to 10.0.0.2.
This would seem to point fingers at LDAP connections being blocked somewhere. (And 0x51 == 81, which was the error from DCDIAG.EXE in yesterday's update.) I could swear I tested this using TELNET.EXE weeks ago, but now I'm thinking that I may have mistaken its clearing of the screen for a successful connection when it was actually still waiting. I'm tracking down LDAP connectivity problems now. This update may become an answer.
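A quick way to verify plain TCP reachability of those ports from the client site, without LDP's cleared-screen ambiguity, is Microsoft's free PortQry tool (a sketch; the IPs are the DC addresses from above):

    portqry -n 10.0.0.1 -p tcp -e 389
    portqry -n 10.0.0.1 -p tcp -e 3268

LISTENING means the TCP connection succeeded; FILTERED points at something on the path silently dropping LDAP.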

    Read the article

  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256mb RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m:

                     total       used       free     shared    buffers     cached
    Mem:               245        122        122          0         19         64
    -/+ buffers/cache:             38        206
    Swap:              511          0        511

Unless I'm reading this wrong, 122mb is being used and only 84mb of that is disk cache. Here are all the processes I'm running, sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r):

    %MEM %CPU   RSS    VSZ COMMAND
     5.0  0.0 12648 633140 node /home/node/main/sites.js
     1.5  0.0  3884 251736 /usr/sbin/console-kit-daemon --no-daemon
     1.3  0.0  3328  77108 sshd: apeace [priv]
     0.9  0.0  2344  19624 -bash
     0.7  0.0  1776  23620 /sbin/init
     0.6  0.0  1624  77108 sshd: apeace@pts/0
     0.6  0.0  1544   9940 redis-server /etc/redis/redis.conf
     0.6  0.0  1524  25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105
     0.5  0.0  1324 119880 rsyslogd -c4
     0.4  0.0  1084  49308 /usr/sbin/sshd
     0.4  0.0  1028  44376 /usr/sbin/exim4 -bd -q30m
     0.3  0.0   904   6876 ps -eo pmem,pcpu,rss,vsize,args
     0.3  0.0   888  21124 cron
     0.3  0.0   868  23472 dbus-daemon --system --fork
     0.2  0.0   732  19624 -bash
     0.2  0.0   628   6128 /sbin/getty -8 38400 tty1
     0.2  0.0   628  16952 upstart-udev-bridge --daemon
     0.2  0.0   564  16800 udevd --daemon
     0.2  0.0   552  16796 udevd --daemon
     0.2  0.0   548  16796 udevd --daemon
     0.0  0.0     0      0 [xenwatch]
     0.0  0.0     0      0 [xenbus]
     0.0  0.0     0      0 [sync_supers]
     0.0  0.0     0      0 [netns]
     0.0  0.0     0      0 [migration/3]
     0.0  0.0     0      0 [migration/2]
     0.0  0.0     0      0 [migration/1]
     0.0  0.0     0      0 [migration/0]
     0.0  0.0     0      0 [kthreadd]
     0.0  0.0     0      0 [kswapd0]
     0.0  0.0     0      0 [kstriped]
     0.0  0.0     0      0 [ksoftirqd/3]
     0.0  0.0     0      0 [ksoftirqd/2]
     0.0  0.0     0      0 [ksoftirqd/1]
     0.0  0.0     0      0 [ksoftirqd/0]
     0.0  0.0     0      0 [ksnapd]
     0.0  0.0     0      0 [kseriod]
     0.0  0.0     0      0 [kjournald]
     0.0  0.0     0      0 [khvcd]
     0.0  0.0     0      0 [khelper]
     0.0  0.0     0      0 [kblockd/3]
     0.0  0.0     0      0 [kblockd/2]
     0.0  0.0     0      0 [kblockd/1]
     0.0  0.0     0      0 [kblockd/0]
     0.0  0.0     0      0 [flush-202:1]
     0.0  0.0     0      0 [events/3]
     0.0  0.0     0      0 [events/2]
     0.0  0.0     0      0 [events/1]
     0.0  0.0     0      0 [events/0]
     0.0  0.0     0      0 [crypto/3]
     0.0  0.0     0      0 [crypto/2]
     0.0  0.0     0      0 [crypto/1]
     0.0  0.0     0      0 [crypto/0]
     0.0  0.0     0      0 [cpuset]
     0.0  0.0     0      0 [bdi-default]
     0.0  0.0     0      0 [async/mgr]
     0.0  0.0     0      0 [aio/3]
     0.0  0.0     0      0 [aio/2]
     0.0  0.0     0      0 [aio/1]
     0.0  0.0     0      0 [aio/0]

Now, I know that ps is not the best tool for viewing process memory usage, but that's because it tends to report more memory than is actually being used... meaning that no matter how you look at it, all my processes combined shouldn't be using anywhere near 122mb, even if you account for the disk cache. What's more, memory usage is growing all the time. I've had to restart my server once a week, because once my 256mb fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit?! I'm new to server admin, so if there's something obvious I'm overlooking, please point it out to me.
Just for good measure, the output of cat /proc/meminfo:

    MemTotal:         251140 kB
    MemFree:          124604 kB
    Buffers:           20536 kB
    Cached:            66136 kB
    SwapCached:            0 kB
    Active:            65004 kB
    Inactive:          37576 kB
    Active(anon):      15932 kB
    Inactive(anon):      164 kB
    Active(file):      49072 kB
    Inactive(file):    37412 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:        524284 kB
    SwapFree:         524284 kB
    Dirty:                 8 kB
    Writeback:             0 kB
    AnonPages:         15916 kB
    Mapped:            10668 kB
    Shmem:               188 kB
    Slab:              18604 kB
    SReclaimable:      10088 kB
    SUnreclaim:         8516 kB
    KernelStack:         536 kB
    PageTables:         1444 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:      649852 kB
    Committed_AS:      64224 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:         752 kB
    VmallocChunk:   34359737600 kB
    DirectMap4k:      262144 kB
    DirectMap2M:           0 kB

EDIT: I had misinterpreted the meaning of free -m at first. But even so: the important thing is that my OS eventually begins to use swap memory if I don't restart my server, which disk caching wouldn't do. So where do I look to see what is using all this memory?
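Two standard places to look when per-process RSS doesn't add up to the total (a sketch; smem may need installing, while slabtop ships with procps):

    slabtop -o | head -n 20     # one-shot view of kernel slab caches (cf. the Slab line above)
    smem -k -r | head -n 20     # per-process proportional memory, largest first

Kernel-side allocations (slab, page tables, network buffers) never show up in ps, which is one way "used" memory can grow with no process to blame.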

    Read the article

  • Windows 7 .NET 3.5.1 - 2.0 Slightly Corrupted, How to Repair?

    - by Quinxy von Besiex
    My Windows 7's included .NET installation (3.5 down to 2.0) appears very slightly and very specifically corrupted, and I am trying to fix it without reinstalling Windows or reverting to backups. Everything was working, and then my hard drive started corrupting a few files and checkdisk found bad clusters, so I imaged the drive to a new one. As soon as I booted on the new drive, everything worked except programs which call the System.Net.NetworkInformation methods within .NET 3.5 to 2.0 (like Ping() and IsNetworkAvailable()), which immediately crash the app making the calls (the same calls in .NET 4.0 work fine). Those methods are found inside System.dll, and I assume they call native methods which I believe are inside winnsi.dll or iphlpapi.dll or something else (I've not found this yet); I assume it calls native methods because the exception which causes the crash is Fatal Execution Engine Error, which people mention is usually related to calling native methods and marshaling data between them. A huge clue about the culprit is likely found in the fact that when I launch the exact same crashing application through a code profiler (which executes the exe and captures stats on which methods took the longest), the app works fine - no crash at all! How could running it within the profiler work and running it outside not work? That seems the key to the mystery. I've used procmon to capture all the registry, filesystem, and network events from the crashing execution and the profiler-run successful execution and compared the two outputs, but didn't learn much (I see the moment at which the non-profiled app crashes, but up until then they behave the same and load the same modules). The only big difference seems to be that at the moment before the crash, the profiler-executed code creates 4-6 new threads while the directly executed code only creates 1-2. I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post-disk trouble and seen no changes where I didn't expect any (no obvious file corruption). I have diffed the software and system registry hives pre- and post-disk trouble and seen no changes which seemed relevant. I have created a new user account and cleaned up any environment variables in case the environment was involved. No change. I ran "sfc /scannow" and it found no integrity problems. I tried "ngen update" to regenerate pre-compiled code in case I had missed something damaged; nothing changed. I assume I need to repair my .NET installation, but because Windows 7 ships with .NET 3.5 - 2.0 built in, you can't just re-run a .NET installer to redo it. I do not have access to Windows disks to try to re-install Windows over itself (the computer has a recovery partition, but it is unusable); also, the drive uses a whole-disk encryption solution, and re-installing would be difficult. I absolutely do not want to start from scratch here: install a fresh Windows, reinstall dozens of software packages, and try to remember dozens of development-related customizations, etc. Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working, as I am a developer and need to build and test against it. Thanks! Quinxy

    Read the article

  • Dhcpd Daemon is trying to lease itself?

    - by tommieb75
    I have a Slackware Linux 13.0 box with two interfaces, eth0 and eth1. I have set this box up to be on the 192.168.1.0/24 network, with subnet mask 255.255.255.0. I am trying to run a dhcpd server on this box to service the two interfaces above, so I subnetted the 192.168.1.0/24 network into two subnets:

- For eth0: 192.168.1.1, subnet mask 255.255.255.128, broadcast address 192.168.1.127.
- For eth1: 192.168.1.129, subnet mask 255.255.255.128, broadcast address 192.168.1.255.

Both interfaces are assigned manually.

    eth0      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
              inet addr:192.168.1.1  Bcast:192.168.1.127  Mask:255.255.255.128
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:1404 (1.3 KiB)
              Interrupt:11 Base address:0x8000 Memory:faffc000-faffcfff

    eth1      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
              inet addr:192.168.1.128  Bcast:192.168.1.255  Mask:255.255.255.128
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:10003 errors:0 dropped:0 overruns:0 frame:0
              TX packets:13286 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1589229 (1.5 MiB)  TX bytes:9900005 (9.4 MiB)
              Interrupt:11

Here is how dhcpd.conf is set up:

    authoritative;
    ddns-update-style interim;
    ignore client-updates;

    subnet 192.168.1.0 netmask 255.255.255.128 {
        range 192.168.1.2 192.168.1.126;
        default-lease-time 86400;
        max-lease-time 86400;
        option routers 192.168.1.1;
        option ip-forwarding off;
        option domain-name-servers 208.67.222.222, 208.67.220.220;
        option broadcast-address 192.168.1.127;
        option subnet-mask 255.255.255.128;
    }

    subnet 192.168.1.128 netmask 255.255.255.128 {
        range 192.168.1.129 192.168.1.254;
        default-lease-time 86400;
        max-lease-time 86400;
        option routers 192.168.1.1;
        option ip-forwarding off;
        option domain-name-servers 208.67.222.222, 208.67.220.220;
        option broadcast-address 192.168.1.255;
        option subnet-mask 255.255.255.128;
    }

This is what shows up in the log:

    Apr 10 18:09:58 inspiron8600 dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 (inspiron8600) via eth1
    Apr 10 18:09:58 inspiron8600 dhcpd: DHCPOFFER on 192.168.1.131 to 00:00:00:00:00:00 (inspiron8600) via eth1
    Apr 10 18:10:01 inspiron8600 dhcpcd[3832]: eth1: adding IP address 169.254.153.6/16

This is happening spuriously, and the log gets filled up with nonsense... so my question is this: how do I stop this from happening? And why would the box be trying to give itself a lease? I am sure I have missed something but cannot see it, and I would appreciate a pair of eyes from the community to spot the obvious flaw!

    Read the article

  • nikto probe warning messages

    - by julio
    Hi -- I have a pretty standard VPS running Ubuntu 8.1, Apache 2.2, PHP 5, etc. -- a standard LAMP stack. I am using Suhosin and have tried my best to plug the obvious stuff, since I'm the only user -- there's no SSH access except via pubkey on a non-standard port, there's no root access by SSH, no FTP server running, and iptables is set to discard anything outside of basically port 80 or my SSH port (there's no mail server or anything else). However, I've still been compromised (not badly, as far as I can tell), probably by a SQL injection. I've locked down the SQL user (there's only one outside of root, and he's got limited privileges, no file access, etc.). So I ran nikto to see what I'm doing wrong, and there's a list of things I've never seen and can't find using "find" or any other method I'm aware of. See below:

    + /autologon.html?10514: Remotely Anywhere 5.10.415 is vulnerable to XSS attacks that can lead to cookie theft or privilege escalation. This is typically found on port 2000.
    + /servlet/webacc?User.html=noexist: Netware web access may reveal full path of the web server. Apply vendor patch or upgrade.
    + OSVDB-35878: /modules.php?name=Members_List&letter='%20OR%20pass%20LIKE%20'a%25'/*: PHP Nuke module allows user names and passwords to be viewed.
    + OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content.
    + OSVDB-12184: /index.php?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
    + OSVDB-12184: /some.php?=PHPE9568F36-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
    + OSVDB-12184: /some.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
    + OSVDB-12184: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
    + OSVDB-3092: /administrator/: This might be interesting...
    + OSVDB-3092: /Agent/: This might be interesting...
    + OSVDB-3092: /includes/: This might be interesting...
    + OSVDB-3092: /logs/: This might be interesting...
    + OSVDB-3092: /tmp/: This might be interesting...
    + ERROR: /servlet/Counter returned an error: error reading HTTP response
    + OSVDB-3268: /icons/: Directory indexing is enabled: /icons
    + OSVDB-3268: /images/: Directory indexing is enabled: /images
    + OSVDB-3299: /forumscalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
    + OSVDB-3299: /forumzcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
    + OSVDB-3299: /htforumcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
    + OSVDB-3299: /vbcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
    + OSVDB-3299: /vbulletincalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
    + OSVDB-6659: /kCKAowoWuZkKCUPH7Mr675ILd9hFg1lnyc1tWUuEbkYkFCpCdEnCKkkd9L0bY34tIf9l6t2owkUp9nI5PIDmQzMokDbp71QFTZGxdnZhTUIzxVrQhVgwmPYsMK7g34DURzeiy3nyd4ezX5NtUozTGqMkxDrLheQmx4dDYlRx0vKaX41JX40GEMf21TKWxHAZSUxjgXUnIlKav58GZQ5LNAwSAn13l0w<font%20size=50>DEFACED<!--//--: MyWebServer 1.0.2 is vulnerable to HTML injection. Upgrade to a later version.

I understand about the trace and index findings, but what about the vbulletin and autologon ones? I've searched, and I can't find any files like that on the server. I have no idea about the "MyWebServer" stuff, the PHP Nuke, or the Netware/servlet stuff -- there's nothing really on the server except a pretty standard Joomla site (updated to the latest version). Any help with these messages and/or what I'm doing wrong is very much appreciated.

    Read the article

  • SSH-forwarded X11 display from Linux to Mac lost after some time

    - by mklein9
    I have a new and vexing problem with ssh forwarding my X11 connection when logging in from a Mac (10.7.2) to Linux (Ubuntu 8.04). I have no trouble using ssh -X to log in to the remote machine and starting an X11-based application from that shell. What has recently started happening is that additional invocations of X11 applications from that same shell, after a while (on the order of hours), are unable to start because the forwarded display is being blocked (I presume). When attempting to start xterm, for example, I get the usual message about a bad DISPLAY setting, such as: xterm Xt error: Can't open display: localhost:10.0 But the X11 application I started right when I logged in is still running along just fine, using that exact same display (localhost:10.0), just that it was started earlier. I turned on verbose logging in sshd_config and I see this in the /var/log/auth.log file in response to the failed xterm startup attempt: sshd[22104]: channel 8: open failed: administratively prohibited: open failed If I ssh -X to the server again, starting a new shell and getting assigned a new display (localhost:11.0), the same process repeats: the X11 applications started early on run just fine for as long as I keep them open (days), but after a few hours I cannot start any new ones from that shell. Particulars: OpenSSH sshd server running on Ubuntu 8.04, display forwarded to a Mac running Lion (10.7.2) with the default Apple X server. The systems are connected on an Ethernet LAN with a single switch between them. Neither machine is running a firewall. Until recently (a few days ago) this setup worked perfectly so I am mystified as to where to look next. I am by no means an X11 or SSH expert but have good UNIX/Linux experience. Nothing obvious has changed in either client or server configuration although I have tried changing a few options to try to debug this, like setting sshd_config's TCPKeepAlive to no, and setting "host +localhost" (you can tell I've been Googling). When logging in from a Linux 11.10 laptop to the same remote host over the same network and switch, this problem does not occur -- an xterm can be invoked successfully hours later from the same ssh login shell while the same experiment from the Mac fails (tested this morning to be sure), so it would appear to be a Mac-specific issue. With "LogLevel DEBUG3" set on the remote machine (sshd server), and no change made in the client connections by me, /var/log/auth.log shows one slight change in connection status reports overnight, which is the port number used by the one successful ssh session from the Linux machine (I think), connection #7 below: sshd[20173]: debug3: channel 7: status: The following connections are open:\r\n #0 server-session (t4 r0 i0/0 o0/0 fd 14/13 cfd -1)\r\n #3 X11 connection from 127.0.0.1 port 57564 (t4 r1 i0/0 o0/0 fd 16/16 cfd -1)\r\n #4 X11 connection from 127.0.0.1 port 57565 (t4 r2 i0/0 o0/0 fd 17/17 cfd -1)\r\n #5 X11 connection from 127.0.0.1 port 57566 (t4 r3 i0/0 o0/0 fd 18/18 cfd -1)\r\n #6 X11 connection from 127.0.0.1 port 57567 (t4 r4 i0/0 o0/0 fd 19/19 cfd -1)\r\n #7 X11 connection from 127.0.0.1 port 59007 In this report, everything is the same between status reports except the port number used by connection #7 which I believe is the Linux client -- the only one still maintaining a display connection. It continues to increment over time, judging by a sequence of these reports overnight. Thanks for any help, -Mike

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: last year we bought an IBM x3500 server with 2 Xeon E5410's, 9GB RAM, and 6 HDDs. The original intent for this server was to replace the old Exchange e-mail server. It was brought in, set up, and then shortly after we switched to Gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I've been thinking about the best use for the two servers, and I think I finally have a plan for what I want to do with them. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per VM. I've done some load testing to pick out which servers to convert to virtual, and chose them so that each host will be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM per VM, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, so I'm looking at IBM again. It also helps that we have some educational matching-grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally (if you're not bored already), we come to my questions:

1. Am I missing anything big or obvious in this plan? I'm a little worried about network performance, since the VM hosts will only have 4 NICs total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it all too complicated?

2. IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333MHz front-side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase, for only a marginal increase in performance. The alternative, to stay in budget, is to take a hit on the FSB down to 800MHz or to cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor: IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800MHz-FSB dual-core Xeon I can configure and then buying the E5410's separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this - will it work, or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want?

3. I don't really intend to do this, but one option to save some money back is to omit the redundant power supply. Since my redundancy plan for these systems is to switch over to a completely different host, the extra power supply isn't strictly necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

< Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >