Search Results

Search found 4745 results on 190 pages for 'spare parts'.


  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for a performance analysis of paging, so here is a quick one. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic is another reason paging has become such an expensive operation: I have seen many legacy applications where the complete resultset is brought back to the application and paging is done there. As you read earlier, SQL Server 2011 offers a better alternative to this age-old solution. This article is divided into two parts:
    Test 1: Performance Comparison of Two Different Pages Using the SQL Server 2011 Method. In this test, we will analyze the performance of retrieving two different pages, where one is at the beginning of the table and the other is at its end.
    Test 2: Performance Comparison of Two Different Pages Using a CTE (the Earlier Solution from SQL Server 2005/2008) and the New Method of SQL Server 2011.
    This article tackles Test 1 first.
    Test 1: Retrieving a page from two different locations in the table. Run the following T-SQL script and compare the performance:
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO
    You will notice that when we read a page from the beginning of the table, far fewer database pages are read than when the page comes from the end of the table. This is very interesting: as the OFFSET changes, page IO increases or decreases with it. In the typical search-engine scenario, people usually read the first few pages, which means the higher IO cost is only paid as they navigate further in. I am really impressed, because with the new method in SQL Server 2011, page IO is much lower when the first few pages of the navigation are read.
    Test 2: Retrieving a page from two different locations in the table and comparing to earlier versions. In this test, we will compare the queries of Test 1 with the earlier solution via Common Table Expression (CTE), which we used in SQL Server 2005 and SQL Server 2008.
    Test 2 A: Page early in the table
    -- Test with pages early in table
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    ;WITH CTE_SalesOrderDetail AS (
    SELECT *, ROW_NUMBER() OVER(ORDER BY SalesOrderDetailID) AS RowNumber
    FROM Sales.SalesOrderDetail PC)
    SELECT *
    FROM CTE_SalesOrderDetail
    WHERE RowNumber >= @PageNumber*@RowsPerPage+1
    AND RowNumber <= (@PageNumber+1)*@RowsPerPage
    ORDER BY SalesOrderDetailID
    GO
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO
    Test 2 B: Page later in the table
    -- Test with pages later in table
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    ;WITH CTE_SalesOrderDetail AS (
    SELECT *, ROW_NUMBER() OVER(ORDER BY SalesOrderDetailID) AS RowNumber
    FROM Sales.SalesOrderDetail PC)
    SELECT *
    FROM CTE_SalesOrderDetail
    WHERE RowNumber >= @PageNumber*@RowsPerPage+1
    AND RowNumber <= (@PageNumber+1)*@RowsPerPage
    ORDER BY SalesOrderDetailID
    GO
    SET STATISTICS IO ON;
    USE AdventureWorks2008R2
    GO
    DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderDetailID
    OFFSET @PageNumber*@RowsPerPage ROWS
    FETCH NEXT 10 ROWS ONLY
    GO
    From the resultset it is very clear that the pages read by the earlier CTE solution are always much higher than with the new technique introduced in SQL Server 2011, even if we don't retrieve all the data to the screen. If you look carefully at both comparisons, page IO is much lower with the new technique whether we read the page from the beginning of the table or from its end. I consider this a big improvement, as paging is one of the most-used features in most applications. The solution introduced in SQL Server 2011 is very elegant because it improves the performance of the query and, by extension, reduces the load on the database. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
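    To see how an application would consume this server-side paging, here is a minimal C# (ADO.NET) sketch; the connection string, the column list, and the page values are placeholder assumptions, and the query simply mirrors the OFFSET/FETCH pattern shown above:
    using System;
    using System.Data.SqlClient;
    class PagingDemo {
        static void Main() {
            // Placeholder connection string - adjust for your environment.
            const string connectionString = @"Server=.;Database=AdventureWorks2008R2;Integrated Security=true";
            int rowsPerPage = 10; // page size, as in the T-SQL example
            int pageNumber = 5;   // zero-based page index
            const string sql = @"SELECT SalesOrderID, SalesOrderDetailID, OrderQty
                FROM Sales.SalesOrderDetail
                ORDER BY SalesOrderDetailID
                OFFSET @PageNumber * @RowsPerPage ROWS
                FETCH NEXT @RowsPerPage ROWS ONLY";
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection)) {
                // Parameterizing keeps the query plan reusable across pages.
                command.Parameters.AddWithValue("@RowsPerPage", rowsPerPage);
                command.Parameters.AddWithValue("@PageNumber", pageNumber);
                connection.Open();
                using (var reader = command.ExecuteReader()) {
                    while (reader.Read())
                        Console.WriteLine("{0}\t{1}\t{2}", reader["SalesOrderID"], reader["SalesOrderDetailID"], reader["OrderQty"]);
                }
            }
        }
    }
    Because only the requested page crosses the wire, network traffic stays proportional to the page size rather than to the complete resultset that legacy fetch-everything paging drags back to the application.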

    Read the article

  • reverse proxy on PFsense, squid or otherwise

    - by Mustafa Ismail Mustafa
    I've been trying to get this to work for days now and it's not working. After bashing my head against the desk enough times, I've decided to man up and ask. I'm desperately trying to set up a reverse proxy on the pfSense box itself. One, because it's a pretty powerful box that's not being utilized to the maximum at all, and two, because I don't have any spare machines to set up Squid (or any other reverse-proxy-capable server) on. So, on pfSense, every time I set up rules (under Services > Proxy Server > General) as so: acl surveillance dstdomain surveillance.myweb.local; acl camera dstdomain camera.myweb.local; http_access allow surveillance AND camera (ad nauseam), when I check the services, Squid stops and refuses to restart until I remove them pesky ACLs that are supposed to make my life easier! What am I doing wrong? How can I get it to work? Is there another way/package I can use? Thanks

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:
    Analyzing the requirements to find the Entry Points and their signatures
    Designing the functionality to be executed when those Entry Points get triggered
    Implementing the functionality according to the design, aka coding
    I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience, developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding". And here is what functional design is not: It's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). I also consider calling external APIs logic; it's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are done either through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it? What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design, on the other hand, is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design? An example of functional design: Let's assume a program to de-duplicate strings.
    The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a, and the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line,
    C:\>deduplicate "a, b, a, c, d, b, e, c, a"
    a, b, c, d, e
    …or by clicking a GUI button. This leads to the Entry Point function getting called. It's the program's main function in the case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three-step process:
    Transform the input data from the user into a request.
    Call the request handler.
    Transform the output of the request handler into a tangible result for the user.
    Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly, the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:
    Accept string list from command line
    De-duplicate
    Present de-duplicated strings on standard output
    And this concrete list of processing steps can easily be transformed into code:
    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var output = Deduplicate(input);
        Present_deduplicated_string_list(output);
    }
    Instead of one big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.
    private static string Accept_string_list(string[] args) {
        return args[0];
    }
    private static void Present_deduplicated_string_list(string[] output) {
        var line = string.Join(", ", output);
        Console.WriteLine(line);
    }
    Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:
    De-duplicate
    Parse the input string into a true list of strings.
    Register each string in a dictionary/map/set. That way duplicates get cast away.
    Transform the data structure into a list of unique strings.
    Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
    But now, after this refinement, the implementation of each step is easy again:
    private static string[] Parse_string_list(string input) {
        return input.Split(',')
            .Select(s => s.Trim())
            .ToArray();
    }
    private static Dictionary<string,object> Compile_unique_strings(string[] strings) {
        return strings.Aggregate(
            new Dictionary<string, object>(),
            (agg, s) => { agg[s] = null; return agg; });
    }
    private static string[] Serialize_unique_strings(Dictionary<string,object> dict) {
        return dict.Keys.ToArray();
    }
    With these three additional functions, Main() now looks like this:
    static void Main(string[] args) {
        var input = Accept_string_list(args);
        var strings = Parse_string_list(input);
        var dict = Compile_unique_strings(strings);
        var output = Serialize_unique_strings(dict);
        Present_deduplicated_string_list(output);
    }
    I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:
    Accept string list from command line
    Parse the input string into a true list of strings.
    Register each string in a dictionary/map/set. That way duplicates get cast away.
    Transform the data structure into a list of unique strings.
    Present de-duplicated strings on standard output
    You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But more about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.
    Introducing Flow Design
    Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process, on the other hand, consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g.:
    Get data from user
    Output result to user
    Transform data
    Parse data
    Map result for output
    Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next, etc.?
    Get data from user
    Parse data
    Transform data
    Map result for output
    Output result to user
    That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. And how do you know that beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
    For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, a functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.
    Visualizing processes
    The building block of the functional design notation is a functional unit, drawn as a labeled box with arrows for what flows in and out. Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed, you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later, coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks, functional design processes can be depicted very easily. Here's the initial process for the example problem: for each processing step, draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
    Finally, think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer", on the other hand, signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list), which is turned into (strings).
    The Principle of Mutual Oblivion
    A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole, focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell, for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
    Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology. Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.
    Translating functional units into code
    Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:
    Translate an input port to a function.
    Translate an output port either to a return statement in that function or to a function pointer visible to that function.
    The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity, no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". The optionality of the output is signified by an asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none, or one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
    So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]
    void filter_stop_words(string word, Action<string> onNoStopWord) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }
    If you want to be nitpicky, you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection, because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call, or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature:
    class Filter {
        public void filter_stop_words(string word) {
            if (...check if not a stop word...)
                onNoStopWord(word);
        }
        public event Action<string> onNoStopWord;
    }
    When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.
    Another example of 1D functional design
    Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:
    string WordWrap(string text, int maxLineLength) {...}
    That's not an Entry Point, since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length, it can be split into multiple parts, each fitting in a line.
    Flow Design
    Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?
    Words need to be assembled into lines
    Words need to be extracted from the input text
    The resulting lines need to be assembled into the output text
    Words too long to fit in a line need to be split
    Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: the first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see, because thereby the domain is clearly represented in the design. The requirements are talking about words and lines, and here they are. But note the asterisk! It's not outside the brackets but inside.
    That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not… I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.
    Implementation
    The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.
    Flow Design - Increment 2
    So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope. Fortunately, Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step: since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.
    Implementation - Increment 2
    A trivial implementation checking the assumption that this works does not do anything to split long words; the input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution.[4] How does that solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly, the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
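    The original post shows the resulting implementations only as images, which are not reproduced here. As a stand-in, here is a minimal C# sketch of how such a 1D flow might translate to code; the function names and the simple greedy/chopping strategies are my assumptions, not the author's exact implementation:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    static class WordWrapper {
        // The flow: text -> words -> words (long ones split) -> lines -> text.
        // Each step is a mutually oblivious function; WordWrap() just wires them up.
        public static string WordWrap(string text, int maxLineLength) {
            var words = Extract_words(text);
            var fittingWords = Split_long_words(words, maxLineLength); // increment 2
            var lines = Assemble_lines(fittingWords, maxLineLength);
            return Assemble_text(lines);
        }
        static string[] Extract_words(string text) {
            return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        }
        // Increment 2: words longer than a line are chopped into line-sized parts;
        // all other words pass through unchanged.
        static string[] Split_long_words(string[] words, int maxLineLength) {
            return words.SelectMany(w => Chop(w, maxLineLength)).ToArray();
        }
        static IEnumerable<string> Chop(string word, int size) {
            for (int i = 0; i < word.Length; i += size)
                yield return word.Substring(i, Math.Min(size, word.Length - i));
        }
        static string[] Assemble_lines(string[] words, int maxLineLength) {
            var lines = new List<string>();
            var line = "";
            foreach (var word in words) {
                if (line.Length == 0)
                    line = word;
                else if (line.Length + 1 + word.Length <= maxLineLength)
                    line += " " + word;
                else {
                    lines.Add(line);
                    line = word;
                }
            }
            if (line.Length > 0)
                lines.Add(line);
            return lines.ToArray();
        }
        static string Assemble_text(string[] lines) {
            return string.Join(Environment.NewLine, lines);
        }
    }
    Note how the increment-2 step slots in between the existing upstream and downstream functions without either of them being touched; the data flow between Extract_words and Assemble_lines is the extension point.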
    What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.
    In closing
    I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary, read my blog article here. For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO, the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps becoming more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.
    PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.
    [1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.
    [2] Image taken from www.physioweb.org
    [3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language, it's of course a no-brainer.
    [4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • SQL SERVER – Fastest Way to Restore the Database

    - by pinaldave
    A few days ago, I received the following email: "Pinal, We are in an emergency situation. We have a large database of around 80+ GB and its backup is 50+ GB in size. We need to restore this database ASAP and use it; however, restoring the database takes forever. Do you think a compressed backup would solve our problem? Any other ideas you got?" First of all, the asker has already answered his own question. Yes: I have seen that if you use a compressed backup, it takes less time to restore a database. I have previously blogged about the same subject. Here are the links to those blog posts: SQL SERVER – Data and Page Compressions – Data Storage and IO Improvement; SQL SERVER – 2008 – Introduction to Row Compression; SQL SERVER – 2008 – Introduction to New Feature of Backup Compression. However, if your database is so large that the restore still takes a long time even with the features listed above, and there is no time you can spare for restoring the database, then you can use a wonderful tool developed by Idera called virtual database. This tool makes a database backup available for use in just a few seconds. I have written in depth about my experience with this tool in the article here: SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database. Let me know your experience in this scenario. Have you ever needed a database backup restored very quickly? What did you do in that scenario? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What's your experience with female programmers?

    - by Rachel
    Let me start by saying I'm female, but every single other female programmer I've known has been pretty terrible. The extent of their knowledge seems to be copy/paste and modify some values. Quite often they don't even try to learn new concepts, or understand what they're doing. I'm not saying good female programmers aren't out there, just that the ratio of good/bad programmers seems much worse than for males. Perhaps it's because everyone feels they have to give female programmers a chance to prove they are not biased? Or is this just me? What have your experiences been with them? UPDATE: Just want to say thanks for all the responses. I've learned some interesting things and am happy to know that female programmers have such support :) My experience with them has been very limited, but all bad, and I agree that it is probably due to my small sample size (around 5). I wasn't trying to be sexist with such a question, I just wanted to find out if it was really that abnormal to be a female programmer. I'm abnormal about a lot of things you'd expect from a female... I play video games in most of my spare time, I liked math so much I completed my entire math book during Christmas break one year (what can I say, I found the subject interesting), I'm not very social, I dislike shopping, I only have 2 pairs of shoes, my significant other doesn't work but does all the housework/laundry/etc... but anyways, thanks :)

    Read the article

  • Advice on how to move from a .NET developer role to a web developer role

    - by dermd
    I've been working primarily as a .NET developer for the past 4 years for a financial services company. I've worked on .NET 1.1, 2.0, and 3.5, and have done the 3.5 enterprise app developer cert (not that that's worth a whole lot!). Before that I worked as a Java developer, with a bit of Flex thrown in, for just over a year. My educational background is an electronic and computer engineering degree, a higher diploma in systems analysis as well as one in web development (this was mainly Java - JSP, Spring, etc.), and a science masters in software design and development. I really feel like a change and would like to move to a different field to experience something different. I've done some courses in RoR and played around with it a bit in my spare time. Similarly, I've done various web and mobile courses and built some mobile webapps along with Android and iOS equivalents (I haven't tried pushing them up to the app stores yet, but it may be worth tidying them up and doing that). I currently work long enough hours that I find it hard to make time for side projects and to get a decent portfolio together. But when I do work on the web stuff I find it really enjoyable, so I think it's something I'd like to do full time. However, since my experience is pretty much all .NET and financial services, I find it very hard to get my foot in the door anywhere or get past a phone screen unless they're specifically looking for someone with .NET knowledge. What is the best way to move into a web development role without starting from scratch again? I do think a lot of the skills I have translate over, but I seem to just get paired with .NET jobs whenever I look around. Apart from JS, jQuery, HTML5, and Objective-C, are there any other technologies I should be looking into?

    Read the article

  • Principles of Big Data by Jules J. Berman, O’Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It must be part, if it is not yet, of the fundamentals of Big Data as a field of science. I highly recommend it to those who are into Big Data practice. Yet, I confess this book is one of my best reads this year, and for a number of reasons: The book is full of wisdom, intimate insight, historical facts, and real-life examples of how Big Data projects get conceived, operate, and sadly, yes, sometimes die. But not only that: most importantly, the book is filled with valuable advice and an accurate, even overwhelming (in a positive sense) number of references, and the author does not even stop there: there are numerous technical excerpts, links, and examples allowing one to quickly accomplish many daunting tasks, or making you aware of what one needs to perform as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute for referencing all who face Big Data). Be aware that Jules Berman's background is in medicine, so naturally this book discusses that subject a lot, as I believe it is very dear to the author's heart. This does not make the book any less significant, however; quite the opposite. I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let's make cancer history! On a personal note, for me as a database and BI professional, it has helped me better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered in it. I must say this made me very curious and tempted to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go into more depth on the most popular of them. My favorite parts of the book? Well, all of them actually, but especially chapter 9, Analysis; it is just very close to my heart, and the real reason is that it let me see what I do with data from a different angle. And then the next one, "Special Considerations"; they are just two logical parts. The language of this book is very accessible for all levels; I had no technical problem reading it in ebook format on my 8” tablet or a large-screen monitor. If I were asked to say at least something negative, I would have to state that I initially had a feeling the book's first part reads like academic material, though it relaxes the reader as the book progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools - bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which, it seems, is becoming a de facto standard item of knowledge for any modern human being. So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative; by the way, this book also helps you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • low-cost RAID NAS for home use?

    - by gravyface
    I have a noisy, power-hungry Pentium 4 based Ubuntu server that I want to replace with a nice, low-power mini-ITX/Intel Atom-based machine to do my network services (DHCP, DNS, IPSec, web/mail, FTP, etc.), and I am thinking of a (hopefully) equally low-powered NAS using NFS over GbE with at least 1 TB of space and a RAID 5 (preferred) or RAID 1 (likely) configuration for redundancy, with a couple of spare disks I can swap in as needed down the road. Would I be better off getting a full-sized ATX mobo/case and configuring the RAID internally? I really want to keep power consumption down as much as possible, as I leave my home server up 24/7.

    Read the article

  • Reclaim Vertical UI Space by Adding a Toolbar to the Left or Right Side of Firefox

    - by Asian Angel
    Do you need to make the most efficient use possible of vertical UI space on your system's screen, but have horizontal space to spare? Now you can shift the toolbar icons and their awesome functionality to a slim sidebar in Firefox using the Vertical Toolbar extension. The sidebar even picks up on your Personas theme to blend in nicely with the rest of the browser. You can access the options for the new toolbar by right-clicking within the toolbar area. Among the options for the toolbar: you can choose the side of Firefox that works best for toolbar placement, adjust display, hiding, and animation settings, define how the buttons display, and add or remove buttons as desired. Once you open the Customize Toolbar window, make any desired additions or removals just like you would before on the top UI section, and close it when finished. Note: Works with Firefox 4.0b7pre – 4.0.* Vertical Toolbar [Mozilla Add-ons]

    Read the article

  • Linux software raid fails to include one device for one RAID1 array

    - by user1389890
    One of my four Linux software RAID arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64. Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows:
    mdadm: re-added /dev/sdc1
    After adding the device and allowing the array time to resync, this is what the output of $ cat /proc/mdstat looks like:
    Personalities : [raid1]
    md3 : active raid1 sda4[0] sdb4[1]
          244186840 blocks super 1.2 [2/2] [UU]
    md2 : active raid1 sdc1[0] sdd1[1]
          732574464 blocks [2/2] [UU]
    md1 : active raid1 sda3[0] sdb3[1]
          722804416 blocks [2/2] [UU]
    md0 : active raid1 sda1[0] sdb1[1]
          6835520 blocks [2/2] [UU]
    unused devices: <none>
    Then after I reboot, this is what the output of $ cat /proc/mdstat looks like:
    Personalities : [raid1]
    md3 : active raid1 sda4[0] sdb4[1]
          244186840 blocks super 1.2 [2/2] [UU]
    md2 : active raid1 sdd1[1]
          732574464 blocks [2/1] [_U]
    md1 : active raid1 sda3[0] sdb3[1]
          722804416 blocks [2/2] [UU]
    md0 : active raid1 sda1[0] sdb1[1]
          6835520 blocks [2/2] [UU]
    unused devices: <none>
    During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm :
    Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2
    Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1
    Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.588662] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.588664] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.601990] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.601991] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.602693] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.602695] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.605981] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.605983] mdadm: sending ioctl 1261 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.606138] mdadm: sending ioctl 800c0910 to a partition!
    Jun 22 19:10:23 rook kernel: [ 9.606139] mdadm: sending ioctl 800c0910 to a partition!
    Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2
    Here is the mdadm.conf file:
    ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b
    ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110
    ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0
    ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792
    I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?
    Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync:
    /dev/sdc1:
    Magic : a92b4efc
    Version : 0.90.00
    UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
    Creation Time : Sun Jul 13 08:05:55 2008
    Raid Level : raid1
    Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
    Array Size : 732574464 (698.64 GiB 750.16 GB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 2
    Update Time : Mon Jun 24 07:42:49 2013
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    Checksum : 5fd6cc13 - correct
    Events : 180998
    Number Major Minor RaidDevice State
    this 0 8 33 0 active sync /dev/sdc1
    0 0 8 33 0 active sync /dev/sdc1
    1 1 8 49 1 active sync /dev/sdd1
    Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync:
    /dev/md2:
    Version : 0.90
    Creation Time : Sun Jul 13 08:05:55 2008
    Raid Level : raid1
    Array Size : 732574464 (698.64 GiB 750.16 GB)
    Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Mon Jun 24 07:42:49 2013
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0
    UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
    Events : 0.180998
    Number Major Minor RaidDevice State
    0 8 33 0 active sync /dev/sdc1
    1 8 49 1 active sync /dev/sdd1

    Read the article

  • Content Catalog for Oracle OpenWorld is Ready

    - by Rick Ramsey
    American Major League Baseball umpire Jim Joyce made one of the worst calls in baseball history when he ruled Jason Donald safe at first in Wednesday's game between the Detroit Tigers and the Cleveland Indians. The New York Times tells the story well. It was the 9th inning. There were two outs. And Detroit Tigers pitcher Armando Galarraga had pitched a perfect game. Instead of becoming the 21st pitcher in Major League Baseball history to pitch a perfect game, Galarraga became the 10th pitcher in Major League Baseball history to lose a perfect game with two outs in the ninth inning. More insight from the New York Times here. You can avoid a similar mistake, and its attendant death threats, hate mail, and self-loathing, by studying the Content Catalog just released for the Oracle OpenWorld, JavaOne, and Oracle Develop conferences being held in San Francisco September 19-23. The Content Catalog displays all the available content related to the event, the venue, and the stream or track you're interested in. Additional filters are available to narrow down your results even more. It's simple to use and a big help. Give it a try. It'll spare you the fate of Jim Joyce. - Rick

    Read the article

  • CentOS 5.5 APIC issue on ESX 4.1 & ML115

    - by Adnan
    Hi, I've just installed vSphere 4.1 on an HP ProLiant ML115 G5 quad-core and am trying to install CentOS 5.5 as a guest system. However, when the guest boots up I get a calibrate_APIC_clock warning and a kernel panic message. I've come across this knowledge base article on the VMware website which suggests moving the guest onto another Intel-based host (!). Funnily enough, I don't have a collection of spare host servers sitting around, so can anyone suggest another solution? Alternatively, would installing an earlier version of CentOS get around this issue, or would a yum update put me back to square one? How about BIOS settings - could anything be tweaked there? Thanks.

    Read the article

  • Innovative SPARC: Lighting a Fire Under Oracle's New Hardware Business

    - by Paulo Folgado
    "There's a certain level of things you can do with commercially available parts," says Oracle Executive Vice President Mike Splain. But, he notes, you can do so much more if you design the parts yourself. Mike Splain,EVP, OracleYou can, for example, design cryptographic accelerators into your microprocessors so customers can run their networks fully encrypted if they choose.Of course, it helps if you've already built multiple processing "cores" into those chips so they can handle all that encrypting and decrypting while still getting their other work done.System on a ChipAs the leader of Oracle Microelectronics, Mike knows how implementing clever innovations in silicon can give systems a real competitive advantage.The SPARC microprocessors that his team designed at Sun pioneered the concept of multiple cores several years ago, and the UltraSPARC T2 processor--the industry's first "system on a chip"--packs up to eight cores per chip, each running as many as eight threads at once. That's the most cores and threads of any general-purpose processor. Looking back, Mike points out that the real value of large enterprise-class servers was their ability to run a lot of very large applications in parallel."The beauty of our CMT [chip multi-threading] machines is you can get that same kind of parallel-processing capability at a much lower cost and in a much smaller footprint," he says.The Whole StackWhat has Mike excited these days is that suddenly the opportunity to innovate is much bigger as part of Oracle."In my group, we used to look up the software stack and say, 'We can do any innovation we want, provided the only thing we have to change is what's in the Solaris operating system'--or maybe Java," he says. "If we wanted to change things beyond that, we'd have to go outside the walls of Sun and we'd have to convince the vendors: 'You have to align with us, you have to test with us, you have to build for us, and then you'll reap the benefits.' Now we get access to the entire stack. We can look all the way through the stack and say, 'Okay, what would make the database go faster? What would make the middleware go faster?'"Changing the WorldMike and his microelectronics team also like the fact that Oracle is not just any software company. We're #1 in database, middleware, business intelligence, and more."We're like all the other engineers from Sun; we believe we can change the world, if we can just figure out how to get people to pay attention to us," he says. "Now there's a mechanism at Oracle--much more so than we ever had at Sun."He notes, too, that every innovation in SPARC has involved some combination of hardware and softwareoptimization."Take our cryptography framework, for example. Sure, we can accelerate rapidly, but the Solaris OS has to provide the right set of interfaces that applications can tap into," Mike says. "Same thing with our multicore architecture. We have to have software that can utilize all those threads and run in parallel." His engineers, he points out, have never been interested in producing chips that sell as mere components."Our chips are always designed to go into systems and be combined with various pieces of software," he says. "Our job is to enable the creation of systems."

    Read the article

  • Upon reboot, Linux software raid fails to include one device of a RAID1 array

    - by user1389890
    One of my four Linux software raid arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64 (Debian Squeeze). Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows:

        mdadm: re-added /dev/sdc1

    After adding the device and allowing the array time to resynch, this is what the output of $ cat /proc/mdstat looks like:

        Personalities : [raid1]
        md3 : active raid1 sda4[0] sdb4[1]
              244186840 blocks super 1.2 [2/2] [UU]
        md2 : active raid1 sdc1[0] sdd1[1]
              732574464 blocks [2/2] [UU]
        md1 : active raid1 sda3[0] sdb3[1]
              722804416 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdb1[1]
              6835520 blocks [2/2] [UU]
        unused devices: <none>

    Then after I reboot, this is what the output of $ cat /proc/mdstat looks like:

        Personalities : [raid1]
        md3 : active raid1 sda4[0] sdb4[1]
              244186840 blocks super 1.2 [2/2] [UU]
        md2 : active raid1 sdd1[1]
              732574464 blocks [2/1] [_U]
        md1 : active raid1 sda3[0] sdb3[1]
              722804416 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdb1[1]
              6835520 blocks [2/2] [UU]
        unused devices: <none>

    During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm:

        Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2
        Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1
        Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.588662] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.588664] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.601990] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.601991] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.602693] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.602695] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.605981] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.605983] mdadm: sending ioctl 1261 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.606138] mdadm: sending ioctl 800c0910 to a partition!
        Jun 22 19:10:23 rook kernel: [    9.606139] mdadm: sending ioctl 800c0910 to a partition!
        Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2

    Here is the result of $ cat /etc/mdadm/mdadm.conf:

        ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b
        ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110
        ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0
        ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792

    Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync:

        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
          Creation Time : Sun Jul 13 08:05:55 2008
             Raid Level : raid1
          Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
             Array Size : 732574464 (698.64 GiB 750.16 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Update Time : Mon Jun 24 07:42:49 2013
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : 5fd6cc13 - correct
                 Events : 180998

              Number   Major   Minor   RaidDevice State
        this     0       8       33        0      active sync   /dev/sdc1

           0     0       8       33        0      active sync   /dev/sdc1
           1     1       8       49        1      active sync   /dev/sdd1

    Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync:

        /dev/md2:
                Version : 0.90
          Creation Time : Sun Jul 13 08:05:55 2008
             Raid Level : raid1
             Array Size : 732574464 (698.64 GiB 750.16 GB)
          Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Mon Jun 24 07:42:49 2013
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                   UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
                 Events : 0.180998

            Number   Major   Minor   RaidDevice State
               0       8       33        0      active sync   /dev/sdc1
               1       8       49        1      active sync   /dev/sdd1

    I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?
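
    Not an answer from the thread, but a hedged sketch of the consistency checks usually suggested for this symptom on Debian: confirm that the UUID in the on-disk superblocks matches the mdadm.conf entry, then rebuild the initramfs so the copy of mdadm.conf used at boot time is current (the commands below are standard Debian tooling; whether they cure this particular array is an assumption):

        # UUIDs as recorded on the member disks
        sudo mdadm --examine --scan
        # UUIDs that boot-time assembly will use
        grep ^ARRAY /etc/mdadm/mdadm.conf
        # If they agree, refresh the initramfs so its embedded config matches
        sudo update-initramfs -u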

    Read the article

  • Dedicate a NIC to a Virtualbox VM

    - by John Gardeniers
    On a machine with multiple NICs, running either Windows or Linux, is it possible to dedicate a NIC to a VM such that the host won't even try to use it for itself? I suspect it isn't even possible but if it is, which OS and version and just how would I set it up? The reason for this, apart from academic curiosity, is that I'm trying to set up a network lab for testing purposes. I currently have only a single spare machine, otherwise this wouldn't be an issue. One of the VMs will be the firewall for this lab network, so will need a dedicated NIC for the WAN interface. Neither ESXi nor Xen server will run on the machine, so I have to use a host OS.
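
    For what it's worth, VirtualBox can at least pin a VM NIC to a specific host adapter with bridged networking; if you also leave that adapter unconfigured on the host (no IP address), the host has no practical way to use it, which approximates dedication. A sketch assuming a Linux host, a VM named "fw", and a spare adapter eth1 (both names hypothetical):

        # Strip any host IP from the spare adapter and bring it up bare
        sudo ip addr flush dev eth1
        sudo ip link set eth1 up
        # Bridge the VM's second NIC straight onto that adapter
        VBoxManage modifyvm "fw" --nic2 bridged --bridgeadapter2 eth1

    Strictly speaking the host still owns the device, so this is dedication by convention rather than true isolation.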

    Read the article

  • Frederick .NET User Group June 2010 Meeting

    - by John Blumenauer
    FredNUG is pleased to announce our June speaker will be Pete Brown.  Pete was one of FredNUG's first speakers when the group started and we're very happy to have him visiting us again to present on Silverlight!  On June 15th @ 6:30 PM, we'll start with a Visual Studio 2010 Launch with pizza, swag and a presentation about what makes Visual Studio 2010 great.  Then, starting at 7 PM, Pete Brown will present "What's New in Silverlight 4."  It looks like an evening filled with newness!   The scheduled agenda is:   6:30 PM - 7:15 PM – Visual Studio 2010 Launch Event plus Pizza/Social Networking/Announcements 7:15 PM - 8:30 PM - Main Topic: What's New in Silverlight 4 with Pete Brown  Main Topic:  What's New in Silverlight 4? Speaker Bio: Pete Brown is a Senior Program Manager with Microsoft on the developer community team led by Scott Hanselman, as well as a former Microsoft Silverlight MVP, INETA speaker, and RIA Architect for Applied Information Sciences, where he worked for over 13 years. Pete's focus at Microsoft is the community around client application development (WPF, Silverlight, Windows Phone, Surface, Windows Forms, C++, Native Windows API and more). From his first sprite graphics and custom character sets on the Commodore 64 to 3d modeling and design through to Silverlight, Surface, XNA, and WPF, Pete has always had a deep interest in programming, design, and user experience. His involvement in Silverlight goes back to the Silverlight 1.1 alpha application that he co-wrote and put into production in July 2007. Pete has been programming for fun since 1984, and professionally since 1992. In his spare time, Pete enjoys programming, blogging, designing and building his own woodworking projects and raising his two children with his wife in the suburbs of Maryland. Pete's site and blog are at 10rem.net, and you can follow him on Twitter at http://twitter.com/pete_brown Twitter: http://twitter.com/pete_brown Facebook: http://www.facebook.com/pmbrown Pete is a founding member of the CapArea .NET Silverlight SIG. (Visit the CapArea .NET Silverlight SIG here)    8:30 PM - 8:45 PM – RAFFLE! Please join us and get involved in our .NET developers community!

    Read the article

  • WebLogic history an interview with Laurie Pitman by Qualogy

    - by JuergenKress
    In all the years that I have been working with WebLogic, the BEA and Oracle eras are the best known, with WebLogic evolving into a worldwide enterprise platform for Java applications used by multinationals around the globe. But how did it all begin? Besides the spare info you find on some Internet pages, I was eager to hear it in person from one of the founders of WebLogic back in 1995, before the BEA era, Laurie Pitman. Four young people, Carl Resnikoff, Paul Ambrose, Bob Pasker, and Laurie Pitman, became friends and colleagues around the time of the first release of Java in 1995. Between the four of them, they had an MA in American history, an MA in piano, an MS in library systems, a BS in chemistry, and a BS in computer science. They had come together kind of serendipitously, interested in building some web tools exclusively in Java for the emerging Internet web application market. They found many things to like about each other, some overlap in their interests, but also a lot of well-placed differences which made a partnership particularly interesting. They made it formal in January 1996 by incorporating. Read the complete article here. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic history,Qualogy,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • How should I set up my PC partitions?

    - by acidzombie24
    I want to clear out my PC and set up the partitions. Right now I have it as XP, Win-7, Vista, XP/Test/Spare. I notice my PC is pretty good at virtualization, at least at virtualizing Linux. I also rarely boot out of my primary XP, although I do find myself developing on Windows 7 once in a while. So I figure I can have it as XP, Windows 7, a data partition, then... what? I still have one more slot. There may be a more useful way to do this, so what do you guys think? My bro has a 2GB partition that is used to restore the OS and can be run during the bootup process. However, I don't think I can do that with mine. So, what are your thoughts?

    Read the article

  • How do I copy/clone a dynamic disk in Windows 7?

    - by PP
    I have some dynamic disks (or "partitions" but they are not really partitions) that I want to copy onto spare hard drives. I tried using gparted (and fdisk for that matter) from a Linux live disc. All they saw were hard drives with only one partition encasing the whole hard drive. So gparted/fdisk are incapable of identifying the dynamic "partitions" and allowing me to copy them. Are there any tools that can be used to clone/copy a dynamic "partition"? (I'm open to commercial software suggestions if they can do the job.)
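
    One brute-force approach worth mentioning (an assumption, not something tested in this question): the LDM metadata that defines dynamic volumes lives at the disk level, so cloning the entire raw disk, rather than the individual "partitions", carries the dynamic volumes along with it. A sketch from a Linux live disc, assuming /dev/sdX is the source and /dev/sdY is a same-size-or-larger target (hypothetical device names):

        # WARNING: destroys everything on /dev/sdY; triple-check the device names
        sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync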

    Read the article

  • Build NAS for Windows and Linux network

    - by modernzombie
    I have a spare PC and I would like to set up a NAS that is accessible from Windows and Linux. I would like to avoid using Windows as an OS and would like something like Ubuntu or FreeNAS. My only concern is I don't want to have to install special software on each client. Is there a way to use Ubuntu or FreeNAS and have Windows machines access the files without installing a client on each Windows box? UPDATE: Thanks for the answers. I wish I could choose more than one. I will give FreeNAS a try and see how it goes. Thanks!
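
    Samba is the piece that makes this work with no client-side software: Windows speaks SMB natively, so an Ubuntu box exporting a Samba share looks like any other Windows file server. A minimal sketch (share name, path, and user are hypothetical):

        # Install Samba and append a simple share definition (Ubuntu)
        sudo apt-get install samba
        sudo tee -a /etc/samba/smb.conf <<'EOF'
        [nas]
           path = /srv/nas
           read only = no
           browseable = yes
        EOF
        # Give an existing system user an SMB password, then restart the daemon
        sudo smbpasswd -a youruser
        sudo /etc/init.d/samba restart   # "service smbd restart" on newer releases

    Linux clients can then mount the same share over CIFS, so both platforms see one set of files.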

    Read the article

  • Linksys RV016 redundant link config

    - by Adeodatus
    Hi all, I have been given an RV016 to play with. I'm multihomed and I'd like to set it up so that my primary, highest-bandwidth link receives all traffic and the other connection is basically a hot spare. I want them both online, but only the primary one used until it goes down; then all traffic should automatically fail over to the secondary link. Those of you who have played with an RV016: can I do this, and if so, how? I imagine I'd have it act as a router and pad the route metric on one so that the other is never used unless the primary is down. How? Thanks all.

    Read the article

  • Old HP computer powers off after seconds, hard drive still spins. Power supply? Motherboard?

    - by Chase
    Someone called me saying their computer keeps shutting off. I brought it home to troubleshoot. It's an HP Pavilion 513c. The computer powers on normally but almost immediately powers off. Sometimes it manages to make it to a fully loaded desktop, and other times it powers off during the boot process. The hard drive still spins, but there is no video and the CPU fans quit spinning. The power has to be unplugged before you can turn it back on again. I'm thinking it's either a motherboard or power supply issue. I don't have a spare of either component lying around to test with. Any guess on which part is more likely to be the culprit? Thanks.

    Read the article

  • Release Management as Orchestra

    - by ericajanine
    I read an excellent, concise article (http://www.buildmeister.com/articles/software_release_management_best_practices) on the basics of release management practices. In the article, it states "Release Management is often likened to the conductor of an orchestra, with the individual changes to be implemented the various instruments within it." I played in music ensembles for years, so this is especially close to my heart as an example. I learned most of my discipline from hours and hours of practice at the hand of a very skilled conductor and leader. I also learned that the true magic in symphonic performance comes when everyone involved is focused on one sound, one goal. In turn, that solid focus creates a sound and experience bigger than what mechanics alone accomplish. In symphony, a conductor's true purpose is to make you, a performer, better so the overall sound and end product are better. The big picture (the performance of the composition) is the end-game, and all musicians in the orchestra know without question that their part makes up an important but incomplete piece of that performance. A good conductor works with each section (e.g. group) to ensure their individual pieces are solid. Let's restate: the conductor leads and is responsible for ensuring those pieces are solid. While the performers themselves are doing the work, the conductor is the final authority on whether the pieces are ready or not. If not, the conductor initiates the efforts to get them ready or makes the decision to scrap their parts altogether for the sake of the overall performance. Let it sink in, because it's clear--it is not the performer's call if they play their part as agreed, it's the conductor's final call to allow it. In comparison, if a software release manager is a conductor, the only way for that manager to be effective is to drive the overarching process and execution of the individual pieces of a software development lifecycle. It does not mean the release manager performs each and every piece; it means the release manager has oversight and influence because the end-game is a successful software enhancement in a usable environment. It means the release manager, not the developer or development manager, has the final call if something goes into a software release. Of course, this is not a process of autocracy or dictation of absolute rule, it's a cooperative effort. But the release manager must have the final authority to decide if something is ready to be added to the bigger piece, the overall symphony of software changes being considered for package and release. It also goes without saying that a release manager, like a conductor, must have full autonomy and isolation from other software groups. A conductor is the one on the podium waving a little stick at each section and cueing them for their parts, not yelling from the back of the room while also playing a tuba and taking direction from the horn section. I have personally seen release managers relegated to being considered little more than coordinators, red-tapers meant to "satisfy" the demands of an audit group, without anyone bothering to actually respect all that a release manager gives a group willing to employ them fully. In this dysfunctional scenario, development managers, project managers, business users, and other stakeholders have been given nearly full clearance to demand and push their agendas forward, causing a tail-wagging-the-dog scenario where an inherent conflict will ensue.
    Depending on the release manager's strength, determination to keep the peace, and willingness to overlook a built-in expectation that is wrong, he or she must face the crafted conflict head-on and defuse it as quickly as possible. Then, the release manager must clearly make the case for why a change cannot be released without negative impact to all parties involved. If a political agenda is solely driving a software release, there IS no symphony, there is no "software lifecycle". It's just out-of-tune noise. More importantly, there is no real conductor. Sometimes, just wanting to make a beautiful sound is not enough. If you are a release manager, are you freed up enough to move, to conduct the sections of software creation to ensure a solid release performance is possible? If not, it's time to take stock of what your role actually is and see if that is what you truly want to achieve in your position. If you are, then you can successfully build your career and that of the people in your groups to create truly beautiful software (music) together.

    Read the article

  • XenServer boot error

    - by Adrian
    I'm trying to get XenServer 5.5 running on a spare computer here. Hardware specs: Intel Q6600, 4GB RAM, and a Gigabyte GA-P35-DS3R motherboard. XenServer itself installs fine onto a 150GB SATA HDD; however, it fails to boot whatsoever, giving this garbled mess: http://img697.imageshack.us/img697/9918/biosi.jpg It's not frozen, because if I press Enter it just prints a different garble, and it also says "could not find kernel image". The strangest thing is, if I put that HDD in my desktop and assign it to a VMware desktop VM (under the ESX profile, no less) it boots perfectly... leading me to believe there are no problems with the install or the HDD itself. From what I can tell, the error seems to be occurring completely separately from XenServer, in the bootloader (extlinux?). If there were a motherboard compatibility issue, I would think it would also have manifested during installation, and the fact that the problem seems to be with booting into Xen makes me doubt this. Any ideas, guys? (I'm using Xen because it can do PCI passthrough without VT-d.)
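
    Since the installed system boots fine under VMware, the bootloader stage on this particular hardware does look like the suspect. A hedged sketch of reinstalling extlinux from a rescue shell (device names and the mbr.bin path vary by distro and syslinux version; treat all of it as assumptions, not a verified fix):

        # Mount the XenServer root partition from a rescue environment
        sudo mount /dev/sda1 /mnt
        # Reinstall the extlinux boot code into the directory holding extlinux.conf
        sudo extlinux --install /mnt/boot
        # Rewrite the MBR boot code (mbr.bin location varies by syslinux package)
        sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sda bs=440 count=1
        sudo umount /mnt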

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >