Search Results

Search found 54118 results on 2165 pages for 'default value'.

Page 593/2165

  • Stairway to XML: Level 3 - Working with Typed XML

    You can enforce the validation of an XML data type, variable, or column by associating it with an XML Schema Collection. SQL Server validates a typed XML value against the rules defined in the schema collection, so that INSERT or UPDATE operations will succeed only if the value being inserted or updated is valid per the rules defined in the schema collection.

    Read the article

  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that a tunnel adapter encapsulates IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this. If I do 'ipconfig', apart from the Ethernet adapter LAN details, I get a series of statements like these:

    Tunnel adapter Local Area Connection* 6
    Tunnel adapter Local Area Connection* 7
    Tunnel adapter Local Area Connection* 12
    Tunnel adapter Local Area Connection* 13
    Tunnel adapter Local Area Connection* 14
    Tunnel adapter Local Area Connection* 15
    Tunnel adapter Local Area Connection* 16

    Except for *16, all the other tunnel adapter Local Area Connections show Media Disconnected. Why is the numbering for the tunnel adapters not sequential? It goes 6, 7, 12, 13, 14, 15, 16 - a strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit; there is a huge gap between 7 and 12. Any ideas? What is the need for so many tunnel adapter connections? Can you tell me a scenario that requires all of them? I did ipconfig /all to get more information. From the listing, I understand that 16, 15, 14, and 12 are Microsoft 6to4 adapters; 13 and 6 are ISATAP adapters; and 7 is the Teredo Tunneling Pseudo-Interface. I understand that the above are for automatic tunneling, so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC 3056 for automatic tunneling that uses protocol 41 for encapsulation. It is typically used when an end user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address; it is used to transmit IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunnelling mechanisms. It seems that Teredo alone should be enabled by default in Vista, but my system does not show it as enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter Local Area Connection* 16) to be enabled by default. Any specific reasons for that? If I do ipconfig /all, why is only one Teredo adapter present while four 6to4 adapters are present? I searched the internet for answers to these queries, but I am unable to find clear answers.

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you would have noticed that the generic collection classes all have implementations of their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea; there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code:

    sealed class EnumeratorWrapper : IEnumerator<int>
    {
        private readonly LinkedList<int>.Enumerator m_Enumerator;

        public EnumeratorWrapper(LinkedList<int> linkedList)
        {
            m_Enumerator = linkedList.GetEnumerator();
        }

        public int Current
        {
            get { return m_Enumerator.Current; }
        }

        object System.Collections.IEnumerator.Current
        {
            get { return Current; }
        }

        public bool MoveNext()
        {
            return m_Enumerator.MoveNext();
        }

        public void Reset()
        {
            ((System.Collections.IEnumerator)m_Enumerator).Reset();
        }

        public void Dispose()
        {
            m_Enumerator.Dispose();
        }
    }

    The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop - the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on. The MoveNext method above results in the following IL:

    .method public hidebysig newslot virtual final instance bool MoveNext() cil managed
    {
        .maxstack 1
        .locals init (
            [0] bool CS$1$0000,
            [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001)
        L_0000: nop
        L_0001: ldarg.0
        L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator
        L_0007: stloc.1
        L_0008: ldloca.s CS$0$0001
        L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext()
        L_000f: stloc.0
        L_0010: br.s L_0012
        L_0012: ldloc.0
        L_0013: ret
    }

    Here, the important line is 0002 - m_Enumerator is accessed using the ldfld operator, which does the following: "Finds the value of a field in the object whose reference is currently on the evaluation stack." So, what the MoveNext method is doing is the following:

    public bool MoveNext()
    {
        LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator;
        bool CS$1$0000 = CS$0$0001.MoveNext();
        return CS$1$0000;
    }

    The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. That is why the state of m_Enumerator wasn't getting updated. Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: "...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E."
    And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this:

    .method public hidebysig newslot virtual final instance bool MoveNext() cil managed
    {
        .maxstack 1
        .locals init (
            [0] bool CS$1$0000)
        L_0000: nop
        L_0001: ldarg.0
        L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator
        L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext()
        L_000c: stloc.0
        L_000d: br.s L_000f
        L_000f: ldloc.0
        L_0010: ret
    }

    Notice on line 0002, instead of the ldfld we had before, we've got a ldflda, which does this: "Finds the address of a field in the object whose reference is currently on the evaluation stack." Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected. Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (Interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!) I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit... Note to self: never ever ever code a mutable struct.
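
    To see the effect in isolation, here is a minimal self-contained sketch (hypothetical types, not from the BCL) of the same readonly-field copy semantics:

    using System;

    struct Counter
    {
        private int n;
        public int Next() { return ++n; }   // mutates the struct's own state
    }

    class Holder
    {
        // readonly: every read outside the constructor yields a copy of the struct
        private readonly Counter c = new Counter();

        public int Tick() { return c.Next(); }  // increments the copy; c itself never changes
    }

    class Program
    {
        static void Main()
        {
            var h = new Holder();
            Console.WriteLine(h.Tick());  // 1
            Console.WriteLine(h.Tick());  // 1 again - the readonly field was never mutated
        }
    }

    Drop the readonly and Tick() returns 1, 2, 3, ... because the method call then operates on the field's address (ldflda) instead of a stack copy.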

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy with looking at the first few rows. But occasionally you need to see more. SQL Developer doesn’t show you all records, all at once. Instead, it brings the records down in ‘chunks,’ or as-needed.
    How It Works
    There is a preference that tells SQL Developer how many records to get in a single request, or ‘fetch,’ of records. The default is 50… So if I run a query that returns MORE than 50 rows, there are more than 50 records in the result set, but we have 50 in the grid to start with. We don’t actually know how many records are in this result set. To show the record count here, we would have to physically query the database with a row-count type query. All we know is that the query has finished executing and that there are rows available to go fetch. It tells us when it’s done. As you scroll through the grid, if you get to record 50 and scroll more, we’ll fetch 50 more records. Or, you can cheat to get to the ‘bottom’ of the result set: you can ask SQL Developer to just get all the records at once. Once all the records have been fetched, you’ll see this: All rows fetched!
    A word of caution
    There’s a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We’ve found the best performance in the 50 to 200 record range.

    Read the article

  • How to handle lookup data in a C# ASP.Net MVC4 application?

    - by Jim
    I am writing an MVC4 application to track documents we have on file for our clients. I'm using code first, and have created models for my objects (Company, Document, etc...). I am now faced with the topic of document expiration. Business logic dictates certain documents will expire a set number of days past the document date. For example, Document A might expire in 180 days, Document B in 365 days, etc... I have a class for my documents as shown below (simplified for this example). What is the best way for me to create a lookup for expiration values? I want to specify that documents of type DocumentA expire in 30 days, type DocumentB expire in 75 days, etc... I can think of a few ways to do this:
    A lookup table in the database I can query
    A new property in my class (DaysValidFor) which has a custom getter that returns different values based on the DocumentType
    A method that takes in the document type and returns the number of days
    and I'm sure there are other ways I'm not even thinking of. My main concern is a) not violating any best practices and b) maintainability. Are there any pros/cons I need to be aware of for the above options, or is this a case of "just pick one and run with it"? One last thought: right now the number of days is a value that does not need to be stored anywhere on a per-document basis -- however, it is possible that business logic will change this (i.e., DocumentAs are 30 days expiration by default, but this DocumentA associated with Company XYZ will be 60 days because we like them). In that case, is a property in the Document class the best way to go, seeing as I need to add that field to the DB?

    namespace Models
    {
        // Types of documents to track
        public enum DocumentType
        {
            DocumentA,
            DocumentB,
            DocumentC
            // etc...
        }

        // Document model
        public class Document
        {
            public int DocumentID { get; set; }

            // Foreign key to companies
            public int CompanyID { get; set; }

            public DocumentType DocumentType { get; set; }

            // Helper to translate the enum's value to an integer for DB storage
            [Column("DocumentType")]
            public int DocumentTypeInt
            {
                get { return (int)this.DocumentType; }
                set { this.DocumentType = (DocumentType)value; }
            }

            [DataType(DataType.Date)]
            [DisplayFormat(DataFormatString = "{0:MM-dd-yyyy}", ApplyFormatInEditMode = true)]
            public DateTime DocumentDate { get; set; }

            // Navigation properties
            public virtual Company Company { get; set; }
        }
    }
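
    For illustration, the third option could be as simple as a static mapping kept in one place (the day counts below are hypothetical, and this assumes the Document and DocumentType types above):

    using System;
    using System.Collections.Generic;

    public static class DocumentExpiration
    {
        // Hypothetical day counts - the real values come from the business rules.
        private static readonly Dictionary<DocumentType, int> DaysValid =
            new Dictionary<DocumentType, int>
            {
                { DocumentType.DocumentA, 30 },
                { DocumentType.DocumentB, 75 },
                { DocumentType.DocumentC, 365 }
            };

        public static DateTime GetExpirationDate(Document document)
        {
            return document.DocumentDate.AddDays(DaysValid[document.DocumentType]);
        }
    }

    A lookup table would instead move this dictionary into the database, which matters mostly if the values must change without a redeploy - or, as in the per-company scenario, vary per document.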

    Read the article

  • Some Mac applications crash frequently, with "__THE_SYSTEM_HAS_NO_PORT_SETS_AVAILABLE__" in backtrace

    - by smokris
    Recently I've been finding that some Mac OS X applications are crashing frequently with backtraces like the following:

    Process:         Mail [39226]
    Path:            /Applications/Mail.app/Contents/MacOS/Mail
    Identifier:      com.apple.mail
    Version:         4.4 (1082)
    Build Info:      Mail-10820000~1
    Code Type:       X86-64 (Native)
    Parent Process:  launchd [338]
    Date/Time:       2011-01-12 21:59:48.383 -0500
    OS Version:      Mac OS X 10.6.6 (10J567)
    Report Version:  6
    Exception Type:  EXC_BREAKPOINT (SIGTRAP)
    Exception Codes: 0x0000000000000002, 0x0000000000000000
    Crashed Thread:  12  Dispatch queue: com.apple.root.default-priority
    [...]
    Thread 12 Crashed: Dispatch queue: com.apple.root.default-priority
    0   com.apple.CoreFoundation    0x00007fff81653975 __THE_SYSTEM_HAS_NO_PORT_SETS_AVAILABLE__ + 5
    1   com.apple.CoreFoundation    0x00007fff815a1f69 __CFRunLoopFindMode + 553
    2   com.apple.CoreFoundation    0x00007fff815a1c5d __CFRunLoopCreate + 317
    3   com.apple.CoreFoundation    0x00007fff815a1a78 _CFRunLoopGet0 + 744
    4   com.apple.CoreFoundation    0x00007fff815dc90a CFRunLoopRunInMode + 58
    5   com.apple.MessageFramework  0x00007fff81d01183 +[NSRunLoop(MessageExtensions) _flushQueuedEventsAddingSource:] + 120
    6   com.apple.MessageFramework  0x00007fff81d010d2 +[NSRunLoop(MessageExtensions) flushQueuedEvents] + 36
    7   com.apple.MessageFramework  0x00007fff81ce6515 -[_MFInvocationOperation main] + 275
    8   com.apple.Foundation        0x00007fff83921de4 -[__NSOperationInternal start] + 681
    9   com.apple.Foundation        0x00007fff83a00beb __doStart2 + 97
    10  libSystem.B.dylib           0x00007fff801402c4 _dispatch_call_block_and_release + 15
    11  libSystem.B.dylib           0x00007fff8011e831 _dispatch_worker_thread2 + 239
    12  libSystem.B.dylib           0x00007fff8011e168 _pthread_wqthread + 353
    13  libSystem.B.dylib           0x00007fff8011e005 start_wqthread + 13
    [...]

    Any ideas about what might be causing this and/or how to remedy it?

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn’t a violation of TDD’s principle of leaving out “advanced topics like mocks”. I’d like to respond to his questions with this article. There’s more to say than fits into a comment.

    Mocks and TDD
    I don’t see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process; mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can’t avoid using mocks, because you don’t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That’s not what you usually see in TDD demonstrations. However, there’s a reason for that, as I tried to explain. I don’t use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don’t want to use a tool like JustMock, then you don’t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for “To Roman Numerals”
    gustav asked for the kata “To Roman Numerals”. I won’t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse
    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that’s your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that’s a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I’ll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What’s more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve
    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it’s mixed in with the implementation. But I don’t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They’re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you’d better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that’s limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution. Here’s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here’s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process:
    Find all the factors
    Translate the factors found
    Compile the roman representation
    Translation is just a look-up. Finding, though, needs some calculation:
    Find the highest remaining factor fitting in the value
    Remember and subtract it from the value
    Repeat with remaining value and remaining factors
    Please note: this is just an algorithm. It’s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:
    Test the translation
    Test the compilation
    Test finding the factors
    Testing the translation primarily means to check if the map of factors and digits is comprehensive. That’s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement
    First I write a test for the acceptance test cases. It’s red because there’s no implementation even of the API. That’s in conformance with “TDD lore”, I’d say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That’s what I’m trying now – one by one. I start with a simple “detail function”: Translate(), and I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That’s a white box test. But as you’ll see, it won’t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible - just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It’s a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It’s even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don’t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I’m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That’s left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there’s always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don’t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished - at least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren’t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That’s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflection
    Is this the smallest possible code base for this kata? Surely not. You’ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That’s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution. Those are the leaves of the functional decomposition tree. Where things became fuzzy, because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:
    Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
    I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.
    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution; manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code is thus almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won’t skip them because of that. And there are no shortcuts during implementation because of that.
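
    To make the final structure concrete, here is a minimal sketch of the factor map and the three functions described above (names follow the article’s Find_factors/Translate/Compile convention; the post’s actual code, shown there as screenshots, may differ):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ArabicRomanConversions
    {
        // Roman "digits" (factors) in descending order, as derived in the solution phase.
        private static readonly int[] Factors = { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
        private static readonly string[] Digits = { "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

        public string ToRoman(int arabic)
        {
            return Compile(Translate(Find_factors(arabic)));
        }

        // Find the highest remaining factor fitting in the value, subtract, repeat.
        private static IEnumerable<int> Find_factors(int value)
        {
            foreach (int f in Factors)
                while (value >= f) { yield return f; value -= f; }
        }

        // Translation is just a look-up from factor to roman "digit".
        private static IEnumerable<string> Translate(IEnumerable<int> factors)
        {
            return factors.Select(f => Digits[Array.IndexOf(Factors, f)]);
        }

        // Compilation concatenates the translated "digits".
        private static string Compile(IEnumerable<string> digits)
        {
            return string.Concat(digits);
        }
    }

    For example, ToRoman(1954) finds the factors 1000, 900, 50, 4 and compiles “MCMLIV”.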

    Read the article

  • Getting NFS clients to retry mount if NFS server down when client boots

    - by z0mbix
    I have an NFS server that several clients mount. I am using the following in my /etc/exports on the server:

    /content *(rw,no_root_squash)

    and on the clients in my /etc/fstab I have:

    content.prd.domain.tld:/content /content nfs rw,hard,intr 0 0

    If the clients boot while the NFS server is down, the share does not get mounted. I read in the NFS man page that the retry defaults should handle this:

    retry=n  The number of minutes to retry an NFS mount operation in the foreground or background before giving up. The default value for foreground mounts is 2 minutes. The default value for background mounts is 10000 minutes, which is roughly one week.

    I have tested this, but it doesn't appear to work. Am I missing something? All servers are RHEL 5.4. Cheers, z0mbix

    Read the article

  • Asp.net tips and tricks

    - by ybbest
    Here is a summary of articles I have found very useful over the years while working on ASP.NET:
    TRULY Understanding ViewState: http://weblogs.asp.net/infinitiesloop/archive/2006/08/03/Truly-Understanding-Viewstate.aspx
    TRULY Understanding Dynamic Controls: http://weblogs.asp.net/infinitiesloop/archive/2006/08/25/TRULY-Understanding-Dynamic-Controls-_2800_Part-1_2900_.aspx
    ASP.NET 2.0 – Master Pages: Tips, Tricks, and Traps: http://odetocode.com/articles/450.aspx
    ASP.NET Tip – Use The Label Control Correctly: http://haacked.com/archive/2007/02/15/asp.net_tip_-_use_the_label_control_correctly.aspx
    ASP.NET HTTP handlers:
    http://www.michaelflanakin.com/Articles/NET/NET1x/ImplementingHTTPHandlers/tabid/173/Default.aspx
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;308001
    http://msdn.microsoft.com/en-us/library/ms972974.aspx
    ASP.NET AJAX: http://encosia.com/
    ASP.NET 2.0 Tips, Tricks, Recipes and Gotchas: http://weblogs.asp.net/scottgu/pages/ASP.NET-2.0-Tips_2C00_-Tricks_2C00_-Recipes-and-Gotchas.aspx
    Mastering Page-UserControl Communication: http://www.codeproject.com/KB/user-controls/Page_UserControl.aspx
    Comparing Web Site Projects and Web Application Projects
    Web Deployment Projects
    .NET Radio Show: http://www.dotnetrocks.com/
    Herding Code: http://herdingcode.com/
    Clean Code talk: http://www.objectmentor.com/videos/video_index.html
    .NET Video Show: http://www.dnrtv.com/
    .NET user groups:
    http://chicagoalt.net/home
    http://exposureroom.com/members/RIAViewMirror.aspx/assets/
    FAQ: Why should you remove unnecessary C# using directives?
    http://stackoverflow.com/questions/136278/why-should-you-remove-unnecessary-c-using-directives
    http://stackoverflow.com/questions/2009471/what-is-the-benefit-of-removing-redundant-imports-in-vb-net-or-using-in-c-file
    http://codeclimber.net.nz/archive/2009/12/30/best-of-2009-the-5-most-popular-posts.aspx

    Read the article

  • How to round currency values [migrated]

    - by Kenny
    I already have several ways to solve this, but I am interested in whether there is a better solution to this problem. Please only respond with a pure numeric algorithm; string manipulation is not acceptable. I am looking for an elegant and efficient solution. Given a currency value (e.g. $251.03), split the value into two halves, rounded to two decimal places. The key is that the first half should round up and the second should round down, so the outcome in this scenario should be $125.52 and $125.51.
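
    For illustration, one pure-numeric approach is to work in whole cents and give the odd cent to the first half. A minimal C# sketch (assuming inputs have at most two decimal places):

    using System;

    static class CurrencySplitter
    {
        // Splits an amount into two halves; any odd cent goes to the first half.
        public static (decimal First, decimal Second) SplitHalves(decimal amount)
        {
            long cents = (long)Math.Round(amount * 100m); // work in whole cents
            long first = (cents + 1) / 2;                 // rounds up on an odd cent count
            long second = cents / 2;                      // rounds down
            return (first / 100m, second / 100m);
        }
    }

    // SplitHalves(251.03m) returns (125.52, 125.51); even cent counts split exactly in half.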

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2006
    Find Stored Procedure Related to Table in Database – Search in All Stored Procedure
    In 2006 I wrote a small script which will help users find all the Stored Procedures (SPs) which are related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies.

    2007
    SQL SERVER – Versions, CodeNames, Year of Release
    1993 – SQL Server 4.21 for Windows NT
    1995 – SQL Server 6.0, codenamed SQL95
    1996 – SQL Server 6.5, codenamed Hydra
    1999 – SQL Server 7.0, codenamed Sphinx
    1999 – SQL Server 7.0 OLAP, codenamed Plato
    2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
    2003 – SQL Server 2000 64-bit, codenamed Liberty
    2005 – SQL Server 2005, codenamed Yukon (version 9.0)
    2008 – SQL Server 2008, codenamed Katmai (version 10.0)
    2011 – SQL Server 2012, codenamed Denali (version 11.0)
    Search String in Stored Procedure
    Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view, or any other detail. I have written a script to do the same in SQL Server 2000 and SQL Server 2005. This blog post is worth bookmarking. There is an alternative way to do the same as well; here is the example.

    2008
    SQL SERVER – Refresh Database Using T-SQL
    NO! Some of the questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No!
    SQL SERVER – Delete Backup History – Cleanup Backup History
    SQL Server stores the history of every backup ever taken. The history of all backups is stored in the msdb database. Often the older history is no longer required. The following stored procedure can be executed with a parameter specifying the number of days of history to keep. In the following example 30 is passed, to keep a month of history.

    2009
    Stored Procedure are Compiled on First Run – SP taking Longer to Run First Time
    Are stored procedures pre-compiled? Why does a stored procedure take a long time to run the first time? These are very common questions often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been discussed forever. There is a misconception that stored procedures are pre-compiled. They are not pre-compiled, but compiled only during the first run; every subsequent run reuses that compiled plan. Read the entire article for an example and demonstration.
    Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes
    This is one of the most important performance tuning lessons on my blog. I suggest this weekend you spend time reading the four-part series and let me know what you think about the concepts which I have demonstrated in it. Part 1 | Part 2 | Part 3 | Part 4. Seek Predicate is the operation that describes the b-tree portion of the Seek. Predicate is the operation that describes the additional filter using non-key columns. Based on the description, it is very clear that a Seek Predicate is better than a Predicate, as it searches indexes, whereas in a Predicate the search is on non-key columns – which implies that the search is on the data in the page files itself.
    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers the most spectacular feature of SQL Server – policy-based management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administration assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.

    2010
    Recycle Error Log – Create New Log file without Server Restart
    Once I observed a DBA restarting SQL Server whenever he needed a new error log file. This was both funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file. You can run sp_cycle_errorlog and achieve the same result.
    Get Database Backup History for a Single Database
    A simple but effective script!
    Reducing CXPACKET Wait Stats for High Transactional Database
    The subject is very complex and I have done my best to simplify the concept. In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. Threads which finished first have to wait for the slower thread to finish. The wait by a specific completed thread is called the CXPACKET wait stat.
    Information Related to DATETIME and DATETIME2
    There is quite a lot of confusion about DATETIME and DATETIME2. DATETIME2 is also one of the most underutilized datatypes in SQL Server. In this blog post I have written a follow-up to my earlier datetime series where I clarify a few of the concepts related to datetime.
    Difference Between GETDATE and SYSDATETIME
    Difference Between DATETIME and DATETIME2 – WITH GETDATE
    Difference Between DATETIME and DATETIME2

    2011
    Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic function CUME_DIST(). This function provides the cumulative distribution value. It would be very difficult to explain this in words, so I will attempt a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments.
    Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and last value from the list. It would be very difficult to explain this in words, so I’d like to attempt to explain them through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiment purposes.
    OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    “Don’t you think there is bug in your first example where FIRST_VALUE is remain same but the LAST_VALUE is changing every line. I think the LAST_VALUE should be the highest value in the windows or set of result.”
    Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    You can see that rows 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. Now if these functions worked on the maximum and minimum values, they should have given the answers 77 and 80 respectively, instead of 78 and 77. Also, the value of FIRST_VALUE (78) is greater than the LAST_VALUE (77). Why? Explain in detail.
    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It would be very difficult to explain this in words, so I will attempt a small example to explain the functions. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments.
    A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available
    Our book went out of stock within 48 hours of arriving in stock! We got a call from the online store with a request for more copies within 12 hours. But we had printed only as many as we had sent them. There were no extra copies. We finally talked to the printer to get more copies. However, due to festivals and holidays the copies could not be shipped to the online retailer for two days. We knew for sure that they were going to be out of the book for 48 hours. This is the story of how we overcame that situation!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane

    - by Christopher Karl Chan
    Focus
    So this blog entry will focus on BPM swimlane roles and users from an ADF context. We have an ADF Task Details Form and we are in the process of making it richer and more dynamic in functionality. A common requirement could be to dynamically show different areas based on the user logged into the workspace. Perhaps we even want to know what swimlane role the user belongs to. It is a little bit harder to achieve than one thinks, unless you know the trick.
    The Challenge
    The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace. So if you try to use Java or Expression Language to get the logged-in user, you will only find anonymous and none of the BPM roles you would be expecting. So what to do?
    The Magic
    First add the BC4J Security library to your view project. Then restart JDeveloper. Now find the web.xml file in the view project of your ADF Task Details Application and look for the JpsFilter section. Then add in the following section:

    <init-param>
        <param-name>application.name</param-name>
        <param-value>OracleBPMProcessRolesApp</param-value>
    </init-param>

    This will link your application to that of the BPM workspace. Then in the dynamic part of your ADF form you can check whether the user logged into the BPM Workspace belongs to a BPM swimlane in any BPM process. The best way to do this is by using Expression Language in the JSF page itself. Here I am simply changing the rendered flag to either true or false and thereby hiding or showing a section. Perhaps you are re-using the same form for a task in an approver swimlane and an ordinary user swimlane, so we only want the approver to see this field. So call the built-in function to check if the user is a member of the BPM swimlane role. The name of the role must be of the syntax BPMProject.RoleName:

    <af:outputText value="This will only be rendered when the user is part of the BPM Swimlane Role"
                   rendered="#{securityContext.userInRole['BPMProjectName.Rolename']}"/>

    Now you must redeploy your ADF Task Form project. Now (in the image above) the text will ONLY get rendered in the Task Details Form if the user logged into the workspace is a member of the swimlane Unsecure of the BPM project SimpleTask.

    Read the article

  • Netinstalling CentOS if the gateway is in a different subnet

    - by James Lawrie
    I have a KVM host (A) running a virtual machine (B). They each have their own external IP address and the networking is set up using bridging between eth0 and br0 on A. B uses eth0, with A being the gateway. The problem is that the two external IP addresses are on different subnets (different /8s in fact), so by default B claims it cannot reach A (Network Unreachable). I can resolve this by adding a static route on B:

    echo "any host gateway_ip dev eth0" > /etc/sysconfig/static-routes

    and modifying /etc/init.d/networking to reload the gateway after applying static routes (I only added the final line before fi):

    if [ -f /etc/sysconfig/static-routes ]; then
        grep "^any" /etc/sysconfig/static-routes | while read ignore args ; do
            /sbin/route add -$args
        done
        route add default gw "${GATEWAY}"
    fi

    If I then restart networking, it comes online. How can I do this (or work around it some other way) prior to the system being installed, ideally inside an Anaconda kickstart file?

    Read the article

  • adding noindex on pagination

    - by Damodar Bashyal
    I find conflicting opinions in people's reactions about adding noindex on pagination pages. What do pro webmasters have to say about this? I am planning to add a noindex meta tag for all pagination pages, hoping to increase the website's value, so I would like some pros' feedback on this. E.g. here: http://w3tut.org/blog the first few paragraphs of 3 posts are displayed, and the meta description is taken from the first post on each page, which causes a duplicate-meta issue. Also, the 3 posts on a page could be unrelated to each other. Is it a good idea to add noindex for these pages, so the full article posts get more value?

    Read the article

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    The usage of desktop and laptop computers has increased phenomenally all around the world. This link gives you a picture of the power consumption of various devices we use daily. To reduce power consumption, Hibernate is one of the best options, provided by default in Windows Vista and Windows 7. The Hibernate feature enables you to shut the machine down without closing your applications; the applications will be restored as they were once you restart the machine. The Hibernate feature is not enabled in Windows XP by default. I've seen many people run (never switch off) their machines for months and months because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP:
    1. Right-click on the desktop.
    2. Click on Properties.
    3. Go to the Screen Saver tab.
    4. Click on the Power button.
    5. Select the Hibernate tab.
    6. Check the "Enable hibernation" checkbox.
    7. Apply the settings.
    Now when you try to shut down, the "Shut Down Windows" dialog shows a "Hibernate" option. You can now safely close the machine without closing your applications or windows, as they will be restored once you turn the machine on. Sometimes you might want to provide this feature programmatically in the applications you develop for Windows - generally in Windows applications where a process needs a long time. Download managers are one of the best examples. Below is the code to hibernate from .NET:

    using System.Windows.Forms;

    namespace CodeKicks.WinApp.Machine
    {
        public static class MyMachineHelper
        {
            public static void DoHibernate()
            {
                //Application.SetSuspendState(PowerState.Suspend, true, false);
                Application.SetSuspendState(PowerState.Hibernate, true, false);
            }
        }
    }
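
    As a side note, if you want to guard against machines where hibernation is disabled, one option (a sketch, not from the original article) is to ask Windows first via the native powrprof.dll API before suspending:

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    public static class HibernateHelper
    {
        // Native check for whether hibernation is enabled on this machine.
        [DllImport("powrprof.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.I1)]
        private static extern bool IsPwrHibernateAllowed();

        public static void TryHibernate()
        {
            if (IsPwrHibernateAllowed())
            {
                Application.SetSuspendState(PowerState.Hibernate, true, false);
            }
        }
    }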

    Read the article

  • Chrome Time Track Is a Simple Task Time Tracker

    - by ETC
    If you don’t need advanced project tracking but still want to track how long it takes you to finish certain tasks or projects, Chrome Time Track is a free Chrome extension with a dead simple interface. Install the extension and a Time Track icon appears in your toolbar. Click it to create new tasks, delete old tasks, and start, stop, and reset your task timers. When the timer is running the icon changes to indicate you’re on the clock; pause the timer and it toggles back to the default icon. Hit up the link below to grab a copy. Chrome Time Track is free and works wherever Google Chrome does. Chrome Time Track [Google Chrome Extensions]

    Read the article

  • How do I add and/or keep subtitles when converting video?

    - by JoeSteiger
    I have an mkv video I want to convert to mp4, but every which way I try to convert it (Handbrake, WinFF, ffmpeg, mencoder, ...) I lose the video's subtitles. How can I convert the video, keeping the subtitles, or add a subtitles.srt? I also would like 2-pass encoding with a video bitrate of 4054 and an audio bitrate of 160. Thanks. I was asked for the ffmpeg -i output:

    joe@joe-Leopard-Extreme:/media/Elements/Home Folder/Videos$ ffmpeg -i iron.mkv
    ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
      built on Jun 12 2012 16:52:09 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    [matroska,webm @ 0x1a319a0] Estimating duration from bitrate, this may be inaccurate
    Input #0, matroska,webm, from 'iron.mkv':
      Metadata: title : Iron
      Duration: 02:06:01.67, start: 0.000000, bitrate: 1280 kb/s
        Chapter #0.0: start 0.000000, end 546.170622 (Metadata: title : Chapter 00)
        Chapter #0.1: start 546.170622, end 1080.579489 (Metadata: title : Chapter 01)
        Chapter #0.2: start 1080.579489, end 1609.941667 (Metadata: title : Chapter 02)
        Chapter #0.3: start 1609.941667, end 2101.849733 (Metadata: title : Chapter 03)
        Chapter #0.4: start 2101.849733, end 2595.259333 (Metadata: title : Chapter 04)
        Chapter #0.5: start 2595.259333, end 3158.488667 (Metadata: title : Chapter 05)
        Chapter #0.6: start 3158.488667, end 3564.644400 (Metadata: title : Chapter 06)
        Chapter #0.7: start 3564.644400, end 4052.423356 (Metadata: title : Chapter 07)
        Chapter #0.8: start 4052.423356, end 4304.300000 (Metadata: title : Chapter 08)
        Chapter #0.9: start 4304.300000, end 4711.206489 (Metadata: title : Chapter 09)
        Chapter #0.10: start 4711.206489, end 5080.575489 (Metadata: title : Chapter 10)
        Chapter #0.11: start 5080.575489, end 5700.111067 (Metadata: title : Chapter 11)
        Chapter #0.12: start 5700.111067, end 6269.346400 (Metadata: title : Chapter 12)
        Chapter #0.13: start 6269.346400, end 6811.471333 (Metadata: title : Chapter 13)
        Chapter #0.14: start 6811.471333, end 7561.679000 (Metadata: title : Chapter 14)
        Stream #0.0(eng): Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc
        Stream #0.1(eng): Audio: ac3, 48000 Hz, 5.1, s16, 640 kb/s (default) (Metadata: title : 3/2+1)
        Stream #0.2(ita): Audio: ac3, 48000 Hz, 5.1, s16, 640 kb/s (Metadata: title : 3/2+1)
        Stream #0.3(eng): Subtitle: pgssub (default)
        Stream #0.4(eng): Subtitle: pgssub
        Stream #0.5(eng): Subtitle: pgssub
        Stream #0.6(eng): Subtitle: pgssub
    At least one output file must be specified
    joe@joe-Leopard-Extreme:/media/Elements/Home Folder/Videos

    Read the article

  • Linux to Solaris @ Morgan Stanley

    - by mgerdts
    I came across this blog entry and the accompanying presentation by Robert Milkoski about his experience switching from Linux to Oracle Solaris 11 for a distributed OpenAFS file-serving environment at Morgan Stanley.
    If you are an IT manager, the presentation will show you:
    Running Solaris with a support contract can cost less than running Linux (even without a support contract) because of technical advantages of Solaris.
    IT departments can benefit from hiring computer scientists into Systems Programmer or similar roles. Their computer science background should be nurtured so that they can continue to deliver value (savings and opportunity) to the business as technology advances.
    If you are a sysadmin, developer, or somewhere in between, the presentation will show you:
    A presentation that explains your technical analysis can be very influential.
    Learning and using the non-default options of an OS can make all the difference as to whether one OS is better suited than another. For example, see the graphs on slides 3-5. The ZFS default is to not use compression.
    When trying to convince those that hold the purse strings that your technical direction should be taken, the financial impact can be the part that closes the deal. See slides 6, 9, and 10. Sometimes reducing rack space requirements can be the biggest impact because it may stave off or completely eliminate the need for facilities growth.
    DTrace can be used to shine light on performance problems that may be suspected but not diagnosed. It is quite likely that these problems have existed in OpenAFS for a decade or more. DTrace made diagnosis possible.
    DTrace can be used to create performance analysis tools without modifying the source of the software under analysis. See slides 29-32.
    Microstate accounting, visible in the prstat output on slide 37, can be used to quickly draw focus to problem areas that affect CPU saturation. Note that prstat without -m gives a time-decayed moving average that is not nearly as useful.
    Instruction-level probes (slides 33-34) are a super-easy way to identify which part of a function is hot.

    Read the article

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini file for each virtual host? I'm running Apache/2.2.14, PHP 5.3.2-1. For example, I have several vhosts pointing to domains in my /var/www/ directory:

    /var/www/website1.com
    /var/www/website2.com

    What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults if a value isn't specified:

    /var/www/website1.com/htdocs/
    /var/www/website1.com/php.ini

    EDIT: I found more info on the topic here for those interested: http://serverfault.com/questions/34078/how-do-i-set-up-per-site-php-ini-files-on-a-lamp-server-using-namevirtualhosts

    Read the article

  • CreateRenderTarget returns 0x80070057 in big surface resolution

    - by senggen
    I have created an SLI merged desktop of three 1920x1080 monitors, so the desktop resolution is 5760x1080. There is a 0x80070057 error while calling CreateRenderTarget to create the RT_Surface:

    IDirect3DSurface9* _render_surface;
    HRESULT hr = _device->CreateRenderTarget(
        _desktop_width * 2,
        _desktop_height + 1,
        D3DFMT_A8R8G8B8,
        D3DMULTISAMPLE_NONE,
        0,
        TRUE,
        &_render_surface,
        NULL);

    It works OK with desktop resolution 1024x768, where the total resolution is 3072x768. In http://msdn.microsoft.com/en-us/library/windows/desktop/bb174361(v=vs.85).aspx, it says: "If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_NOTAVAILABLE, D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY." There is no description of 0x80070057:

    HRESULT: 0x80070057 (2147942487)
    Name: E_INVALIDARG
    Description: An invalid parameter was passed to the returning function

    Somebody please help me.

    Read the article

  • WICD Network Manager doesn't work in 13.10

    - by Jayakumar J
    I am currently using a wired broadband connection. Although the default Network Manager shows that Auto Ethernet is connected, I am unable to browse any websites. I have to disconnect and reconnect Auto Ethernet several times in Network Manager, or restart Ubuntu, to get it working again, and since this issue happens very frequently, that is quite annoying. I read on the internet that WICD Network Manager fixes this issue; earlier, on Ubuntu 12.04, the same problem was indeed fixed by using WICD. Recently I upgraded to 13.10 and, since I hit the same issue with the default Network Manager, I installed WICD. But when I try to open it, I get an alert saying, "Could not connect to wicd's D-Bus interface. Check the wicd log for error messages." When I click OK, I get the same message again, and when I click OK once more I get, "Error connecting to wicd service via D-Bus. Please ensure the wicd service is running." Any help to fix this is greatly appreciated, as it has been annoying me ever since I installed 13.10.
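    That D-Bus error usually just means the wicd daemon itself isn't running. Assuming the stock 13.10 packaging, these are worth trying before anything else:

        sudo service wicd start             # or: sudo /etc/init.d/wicd start
        sudo service wicd status            # confirm the daemon stayed up
        tail -n 50 /var/log/wicd/wicd.log   # the log the alert refers to

    Also note that wicd and the default network-manager fight over the interface if both are running, so stop or remove one of them.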

    Read the article

  • Executing Components in an Entity Component System

    - by John
    Ok, so I am just starting to grasp the whole ECS paradigm and I need clarification on a few things. For the record, I am trying to develop a game using C++ and OpenGL, and I'm relatively new to game programming. First of all, let's say I have an Entity class which may have several components such as a MeshRenderer, Collider, etc. From what I have read, I understand that each "system" carries out a specific task, such as calculating physics or rendering, and may use more than one component if needed. So, for example, I would have a MeshRendererSystem act on all entities with a MeshRenderer component. Looking at Unity, I see that each GameObject gets components such as a renderer, camera, collider, and rigidbody by default. From what I understand, an entity should start out as an empty "container" and should be filled with components to create a certain type of game object. So what I don't understand is how the "system" works in an entity component system. http://docs.unity3d.com/ScriptReference/GameObject.html

    So I have a GameObject (the Entity) class like:

    class GameObject
    {
    public:
        GameObject(std::string objectName);
        ~GameObject(void);

        Component AddComponent(std::string name);
        Component AddComponent(Component componentType);
    };

    If I had a GameObject modelling a warship and I wanted to add a MeshRenderer component, I would do the following:

    warship->AddComponent(new MeshRenderer());

    In the MeshRenderer's constructor, should I call on the MeshRendererSystem and "subscribe" the warship object to that system? In that case, the MeshRendererSystem would probably have to be a Singleton ("shudder"). Looking at Unity's GameObject, if every object potentially has a renderer or any of the default components, then Unity would iterate over all objects available. That seems unnecessary to me, since some objects might not need to be rendered at all. How, in practice, should these systems be implemented?
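    One common answer to the singleton worry is to keep systems as plain objects owned by the world/scene, and have AddComponent do the subscription, so neither side needs a global. A minimal sketch; the names are illustrative, not Unity's:

        #include <vector>

        struct MeshRenderer { /* mesh handle, material, ... */ };

        class RenderSystem
        {
        public:
            // The system tracks only the component type it processes
            void subscribe(MeshRenderer* r) { renderers.push_back(r); }

            void update()
            {
                // Only entities that actually own a MeshRenderer are visited;
                // objects without one never enter this loop
                for (MeshRenderer* r : renderers)
                {
                    // draw(*r);
                }
            }

        private:
            std::vector<MeshRenderer*> renderers;
        };

    The world creates the systems and GameObject::AddComponent forwards each new component to the matching system, so iteration cost scales with the number of relevant components rather than the number of game objects.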

    Read the article

  • Mimic NTFS "Modify" permissions on an ACL-enabled ext3 filesystem in Linux?

    - by bobinabottle
    I am migrating our file share from Windows Server to Samba on Linux, and the only hurdle at the moment is the ACLs. Currently we have a number of directories that use the "Modify" permission on NTFS, so users can write to a directory, but once a file is written it cannot be modified. On Linux, my idea was to set an ACL on the directory granting read/write access, but with a default ACL granting read-only access. Is this possible? I'm not quite sure how to set a default ACL that differs from the parent directory's own. Thanks!
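    This is exactly what default ACLs are for: they can differ freely from the directory's own access ACL and only seed the ACLs of newly created files. A sketch; the group name and path are placeholders:

        # Members of 'staff' may create files in the drop directory...
        setfacl -m g:staff:rwx /srv/share/drop

        # ...but files created inside default to read-only for that group
        setfacl -d -m g:staff:r-- /srv/share/drop

        # Verify: 'default:' entries are listed separately from access entries
        getfacl /srv/share/drop

    One caveat: unlike NTFS "Modify", a file's owner can still chmod their own file afterwards, so this restrains the group but not the creator.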

    Read the article

  • Add Windows 7 to boot menu

    - by Cumatru
    Device Boot    Start      End      Blocks   Id  System
    /dev/sda1  *       1       13      102400    7  HPFS/NTFS - system restore
    /dev/sda2         13     4674    37436416    7  HPFS/NTFS - Windows 7
    /dev/sda3       4674    58843   435116032    7  HPFS/NTFS - data storage
    /dev/sda4      58843    60802    15728640   83  Linux - Ubuntu 10.10

    Initially I installed StartUpManager. This (I think) added another 4 instances of Linux plus memtest to my boot menu list. However, I don't see any boot menu at all; it boots into Ubuntu after a few seconds. I've tried to add Windows 7, but did not succeed. This is part of my menu.lst file:

    title Ubuntu 10.10, kernel 2.6.35-24-generic
    uuid 1c9748e2-2f11-4a6c-91c0-7310d48c4a7a
    kernel /boot/vmlinuz-2.6.35-24-generic root=UUID=1c9748e2-2f11-4a6c-91c0-7310d48c4a7a ro quiet splash
    initrd /boot/initrd.img-2.6.35-24-generic

    title Chainload into GRUB 2
    root 1c9748e2-2f11-4a6c-91c0-7310d48c4a7a
    kernel /boot/grub/core.img

    title Ubuntu 10.10, memtest86+
    uuid 1c9748e2-2f11-4a6c-91c0-7310d48c4a7a
    kernel /boot/memtest86+.bin

    menuentry "Windows 7" {
        set root=(hd0,2)
        chainloader +1
    }

    And this is the output after running update-grub:

    Searching for GRUB installation directory ... found: /boot/grub
    Searching for default file ... found: /boot/grub/default
    Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
    Searching for splash image ... none found, skipping ...
    Found kernel: /boot/vmlinuz-2.6.35-24-generic
    Found kernel: /boot/vmlinuz-2.6.35-22-generic
    Found GRUB 2: /boot/grub/core.img
    Found kernel: /boot/memtest86+.bin
    Updating /boot/grub/menu.lst ... done

    Later edit: I've added the following to /etc/grub.d/40_custom and commented out the hiddenmenu line in menu.lst, but I still can't see any boot menu. I've also tried pressing ESC and SHIFT.

    menuentry "Windows 7 (loader) (on /dev/sda1)" {
        insmod part_msdos
        insmod ntfs
        set root='(hd0,msdos1)'
        chainloader +1
    }

    menuentry "Windows 7 (loader) (on /dev/sda1)" {
        insmod part_msdos
        insmod ntfs
        set root='(hd0,msdos0)'
        chainloader +1
    }

    menuentry "Windows 7 (loader) (on /dev/sda1)" {
        set root= hd(0,0)
        chainloader +1
    }

    menuentry "!Windows 7 (loader) (on /dev/sda1)" {
        set root= hd(0,1)
        chainloader +1
    }

    menuentry "!!Windows 7 (loader) (on /dev/sda1)" {
        set root= hd(0,2)
        chainloader +1
    }
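    Note that the update-grub output above is from legacy GRUB (it updates menu.lst), so entries added under /etc/grub.d/40_custom only ever land in GRUB 2's grub.cfg and are invisible to the menu that actually runs. Assuming legacy GRUB is what boots this machine, the Windows entry needs legacy syntax in menu.lst, and /dev/sda2 is (hd0,1) in legacy numbering. A sketch:

        # /boot/grub/menu.lst
        #hiddenmenu            <- comment out so the menu appears at boot
        timeout 10

        title Windows 7
        rootnoverify (hd0,1)
        makeactive
        chainloader +1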

    Read the article

  • How to boot Windows 8 in a dual boot along with Windows 7? [migrated]

    - by GoldDove
    I installed a Windows 8 evaluation about a week ago. Usually, when I turn on my computer, it asks me whether to boot into Windows 8 or Windows 7; the default was Windows 8 after 30 seconds. Just yesterday I changed that to default to Windows 7 after 5 seconds, and after changing the setting I went into Windows 8 and did my work. Today, when I turned on my computer, it failed to ask me which one to boot; it simply booted directly into Windows 7. Is there any reason for this? Can I no longer boot into Windows 8?
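    One way to get the chooser back, assuming the timeout or menu flag got clobbered when the default was changed: from an elevated Command Prompt in Windows, the stock bcdedit tool can restore it. {bootmgr} is the standard well-known identifier, not a placeholder to fill in:

        rem Give the menu time to appear again
        bcdedit /timeout 10

        rem Force the boot manager to display its menu
        bcdedit /set {bootmgr} displaybootmenu yes

        rem List entries; copy the Windows 8 identifier to make it default again
        bcdedit /enum
        rem bcdedit /default {identifier-from-enum}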

    Read the article
