Search Results

Search found 28325 results on 1133 pages for 'test cases'.


  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently, but the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.
    From Book On-Line: "Occurs when a task is waiting for I/Os to finish."
    ASYNC_IO_COMPLETION Explanation: tasks are waiting for I/O to finish. If the application connected to SQL Server is processing data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE and other operations can also create this wait type.
    Reducing ASYNC_IO_COMPLETION wait: when the issue is related to IO, check the following things associated with the IO subsystem:
    - Look at the application code and see if any of it processes data slowly (an inefficient loop, etc.). Such code should be rewritten to avoid this wait type.
    - Proper placement of the files is very important. Check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see if there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator did not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test) environment. As soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance.
    - It is very likely that there are no proper indexes on the system and, as a result, there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (assuming the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.
    Checking Memory Related Perfmon Counters
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
    - Memory: Available Mbytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking Disk Related Perfmon Counters
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Read all the posts in the Wait Types and Queues series.
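    As a starting point for the checklist above, the cumulative figures for this wait type can be pulled from sys.dm_os_wait_stats. A minimal sketch follows (not part of the original article – verify the column names against your SQL Server version):

    -- Cumulative ASYNC_IO_COMPLETION figures since restart (or since wait stats were last cleared)
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           wait_time_ms - signal_wait_time_ms AS resource_wait_ms,
           max_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'ASYNC_IO_COMPLETION';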
    Note: The information presented here is from my experience, and I do not claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • NLP - Queries using semantic wildcards in full text searching, maybe with Lucene?

    - by Zsolt
    Let's say I have a big corpus (in English or an arbitrary language), and I want to perform some semantic search on it. For example, I have the query: "Be careful: [art] armada of [sg] is coming to [do sg]!" And the corpus contains the following sentence: "Be careful: an armada of alien ships is coming to destroy our planet!" As you can see, my query string can contain "semantic placeholders", such as:
    - [art] - a placeholder for articles (for example a / an in English)
    - [sg], [do sg] - placeholders for NPs and VPs (subjects and predicates)
    I would like to develop a library capable of handling these queries efficiently. I suspect that some kind of POS-tagging would be necessary for parsing the text, but because I don't want to fully reimplement an existing full-text search engine to make this work, I'm wondering how I could integrate this behaviour into a search engine like Lucene. I know there are SpanQueries, which could behave similarly in some cases, but as far as I can see, Lucene doesn't do any semantic processing of stored text. Is it possible to implement behavior like this? Or do I have to write my own search engine?

    Read the article

  • Ranking hit after site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed Google Webmaster Tools' guidance completely, including 301 redirects from every existing URL to the new domain, and then submitted a change of address. Traffic continued as normal, but a few days after submitting the change of address it plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that for the keywords I had previously targeted and performed well with, I am now ranking much lower. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that the move would not affect my rankings too severely, but it has now been a month and recovery seems to be very slow, if it is happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and in its place there is now a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that fit exactly the same page structure as the previous site I have seen a drop in rankings.
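    For reference, a domain-wide move like the one described is usually handled with a single rewrite rule on the old domain rather than per-URL entries. A minimal sketch for Apache's .htaccess is shown below; olddomain.com and newdomain.com are placeholders, not the actual sites from the question:

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.newdomain.com/$1 [R=301,L]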

    Read the article

  • A list of pros and cons to giving developers “Local Admin” privileges to their machines? [closed]

    - by Boden
    Possible Duplicate: Is local “User” rights enough or do developers need Local Administrator or Power User while coding?
    I currently work for a large utilities company which does not grant “Local Admin” access to developers. This is causing a lot of grief, as anything that requires elevated privileges needs to be done by the Desktop Support/Server teams. In some cases this can take several days and requires our developers to show why they need the access. I personally think that all developers should have local administration rights and am currently fighting with management to achieve this, but I would like to know what other people think. To that end, I would like to hear what people believe are the pros and cons of letting developers have local admin access to their machines. Here are some I have come up with:
    Pros
    - Lost time is kept low, as developers can resolve issues that would normally require Local Admin
    - Evaluation of tools and software to improve productivity is possible
    - Desktop Support time is not wasted installing services and software on developers' PCs
    Cons
    - Developers may install software on the local PC that could be malicious to others or inappropriate in a business environment
    - Desktop Support is required to support a PC that is not the norm
    - Development done with admin access may fail when promoted to another environment that does not have the same access level

    Read the article

  • C#/.NET – Finding an Item's Index in IEnumerable<T>

    - by James Michael Hare
    Sorry for the long blogging hiatus.  First it was, of course, the holidays hustle and bustle, then my brother and his wife gave birth to their son, so I’ve been away from my blogging for two weeks. Background: Finding an item’s index in List<T> is easy… Many times in our day to day programming activities, we want to find the index of an item in a collection.  Now, if we have a List<T> and we’re looking for the item itself this is trivial: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // can find the exact item using IndexOf() 5: var pos = list.IndexOf(64); This will return the position of the item if it’s found, or –1 if not.  It’s easy to see how this works for primitive types where equality is well defined.  For complex types, however, it will attempt to compare them using EqualityComparer<T>.Default which, in a nutshell, relies on the object’s Equals() method. So what if we want to search for a condition instead of equality?  That’s also easy in a List<T> with the FindIndex() method: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // finds index of first even number or -1 if not found. 5: var pos = list.FindIndex(i => i % 2 == 0);   Problem: Finding an item’s index in IEnumerable<T> is not so easy... This is all well and good for lists, but what if we want to do the same thing for IEnumerable<T>?  A collection of IEnumerable<T> has no indexing, so there’s no direct method to find an item’s index.  LINQ, as powerful as it is, gives us many tools to get us this information, but not in one step.  As with almost any problem involving collections, there are several ways to accomplish the same goal.  And once again as with almost any problem involving collections, the choice of the solution somewhat depends on the situation. So let’s look at a few possible alternatives.  I’m going to express each of these as extension methods for simplicity and consistency. Solution: The TakeWhile() and Count() combo One of the things you can do is to perform a TakeWhile() on the list as long as your find condition is not true, and then do a Count() of the items it took.  The only downside to this method is that if the item is not in the list, the index will be the full Count() of items, and not –1.  So if you don’t know the size of the list beforehand, this can be confusing. 1: // a collection of extra extension methods off IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // Finds an item in the collection, similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: // note if item not found, result is length and not -1! 8: return list.TakeWhile(i => !finder(i)).Count(); 9: } 10: } Personally, I don’t like switching the paradigm of not found away from –1, so this is one of my least favorites.  Solution: Select with index Many people don’t realize that there is an alternative form of the LINQ Select() method that will provide you an index of the item being selected: 1: list.Select( (item,index) => do something here with the item and/or index... ) This can come in handy, but must be treated with care.  This is because the index provided is only as pertains to the result of previous operations (if any).  
For example: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // you'd hope this would give you the indexes of the even numbers 5: // which would be 2, 3, 8, but in reality it gives you 0, 1, 2 6: list.Where(item => item % 2 == 0).Select((item,index) => index); The reason the example gives you the collection { 0, 1, 2 } is because the where clause passes over any items that are odd, and therefore only the even items are given to the select and only they are given indexes. Conversely, we can’t select the index and then test the item in a Where() clause, because then the Where() clause would be operating on the index and not the item! So, what we have to do is to select the item and index and put them together in an anonymous type.  It looks ugly, but it works: 1: // extensions defined on IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // finds an item in a collection, similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: // if you don't name the anonymous properties they are the variable names 8: return list.Select((item, index) => new { item, index }) 9: .Where(p => finder(p.item)) 10: .Select(p => p.index + 1) 11: .FirstOrDefault() - 1; 12: } 13: }     So let’s look at this, because i know it’s convoluted: First Select() joins the items and their indexes into an anonymous type. Where() filters that list to only the ones matching the predicate. Second Select() picks the index of the matches and adds 1 – this is to distinguish between not found and first item. FirstOrDefault() returns the first item found from the previous clauses or default (zero) if not found. Subtract one so that not found (zero) will be –1, and first item (one) will be zero. The bad thing is, this is ugly as hell and creates anonymous objects for each item tested until it finds the match.  This concerns me a bit but we’ll defer judgment until compare the relative performances below. Solution: Convert ToList() and use FindIndex() This solution is easy enough.  We know any IEnumerable<T> can be converted to List<T> using the LINQ extension method ToList(), so we can easily convert the collection to a list and then just use the FindIndex() method baked into List<T>. 1: // a collection of extension methods for IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // find the index of an item in the collection similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: return list.ToList().FindIndex(finder); 8: } 9: } This solution is simplicity itself!  It is very concise and elegant and you need not worry about anyone misinterpreting what it’s trying to do (as opposed to the more convoluted LINQ methods above). But the main thing I’m concerned about here is the performance hit to allocate the List<T> in the ToList() call, but once again we’ll explore that in a second. Solution: Roll your own FindIndex() for IEnumerable<T> Of course, you can always roll your own FindIndex() method for IEnumerable<T>.  It would be a very simple for loop which scans for the item and counts as it goes.  
There’s many ways to do this, but one such way might look like: 1: // extension methods for IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // Finds an item matching a predicate in the enumeration, much like List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: int index = 0; 8: foreach (var item in list) 9: { 10: if (finder(item)) 11: { 12: return index; 13: } 14:  15: index++; 16: } 17:  18: return -1; 19: } 20: } Well, it’s not quite simplicity, and those less familiar with LINQ may prefer it since it doesn’t include all of the lambdas and behind the scenes iterators that come with deferred execution.  But does having this long, blown out method really gain us much in performance? Comparison of Proposed Solutions So we’ve now seen four solutions, let’s analyze their collective performance.  I took each of the four methods described above and run them over 100,000 iterations of lists of size 10, 100, 1000, and 10000 and here’s the performance results.  Then I looked for targets at the begining of the list (best case), middle of the list (the average case) and not in the list (worst case as must scan all of the list). Each of the times below is the average time in milliseconds for one execution as computer over the 100,000 iterations: Searches Matching First Item (Best Case)   10 100 1000 10000 TakeWhile 0.0003 0.0003 0.0003 0.0003 Select 0.0005 0.0005 0.0005 0.0005 ToList 0.0002 0.0003 0.0013 0.0121 Manual 0.0001 0.0001 0.0001 0.0001   Searches Matching Middle Item (Average Case)   10 100 1000 10000 TakeWhile 0.0004 0.0020 0.0191 0.1889 Select 0.0008 0.0042 0.0387 0.3802 ToList 0.0002 0.0007 0.0057 0.0562 Manual 0.0002 0.0013 0.0129 0.1255   Searches Where Not Found (Worst Case)   10 100 1000 10000 TakeWhile 0.0006 0.0039 0.0381 0.3770 Select 0.0012 0.0081 0.0758 0.7583 ToList 0.0002 0.0012 0.0100 0.0996 Manual 0.0003 0.0026 0.0253 0.2514   Notice something interesting here, you’d think the “roll your own” loop would be the most efficient, but it only wins when the item is first (or very close to it) regardless of list size.  In almost all other cases though and in particular the average case and worst case, the ToList()/FindIndex() combo wins for performance, even though it is creating some temporary memory to hold the List<T>.  If you examine the algorithm, the reason why is most likely because once it’s in a ToList() form, internally FindIndex() scans the internal array which is much more efficient to iterate over.  Thus, it takes a one time performance hit (not including any GC impact) to create the List<T> but after that the performance is much better. Summary If you’re concerned about too many throw-away objects, you can always roll your own FindIndex() method, but for sheer simplicity and overall performance, using the ToList()/FindIndex() combo performs best on nearly all list sizes in the average and worst cases.    Technorati Tags: C#,.NET,Litte Wonders,BlackRabbitCoder,Software,LINQ,List

    Read the article

  • String.IsNullOrWhiteSpace

    - by Scott Dorman
    An empty string is different from an unassigned string variable (which is null): it is a string containing no characters between the quotes (""). The .NET Framework provides String.Empty to represent an empty string, and there is no practical difference between ("") and String.Empty. One of the most common string comparisons is to determine whether a string variable is equal to an empty string. The fastest and simplest way to determine if a string is empty is to test whether the Length property is equal to 0. However, since strings are reference types, it is possible for a string variable to be null, which would result in a runtime error when you tried to access the Length property. Since testing whether a string is empty is such a common occurrence, the .NET Framework provides the static String.IsNullOrEmpty method:

    public static bool IsNullOrEmpty(string value)
    {
        if (value != null)
        {
            return (value.Length == 0);
        }
        return true;
    }

    It is also very common to need to know whether a string that is not empty contains nothing but whitespace characters. For example, String.IsNullOrEmpty("   ") would return false, since this string is actually made up of three whitespace characters. In some cases this may be acceptable, but in many others it is not. To help simplify testing this scenario, the .NET Framework 4 introduces the String.IsNullOrWhiteSpace method:

    public static bool IsNullOrWhiteSpace(string value)
    {
        if (value != null)
        {
            for (int i = 0; i < value.Length; i++)
            {
                if (!char.IsWhiteSpace(value[i]))
                {
                    return false;
                }
            }
        }
        return true;
    }

    Using either String.IsNullOrEmpty or String.IsNullOrWhiteSpace helps ensure correctness, readability, and consistency, so they should be used in all situations where you need to determine if a string is null, empty, or contains only whitespace characters. Technorati Tags: .NET,C# 4
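    To make the difference between the two methods concrete, here is a small usage sketch (an addition for illustration, not part of the original post):

    using System;

    class WhiteSpaceDemo
    {
        static void Main()
        {
            string s = "   "; // three spaces
            Console.WriteLine(String.IsNullOrEmpty(s));         // False: spaces are characters, so the string is not empty
            Console.WriteLine(String.IsNullOrWhiteSpace(s));    // True: every character is whitespace
            Console.WriteLine(String.IsNullOrWhiteSpace(null)); // True: null is covered as well
        }
    }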

    Read the article

  • Is Tax Localization a good use for Workflow Foundation?

    - by JustinDoesWork
    Scenario: We have both WinForms and MVC code used on a nationwide multi-user platform that does lots of logistics for lots of users. Tax rules change per state and even per city or county, and these rules make a huge difference for our industry. The other issue is that rules can change based on legislation: the system will have to handle cases where a rule works one way before a given date and a different way after it. This changeover will need to be entered into the system and tested before that date arrives.
    Proposed Solution: Use Workflow Foundation to create a time-based system where our users can change and add rules that alter the way taxes are calculated.
    Question: I have not used Workflow Foundation, and searching has returned books to look at but not a lot of examples of people using this technology successfully. Is my scenario a good use of Workflow Foundation? (I think so.) If you have any experience with Workflow Foundation, any tips on making this work well?

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator Design Pattern (here) or used it at one time or another either directly or indirectly.  A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface but "decorated" with additional functionality.   As a really simplistic example, consider the System.IO.BufferedStream, it itself is a descendent of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:   1: Stream buffStream = new BufferedStream(rawStream); Now, let's take a look at a custom-code example.  Let's say that we have a class in our data access layer that retrieves a list of products from a database:  1: // a class that handles our CRUD operations for products 2: public class ProductDao 3: { 4: ... 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: var results = new List<Product>(); 10:  11: // must create the connection 12: using (var con = _factory.CreateConnection()) 13: { 14: con.ConnectionString = _productsConnectionString; 15: con.Open(); 16:  17: // create the command 18: using (var cmd = _factory.CreateCommand()) 19: { 20: cmd.Connection = con; 21: cmd.CommandText = _getAllProductsStoredProc; 22: cmd.CommandType = CommandType.StoredProcedure; 23:  24: // get a reader and pass back all results 25: using (var reader = cmd.ExecuteReader()) 26: { 27: while(reader.Read()) 28: { 29: results.Add(new Product 30: { 31: Name = reader["product_name"].ToString(), 32: ... 33: }); 34: } 35: } 36: } 37: }            38:  39: return results; 40: } 41: } Yes, you could use EF or any myriad other choices for this sort of thing, but the germaine point is that you have some operation that takes a non-trivial amount of time.  What if, during the production day I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code.  Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or other logging flavor of choice: 1:     // a class that handles our CRUD operations for products 2:     public class ProductDao 3:     { 4:         private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao)); 5:         ... 6:         7:         // a method that would retrieve all available products 8:         public IEnumerable<Product> GetAvailableProducts() 9:         { 10:             var results = new List<Product>(); 11:             var timer = Stopwatch.StartNew(); 12:             13:             // must create the connection 14:             using (var con = _factory.CreateConnection()) 15:             { 16:                 con.ConnectionString = _productsConnectionString; 17:                 18:                 // and all that other DB code... 19:                 ... 20:             } 21:             22:             timer.Stop(); 23:             24:             if (timer.ElapsedMilliseconds > 5000) 25:             { 26:                 _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms", 27:                     timer.ElapsedMillseconds); 28:             } 29:             30:             return results; 31:         } 32:     } In my eye, this is very ugly.  It violates Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.  
This class (and in particular this method) has two reasons to change: If the method of retrieving products changes. If the method of logging changes. Well, we could “simplify” this using the Decorator Design Pattern (here).  If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAOs public interface and forwards to the wrapped instance.  So let's assume we break out the ProductDAO interface into IProductDAO using your refactoring tool of choice (Resharper is great for this). Now, ProductDao will implement IProductDao and get rid of all logging logic: 1:     public class ProductDao : IProductDao 2:     { 3:         // this reverts back to original version except for the interface added 4:     } 5:  And we create the base Decorator that also implements the interface and forwards all calls: 1:     public class ProductDaoDecorator : IProductDao 2:     { 3:         private readonly IProductDao _wrappedDao; 4:         5:         // constructor takes the dao to wrap 6:         public ProductDaoDecorator(IProductDao wrappedDao) 7:         { 8:             _wrappedDao = wrappedDao; 9:         } 10:         11:         ... 12:         13:         // and then all methods just forward their calls 14:         public IEnumerable<Product> GetAvailableProducts() 15:         { 16:             return _wrappedDao.GetAvailableProducts(); 17:         } 18:     } This defines our base decorator, then we can create decorators that add items of interest, and for any methods we don't decorate, we'll get the default behavior which just forwards the call to the wrapper in the base decorator: 1:     public class TimedThresholdProductDaoDecorator : ProductDaoDecorator 2:     { 3:         private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator)); 4:         5:         public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) : 6:             base(wrappedDao) 7:         { 8:         } 9:         10:         ... 11:         12:         public IEnumerable<Product> GetAvailableProducts() 13:         { 14:             var timer = Stopwatch.StartNew(); 15:             16:             var results = _wrapped.GetAvailableProducts(); 17:             18:             timer.Stop(); 19:             20:             if (timer.ElapsedMilliseconds > 5000) 21:             { 22:                 _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms", 23:                     timer.ElapsedMillseconds); 24:             } 25:             26:             return results; 27:         } 28:     } Well, it's a bit better.  Now the logging is in its own class, and the database logic is in its own class.  But we've essentially multiplied the number of classes.  We now have 3 classes and one interface!  Now if you want to do that same logging decorating on all your DAOs, imagine the code bloat!  Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly.  But regardless all of these have the problem of tying the logging logic into the code itself. Enter the Interceptors.  Things like this to me are a perfect example of when it's good to write an Interceptor using your class library of choice.  Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I'm a big fan of Castle's Dynamic Proxy (here) which is actually used by many projects including Moq. 
What DynamicProxy allows you to do is intercept calls into any object by wrapping it with a proxy on the fly that intercepts the method and allows you to add functionality.  Essentially, the code would now look like this using DynamicProxy: 1: // Note: I like hiding DynamicProxy behind the scenes so users 2: // don't have to explicitly add reference to Castle's libraries. 3: public static class TimeThresholdInterceptor 4: { 5: // Our logging handle 6: private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor)); 7:  8: // Handle to Castle's proxy generator 9: private static readonly ProxyGenerator _generator = new ProxyGenerator(); 10:  11: // generic form for those who prefer it 12: public static object Create<TInterface>(object target, TimeSpan threshold) 13: { 14: return Create(typeof(TInterface), target, threshold); 15: } 16:  17: // Form that uses type instead 18: public static object Create(Type interfaceType, object target, TimeSpan threshold) 19: { 20: return _generator.CreateInterfaceProxyWithTarget(interfaceType, target, 21: new TimedThreshold(threshold, level)); 22: } 23:  24: // The interceptor that is created to intercept the interface calls. 25: // Hidden as a private inner class so not exposing Castle libraries. 26: private class TimedThreshold : IInterceptor 27: { 28: // The threshold as a positive timespan that triggers a log message. 29: private readonly TimeSpan _threshold; 30:  31: // interceptor constructor 32: public TimedThreshold(TimeSpan threshold) 33: { 34: _threshold = threshold; 35: } 36:  37: // Intercept functor for each method invokation 38: public void Intercept(IInvocation invocation) 39: { 40: // time the method invocation 41: var timer = Stopwatch.StartNew(); 42:  43: // the Castle magic that tells the method to go ahead 44: invocation.Proceed(); 45:  46: timer.Stop(); 47:  48: // check if threshold is exceeded 49: if (timer.Elapsed > _threshold) 50: { 51: _log.WarnFormat("Long execution in {0} took {1} ms", 52: invocation.Method.Name, 53: timer.ElapsedMillseconds); 54: } 55: } 56: } 57: } Yes, it's a bit longer, but notice that: This class ONLY deals with logging long method calls, no DAO interface leftovers. This class can be used to time ANY class that has an interface or virtual methods. Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference.  As far as they are concerned, they're using my interceptor.  If I change to a new library if a better one comes along, they're insulated. Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest: 1: // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds 2: IProductDao dao = TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), 5000); Automatic decoration of all methods!  You can even refine the proxy so that it only intercepts certain methods. This is ideal for so many things.  These are just some of the interceptors we've dreamed up and use: Log parameters and returns of methods to XML for auditing. Block invocations to methods and return default value (stubbing). Throw exception if certain methods are called (good for blocking access to deprecated methods). Log entrance and exit of a method and the duration. Log a message if a method takes more than a given time threshold to execute. Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.  
    Does it completely eliminate all need for the Decorator pattern? No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently, but the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.
    From Book On-Line: "Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits."
    IO_COMPLETION Explanation: tasks are waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.
    Reducing IO_COMPLETION wait: when the issue concerns IO, look at the following things related to the IO subsystem:
    - Proper placement of the files is very important. Check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see if there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator did not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test) environment, and as soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance.
    - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (assuming the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes, Drop Unused Indexes.
    Checking Memory Related Perfmon Counters
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
    - Memory: Available Mbytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking Disk Related Perfmon Counters
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Note: The information presented here is from my experience, and I do not claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
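    The file-statistics check mentioned above can also be done with the sys.dm_io_virtual_file_stats DMV, the newer equivalent of fn_virtualfilestats. A minimal sketch follows (not part of the original article – verify the column names against your SQL Server version):

    -- Per-file cumulative IO and stall figures, worst stalls first
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.num_of_reads,
           vfs.io_stall_read_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;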
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

  • JDeveloper and ADF at UKOUG

    - by Grant Ronald
    This year, Oracle ADF and JDeveloper have a big showing (about 22 hours' worth!) at the UKOUG - Europe's largest Oracle User Group. There are three days packed with awesome ADF content delivered by some of the leading lights in ADF Development, including Duncan Mills, Frank Nimphius, Shay Shmeltzer, Susan Duncan, Lucas Jellema, Steven Davelaar, Sten Vesterli (and I'll be there as well!). Please refer to the official agenda for timings, but an outline is here (if you think there are any sessions I have missed, let me know and I will add them).
    Monday
    - 10:00 - 10:45 - Deepdive into logical and physical data modeling with JDeveloper
    - 10:00 - 12:15 - Debugging ADF Applications
    - 12:15 - 13:15 - Learn ADF Task Flows in 60 Minutes
    - 14:30 - 15:15 - ADF's Hidden Gem - the Groovy scripting language in Oracle ADF
    - 15:25 - 16:10 - ADF Patterns for Forms Conversions
    - 16:35 - 17:35 - Dummies Guide to Oracle ADF
    - 16:35 - 17:35 - ADF Security Overview - Strategies and Best Practices
    - 17:45 - 18:30 - A Methodology for Enterprise Applications with Oracle ADF
    Tuesday
    - 09:00 - 10:00 - Real World Performance Tuning for Oracle ADF
    - 11:15 - 12:15 - Keynote: Modern Development, Mobility and Rich Internet Applications
    - 11:15 - 12:15 - Migration to Fusion Middleware 11g: Real world cases of Forms, ADF and Identity Management upgrades
    - 14:40 - 15:20 - What's new in JDeveloper 11gR2
    - 14:40 - 15:20 - Development Tools Roundtable
    - 15:35 - 16:20 - ALM in JDeveloper is exciting!
    - 16:40 - 17:40 - Moving Oracle Forms to Oracle ADF: Case Studies
    Wednesday
    - 09:00 - 10:00 - Building a Multi-Tasking ADF Application with Dynamic Regions and Dynamic Tabs
    - 10:10 - 10:55 - Building Highly Reusable ADF Taskflows
    - 12:30 - 13:30 - Design Patterns, Customization and Extensibility of Fusion Applications
    - 14:25 - 15:10 - Continuous Integration with Hudson: What a year!
    - 14:00 - 17:00 - Wednesday Wizardry with Fusion Middleware - Live application development demonstration with ADF, SOA Suite
    - 15:20 - 16:05 - Adding Mobile and Web 2.0 UIs to Existing Applications - The Fusion Way
    - 16:15 - 17:00 - Leveraging ADF for Building Complex Custom Applications

    Read the article

  • resolv.conf doesn't get set on reboot when networking is configured for static ip

    - by kenneth koontz
    I'm experiencing what appears to be a hostname resolution issue in Ubuntu 12.04 server edition when configuring my computer to use a static IP. In /etc/network/interfaces:

    # The primary network interface
    auto eth0
    iface eth0 inet static
    address 192.168.1.28
    netmask 255.255.255.0
    gateway 192.168.1.1

    Running $ sudo apt-get upgrade results in 'Failed to fetch...' errors:

    ...
    W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_US  Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
    W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en  Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    When I change my /etc/network/interfaces to:

    auto eth0
    iface eth0 inet dhcp

    everything works fine. Looking into /etc/resolv.conf provides some more hints... In cases where I was getting the resolving issue, resolv.conf was empty; no nameservers were specified. When I changed from static to dhcp and restarted networking, /etc/resolv.conf got written to: 'nameserver 192.168.1.1'. Switching back from dhcp to static and restarting doesn't remove the nameserver entry. When I restart the system with static set, resolv.conf is empty. When I restart the system with dhcp set, resolv.conf has nameserver 192.168.1.1. So it appears that the issue is that resolv.conf is not getting written correctly? Which package/code is responsible for writing to resolv.conf? Is there a particular package whose open issues I can take a look at?
    UPDATE: istream posted a good article discussing changes to resolv.conf in 12.04: http://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
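    For comparison, on 12.04 the resolvconf package builds /etc/resolv.conf from what the interfaces file declares (this is what the linked stgraber article describes), so a static stanza normally carries its own DNS entries. A minimal sketch of what that usually looks like - the nameserver addresses here are examples, not values taken from the question:

    auto eth0
    iface eth0 inet static
        address 192.168.1.28
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1 8.8.8.8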

    Read the article

  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question about the web app development and deployment process. Here is some background information. I use PHP for server-side scripting and JavaScript for the client side. I use NetBeans and Notepad++, and Firefox with Firebug for debugging and testing. The process I use is very amateurish: I code something in NetBeans, something in Notepad++, and since there is nothing to compile, I just refresh the Firefox browser and test it. This is convenient and faster compared to a Java development environment, where you would have to at least compile and deploy the jar files before you could run them. I have been thinking of putting a formal process around my development and find it hard to put together. There are so many things to do before you can deploy your final web app. I keep hearing about JSLint, compression, unit testing (Selenium), Ant, YUI Compressor, etc., but I am now looking for some steps I can take to become more organized. For example, I use NetBeans but don't use any projects within it; I directly update the files. I don't use any source control, but use my Iomega backup, which saves each save as a different version, and at the end of the day I back up the dev directory to my Amazon S3 account. For me the development environment is just a DEV directory, TEST is my intermediate stage, and PROD is the final directory that gets pushed out to the server - but all these directories are in the same Apache home. I have a few PHP scripts that just copy the needed files into the production directory. That's about it for my development approach. I know I am missing the following:
    - Regression testing (manual or automated??)
    - Automated testing (Selenium??)
    - Automated deployment (Ant??)
    - Source control (SVN??)
    - Quality control (JSLint??)
    Can someone explain what the missing steps are and how to go about filling them in order to have a more professional approach? I am looking for tools, with example tutorials, for streamlining the whole development-to-deployment stage. For me, just getting the hang of database, server-side and client-side development all in synchronization was itself a huge accomplishment, and now I feel there is a lot missing before you can produce a quality web application. For example, I see a lot of mention of automated testing, but how do I put it to use with respect to JavaScript and PHP? How do I use Ant for deployment, etc.? Is this all too much for a one- or two-person development team? Is there a way to automate all the above so that I just keep coding in NetBeans and then run a batch file - configured once and run every time - to produce the code in the production directory? A lot of this information is scattered on the web and here; if someone can guide me I would be happy to consolidate it here. Thank you for your patience :)
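    On the automated-deployment point, the "batch file configured once" idea described above is often just a one-line sync from the TEST directory to the PROD directory. A hedged sketch follows; the paths and excludes are assumptions based on the DEV/TEST/PROD layout described in the question, not actual paths from it:

    # Push the tested tree to production, deleting files that were removed,
    # and leaving out working files that should never reach the server.
    rsync -av --delete --exclude='.svn' --exclude='*.log' /var/www/TEST/ /var/www/PROD/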

    Read the article

  • How To Quickly Reboot Directly from Windows 7 to XP, Vista, or Ubuntu

    - by The Geek
    One of the biggest annoyances with a dual-boot system is having to wait for your PC to reboot to select the operating system you want to switch to, but there's a simple piece of software that can make this process easier. This guest article was written by Ryan Dozier from the Doztech tech blog. With a small piece of software called iReboot we can skip the above step altogether and instantly reboot into the operating system we want right from Windows. Their description says: "Instead of pressing restart, waiting for Windows to shut down, waiting for your BIOS to post, then selecting the operating system you want to boot into (within the bootloader time-limit!); you just select that entry from iReboot and let it do the rest!" Don't worry about iReboot reconfiguring your bootloader or any dual-boot configuration you have. iReboot will only boot the selected operating system once and then go back to your default settings.
    Using iReboot: iReboot is quick and easy to install. Just download it (link below), run through the setup and select the default configuration. iReboot will automatically figure out what operating systems you have installed and appear in the taskbar. Go over to the taskbar, right-click on the iReboot icon and select which operating system you want to reboot into. This places a check mark next to the operating system you want to boot into; on your next reboot the system will automatically load your choice and skip the Windows Boot Manager. If you want to reboot immediately, just select "Reboot on Selection" in the iReboot menu. To be even more productive, you can install iReboot into each Windows operating system to quickly access the others with a few simple clicks. iReboot does not work in Linux, so there you will have to reboot manually, then wait for the Windows Boot Manager to load and select your operating system.
    Conclusion: iReboot works on Windows XP, Windows Vista, and Windows 7, as well as 64-bit versions of these operating systems. Unfortunately iReboot is only available for Windows, but you can still use its functionality in Windows to quickly boot up your Linux machine; a simple reboot in Linux will take you back to the Windows Boot Manager. Download iReboot from neosmart.net
    Editor's note: We've not personally tested this software over at How-To Geek, but Neosmart, the author of the software, generally makes quality stuff. Still, you might want to test it out on a test machine first. If you've got any experience with this software, please be sure to let your fellow readers know in the comments.

    Read the article

  • SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers

    - by pinaldave
    There are plenty of things in life that one can make simple. I really believe in that. Yesterday I was traveling for a community-related activity. At the airport, while returning, I met a SQL enthusiast. He asked me if there is any simple way to find the maximum of two numbers in SQL Server. I asked him back what he really meant by "simple way" and requested that he demonstrate his code for finding the maximum of two numbers. Here is his code:

    DECLARE @Value1 DECIMAL(5,2) = 9.22
    DECLARE @Value2 DECIMAL(5,2) = 8.34
    SELECT (0.5 * ((@Value1 + @Value2) + ABS(@Value1 - @Value2))) AS MaxColumn
    GO

    I thought his logic was accurate, but the same script can be written another way. I quickly wrote the following code for him, and it worked just fine for him. Here is my code:

    DECLARE @Value1 DECIMAL(5,2) = 9.22
    DECLARE @Value2 DECIMAL(5,2) = 8.34
    SELECT CASE WHEN @Value1 > @Value2 THEN @Value1 ELSE @Value2 END AS MaxColumn
    GO

    He agreed that my code is much simpler, but according to him there is some problem with my code which he could not remember at the time. There are cases when his code will give accurate values and my code will not. I think his comment has value, but for the moment neither of us could come up with any valid reason. Do you think there is any scenario where his code will work and my suggested code will not? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
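    One scenario where the two scripts do diverge - an observation added here, not something from the original exchange - is when one of the inputs is NULL: the arithmetic version propagates the NULL, while the CASE version falls through to the ELSE branch and returns the other value. Which behaviour is "correct" depends on what you want a NULL input to mean.

    DECLARE @Value1 DECIMAL(5,2) = NULL
    DECLARE @Value2 DECIMAL(5,2) = 8.34
    -- Arithmetic version: any NULL operand makes the whole expression NULL
    SELECT (0.5 * ((@Value1 + @Value2) + ABS(@Value1 - @Value2))) AS MaxColumn
    -- CASE version: NULL > 8.34 evaluates to UNKNOWN, so the ELSE branch returns 8.34
    SELECT CASE WHEN @Value1 > @Value2 THEN @Value1 ELSE @Value2 END AS MaxColumn
    GO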

    Read the article

  • How do I stop Gimp from autolaunching on startup?

    - by Joshua Fox
    Gimp launches every time I log into Xubuntu (v. 13.10). Gimp is not shown under Settings Manager- Sessions and startup. It does not appear in ~/.config/autostart. I immediately close Gimp in these cases, so it is not running when I shut down the session. How do I stop Gimp from autolaunching on startup? Diagnostic Info: Note that cd / find . -name gimp.desktop Only produces ./usr/share/applications/gimp.desktop and nothing else Here is the output of grep -lIr 'gimp' ~/ ~/Gimp-search-results.txt sbin/vgimportclone home/joshua/.gimp-2.8/controllerrc home/joshua/.gimp-2.8/tags.xml home/joshua/.gimp-2.8/dockrc home/joshua/.gimp-2.8/gimprc home/joshua/.gimp-2.8/themerc home/joshua/.gimp-2.8/templaterc home/joshua/.gimp-2.8/gtkrc home/joshua/.gimp-2.8/sessionrc home/joshua/.gimp-2.8/toolrc home/joshua/.gimp-2.8/pluginrc home/joshua/.gimp-2.8/menurc home/joshua/Gimp-search-results.txt home/joshua/.local/share/ristretto/mime.db home/joshua/.wine/drive_c/windows/system32/gecko/1.4/wine_gecko/dictionaries/en-US.dic home/joshua/.wine/drive_c/windows/syswow64/gecko/1.4/wine_gecko/dictionaries/en-US.dic home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,skype,,0495938f41334883bd3a67d3b164c1d1 home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,gnome-utils,,91bba9b826fb21dbfc3aad6d3bd771cb home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,icedtea-plugin,,7bb5e4ad0469ef8277032c048b9d7328 home/joshua/.cache/software-center/piston-helper/reviews.ubuntu.com,reviews,api,1.0,review-stats,any,any,,1c66e24123164bb80c4253965e29eed7 home/joshua/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,wine1.4,,2bac05a75dcec604ee91e58027eb4165 home/joshua/.cache/software-center/piston-helper/software-center.ubuntu.com,api,2.0,applications,en,ubuntu,saucy,amd64,,32b432ef7e12661055c87e3ea0f3b5d5 home/joshua/.cache/software-center/apthistory.p home/joshua/.cache/software-center/reviews.ubuntu.com_reviews_api_1.0_review-stats-pkgnames.p home/joshua/.cache/oneconf/861c4e30b916e750f16fab5652ed5937/package_list_861c4e30b916e750f16fab5652ed5937 home/joshua/.cache/sessions/xfwm4-23e853443-fb4b-42fd-aa61-33fa99fdc12c.state home/joshua/.cache/sessions/xfce4-session-athena:0 home/joshua/.config/abiword/profile

    Read the article

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure. It serves three main purposes - Access Control, Caching, and as a Service Bus.   Overview and Training Overview and general  information about the Azure Application Fabric, - what it is, how it works, and where you can learn more. General Introduction and Overview http://msdn.microsoft.com/en-us/library/ee922714.aspx Access Control Service Overview http://msdn.microsoft.com/en-us/magazine/gg490345.aspx Microsoft Documentation http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx Learning and Examples Sources for online and other Azure Appllications Fabric training Application Fabric SDK http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en Application Fabric Caching Service Primer http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0 Hands-On Lab: Building Windows Azure Applications with the Caching Service http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29 Architecture  Azure Application Fabric Internals and Architectures for Scale Out and other use-cases. Azure Application Fabric Architecture Guide http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx Windows Azure AppFabric Service Bus - A Deep Dive (Video) http://www.msteched.com/2010/Europe/ASI410 Access Control Service (ACS) High Level Architecture http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx Applications  and Programming Programming Patterns and Architectures for SQL Azure systems. Various Examples from PDC 2010 on using Azure Application as a Service Bus http://tinyurl.com/2dcnt8o Creating a Distributed Cache using the Application Fabric http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx  Azure Application Fabric Java SDK http://jdotnetservices.com/

    Read the article

  • Advantages of using pure JavaScript over JQuery

    - by Shivan Dragon
    What are the advantages of using JavaScript only versus using jQuery only? I have limited experience with JavaScript and jQuery coding. I've added bits and snippets of each to HTML pages, but I've mostly coded server-side stuff in other languages. I've noticed that while you can theoretically do the same things using either of the two approaches (and of course you can even mix them in the same project), there seems to be a tendency to always start using jQuery from the very beginning, no matter what the project demands are. So I'm simply wondering: are there any specific benefits to not using jQuery only, and instead just using plain old JavaScript? I know this looks like a non-question because it can be said that "there's no definite answer" or "it can be debated forever", but I'm actually hoping for concrete answers such as "You can do this in one approach and you cannot do it with the other".
    ==EDIT== As per scrwtp's comment, I'm not referring just to the DOM-handling part. My question is rather: jQuery is a library. For JavaScript. What I find strange about this library, as opposed to other libraries for other languages, is that in jQuery's case it seems designed so you can use it exclusively and never need to touch JavaScript directly. This is as opposed to, let's say, Hibernate and SQL, where even though the library (or rather framework in this case, but I think the analogy still applies) takes over a lot of aspects, you still get to use SQL when using it, at least for some fringe cases. However, in the jQuery/JavaScript case, you could do anything you do with JavaScript using only jQuery (or at least that's how it seems to me).
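    To make the "same thing in either approach" point concrete, here is one common task - hiding every element with a given class and attaching a click handler - written both ways. This is an illustrative sketch added here, not code from the question; the element id and class name are made up, and the plain-JavaScript version assumes a reasonably modern browser:

    // Plain JavaScript (standard DOM APIs)
    Array.prototype.forEach.call(document.querySelectorAll('.warning'), function (el) {
      el.style.display = 'none';            // hide each element with class "warning"
    });
    document.getElementById('save').addEventListener('click', function () {
      console.log('saved');                 // react to a click on #save
    });

    // jQuery equivalent
    $('.warning').hide();
    $('#save').on('click', function () {
      console.log('saved');
    });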

    Read the article

  • How to Disable the Animations on the Windows 8 Start Screen

    - by Usman
    Who doesn't love animations? They make everything look so cool. But in some cases animations are a distraction, and the same is true for Windows 8's start screen (the "Modern UI"). Fortunately, there's a very simple way to disable all those animations; keep reading to find out how it's done. The animations are especially noticeable when you switch from the good ol' peaceful desktop to the start screen by pressing the Windows key. I don't know about you, but it feels like I'm getting dizzy watching all those crazy animations over and over again. People have found ways to enhance the start screen animations, add delay to various elements and so on. But we're going the other way: disabling the animations completely. To do so, log in, and when the start screen appears, type "Computer" (it will pop up in the search results before you've even finished typing).

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck on a technology choice: the team has split into two completely opposed camps debating the use of WCF vs. Web API.
    Team A, which supports Web API, brings forward these reasons:
    - Web API is just the modern way of writing services (Wikipedia)
    - WCF is overhead for HTTP; it's a solution for TCP, named pipes and other protocols
    - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
    - SOAP is not as readable and handy as JSON
    - SOAP is network overhead compared to JSON (transport over HTTP)
    - No method overloading
    Team B, which supports WCF, says:
    - WCF supports multiple protocols (via configuration)
    - WCF supports distributed transactions
    - Many good examples and success stories exist for WCF (while Web API is still young)
    - Duplex is excellent for two-way communication
    This debate is continuing, and I don't know what to do now. Personally, I think we should use each tool in its right place: in other words, use Web API if we want to expose a service over HTTP, but use WCF when it comes to TCP and duplex. Searching the Internet doesn't give a solid answer; many posts support WCF, but we also find people complaining about it. I know the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later; we want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP, though in some cases (say 5 to 10 percent) we might need distributed transactions. What should I do now? How do I manage this debate in a constructive way?
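    To make Team A's "POCO vs. attributes" point concrete, here is a minimal sketch of the same read-only product operation in both stacks. This is an illustration added here, with made-up type names, not code from either team:

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.Web.Http;

    // WCF: contract attributes on the service and, typically, on the data model
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);
    }

    [DataContract]
    public class Product
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // ASP.NET Web API: a plain controller method returning the object over HTTP (JSON by default)
    public class ProductsController : ApiController
    {
        public Product Get(int id)
        {
            return new Product { Id = id, Name = "Sample" };
        }
    }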

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a Sr. developer/architect/Product Manager for embedded systems. The systems that I have had experience with have typically been small to medium size codebases - typically close to 25-30K LOC in C, using 8-16 and 32 bit low-end microcontrollers. The systems have been entirely bootstrapped by our team - meaning everything from the start-up code to the end application code has either been written by the team or, at the very least, is thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG and the like (think low-end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to just calling the right APIs and being able to use those peripherals. Now, from a habit point of view, this does not give me and the team a comfortable feeling - not being able to comprehend the entire code tree, with virtual black boxes at the lower layers. Is it reasonable to devote, and reserve, time for getting into the details of how the APIs are implemented, assuming that the same would also entail getting into details of the relevant standards (again, USB as an example)? Or, alternatively, should a thorough understanding of the top-level usage of the APIs be sufficient? This of course assumes that the source code to all libraries is available, which it is, in almost all cases. Edit: In partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, as far as I have been able to judge; I have not had long experience with it yet.

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupted somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, no OS is able to mount my root partition, which is located next to the NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5): mount: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so ubuntu@ubuntu:~$ dmesg | tail [ 1019.726530] Descriptor sense data with sense descriptors (in hex): [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [ 1019.726551] 1a 3e ed 92 [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00 [ 1019.726584] end_request: I/O error, dev sda, sector 440331666 [ 1019.726602] JBD: Failed to read block at offset 462 [ 1019.726609] ata1: EH complete [ 1019.726612] JBD: recovery failed [ 1019.726617] EXT4-fs (sda5): error loading journal When I open GParted (using a live CD), I get an exclamation mark next to my NTFS drive. Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following: ubuntu@ubuntu:~$ sudo fsck /dev/sda fsck from util-linux-ng 2.17.2 e2fsck 1.41.14 (22-Dec-2010) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sda The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Update: I was able to fix the NTFS partition by running chkdsk off HBCD, but it seems that the superblock problem still remains. Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5
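    As a side note on the superblock recovery, a rough sketch of the usual approach (assuming the damaged partition is ext3/ext4 and e2fsprogs is installed; the device name and block numbers below are illustrative) would look like this:

        # Print where the backup superblocks would live, without writing anything
        # (-n simulates only; the locations are valid only for default mkfs parameters).
        sudo mke2fs -n /dev/sda5

        # If the filesystem is still partly readable, dumpe2fs lists the backups too.
        sudo dumpe2fs /dev/sda5 | grep -i superblock

        # Point e2fsck at one of the backup superblocks, e.g. 32768 (or 8193 for
        # filesystems with 1K blocks, as the fsck message above suggests).
        sudo e2fsck -b 32768 /dev/sda5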

    Read the article

  • Live Oracle AppAdvantage Webcast in APAC: Register Today

    - by Tanu Sood
    How Oracle Applications Customers can Extend the Value of their Investments Oracle AppAdvantage is an exciting new initiative for Oracle enterprise application customers, including E-Business Suite, PeopleSoft, JD Edwards, and Siebel. Oracle AppAdvantage provides strategies to help applications customers simplify, differentiate and innovate their investments through a pace-layered architecture that can adjust with business requirements. Whether your organization is extending your applications to mobile devices, building a customer self-service portal, taking applications to the cloud, integrating applications with your other business-critical applications or securely extending them to serve your specific needs, you can take the extension or customization work out of the applications and seamlessly extend with Oracle Fusion Middleware technologies as required. This webcast will discuss: strategies to help applications customers simplify, differentiate and innovate their investments through a pace-layered architecture; how to get started; and implementation use cases with customer examples. Register today for this webcast on November 6. If you are facing problems with registration or would like further information please email us at [email protected]. For any questions on Oracle, our events and products, please call or send us an email. Date: Wednesday, 6th November 2013. Time: Mumbai 10:30 a.m. (GMT +5:30), Singapore 1:00 p.m. (GMT +8:00), Sydney 4:00 p.m. (GMT +11:00). The duration of this Webcast is 60 minutes.

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
    SQL Server places data into cache as it reads it from disk so as to speed up future queries. This DMV lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. The DMV gives the number of cached pages in the buffer pool along with the database id that they relate to: USE [tempdb] GO SELECT COUNT(*) AS cached_pages_count , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY DB_NAME(database_id) , database_id ORDER BY cached_pages_count DESC; This gives you results which are quite useful, but if you add a new column with the code: …to convert the pages value to show an MB value, then they become more relevant and meaningful. To see how your server reacts to queries, start up SSMS and connect to a test server and database – mine is called AdventureWorks2008. Make sure you start from a known position by running: -- Only run this on a test server, otherwise your production server's performance may drop off a cliff and your phone will start ringing. DBCC DROPCLEANBUFFERS GO Now we can run a query that would normally turn a DBA’s hair white: USE [AdventureWorks2008] go SELECT * FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] …and then check our cache situation: A nice low figure – not! Almost 2000 pages of data in cache, equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then this could be even more dramatic. So, let’s make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we’ll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer. SELECT [sod].[OrderQty] , [sod].[ProductID] , [soh].[OrderDate] , [soh].[CustomerID] FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] WHERE [soh].[CustomerID] = 29722 …and check our effect on the cache: Now that is more sympathetic to our server and the other systems sharing its resources. I can hear you asking: “What has this got to do with logging, Jonathan?” Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping and be ready to investigate anomalies so that no ‘disruptive’ code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs or otherwise balance the work of your server. 
You could schedule this job to run with a SQL Agent job and store the data in your DBA’s database by creating a table with: IF OBJECT_ID('CachedPages') IS NOT NULL DROP TABLE CachedPages CREATE TABLE CachedPages ( cached_pages_count INT , MB INT , Database_Name VARCHAR(256) , CollectedOn DATETIME DEFAULT GETDATE() ) …and then filling it with: INSERT INTO [dbo].[CachedPages] ( [cached_pages_count] , [MB] , [Database_Name] ) SELECT COUNT(*) AS cached_pages_count , ( COUNT(*) * 8.0 ) / 1024 AS MB , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY database_id After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.
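    As a rough illustration of building that picture over time, a query along these lines (assuming the CachedPages table above is being populated on a schedule; the database name is just an example) would show how one database's cache usage moves between collections:

        -- Hypothetical trend query against the CachedPages logging table:
        -- cache usage for a single database, most recent collections first.
        SELECT  [Database_Name] ,
                [CollectedOn] ,
                [cached_pages_count] ,
                [MB]
        FROM    [dbo].[CachedPages]
        WHERE   [Database_Name] = 'AdventureWorks2008'  -- swap in the database you are watching
        ORDER BY [CollectedOn] DESC;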

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tapes via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). There are other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), which are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If we are using tapes for backup, ensure that the tape library and tape drives are compatible. Test Setup: 1. Install NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.) 2. Install MEB 3.8, also on Linux. 3. Install the NB admin console on your Windows desktop and configure the NB master server from there. Note: Ensure that you have root user permission to install NetBackup. Configuration Steps for MEB and NB: Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 in the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console, that is, configure the storage units by specifying the storage unit name, disk type, media server name etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed. Sometimes this will be a different host from where MEB is run, or sometimes the same media server where NB and the tapes are attached. Now, once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface. Backup: the following parameters should also be ready for the execution:    --sbt-lib-path : Path to the SBT library specific to the NetBackup MMS. The SBT lib for NetBackup is in /usr/openv/netbackup/bin/libobk.so64    --sbt-environment: Environment variables must be defined specific to NetBackup. 
In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/ ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image Once the backup has completed successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied: ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir Now apply logs as usual, shut down the server and perform the restore, then restart the server and check the data contents: ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back The NB console should show the 'Restore' job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps on MEB and NB integration in the whitepaper here.

    Read the article

  • Oracle's PeopleSoft Customer Advisory Boards Convene to Discuss Roadmap at Pleasanton Campus

    - by john.webb(at)oracle.com
    Last week we hosted all of the PeopleSoft CABs (Customer Advisory Boards) at our Pleasanton Development Center to review our detailed designs for future Feature Packs, PeopleSoft 9.2, and beyond. Over 150 customers from 79 companies attended, representing a variety of industries, geographies, and company sizes. The PeopleSoft team relies heavily on this group to provide key input on our roadmap for applications as well as technology direction. A good product strategy is one part well-thought-out idea and many handfuls of customer validation, and very often our best ideas originate from these customer discussions. While the individual CABs have frequent interactions with our teams, it's always great to have all of them in one place and in person. Our attendance was up from last year, which I attribute to two things: (1) more interest as a result of the PeopleSoft 9.1 upgrade; (2) an improving economy allowing for more travel. Maybe we should index the second item meeting-to-meeting and use it as a market indicator - we'll see! We kicked off the day one session with an overview of the PeopleSoft Roadmap, and I outlined our strategy around Feature Packs and PeopleSoft 9.2. Given the high adoption rate of PeopleSoft 9.1 (over 4x that of 9.0 given the same time lapse since the release date), there was a lot of interest around the 9.1 Feature Packs as a vehicle for continuous value. We provided examples of our 3 central design themes: Simplicity, Productivity, and lower TCO, including those already delivered via Feature Packs in 2010. A great example of this is the Company Directory feature in PeopleSoft HCM. The configuration capabilities and the new actionable links our CAB advised us on last Spring were made available to all customers late last year. We reviewed many more future navigation changes that will fundamentally change the way users interact with PeopleSoft. Our old friend, the menu tree, is being relegated from center stage to a bit part, with new concepts like Activity Guides, Train Stops, Related Actions, Work Centers, Collaborative Workspaces, and Secure Enterprise Search bringing users what they need in a contextual, role-based manner with fewer clicks. Paco Aubrejuan, our PeopleSoft GM, and Steve Miranda, the SVP for Fusion Applications, then discussed our plans around Oracle's Application Investment Strategy. This included our continued investment in developing both PeopleSoft and Fusion, as well as the co-existence strategy with new Fusion Apps integrating with PeopleSoft Apps. Should you want to view this presentation, a recording is available. Jeff Robbins, our lead PeopleTools Strategist, provided the roadmap for PeopleTools and discussed our continuing plan to deliver annual releases to further evolve the user experience. Numerous examples were highlighted with the navigation techniques I mentioned previously. Jeff also provided a lot of food for thought around Lifecycle Management topics and how to remain current on releases with a lower cost of ownership. Dennis Mesler, from Boise, was the guest speaker in this slot; he spoke about the new PeopleSoft Test Framework (PTF). Regression testing is a key cost component when product updates are applied. This new tool (which is free to all PeopleSoft customers as part of PeopleTools 8.51) provides a metadata-driven approach to recording and executing test scripts. 
Coupled with what our Usage Monitor enables, PTF provides our customers a powerful tool to lower costs and manage product updates more efficiently and at the time of their choosing. Beyond the general session, we broke out into the individual CABs: HCM, Financials, ESA/ALM, SRM, SCM, CRM, and PeopleTools/Technology. A day and a half of very engaging discussions around our plans took place for each product pillar. More about that to follow in future posts. We capped the first day with a reception sponsored by our partners: InfoSys, SmartERP (represented by Doris Wong), and Grey Sparling Solutions (represented by Chris Heller and Larry Grey). Great to see these old friends actively engaged in the very busy PeopleSoft ecosystem! Jeff Robbins previews the roadmap for PeopleTools with the PeopleSoft CAB

    Read the article

< Previous Page | 459 460 461 462 463 464 465 466 467 468 469 470  | Next Page >