Search Results

Search found 10297 results on 412 pages for 'real tuty'.


  • Objective C - creating concrete class instances from base class depending upon type

    - by indiantroy
    Just to give a real-world example, say the base class is Vehicle and the concrete classes are TwoWheeler and FourWheeler. The type of the vehicle - TwoWheeler or FourWheeler - is decided by the base class Vehicle. When I create an instance of TwoWheeler/FourWheeler using alloc-init, it calls the super implementation, as below, to set the values of the common properties defined in the Vehicle class; one of these properties is the type, which actually decides whether the vehicle is a TwoWheeler or a FourWheeler. if (self = [super initWithDictionary:dict]) { [self setOtherAttributes:dict]; return self; } Now when I get a collection of vehicles, some of them could be TwoWheeler and others FourWheeler, so I cannot directly create an instance of TwoWheeler or FourWheeler like this: Vehicle *v = [[TwoWheeler alloc] initWithDictionary:dict]; Is there any way I can create an instance of the base class and, once I know the type, create an instance of the appropriate child class and return it? With the current implementation that would result in an infinite loop, because I call the super implementation from the concrete class. What would be the right design for this scenario, where I don't know beforehand which concrete class should be instantiated?

    Read the article

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats. Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (it runs successfully), then refresh my view of the stats, I find that 1) execution_count has incremented (expected), 2) last_execution_time has updated to now (expected), 3) last_elapsed_time is still a large value (not expected - I anticipated a new value), 4) total_elapsed_time is unchanged (a contradiction?). If last_elapsed_time refers to the execution that happened at last_execution_time, then total_elapsed_time should have increased, shouldn't it? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appear in the WHERE clause, but there are no clever operations), and the table has maybe 25 rows in it right now. Questions: 1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time? 2) Where is this relationship documented (manual, third-party book, blog, bug ticket, stone tablets...)?

    Read the article

  • Retrieving the first picture with an HTML parser

    - by justin01
    Hey guys (not a native English speaker), I'm doing a personal project in PHP in which I use the Simple HTML Parser to parse the HTML of a given URL and retrieve the first image in a DIV that has a specific ID or class (maincontent, content, main, wrapper, etc. - it's all in an array), while ignoring ads. The goal is to take this image and make a thumbnail with it, pretty much like on Digg and others. I thought everything was working fine until I tried my script on the Snopes website ("http://www.snopes.com/photos/animals/luckycoyote.asp", this page more exactly). The source of the first image it gets is "graphics/luckycoyote1.jpg". So far, to correct this problem I created a little function that gets the domain name of the given URL and inserts it before the IMG's src attribute. So for sites like Snopes.com, it gives me "http://www.snopes.com/graphics/luckycoyote1.jpg", while the real URL for this image is "http://www.snopes.com/photos/animals/graphics/luckycoyote1.jpg" (or, more precisely, "http://graphics1.snopes.com/photos/animals/graphics/luckycoyote1.jpg" - note the subdomain here). So my main question is: how can I externally/dynamically retrieve the full URL address of an image (the "absolute path") when I am only given the "relative path"? I'm pretty sure this is possible, since when I paste the link into Facebook's "What are you doing?" field, for example, it gives me the correct path to the image, while on the website the source of the image is only (for example) "image/photo/example.jpg". Thank you for your time.
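
    A relative src like "graphics/luckycoyote1.jpg" has to be resolved against the URL of the page it was found on, not just against the domain, which is why prefixing the domain alone gives the wrong address here. As a rough illustration of that resolution rule (written in C# purely for brevity; the same idea applies in whatever URL helper your language offers), a minimal sketch might look like this:

        using System;

        class ResolveImageUrl
        {
            static void Main()
            {
                // The page the <img> tag was found on (taken from the question).
                var page = new Uri("http://www.snopes.com/photos/animals/luckycoyote.asp");

                // Resolve the relative src attribute against that page URL.
                var absolute = new Uri(page, "graphics/luckycoyote1.jpg");

                // Prints http://www.snopes.com/photos/animals/graphics/luckycoyote1.jpg
                Console.WriteLine(absolute);
            }
        }

    This only recovers the URL as the page spells it; it cannot recover CDN rewrites such as the graphics1.snopes.com subdomain mentioned above.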

    Read the article

  • What does Postgres do when BEGIN is run on a connection in autocommit mode?

    - by DNS
    I'm trying to better understand the concept of 'autocommit' when working with a Postgres (psycopg) connection. Let's say I have a fresh connection, set its isolation level to ISOLATION_LEVEL_AUTOCOMMIT, then run this SQL directly, without using the cursor begin/rollback methods (as an exercise; I'm not saying I actually want to do this): INSERT A; INSERT B; BEGIN; INSERT C; INSERT D; ROLLBACK. What happens to INSERTs C & D? Is autocommit purely an internal setting in psycopg that affects how it issues BEGINs? In that case, the above SQL is unaffected: INSERTs A & B are committed as soon as they're done, while C & D are run in a transaction and rolled back. What isolation level is that transaction run under? Or is autocommit a real setting on the connection itself? In that case, how does it affect the handling of BEGIN? Is it ignored, or does it override the autocommit setting and actually start a transaction? What isolation level is that transaction run under? Or am I completely off target?

    Read the article

  • Do Distributed Version Control Systems promote poor backup habits?

    - by John
    In a DVCS, each developer has an entire repository on their workstation, to which they can commit all their changes. Then they can merge their repo with someone else's, or clone it, or whatever (as I understand it; I'm not a DVCS user). To me that flags a side effect: being more vulnerable to forgetting to back up. In a traditional centralised system, both you as a developer and the people in charge know that if you commit something, it's held on a central server which can have decent backup solutions in place. But using a DVCS, it seems you only have to push your work to a server when you feel like sharing it. It's all very well having the repo locally so you can work on your feature branch for a month without bothering anyone, but it means (I think) that checking your code into the repo is not enough; you have to remember to do regular pushes to a backed-up server. It also means, doesn't it, that a team lead can't watch all those nice SVN commit emails to keep a rough idea of what's going on in the code-base? Is any of this a real issue?

    Read the article

  • Codeigniter xss_clean dilemma

    - by Henson
    I know this question has been asked over and over again, but I still haven't found an answer I'm happy with, so here it goes again... I've been reading lots and lots of polarizing comments about CI's xss_filter. Basically the majority say that it's bad. Can someone elaborate on how it's bad, or at least give one plausible scenario where it can be exploited? I've looked at the security class in CI 2.1 and I think it's pretty good, as it doesn't allow malicious strings like document.cookie, document.write, etc. If the site has a basically non-HTML presentation, is it safe to use the global xss_filter (or, if it REALLY affects performance that much, use it on a per-form-post basis) before inserting into the database? I've been reading the pros and cons of whether to escape on input or on output, with the majority saying we should escape on output only. But then again, why allow strings like <a href="javascript:stealCookie()">Click Me</a> to be saved in the database at all? The one thing I don't like is that javascript: and the like will be converted to [removed]. Can I extend CI's security core $_never_allowed_str array so that the never-allowed strings return empty rather than [removed]? The most reasonable example of harm I've read is that if a user has a password of javascript:123, it will be cleaned into [removed]123, which means a string like document.write123 would also pass as that user's password. Then again, what are the odds of that happening, and even if it happens, I can't think of any real harm it could do to the site. Thanks

    Read the article

  • What guarantees are there on the run-time complexity (Big-O) of LINQ methods?

    - by tzaman
    I've recently started using LINQ quite a bit, and I haven't really seen any mention of run-time complexity for any of the LINQ methods. Obviously, there are many factors at play here, so let's restrict the discussion to the plain IEnumerable LINQ-to-Objects provider. Further, let's assume that any Func passed in as a selector / mutator / etc. is a cheap O(1) operation. It seems obvious that all the single-pass operations (Select, Where, Count, Take/Skip, Any/All, etc.) will be O(n), since they only need to walk the sequence once; although even this is subject to laziness. Things are murkier for the more complex operations; the set-like operators (Union, Distinct, Except, etc.) work using GetHashCode by default (afaik), so it seems reasonable to assume they're using a hash-table internally, making these operations O(n) as well, in general. What about the versions that use an IEqualityComparer? OrderBy would need a sort, so most likely we're looking at O(n log n). What if it's already sorted? How about if I say OrderBy().ThenBy() and provide the same key to both? I could see GroupBy (and Join) using either sorting, or hashing. Which is it? Contains would be O(n) on a List, but O(1) on a HashSet - does LINQ check the underlying container to see if it can speed things up? And the real question - so far, I've been taking it on faith that the operations are performant. However, can I bank on that? STL containers, for example, clearly specify the complexity of every operation. Are there any similar guarantees on LINQ performance in the .NET library specification?
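
    For the Contains question in particular, one way to probe the implementation strategy (without treating the result as a documented guarantee) is simply to time Enumerable.Contains against a List and a HashSet exposed through the same IEnumerable<int> interface; the sketch below assumes nothing beyond the standard LINQ-to-Objects API:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class LinqContainsProbe
        {
            static void Main()
            {
                var data   = Enumerable.Range(0, 1000000).ToList();
                var asList = (IEnumerable<int>)data;
                var asSet  = (IEnumerable<int>)new HashSet<int>(data);

                // If Enumerable.Contains delegates to ICollection<T>.Contains,
                // the HashSet-backed calls should be dramatically faster than
                // the linear scans over the List.
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 1000; i++) asList.Contains(999999);
                Console.WriteLine("List backing:    " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < 1000; i++) asSet.Contains(999999);
                Console.WriteLine("HashSet backing: " + sw.ElapsedMilliseconds + " ms");
            }
        }

    A timing like this only suggests what the current implementation does; it says nothing about what the specification promises, which is really what the question is after.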

    Read the article

  • Recommendations for Creating a Picture Slide Show with Super-Smooth Transitions (For Live Presentation)

    - by Nick
    Hi everyone, I'm doing a theatrical performance, and I need a program that can read images from a folder and display them full screen on one of the computer's VGA outputs, in a predetermined order. All it needs to do is start with the first image and, when a key is pressed (space bar, right arrow), smoothly cross-fade to the next image. Sounds just like PowerPoint, right? The only reason why I can't use PowerPoint/OpenOffice is that the "fade smoothly" transition isn't smooth enough, or configurable enough. It tends to be fast and choppy, where I would like to see a perfectly smooth fade over, say, 30 seconds. So the question is: what is the best (cheap and fast) way to accomplish this? Is there a program that already does this well (for cheap or free)? Or should I try to hack at OpenOffice's transition code? Or would it be easier to create this from scratch? Are there frameworks that might make it easier? I have web programming experience (PHP), but no desktop or real-time rendering experience. Any suggestions are appreciated!

    Read the article

  • Multiple locking task (threading)

    - by Archeg
    I need to implement the class that performs the locking mechanism in our framework. We have several threads and they are numbered 0, 1, 2, 3... We have a static class called ResourceHandler that should lock these threads on given objects. The requirement is that n Lock() invocations should be released by m Release() invocations, where n = [0..] and m = [0..]. So no matter how many locks were taken on a single object, a single Release call is enough to unlock them all. Furthermore, if an object is not locked, a Release call should do nothing. We also need to know which objects are locked on which threads. I have this implementation: public class ResourceHandler { private readonly Dictionary<int, List<object>> _locks = new Dictionary<int, List<object>>(); public static ResourceHandler Instance {/* Singleton */} public virtual void Lock(int threadNumber, object obj) { Monitor.Enter(obj); if (!_locks.ContainsKey(threadNumber)) {_locks.Add(threadNumber, new List<object>());} _locks[threadNumber].Add(obj); } public virtual void Release(int threadNumber, object obj) { // Check whether we have threadN in _locks and skip if not var count = _locks[threadNumber].Count(x => x == obj); _locks[threadNumber].RemoveAll(x => x == obj); for (int i=0; i<count; i++) { Monitor.Exit(obj); } } // ..... } What I am actually worried about here is thread safety. I'm not sure whether this is thread-safe or not, and it's a real pain to get that right. Am I approaching the task correctly, and how can I ensure that this is thread-safe?
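
    One observation on the snippet above, offered as a sketch rather than a full answer to the design question: Monitor.Enter(obj) only protects the caller's object, while the shared _locks dictionary is read and mutated from several threads with no synchronization of its own. A minimal variant that guards that bookkeeping with a private lock (everything else kept as in the question) might look like this:

        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        public class ResourceHandler
        {
            private readonly object _sync = new object();
            private readonly Dictionary<int, List<object>> _locks =
                new Dictionary<int, List<object>>();

            public virtual void Lock(int threadNumber, object obj)
            {
                Monitor.Enter(obj);                 // lock the caller's object
                lock (_sync)                        // protect the shared bookkeeping
                {
                    if (!_locks.ContainsKey(threadNumber))
                        _locks.Add(threadNumber, new List<object>());
                    _locks[threadNumber].Add(obj);
                }
            }

            public virtual void Release(int threadNumber, object obj)
            {
                int count;
                lock (_sync)
                {
                    if (!_locks.ContainsKey(threadNumber)) return;   // nothing to release
                    count = _locks[threadNumber].Count(x => x == obj);
                    _locks[threadNumber].RemoveAll(x => x == obj);
                }
                for (int i = 0; i < count; i++)
                    Monitor.Exit(obj);              // release every nested Enter
            }
        }

    Note that Monitor.Exit must run on the same thread that called Monitor.Enter, so releasing another thread's locks still needs a different primitive; the sketch only addresses the dictionary access.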

    Read the article

  • '<=' operator is not working in SQL Server 2000

    - by Lalit
    Hello. The scenario is that the database is in its maintenance phase; it was not developed by our developers but by the 'xyz' company, in SQL Server 2000. It is a live database, which I am working on now. I wanted to write a stored procedure that retrieves the records from date1 to date2, so the query is: Select * from MyTableName Where colDate>= '3-May-2010' and colDate<= '5-Oct-2010' and colName='xyzName' As I understand it, I should get data including both the upper-bound and the lower-bound dates, but somehow I am getting records from '3-May-2010' (which is fine) to '10-Oct-2010'. Looking at the table design, the developer used 'varchar' to store the date in colDate; I know that was the wrong choice on their part. So in my stored procedure I also used varchar parameters, @FromDate1 and @ToDate, as the inputs to the SP, and that gives the result I have described. I tried making the parameter type 'datetime', but then it shows an error while saving/altering the stored procedure: "@FromDate1 has invalid datatype", and the same for "@ToDate". The situation is that I cannot change the table design at all. What should I do here? I know we can use user-defined table types in SQL Server 2008, but this is SQL Server 2000, which does not support them. Please guide me on this scenario. Edited: I am trying to write the SP like this: CREATE PROCEDURE USP_Data (@Location varchar(100), @FromDate DATETIME, @ToDate DATETIME) AS SELECT * FROM dbo.TableName Where CAST(Dt AS DATETIME) >=@FromDate and CAST(Dt AS DATETIME)<=@ToDate and Location=@Location GO but I am getting the error "Arithmetic overflow error converting expression to data type datetime" in SQL Server 2000. What could be causing that? Am I wrong somewhere? Also, the "(202 row(s) affected)" count changes every time in a circular manner: first it says "(122 row(s) affected)", the next run "(80 row(s) affected)", then "(202 row(s) affected)", then "(122 row(s) affected)" again. I cannot understand what is going on.

    Read the article

  • Find existence of number in a sorted list in constant time? (Interview question)

    - by Rich
    I'm studying for upcoming interviews and have encountered this question several times (written verbatim): Find or determine the non-existence of a number in a sorted list of N numbers, where the numbers range over M, M >> N, and N is large enough to span multiple disks. Algorithm to beat O(log n); bonus points for a constant-time algorithm. First of all, I'm not sure if this is a question with a real solution. My colleagues and I have mused over this problem for weeks and it seems ill-formed (of course, just because we can't think of a solution doesn't mean there isn't one). A few questions I would have asked the interviewer are: Are there repeats in the sorted list? What's the relationship between the number of disks and N? One approach I considered was to binary search the min/max of each disk to determine the disk that should hold the number, if it exists, and then binary search on that disk itself. Of course this is only an order-of-magnitude speedup if the number of disks is large and you also have a sorted list of disks. I think this would yield some sort of O(log log n) time. As for the M >> N hint, perhaps if you know how many numbers are on a disk and what the range is, you could use the pigeonhole principle to rule out some cases some of the time, but I can't figure out an order-of-magnitude improvement. Also, "bonus points for constant time algorithm" makes me a bit suspicious. Any thoughts, solutions, or relevant history of this problem?
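
    To make the two-level idea sketched above concrete, here is a small illustrative C# version, under the assumption that a per-disk index of [min, max] ranges fits in memory; all the type and member names here (DiskRange, SortedValues and so on) are invented for the example rather than taken from the question:

        using System;
        using System.Collections.Generic;

        // Hypothetical index entry: the value range held by one disk,
        // plus (for the sketch) that disk's sorted contents.
        record DiskRange(long Min, long Max, long[] SortedValues);

        static class TwoLevelSearch
        {
            // Binary-search the per-disk ranges first, then binary-search
            // inside the single disk whose range could contain the target.
            public static bool Contains(IReadOnlyList<DiskRange> disks, long target)
            {
                int lo = 0, hi = disks.Count - 1;
                while (lo <= hi)
                {
                    int mid = (lo + hi) / 2;
                    if (target < disks[mid].Min)      hi = mid - 1;
                    else if (target > disks[mid].Max) lo = mid + 1;
                    else return Array.BinarySearch(disks[mid].SortedValues, target) >= 0;
                }
                return false;   // no disk's range covers the target
            }
        }

    This is still logarithmic overall, of course; it only moves most of the work off the disks, which is the practical point of the approach described in the question.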

    Read the article

  • jQuery variable and object caching

    - by niksy
    This is something that has been bugging me for some time, and every time I find myself using a different solution for it. So, I have a link in my document which, on click, creates a new element with a certain ID. <a href="#" id="test-link">Link</a> For easier reuse, I would like to store that new element's ID in a variable as a jQuery object: var test = $('#test'); On click I append that new element to the body; the new element is a DIV: $('body').append('<div id="test"/>'); And here is the main "problem": if I test this new element's length with test.length, it first returns 0 and only later 1. But when I test it with $('#test').length, it returns 1 from the start. I suppose it is some caching mechanism, and I was wondering whether there is a better, all-around solution that allows storing elements in variables at the start, for later reuse, while at the same time working with dynamically created elements. Live, delegate, something else? What I do sometimes is create a string and add it to a jQuery object, but I think this just avoids the real issue; the same goes for using .find() inside another jQuery object. Thanks in advance.

    Read the article

  • Weird CSS behavior... removing a 1px border makes <DIV> move about 20px

    - by John
    I have the following: CSS #pageBody { height: 500px; padding:0; margin:0; /*border: 1px solid #00ff00;*/ } #pageContent { height:460px; margin-left:35px; margin-right:35px; margin-top:30px; margin-bottom:30px; padding:0px 0 0 0; } HTML <div id="pageBody"> <div id="pageContent"> <p> blah blah blah </p> </div> </div> If I uncomment the border line in pageBody, everything fits sweetly... I had the border on to verify things were as expected. But with the border removed, pageBody drops down the page by about 20px, while pageContent does not appear to move at all. Now this is not the real page, but a subset. If nothing's obvious I can attempt to generate a minimal working sample, but I thought there might be a quick, easy answer first. I see exactly the same problem in Chrome and IE8, suggesting it's me, not the browser. Any tips on where to look? I wondered whether the 1px border was some tipping point making the contents of a div just too big, but changing the #pageContent height to e.g. 400 makes no difference, other than clipping the bottom off that div.

    Read the article

  • Infrastructure for a high-transaction system (language & hosting suggestions)

    - by RPS
    Some of our friends (university students) are trying to develop a Twitter-type application. I want to plan for at least 1000 transactions per second (I know it's wishful thinking) for the initial launch. This involves several people connecting, getting updates and posting (text + images) to the site. In the back end, the DB will serve the data and also calculate rankings of what to push to the user, based on a complex algorithm, on the fly in real time. Our group is familiar with Java and Tomcat/MySQL. We can also easily learn/code in PHP/MySQL. What is the best-suited platform for our purpose? Though Java seems easy for us to implement, I am afraid that hosting will be a bit difficult. I could find cloud-based PHP hosting services (like Rackspace Cloud Sites) at reasonable cost; Amazon EC2 is a bit over our heads to manage day-to-day. Also, any recommendation on hosting (PHP or Java)? We don't have millions in seed money, just about $20K to start with. Any advice on the above, or on the approach in general, is much appreciated.

    Read the article

  • Is Maven really flexible?

    - by Dima
    I understand how Maven works with .java files in src/java/main. But can it be used for a more general case? Let me put it more abstractly: suppose I already have some a.exe that reads some (not necessarily only .java) sources from directories A1, A2, A3 and puts some files (maybe some of them generated .java) into directories B1, B2. I also have some b.exe that currently reads files from B1, B2, B3 and generates something else, and then some more similar steps. (A real-life problem stands behind this.) I would like to write a POM.xml file so that Maven will do this work. Is that possible? I assume that a.exe and b.exe should be wrapped as Maven plugins. Next, in the Maven docs I see: <build> <sourceDirectory>${basedir}/src/main/java</sourceDirectory> <scriptSourceDirectory>${basedir}/src/main/scripts</scriptSourceDirectory> <testSourceDirectory>${basedir}/src/test/java</testSourceDirectory> <outputDirectory>${basedir}/target/classes</outputDirectory> <testOutputDirectory>${basedir}/target/test-classes</testOutputDirectory> ... </build> What bothers me is that "sourceDirectory" looks like a hard-coded name. Will Maven accept A1 and A2 tags instead?

    Read the article

  • List of values as keys for a Map

    - by thr
    I have lists of variable length, where each item can be one of four unique values, and I need to use these lists as keys for another object in a map. Assume that each value can be either 0, 1, 2 or 3 (it's not an integer in my real code, but it's a lot easier to explain this way), so a few examples of key lists could be: [1, 0, 2, 3] [3, 2, 1] [1, 0, 0, 1, 1, 3] [2, 3, 1, 1, 2] [1, 2] So, to reiterate: each item in the list can be either 0, 1, 2 or 3, and there can be any number of items in a list. My first approach was to try to hash the contents of the array, using the built-in GetHashCode() in .NET to combine the hashes of the elements. But since this would return an int, I would have to deal with collisions manually (two equal int values are identical keys to a Dictionary). So my second approach was to use a quad tree, breaking down each item in the list into a Node that has four pointers (one for each possible value) to the next four possible values, with the root node representing [], the empty list. Inserting [1, 0, 2] => Foo, [1, 3] => Bar and [1, 0] => Baz into this tree would look like this (grey nodes being unused pointers/nodes). I worry about the performance of this setup, but there is no need to deal with hash collisions and the tree won't become too deep (the lists will mostly have 2-6 items, rarely more than 6). Is there some other magic way to store items with lists of values as keys that I have missed?
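
    On the first approach mentioned above (hashing the contents): in .NET a Dictionary asks its IEqualityComparer both for the hash code and for equality, so two lists that happen to share a hash are still told apart by an element-by-element comparison rather than needing manual collision handling. A minimal sketch of such a comparer, assuming int elements purely for illustration, could look like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Treats two lists with the same sequence of values as the same key.
        class SequenceKeyComparer : IEqualityComparer<IList<int>>
        {
            public bool Equals(IList<int> x, IList<int> y)
                => x.SequenceEqual(y);

            public int GetHashCode(IList<int> key)
            {
                int hash = 17;
                foreach (var item in key)
                    hash = hash * 31 + item;    // combine element hashes
                return hash;
            }
        }

        class Demo
        {
            static void Main()
            {
                var map = new Dictionary<IList<int>, string>(new SequenceKeyComparer());
                map[new[] { 1, 0, 2 }] = "Foo";
                map[new[] { 1, 3 }]    = "Bar";

                Console.WriteLine(map[new List<int> { 1, 0, 2 }]);   // prints Foo
            }
        }

    Whether this beats the quad tree for 2-6 element keys is exactly the kind of question the post is asking, so treat it as one candidate rather than the answer.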

    Read the article

  • Parallel features in .Net 4.0

    - by Jonathan.Peppers
    I have been going over the practicality of some of the new parallel features in .NET 4.0. Say I have code like so: foreach (var item in myEnumerable) myDatabase.Insert(item.ConvertToDatabase()); Imagine myDatabase.Insert is performing some work to insert into a SQL database. Theoretically you could write: Parallel.ForEach(myEnumerable, item => myDatabase.Insert(item.ConvertToDatabase())); and automatically you get code that takes advantage of multiple cores. But what if myEnumerable can only be interacted with by a single thread? Will the Parallel class enumerate on a single thread and only dispatch the results to worker threads in the loop? What if myDatabase can only be interacted with by a single thread? It would certainly not be better to make a database connection per iteration of the loop. Finally, what if my "var item" happens to be a UserControl or something that must be interacted with on the UI thread? What design pattern should I follow to solve these problems? It's looking to me like switching over to Parallel/PLINQ/etc. is not exactly easy when you are dealing with real-world applications.
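
    On the "one connection per iteration" worry specifically, Parallel.ForEach has overloads that carry thread-local state (a localInit/body/localFinally triple), which allows one expensive resource per worker rather than one per item. The following is only a rough sketch of that overload; MyDatabase and the item type are hypothetical stand-ins for whatever the question's myDatabase and myEnumerable really are:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        // Hypothetical stand-in for the database wrapper used in the question.
        public class MyDatabase : IDisposable
        {
            public void Insert(object row) { /* write the row to SQL */ }
            public void Dispose()          { /* close the connection  */ }
        }

        public static class ParallelInsertSketch
        {
            public static void InsertAll(IEnumerable<object> items)
            {
                Parallel.ForEach(
                    items,
                    () => new MyDatabase(),          // localInit: one connection per worker
                    (item, loopState, db) =>
                    {
                        db.Insert(item);             // reuse this worker's connection
                        return db;
                    },
                    db => db.Dispose());             // localFinally: clean up per worker
            }
        }

    This does not address the UI-thread case; for a UserControl you would still have to marshal back to the UI thread, which is a separate problem from how the loop is partitioned.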

    Read the article

  • Change form submission (enter to tab)

    - by user1298883
    I have a real basic form (code below) with a bunch of back-end PHP. There is a scanner being used to input the data, but instead of a tab after each item, it sends an "enter" command. Is it viable to add JavaScript so that Enter instead tabs to the next form field, and on the last form field submits the form? I have found a few scripts online, but none of the ones I have tried worked in Firefox/Chrome. CODE: <html><head><title>Barcode Generation</title></head><body> <fieldset style="width: 300px;"> <form action="generator.php" method="post"> Invoice Number:<input type="text" name="invoice" /><br /> Model Number:<input type="text" name="model" /><br /> Serial Number:<input type="text" name="serial" /><br /> <input type="hidden" name="reload" value="true" /> <input type="submit" /> </form><br /><a href=null>en espanol</a></fieldset> </body></html>

    Read the article

  • rails restful select_tag with :on_change

    - by Sam
    So I'm finally starting to use REST in Rails. I want to have a select_tag with product categories, and when one of the categories is selected I want it to update the products on change. I did this before with <% form_for :category, :url => { :action => "show" } do |f| %> <%= select_tag :id, options_from_collection_for_select(Category.find(:all), :id, :name), { :onchange => "this.form.submit();"} %> <% end %> However, now it doesn't work, because it tries to perform the show action. I have two controllers: 1) products and 2) product_categories; a product belongs_to a product_category, which has_many products. How should I do this? Since the products are listed by the products controller's index action, should I use the products controller, or should I use the product_categories controller to find the category (as in the show action) and then render the product/index page? But the real problem I have is how to get this form, or any other option, to work with RESTful routes.

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for a performance analysis of paging, so here is a quick analysis of it. The real challenge of paging is all the unnecessary IO reads from the database; network traffic is one of the reasons paging has become a very expensive operation. I have seen many legacy applications where the complete resultset is brought back to the application and paging is done there. As you read earlier, SQL Server 2011 offers a better alternative to this age-old solution. This article is divided into two parts. Test 1: Performance Comparison of Two Different Pages Using the SQL Server 2011 Method - in this test, we will analyze the performance of two different pages, where one is at the beginning of the table and the other is at its end. Test 2: Performance Comparison of the Two Different Pages Using a CTE (the Earlier Solution from SQL Server 2005/2008) and the New Method in SQL Server 2011 - we will explore this in the next article. This article tackles Test 1 first. Test 1: Retrieving a page from two different locations in the table. Run the following T-SQL script and compare the performance. SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO You will notice that when we read a page from the beginning of the table, far fewer database pages are read than when the page is read from the end of the table. This is very interesting: as the OFFSET changes, PAGE IO increases or decreases accordingly. In the typical search-engine case, people usually read only the first few pages, which means IO increases only as we go further into the higher pages of the navigation. I am really impressed, because with the new method in SQL Server 2011, PAGE IO is much lower when the first few pages of the navigation are retrieved. Test 2: Retrieving a page from two different locations in the table and comparing with earlier versions. In this test, we compare the queries of Test 1 with the earlier solution via a Common Table Expression (CTE), which we used in SQL Server 2005 and SQL Server 2008.
    Test 2 A: page early in the table. -- Test with pages early in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO Test 2 B: page later in the table. -- Test with pages later in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO From the resultsets, it is very clear that the pages read by the earlier solution are always much higher than with the new technique introduced in SQL Server 2011, even though we don't retrieve all the data to the screen. If you look carefully at both comparisons, PAGE IO is much lower with the new SQL Server 2011 technique, whether we read the page from the beginning of the table or from the end. I consider this a big improvement, as paging is one of the most heavily used features in most applications. The solution introduced in SQL Server 2011 is very elegant because it improves the performance of the query and of the database at large. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

    - by pinaldave
    I have been working a lot on wait stats and wait types recently. Last year, I asked blog readers to send me their servers' wait stats, and I appreciate the kind response: I have received wait stats from many readers. I took each of the results and carefully analyzed them, and provided the necessary feedback to the person who sent me their wait stats and wait types. Based on the feedback I got, many of the readers tuned their servers. After a while I received further feedback on my recommendations and again collected wait stats. I recorded the wait stats and my recommendations and did further research. At some point there were more than ten round trips of recommendations and suggestions. Finally, after six months of hands-on performance tuning, I have collected some real-world wisdom from this, and now I plan to share my findings with all of you here. Before anything else, please note that all of these are based on my personal observations and opinions; they may or may not match the theory available elsewhere, and some of the suggestions may not match your situation. Remember, every server is different and, consequently, there is more than one solution to a particular problem. However, this series is written with wait stats in mind; while I was working on various performance tuning consultations, I did many more things than just tune wait stats. Today we will discuss how to capture the wait stats. I use the diagnostic script created by my friend and SQL Server expert Glenn Berry to collect wait stats. Here is the script: -- Isolate top waits for server instance since last restart or statistics clear WITH Waits AS (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s, 100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct, ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn FROM sys.dm_os_wait_stats WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK' ,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE' ,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP','CLR_MANUAL_EVENT' ,'CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT' ,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')) SELECT W1.wait_type, CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s, CAST(W1.pct AS DECIMAL(12, 2)) AS pct, CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct FROM Waits AS W1 INNER JOIN Waits AS W2 ON W2.rn <= W1.rn GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct HAVING SUM(W2.pct) - W1.pct < 99 OPTION (RECOMPILE); -- percentage threshold GO This script uses the Dynamic Management View sys.dm_os_wait_stats to collect the wait stats. It omits the system-related wait stats, which are not useful for diagnosing performance bottlenecks. Additionally, the OPTION (RECOMPILE) hint at the end of the query ensures that every time the query runs it retrieves new data and not cached data. This dynamic management view accumulates its information from the time the SQL Server service was last restarted. You can also manually clear the wait stats using the following command: DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); Once the wait stats are collected, we can start analyzing them and try to see what is causing a particular wait stat to reach a higher percentage than the others. Many wait stats are related to one another.
    When CPU pressure is high, all the CPU-related wait stats show up at the top, but once that is fixed, the CPU-related wait stats drop back to reasonable percentages. It is difficult to give a sure-fire solution, but there are good indications and good suggestions on how to address this. I will keep this blog post updated as I post more details about wait stats and how I reduce them. The reference to Books Online is over here. Of course, I have selected February to run this Wait Stats series - I am already cheating by picking the shortest month for it. :) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Parallel programming in the .NET Framework 4 – Part I

    - by anobre
    Introduction: The advance of technology in recent years has provided, at low cost, access to workstations with many CPUs. Today we easily find client machines with 2, 4 and even 8 cores, not counting the "super servers" with up to 36 processors :) From Wikipedia: the central processing unit (CPU) or processor is the part of a computer system that executes the instructions of a computer program, and it is the primary element in carrying out a computer's functions. The term has been used in the computer industry since at least the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains the same. By analogy, it would be very interesting to delegate real-world tasks that can be executed independently to different people, thereby achieving greater performance/productivity in their execution. Parallel computing is based on the idea that a larger problem can be divided into smaller problems that are solved in parallel. This way of thinking has been used for some time in HPC (high-performance computing), and the affordability of recent years, together with concerns about energy consumption, has made the idea more attractive and easily accessible in any environment. In the .NET Framework: the .NET platform provides a runtime, libraries and tools that give easy, fast access to parallel programming without working directly with threads and the thread pool. This series of posts will present all the available resources, starting the study with the TPL, or Task Parallel Library. Task Parallel Library: the TPL is a set of types located in the System.Threading and System.Threading.Tasks namespaces, as of version 4 of the framework. Starting with version 4 of the framework, the TPL is the recommended way to write parallel and multithreaded code. http://msdn.microsoft.com/en-us/library/dd460717(v=VS.100).aspx Task Parallelism: the term "task parallelism" refers to one or more tasks being executed simultaneously. Think of a task as a method. The easiest way to execute tasks in parallel is the code below: Parallel.Invoke(() => TrabalhoInicial(), () => TrabalhoSeguinte()); What really happens? Behind the scenes, this statement implicitly instantiates objects of type Task, which represents an asynchronous - not necessarily parallel - operation: public class Task : IAsyncResult, IDisposable It is possible to instantiate Tasks explicitly, as a more involved alternative to Parallel.Invoke: var task = new Task(() => TrabalhoInicial()); task.Start(); Another option for instantiating a Task and immediately executing its work is: var t = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor()); var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor()); The basic difference between the two approaches is that the first has an explicitly known start, which is more useful when we do not want instantiation and scheduling of the execution to happen in a single operation, as they do in the second approach. Data Parallelism: still part of the TPL, data parallelism refers to scenarios where the same operation must be executed in parallel over the elements of a collection or array, through the parallel For and ForEach constructs. The basic idea is to take each element of the collection (or array) and work on it with several threads concurrently.
    The key class for this scenario is System.Threading.Tasks.Parallel: // Sequential version foreach (var item in sourceCollection) { Process(item); } // Parallel equivalent Parallel.ForEach(sourceCollection, item => Process(item)); Complicated, right? :) Demonstration: a video with examples (screencast) is available here. Careful! Despite the immense urge to start coding right away, watch out for some basic parallelism problems; this link describes a few such situations. Cheers.
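
    As a small, self-contained companion to the Task<int>.Factory.StartNew example above (the worker methods here are stand-ins defined only so the sketch compiles; they are not from the original post), this is roughly how the two tasks' results would be awaited and combined:

        using System;
        using System.Threading.Tasks;

        class TaskResultSketch
        {
            static int TrabalhoInicialComValor()  { return 21; }   // placeholder work
            static int TrabalhoSeguinteComValor() { return 21; }   // placeholder work

            static void Main()
            {
                var t  = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor());
                var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor());

                // Task<T>.Result blocks until the task has finished and then
                // returns its value, so both tasks can run in parallel up to here.
                Console.WriteLine(t.Result + t2.Result);   // prints 42
            }
        }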

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    - by pinaldave
    What is a spatial database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about this subject at Microsoft TechEd India. If you want to learn about the spatial aspect of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server, and I really like how it is implemented. In general, performance tuning and query optimization are things I have always enjoyed in my professional life. Indexes are my best friends: many times by implementing them, and many times by removing them, I have improved the performance of a system. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover it in ten short but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics: introduction to spatial databases; a one-line definition; understanding spatial indexing; index internals; query/performance tuning; query hinting/cost analysis; spatial index catalog views; performance troubleshooting; finding the optimal index using the spatial index SP; common errors; index maintenance. The slide deck will be followed by around 30 minutes of demos telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure everyone who attends the event will know how Bangalore is positioned on the map of India; I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will be opening it with the beautiful script of Botticelli's Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios where I will be talking about spatial databases and their usage. Do not miss this session; at the end of the session a book will be awarded to the best participant. My session details: Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing. Date: April 14, 2010. Time: 5:00pm-6:00pm. Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Sneak peek at next generation Three MiFi unit – Huawei E585

    - by Liam Westley
    Last Wednesday I was fortunate to be invited to a sneak preview of the next-generation Three MiFi unit, the Huawei E585. Many thanks to all those who posted questions, either via this blog or via @westleyl on Twitter. I think I made sure I asked every question posed of the MiFi product manager from Three UK, so here are the answers you were after. What is a MiFi? For those who are wondering, a MiFi unit is a 3G broadband modem combined with a WiFi access point, providing 3G broadband data access to up to five devices simultaneously via standard WiFi connections. What is different? It appears the prime task in enhancing the MiFi was to improve the user experience and user interface, both in terms of the device hardware and in the management software used to configure the device. I think this was a very sensible decision, as these areas had substantial room for improvement: single-button operation to switch on, enable WiFi and connect to 3G; an improved OLED display (see below), replacing the multi-coloured LEDs and showing signal strength, SMS notifications, the number of connected clients and data usage; management via a web-based dashboard accessible from any web browser - a big win for those running Linux or Mac OS X, for iPad users and, for me, as I can now configure the device from Windows 7 64-bit; charging via micro USB, the new standard for small USB devices (you cannot use your old charger for the new MiFi unit); automatic reconnection when regaining a signal; improved charging time, which should allow the device to be recharged while in use; and, although subjective, the black and silver design does look classier than the silver and white plastic of the original MiFi. What is the same? Virtually the same size and weight; the battery is the same unit as the original MiFi, so you'll have a handy spare if you upgrade; data plans remain the same as for the current MiFi, so the cheapest price for upgraders will be £49 pay as you go; it still only works on 3G networks, with no fallback to GPRS or EDGE; and there is no specific upgrade path for existing Three customers, either from a dongle or from the original MiFi. My opinion: I think Three have concentrated on the right areas of usability and user experience rather than trying to add new whizz-bang technology features which aren't of interest to mainstream users. The one-button operation and the improved device display will make it much easier to use when out and about. If the automatic reconnection proves reliable, that will remove a major bugbear I experienced the previous evening when travelling on the First Great Western line from Paddington to Didcot Parkway: the signal was repeatedly lost as we sped through tunnels and cuttings, and without automatic reconnection it was a real pain to keep pressing the data button on the MiFi to re-establish my data connection. And finally, the web-based dashboard will mean I no longer need to resort to my XP-based netbook to configure the SSID and password; my everyday laptop runs Windows 7 64-bit, which appears to confuse the older 3 WiFi Manager, which cannot locate the MiFi when connected.
    Links to other sites, and other images of the device: good first impressions from Ben Smith, http://thereallymobileproject.com/2010/06/3uk-announce-a-new-mifi-with-a-screen/ and a round-up of other sneak-preview posts, http://www.3mobilebuzz.com/2010/06/11/mifi-round-two-your-view/ Pictures: here is a comparison of the old MiFi device next to the new device, complete with OLED display and the Huawei logo now a prominent feature on the front of the device. One of my fellow bloggers had a Linux-based netbook, showing off the web-based dashboard complete with the Text messages panel for managing SMS. And finally, I never thought that my blog subtitle would ever end up printed onto a cupcake... and here are some of the other cupcakes...

    Read the article

  • NDepend tool – Why every developer working with Visual Studio.NET must try it!

    - by hajan
    In the past two months, I have had the chance to test the capabilities and features of the amazing NDepend tool, designed to help you make your .NET code better, more beautiful, and of higher quality. In other words, this tool will definitely help you harmonize your code. I mean, you've probably heard about chaos theory: experienced developers and architects already know the programming chaos that happens when working with a complex project architecture and its matrix of relationships between objects - even if you are the one who wrote all that code, you know how hard it is to visualize everything the code does. When the application gets more and more complex, you start missing a lot of details in your code... NDepend helps you visualize all those details in a clever way that helps you make smart moves to improve your code. The NDepend tool supports many features, such as: Code Query Language - which will help you write custom rules and query your own code! Imagine you want to find all your methods that have more than 100 lines of code :)! That's simple! I will dig much deeper into NDepend's CQL (Code Query Language) in one of my next blog posts. Architecture Visualization - you are an architect and want to visualize your application's architecture? I wonder how many architects will be really surprised by their architectures, since NDepend shows the whole architecture and each piece of it. NDepend shows you how your code is structured. It shows the architecture in graphs, but if you have a very complex architecture you can see it in a Dependency Matrix, which is better suited to displaying large architectures. Code Metrics - using NDepend's panel, you can see the code base according to code metrics. You can do some additional filtering, like selecting the top code elements ordered by their current code metric value; you can use the CQL language for this purpose too. Smart Search - NDepend has a great search capability, which is again based on CQL (Code Query Language). However, you also have options to search using dropdown lists and text boxes, and it will generate the appropriate CQL code on the fly. Moreover, you can modify the CQL code to fit more advanced searching tasks. Compare Builds and Code Difference - NDepend will also help you compare previous versions of your code with the current one in one of the cleverest ways I've seen so far. Create Custom Rules - using CQL you can create custom rules and let NDepend warn you on each build if you break a rule. Reporting - NDepend can automatically generate reports with detailed stats, graph representations, dependency matrices and some additional advanced reporting features that explain everything related to your application's code and architecture and what you've done. And that's not all; as far as I've seen, there are many other features that NDepend supports. I will dig in more over the coming days and will blog more about it. The team who built NDepend have also created good documentation, which you can find on the NDepend website, along with some good videos that will help you get started quite fast. It's easy to install and, very importantly, it is fully integrated with Visual Studio. To get you started, you can watch the Getting Started online demo and tutorial with explanations and screenshots.
    If you are interested in knowing more about how to use the features of this tool, either visit their website or wait for my next blog posts, where I will show some real examples of using the tool and how it helps make your code better. And the last thing for this blog post: I would like to quote one sentence from NDepend's home page, which says: 'Hence the software design becomes concrete, code reviews are effective, large refactoring are easy and evolution is mastered.' Website: www.ndepend.com Getting Started: http://www.ndepend.com/GettingStarted.aspx Features: http://www.ndepend.com/Features.aspx Download: http://www.ndepend.com/NDependDownload.aspx Hope you like it! Please let me know your feedback by commenting on my blog post. Kind Regards, Hajan

    Read the article
