Search Results

Search found 9193 results on 368 pages for 'batcher sort'.

Page 11/368 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Sorting a list of variable length integers delimited by decimal points...

    - by brewerdc
    Hey guys, I'm in need of some help. I have a list of delimited integer values that I need to sort. An example:
    Typical (alpha?) sort: 1.1.32.22 11.2.4 2.1.3.4 2.11.23.1.2 2.3.7 3.12.3.5
    Correct (numerical) sort: 1.1.32.22 2.1.3.4 2.3.7 2.11.23.1.2 3.12.3.5 11.2.4
    I'm having trouble figuring out how to set up the algorithm to do such a sort with n decimal delimiters and m integer fields. Any ideas? This has to have been done before. Let me know if you need more information. Thanks a bunch! -Daniel
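    A minimal C++ sketch of one common approach (not from the original question): split each value on the dots, compare the pieces as integer sequences, and let std::sort do the rest. Function and variable names here are illustrative.

        #include <algorithm>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Turn "2.11.23.1.2" into {2, 11, 23, 1, 2} so the fields compare numerically.
        std::vector<int> toKey(const std::string &s) {
            std::vector<int> key;
            std::stringstream ss(s);
            std::string part;
            while (std::getline(ss, part, '.'))
                key.push_back(std::stoi(part));
            return key;
        }

        int main() {
            std::vector<std::string> v = {"1.1.32.22", "11.2.4", "2.1.3.4",
                                          "2.11.23.1.2", "2.3.7", "3.12.3.5"};
            std::sort(v.begin(), v.end(),
                      [](const std::string &a, const std::string &b) { return toKey(a) < toKey(b); });
            for (const auto &s : v) std::cout << s << '\n';   // prints the "correct" order from the example
        }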

    Read the article

  • Sort a list of pointers.

    - by YuppieNetworking
    Hello all, Once again I find myself failing at some really simple task in C++. Sometimes I wish I could de-learn all I know from OO in java, since my problems usually start by thinking like Java. Anyways, I have a std::list<BaseObject*> that I want to sort. Let's say that BaseObject is: class BaseObject { protected: int id; public: BaseObject(int i) : id(i) {}; virtual ~BaseObject() {}; }; I can sort the list of pointer to BaseObject with a comparator struct: struct Comparator { bool operator()(const BaseObject* o1, const BaseObject* o2) const { return o1->id < o2->id; } }; And it would look like this: std::list<BaseObject*> mylist; mylist.push_back(new BaseObject(1)); mylist.push_back(new BaseObject(2)); // ... mylist.sort(Comparator()); // intentionally omitted deletes and exception handling Until here, everything is a-ok. However, I introduced some derived classes: class Child : public BaseObject { protected: int var; public: Child(int id1, int n) : BaseObject(id1), var(n) {}; virtual ~Child() {}; }; class GrandChild : public Child { public: GrandChild(int id1, int n) : Child(id1,n) {}; virtual ~GrandChild() {}; }; So now I would like to sort following the following rules: For any Child object c and BaseObject b, b<c To compare BaseObject objects, use its ids, as before. To compare Child objects, compare its vars. If they are equal, fallback to rule 2. GrandChild objects should fallback to the Child behavior (rule 3). I initially thought that I could probably do some casts in Comparator. However, this casts away constness. Then I thought that probably I could compare typeids, but then everything looked messy and it is not even correct. How could I implement this sort, still using list<BaseObject*>::sort ? Thank you
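    One way to get these rules without casting away constness is to dynamic_cast each pointer to const Child * inside the comparator; a cast to pointer-to-const keeps the const qualifier. The sketch below is a self-contained, simplified version of the question's classes (members are made public for brevity, which the real code would avoid via getters or a friend declaration) and is only meant to show the shape of such a comparator:

        #include <iostream>
        #include <list>

        struct BaseObject {
            int id;
            explicit BaseObject(int i) : id(i) {}
            virtual ~BaseObject() {}
        };

        struct Child : BaseObject {
            int var;
            Child(int i, int n) : BaseObject(i), var(n) {}
        };

        struct GrandChild : Child {
            GrandChild(int i, int n) : Child(i, n) {}
        };

        // Rule 1: any BaseObject sorts before any Child (or GrandChild).
        // Rules 2-4: plain objects compare by id; Child and below compare by var, then id.
        struct Comparator {
            bool operator()(const BaseObject *a, const BaseObject *b) const {
                const Child *ca = dynamic_cast<const Child *>(a);  // null if not a Child
                const Child *cb = dynamic_cast<const Child *>(b);
                if (!ca && cb) return true;    // base < child
                if (ca && !cb) return false;   // child is never < base
                if (ca && cb && ca->var != cb->var) return ca->var < cb->var;
                return a->id < b->id;          // fallback for both cases
            }
        };

        int main() {
            std::list<BaseObject *> l;
            l.push_back(new Child(1, 7));
            l.push_back(new BaseObject(3));
            l.push_back(new GrandChild(2, 5));
            l.push_back(new BaseObject(1));
            l.sort(Comparator());
            for (BaseObject *p : l) { std::cout << p->id << ' '; delete p; }
            std::cout << '\n';   // prints: 1 3 2 1 (bases by id first, then children by var)
        }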

    Read the article

  • magento override sort order of product list by subcategories

    - by dardub
    I'm setting up a new store using 1.4.1. I'm trying to sort the products in the list by the subcategories they belong to. At the top of list.phtml is $_productCollection=$this->getLoadedProductCollection(); I tried adding sort filters to that by adding the line $_productCollection->setOrder('category_ids', 'asc')->setOrder('name', 'asc'); I also tried addAttributeToSort instead of setOrder. This doesn't seem to have any effect. I'm guessing that $_productCollection is not a model I can sort in this manner. I have been digging around trying to find the correct place to apply the sort method without any success. Can someone tell me the proper place to do this?

    Read the article

  • What should students be taught first when first learning sorting algorithms?

    - by Johan
    If you were a programming teacher and you had to choose one sorting algorithm to teach your students which one would it be? I am asking for only one because I just want to introduce the concept of sorting. Should it be the bubble sort or the selection sort? I have noticed that these two are taught most often. Is there another type of sort that will explain sorting in an easier to understand way?
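    For reference, selection sort is often favoured as a first example because its invariant is easy to state: after pass i, the first i positions hold the i smallest values. A short illustration in C++, not tied to any particular curriculum:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        // Selection sort: repeatedly move the smallest remaining element to the front.
        void selectionSort(std::vector<int> &a) {
            for (std::size_t i = 0; i + 1 < a.size(); ++i) {
                std::size_t smallest = i;
                for (std::size_t j = i + 1; j < a.size(); ++j)
                    if (a[j] < a[smallest]) smallest = j;
                std::swap(a[i], a[smallest]);
            }
        }

        int main() {
            std::vector<int> v = {5, 2, 9, 1, 7};
            selectionSort(v);
            for (int x : v) std::cout << x << ' ';   // prints: 1 2 5 7 9
        }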

    Read the article

  • numeric sort with NSSortDescriptor for NSFetchedResultsController

    - by edziubudzik
    I'm trying to numerically sort data that's displayed in a UITableView. Before that I used the following sort descriptor: sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES selector:@selector(caseInsensitiveCompare:)]; Now I'd like to use a block to sort this numerically, like this: sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES comparator:^NSComparisonResult(id obj1, id obj2) { return [((NSString *)obj1) compare:(NSString *)obj2 options:NSCaseInsensitiveSearch | NSNumericSearch]; }]; but it sorts the data incorrectly, causing a conflict with section names in NSFetchedResultsController. So I tried to imitate the old sorting with a comparator block - just to be sure that the problem is not caused by the numeric comparison. The problem is that these lines: sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES comparator:^NSComparisonResult(id obj1, id obj2) { return [((NSString *)obj1) caseInsensitiveCompare:(NSString *)obj2]; }]; also cause the same error, and I don't see why they won't sort the data in the same way the first method did... Any ideas?

    Read the article

  • Calculate distances and sort them

    - by Emir
    Hi guys, I wrote a function that can calculate the distance between two addresses using the Google Maps API. The addresses are obtained from the database. What I want to do is calculate the distance using the function I wrote and sort the places according to the distance. Just like "Locate Store Near You" feature in online stores. I'm going to specify what I want to do with an example: So, lets say we have 10 addresses in database. And we have a variable $currentlocation. And I have a function called calcdist(), so that I can calculate the distances between 10 addresses and $currentlocation, and sort them. Here is how I do it: $query = mysql_query("SELECT name, address FROM table"); while ($write = mysql_fetch_array($query)) { $distance = array(calcdist($currentlocation, $write["address"])); sort($distance); for ($i=0; $i<1; $i++) { echo "<tr><td><strong>".$distance[$i]." kms</strong></td><td>".$write['name']."</td></tr>"; } } But this doesn't work very well. It doesn't sort the numbers. Another challenge: How can I do this in an efficient way? Imagine there are infinite numbers of addresses; how can I sort these addresses and page them?
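    Whatever language is used, the usual shape of the fix is to compute all the distances first, then sort the whole collection once, and only then print it - rather than sorting a one-element array inside the row loop. A small C++ sketch of that shape (calcDist here is a dummy stub standing in for the question's calcdist(); all names and data are invented for illustration):

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        // Stub standing in for the question's calcdist(); a real version would call the Maps API.
        double calcDist(const std::string &from, const std::string &to) {
            return static_cast<double>(from.size() + to.size());   // dummy value, illustration only
        }

        int main() {
            std::string current = "1 Example Street";
            // (name, address) rows as they might come back from the database.
            std::vector<std::pair<std::string, std::string>> rows = {
                {"Store A", "12 Long Avenue, Springfield"},
                {"Store B", "3 Short Rd"},
            };

            // 1) Compute every distance first, keeping the name alongside it...
            std::vector<std::pair<double, std::string>> byDistance;
            for (const auto &row : rows)
                byDistance.push_back({calcDist(current, row.second), row.first});

            // 2) ...then sort the whole collection once, and only then print it.
            std::sort(byDistance.begin(), byDistance.end());

            for (const auto &entry : byDistance)
                std::cout << entry.second << ": " << entry.first << " kms\n";
        }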

    Read the article

  • Insertion sort invariant assertion fails

    - by user1661211
    In the following code at the end of the for loop I use the assert function in order to test that a[i+1] is greater than or equal to a[i] but I get the following error (after the code below). Also in c++ the assert with the following seems to work just fine but in python (the following code) it does not seem to work...anyone know why? import random class Sorting: #Precondition: An array a with values. #Postcondition: Array a[1...n] is sorted. def insertion_sort(self,a): #First loop invariant: Array a[1...i] is sorted. for j in range(1,len(a)): key = a[j] i = j-1 #Second loop invariant: a[i] is the greatest value from a[i...j-1] while i >= 0 and a[i] > key: a[i+1] = a[i] i = i-1 a[i+1] = key assert a[i+1] >= a[i] return a def random_array(self,size): b = [] for i in range(0,size): b.append(random.randint(0,1000)) return b sort = Sorting() print sort.insertion_sort(sort.random_array(10)) The Error: Traceback (most recent call last): File "C:\Users\Albaraa\Desktop\CS253\Programming 1\Insertion_Sort.py", line 27, in <module> print sort.insertion_sort(sort.random_array(10)) File "C:\Users\Albaraa\Desktop\CS253\Programming 1\Insertion_Sort.py", line 16, in insertion_sort assert a[i+1] >= a[i] AssertionError

    Read the article

  • Sort std::vector by an element inside?

    - by user146780
    I currently have a std::vector which holds std::vectors of double. I want to sort it by the second element of the double vector. ex: instead of sorting by MyVec[0] or myvec[1], I want it to sort myVec[0] and myvec[1] based on myvec[0][1] and myvec[1][1]. Basically, sort by a contained value, not the objects in it. So if myvec[0][1] is less than myvec[1][1] then myvec[0] and myvec[1] will swap. Thanks
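    Assuming every inner vector has at least two elements, a comparator that looks only at element [1] is enough. A minimal sketch:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<std::vector<double>> myvec = {{3.0, 9.5}, {1.0, 2.5}, {7.0, 4.0}};

            // Order the outer vector by each inner vector's second element.
            std::sort(myvec.begin(), myvec.end(),
                      [](const std::vector<double> &a, const std::vector<double> &b) {
                          return a[1] < b[1];
                      });

            for (const auto &row : myvec)
                std::cout << row[0] << ", " << row[1] << '\n';   // 1, 2.5 / 7, 4 / 3, 9.5
        }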

    Read the article

  • c++ Sorting a vector based on values of other vector, or what's faster?

    - by pollux
    Hi, There are a couple of other posts about sorting a vector A based on values in another vector B. Most of the other answers tell to create a struct or a class to combine the values into one object and use std::sort. Though I'm curious about the performance of such solutions as I need to optimize code which implements bubble sort to sort these two vectors. I'm thinking to use a vector<pair<int,int>> and sort that. I'm working on a blob-tracking application (image analysis) where I try to match previously tracked blobs against newly detected blobs in video frames where I check each of the frames against a couple of previously tracked frames and of course the blobs I found in previous frames. I'm doing this at 60 times per second (speed of my webcam). Any advice on optimizing this is appreciated. The code I'm trying to optimize can be shown here: http://code.google.com/p/projectknave/source/browse/trunk/knaveAddons/ofxBlobTracker/ofCvBlobTracker.cpp?spec=svn313&r=313 Thanks
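    For the vector<pair<int,int>> idea mentioned in the question, a minimal sketch: copy the two vectors into (key, value) pairs, sort once, and read the values back. std::pair already compares by first and then second, so no custom comparator is needed. The data here is invented for illustration.

        #include <algorithm>
        #include <iostream>
        #include <utility>
        #include <vector>

        int main() {
            std::vector<int> a = {10, 20, 30, 40};   // values to reorder
            std::vector<int> b = { 3,  1,  4,  2};   // keys that define the order

            // Zip the two vectors into (key, value) pairs and sort once.
            std::vector<std::pair<int, int>> zipped;
            for (std::size_t i = 0; i < a.size(); ++i)
                zipped.push_back({b[i], a[i]});

            std::sort(zipped.begin(), zipped.end());

            // Read the reordered values back into a.
            for (std::size_t i = 0; i < a.size(); ++i)
                a[i] = zipped[i].second;

            for (int x : a) std::cout << x << ' ';   // prints: 20 40 10 30
        }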

    Read the article

  • List files recursively and sort by modification time

    - by Problemaniac
    How do I list all files under a directory recursively and sort the output by modification time? I normally use ls -lhtc but it doesn't find all files recursively. I am using Linux and Mac. ls -l on Mac OS X can give
    -rw-r--r-- 1 fsr user 1928 Mar 1 2011 foo.c
    -rwx------ 1 fsr user 3509 Feb 25 14:34 bar.c
    where the date part isn't consistent or aligned, so a solution has to take this into account. Partial solution: stat -f "%m%t%Sm %N" ./* | sort -rn | head -3 | cut -f2- works, but not recursively.

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Why is my List.Sort method in C# reversing the order of my list?

    - by Fiona Holder
    I have a list of items in a generic list:
    A1 (sort index 1)
    A2 (sort index 2)
    B1 (sort index 3)
    B2 (sort index 3)
    B3 (sort index 3)
    The comparator on them takes the form: this.sortIndex.CompareTo(other.sortIndex)
    When I do a List.Sort() on the list of items, I get the following order out: A1 A2 B3 B2 B1
    It has obviously worked in the sense that the sort indexes are in the right order, but I really don't want it to be re-ordering the 'B' items. Is there any tweak I can make to my comparator to fix this?
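    The underlying issue is that List<T>.Sort is documented as an unstable sort, so items with equal keys may be reordered; LINQ's OrderBy is stable, or the comparer can fall back to each item's original position as a tiebreaker. The same distinction exists in C++ between std::sort and std::stable_sort, which the short sketch below uses purely to illustrate the concept (data mirrors the question):

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        typedef std::pair<int, std::string> Item;   // (sort index, label)

        int main() {
            std::vector<Item> items = {{1, "A1"}, {2, "A2"}, {3, "B1"}, {3, "B2"}, {3, "B3"}};

            // std::sort (like List<T>.Sort) may reorder elements whose keys compare equal;
            // std::stable_sort keeps B1, B2, B3 in their original relative order.
            std::stable_sort(items.begin(), items.end(),
                             [](const Item &x, const Item &y) { return x.first < y.first; });

            for (const auto &item : items) std::cout << item.second << ' ';   // A1 A2 B1 B2 B3
        }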

    Read the article

  • Unix/Linux find and sort by date modified

    - by Richard Easton
    How can I do a simple find which would order the results by most recently modified? Here is the current find I am using (I am doing a shell escape in PHP, so that is the reasoning for the variables): find '$dir' -name '$str'\* -print | head -10 How could I have this order the search by most recently modified? (Note I do not want it to sort 'after' the search, but rather find the results based on what was most recently modified.)

    Read the article

  • Sorting in Pivot Table on how data is summarized, not just the value

    - by user26453
    Often I am creating pivot tables that summarize some count by some category. Let's say I am counting Yes/No responses by some category. I usually add the count field and display it as a "% of row", and then create a pivot chart. However, if I want to sort one of the columns, say "Yes", Excel sorts by the underlying count, not the calculated percentage. Any way around this?

    Read the article

  • Sort Programs in StartMenu

    - by serhio
    Is there a way (alphabetical or otherwise) to sort the program list in the Start > Programs menu? When installing / uninstalling programs, entries are added to and removed from this menu, but it is not sorted, and it sometimes takes time to find a nested program in the list. I have Windows XP, but I think the same situation applies to other versions of Windows.

    Read the article

  • LibGDX onTouch() method Array and flip method

    - by johnny-b
    How can I add this on my application. i want to use the onTouch() method from the implementation of the InputProcessor to kill the enemies on screen. how do i do that? do i have to do anything to the enemy class? also i am trying to add a Array of enemies and it keeps throwing exceptions or the bullet now is facing LEFT <--- again after I used the flip method in the bullet class. All the code is below so please anyone feel free to have a look thanks. please help Thank you M // This is the bullet class. public class Bullet extends Sprite { public static final float BULLET_HOMING = 6000; public static final float BULLET_SPEED = 300; private Vector2 velocity; private float lifetime; private Rectangle bul; public Bullet(float x, float y) { velocity = new Vector2(0, 0); setPosition(x, y); AssetLoader.bullet1.flip(true, false); AssetLoader.bullet2.flip(true, false); setSize(AssetLoader.bullet1.getWidth(), AssetLoader.bullet1.getHeight()); bul = new Rectangle(); } public void update(float delta) { float targetX = GameWorld.getBall().getX(); float targetY = GameWorld.getBall().getY(); float dx = targetX - getX(); float dy = targetY - getY(); float distToTarget = (float) Math.sqrt(dx * dx + dy * dy); dx /= distToTarget; dy /= distToTarget; dx *= BULLET_HOMING; dy *= BULLET_HOMING; velocity.x += dx * delta; velocity.y += dy * delta; float vMag = (float) Math.sqrt(velocity.x * velocity.x + velocity.y * velocity.y); velocity.x /= vMag; velocity.y /= vMag; velocity.x *= BULLET_SPEED; velocity.y *= BULLET_SPEED; bul.set(getX(), getY(), getOriginX(), getOriginY()); Vector2 v = velocity.cpy().scl(delta); setPosition(getX() + v.x, getY() + v.y); setOriginCenter(); setRotation(velocity.angle()); } public Rectangle getBounds() { return bul; } public Rectangle getBounds1() { return this.getBoundingRectangle(); } } // This is the class where i load all the images from public class AssetLoader { public static Texture texture; public static TextureRegion bg, ball1, ball2; public static Animation bulletAnimation, ballAnimation; public static Sprite bullet1, bullet2; public static void load() { texture = new Texture(Gdx.files.internal("SpriteN1.png")); texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest); bg = new TextureRegion(texture, 80, 421, 395, 30); bg.flip(false, true); ball1 = new TextureRegion(texture, 0, 321, 32, 32); ball1.flip(false, true); ball2 = new TextureRegion(texture, 32, 321, 32, 32); ball2.flip(false, true); bullet1 = new Sprite(texture, 380, 350, 45, 20); bullet1.flip(false, true); bullet2 = new Sprite(texture, 425, 350, 45, 20); bullet2.flip(false, true); TextureRegion[] balls = { ball1, ball2 }; ballAnimation = new Animation(0.16f, balls); ballAnimation.setPlayMode(Animation.PlayMode.LOOP); } Sprite[] bullets = { bullet1, bullet2 }; bulletAnimation = new Animation(0.06f, aims); bulletAnimation.setPlayMode(Animation.PlayMode.LOOP); } public static void dispose() { texture.dispose(); } // This is for the rendering or drawing onto the screen/canvas. 
public class GameRenderer { private Bullet bullet; private Ball ball; public GameRenderer(GameWorld world) { myWorld = world; cam = new OrthographicCamera(); cam.setToOrtho(true, 480, 320); batcher = new SpriteBatch(); // Attach batcher to camera batcher.setProjectionMatrix(cam.combined); shapeRenderer = new ShapeRenderer(); shapeRenderer.setProjectionMatrix(cam.combined); // Call helper methods to initialize instance variables initGameObjects(); initAssets(); } private void initGameObjects() { ball = GameWorld.getBall(); bullet = myWorld.getBullet(); scroller = myWorld.getScroller(); } private void initAssets() { ballAnimation = AssetLoader.ballAnimation; bulletAnimation = AssetLoader.bulletAnimation; } public void render(float runTime) { Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT); batcher.begin(); batcher.disableBlending(); batcher.enableBlending(); batcher.draw(AssetLoader.ballAnimation.getKeyFrame(runTime), ball.getX(), ball.getY(), ball.getWidth(), ball.getHeight()); batcher.draw(AssetLoader.bulletAnimation.getKeyFrame(runTime), bullet.getX(), bullet.getY(), bullet.getOriginX(), bullet.getOriginY(), bullet.getWidth(), bullet.getHeight(), 1.0f, 1.0f, bullet.getRotation()); // End SpriteBatch batcher.end(); } } // this is to load the image etc on the screen i guess public class GameWorld { public static Ball ball; private Bullet bullet; private ScrollHandler scroller; public GameWorld() { ball = new Ball(480, 273, 32, 32); bullet = new Bullet(10, 10); scroller = new ScrollHandler(0); } public void update(float delta) { ball.update(delta); bullet.update(delta); scroller.update(delta); } public static Ball getBall() { return ball; } public ScrollHandler getScroller() { return scroller; } public Bullet getBullet() { return bullet; } } //This is the input handler class public class InputHandler implements InputProcessor { private Ball myBall; private Bullet bullet; private GameRenderer aims; // Ask for a reference to the Soldier when InputHandler is created. public InputHandler(Ball ball) { myBall = ball; } @Override public boolean touchDown(int screenX, int screenY, int pointer, int button) { return false; } @Override public boolean keyDown(int keycode) { return false; } @Override public boolean keyUp(int keycode) { return false; } @Override public boolean keyTyped(char character) { return false; } @Override public boolean touchUp(int screenX, int screenY, int pointer, int button) { return false; } @Override public boolean touchDragged(int screenX, int screenY, int pointer) { return false; } @Override public boolean mouseMoved(int screenX, int screenY) { return false; } @Override public boolean scrolled(int amount) { return false; } } i am rendering all graphics in a GameRender class and a gameworld class if you need more info please let me know I am trying to make the array work but keep finding that when an array is initialized then the bullet fips back to the original and ends up being backwards???? and if I create an array I keep getting Exceptions throw??? Thank you for any help given.

    Read the article

  • Sort highest match in excel

    - by Chris
    Hello, how can I sort multiple columns with data in Excel? Column B = Subscribe date, Column A = Subscribe name. I have multiple columns with a lot of duplicate names (A) and different subscribe dates (B). How can this be sorted so that all names are sorted, but the highest subscribe date is flagged as HIGHEST in column C? That way you can see directly which is the highest date.

    Read the article

  • Automatic sort for excel worksheet

    - by Joseph
    I want to create a to-do list in Excel that automatically sorts the to-do entries in a list, in order of ones to do first (closest deadlines). I would also like a section that shows the tasks for today and another for high-priority tasks coming up within a week. I have not programmed in Excel before. I know Python and JavaScript, but want an Excel solution that runs inside Excel (maybe using VBA, the Excel programming language). Is this sort of thing possible in Excel?

    Read the article

  • VBA compare and sort strings with quirky characters

    - by Smandoli
    I am comparing text values from two DAO recordsets in MS Access. I sort on the text field, then go through both recordsets comparing the values from each. The sets are substantially different and while they're mostly alpha-numeric, spaces and symbols like hyphens and periods are very common. My program depends on predictable sorting and fool-proof comparing. But unfortunately, the sort will rank two values differently than the comparison function. StrComp is the obvious first choice: varResult = StrComp(Val_1, Val_2) RFA-300 14.9044 RFA300 14-2044 But for the two pairs above, StrComp returns a different value than one would expect based on the sort. Including vbTextCompare or vbBinaryCompare affects StrComp's result, but not so as to solve the problem. Note the values must always be compared as strings. Of course I make sure that "14-2044" and "14.9044" aren't evaluated as -2030 and ~15. That's not the cause of my problem. I learned API-based functions are more reliable for quirky texts, so I tried these: varResult = CompareString(LOCALE_SYSTEM_DEFAULT, _ SORT_STRINGSORT, strVal_2, -1, strVal_1, -1) varResult = CompareString(LOCALE_SYSTEM_DEFAULT, _ NORM_IGNOREWIDTH, strVal_2, -1, strVal_1, -1) The first one returns the opposite of StrComp. The second one returns the same as StrComp. But neither yields a result that is consistent with the sort order. (NORM_IGNOREWIDTH is probably not relevant, but I needed a place-holder substitute and it looked as good as any.) UPDATE: This is a complete rewrite of the original post, deleting all the info about why I really need this -- just take my word for it and enjoy the brevity.

    Read the article

  • linq - how to sort a list

    - by Billy Logan
    Hello everyone, I have a linq query that populates a list of designers. since i am using the filters below my sorting is not functioning properly. My question is with the given code below how can i best sort this List after the fact or sort while querying? I have tried to sort the list after the fact using the following script but i receive a compiler error: List<TBLDESIGNER> designers = new List<TBLDESIGNER>(); designers = 'calls my procedure below and comes back with an unsorted list of designers' designers.Sort((x, y) => string.Compare(x.FIRST_NAME, y.LAST_NAME)); Query goes as follows: List<TBLDESIGNER> designer = null; using (SOAE strikeOffContext = new SOAE()) { //Invoke the query designer = AdminDelegates.selectDesignerDesigns.Invoke(strikeOffContext).ByActive(active).ByAdmin(admin).ToList(); } Delegate: public static Func<SOAE, IQueryable<TBLDESIGNER>> selectDesignerDesigns = CompiledQuery.Compile<SOAE, IQueryable<TBLDESIGNER>>( (designer) => from c in designer.TBLDESIGNER.Include("TBLDESIGN") orderby c.FIRST_NAME ascending select c); Filter ByActive: public static IQueryable<TBLDESIGNER> ByActive(this IQueryable<TBLDESIGNER> qry, bool active) { //Return the filtered IQueryable object return from c in qry where c.ACTIVE == active select c; } Filter ByAdmin: public static IQueryable<TBLDESIGNER> ByAdmin(this IQueryable<TBLDESIGNER> qry, bool admin) { //Return the filtered IQueryable object return from c in qry where c.SITE_ADMIN == admin select c; } Thanks in advance, Billy

    Read the article

  • How to sort a date array in PHP

    - by Click Upvote
    I have an array in this format: Array ( [0] => Array ( [28th February, 2009] => 'bla' ) [1] => Array ( [19th March, 2009] => 'bla' ) [2] => Array ( [5th April, 2009] => 'bla' ) [3] => Array ( [19th April, 2009] => 'bla' ) [4] => Array ( [2nd May, 2009] => 'bla' ) ) I want to sort them out in the ascending order of the dates (based on the month, day, and year). What's the best way to do that? Originally the emails are being fetched in the MySQL date format, so it's possible for me to get the array in this state: Array [ ['2008-02-28']='some text', ['2008-03-06']='some text' ] Perhaps when it's in this format, I can loop through them, remove all the '-' (hyphen) marks so they are left as integers, sort them using array_sort() and loop through them yet again to sort them? I would prefer another way, as I'd be doing 3 loops with this per user. Thanks. Edit: I could also do this: $array[$index]=array('human'=>'28 Feb, 2009', 'db'=>'20080228', 'description'=>'Some text here'); But using this, would there be any way to sort the array based on the 'db' element alone? Edit 2: Updated initial var_dump
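    One observation, independent of PHP: dates in the MySQL 'YYYY-MM-DD' format already sort correctly as plain strings, so keeping that format as the key needs only a single sort and no hyphen-stripping. A small C++ sketch of the idea, with sample data invented for illustration:

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        int main() {
            // ISO-style "YYYY-MM-DD" dates compare correctly as plain strings.
            std::vector<std::pair<std::string, std::string>> entries = {
                {"2009-04-05", "some text"},
                {"2009-02-28", "bla"},
                {"2009-03-19", "bla"},
            };

            std::sort(entries.begin(), entries.end());   // pairs sort by the date string first

            for (const auto &e : entries)
                std::cout << e.first << " -> " << e.second << '\n';   // chronological order
        }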

    Read the article

  • How to sort an XML file by date in XSLT

    - by AdRock
    I am trying to sort by date and get an error message about the stylesheet can't be loaded I found an answer on how others have suggested but it doesn't work for me Here is where it is supposed to sort. The commented out line is where the sort should occur <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:template name="hoo" match="/"> <html> <head> <title>Registered Festival Organisers and Festivals</title> <link rel="stylesheet" type="text/css" href="userfestival.css" /> </head> <body> <h1>Registered Festival Organisers and Festivals</h1> <xsl:for-each select="folktask/member"> <xsl:if test="user/account/userlevel='3'"> <!--<xsl:sort select="concat(substring(festival/event/datefrom,1,4),substring(festival/event/datefrom, 6,2),substring(festival/event/datefrom, 9,2))" data-type="number" order="ascending"/>--> Sample node from XML <festival id="1"> <event> <eventname>Oxford Folk Festival</eventname> <url>http://www.oxfordfolkfestival.com/</url> <datefrom>2010-04-07</datefrom> <dateto>2010-04-09</dateto> <location>Oxford</location> <eventpostcode>OX1 9BE</eventpostcode> <coords> <lat>51.735640</lat> <lng>-1.276136</lng> </coords> </event> </festival>

    Read the article

  • barebones sort algorithm

    - by user309322
    I have been asked to make a simple sort algorithm to sort a random series of 6 numbers into numerical order. However, I have been asked to do this using "Barebones", a theoretical language put forward in the book Computer Science: An Overview. Some information on the language can be found here: http://www.brouhaha.com/~eric/software/barebones/ Just to clarify, I am a student teacher and have been doing analysis on "mini-programming languages" and their uses in a teaching environment. I suggested to my tutor that I look at Barebones and asked what sort of example program I should write. He suggested a simple sort algorithm. Now, since looking at the language, I can't understand how I can do this without using arrays and if statements. The code to swap the value of variables would be while a not 0 do; incr Aux1; decr a; end; while b not 0 do; incr Aux2 decr b end; while Aux1 not 0 do; incr a; decr Aux1; end; while Aux2 not 0 do; incr b; decr Aux2; end; however the language does not provide < or > operators.

    Read the article

  • Sort Dictionary<> on value, lookup index from key

    - by paulio
    Hi, I have a Dictionary<int, MyObject> which I want to sort based on value, so I've done this by putting the dictionary into a List<KeyValuePair<int, MyObject>> and then using the .Sort method. I've then added this back into a Dictionary<int, MyObject>. Is it possible to look up the new index/order by using the Dictionary key?? Dictionary<int, MyObject> toCompare = new Dictionary<int, MyObject>(); toCompare.Add(0, new MyObject()); toCompare.Add(1, new MyObject()); toCompare.Add(2, new MyObject()); Dictionary<int, MyObject> items = new Dictionary<int, MyObject>(); List<KeyValuePair<int, MyObject>> values = new List<KeyValuePair<int, MyObject>> (toCompare); // Sort. values.Sort(new MyComparer()); // Convert back into a dictionary. foreach(KeyValuePair<int, PropertyAppraisal> item in values) { // Add to collection. items.Add(item.Key, item.Value); } // THIS IS THE PART I CAN'T DO... int sortedIndex = items.GetItemIndexByKey(0);
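    The general shape of one approach, sketched in C++ for illustration only (the types below stand in for the question's Dictionary<int, MyObject>): sort the key/value pairs by value, then build a small key-to-position map so the sorted index can be looked up by key afterwards.

        #include <algorithm>
        #include <iostream>
        #include <map>
        #include <string>
        #include <utility>
        #include <vector>

        typedef std::pair<int, std::string> Entry;   // key -> value, like Dictionary<int, MyObject>

        int main() {
            std::vector<Entry> items = {{0, "c"}, {1, "a"}, {2, "b"}};

            // Sort by value (what the question's MyComparer does).
            std::sort(items.begin(), items.end(),
                      [](const Entry &x, const Entry &y) { return x.second < y.second; });

            // Record each key's position in the sorted order so it can be looked up afterwards.
            std::map<int, std::size_t> indexByKey;
            for (std::size_t i = 0; i < items.size(); ++i)
                indexByKey[items[i].first] = i;

            std::cout << indexByKey[0] << '\n';   // key 0 holds "c", which sorted to index 2
        }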

    Read the article
