Search Results

Search found 9199 results on 368 pages for 'topological sort'.

Page 41 of 368

  • How can I set a default sort for tables in PHPMyAdmin (i.e. always "Primary key - Descending")

    - by jeremyclarke
    Even though it's obnoxious in a lot of ways, I use PHPMyAdmin all the time to debug database issues while writing PHP. By default it sorts tables by primary key, ascending. 99% of the time I would rather have the newest data (my test data) shown at the top rather than the useless first few records ever saved. Is there a way to configure PHPMyAdmin to show the newest records by default, or to alter similar behavior?
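
    One possible starting point (a sketch, not from the original thread): phpMyAdmin's config.inc.php has a documented $cfg['Order'] directive that controls the default browse direction. Check your version's documentation; it sets the direction of the default sort rather than guaranteeing primary-key-descending in every situation:

        // config.inc.php
        // Accepts 'ASC', 'DESC', or 'SMART' ('SMART' = descending for
        // date/time columns, ascending otherwise).
        $cfg['Order'] = 'DESC';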

    Read the article

  • How do I sort an activerecord result set on an i18n translated column?

    - by PlanetMaster
    Hi, I have the following line in a view:

        <%= f.select(:province_id, options_from_collection_for_select(
              Province.find(:all,
                            :conditions => { :country_id => @property.country_id },
                            :order => "provinces.name ASC"),
              :id, :name)) %>

    In the province model I have the following:

        def name
          I18n.t(super)
        end

    The problem is that the :name field is translated (through the province model) while the ordering is done by ActiveRecord on the English name, so a non-English result set can end up wrongly sorted. We have a province in Belgium called 'Oost-Vlaanderen'. In English that is 'East-Flanders'. Not good for sorting :) I need something like this, but it does not work:

        <%= f.select(:province_id, options_from_collection_for_select(
              Province.find(:all,
                            :conditions => { :country_id => @property.country_id },
                            :order => "provinces.I18n.t(name) ASC"),
              :id, :name)) %>

    What would be the best approach to solve this? As you may have noticed, my coding knowledge is very limited, sorry for that.
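
    One possible approach (a sketch, not from the original thread): since the translated name exists only in Ruby, the database cannot order by it. Fetch the rows and sort them in memory on the translated value instead:

        # Rails 2-era syntax to match the question; Province#name already
        # returns the I18n-translated string.
        @provinces = Province.find(:all,
                       :conditions => { :country_id => @property.country_id })
        @provinces = @provinces.sort_by { |p| p.name }

    and then build the select from the pre-sorted collection:

        <%= f.select(:province_id,
              options_from_collection_for_select(@provinces, :id, :name)) %>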

    Read the article

  • What sort of schema can I use to accommodate manual date based data entries?

    - by meder
    I have an admin where users from multiple properties can enter monthly statistics for Twitter/Facebook followers. We do not have access to the real data/db, which is why entry is manual. The form looks like this:

        Type (radio, select one only): Twitter / Facebook
        Followers/Fans (textfield):
        Property (dropdown): Hotel A, Hotel B
        Date Start: mm/dd/yyyy (textfield)
        Date End: mm/dd/yyyy (textfield)

    Question 1.1: Since I am only keeping track of month-to-month numbers, the date start/end fields I have already created might be too specific. Would it be a better idea to have just a start month/year and an end month/year, if that's the only thing I care about? Question 1.2: What schema could I use for month-to-month statistics if I were to change the date start and end textfields to start month/year and end month/year dropdowns?
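
    One possible shape (a sketch with hypothetical names, assuming MySQL): one row per property, network, and month, with the month stored as a DATE normalized to the first of the month, so ordinary date functions still work even though only month granularity is ever entered:

        CREATE TABLE monthly_follower_stats (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            property_id INT NOT NULL,
            network     ENUM('twitter', 'facebook') NOT NULL,
            stat_month  DATE NOT NULL,  -- always the 1st of the month
            followers   INT NOT NULL,
            UNIQUE KEY uniq_entry (property_id, network, stat_month)
        );

    The UNIQUE key prevents duplicate entries for the same property/network/month, and a start/end range in the UI simply becomes one row per month in the range.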

    Read the article

  • When the user first visits the page I want all the checkboxes to be checked in my index page. Below is the code from my controller and index.html.haml

    - by user1760920
    I want the checkboxes to be checked when the user visits the page for the first time.

        -# This file is app/views/movies/index.html.haml
        %h1 All Movies
        = form_tag movies_path, :method => :get, :id => 'ratings_form' do
          Include:
          - @all_ratings.each do |rating|
            = rating
            = check_box_tag "ratings[#{rating}]", "1", @checked_ratings.include?(rating), :id => "ratings_#{rating}"
          = submit_tag 'Refresh', :id => 'ratings_submit'
        %table#movies
          %thead
            %tr
              %th{:class => ("hilite" if @sort == "title")}= link_to "Movie Title", movies_path(:sort => "title", :ratings => @checked_ratings), :id => "title_header"
              %th Rating
              %th{:class => ("hilite" if @sort == "release_date")}= link_to "Release Date", movies_path(:sort => "release_date", :ratings => @checked_ratings), :id => "release_date_header"
              %th More Info
          %tbody
            - @movies.each do |movie|
              %tr
                %td= movie.title
                %td= movie.rating
                %td= movie.release_date
                %td= link_to "More about #{movie.title}", movie_path(movie)
        = link_to 'Add new movie', new_movie_path

    This is my controller:

        class MoviesController < ApplicationController
          def show
            id = params[:id]        # retrieve movie ID from URI route
            @movie = Movie.find(id) # look up movie by unique ID
            # will render app/views/movies/show.<extension> by default
          end

          def index
            # get all the ratings available
            @all_ratings = Movie.all_ratings
            @checked_ratings = (params[:ratings].present? ? params[:ratings] : [])
            @sort = params[:sort]
            @movies = Movie.scoped
            if @sort && Movie.attribute_names.include?(@sort)
              @movies = @movies.order @sort
            end
            if @checked_ratings.empty?
              @checked_ratings = @all_ratings
            end
            unless @checked_ratings.empty?
              @movies = @movies.where :rating => @checked_ratings.keys
            end
          end

          def new
            # default: render 'new' template
          end

          def create
            @movie = Movie.create!(params[:movie])
            flash[:notice] = "#{@movie.title} was successfully created."
            redirect_to movies_path
          end

          def edit
            @movie = Movie.find params[:id]
          end

          def update
            @movie = Movie.find params[:id]
            @movie.update_attributes!(params[:movie])
            flash[:notice] = "#{@movie.title} was successfully updated."
            redirect_to movie_path(@movie)
          end

          def destroy
            @movie = Movie.find(params[:id])
            @movie.destroy
            flash[:notice] = "Movie '#{@movie.title}' deleted."
            redirect_to movies_path
          end
        end

    In the controller, I set @checked_ratings to @all_ratings when @checked_ratings is empty, but it does not do anything. I tried putting :checked => true on the check_box_tag in index.html.haml, but that makes the checkboxes checked every time the page is refreshed. Every time I check a particular checkbox and hit the Refresh button, the page loads with all the checkboxes checked. Please help me with this. Thank you in advance.
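
    A sketch of one likely fix (not from the original thread): params[:ratings] is a Hash when the form is submitted, while @all_ratings is an Array, so the two fallback branches feed different types into the rest of the action (Array has no #keys, for one). Normalizing to an array of rating strings in both cases keeps the view's include? check and the where clause consistent:

        def index
          @all_ratings = Movie.all_ratings
          # Hash#keys when the form was submitted; all ratings on first visit.
          @checked_ratings = params[:ratings].present? ? params[:ratings].keys : @all_ratings
          @sort = params[:sort]
          @movies = Movie.scoped
          @movies = @movies.order(@sort) if @sort && Movie.attribute_names.include?(@sort)
          @movies = @movies.where(:rating => @checked_ratings)
        end

    With @checked_ratings always an Array of strings, @checked_ratings.include?(rating) in the Haml checks every box on the first visit.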

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here.

    The Example

    We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below:

        CREATE TABLE #Custs
        (
            CustomerID   INTEGER NOT NULL,
            TerritoryID  INTEGER NULL,
            CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #Prods
        (
            ProductMainID   INTEGER NOT NULL,
            ProductSubID    INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
            Name            NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        );
        GO
        CREATE TABLE #OrdHeader
        (
            SalesOrderID     INTEGER NOT NULL,
            OrderDate        DATETIME NOT NULL,
            SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
            CustomerID       INTEGER NOT NULL,
        );
        GO
        CREATE TABLE #OrdDetail
        (
            SalesOrderID    INTEGER NOT NULL,
            OrderQty        SMALLINT NOT NULL,
            LineTotal       NUMERIC(38,6) NOT NULL,
            ProductMainID   INTEGER NOT NULL,
            ProductSubID    INTEGER NOT NULL,
            ProductSubSubID INTEGER NOT NULL,
        );
        GO
        INSERT #Custs (CustomerID, TerritoryID, CustomerType)
        SELECT C.CustomerID, C.TerritoryID, C.CustomerType
        FROM AdventureWorks.Sales.Customer C WITH (TABLOCK);
        GO
        INSERT #Prods (ProductMainID, ProductSubID, ProductSubSubID, Name)
        SELECT P.ProductID, P.ProductID, P.ProductID, P.Name
        FROM AdventureWorks.Production.Product P WITH (TABLOCK);
        GO
        INSERT #OrdHeader (SalesOrderID, OrderDate, SalesOrderNumber, CustomerID)
        SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID
        FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK);
        GO
        INSERT #OrdDetail (SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID)
        SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID
        FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK);

    The query itself is a simple join of the four tables:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #OrdDetail D
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        JOIN #OrdHeader H
            ON D.SalesOrderID = H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID = C.CustomerID
        ORDER BY P.ProductMainID ASC
        OPTION (RECOMPILE, MAXDOP 1);

    Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  (The original post includes a screenshot of the estimated query plan at this point.)

    The Problem

    The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.

    Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint produces a new estimated plan that returns the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #OrdDetail D
        JOIN #OrdHeader H
            ON D.SalesOrderID >= H.SalesOrderID
            AND D.SalesOrderID <= H.SalesOrderID
        JOIN #Custs C
            ON H.CustomerID >= C.CustomerID
            AND H.CustomerID <= C.CustomerID
        JOIN #Prods P
            ON P.ProductMainID >= D.ProductMainID
            AND P.ProductMainID <= D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  This new query runs in under two seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course.)

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  Its presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Looking at the hash join plan again: two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.  OK, so now we understand the problems – what can we do to fix them?

    We can address the hash spilling by forcing a different order for the joins:

        SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
               H.SalesOrderNumber, H.OrderDate, C.TerritoryID
        FROM #Prods P
        JOIN #Custs C
            JOIN #OrdHeader H
                ON H.CustomerID = C.CustomerID
            JOIN #OrdDetail D
                ON D.SalesOrderID = H.SalesOrderID
            ON P.ProductMainID = D.ProductMainID
            AND P.ProductSubID = D.ProductSubID
            AND P.ProductSubSubID = D.ProductSubSubID
        ORDER BY D.ProductMainID
        OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • sudo: apache restarting a service on CentOS

    - by WaveyDavey
    I need my web app to restart the dansguardian service (on CentOS), so it needs to run '/sbin/service dansguardian restart'. I have a shell script in /home/topological called apacherestart.sh which does the following:

        #!/bin/sh
        id=`id`
        /sbin/service dansguardian restart
        r=$?
        return $r

    This runs OK (a logger statement in the script, used for testing, writes output to syslog, so I know it's running). To make it run, I put this in /etc/sudoers:

        User_Alias APACHE=www
        # Cmnd alias specification
        Cmnd_Alias HTTPRESTART=/home/topological/apacherestart.sh,/sbin/e-smith/db,/etc/rc7.d/S91dansguardian
        # Defaults specification
        # User privilege specification
        root ALL=(ALL) ALL
        APACHE ALL=(ALL) NOPASSWD: HTTPRESTART

    So far so good. But the service does not restart. To test this I created a user david, and fudged the uid/gid in /etc/passwd to be the same as www:

        www:x:102:102:e-smith web server:/home/e-smith:/bin/false
        david:x:102:102:David:/home/e-smith/files/users/david:/bin/bash

    then logged in as david and tried to run apacherestart.sh. The problem I get is:

        /etc/rc7.d/S91dansguardian: line 51: /sbin/e-smith/db: Permission denied

    even though S91dansguardian and db are in the sudoers command list. Any ideas?
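
    One thing to check (a sketch, not from the thread): the sudoers whitelist only takes effect when the command is invoked through sudo. Running apacherestart.sh directly as david, or as www from the web app, never elevates privileges, so the init script's internal call to /sbin/e-smith/db runs unprivileged and is denied. The invocation would need to look something like:

        sudo /home/topological/apacherestart.sh
        # e.g. from PHP (hypothetical): exec('sudo /home/topological/apacherestart.sh');

    If sudo then refuses because the www user has no tty, a per-user override in /etc/sudoers is the usual fix on CentOS builds that default to requiretty:

        Defaults:APACHE !requiretty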

    Read the article

  • You Couldn’t Write it !! ( part 1 )

    - by GrumpyOldDBA
    This post was inspired by a developer and, I think, illustrates the gulf that can sometimes exist between IT and the business. I should point out that this post is the diplomatic version! Initially I was sent a simple search for a person, with a question about why the query plan showed a sort when there was no sort in the query, and why the sort showed as 40% of the query. (The point about the sort belongs to another post some time.) The easy answer to the duration was that this was a leading wild...(read more)

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this.

    When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination.

    Let’s look at a very simple query as an example, using the AdventureWorks database. We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled.

    Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

    Luckily, T-SQL gives us a brilliant query hint to help avoid this:

        OPTION (FAST 10000)

    This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.
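
    The post's example queries aren't reproduced in this excerpt; from the description they were presumably along these lines (a sketch against AdventureWorks, not the original text):

        -- Produces the Distinct Sort plan: blocking, but cheapest overall.
        SELECT DISTINCT Weight
        FROM Production.Product;

        -- Optimised for the first rows (the post uses FAST 1 for this small
        -- set; FAST 10000 matches the default SSIS buffer size):
        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 1);
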
    First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking.

    The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This one-row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does.

    I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint; then with OPTION (FAST 10000). A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000.

    The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared.

    The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing.
    You look for the numbers on the data flow, and watch operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.

    Read the article

  • Sorting a REALLY BIG delimited text file in UNIX / VMS [closed]

    - by gunbuster363
    Hi everyone, I need to sort a REALLY BIG delimited text file, say 250Mb (or a bunch of files of more or less than 250Mb). It has 37 fields, and I need to sort it by 5 fields, for example the 1st, 4th, 5th, 6th and 7th fields. Under Unix / VMS, do I have a good option to do this FAST? I can write a COBOL program. For now I am trying to sort the files using the command below, but it has already run for a long time and is just not going to finish. Thank you. The command I used:

        time sort -t ',' -o sorted.txt +0 -1 +4 -5 +5 -6 +6 -7 +22 -23 *.DAT_gprscdr_ftpd
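
    A possible speed-up (a sketch, assuming GNU sort; the +pos -pos key syntax is obsolete, so this uses the equivalent -k form for the same five keys):

        LC_ALL=C sort -t',' -k1,1 -k5,5 -k6,6 -k7,7 -k23,23 \
            -S 512M -T /var/tmp -o sorted.txt *.DAT_gprscdr_ftpd

        # LC_ALL=C    byte-order collation; usually much faster than a UTF-8 locale
        # -S 512M     larger in-memory buffer, so fewer temp-file merge passes
        # -T /var/tmp temp files on a filesystem with room for roughly 2x the data

    Note that LC_ALL=C changes the collation to strict byte order; if locale-aware ordering matters, drop it and expect a slower run.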

    Read the article

  • How to handle sorting of complex objects?

    - by AedonEtLIRA
    How would one sort a list of objects that have more than one sortable element? Suppose you have a simple object Car, defined as such:

        class Car {
            public String make;
            public String model;
            public int year;
            public String color;
            // ... No methods, just a structure / container
        }

    I designed a simple framework that would allow for multiple SortOptions to be provided to a Sorter that would then sort the list:

        interface ISorter<T> {
            List<T> sort(List<T> items);
            void addSortOption(ISortOption<T> option);
            ISortOption<T>[] getSortOptions();
            void setSortOption(ISortOption<T> option);
        }

        interface ISortOption<T> {
            String getLabel();
            int compare(T t1, T t2);
        }

    Example use:

        class SimpleStringSorter extends MergeSorter<String> {
            {
                addSorter(new AlphaSorter());
            }

            private static final class AlphaSorter implements ISortOption<String> {
                // ... implementation of alpha compare and getLabel
            }
        }

    The issue with this solution is that it is not easily expandable. If Car were ever to receive a new field, say currentOwner, I would need to add the field, track down the sorter class file, implement a new sort option class, and then recompile the application for redistribution. Is there an easier, more expandable/practical way to sort data like this?
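
    One possible answer (a sketch, assuming Java 8+ is available): build comparators from key-extractor lambdas instead of hand-written option classes, so a new field needs only a new one-line comparator rather than a new class and a recompile of the framework:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class CarSortDemo {
            public static void main(String[] args) {
                // Uses the Car class from the question above.
                List<Car> cars = new ArrayList<>();
                // ... populate cars ...

                // Sort by make, then model, then year -- each key is one lambda.
                cars.sort(Comparator.comparing((Car c) -> c.make)
                                    .thenComparing(c -> c.model)
                                    .thenComparingInt(c -> c.year));
            }
        }

    A Map from label to Comparator<Car> would recover the "pick a sort option by name" behaviour of the original framework without any per-field classes.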

    Read the article

  • alphanumeric and special character sorting

    - by Kaushik Gopal
    Hi ppl, I wanted to know the different standards of sorting. To be more specific, take this sample set (note that there are capitals, small letters, special characters, null values and numbers here):

        A
        a
        3F
        Zx
        -
        1Ad
        NULL

    My questions:

        1. How would the Oracle database sort this by default?
        2. How would LINQ sort this by default?
        3. How would DB2 sort this by default? (The following may get even more vague.)
        4. How does the Windows platform sort this? (I mean, say you have a couple of filenames; by default how would these be treated in a name sort?)
        5. How does the *nix platform sort this?
        6. Is there some sort of standard for alphanumeric/special-character sorting?

    The Windows operating system orders with numbers first, then alphabets. The Oracle database however treats alphabets first. I'm not sure about the *nix platform. It would be nice to have one place that lists all these rules for the most common platforms (listed in the questions above). Would the gurus throw some light on this topic? Cheers, K
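
    There is no single cross-platform standard: each system sorts according to its collation settings (Oracle: NLS_SORT; DB2 and SQL Server: the database collation; *nix tools: the locale). One quick way to see the difference for yourself (a sketch, assuming GNU coreutils):

        printf '%s\n' A a 3F Zx - 1Ad | sort             # locale-aware order
        printf '%s\n' A a 3F Zx - 1Ad | LC_ALL=C sort    # strict byte order:
                                                         # punctuation, digits,
                                                         # uppercase, lowercase

    NULL is a database-level concept on top of this: Oracle, for example, puts NULLs last on an ascending sort by default, and engines that support NULLS FIRST/LAST let you override that per query.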

    Read the article

  • function objects versus function pointers

    - by kumar_m_kiran
    Hi all, I have two questions related to function objects and function pointers.

    Question 1: When I read about the different uses of the STL sort algorithm, I see that the third parameter can be a function object; below is an example:

        class State {
        public:
            // ...
            int population() const;
            float aveTempF() const;
            // ...
        };

        struct PopLess : public std::binary_function<State, State, bool> {
            bool operator()(const State &a, const State &b) const {
                return popLess(a, b);
            }
        };

        sort(union, union + 50, PopLess());

    Now, how does the statement sort(union, union+50, PopLess()) work? PopLess() must resolve into something like PopLess tempObject.operator(), which would be the same as executing operator() on a temporary object. I see this as passing the return value of the overloaded operation, i.e. bool (as in my example), to the sort algorithm. So then, how does sort resolve the third parameter in this case?

    Question 2: Do we derive any particular advantage from using function objects versus function pointers? If we use the function pointer below instead, is there any disadvantage?

        inline bool popLess(const State &a, const State &b) {
            return a.population() < b.population();
        }

        std::sort(union, union + 50, popLess); // sort by population

    PS: Both of the above references (including the example) are from the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. I was unable to decode the topic content, and thus have posted for help. Thanks in advance for your help.
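
    A sketch of the answer (not from the original thread): sort's comparator parameter is a template type, so PopLess() simply constructs a temporary PopLess object that is passed by value and invoked as comp(a, b) inside the algorithm; it is the object that is passed, not the result of calling it. Because the comparator's concrete type is part of the template instantiation, the compiler can inline the call to operator(), whereas a call through a function pointer usually cannot be inlined. That is the classic performance advantage of function objects; a C++11 lambda gets the same benefit:

        #include <algorithm>
        #include <cstddef>

        class State {
        public:
            explicit State(int pop = 0) : pop_(pop) {}
            int population() const { return pop_; }
        private:
            int pop_;
        };

        void sortByPopulation(State* states, std::size_t n) {
            // The lambda's unique type is deduced as sort's comparator
            // template argument, just like PopLess above.
            std::sort(states, states + n,
                      [](const State &a, const State &b) {
                          return a.population() < b.population();
                      });
        }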

    Read the article

  • Sorting a multidimensional array in objective-c

    - by Zen_silence
    Hello, I'm trying to sort a multidimensional array in Objective-C. I know that I can sort a single-dimensional array using the line of code below:

        NSArray *sortedArray = [someArray sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];

    I can't seem to figure out how to sort a 2D array like the one below:

        (
            ("SOME_URL", "SOME_STORY_TITLE", "SOME_CATEGORY"),
            ("SOME_URL", "SOME_STORY_TITLE", "SOME_CATEGORY"),
            ("SOME_URL", "SOME_STORY_TITLE", "SOME_CATEGORY")
        );

    If someone could provide me code that would sort the array by SOME_CATEGORY it would be of great help to me. Thanks, Zen_Silence
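
    One possible approach (a sketch, assuming OS X 10.6 / iOS 4+ for blocks, and that the category sits at index 2 of each inner array as shown above):

        NSArray *sorted = [someArray sortedArrayUsingComparator:
            ^NSComparisonResult(id a, id b) {
                NSString *categoryA = [a objectAtIndex:2];
                NSString *categoryB = [b objectAtIndex:2];
                return [categoryA caseInsensitiveCompare:categoryB];
            }];

    On older SDKs without blocks, sortedArrayUsingFunction:context: with a plain C comparison function does the same job.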

    Read the article

  • how does pipe work

    - by lego69
    Hello, please explain to me how exactly this pipe works. For example, I have this snippet of code:

        set line = ($<)
        while (${#line} != 0)
            if (${#line} == 5) then
                echo $line | sort | ./calculate ${1}
            endif
            set line = ($<)
        end

    I need to choose all rows with 5 words, then sort them, then pass them on. But I'm confused about how this will work: will the while loop first collect all the input and then hand it to sort, or will sort run on every iteration? Thanks in advance.
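
    The short answer: the pipeline sits inside the loop body, so a brand-new echo | sort | ./calculate pipeline is spawned on every iteration, and sort only ever sees that iteration's single line. To sort all qualifying rows together, the filtering has to happen inside one pipeline so sort runs exactly once. A sketch (assuming a POSIX shell and awk are acceptable alternatives):

        # NF is awk's field count: keep only the 5-word lines, sort them all
        # at once, then hand the sorted stream to ./calculate.
        awk 'NF == 5' input.txt | sort | ./calculate "$1"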

    Read the article

  • Sorting a Data Gridview

    - by Muhammad Waqas
    Hi, I am a beginner with ASP.NET. I want to sort a GridView, but the problem I'm facing is that when the sort event handler is called, a stack overflow exception is thrown. Following is my code for the sorting function:

        protected void sortGridView(string strSortExpression)
        {
            if (strSortExpression != string.Empty)
            {
                if (ViewState["sortOrder"] == "desc")
                {
                    dgvBookInfo.Sort(strSortExpression, SortDirection.Ascending);
                    // string.Format("{0}{1}", );
                }
                else
                {
                    dgvBookInfo.Sort(strSortExpression, SortDirection.Descending);
                }
            }
        }

    Thanks
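
    A likely cause (a sketch, not from the original thread): GridView.Sort() raises the Sorting event, so calling it from inside the Sorting handler re-enters the handler until the stack overflows. The usual pattern is to sort the data source and rebind instead; hypothetical helper names below:

        protected void dgvBookInfo_Sorting(object sender, GridViewSortEventArgs e)
        {
            // Flip the stored direction (kept as a string in ViewState).
            bool desc = (string)ViewState["sortOrder"] == "desc";
            ViewState["sortOrder"] = desc ? "asc" : "desc";

            DataTable books = GetBookTable();   // hypothetical data-access call
            books.DefaultView.Sort = e.SortExpression + (desc ? " ASC" : " DESC");
            dgvBookInfo.DataSource = books.DefaultView;
            dgvBookInfo.DataBind();
        }

    Note that the original comparison, ViewState["sortOrder"] == "desc", compares object to string by reference, which is fragile; casting to string first compares values.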

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that accepts a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three major categories:

        1. Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
        2. Invariant-culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
        3. Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

    There is a lot of great material available that details the technical ins and outs of these different string-comparison approaches. If you’re at all interested in the topic, these two MSDN articles are worth a read:

        Best Practices for Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
        How to Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx

    Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant-culture comparisons? The “Best Practices for Using Strings in the .NET Framework” article has the following to say:

        “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.”

    I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant-culture comparison? I think the answer to this question is “yes”, but in order to understand why, we should first think about what the invariant-culture comparison really does.

    The invariant-culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant-culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

    Example: Prototype Names

    Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries.
    Each year the company works on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair, the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke:

        1. Each new prototype gets a code name that begins with the letter following the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at “A”.
        2. The country that leads the prototype development effort gets to choose the name, in its native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese.)
        3. To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site.

    Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string-comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed?

    Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string-comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users.

    Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this:

        Antílope (Spanish)
        Babouin (French)
        Čahoun (Czech)
        Diamond (English)
        Flosse (German)

    If you were to sort this list using ordinal rules you’d end up with:

        Antílope
        Babouin
        Diamond
        Flosse
        Čahoun

    This sort is no good because the entry for “C” appears at the bottom of the list, after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited to the invariant-culture comparison.
    The invariant culture accounts for linguistically relevant factors like the use of diacritics, but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from “Best Practices for Using Strings in the .NET Framework” makes a lot more sense:

        “One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display.”

    That line describes the prototype-name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made up, I think it illustrates that there are indeed real-world situations where the invariant-culture comparison is useful.
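
    A small sketch of the example above (the names are the hypothetical prototype list; StringComparer exposes the same comparison families as StringComparison):

        using System;

        class InvariantSortDemo
        {
            static void Main()
            {
                string[] names = { "Diamond", "Čahoun", "Flosse", "Antílope", "Babouin" };

                // Ordinal: byte-by-byte on the UTF-16 code units; Čahoun sorts last.
                Array.Sort(names, StringComparer.Ordinal);
                Console.WriteLine(string.Join(", ", names));

                // Invariant culture: linguistically aware and identical on every
                // machine; Čahoun sorts with the other "C" entries.
                Array.Sort(names, StringComparer.InvariantCulture);
                Console.WriteLine(string.Join(", ", names));
            }
        }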

    Read the article
