Search Results

Search found 1337 results on 54 pages for 'sorted'.

Page 10/54

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe it. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source itself – getting the data out of the database. Remember, the source of data for your Data Flow is not your Source component. It's wherever the data is within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data – in particular, those first 10,000 rows that populate the first buffer, ready to be passed down the rest of the transformations on the way to the Destination.

    Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the distinct Weight values out of the Product table, and the plan does this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world. If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained data that would sort into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, consider our Distinct Sort operator. It's also blocking: it won't let data through until it has seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled. Picture it like this: the data flows from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

    Luckily, T-SQL gives us a brilliant query hint to help avoid this:

        OPTION (FAST 10000)

    This hint tells the optimiser to choose a plan that is optimised for returning the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.

    First let's consider a simple example, then we'll look at a larger one. Consider our Weights. We don't have 10,000 of them, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator consumes 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row can be returned sooner – a Flow Distinct operator is non-blocking. The data here isn't sorted, of course; it's in the same order it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it has seen so far, but by handling one row at a time, it can push rows through more quickly. Overall it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example it does.

    I hope this simple example has helped you understand the significance of the blocking operator. Now I'm going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be sorted, to support further SSIS operations that relied on that. First, without the hint... and now with OPTION (FAST 10000). A very different plan, I'm sure you'll agree. In case you're curious, those arrows in the top one are 780,000 rows in size. In the second, they're estimated to be 10,000, although the Actual figures end up being 780,000.

    The top one definitely runs faster – it finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one: it's doing Nested Loops across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case it was about six or seven minutes; the faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Even though it might have taken the database another six or seven minutes to get the data out, SSIS didn't care: every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting for the Debug information to start appearing. You look for the numbers on the data flow, and watch for operators going Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query to make it run several times faster. That wasn't the case at all – but it felt like it to SSIS.

    Read the article

  • sort django queryset by latest instance of a subset of related model

    - by rsp
    I have an Order model and an order_event model. Each order_event has a ForeignKey to Order, so from an order instance I can get myorder.order_event_set. I want a list of all orders, sorted by the date of each order's last event. A statement like this works to sort by the latest event date:

        queryset = Order.objects.all().annotate(
            latest_event_date=Max('order_event__event_datetime')
        ).order_by('latest_event_date')

    However, what I really need is a list of all orders sorted by the latest date of A SUBSET OF EVENTS. For example, my events are categorized into "scheduling", "processing", etc., so I should be able to get a list of all orders sorted by the latest scheduling event. This Django doc (https://docs.djangoproject.com/en/dev/topics/db/aggregation/#filter-and-exclude) shows how I can get the latest scheduling event using a filter, but this excludes orders without a scheduling event. I thought I could combine the filtered queryset with a queryset that adds back the orders that are missing a scheduling event, but I'm not quite sure how to do this. I saw answers based on a plain Python list, but it would be much more useful to have a proper Django queryset (e.g. for a view with pagination, etc.).
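
    One way to express this directly in the ORM is a conditional aggregate. A minimal sketch, assuming Django 2.0+ (for the filter argument to aggregates) and a hypothetical category field on the event model; the real field names may differ:

        # Sketch only: assumes Django >= 2.0 and a hypothetical `category` field
        # on the order_event model; swap in the real field names.
        from django.db.models import F, Max, Q

        queryset = Order.objects.annotate(
            latest_scheduling=Max(
                'order_event__event_datetime',
                filter=Q(order_event__category='scheduling'),  # only the subset of events
            )
        ).order_by(F('latest_scheduling').desc(nulls_last=True))
        # Orders with no scheduling event get NULL and are pushed to the end,
        # so nothing drops out of the queryset.

    This stays a regular queryset, so pagination in the view keeps working.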

    Read the article

  • Working with XML...

    - by gsvirdi
    I'm planning to create a news item feature which uses XML as its backend, and the display should look like this:

        Date: 08/Mar/2010
        Title  | News
        News 4 | Some news
        News 3 | Some news
        News 2 | Some news
        News 1 | Some news

        Date: 07/Mar/2010
        Title  | News
        News 5 | Some news
        News 4 | Some news
        News 3 | Some news
        News 2 | Some news
        News 1 | Some news

    The display should be sorted by date (descending), and within each date the news items should be sorted by time (descending). Today's news items should be on top, with titles sorted descending (time-wise), followed by the previous days' items. I'm not able to come up with the XML structure that should be used in this case, and I can't figure out how to check "today's date" in an XML "if" statement.

    ---- Previous question ----
    How can I export data from textBox1, textBox2 and textBox3 on my WinForm (Visual Studio C#) so that it automatically creates an XML file with proper placement of the data? Let's say: textBox1 = Name, textBox2 = Age, textBox3 = Roll No. It would be great if the exported XML could be appended to (new data added at the end of the file) when we export the data again. Any ideas?
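
    For what it's worth, one possible layout is to store a full timestamp on each news item and sort at display time rather than encoding the ordering in the XML itself. A small sketch in Python (not the asker's C#), with made-up element names:

        # Sketch with made-up element names: keep an ISO timestamp per item and
        # sort newest-first when rendering; "today's" items then float to the top.
        import xml.etree.ElementTree as ET
        from datetime import datetime

        def add_item(root, title, text, when=None):
            item = ET.SubElement(root, "item")
            item.set("timestamp", (when or datetime.now()).isoformat())
            ET.SubElement(item, "title").text = title
            ET.SubElement(item, "news").text = text

        root = ET.Element("newsfeed")
        add_item(root, "News 1", "Some news")
        add_item(root, "News 2", "Some news")

        for item in sorted(root, key=lambda i: i.get("timestamp"), reverse=True):
            print(item.get("timestamp")[:10], item.findtext("title"), "|", item.findtext("news"))

    Appending then just means parsing the existing file, calling add_item again, and writing the tree back out.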

    Read the article

  • First Letter Section Headers with Core Data

    - by Cory Imdieke
    I'm trying to create a list of people sorted in a tableView with sections, using the first letter as the title for each section – à la Address Book. I've got it all working, though there is a bit of an issue with the sort order. Here is the way I'm doing it now:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Contact" inManagedObjectContext:context]];

        NSSortDescriptor *fullName = [[NSSortDescriptor alloc] initWithKey:@"fullName" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:fullName, nil];
        [request setSortDescriptors:sortDescriptors];
        [fullName release];
        [sortDescriptors release];

        NSError *error = nil;
        [resultController release];
        resultController = [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                               managedObjectContext:context
                                                                 sectionNameKeyPath:@"firstLetter"
                                                                          cacheName:nil];
        [resultController performFetch:&error];
        [request release];

    fullName is a standard property, and firstLetter is a transient property which returns – as you'd expect – the first letter of fullName. 95% of the time this works perfectly. The problem is that the results controller expects these two "lists" (the sorted fullName list and the sorted firstLetter list) to match exactly. If I have two contacts like John and Jack, my fullName list would sort them as Jack, John every time, but my firstLetter list might sometimes sort them as John, Jack, since it's only sorting by the first letter and leaving the rest to chance. When these lists don't match up, I get a blank tableView with 0 items in it. I'm not really sure how I should go about fixing this issue, but it's very frustrating. Has anyone else run into this? What did you find out?

    Read the article

  • Sorting an array of Doubles with NaN in it

    - by Agent Worm
    This is more of a 'can you explain this' type of question than anything else. I came across a problem at work where we were using NaN values in a table, but when the table was sorted, it came out in a very strange manner. I figured NaN was mucking something up, so I wrote a test application to see if this is true. This is what I did:

        static void Main(string[] args)
        {
            double[] someArray = { 4.0, 2.0, double.NaN, 1.0, 5.0, 3.0, double.NaN, 10.0, 9.0, 8.0 };

            foreach (double db in someArray)
            {
                Console.WriteLine(db);
            }

            Array.Sort(someArray);
            Console.WriteLine("\n\n");

            foreach (double db in someArray)
            {
                Console.WriteLine(db);
            }

            Console.ReadLine();
        }

    Which gave the result:

        Before: 4, 2, NaN, 1, 5, 3, NaN, 10, 9, 8
        After:  1, 4, NaN, 2, 3, 5, 8, 9, 10, NaN

    So yes, the NaN values somehow caused the array to be sorted in a strange way. To quote Fry: "Why is those things?"
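
    The same effect shows up outside .NET, because NaN compares false against everything, so a comparison sort has no consistent ordering to work with. A quick illustration in Python (not the asker's C#), including one common workaround:

        # NaN breaks comparison sorts: every <, >, == against NaN is False, so the
        # result depends on which comparisons the sort happens to make.
        import math

        data = [4.0, 2.0, float('nan'), 1.0, 5.0, 3.0, float('nan'), 10.0, 9.0, 8.0]
        print(sorted(data))   # typically not fully ordered, with NaN stuck mid-list

        # Workaround: give NaN an explicit position via the sort key.
        print(sorted(data, key=lambda x: (math.isnan(x), x)))   # numbers in order, NaNs last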

    Read the article

  • Strategy for locale sensitive sort with pagination

    - by Thom Birkeland
    I work on an application that is deployed on the web. Part of the app is a search function where the results are presented in a sorted list. The application targets users in several countries using different locales (= different sorting rules), and I need to find a solution for sorting correctly for all users. I currently sort with ORDER BY in my SQL query, so the sorting is done according to the locale (LC_COLLATE) set for the database. These rules are incorrect for users with a locale different from the one set for the database. To further complicate the issue, I use pagination in the application, so when I query the database I ask for rows 1-15, 16-30, etc. depending on the page I need. However, since the sorting is wrong, each page contains entries that are incorrectly sorted; in a worst-case scenario, the entire result set for a given page could be out of order, depending on the locale/sorting rules of the current user. If I were to sort in (server-side) code, I would need to retrieve all rows from the database and then sort, which is a tremendous performance hit given the amount of data, so I would like to avoid it. Does anyone have a strategy (or even a technical solution) for attacking this problem that results in correctly sorted lists without the performance hit of loading all the data? Tech details: the database is PostgreSQL 8.3, the application an EJB3 app using EJB QL for data queries, running on JBoss 4.5.
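
    One strategy that avoids sorting in application code is to precompute a binary collation key per row and locale, store it in an indexed column, and ORDER BY that column, so the database can page correctly without knowing anything about locales. A rough sketch of the key generation in Python (not the asker's EJB3 stack), using the host's locale data:

        # Rough sketch: generate a per-locale sort key that could be persisted and
        # used in ORDER BY. Assumes the named locale (e.g. 'fr_FR.UTF-8') is
        # installed on the host; setlocale is process-global, so a real system
        # would likely use a per-locale ICU collator instead.
        import locale

        def collation_key(text, posix_locale):
            locale.setlocale(locale.LC_COLLATE, posix_locale)
            return locale.strxfrm(text)

        words = ["côte", "cote", "côté", "coté"]
        print(sorted(words, key=lambda w: collation_key(w, "fr_FR.UTF-8")))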

    Read the article

  • L10N: Trusted test data for Locale Specific Sorting

    - by Chris Betti
    I'm working on an internationalized database application that supports multiple locales in a single instance. When international users sort data in the applications built on top of the database, the database theoretically sorts the data using a collation appropriate to the locale associated with the data the user is viewing. I'm trying to find sorted lists of words that meet two criteria: the sorted order follows the collation rules for the locale, and the words listed allow me to exercise most or all of the specific collation rules for the locale. I'm having trouble finding such trusted test data. Are such sort-testing datasets currently available, and if so, what / where are they?

    "words.en.txt" is an example text file containing American English text:

        Andrew
        Brian
        Chris
        Zachary

    I am planning on loading the list of words into my database in randomized order, and checking whether sorting the list conforms to the original input. Because I am not fluent in any language other than English, I do not know how to create sample datasets like the following one in French (call it "words.fr.txt"):

        cote
        côte
        coté
        côté

    The French prefer diacritical marks to be ordered right to left. If you sorted that using code-point order, it would likely come out like this (which is an incorrect collation):

        cote
        coté
        côte
        côté

    Thank you for the help, Chris
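
    The round-trip check described above is straightforward to script once a trusted file exists. A sketch, assuming the file is already in correct collation order for its locale and that the locale is installed on the machine running the test:

        # Shuffle the trusted list, re-sort it under the locale's collation, and
        # compare against the original order.
        import locale
        import random

        def collation_round_trips(path, posix_locale):
            locale.setlocale(locale.LC_COLLATE, posix_locale)        # e.g. 'fr_FR.UTF-8'
            with open(path, encoding="utf-8") as f:
                expected = [line.strip() for line in f if line.strip()]
            shuffled = expected[:]
            random.shuffle(shuffled)                                 # load in randomized order...
            return sorted(shuffled, key=locale.strxfrm) == expected  # ...then sort and compare

        print(collation_round_trips("words.fr.txt", "fr_FR.UTF-8"))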

    Read the article

  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for the evaluation portion of my project. Going with the idea of the Assignment Problem and the Hungarian Algorithm, I would like to express my data in the form of a .csv file which would end up looking like this in spreadsheet form (based on the structure I saw here):

        |          | Project 1 | Project 2 | Project 3 |
        |----------|-----------|-----------|-----------|
        | Student1 |           |     2     |     1     |
        | Student2 |     1     |     2     |     3     |
        | Student3 |     1     |     3     |     2     |

    To make it less cryptic: the rows are the Students/Agents and the columns represent Projects/Tasks. Obviously ONE project can be assigned to ONE student; that, in short, is what my project is about. The fields represent the preference weights the students have placed on the projects (ranging from 1 to 10). If a field is blank, that student does not want that project and there's no chance of him/her being assigned it. Anyway, my data is stored in dictionaries, specifically the students and projects dictionaries, such that:

        students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)

    where preferences is itself a dictionary such that preferences[rank] = {project_id}, and

        projects[project_id] = Project(project_id, project_name)

    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus for each student I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create the .csv file. Any help, pointers or good tutorials will be highly appreciated.
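
    The writing step itself is a few lines with the standard csv module. A sketch based on the dictionary shapes described above; it assumes preferences maps each rank to a single project_id (flatten accordingly if a rank maps to a set), and the attribute names are taken from the constructors shown in the question:

        # Sketch: one header row of project names, then one row per student with
        # that student's rank for each project (blank = project not ranked).
        import csv

        def write_matrix(students, projects, path="matrix.csv"):
            project_ids = sorted(projects.keys())
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow([""] + [projects[p].project_name for p in project_ids])
                for sid in sorted(students.keys()):
                    student = students[sid]
                    # invert preferences[rank] -> project_id into project_id -> rank
                    rank_by_project = {pid: rank for rank, pid in student.preferences.items()}
                    writer.writerow([student.student_name] +
                                    [rank_by_project.get(p, "") for p in project_ids])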

    Read the article

  • Vector insert() causes program to crash

    - by wrongusername
    This is the first part of a function I have that's causing my program to crash:

        vector<Student> sortGPA(vector<Student> student)
        {
            vector<Student> sorted;
            Student test = student[0];
            cout << "here\n";
            sorted.insert(student.begin(), student[0]);
            cout << "it failed.\n";
            ...

    It crashes right at the sorted.insert part, because I can see "here" on the screen but not "it failed." The following error message comes up:

        Debug Assertion Failed!
        (a long path here...)
        Expression: vector emplace iterator outside range
        For more information on how your program can cause an assertion failure,
        see the Visual C++ documentation on asserts.

    I'm not sure what's causing the problem, since I have a similar line of code elsewhere,

        student.insert(student.begin() + position(temp, student), temp);

    that does not crash (where position returns an int and temp is another instance of the struct Student). What can I do to resolve the problem, and how is the first insert different from the second one?

    Read the article

  • Unsort: remembering a permutation and undoing it.

    - by dreeves
    Suppose I have a function f that takes a vector v and returns a new vector with the elements transformed in some way. It does that by calling a function g that assumes the vector is sorted. So I want f to be defined like so:

        f[v_] := Module[{s, r},
          s = Sort[v];  (* remember the permutation applied in order to sort v *)
          r = g[s];
          Unsort[r]     (* apply the inverse of that permutation *)
        ]

    What's the best way to do the "Unsort"? Or could we get really fancy and have this somehow work:

        answer = Unsort[g[Sort[v]]];

    ADDED: Let's make this concrete with a toy example. Suppose we want a function f that takes a vector and transforms it by adding to each element the next smallest element, if any. That's easy to write if we assume the vector is sorted, so let's write a helper function g that makes that assumption:

        g[v_] := v + Prepend[Most@v, 0]

    Now for the function we really want, f, that works whether or not v is sorted:

        f[v_] := (* remember the order; sort it; call g on it;
                    put it back in the original order; return it *)
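
    The underlying trick, remembering the permutation that sorts the input and then scattering the results back through its inverse, looks like this in Python (not Mathematica), using the toy g from the question:

        # Sort by index, apply the helper that assumes sorted input, then undo the
        # permutation by writing each result back to its original position.
        def g(sorted_vals):
            # toy helper: add to each element the next smallest element, if any
            return [x + (sorted_vals[i - 1] if i else 0) for i, x in enumerate(sorted_vals)]

        def f(v):
            order = sorted(range(len(v)), key=v.__getitem__)   # permutation that sorts v
            result_sorted = g([v[i] for i in order])
            out = [None] * len(v)
            for rank, original_index in enumerate(order):      # apply the inverse permutation
                out[original_index] = result_sorted[rank]
            return out

        print(f([5, 2, 9, 1]))   # [7, 3, 14, 1]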

    Read the article

  • How to Save data in txt file in MATLAB

    - by Jessy
    I have 3 txt files: s1.txt, s2.txt, s3.txt. Each has the same format and the same number of data points. I want to combine only the second column of each of the 3 files into one file. Before combining the data, I sort each file according to the 1st column.

    Unsorted files:

        s1.txt      s2.txt      s3.txt
        1 23        2 33        3 22
        4 32        4 32        2 11
        5 22        1 10        5 28
        2 55        8 11        7 11

    Sorted files:

        s1.txt      s2.txt      s3.txt
        1 23        1 10        2 11
        2 55        2 33        3 22
        4 32        4 32        5 28
        5 22        8 11        7 11

    Here is the code I have so far:

        BaseFile ='s'
        n=3
        fid=fopen('RT.txt','w');
        for i=1:n
            %Open each file consecutively
            d(i)=fopen([BaseFile num2str(i)'.txt']);
            %read data from file
            A=textscan(d(i),'%f%f')
            a=A{1}
            b=A{2}
            ab=[a,b];
            %sort the data according to the 1st column
            B=sortrows(ab,1);
            %delete the 1st column after being sorted
            B(:,1)=[]
            %write to a new file
            fprintf(fid,'%d\n',B');
            %close (d(i));
        end
        fclose(fid);

    How can I get the output in the new txt file in this format:

        23 10 11
        55 33 22
        32 32 28
        22 11 11

    instead of this format?

        23
        55
        32
        22
        10
        33
        32
        11
        11
        22
        28
        11

    Read the article

  • SQL return ORDER BY result as an array

    - by EarthMind
    Is it possible to return groups as an associative array? I'd like to know if a pure SQL solution is possible. Note that I realise I could be making things more complex than necessary, but this is mainly to give me an idea of the power of SQL. My problem: I have a list of words in the database that should be sorted alphabetically and grouped into separate groups according to the first letter of each word. For example:

        ape
        broom
        coconut
        banana
        apple

    should be returned as

        array(
          'a' => 'ape', 'apple',
          'b' => 'banana', 'broom',
          'c' => 'coconut'
        )

    so I can easily create sorted lists by first letter (i.e. clicking "A" only shows words starting with a, "B" with b, etc.). This should make it easier for me to load everything in one query and make the sorted list JavaScript-based, i.e. without having to reload the page (or use AJAX). Side notes: I'm using PostgreSQL, but a solution for MySQL would be fine too so I can try to port it to PostgreSQL. The scripting language is PHP.
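
    A plain SQL result set is always flat rows, so the usual approach is a single ORDER BY query with the grouping done client-side. The grouping step, sketched in Python rather than the asker's PHP:

        # Fold the flat, already-sorted rows into a dict keyed by first letter.
        from itertools import groupby

        rows = ["ape", "apple", "banana", "broom", "coconut"]   # e.g. SELECT word FROM words ORDER BY word
        by_letter = {letter: list(words)
                     for letter, words in groupby(rows, key=lambda w: w[0].lower())}
        print(by_letter)   # {'a': ['ape', 'apple'], 'b': ['banana', 'broom'], 'c': ['coconut']}

    Recent PostgreSQL versions can also aggregate one array per letter on the database side (array_agg grouped by the first character), if a more SQL-heavy answer is preferred.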

    Read the article

  • question about permutation problem

    - by davit-datuashvili
    I have posted a similar problem here http://stackoverflow.com/questions/2920315/permutation-of-array, but now I want the following: we know that an array of length n has n! possible permutations, of which exactly one has all elements in sorted order. I want to stop generating permutations when the array is in order and print the result, but something is wrong; I think the problem is repeated permutations. Here is my code:

        import java.util.*;

        public class permut {

            public static Random r = new Random();

            public static void display(int a[], int n) {
                for (int i = 0; i < n; i++) {
                    System.out.println(a[i]);
                }
            }

            public static void Permut(int a[], int n) {
                int j = 0;
                int k = 0;
                while (j < fact(n)) {
                    int s = r.nextInt(n);
                    for (int i = 0; i < n; i++) {
                        k = a[i];
                        a[i] = a[s];
                        a[s] = k;
                    }
                    j++;
                    if (sorted(a, n))
                        display(a, n);
                    break;
                }
            }

            public static void main(String[] args) {
                int a[] = new int[]{3, 4, 1, 2};
                int n = a.length;
                Permut(a, n);
            }

            public static int fact(int n) {
                if (n == 0 || (n == 1))
                    return 1;
                return n * fact(n - 1);
            }

            public static boolean sorted(int a[], int n) {
                boolean flag = false;
                for (int i = 0; i < n - 1; i++) {
                    if (a[i] < a[i + 1]) {
                        flag = true;
                    } else {
                        flag = false;
                    }
                }
                return flag;
            }
        }

    Can anybody help me? The result is nothing.
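
    What the loop above is aiming for, running through permutations and stopping at the ordered one, can be sketched like this in Python (not a fix for the Java itself):

        # Enumerate permutations and stop as soon as an ordered one appears.
        from itertools import permutations

        a = [3, 4, 1, 2]
        for p in permutations(a):
            if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
                print(list(p))   # [1, 2, 3, 4]
                break

    Systematic enumeration avoids the repeated permutations that random swapping runs into.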

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is:

        word [tab] frequency [new line]

    The algorithm for sorting is:

        1. Read some of the file, filtering for purely alphabetic words.
        2. Once you have X alphabetic words, call Collections.sort and write the result to a file.
        3. Repeat until you have finished reading the file.
        4. Start reading two sorted files, comparing line by line for the word with the higher frequency,
           and write to a new file at the same time so as not to load much into memory.
        5. Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, but I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. If I compare them line by line I get file3 = 9,8,8,6,8,5,8,3,8,1, which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks. Edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
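
    The key detail in the merge step is that after each comparison only the file whose line was written out gets advanced; the other file's current line stays put and is compared again. Sketched in Python (not the asker's Java, and Python 3.5+ for the key argument) with heapq.merge, which does exactly this lazily across any number of sorted runs:

        # Merge runs of "word<TAB>frequency" lines that are each sorted by
        # descending frequency; the key is negated because heapq.merge expects
        # ascending order. Lines are read lazily, so memory use stays small.
        import heapq

        def frequency(line):
            return int(line.rstrip("\n").split("\t")[1])

        def merge_runs(run_paths, out_path):
            files = [open(p, encoding="utf-8") for p in run_paths]
            try:
                with open(out_path, "w", encoding="utf-8") as out:
                    for line in heapq.merge(*files, key=lambda l: -frequency(l)):
                        out.write(line)
            finally:
                for f in files:
                    f.close()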

    Read the article

  • $_POST Variable Detected in Chrome but not in Firefox

    - by user1707973
    I am using two images in a form to sort query results from the database. The form is submitted using the POST method. When I click on the first image, the query results should be sorted in ascending order, and when I click on the second, in descending order. This is the code for the form:

        <form name="" action="" method="post">
            <input type="hidden" name="typep" value="price" />
            <input type="image" name="sort" value="asc" src="images/asc-ar.png" />
            <input type="image" name="sort" value="desc" src="images/dsc-ar.png" />
        </form>

    And this is the code that checks whether the $_REQUEST['sort'] variable is set, and therefore whether sorting is required:

        if ($_REQUEST['sort'] != "") {
            $sort  = $_REQUEST['sort'];
            $typep = $_REQUEST['typep'];
            //query to be executed depending on values of $sort and $typep
        }

    Firefox does detect the $_REQUEST['typep'] variable but not the $_REQUEST['sort'] one. This works perfectly in Chrome, though. When I test the site in Firefox, it doesn't detect the $_REQUEST['sort'] variable, so the if condition evaluates to false and the search results don't get sorted.

    Read the article

  • Heapsort not working in Python for list of strings using heapq module

    - by VSN
    I was reading the Python 2.7 documentation when I came across the heapq module. I was interested in the heapify() and heappop() methods, so I decided to write a simple heapsort program for integers:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = map(int, user_input.split(","))
        new_data = []
        for i in range(len(data)):
            heapify(data)
            new_data.append(heappop(data))
        print new_data

    This worked like a charm. To make it more interesting, I thought I would take away the integer conversion and leave the values as strings. Logically, it should make no difference and the code should work as it did for integers:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = user_input.split(",")
        new_data = []
        for i in range(len(data)):
            heapify(data)
            print data
            new_data.append(heappop(data))
        print new_data

    Note: I added a print statement in the for loop to see the heapified list. Here's the output when I ran the script:

        $ python heapsort.py
        Enter numbers to be sorted: 4, 3, 1, 9, 6, 2
        [' 1', ' 3', ' 2', ' 9', ' 6', '4']
        [' 2', ' 3', '4', ' 9', ' 6']
        [' 3', ' 6', '4', ' 9']
        [' 6', ' 9', '4']
        [' 9', '4']
        ['4']
        [' 1', ' 2', ' 3', ' 6', ' 9', '4']

    The reasoning I applied was that since the strings are being compared, the tree should be the same as if they were numbers. As is evident, the heapify didn't work correctly after the third iteration. Could someone help me figure out if I am missing something here? I'm running Python 2.4.5 on RedHat 3.4.6-9. Thanks, VSN
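
    The output makes more sense once you notice that split(",") keeps the leading spaces, so these are lexicographic string comparisons rather than numeric ones, and a space sorts before any digit. A quick illustration:

        values = "4, 3, 1, 9, 6, 2".split(",")
        print(values)                    # ['4', ' 3', ' 1', ' 9', ' 6', ' 2']
        print(' 9' < '4')                # True: ' ' (0x20) compares below '4' (0x34)
        print(sorted(values))            # [' 1', ' 2', ' 3', ' 6', ' 9', '4'] -- same as the heap's result
        print(sorted(values, key=int))   # int() strips the whitespace, giving numeric order

    So the heap is behaving correctly for strings; it's the comparison, not heapify, that differs from the integer version.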

    Read the article

  • How can I create an array of random numbers in C++

    - by Nick
    Instead of ELEMENTS being 25, is there a way to randomly generate a large array of elements – 10,000, 100,000, or even 1,000,000 elements – and then use my insertion sort algorithms on it? I am trying to fill a large array, use insertion sort to put the elements in order and then also in reverse order, and then use clock() from time.h to measure the run time of each algorithm. I want to test with a large amount of numbers.

        #define ELEMENTS 25

        void insertion_sort(int x[], int length);
        void insertion_sort_reverse(int x[], int length);

        int main()
        {
            clock_t tStart = clock();
            int B[ELEMENTS] = {4,2,5,6,1,3,17,14,67,45,32,66,88,
                               78,69,92,93,21,25,23,71,61,59,60,30};
            int x;

            cout << "Not Sorted: " << endl;
            for (x = 0; x < ELEMENTS; x++)
                cout << B[x] << endl;

            insertion_sort(B, ELEMENTS);
            cout << "Sorted Normal: " << endl;
            for (x = 0; x < ELEMENTS; x++)
                cout << B[x] << endl;

            insertion_sort_reverse(B, ELEMENTS);
            cout << "Sorted Reverse: " << endl;
            for (x = 0; x < ELEMENTS; x++)
                cout << B[x] << endl;

            double seconds = clock() / double(CLK_TCK);
            cout << "This program has been running for " << seconds << " seconds." << endl;
            system("pause");
            return 0;
        }

    Read the article

  • Why do browsers have so many possible exploits?

    - by Beau Martínez
    When browsing I am occasionally given warnings about pages that host malware "that could damage my computer". I am seriously perplexed as to why, in 2010, browsers still have possible exploits and can be cracked. My question is "Why?". I'm assuming it's because of the quick development that occurred during the browser wars and was insufficiently tested, but I'm unsure. Surely WebKit would have patched all the issues in KHTML, Gecko sorted out the flaws in Netscape's engine, and the IE coders combed through their codebase to eliminate possible flaws? (Somewhat related: http://superuser.com/questions/117770/which-browser-is-the-most-secure-research-and-practically-based.)

    Read the article

  • Funky mail sorting and grouping in Outlook 2007

    - by laurie
    In Outlook 2007 I group mails in a folder by subject, with the mails in each subject group sorted by Received date (newest to oldest). This works fine; I tick 'Subject' and 'Show in groups' in the context menu of the folder's table header. Life is good. But the subject groups in the mail folder are sorted alphabetically. I would like the group that has the newest mail to be the first group, similar to how Arrange By 'Conversation' works. Can this be done? I'm not averse to an add-in/macro type solution if anyone can point me at examples of implementing custom sorting in Outlook.

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. huge.txt has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. small.txt has 150K rows and a size of ~3 MB, with a space-separated word and a number. Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt and the second column (the number) from small.txt wherever the first word of huge.txt and the first word of small.txt match. My attempt below failed miserably with the following error:

        cat huge.txt | join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt
        join: memory exhausted

    What I suspect is that the sorting order isn't right somehow, even though the files are pre-sorted using:

        sort -k1 huge.unsorted.txt > huge.txt
        sort -k1 small.unsorted.txt > small.txt

    Problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them. 1) Any tips on how to sort the files in a way that the join command considers them properly sorted? 2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and at the end translating the hashes back to strings, leaving the numbers intact at the end of the lines. Since there would be only 150K hashes it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic? See the file samples below. Thank you!

    Sample of huge.txt:

        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40

    Sample of small.txt:

        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846

    Read the article

  • Default sort order in Windows folder

    - by Florian Müller
    I am running Windows 7 and I have a folder "Dropbox" inside my Documents folder. Every time I restart Explorer and enter this folder, it is sorted by modification date. I'd like it to be sorted by name by default, which is still the default in all other folders, but not in this one. Please let me know if there is a way to set this properly. Thanks in advance! Note: I have since found out that this now happens in every folder or library, not only in some folders. This is the standard sort behaviour when I open Explorer for the first time.

    Read the article

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODOs so that it sorts them by three (3) criteria: first by due date, second by priority, and third by title of the task? EDIT: If anyone can help me modify this so that undated TODOs are sorted last, that would be greatly appreciated; at the present time, undated TODOs are not being sorted:

        ;; multiple sort
        (defun org-sort-multi (&rest sort-types)
          "Multiple sorts on a certain level of an outline tree, or plain list items.
        SORT-TYPES is a list where each entry is either a character or a cons pair
        (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
        CHAR is one of the characters defined in `org-sort-entries-or-items'.
        Entries are applied in back to front order.

        Example: To sort first by TODO status, then by priority, then by date,
        then alphabetically (case-sensitive) use the following call:
        (org-sort-multi '(?d ?p ?t (t . ?a)))"
          (interactive)
          (dolist (x (nreverse sort-types))
            (when (char-valid-p x) (setq x (cons nil x)))
            (condition-case nil
                (org-sort-entries (car x) (cdr x))
              (error nil))))

        ;; sort current level
        (defun lawlist-sort (&rest sort-types)
          "Sort the current org level.
        SORT-TYPES is a list where each entry is either a character or a cons pair
        (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
        CHAR is one of the characters defined in `org-sort-entries-or-items'.
        Entries are applied in back to front order.

        Defaults to \"?o ?p\" which is sorted by TODO status, then by priority"
          (interactive)
          (when (equal mode-name "Org")
            (let ((sort-types (or sort-types
                                  (if (or (org-entry-get nil "TODO")
                                          (org-entry-get nil "PRIORITY"))
                                      '(?d ?t ?p)  ;; date, time, priority
                                    '((nil . ?a))))))
              (save-excursion
                (outline-up-heading 1)
                (let ((start (point)) end)
                  (while (and (not (bobp)) (not (eobp)) (<= (point) start))
                    (condition-case nil
                        (outline-forward-same-level 1)
                      (error (outline-up-heading 1))))
                  (unless (> (point) start) (goto-char (point-max)))
                  (setq end (point))
                  (goto-char start)
                  (apply 'org-sort-multi sort-types)
                  (goto-char end)
                  (when (eobp) (forward-line -1))
                  (when (looking-at "^\\s-*$")
                    ;; (delete-line)
                    )
                  (goto-char start)
                  ;; (dotimes (x ) (org-cycle))
                  )))))

    Read the article

  • Thunderbird message sorting - wrong dates

    - by Unsigned
    Thunderbird 3 is having a problem with one of my IMAP accounts. When sorted by date, it displays 10-15 messages at the top (as newest) when in fact they are very old messages (pre-2010, over a year old, some up to 6-7 years old). It's not game-breaking, but it is annoying, as I have to look past those to check for "new" messages. Is there any way to fix this? I'd considered re-downloading all the messages via IMAP, but that would take weeks (50,000 messages). When I open one of these messages it displays the correct date, and when I use "View Source" the Date: header has the correct date (2009, 2006, etc.). Is there any way to make Thunderbird realize that these should be sorted down with the rest of the 2006/2009 messages? None of my other IMAP accounts (many on the same host server) have this issue.

    Read the article

  • Flex XMLListCollection sort on nested tags

    - by gauravgr8
    Hi all, I have a requirement to sort the <ename> elements in the XML within each branch. The XML goes like this:

        <company>
          <branch>
            <name>finance</name>
            <emp>
              <ename>rahul</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>sunil</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>akash</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>alok</ename>
              <phno>123456</phno>
            </emp>
          </branch>
          <branch>
            <name>finance</name>
            <emp>
              <ename>sameer</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>rahul</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>anand</ename>
              <phno>123456</phno>
            </emp>
            <emp>
              <ename>sandeep</ename>
              <phno>123456</phno>
            </emp>
          </branch>
        </company>

    I tried taking the XML into an XMLList:

        var xl:XMLList = new XMLList(branch.ename)
        var xlc:XMLListCollection = new XMLListCollection(xl);

    and then applied a sort to the <ename> values. I can get the sorted XMLListCollection, but the problem is that I only get the <ename> collection sorted, whereas I need the sorted <ename> elements back in the XML. I tried deleting the items in the collection and then adding the sorted list, but in that case the <name> is lost. Please help me out with sorting <ename>, or is there any way to specify nested tags in SortField's name? Thanks in advance.

    Read the article

  • bubble sort on array of c structures not sorting properly

    - by xmpirate
    I have the following program for book records, and I want to sort the records by the name of the book. The code doesn't show any errors, but it's not sorting all the records.

        #include "stdio.h"
        #include "string.h"
        #define SIZE 5

        struct books {                  //define struct
            char name[100], author[100];
            int year, copies;
        };

        struct books book1[SIZE], book2[SIZE], *pointer;   //define struct vars
        void sort(struct books *, int);                    //define sort func

        main()
        {
            int i;
            char c;
            for (i = 0; i < SIZE; i++)          //scanning values
            {
                gets(book1[i].name);
                gets(book1[i].author);
                scanf("%d%d", &book1[i].year, &book1[i].copies);
                while ((c = getchar()) != '\n' && c != EOF);
            }
            pointer = book1;
            sort(pointer, SIZE);                //sort call
            i = 0;                              //printing values
            while (i < SIZE)
            {
                printf("##########################################################################\n");
                printf("Book: %s\nAuthor: %s\nYear of Publication: %d\nNo of Copies: %d\n",
                       book1[i].name, book1[i].author, book1[i].year, book1[i].copies);
                printf("##########################################################################\n");
                i++;
            }
        }

        void sort(struct books *pointer, int n)
        {
            int i, j, sorted = 0;
            struct books temp;
            for (i = 0; (i < n-1) && (sorted == 0); i++)   //bubble sort on the book name
            {
                sorted = 1;
                for (j = 0; j < n-i-1; j++)
                {
                    if (strcmp((*pointer).name, (*(pointer+1)).name) > 0)
                    {
                        //copy to temp val
                        strcpy(temp.name, (*pointer).name);
                        strcpy(temp.author, (*pointer).author);
                        temp.year = (*pointer).year;
                        temp.copies = (*pointer).copies;
                        //copy next val
                        strcpy((*pointer).name, (*(pointer+1)).name);
                        strcpy((*pointer).author, (*(pointer+1)).author);
                        (*pointer).year = (*(pointer+1)).year;
                        (*pointer).copies = (*(pointer+1)).copies;
                        //copy back temp val
                        strcpy((*(pointer+1)).name, temp.name);
                        strcpy((*(pointer+1)).author, temp.author);
                        (*(pointer+1)).year = temp.year;
                        (*(pointer+1)).copies = temp.copies;
                        sorted = 0;
                    }
                    *pointer++;
                }
            }
        }

    My input:

        The C Programming Language
        X Y Z
        1934 56
        Inferno
        Dan Brown
        1993 453
        harry Potter and the soccers stone
        J K Rowling
        2012 150
        Ruby On Rails
        jim aurther nil
        2004 130
        Learn Python Easy Way
        gmaps4rails
        1967 100

    And the output:

        ##########################################################################
        Book: Inferno
        Author: Dan Brown
        Year of Publication: 1993
        No of Copies: 453
        ##########################################################################
        ##########################################################################
        Book: The C Programming Language
        Author: X Y Z
        Year of Publication: 1934
        No of Copies: 56
        ##########################################################################
        ##########################################################################
        Book: Ruby On Rails
        Author: jim aurther nil
        Year of Publication: 2004
        No of Copies: 130
        ##########################################################################
        ##########################################################################
        Book: Learn Python Easy Way
        Author: gmaps4rails
        Year of Publication: 1967
        No of Copies: 100
        ##########################################################################
        ##########################################################################
        Book:
        Author:
        Year of Publication: 0
        No of Copies: 0
        ##########################################################################

    We can see that the above sorting is wrong. What am I doing wrong?

    Read the article
