Search Results

Search found 3659 results on 147 pages for 'sorted hash'.


  • tablednd post issue help please

    - by netrise
    Hi, I have a terrible headache over this and my script is very simple. Why can't I get $_POST['table-2'] after submitting the Update button? I want to get the row ID numbers in their sorted order.

    # index.php

    <head>
    <script src="jquery.js" type="text/javascript"></script>
    <script src="jquery.tablednd.js" type="text/javascript"></script>
    <script src="jqueryTableDnDArticle.js" type="text/javascript"></script>
    </head>
    <body>
    <form method="POST" action="index.php">
    <table id="table-2" cellspacing="0" cellpadding="2">
    <tr id="a"><td>1</td><td>One</td><td><input type="text" name="one" value="one"/></td></tr>
    <tr id="b"><td>2</td><td>Two</td><td><input type="text" name="two" value="two"/></td></tr>
    <tr id="c"><td>3</td><td>Three</td><td><input type="text" name="three" value="three"/></td></tr>
    <tr id="d"><td>4</td><td>Four</td><td><input type="text" name="four" value="four"/></td></tr>
    <tr id="e"><td>5</td><td>Five</td><td><input type="text" name="five" value="five"/></td></tr>
    </table>
    <input type="submit" name="update" value="Update">
    </form>
    <?php
    $result[] = $_POST['table-2'];
    foreach($result as $value) {
        echo "$value<br/>";
    }
    ?>
    </body>

    # jqueryTableDnDArticle.js

    ....
    $("#table-2").tableDnD({
        onDragClass: "myDragClass",
        onDrop: function(table, row) {
            var rows = table.tBodies[0].rows;
            var debugStr = "Row dropped was " + row.id + ". New order: ";
            for (var i = 0; i < rows.length; i++) {
                debugStr += rows[i].id + " ";
            }
            //$("#debugArea").html(debugStr);
            $.ajax({
                type: "POST",
                url: "index.php",
                data: $.tableDnD.serialize(),
                success: function(html) { alert("Success"); }
            });
        },
        onDragStart: function(table, row) {
            $("#debugArea").html("Started dragging row " + row.id);
        }
    });
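
    A likely wrinkle here (hedged, since it depends on tableDnD's serializer settings): the row order never arrives with the normal form submit at all, because table rows are not form fields; it only arrives in the AJAX POST fired from onDrop, where $.tableDnD.serialize() sends it as table-2[]=a&table-2[]=b&..., which PHP exposes as an array. A minimal sketch of an index.php branch that reads that array, assuming the plugin's default serialize format:

    <?php
    // Sketch, not a drop-in: handle the AJAX POST sent by $.tableDnD.serialize().
    // With the default format (table-2[]=a&table-2[]=b&...) PHP exposes the row
    // ids, in their new order, as the array $_POST['table-2'].
    if (isset($_POST['table-2']) && is_array($_POST['table-2'])) {
        foreach ($_POST['table-2'] as $position => $rowId) {
            echo ($position + 1) . ' => ' . htmlspecialchars($rowId) . "<br/>";
        }
        exit; // respond to the AJAX call only, skip rendering the rest of the page
    }
    ?>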

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (this never changes). Now I want to compute the mean and covariance matrix of each group. On the CPU it's roughly:

    for each point p {
        mean[p.group] += p.pos;
        covariance[p.group] += p.pos * p.pos;
        ++count[p.group];
    }
    for each group g {
        mean[g] /= count[g];
        covariance[g] = covariance[g]/count[g] - mean[g]*mean[g];
    }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is really just a segmented reduction, but with the segments scattered around. So the first idea I came up with was to sort the points by their groups first. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best choice). After that the points are sorted by group, and I could possibly come up with an adaptation of the segmented scan algorithms presented here. But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared-memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or a segmented reduction algorithm that minimizes shared memory/register usage. I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, since the general concepts of GPU computing don't differ much across the major frameworks.
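
    Since the poster says the framework does not really matter, here is a hedged baseline sketch in CUDA rather than the segmented-scan approach asked about: with only 15 groups, one thread per point accumulating straight into per-group sums with atomicAdd needs no sorting at all and is often worth benchmarking first. Float atomics require a compute-capability 2.x device; NUM_GROUPS, the Point struct, and the 6-entry symmetric covariance layout are illustrative, not from the original code.

    #include <cuda_runtime.h>

    #define NUM_GROUPS 15

    struct Point { float3 pos; int group; };

    // mean, cov and count must be zeroed before the launch; the final division
    // and mean subtraction happen on the CPU, as in the question.
    __global__ void accumulate(const Point* pts, int n,
                               float3* mean,   // NUM_GROUPS entries
                               float*  cov,    // NUM_GROUPS * 6 entries (upper triangle)
                               int*    count)  // NUM_GROUPS entries
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        Point p = pts[i];
        atomicAdd(&mean[p.group].x, p.pos.x);
        atomicAdd(&mean[p.group].y, p.pos.y);
        atomicAdd(&mean[p.group].z, p.pos.z);
        // upper triangle of pos * pos^T
        atomicAdd(&cov[p.group*6 + 0], p.pos.x * p.pos.x);
        atomicAdd(&cov[p.group*6 + 1], p.pos.x * p.pos.y);
        atomicAdd(&cov[p.group*6 + 2], p.pos.x * p.pos.z);
        atomicAdd(&cov[p.group*6 + 3], p.pos.y * p.pos.y);
        atomicAdd(&cov[p.group*6 + 4], p.pos.y * p.pos.z);
        atomicAdd(&cov[p.group*6 + 5], p.pos.z * p.pos.z);
        atomicAdd(&count[p.group], 1);
    }

    A refinement along the lines the question hints at would keep NUM_GROUPS partial sums per work-group/block in shared memory and only flush them to global memory once per block, cutting atomic traffic; the sketch above is just the simplest correct starting point.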

    Read the article

  • Loading images to UIScrollview crashes

    - by Icky
    Hello all. I have a navigation controller pushing a UIViewController with a scroll view inside. Within the scroll view I download a certain number of images, around 20 (sometimes more), each around 150 KB. All of these images are added to the scroll view so that each one's origin is x + imageSize, i.e. each image sits to the right of the one before it. All in all I think it's a lot of data (3-4 MB). On an iPod Touch this sometimes crashes; the iPhone can handle it once, but if it has to load the data again (some other images), it crashes too. I guess it's a memory issue, but in my code I download the image, save it to a file on the phone as NSData, read it back from the file and add it to a UIImageView which I release. So I have freed the memory I allocated, yet it still crashes. Can anyone help me out? Since I'm new to this, I don't know the best way to handle images in a scroll view. Also, I create the controller at startup from a nib, which means I don't have to release it, since I don't use alloc - right?

    Code: in my root view controller I do:

    -(void) showImages {
        [[self naviController] pushViewController:imagesViewController animated:YES];
        [imagesViewController viewWillAppear:YES];
    }

    Then in the controller handling the scroll view, this is the method that loads the images:

    - (void) loadOldImageData {
        for (int i = 0; i < 40 ; i++) {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *filePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"img%d.jpg", i]];
            NSData *myImg = [NSData dataWithContentsOfFile:filePath];
            UIImage *im = [UIImage imageWithData:myImg];
            if([im isKindOfClass:[UIImage class]]) {
                NSLog(@"IM EXISTS");
                UIImageView *imgView = [[UIImageView alloc] initWithImage:im];
                CGRect frame = CGRectMake(i*320, 0, 320, 416);
                imgView.frame = frame;
                [myScrollView addSubview:imgView];
                [imgView release];
                //NSLog(@"Adding img %d", i);
                numberImages = i;
                NSLog(@"setting numberofimages to %d", numberImages);
                //NSLog(@"scroll subviews %d", [myScrollView.subviews count]);
            }
        }
        myScrollView.contentSize = CGSizeMake(320 * (numberImages + 1), 416);
    }
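
    The usual cure for this kind of crash (hedged, it assumes the problem really is memory pressure from keeping dozens of decoded full-screen images alive at once) is to load pages lazily instead of all up front. A rough sketch, where pageViews is an assumed NSMutableArray pre-filled with [NSNull null] placeholders, numberOfPages is the total page count, and the method/ivar names are illustrative rather than from the original project:

    // Load one page from disk only when it is (about to be) visible.
    - (void)loadPage:(NSInteger)page {
        if (page < 0 || page >= numberOfPages) return;
        if ([pageViews objectAtIndex:page] != [NSNull null]) return;   // already loaded

        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                              NSUserDomainMask, YES) objectAtIndex:0];
        NSString *path = [docs stringByAppendingPathComponent:
                              [NSString stringWithFormat:@"img%d.jpg", (int)page]];
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        if (!image) return;

        UIImageView *imgView = [[UIImageView alloc] initWithImage:image];
        imgView.frame = CGRectMake(page * 320, 0, 320, 416);
        [myScrollView addSubview:imgView];
        [pageViews replaceObjectAtIndex:page withObject:imgView];
        [imgView release];
    }

    // The controller must be myScrollView.delegate for this to fire.
    - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
        NSInteger page = scrollView.contentOffset.x / 320;
        [self loadPage:page - 1];
        [self loadPage:page];
        [self loadPage:page + 1];
        // Pages further away could be removed from the scroll view and their
        // slots reset to [NSNull null] to cap memory use.
    }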

    Read the article

  • How to loop an array with strings as indexes in PHP

    - by Axel Lambregts
    I had to make an array with the letters A-Z (the alphabet) as indexes, each starting with the value 0. So I made this array:

    $alfabet = array( 'A' => 0, 'B' => 0, 'C' => 0, 'D' => 0, 'E' => 0, 'F' => 0, 'G' => 0, 'H' => 0, 'I' => 0, 'J' => 0, 'K' => 0, 'L' => 0, 'M' => 0, 'N' => 0, 'O' => 0, 'P' => 0, 'Q' => 0, 'R' => 0, 'S' => 0, 'T' => 0, 'U' => 0, 'V' => 0, 'W' => 0, 'X' => 0, 'Y' => 0, 'Z' => 0 );

    I also read text from a file ($text = file_get_contents('tekst15.txt');), split the characters of that file into an array ($textChars = str_split($text);) and sorted it from A-Z with sort($textChars);. What I want is a loop so that whenever it finds an 'A' in the $textChars array, the value in the other array under index 'A' goes up by one (so something like $alfabet['A']++). Can anyone help me with this loop? I have this at the moment:

    for($i = 0; $i <= count($textChars); $i++){
        while($textChars[$i] == $alfabet[A]){
            $alfabet[A]++;
        }
    }
    echo $alfabet[A];

    Problem 1: I want to loop over the $alfabet array too; right now I only check for 'A', but I want to check all the indexes. Problem 2: this currently returns 7 for every alphabet index I try, so it is totally wrong :) Sorry about my English, and thanks for your time.
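
    A minimal sketch of the counting loop being asked for: one pass over the characters, incrementing the matching letter bucket. Built-ins like count_chars() would also do the job, but this stays close to the original setup.

    <?php
    // Build the A-Z buckets, all starting at 0.
    $alfabet = array_fill_keys(range('A', 'Z'), 0);

    $text = file_get_contents('tekst15.txt');
    foreach (str_split(strtoupper($text)) as $char) {
        if (isset($alfabet[$char])) {   // skips spaces, digits and punctuation
            $alfabet[$char]++;
        }
    }

    foreach ($alfabet as $letter => $count) {
        echo "$letter: $count<br/>";
    }
    ?>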

    Read the article

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, used to keep those elements sorted. The scores are doubles, even though many users actually sort by integers (for instance Unix timestamps). When the database is saved we need to write these doubles to disk. This is what is currently used:

    snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val);

    Additionally, infinity and not-a-number conditions are checked so that they can also be represented in the final database file. Unfortunately, converting a double into its string representation is pretty slow, while we have a function in Redis that converts an integer into a string representation much faster. So my idea was to check whether a double can be cast to an integer without loss of data, and if so use the fast function to turn the integer into a string. For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice, something like this:

    double x = ... some value ...
    if (x == (double)((long long)x))
        use_the_fast_integer_function((long long)x);
    else
        use_the_slow_snprintf(x);

    In my reasoning, the cast above converts the double into a long long and then back into a double. If the value fits the range and has no fractional part, the number survives the round trip and is exactly equal to the initial number. As I wanted to make sure this will not break on some system, I joined #c on freenode and got a lot of insults ;) so I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets, that is, the architectures where Linux / Mac OS X / *BSD / Solaris run nowadays? What I could add to make the code saner is an explicit check on the range of the double before attempting the cast at all. Thank you for any help.
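
    A hedged sketch of that explicit range check mentioned at the end: reject non-finite values and fractional parts, and keep the double strictly inside the long long range before casting (the bounds are written out as 2^63 because that value is exactly representable as a double). The names are illustrative, not Redis internals.

    #include <math.h>

    /* Returns 1 and stores the converted value if x can be represented exactly
     * as a long long, 0 otherwise. */
    static int double_fits_long_long(double x, long long *out) {
        if (!isfinite(x)) return 0;          /* NaN or +/- infinity */
        if (x != floor(x)) return 0;         /* has a fractional part */
        /* -2^63 and 2^63 are exact doubles; anything >= 2^63 or < -2^63 overflows. */
        if (x < -9223372036854775808.0 || x >= 9223372036854775808.0) return 0;
        *out = (long long)x;
        return 1;
    }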

    Read the article

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using PostgreSQL and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to the count()'ing and sorting. The DB schema is as follows:

    CREATE TABLE hosts (
        id SERIAL PRIMARY KEY,
        name VARCHAR(255)
    );
    CREATE TABLE items (
        id SERIAL PRIMARY KEY,
        description TEXT
    );
    CREATE TABLE host_item (
        id SERIAL PRIMARY KEY,
        host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE,
        item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE
    );

    There are some other fields as well, but they are not relevant. I want to extract 2 different reports:
    - a list of all hosts with the number of items per host, ordered from highest to lowest count
    - a list of all items with the number of hosts per item, ordered from highest to lowest count

    I have used 2 queries for this purpose. Items with host count:

    SELECT i.id, i.description, COUNT(hi.id) AS count FROM items AS i LEFT JOIN host_item AS hi ON (i.id=hi.item) GROUP BY i.id ORDER BY count DESC LIMIT 10;

    Hosts with item count:

    SELECT h.id, h.name, COUNT(hi.id) AS count FROM hosts AS h LEFT JOIN host_item AS hi ON (h.id=hi.host) GROUP BY h.id ORDER BY count DESC LIMIT 10;

    The problem is that the queries run for 5-6 seconds before returning any data. As this is a web-based application, 6 seconds is just not acceptable. The database is heavily populated, with approximately 50k hosts, 1000 items and 400,000 host/item relations, and these numbers will likely increase significantly when (or perhaps if) the application is used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries execute instantly, without any delay whatsoever (less than 20 ms to finish). Is there any way I can optimize these queries so that I get the result sorted without the delay? I tried different indexes, but since the count is computed on the fly I don't see how an index can be used for it. I have read that count()'ing in PostgreSQL is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use PostgreSQL 9.2.
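
    One common workaround, shown here as a hedged sketch rather than a drop-in (the names are illustrative, and the same pattern would be repeated for the per-item report): maintain a per-host counter with triggers, so the report becomes an ORDER BY on an indexed column instead of a computed COUNT().

    ALTER TABLE hosts ADD COLUMN item_count integer NOT NULL DEFAULT 0;
    -- Seed the counters once from the existing data.
    UPDATE hosts h SET item_count = (SELECT count(*) FROM host_item hi WHERE hi.host = h.id);
    CREATE INDEX hosts_item_count_idx ON hosts (item_count DESC);

    CREATE OR REPLACE FUNCTION host_item_count_trg() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE hosts SET item_count = item_count + 1 WHERE id = NEW.host;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE hosts SET item_count = item_count - 1 WHERE id = OLD.host;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER host_item_count
    AFTER INSERT OR DELETE ON host_item
    FOR EACH ROW EXECUTE PROCEDURE host_item_count_trg();

    -- The "top 10 hosts" report is now a cheap index scan:
    SELECT id, name, item_count FROM hosts ORDER BY item_count DESC LIMIT 10;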

    Read the article

  • C++ string sort like a human being?

    - by Walter Nissen
    I would like to sort alphanumeric strings the way a human being would sort them, i.e. "A2" comes before "A10", and "a" certainly comes before "Z"! Is there any way to do this without writing a mini-parser? Ideally it would also put "A1B1" before "A1B10". I see the question "Natural (human alpha-numeric) sort in Microsoft SQL 2005" with a possible answer, but it uses various library functions, as does "Sorting Strings for Humans with IComparer". Below is a test case that currently fails:

    #include <set>
    #include <sstream>
    #include <iterator>
    #include <iostream>
    #include <vector>
    #include <cctype>
    #include <cassert>

    template <typename T>
    struct LexicographicSort {
        inline bool operator() (const T& lhs, const T& rhs) const{
            std::ostringstream s1,s2;
            s1 << toLower(lhs); s2 << toLower(rhs);
            bool less = s1.str() < s2.str();
            std::cout<<s1.str()<<" "<<s2.str()<<" "<<less<<"\n";
            return less;
        }
        inline std::string toLower(const std::string& str) const {
            std::string newString("");
            for (std::string::const_iterator charIt = str.begin(); charIt!=str.end();++charIt) {
                newString.push_back(std::tolower(*charIt));
            }
            return newString;
        }
    };

    int main(void) {
        const std::string reference[5] = {"ab","B","c1","c2","c10"};
        std::vector<std::string> referenceStrings(&(reference[0]), &(reference[5]));
        //Insert in reverse order so we know they get sorted
        std::set<std::string,LexicographicSort<std::string> > strings(referenceStrings.rbegin(), referenceStrings.rend());
        std::cout<<"Items:\n";
        std::copy(strings.begin(), strings.end(), std::ostream_iterator<std::string>(std::cout, "\n"));
        std::vector<std::string> sortedStrings(strings.begin(), strings.end());
        assert(sortedStrings == referenceStrings);
    }
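
    For what it's worth, a hedged sketch of a natural-order comparison that could replace the body of LexicographicSort::operator(): it walks both strings, comparing digit runs numerically and everything else case-insensitively (ASCII only, not locale-aware). With the test case above it should produce the expected order {"ab","B","c1","c2","c10"}.

    #include <cctype>
    #include <cstddef>
    #include <string>

    bool naturalLess(const std::string& a, const std::string& b) {
        std::size_t i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            if (std::isdigit((unsigned char)a[i]) && std::isdigit((unsigned char)b[j])) {
                // Compare whole digit runs as numbers.
                std::size_t i2 = i, j2 = j;
                while (i2 < a.size() && std::isdigit((unsigned char)a[i2])) ++i2;
                while (j2 < b.size() && std::isdigit((unsigned char)b[j2])) ++j2;
                std::string na = a.substr(i, i2 - i), nb = b.substr(j, j2 - j);
                na.erase(0, na.find_first_not_of('0'));   // strip leading zeros so
                nb.erase(0, nb.find_first_not_of('0'));   // lengths are comparable
                if (na.size() != nb.size()) return na.size() < nb.size();
                if (na != nb) return na < nb;
                i = i2; j = j2;                           // equal runs: keep going
            } else {
                int ca = std::tolower((unsigned char)a[i]);
                int cb = std::tolower((unsigned char)b[j]);
                if (ca != cb) return ca < cb;
                ++i; ++j;
            }
        }
        return a.size() - i < b.size() - j;               // shorter remainder sorts first
    }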

    Read the article

  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer--suggestions of all kinds are welcome.) My container is a std::multiset that holds Event structs:

    typedef std::multiset< Event, std::less< Event > > EventPQ;

    with the Event structs sorted by their double time members:

    struct Event {
    public:
        explicit Event(double t) : time(t), eventID(), hostID(), s() {}
        Event(double t, int eid, int hid, int stype) : time(t), eventID( eid ), hostID( hid ), s(stype) {}
        bool operator < ( const Event & rhs ) const {
            return ( time < rhs.time );
        }
        double time;
        ...
    };

    The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time. I'd love suggestions on how to work through this. The code for executing and adding events is below for the curious:

    double t = 0.0;
    double nextTimeStep = t + EPID_DELTA_T;
    EventPQ::iterator eventIter = currentEvents.begin();
    while ( t < EPID_SIM_LENGTH ) {
        // Add some events to currentEvents
        while ( ( *eventIter ).time < nextTimeStep ) {
            Event thisEvent = *eventIter;
            t = thisEvent.time;
            executeEvent( thisEvent );
            eventCtr++;
            currentEvents.erase( eventIter );
            eventIter = currentEvents.begin();
        }
        t = nextTimeStep;
        nextTimeStep += EPID_DELTA_T;
    }

    void Simulation::addEvent( double et, int eid, int hid, int s ) {
        assert( currentEvents.find( Event(et) ) == currentEvents.end() );
        Event thisEvent( et, eid, hid, s );
        currentEvents.insert( thisEvent );
    }
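
    Two hedged observations about this pattern, which may or may not be the root cause here: first, the inner while dereferences eventIter even when currentEvents is empty, i.e. it dereferences end(), which is undefined behavior; second, eventIter is not refreshed after the "Add some events" step, so if a newly inserted event becomes the new smallest element, the loop starts from the stale position and those earlier events appear to execute out of order. A sketch that always pulls from begin() and copies the event before erasing, reusing the question's variables and functions:

    while ( t < EPID_SIM_LENGTH ) {
        // ... add some events to currentEvents ...
        while ( !currentEvents.empty() &&
                currentEvents.begin()->time < nextTimeStep ) {
            Event thisEvent = *currentEvents.begin();   // copy before erasing
            currentEvents.erase( currentEvents.begin() );
            t = thisEvent.time;
            executeEvent( thisEvent );                  // may insert new events; harmless here
            ++eventCtr;
        }
        t = nextTimeStep;
        nextTimeStep += EPID_DELTA_T;
    }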

    Read the article

  • How can I sort an array, yet exclude certain elements (to be kept at the same position in the array)

    - by calumbrodie
    This will be implemented in Javascript (jQuery), but I suppose the method could be used in any language. I have an array of items and I need to perform a sort. However, there are some items in the array that have to be kept in the same position (same index). The array in question is built from a list of <li> elements, and I'm using .data() values attached to the list items as the value on which to sort. What approach would be best here?

    <ul id="fruit">
        <li class="stay">bananas</li>
        <li>oranges</li>
        <li>pears</li>
        <li>apples</li>
        <li class="stay">grapes</li>
        <li>pineapples</li>
    </ul>
    <script type="text/javascript">
        var sugarcontent = new Array('32','21','11','45','8','99');
        $('#fruit li').each(function(i,e){
            $(this).data(sugarcontent[i]);
        })
    </script>

    I want the list sorted with the following result...

    <ul id="fruit">
        <li class="stay">bananas</li> <!-- score = 32 -->
        <li>pineapples</li> <!-- score = 99 -->
        <li>apples</li> <!-- score = 45 -->
        <li>oranges</li> <!-- score = 21 -->
        <li class="stay">grapes</li> <!-- score = 8 -->
        <li>pears</li> <!-- score = 11 -->
    </ul>

    Thanks!
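
    A hedged sketch of one approach in jQuery: pull out the movable items, sort only those (descending by score), then pour them back into the non-"stay" slots so the fixed items keep their indexes. It assumes the scores are stored under a named data key, e.g. $(this).data('sugar', sugarcontent[i]), since .data(value) with a single plain argument reads rather than writes.

    var $items  = $('#fruit li');
    var movable = $items.not('.stay').get();            // DOM nodes that may move

    movable.sort(function (a, b) {
        return $(b).data('sugar') - $(a).data('sugar'); // highest score first
    });

    var k = 0;
    $items.each(function () {
        // Fixed items keep their slot; every other slot takes the next sorted node.
        var node = $(this).hasClass('stay') ? this : movable[k++];
        $('#fruit').append(node);                       // re-appending an existing node moves it
    });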

    Read the article

  • MySQL Hibernate sort on 2 columns

    - by sammichy
    I have a table as follows:

    Table item {
        ID - primary key
        content - String
        published_date - when the content was published
        create_date - when this database entry was created
    }

    Every hour (or some specified interval) I run a process to update this table with data from different sources (websites). I want to display the results according to the following rules:
    1. The entries created each time the process runs should be grouped together, so the entries from the 2nd process run always come after the entries from the first run, even if the published_date of an entry from the first run is later than the published_date of an entry from the 2nd run.
    2. Within the grouping by run, the entries should be sorted by published_date.
    3. Another restriction is that I prefer that data from the same source not be grouped together. If I sort by create_date, published_date I will end up with all the data from source a, then all the data from source b, etc. I prefer that the data within each hour be mixed up for better presentation.

    If I add a column to this table and store a counter which increments each time the process runs, I can write a query that sorts first by the counter and then by published_date. Is there a way to do it without adding a field? I'm using Hibernate over MySQL. E.g.:

    Hour 1 (run 1): 4 rows collected from site a (rows 1-4), 3 rows collected from site b (rows 5-7)
    Hour 2 (run 2): 2 rows collected from site a (rows 8-9), 3 rows collected from site b (rows 10-12)
    ...

    After each run, new records are added to the database from each website. The create date is the time when the record was created in the database; the published date is part of the content and is read in from the external source. When the results are displayed I would like the rows to be grouped together based on the hour they were collected in, so rows 1-7 would be displayed before rows 8-12. Within each hourly grouping, I would like to sort the results by published date (timestamp). This is necessary so that the posts from all the sites collected in that hour are not grouped by site but rather mixed in with each other.
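
    A hedged sketch of the no-extra-column approach: derive the run grouping from create_date itself, assuming one run per hour, by ordering on the hour bucket first and published_date second. In Hibernate this could be issued as a native SQL query (or the bucket expressed as a formula property); the query below is plain MySQL.

    SELECT id, content, published_date, create_date
    FROM item
    ORDER BY DATE_FORMAT(create_date, '%Y-%m-%d %H') ASC,  -- the hourly "run" bucket
             published_date ASC;                            -- mixes the sources within a run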

    Read the article

  • Haskell Add Function Return to List Until Certain Length

    - by kienjakenobi
    I want to write a function which takes a list and constructs a subset of that list of a certain length based on the output of a function. If I were simply interested in the first 50 elements of the sorted list xs, then I would use fst (splitAt 50 (sort xs)). However, the problem is that elements in my list rely on other elements in the same list. If I choose element p, then I MUST also choose elements q and r, even if they are not in the first 50 elements of my list. I am using a function finderFunc which takes an element a from the list xs and returns a list with the element a and all of its required elements. finderFunc works fine. Now, the challenge is to write a function which builds a list whose total length is 50 based on multiple outputs of finderFunc. Here is my attempt at this:

    finish :: [a] -> [a] -> [a]
    --This is the base case, which adds nothing to the final list
    finish [] fs = []
    --The function is recursive, so the fs variable is necessary so that finish
    -- can forward the incomplete list to itself.
    finish ps fs
        -- If the final list fs is too small, add elements to it
        | length fs < 50 && length (fs ++ newrs) <= 50 = fs ++ finish newps newrs
        -- If the length is met, then add nothing to the list and quit
        | length fs >= 50 = finish [] fs
        -- These guard statements are currently lacking, not the main problem
        | otherwise = finish [] fs
        where
            --Sort the candidate list
            sortedps = sort ps
            --(finderFunc a) returns a list of type [a] containing a and all the
            -- elements which are required to go with it. This is the interesting
            -- bit. rs is also a subset of the candidate list ps.
            rs = finderFunc (head sortedps)
            --Remove those elements which are already in the final list, because
            -- there can be overlap
            newrs = filter (`notElem` fs) rs
            --Remove the elements we will add to the list from the new list
            -- of candidates
            newps = filter (`notElem` rs) ps

    I realize that the above if statements will, in some cases, not give me a list of exactly 50 elements. This is not the main problem, right now. The problem is that my function finish does not work at all as I would expect it to. Not only does it produce duplicate elements in the output list, but it sometimes goes far above the total number of elements I want to have in the list. The way this is written, I usually call it with an empty list, such as: finish xs [], so that the list it builds on starts as an empty list.
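
    A hedged sketch of the same accumulation written as a single worker that carries the result along. The dependency function is passed in explicitly (as finder) so the snippet stands alone, and 50 becomes a parameter; calling finishWith finderFunc 50 xs would then play the role of finish xs []. Note one deliberate difference from the original: when a closure would overshoot the target, this version skips that candidate and keeps trying later ones instead of stopping.

    import Data.List (sort)

    finishWith :: Ord a => (a -> [a]) -> Int -> [a] -> [a]
    finishWith finder target = go [] . sort
      where
        go acc [] = acc
        go acc (p:ps)
          | length acc >= target              = acc
          | length acc + length new <= target = go (acc ++ new) rest
          | otherwise                         = go acc rest          -- closure too big, skip p
          where
            closure = finder p                            -- p plus everything it requires
            new     = filter (`notElem` acc) closure      -- avoid duplicates in the result
            rest    = filter (`notElem` closure) ps       -- drop them from the candidates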

    Read the article

  • C++: incorrect swapping of nodes in linked list

    - by Dragon
    I have 2 simple structures:

    struct Address {
        char city[255];
    };
    typedef Address* AddressPtr;

    struct Person {
        char fullName[255];
        Address* address;
        Person* next;
    };
    typedef Person* PersonPtr;

    The Person structure forms a linked list where new elements are added to the beginning of the list. What I want to do is sort it by fullName. At first I tried to swap the links, but I lost the beginning of the list and as a result my list was only partially sorted. Then I decided to sort the list by swapping the values of the nodes, but I get strange results: for a list with the names Test3, Test2, Test1, I get Test3, Test3, Test3. Here is my sorting code:

    void sortByName(PersonPtr& head) {
        TaskPtr currentNode, nextNode;
        for(currentNode = head; currentNode->next != NULL; currentNode = currentNode->next) {
            for(nextNode = currentNode->next; nextNode != NULL; nextNode = nextNode->next) {
                if(strcmp(currentNode->fullName, nextNode->fullName) > 0) {
                    swapNodes(currentNode, nextNode);
                }
            }
        }
    }

    void swapNodes(PersonPtr& node1, PersonPtr& node2) {
        PersonPtr temp_node = node2;
        strcpy(node2->fullName, node1->fullName);
        strcpy(node1->fullName, temp_node->fullName);
        strcpy(node2->address->city, node1->address->city);
        strcpy(node1->address->city, temp_node->address->city);
    }

    After the sort completes, the node values are a little bit strange.

    UPDATED: This is how I swapped the links:

    void swapNodes(PersonPtr& node1, PersonPtr& node2) {
        PersonPtr temp_person;
        AddressPtr temp_address;

        temp_person = node2;
        node2 = node1;
        node1 = temp_person;

        temp_address = node2->address;
        node2->address = node1->address;
        node1->address = temp_address;
    }
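
    A hedged sketch of why the value swap collapses to one name, and a possible fix: temp_node is only another pointer to node2, so after the first strcpy both nodes already hold node1's data and the "copy back" restores nothing. Copying through a real temporary buffer (using the question's Person/Address structs) behaves as intended:

    #include <cstring>

    void swapNodeValues(PersonPtr node1, PersonPtr node2) {
        char tmp[255];                       // real temporary storage, not an alias

        std::strcpy(tmp, node1->fullName);
        std::strcpy(node1->fullName, node2->fullName);
        std::strcpy(node2->fullName, tmp);

        std::strcpy(tmp, node1->address->city);
        std::strcpy(node1->address->city, node2->address->city);
        std::strcpy(node2->address->city, tmp);
    }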

    Read the article

  • Need help with 2 MySql Queries. Join vs Subqueries.

    - by BugBusterX
    I have 2 tables:

    user: id, name
    message: sender_id, receiver_id, message, read_at, created_at

    There are 2 results I need to retrieve, and I'm trying to find the best solution. I have included the queries I'm using at the very end.
    1. I need to retrieve a list of users, and with each user the information whether there are any unread messages from that user (them as sender, me as receiver) and whether or not there are any read messages between us (they send and I'm the receiver, or I send and they are the receiver).
    2. I need the same as above, but only those members with whom there has been any messaging, sorted by unread first, then by last message received.

    Can you advise, please? Should this be done with joins or subqueries? In the first case I do not need the count, I just need to know whether or not there is at least one unread message. I'm posting the schema and my current queries, please have a look when you get a chance. BTW, everything is the way I want in the first query. My concern is: in the second query I would like to order by message.created_at, but I don't think I can do that with grouping? And I also don't know whether this approach is the most optimized and fast.

    CREATE TABLE `user` (
        `id` bigint(20) NOT NULL AUTO_INCREMENT,
        `name` varchar(255) NOT NULL,
        PRIMARY KEY (`id`)
    )

    INSERT INTO `user` VALUES (1,'User 1'),(2,'User 2'),(3,'User 3'),(4,'User 4'),(5,'User 5');

    CREATE TABLE `message` (
        `id` bigint(20) NOT NULL AUTO_INCREMENT,
        `sender_id` bigint(20) DEFAULT NULL,
        `receiver_id` bigint(20) DEFAULT NULL,
        `message` text,
        `read_at` datetime DEFAULT NULL,
        `created_at` datetime NOT NULL,
        PRIMARY KEY (`id`)
    )

    INSERT INTO `message` VALUES (1,3,1,'Messge',NULL,'2010-10-10 10:10:10'),(2,1,4,'Hey','2010-10-10 10:10:12','2010-10-10 10:10:11'),(3,4,1,'Hello','2010-10-10 10:10:19','2010-10-10 10:10:15'),(4,1,4,'Again','2010-10-10 10:10:25','2010-10-10 10:10:21'),(5,3,1,'Hiii',NULL,'2010-10-10 10:10:21');

    SELECT u.*, m_new.id as have_new, m.id as have_any
    FROM user u
    LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL)
    LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1))
    GROUP BY u.id

    SELECT u.*, m_new.id as have_new, m.id as have_any
    FROM user u
    LEFT JOIN message m_new ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL)
    LEFT JOIN message m ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1))
    where m.id IS NOT NULL
    GROUP BY u.id
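
    A hedged sketch of the second query written with aggregates instead of bare joined columns, which makes the ordering the poster wants (unread first, then most recent message) expressible despite the GROUP BY; the column aliases are illustrative and "1" is again the current user's id:

    SELECT u.id, u.name,
           MAX(m.sender_id = u.id AND m.receiver_id = 1 AND m.read_at IS NULL) AS has_unread,
           MAX(m.created_at) AS last_message_at
    FROM user u
    JOIN message m
      ON (m.sender_id = u.id AND m.receiver_id = 1)
      OR (m.receiver_id = u.id AND m.sender_id = 1)
    GROUP BY u.id, u.name
    ORDER BY has_unread DESC, last_message_at DESC;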

    Read the article

  • Java Sorting "queue" list based on DateTime and Z Position (part of school project)

    - by Kuchinawa
    For a school project I have a list of 50k containers that arrive on a boat. These containers need to be sorted into a list in such a way that the earliest departure DateTimes are at the top, and the containers stacked above those come before them. This list then gets used by a crane that picks them up in order. I started out with 2 Collections.sort() calls, the first one to get them in the right XYZ order:

    Collections.sort(containers, new Comparator<ContainerData>() {
        @Override
        public int compare(ContainerData contData1, ContainerData contData2) {
            return positionSort(contData1.getLocation(), contData2.getLocation());
        }
    });

    Then another one to reorder by date while keeping the position in mind:

    Collections.sort(containers, new Comparator<ContainerData>() {
        @Override
        public int compare(ContainerData contData1, ContainerData contData2) {
            int c = contData1.getLeaveDateTimeFrom().compareTo(contData2.getLeaveDateTimeFrom());
            int p = positionSort2(contData1.getLocation(), contData2.getLocation());
            if(p != 0) c = p;
            return c;
        }
    });

    But I never got this method to work. What I have working now is rather quick and dirty and takes a long time to process (50 seconds for all 50k). First a sort on DateTime:

    Collections.sort(containers, new Comparator<ContainerData>() {
        @Override
        public int compare(ContainerData contData1, ContainerData contData2) {
            return contData1.getLeaveDateTimeFrom().compareTo(contData2.getLeaveDateTimeFrom());
        }
    });

    Then a correction function that bumps top containers up: containers = stackCorrection(containers);

    private static List<ContainerData> stackCorrection(List<ContainerData> sortedContainerList) {
        for(int i = 0; i < sortedContainerList.size(); i++) {
            ContainerData current = sortedContainerList.get(i);
            // 5 = Max Stack (0 index)
            if(current.getLocation().getZ() < 5) {
                //Loop through possible containers above current
                for(int j = 5; j > current.getLocation().getZ(); --j) {
                    //Search for container above
                    for(int k = i + 1; k < sortedContainerList.size(); ++k)
                        if(sortedContainerList.get(k).getLocation().getX() == current.getLocation().getX()) {
                            if(sortedContainerList.get(k).getLocation().getY() == current.getLocation().getY()) {
                                if(sortedContainerList.get(k).getLocation().getZ() == j) {
                                    //Found -> move container above current
                                    sortedContainerList.add(i, sortedContainerList.remove(k));
                                    k = sortedContainerList.size();
                                    i++;
                                }
                            }
                        }
                }
            }
        }
        return sortedContainerList;
    }

    I would like to implement this in a better/faster way, so any hints are appreciated. :)
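
    A hedged sketch of one faster scheme (roughly O(n log n) instead of the repeated linear scans): group the containers into per-column stacks keyed by (x, y) with the highest z on top, then walk the containers in departure-date order and, for each one not yet emitted, pop its column down to and including it. It reuses the question's ContainerData/Location getters, assumes getZ() returns an int and getLeaveDateTimeFrom() something Comparable, and targets Java 8+.

    import java.util.*;

    public class CraneOrder {

        private static String columnKey(ContainerData c) {
            return c.getLocation().getX() + ":" + c.getLocation().getY();
        }

        public static List<ContainerData> order(List<ContainerData> containers) {
            // Build one stack per (x, y) column, highest z first.
            List<ContainerData> byHeight = new ArrayList<>(containers);
            byHeight.sort(Comparator.comparingInt((ContainerData c) -> c.getLocation().getZ()).reversed());
            Map<String, Deque<ContainerData>> columns = new HashMap<>();
            for (ContainerData c : byHeight) {
                columns.computeIfAbsent(columnKey(c), k -> new ArrayDeque<>()).addLast(c);
            }

            // Walk the containers by departure date; emit everything above each one first.
            List<ContainerData> byDate = new ArrayList<>(containers);
            byDate.sort(Comparator.comparing(ContainerData::getLeaveDateTimeFrom));

            List<ContainerData> result = new ArrayList<>(containers.size());
            Set<ContainerData> emitted = Collections.newSetFromMap(new IdentityHashMap<>());
            for (ContainerData c : byDate) {
                if (emitted.contains(c)) continue;        // already taken as someone's "above"
                Deque<ContainerData> stack = columns.get(columnKey(c));
                ContainerData top;
                do {
                    top = stack.pollFirst();              // highest remaining container in the column
                    emitted.add(top);
                    result.add(top);
                } while (top != c);
            }
            return result;
        }
    }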

    Read the article

  • Unable to install Xdebug

    - by burnt1ce
    I've registered xdebug in php.ini (as per http://xdebug.org/docs/install) but it's not showing up when i run "php -m" or when i get a test page to run "phpinfo()". I've just installed the latest version of XAMPP. Can anyone provide any suggestions in getting xdebug to show up? This is what i get when i run phpinfo(). **PHP Version 5.3.1** System Windows NT ANDREW_LAPTOP 5.1 build 2600 (Windows XP Professional Service Pack 3) i586 Build Date Nov 20 2009 17:20:57 Compiler MSVC6 (Visual C++ 6.0) Architecture x86 Configure Command cscript /nologo configure.js "--enable-snapshot-build" Server API Apache 2.0 Handler Virtual Directory Support enabled Configuration File (php.ini) Path no value Loaded Configuration File C:\xampp\php\php.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) PHP API 20090626 PHP Extension 20090626 Zend Extension 220090626 Zend Extension Build API220090626,TS,VC6 PHP Extension Build API20090626,TS,VC6 Debug Build no Thread Safety enabled Zend Memory Manager enabled Zend Multibyte Support disabled IPv6 Support enabled Registered PHP Streams https, ftps, php, file, glob, data, http, ftp, compress.zlib, compress.bzip2, phar, zip Registered Stream Socket Transports tcp, udp, ssl, sslv3, sslv2, tls Registered Stream Filters convert.iconv.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, zlib.*, bzip2.* This program makes use of the Zend Scripting Language Engine: Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies PHP Credits Configuration apache2handler Apache Version Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 Apache API Version 20051115 Server Administrator postmaster@localhost Hostname:Port localhost:80 Max Requests Per Child: 0 - Keep Alive: on - Max Per Connection: 100 Timeouts Connection: 300 - Keep-Alive: 5 Virtual Server No Server Root C:/xampp/apache Loaded Modules core mod_win32 mpm_winnt http_core mod_so mod_actions mod_alias mod_asis mod_auth_basic mod_auth_digest mod_authn_default mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_cgi mod_dav mod_dav_fs mod_dav_lock mod_dir mod_env mod_headers mod_include mod_info mod_isapi mod_log_config mod_mime mod_negotiation mod_rewrite mod_setenvif mod_ssl mod_status mod_autoindex_color mod_php5 mod_perl mod_apreq2 Directive Local Value Master Value engine 1 1 last_modified 0 0 xbithack 0 0 Apache Environment Variable Value MIBDIRS C:/xampp/php/extras/mibs MYSQL_HOME C:\xampp\mysql\bin OPENSSL_CONF C:/xampp/apache/bin/openssl.cnf PHP_PEAR_SYSCONF_DIR C:\xampp\php PHPRC C:\xampp\php TMP C:\xampp\tmp HTTP_HOST localhost HTTP_CONNECTION keep-alive HTTP_USER_AGENT Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.8 Safari/533.2 HTTP_CACHE_CONTROL max-age=0 HTTP_ACCEPT application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 HTTP_ACCEPT_ENCODING gzip,deflate,sdch HTTP_ACCEPT_LANGUAGE en-US,en;q=0.8 HTTP_ACCEPT_CHARSET ISO-8859-1,utf-8;q=0.7,*;q=0.3 PATH C:\Documents and Settings\Andrew\Local Settings\Application Data\Google\Chrome\Application;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Common Files\DivX Shared\;C:\Program Files\WiTopia.Net\bin 
SystemRoot C:\WINDOWS COMSPEC C:\WINDOWS\system32\cmd.exe PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH WINDIR C:\WINDOWS SERVER_SIGNATURE <address>Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 Server at localhost Port 80</address> SERVER_SOFTWARE Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 SERVER_NAME localhost SERVER_ADDR 127.0.0.1 SERVER_PORT 80 REMOTE_ADDR 127.0.0.1 DOCUMENT_ROOT C:/xampp/htdocs SERVER_ADMIN postmaster@localhost SCRIPT_FILENAME C:/xampp/htdocs/test.php REMOTE_PORT 3275 GATEWAY_INTERFACE CGI/1.1 SERVER_PROTOCOL HTTP/1.1 REQUEST_METHOD GET QUERY_STRING no value REQUEST_URI /test.php SCRIPT_NAME /test.php HTTP Headers Information HTTP Request Headers HTTP Request GET /test.php HTTP/1.1 Host localhost Connection keep-alive User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.8 Safari/533.2 Cache-Control max-age=0 Accept application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding gzip,deflate,sdch Accept-Language en-US,en;q=0.8 Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.3 HTTP Response Headers X-Powered-By PHP/5.3.1 Keep-Alive timeout=5, max=80 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html bcmath BCMath support enabled Directive Local Value Master Value bcmath.scale 0 0 bz2 BZip2 Support Enabled Stream Wrapper support compress.bz2:// Stream Filter support bzip2.decompress, bzip2.compress BZip2 Version 1.0.5, 10-Dec-2007 calendar Calendar support enabled com_dotnet COM support enabled DCOM support disabled .Net support enabled Directive Local Value Master Value com.allow_dcom 0 0 com.autoregister_casesensitive 1 1 com.autoregister_typelib 0 0 com.autoregister_verbose 0 0 com.code_page no value no value com.typelib_file no value no value Core PHP Version 5.3.1 Directive Local Value Master Value allow_call_time_pass_reference On On allow_url_fopen On On allow_url_include Off Off always_populate_raw_post_data Off Off arg_separator.input & & arg_separator.output &amp; &amp; asp_tags Off Off auto_append_file no value no value auto_globals_jit On On auto_prepend_file no value no value browscap C:\xampp\php\extras\browscap.ini C:\xampp\php\extras\browscap.ini default_charset no value no value default_mimetype text/html text/html define_syslog_variables Off Off disable_classes no value no value disable_functions no value no value display_errors On On display_startup_errors On On doc_root no value no value docref_ext no value no value docref_root no value no value enable_dl On On error_append_string no value no value error_log no value no value error_prepend_string no value no value error_reporting 22519 22519 exit_on_timeout Off Off expose_php On On extension_dir C:\xampp\php\ext C:\xampp\php\ext file_uploads On On highlight.bg #FFFFFF #FFFFFF highlight.comment #FF8000 #FF8000 highlight.default #0000BB #0000BB highlight.html #000000 #000000 highlight.keyword #007700 #007700 highlight.string #DD0000 #DD0000 html_errors On On ignore_repeated_errors Off Off ignore_repeated_source Off Off ignore_user_abort Off Off implicit_flush Off Off include_path .;C:\xampp\php\PEAR .;C:\xampp\php\PEAR log_errors Off Off log_errors_max_len 1024 1024 magic_quotes_gpc Off Off magic_quotes_runtime Off Off magic_quotes_sybase Off Off mail.add_x_header Off Off 
mail.force_extra_parameters no value no value mail.log no value no value max_execution_time 60 60 max_file_uploads 20 20 max_input_nesting_level 64 64 max_input_time 60 60 memory_limit 128M 128M open_basedir no value no value output_buffering no value no value output_handler no value no value post_max_size 128M 128M precision 14 14 realpath_cache_size 16K 16K realpath_cache_ttl 120 120 register_argc_argv On On register_globals Off Off register_long_arrays Off Off report_memleaks On On report_zend_debug On On request_order no value no value safe_mode Off Off safe_mode_exec_dir no value no value safe_mode_gid Off Off safe_mode_include_dir no value no value sendmail_from no value no value sendmail_path no value no value serialize_precision 100 100 short_open_tag Off Off SMTP localhost localhost smtp_port 25 25 sql.safe_mode Off Off track_errors Off Off unserialize_callback_func no value no value upload_max_filesize 128M 128M upload_tmp_dir C:\xampp\tmp C:\xampp\tmp user_dir no value no value user_ini.cache_ttl 300 300 user_ini.filename .user.ini .user.ini variables_order GPCS GPCS xmlrpc_error_number 0 0 xmlrpc_errors Off Off y2k_compliance On On zend.enable_gc On On ctype ctype functions enabled date date/time support enabled "Olson" Timezone Database Version 2009.18 Timezone Database internal Default timezone America/New_York Directive Local Value Master Value date.default_latitude 31.7667 31.7667 date.default_longitude 35.2333 35.2333 date.sunrise_zenith 90.583333 90.583333 date.sunset_zenith 90.583333 90.583333 date.timezone America/New_York America/New_York dom DOM/XML enabled DOM/XML API Version 20031129 libxml Version 2.7.6 HTML Support enabled XPath Support enabled XPointer Support enabled Schema Support enabled RelaxNG Support enabled ereg Regex Library System library enabled exif EXIF Support enabled EXIF Version 1.4 $Id: exif.c 287372 2009-08-16 14:32:32Z iliaa $ Supported EXIF Version 0220 Supported filetypes JPEG,TIFF Directive Local Value Master Value exif.decode_jis_intel JIS JIS exif.decode_jis_motorola JIS JIS exif.decode_unicode_intel UCS-2LE UCS-2LE exif.decode_unicode_motorola UCS-2BE UCS-2BE exif.encode_jis no value no value exif.encode_unicode ISO-8859-15 ISO-8859-15 fileinfo fileinfo support enabled version 1.0.5-dev filter Input Validation and Filtering enabled Revision $Revision: 289434 $ Directive Local Value Master Value filter.default unsafe_raw unsafe_raw filter.default_flags no value no value ftp FTP support enabled gd GD Support enabled GD Version bundled (2.0.34 compatible) FreeType Support enabled FreeType Linkage with freetype FreeType Version 2.3.11 T1Lib Support enabled GIF Read Support enabled GIF Create Support enabled JPEG Support enabled libJPEG Version 7 PNG Support enabled libPNG Version 1.2.40 WBMP Support enabled XBM Support enabled JIS-mapped Japanese Font Support enabled Directive Local Value Master Value gd.jpeg_ignore_warning 0 0 gettext GetText Support enabled hash hash support enabled Hashing Engines md2 md4 md5 sha1 sha224 sha256 sha384 sha512 ripemd128 ripemd160 ripemd256 ripemd320 whirlpool tiger128,3 tiger160,3 tiger192,3 tiger128,4 tiger160,4 tiger192,4 snefru snefru256 gost adler32 crc32 crc32b salsa10 salsa20 haval128,3 haval160,3 haval192,3 haval224,3 haval256,3 haval128,4 haval160,4 haval192,4 haval224,4 haval256,4 haval128,5 haval160,5 haval192,5 haval224,5 haval256,5 iconv iconv support enabled iconv implementation "libiconv" iconv library version 1.13 Directive Local Value Master Value iconv.input_encoding ISO-8859-1 ISO-8859-1 
iconv.internal_encoding ISO-8859-1 ISO-8859-1 iconv.output_encoding ISO-8859-1 ISO-8859-1 imap IMAP c-Client Version 2007e SSL Support enabled json json support enabled json version 1.2.1 libxml libXML support active libXML Compiled Version 2.7.6 libXML Loaded Version 20706 libXML streams enabled mbstring Multibyte Support enabled Multibyte string engine libmbfl HTTP input encoding translation disabled mbstring extension makes use of "streamable kanji code filter and converter", which is distributed under the GNU Lesser General Public License version 2.1. Multibyte (japanese) regex support enabled Multibyte regex (oniguruma) version 4.7.1 Directive Local Value Master Value mbstring.detect_order no value no value mbstring.encoding_translation Off Off mbstring.func_overload 0 0 mbstring.http_input pass pass mbstring.http_output pass pass mbstring.http_output_conv_mimetypes ^(text/|application/xhtml\+xml) ^(text/|application/xhtml\+xml) mbstring.internal_encoding no value no value mbstring.language neutral neutral mbstring.strict_detection Off Off mbstring.substitute_character no value no value mcrypt mcrypt support enabled Version 2.5.8 Api No 20021217 Supported ciphers cast-128 gost rijndael-128 twofish arcfour cast-256 loki97 rijndael-192 saferplus wake blowfish-compat des rijndael-256 serpent xtea blowfish enigma rc2 tripledes Supported modes cbc cfb ctr ecb ncfb nofb ofb stream Directive Local Value Master Value mcrypt.algorithms_dir no value no value mcrypt.modes_dir no value no value mhash MHASH support Enabled MHASH API Version Emulated Support ming Ming SWF output library enabled Version 0.4.3 mysql MySQL Support enabled Active Persistent Links 0 Active Links 0 Client API version 5.1.41 Directive Local Value Master Value mysql.allow_local_infile On On mysql.allow_persistent On On mysql.connect_timeout 60 60 mysql.default_host no value no value mysql.default_password no value no value mysql.default_port 3306 3306 mysql.default_socket MySQL MySQL mysql.default_user no value no value mysql.max_links Unlimited Unlimited mysql.max_persistent Unlimited Unlimited mysql.trace_mode Off Off mysqli MysqlI Support enabled Client API library version 5.1.41 Active Persistent Links 0 Inactive Persistent Links 0 Active Links 0 Client API header version 5.1.41 MYSQLI_SOCKET MySQL Directive Local Value Master Value mysqli.allow_local_infile On On mysqli.allow_persistent On On mysqli.default_host no value no value mysqli.default_port 3306 3306 mysqli.default_pw no value no value mysqli.default_socket MySQL MySQL mysqli.default_user no value no value mysqli.max_links Unlimited Unlimited mysqli.max_persistent Unlimited Unlimited mysqli.reconnect Off Off mysqlnd mysqlnd enabled Version mysqlnd 5.0.5-dev - 081106 - $Revision: 289630 $ Command buffer size 4096 Read buffer size 32768 Read timeout 31536000 Collecting statistics Yes Collecting memory statistics No Client statistics bytes_sent 0 bytes_received 0 packets_sent 0 packets_received 0 protocol_overhead_in 0 protocol_overhead_out 0 bytes_received_ok_packet 0 bytes_received_eof_packet 0 bytes_received_rset_header_packet 0 bytes_received_rset_field_meta_packet 0 bytes_received_rset_row_packet 0 bytes_received_prepare_response_packet 0 bytes_received_change_user_packet 0 packets_sent_command 0 packets_received_ok 0 packets_received_eof 0 packets_received_rset_header 0 packets_received_rset_field_meta 0 packets_received_rset_row 0 packets_received_prepare_response 0 packets_received_change_user 0 result_set_queries 0 non_result_set_queries 0 no_index_used 
0 bad_index_used 0 slow_queries 0 buffered_sets 0 unbuffered_sets 0 ps_buffered_sets 0 ps_unbuffered_sets 0 flushed_normal_sets 0 flushed_ps_sets 0 ps_prepared_never_executed 0 ps_prepared_once_executed 0 rows_fetched_from_server_normal 0 rows_fetched_from_server_ps 0 rows_buffered_from_client_normal 0 rows_buffered_from_client_ps 0 rows_fetched_from_client_normal_buffered 0 rows_fetched_from_client_normal_unbuffered 0 rows_fetched_from_client_ps_buffered 0 rows_fetched_from_client_ps_unbuffered 0 rows_fetched_from_client_ps_cursor 0 rows_skipped_normal 0 rows_skipped_ps 0 copy_on_write_saved 0 copy_on_write_performed 0 command_buffer_too_small 0 connect_success 0 connect_failure 0 connection_reused 0 reconnect 0 pconnect_success 0 active_connections 0 active_persistent_connections 0 explicit_close 0 implicit_close 0 disconnect_close 0 in_middle_of_command_close 0 explicit_free_result 0 implicit_free_result 0 explicit_stmt_close 0 implicit_stmt_close 0 mem_emalloc_count 0 mem_emalloc_ammount 0 mem_ecalloc_count 0 mem_ecalloc_ammount 0 mem_erealloc_count 0 mem_erealloc_ammount 0 mem_efree_count 0 mem_malloc_count 0 mem_malloc_ammount 0 mem_calloc_count 0 mem_calloc_ammount 0 mem_realloc_count 0 mem_realloc_ammount 0 mem_free_count 0 proto_text_fetched_null 0 proto_text_fetched_bit 0 proto_text_fetched_tinyint 0 proto_text_fetched_short 0 proto_text_fetched_int24 0 proto_text_fetched_int 0 proto_text_fetched_bigint 0 proto_text_fetched_decimal 0 proto_text_fetched_float 0 proto_text_fetched_double 0 proto_text_fetched_date 0 proto_text_fetched_year 0 proto_text_fetched_time 0 proto_text_fetched_datetime 0 proto_text_fetched_timestamp 0 proto_text_fetched_string 0 proto_text_fetched_blob 0 proto_text_fetched_enum 0 proto_text_fetched_set 0 proto_text_fetched_geometry 0 proto_text_fetched_other 0 proto_binary_fetched_null 0 proto_binary_fetched_bit 0 proto_binary_fetched_tinyint 0 proto_binary_fetched_short 0 proto_binary_fetched_int24 0 proto_binary_fetched_int 0 proto_binary_fetched_bigint 0 proto_binary_fetched_decimal 0 proto_binary_fetched_float 0 proto_binary_fetched_double 0 proto_binary_fetched_date 0 proto_binary_fetched_year 0 proto_binary_fetched_time 0 proto_binary_fetched_datetime 0 proto_binary_fetched_timestamp 0 proto_binary_fetched_string 0 proto_binary_fetched_blob 0 proto_binary_fetched_enum 0 proto_binary_fetched_set 0 proto_binary_fetched_geometry 0 proto_binary_fetched_other 0 init_command_executed_count 0 init_command_failed_count 0 odbc ODBC Support enabled Active Persistent Links 0 Active Links 0 ODBC library Win32 Directive Local Value Master Value odbc.allow_persistent On On odbc.check_persistent On On odbc.default_cursortype Static cursor Static cursor odbc.default_db no value no value odbc.default_pw no value no value odbc.default_user no value no value odbc.defaultbinmode return as is return as is odbc.defaultlrl return up to 4096 bytes return up to 4096 bytes odbc.max_links Unlimited Unlimited odbc.max_persistent Unlimited Unlimited openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8l 5 Nov 2009 OpenSSL Header Version OpenSSL 0.9.8l 5 Nov 2009 pcre PCRE (Perl Compatible Regular Expressions) Support enabled PCRE Library Version 8.00 2009-10-19 Directive Local Value Master Value pcre.backtrack_limit 100000 100000 pcre.recursion_limit 100000 100000 pdf PDF Support enabled PDFlib GmbH Version 7.0.4p4 PECL Version 2.1.6 Revision $Revision: 277110 $ PDO PDO support enabled PDO drivers mysql, odbc, sqlite, sqlite2 pdo_mysql PDO Driver for MySQL enabled 
Client API version 5.1.41 PDO_ODBC PDO Driver for ODBC (Win32) enabled ODBC Connection Pooling Enabled, strict matching pdo_sqlite PDO Driver for SQLite 3.x enabled SQLite Library 3.6.20 Phar Phar: PHP Archive support enabled Phar EXT version 2.0.1 Phar API version 1.1.1 CVS revision $Revision: 286338 $ Phar-based phar archives enabled Tar-based phar archives enabled ZIP-based phar archives enabled gzip compression enabled bzip2 compression enabled Native OpenSSL support enabled Phar based on pear/PHP_Archive, original concept by Davey Shafik. Phar fully realized by Gregory Beaver and Marcus Boerger. Portions of tar implementation Copyright (c) 2003-2009 Tim Kientzle. Directive Local Value Master Value phar.cache_list no value no value phar.readonly On On phar.require_hash On On Reflection Reflection enabled Version $Revision: 287991 $ session Session Support enabled Registered save handlers files user sqlite Registered serializer handlers php php_binary wddx Directive Local Value Master Value session.auto_start Off Off session.bug_compat_42 On On session.bug_compat_warn On On session.cache_expire 180 180 session.cache_limiter nocache nocache session.cookie_domain no value no value session.cookie_httponly Off Off session.cookie_lifetime 0 0 session.cookie_path / / session.cookie_secure Off Off session.entropy_file no value no value session.entropy_length 0 0 session.gc_divisor 100 100 session.gc_maxlifetime 1440 1440 session.gc_probability 1 1 session.hash_bits_per_character 5 5 session.hash_function 0 0 session.name PHPSESSID PHPSESSID session.referer_check no value no value session.save_handler files files session.save_path C:\xampp\tmp C:\xampp\tmp session.serialize_handler php php session.use_cookies On On session.use_only_cookies Off Off session.use_trans_sid 0 0 SimpleXML Simplexml support enabled Revision $Revision: 281953 $ Schema support enabled soap Soap Client enabled Soap Server enabled Directive Local Value Master Value soap.wsdl_cache 1 1 soap.wsdl_cache_dir /tmp /tmp soap.wsdl_cache_enabled 1 1 soap.wsdl_cache_limit 5 5 soap.wsdl_cache_ttl 86400 86400 sockets Sockets Support enabled SPL SPL support enabled Interfaces Countable, OuterIterator, RecursiveIterator, SeekableIterator, SplObserver, SplSubject Classes AppendIterator, ArrayIterator, ArrayObject, BadFunctionCallException, BadMethodCallException, CachingIterator, DirectoryIterator, DomainException, EmptyIterator, FilesystemIterator, FilterIterator, GlobIterator, InfiniteIterator, InvalidArgumentException, IteratorIterator, LengthException, LimitIterator, LogicException, MultipleIterator, NoRewindIterator, OutOfBoundsException, OutOfRangeException, OverflowException, ParentIterator, RangeException, RecursiveArrayIterator, RecursiveCachingIterator, RecursiveDirectoryIterator, RecursiveFilterIterator, RecursiveIteratorIterator, RecursiveRegexIterator, RecursiveTreeIterator, RegexIterator, RuntimeException, SplDoublyLinkedList, SplFileInfo, SplFileObject, SplFixedArray, SplHeap, SplMinHeap, SplMaxHeap, SplObjectStorage, SplPriorityQueue, SplQueue, SplStack, SplTempFileObject, UnderflowException, UnexpectedValueException SQLite SQLite support enabled PECL Module version 2.0-dev $Id: sqlite.c 289598 2009-10-12 22:37:52Z pajoye $ SQLite Library 2.8.17 SQLite Encoding iso8859 Directive Local Value Master Value sqlite.assoc_case 0 0 sqlite3 SQLite3 support enabled SQLite3 module version 0.7-dev SQLite Library 3.6.20 Directive Local Value Master Value sqlite3.extension_dir no value no value standard Dynamic Library Support 
enabled Internal Sendmail Support for Windows enabled Directive Local Value Master Value assert.active 1 1 assert.bail 0 0 assert.callback no value no value assert.quiet_eval 0 0 assert.warning 1 1 auto_detect_line_endings 0 0 default_socket_timeout 60 60 safe_mode_allowed_env_vars PHP_ PHP_ safe_mode_protected_env_vars LD_LIBRARY_PATH LD_LIBRARY_PATH url_rewriter.tags a=href,area=href,frame=src,input=src,form=,fieldset= a=href,area=href,frame=src,input=src,form=,fieldset= user_agent no value no value tokenizer Tokenizer Support enabled wddx WDDX Support enabled WDDX Session Serializer enabled xml XML Support active XML Namespace Support active libxml2 Version 2.7.6 xmlreader XMLReader enabled xmlrpc core library version xmlrpc-epi v. 0.54 php extension version 0.51 author Dan Libby homepage http://xmlrpc-epi.sourceforge.net open sourced by Epinions.com xmlwriter XMLWriter enabled xsl XSL enabled libxslt Version 1.1.26 libxslt compiled against libxml Version 2.7.6 EXSLT enabled libexslt Version 1.1.26 zip Zip enabled Extension Version $Id: php_zip.c 276389 2009-02-24 23:55:14Z iliaa $ Zip version 1.9.1 Libzip version 0.9.0 zlib ZLib Support enabled Stream Wrapper support compress.zlib:// Stream Filter support zlib.inflate, zlib.deflate Compiled Version 1.2.3 Linked Version 1.2.3 Directive Local Value Master Value zlib.output_compression Off Off zlib.output_compression_level -1 -1 zlib.output_handler no value no value Additional Modules Module Name Environment Variable Value no value ::=::\ no value C:=C:\xampp ALLUSERSPROFILE C:\Documents and Settings\All Users APPDATA C:\Documents and Settings\Andrew\Application Data CHROME_RESTART Google Chrome|Whoa! Google Chrome has crashed. Restart now?|LEFT_TO_RIGHT CHROME_VERSION 5.0.342.8 CLASSPATH .;C:\Program Files\QuickTime\QTSystem\QTJava.zip CommonProgramFiles C:\Program Files\Common Files COMPUTERNAME ANDREW_LAPTOP ComSpec C:\WINDOWS\system32\cmd.exe FP_NO_HOST_CHECK NO HOMEDRIVE C: HOMEPATH \Documents and Settings\Andrew LOGONSERVER \\ANDREW_LAPTOP NUMBER_OF_PROCESSORS 2 OS Windows_NT PATH C:\Documents and Settings\Andrew\Local Settings\Application Data\Google\Chrome\Application;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Common Files\DivX Shared\;C:\Program Files\WiTopia.Net\bin PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH PROCESSOR_ARCHITECTURE x86 PROCESSOR_IDENTIFIER x86 Family 6 Model 15 Stepping 10, GenuineIntel PROCESSOR_LEVEL 6 PROCESSOR_REVISION 0f0a ProgramFiles C:\Program Files PROMPT $P$G QTJAVA C:\Program Files\QuickTime\QTSystem\QTJava.zip SESSIONNAME Console sfxcmd "C:\Documents and Settings\Andrew\My Documents\Downloads\xampp-win32-1.7.3.exe" sfxname C:\Documents and Settings\Andrew\My Documents\Downloads\xampp-win32-1.7.3.exe SystemDrive C: SystemRoot C:\WINDOWS TEMP C:\DOCUME~1\Andrew\LOCALS~1\Temp TMP C:\DOCUME~1\Andrew\LOCALS~1\Temp USERDOMAIN ANDREW_LAPTOP USERNAME Andrew USERPROFILE C:\Documents and Settings\Andrew VS100COMNTOOLS C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools\ windir C:\WINDOWS AP_PARENT_PID 2216 PHP Variables Variable Value _SERVER["MIBDIRS"] C:/xampp/php/extras/mibs _SERVER["MYSQL_HOME"] C:\xampp\mysql\bin _SERVER["OPENSSL_CONF"] C:/xampp/apache/bin/openssl.cnf _SERVER["PHP_PEAR_SYSCONF_DIR"] C:\xampp\php _SERVER["PHPRC"] C:\xampp\php _SERVER["TMP"] C:\xampp\tmp _SERVER["HTTP_HOST"] localhost 
_SERVER["HTTP_CONNECTION"] keep-alive _SERVER["HTTP_USER_AGENT"] Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.8 Safari/533.2 _SERVER["HTTP_CACHE_CONTROL"] max-age=0 _SERVER["HTTP_ACCEPT"] application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 _SERVER["HTTP_ACCEPT_ENCODING"] gzip,deflate,sdch _SERVER["HTTP_ACCEPT_LANGUAGE"] en-US,en;q=0.8 _SERVER["HTTP_ACCEPT_CHARSET"] ISO-8859-1,utf-8;q=0.7,*;q=0.3 _SERVER["PATH"] C:\Documents and Settings\Andrew\Local Settings\Application Data\Google\Chrome\Application;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Common Files\DivX Shared\;C:\Program Files\WiTopia.Net\bin _SERVER["SystemRoot"] C:\WINDOWS _SERVER["COMSPEC"] C:\WINDOWS\system32\cmd.exe _SERVER["PATHEXT"] .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH _SERVER["WINDIR"] C:\WINDOWS _SERVER["SERVER_SIGNATURE"] <address>Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 Server at localhost Port 80</address> _SERVER["SERVER_SOFTWARE"] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 _SERVER["SERVER_NAME"] localhost _SERVER["SERVER_ADDR"] 127.0.0.1 _SERVER["SERVER_PORT"] 80 _SERVER["REMOTE_ADDR"] 127.0.0.1 _SERVER["DOCUMENT_ROOT"] C:/xampp/htdocs _SERVER["SERVER_ADMIN"] postmaster@localhost _SERVER["SCRIPT_FILENAME"] C:/xampp/htdocs/test.php _SERVER["REMOTE_PORT"] 3275 _SERVER["GATEWAY_INTERFACE"] CGI/1.1 _SERVER["SERVER_PROTOCOL"] HTTP/1.1 _SERVER["REQUEST_METHOD"] GET _SERVER["QUERY_STRING"] no value _SERVER["REQUEST_URI"] /test.php _SERVER["SCRIPT_NAME"] /test.php _SERVER["PHP_SELF"] /test.php _SERVER["REQUEST_TIME"] 1270600868 _SERVER["argv"] Array ( ) _SERVER["argc"] 0 PHP License This program is free software; you can redistribute it and/or modify it under the terms of the PHP License as published by the PHP Group and included in the distribution in the file: LICENSE This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. If you did not receive a copy of the PHP license, or have any questions about PHP licensing, please contact [email protected].

    Read the article

  • Xobni Free Powers Up Outlook’s Search and Contacts

    - by Matthew Guay
    Want to find out more about your contacts, discover email trends, and even sync Yahoo! email accounts in Outlook?  Here’s how you can do this and more with Xobni Free. Email is one of the most important communications mediums today, but even with all of the advances in Outlook over the years it can still be difficult to keep track of conversations, files, and contacts.  Xobni makes it easy by indexing your emails and organizing them by sender.  You can use its powerful search to quickly find any email, find related messages, and then view more information about that contact with information from social networks.  And, to top it off, it even lets you view your Yahoo! emails directly in Outlook without upgrading to a Yahoo! Plus account.  Xobni runs in Outlook 2003, 2007, and 2010, including the 64 bit version of Outlook 2010, and users of older versions will especially enjoy the new features Xobni brings for free. Getting started Download the Xobni Free installer (link below), and run to start the installation.  Make sure to exit Outlook before installing.  Xobni may need to download additional files which may take a few moments. When the download is finished, proceed with the install as normal.  You can opt out of the Product Improvement Program at the end of the installation by unchecking the box.  Additionally, you are asked to share Xobni with your friends on social networks, but this is not required.   Next time you open Outlook, you’ll notice the new Xobni sidebar in Outlook.  You can choose to watch an introduction video that will help you quickly get up to speed on how Xobni works. While this is playing, Xobni is working at indexing your email in the background.  Once the first indexing is finished, click Let’s Go! to start using Xobni. Here’s how Xobni looks in Outlook 2010: Advanced Email Information Select an email, and now you can see lots of info about it in your new Xobni sidebar.   On the top of the sidebar, select the graph icon to see when and how often you email with a contact.  Each contact is given an Xobni rank so you can quickly see who you email the most.   You can see all related emails sorted into conversations, and also all attachments in the conversation, not just this email. Xobni can also show you all scheduled appointments and links exchanged with a contact, but this is only available in the Plus version.  If you’d rather not see the tab for a feature you can’t use, click Don’t show this tab to banish it from Xobni for good.   Searching emails from the Xobni toolbar is very fast, and you can preview a message by simply hovering over it from the search pane. Get More Information About Your Contacts Xobni’s coolest feature is its social integration.  Whenever you select an email, you may see a brief bio, picture, and more, all pulled from social networks.   Select one of the tabs to find more information.  You may need to login to view information on your contacts from certain networks. The Twitter tab lets you see recent tweets.  Xobni will search for related Twitter accounts, and will ask you to confirm if the choice is correct.   Now you can see this contact’s recent Tweets directly from Outlook.   The Hoovers tab can give you interesting information about the businesses you’re in contact with. If the information isn’t correct, you can edit it and add your own information.  Click the Edit button, and the add any information you want.   You can also remove a network you don’t wish to see.  
Right-click on the network tabs, select Manage Extensions, and uncheck any you don’t want to see. But sometimes online contact just doesn’t cut it.  For these times, click on the orange folder button to request a contact’s phone number or schedule a time with them. This will open a new email message ready to send with the information you want.  Edit as you please, and send. Add Yahoo! Email to Outlook for Free One of Xobni’s neatest features is that it lets you add your Yahoo! email account to Outlook for free.  Click the gear icon in the bottom of the Xobni sidebar and select Options to set it up. Select the Integration tab, and click Enable to add Yahoo! mail to Xobni. Sign in with your Yahoo! account, and make sure to check the Keep me signed in box. Note that you may have to re-sign in every two weeks to keep your Yahoo! account connected.  Select I agree to finish setting it up. Xobni will now download and index your recent Yahoo! mail. Your Yahoo! messages will only show up in the Xobni sidebar.  Whenever you select a contact, you will see related messages from your Yahoo! account as well.  Or, you can search from the sidebar to find individual messages from your Yahoo! account.  Note the Y! logo beside Yahoo! messages.   Select a message to read it in the Sidebar.  You can open the email in Yahoo! in your browser, or reply to it using your default Outlook email account. If you have many older messages in your Yahoo! account, make sure to go back to the Integration tab and select Index Yahoo! Mail to index all of your emails. Conclusion Xobni is a great tool to help you get more out of your daily Outlook experience.  Whether you struggle to find attachments a coworker sent you or want to access Yahoo! email from Outlook, Xobni might be the perfect tool for you.  And with the extra things you learn about your contacts with the social network integration, you might boost your own PR skills without even trying! Link: Download Xobni

    Read the article

  • SSH new connection begins to hang (not reject or terminate) after a day or so on Ubuntu 13.04 server

    - by kross
    Recently we upgraded the server from 12.04 LTS server to 13.04. All was well, including after a reboot. With all packages updated we began to see a strange issue, ssh works for a day or so (unclear on timing) then a later request for SSH hangs (cannot ctrl+c, nothing). It is up and serving webserver traffic etc. Port 22 is open (ips etc altered slightly for posting): nmap -T4 -A x.acme.com Starting Nmap 6.40 ( http://nmap.org ) at 2013-09-12 16:01 CDT Nmap scan report for x.acme.com (69.137.56.18) Host is up (0.026s latency). rDNS record for 69.137.56.18: c-69-137-56-18.hsd1.tn.provider.net Not shown: 998 filtered ports PORT STATE SERVICE VERSION 22/tcp open ssh OpenSSH 6.1p1 Debian 4 (protocol 2.0) | ssh-hostkey: 1024 54:d3:e3:38:44:f4:20:a4:e7:42:49:d0:a7:f1:3e:21 (DSA) | 2048 dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 (RSA) |_256 45:69:10:79:5a:9f:0b:f0:66:15:39:87:b9:a1:37:f7 (ECDSA) 80/tcp open http Jetty 7.6.2.v20120308 | http-title: Log in as a Bamboo user - Atlassian Bamboo |_Requested resource was http://x.acme.com/userlogin!default.action;jsessionid=19v135zn8cl1tgso28fse4d50?os_destination=%2Fstart.action Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 12.89 seconds Here is the ssh -vvv: ssh -vvv x.acme.com OpenSSH_5.9p1, OpenSSL 0.9.8x 10 May 2012 debug1: Reading configuration data /Users/tfergeson/.ssh/config debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to x.acme.com [69.137.56.18] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/Users/tfergeson/.ssh/id_rsa" as a RSA1 public key debug1: identity file /Users/tfergeson/.ssh/id_rsa type 1 debug1: identity file /Users/tfergeson/.ssh/id_rsa-cert type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa type -1 debug1: identity file /Users/tfergeson/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4 debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 130/256 debug2: bits set: 503/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA dc:21:77:3b:f4:4e:74:d0:87:33:14:40:04:68:33:a6 debug3: load_hostkeys: loading entries for host "x.acme.com" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:10 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "69.137.56.18" from file "/Users/tfergeson/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /Users/tfergeson/.ssh/known_hosts:6 debug3: load_hostkeys: loaded 1 keys debug1: Host 'x.acme.com' is known and matches the RSA host key. 
debug1: Found key in /Users/tfergeson/.ssh/known_hosts:10 debug2: bits set: 493/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/tfergeson/.ssh/id_rsa (0x7ff189c1d7d0) debug2: key: /Users/tfergeson/.ssh/id_dsa (0x0) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/tfergeson/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug3: sign_and_send_pubkey: RSA 3c:e5:29:6c:9d:27:d1:7d:e8:09:a2:e8:8e:6e:af:6f debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to x.acme.com ([69.137.56.18]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: fd 3 setting TCP_NODELAY debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ATLAS_OPTS debug3: Ignored env rvm_bin_path debug3: Ignored env TERM_PROGRAM debug3: Ignored env GEM_HOME debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env CLICOLOR debug3: Ignored env IRBRC debug3: Ignored env TMPDIR debug3: Ignored env Apple_PubSub_Socket_Render debug3: Ignored env TERM_PROGRAM_VERSION debug3: Ignored env MY_RUBY_HOME debug3: Ignored env TERM_SESSION_ID debug3: Ignored env USER debug3: Ignored env COMMAND_MODE debug3: Ignored env rvm_path debug3: Ignored env COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/tfergeson/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET debug3: Ignored env JPDA_ADDRESS debug3: Ignored env APDK_HOME debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env Apple_Ubiquity_Message debug3: Ignored env __CF_USER_TEXT_ENCODING debug3: Ignored env rvm_sticky_flag debug3: Ignored env MAVEN_OPTS debug3: Ignored env LSCOLORS debug3: Ignored env rvm_prefix debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env JAVA_HOME debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env JPDA_TRANSPORT debug3: Ignored env rvm_version debug3: Ignored env M2_HOME debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env rvm_ruby_string debug3: Ignored env LOGNAME debug3: Ignored env M2_REPO debug3: Ignored env GEM_PATH debug3: Ignored env AWS_RDS_HOME debug3: Ignored env rvm_delete_flag debug3: Ignored env EC2_PRIVATE_KEY debug3: Ignored env RUBY_VERSION debug3: Ignored env SECURITYSESSIONID debug3: Ignored env EC2_CERT debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 I can hard reboot (only mac monitors at that location) and it will again be 
accessible. This now happens every single time. It is imperative that I get it sorted. The strange thing is that it behaves initially, then starts to hang after several hours. I perused the logs previously and nothing stood out. From the auth.log, I can see that it has allowed me in, but still I get nothing back on the client side: Sep 20 12:47:50 cbear sshd[25376]: Accepted publickey for tfergeson from 10.1.10.14 port 54631 ssh2 Sep 20 12:47:50 cbear sshd[25376]: pam_unix(sshd:session): session opened for user tfergeson by (uid=0) UPDATES: Still occurring even after setting UseDNS no and commenting out #session optional pam_mail.so standard noenv. This does not appear to be a network/DNS-related issue, as all services running on the machine are as responsive and accessible as ever, with the exception of sshd. Any thoughts on where to start?
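One way to narrow this down, offered here only as a rough diagnostic sketch (it assumes root access on the box and that a spare port such as 2222 is reachable through the firewall), is to run a second sshd in debug mode so the server side of the next hang can be watched directly:

    sudo /usr/sbin/sshd -d -p 2222          # throwaway foreground sshd with debug output on a spare port
    ssh -vvv -p 2222 tfergeson@x.acme.com   # from the client, once the main daemon starts hanging

If the debug instance keeps accepting logins while port 22 hangs, that would point at the state of the main daemon (PAM, DNS lookups, or a stuck child) rather than the network path.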

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!
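If you do try it, one quick way to put hard numbers behind the Execution Plans (a sketch, assuming the same SQLSkills sample database used above) is to wrap each variant in STATISTICS TIME and IO so you can compare CPU time, elapsed time and logical reads alongside the relative plan cost:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- run any one of the variants, for example the SELECT with grouping
    SELECT  p.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            COUNT(Payment_No) 'Number Of Payments' ,
            AVG(payment_amt) 'Average Payment' ,
            SUM(payment_amt) 'Total Paid'
    FROM    dbo.payment p
            JOIN dbo.member m ON m.member_no = p.member_no
    GROUP BY p.member_no , m.lastname , m.firstname
    HAVING  COUNT(Payment_No) > 1;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;

The Messages tab then reports reads and timings per statement, which is a useful sanity check on what the plan percentages suggest.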

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Lets look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees. 
You can check out the latest Memcached / InnoDB developments and benchmarks here You can learn how to configure the Memcached API for InnoDB here Memcached Implementation for MySQL Cluster Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process) 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar or take a look at the docs Don't hesitate to use the comments section below for any questions you may have 
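To make the InnoDB configuration described above a little more concrete, here is a minimal sketch based on the innodb_memcached_config.sql demo schema that ships with MySQL 5.6 (the script path, container mappings and the demo table are the defaults and will differ in a real deployment):

    -- load the demo schema (innodb_memcache metadata plus the test.demo_test container table)
    SOURCE /usr/share/mysql/innodb_memcached_config.sql;

    -- start the embedded memcached daemon inside mysqld
    INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so';

    -- after a memcached client runs:  set mykey 0 0 5   followed by the value  hello
    -- the key/value pair is immediately visible to SQL in the mapped InnoDB table
    SELECT name, db_schema, db_table, key_columns, value_columns FROM innodb_memcache.containers;
    SELECT c1, c2 FROM test.demo_test WHERE c1 = 'mykey';

The containers table is where key prefixes are mapped to your own tables and columns once you move past the demo schema.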

    Read the article

  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256mb RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m: total used free shared buffers cached Mem: 245 122 122 0 19 64 -/+ buffers/cache: 38 206 Swap: 511 0 511 Unless I'm reading this wrong, 122mb is being used and only 84mb of that is disk cache. Here are all processes I'm running sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r): %MEM %CPU RSS VSZ COMMAND 5.0 0.0 12648 633140 node /home/node/main/sites.js 1.5 0.0 3884 251736 /usr/sbin/console-kit-daemon --no-daemon 1.3 0.0 3328 77108 sshd: apeace [priv] 0.9 0.0 2344 19624 -bash 0.7 0.0 1776 23620 /sbin/init 0.6 0.0 1624 77108 sshd: apeace@pts/0 0.6 0.0 1544 9940 redis-server /etc/redis/redis.conf 0.6 0.0 1524 25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105 0.5 0.0 1324 119880 rsyslogd -c4 0.4 0.0 1084 49308 /usr/sbin/sshd 0.4 0.0 1028 44376 /usr/sbin/exim4 -bd -q30m 0.3 0.0 904 6876 ps -eo pmem,pcpu,rss,vsize,args 0.3 0.0 888 21124 cron 0.3 0.0 868 23472 dbus-daemon --system --fork 0.2 0.0 732 19624 -bash 0.2 0.0 628 6128 /sbin/getty -8 38400 tty1 0.2 0.0 628 16952 upstart-udev-bridge --daemon 0.2 0.0 564 16800 udevd --daemon 0.2 0.0 552 16796 udevd --daemon 0.2 0.0 548 16796 udevd --daemon 0.0 0.0 0 0 [xenwatch] 0.0 0.0 0 0 [xenbus] 0.0 0.0 0 0 [sync_supers] 0.0 0.0 0 0 [netns] 0.0 0.0 0 0 [migration/3] 0.0 0.0 0 0 [migration/2] 0.0 0.0 0 0 [migration/1] 0.0 0.0 0 0 [migration/0] 0.0 0.0 0 0 [kthreadd] 0.0 0.0 0 0 [kswapd0] 0.0 0.0 0 0 [kstriped] 0.0 0.0 0 0 [ksoftirqd/3] 0.0 0.0 0 0 [ksoftirqd/2] 0.0 0.0 0 0 [ksoftirqd/1] 0.0 0.0 0 0 [ksoftirqd/0] 0.0 0.0 0 0 [ksnapd] 0.0 0.0 0 0 [kseriod] 0.0 0.0 0 0 [kjournald] 0.0 0.0 0 0 [khvcd] 0.0 0.0 0 0 [khelper] 0.0 0.0 0 0 [kblockd/3] 0.0 0.0 0 0 [kblockd/2] 0.0 0.0 0 0 [kblockd/1] 0.0 0.0 0 0 [kblockd/0] 0.0 0.0 0 0 [flush-202:1] 0.0 0.0 0 0 [events/3] 0.0 0.0 0 0 [events/2] 0.0 0.0 0 0 [events/1] 0.0 0.0 0 0 [events/0] 0.0 0.0 0 0 [crypto/3] 0.0 0.0 0 0 [crypto/2] 0.0 0.0 0 0 [crypto/1] 0.0 0.0 0 0 [crypto/0] 0.0 0.0 0 0 [cpuset] 0.0 0.0 0 0 [bdi-default] 0.0 0.0 0 0 [async/mgr] 0.0 0.0 0 0 [aio/3] 0.0 0.0 0 0 [aio/2] 0.0 0.0 0 0 [aio/1] 0.0 0.0 0 0 [aio/0] Now, I know that ps is not the best for viewing process memory usage, but that's because it tends to report more memory than is actually being used...meaning no matter how you look at it all my processes combined shouldn't be using near 122mb, even if you account for the disk cache. What's more, memory usage is growing all the time. I've had to restart my server once a week, because once my 256mb fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit?! I'm new to server admin, so please if there's something obvious I'm overlooking point it out to me. 
Just for good measure, the output of cat /proc/meminfo: MemTotal: 251140 kB MemFree: 124604 kB Buffers: 20536 kB Cached: 66136 kB SwapCached: 0 kB Active: 65004 kB Inactive: 37576 kB Active(anon): 15932 kB Inactive(anon): 164 kB Active(file): 49072 kB Inactive(file): 37412 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 524284 kB SwapFree: 524284 kB Dirty: 8 kB Writeback: 0 kB AnonPages: 15916 kB Mapped: 10668 kB Shmem: 188 kB Slab: 18604 kB SReclaimable: 10088 kB SUnreclaim: 8516 kB KernelStack: 536 kB PageTables: 1444 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 649852 kB Committed_AS: 64224 kB VmallocTotal: 34359738367 kB VmallocUsed: 752 kB VmallocChunk: 34359737600 kB DirectMap4k: 262144 kB DirectMap2M: 0 kB EDIT: I had misinterpreted the meaning of free -m at first. But even so: the important thing is that my OS eventually begins to use swap memory if I don't restart my server, which disk caching wouldn't do. So where do I look to see what is using all this memory?
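A reasonable first step, sketched below for a stock Ubuntu box (run as root; the commands are illustrative, not a complete audit), is to compare the total resident set of userspace processes against what the kernel itself is holding, since slab and page-table growth never shows up in ps:

    # total resident memory claimed by all processes, in kB
    ps -eo rss --no-headers | awk '{ sum += $1 } END { print sum " kB total RSS" }'

    # top individual consumers; re-run periodically (or from cron) to catch slow growth
    ps -eo rss,vsize,comm --sort=-rss | head -n 10

    # kernel-side memory that ps never reports
    grep -E 'Slab|SReclaimable|SUnreclaim|PageTables' /proc/meminfo

If the RSS total stays flat between reboots while Slab or PageTables climbs, the growth is in the kernel rather than in any single process, and that changes where to look next.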

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
    Turns out now with .net 4.0 we can run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned if it passed. We obviously agree that if we can ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job - in-parallel, the time taken the finish the entire task will be much lower. Welcome to PLINQ. Let’s take some examples. I have the following method that uses our good ol’ LINQ. 1: private static void Linq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers =   from num in source 12: where IsDivisibleBy(num, 2) 13: select num; 14: 15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause. I called a method to add ‘delay’ to the execution of the query - to simulate a loooooooooong operation (will be easier to compare the results). 1: private static bool IsDivisibleBy(int number, int divisor) 2: { 3: // iterate over some database query 4: // to add time to the execution of this method; 5: // the TableB has around 10 records 6: for (int i = 0; i < 10; i++) 7: { 8: DataClasses1DataContext dataContext = new DataClasses1DataContext(); 9: var query = from b in dataContext.TableBs select b; 10: 11: foreach (var row in query) 12: { 13: // Do NOTHING (wish my job was like this) 14: } 15: } 16:  17: return number % divisor == 0; 18: } Now, let’s look at how to modify this to PLINQ. 1: private static void Plinq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source.AsParallel() 12: where IsDivisibleBy(num, 2) 13: select num; 14:  15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } That’s it, this is now in PLINQ format. Oh and if you haven’t found the difference, look line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be more simpler right? So this is going to improve the performance for us. Let’s test it. So in my Main method of the Console application that I’m working on, I make a call to both. 
1: static void Main(string[] args) 2: { 3: // set lower and upper limits 4: int lowerLimit = 1; 5: int upperLimit = 20; 6: // call the methods 7: Console.WriteLine("Calling Linq() method"); 8: Linq(lowerLimit, upperLimit); 9: 10: Console.WriteLine(); 11: Console.WriteLine("Calling Plinq() method"); 12: Plinq(lowerLimit, upperLimit); 13:  14: Console.ReadLine(); // just so I get enough time to read the output 15: } YMMV, but here are the results that I got:    It’s quite obvious from the above results that the Plinq() method is taking considerably less time than the Linq() version. I’m sure you’ve already noticed that the output of the Plinq() method is not in order. That’s because, each of the ‘control’s we sent to fetch the results, reported with values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the collection cannot be guaranteed to be undisturbed. This could be counted as a negative about PLINQ (emphasize ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.net 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with a better performance using PLINQ. And there’s another negative of PLINQ (depending on how you see it). This is regarding the CPU cycles. See the usage for Linq() method (used ResourceMonitor): I have dual CPU’s and see the height of the peak in the bottom two blocks and now compare to what happens when I run the Plinq() method. The difference is obvious. Higher usage, but for a shorter duration (width of the peak). Both these points make sense in both cases. Linq() runs for a longer time, but uses less resources whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all these, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings LINQ)
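One footnote on the ordering caveat above: before reaching for a SortedSet or a custom sorter, it is worth knowing that PLINQ can preserve the source order itself via AsOrdered(), at the cost of some of the parallel speedup. A minimal sketch, reusing the source collection and IsDivisibleBy() from the Plinq() method shown earlier:

    // same query as in Plinq(), but results are returned in source order
    var evenNumbers = from num in source.AsParallel().AsOrdered()
                      where IsDivisibleBy(num, 2)
                      select num;

Whether the ordered version is still faster than plain LINQ depends on the workload, so it is worth timing both with the same Stopwatch approach used above.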

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post. So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes. So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision. So – what I’m going to tell you is that a Lookup is a join. When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 get the following plan: I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail as sod on sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285; Amazingly similar, of course. This one is an explicit join, the first example was just as much a join, even thought you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query: SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522; It doesn’t look like there’s a join here either, but look at the plan. 
That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley and the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276; This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexes would be on both columns – a composite such as (SalesPersonID, CustomerID) – and it would have the SalesOrderID column as part of it as the CIX key still). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
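As a follow-up experiment, you can create the composite index the post describes and watch the Merge Join and Hash Match disappear from these plans. A rough sketch against AdventureWorks2012 (the index name is made up; drop it afterwards if you are working on a shared copy):

    -- both predicate columns in one nonclustered index; SalesOrderID comes along as the CIX key
    CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPersonID_CustomerID
        ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);

    -- the earlier query should now resolve with a single Index Seek and no join operator at all
    SELECT SalesOrderID
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID = 276
      AND CustomerID = 29522;

    -- clean up
    DROP INDEX IX_SalesOrderHeader_SalesPersonID_CustomerID ON Sales.SalesOrderHeader;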

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen, parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that, I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database: SELECT  deqs.creation_time,         deqs.execution_count,         deqs.max_logical_reads,         deqs.max_elapsed_time,         deqs.total_logical_reads,         deqs.total_elapsed_time,         deqp.query_plan,         SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,                   (deqs.statement_end_offset - deqs.statement_start_offset) / 2                   + 1) AS QueryStatement FROM    sys.dm_exec_query_stats AS deqs         CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest         CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp WHERE   dest.dbid = DB_ID('Warehouse') AND deqs.statement_end_offset > 0 AND deqs.statement_start_offset > 0 ORDER BY deqs.max_logical_reads DESC ; And looking at the most expensive operation, we have our first bad boy: Multiple table scans against very large sets of data and a sort operation. a sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question, why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8036318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows. well, I'm betting several more than one considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this, it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity. 
This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000-row table, no, not at all. OK. I lied. Of course it is. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.
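For anyone who has inherited the same pattern, the usual remedy is to turn the multi-statement table valued function into an inline one, so the optimizer sees real row estimates instead of the table variable's guess of one row. A sketch with invented names, since the actual Warehouse schema isn't shown here:

    -- multi-statement version: the optimizer estimates a single row coming out of @result
    CREATE FUNCTION dbo.GetStockLevels_MultiStatement (@RegionID int)
    RETURNS @result TABLE (ItemID int, Qty int)
    AS
    BEGIN
        INSERT @result (ItemID, Qty)
        SELECT ItemID, Qty FROM dbo.StockLevels WHERE RegionID = @RegionID;
        RETURN;
    END;
    GO

    -- inline version: expanded into the calling query, so real statistics and indexes are used
    CREATE FUNCTION dbo.GetStockLevels_Inline (@RegionID int)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT ItemID, Qty FROM dbo.StockLevels WHERE RegionID = @RegionID
    );
    GO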

    Read the article

  • Go From Social Glum to Guru at the Social Media Rally Station @ OOW

    - by Kristin Rose
@OPN Partners, We have some #exciting news for you! Just when you thought Oracle OpenWorld #OOW couldn’t get any better; OPN wants to announce a little something called the Social Media Rally Station™. #OMG! Enough with the social talk, hash tags and @’s, since there will be plenty of that at Oracle OpenWorld! This awesome station full of experts is the opportunity you've been looking for to optimize your online presence. You’ll start by receiving an overall evaluation of where you stand online, and get customized, face-to-face, expert advice on how to better engage with your customers and find new prospects online! Here’s what will happen at the Social Media Rally Stations: Partners will check in with a Rally Coordinator who will assess your needs and move you to the appropriate station. You will take part in a Professional Photo Station where you’ll get a head shot to use on social profiles, your own website, or for articles and posts about your company. Finally, the One-2-One Station Consultants will walk you through how you’re using social media today and next steps including Google Alerts, Google Analytics, Search Engine Optimization, LinkedIn, Twitter, Facebook, Google+ and more. Finally, this is a custom engagement so you can decide how you want to focus the time. Go from Social Media glum to guru in under 25 minutes! Oh and a few other things to remember… These Social Media Rally Stations will be taking place on: Sunday, 9/30 from 3-5 p.m. PT at the Esplanade level, Moscone South and Monday, 10/1 from 10 a.m. – 6 p.m. PT at the OPN Lounge in Moscone South, Exhibit Hall Level. Please wear professional attire from the waist up for your head-shot, bring any login info for your social platforms, and come prepared with questions for our One-2-One Consultants! If you have any questions before hitting the ground running at the Social Media Station™ sponsored by Oracle and provided by Channel Maven Consulting, or if you’d like to schedule some time while you’re at Oracle OpenWorld, send an email to [email protected].
Oh and don’t forget to RT this post on Twitter and ‘like’ us on Facebook to spread the word! #Thanks!See you around the social-sphere,#OPN

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release.  As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web.  That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here: New Self-Healing Replication ClustersThe 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters.  Prior to 5.6.5 this has been somewhat of a pain point for MySQL users with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities.  With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools.  You can learn all about the details of the great, problem solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.  New Replication Administration and Failover UtilitiesAs mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up to date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone Article noted above. Better Query Optimization and ThroughputThe MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements: Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization.  Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit or work. Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default. Optimizations for Range based queries - Optimizer now uses ready statistics vs Index based scans for queries with multiple range values. Optimizations for queries using filesort and ORDER BY.  Optimization criteria/decision on execution method is done now at optimization vs parsing stage. Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption. You can learn the details about these new features as well all of the Optimizer based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here. (look under "Development Releases")  Please let us know what you think!  The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here .Also New in MySQL LabsAs has become our tradition when announcing DMRs we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.  
Today is no exception, as we are also releasing the following to Labs for you to download, try, and let us know your thoughts on where we need to improve:
InnoDB Online Operations: MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME.  These operations are non-blocking and will continue to evolve in future DMRs.  You can learn the grainy details by following John Russell's blog.
InnoDB data access via Memcached API ("NotOnlySQL"), an improved refresh of an earlier feature release: Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.
Improved Transactional Performance and Scale: The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale.  These improvements are also included in the aggregated Spring 2012 labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:
2x throughput improvement for read-only activity
6x throughput improvement for SELECT range
Read/Write benchmarks are in progress
More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.
MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever.  It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
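For readers who want to kick the tires on the new replication features, here is a bare-bones sketch of a GTID-based slave (option names follow the MySQL 5.6 documentation and may differ slightly in the 5.6.5 DMR; host and account names are placeholders):

    # my.cnf on both master and slave
    [mysqld]
    server-id               = 2        # unique on each server
    log-bin
    log-slave-updates
    gtid-mode               = ON
    enforce-gtid-consistency           # early 5.6 DMRs call this disable-gtid-unsafe-statements

    -- run on the slave: no binlog file names or positions, GTIDs work out the right spot
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_AUTO_POSITION = 1;
    START SLAVE;

With MASTER_AUTO_POSITION enabled, the same CHANGE MASTER statement also works when the new failover utilities repoint a slave at a promoted master.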

    Read the article

< Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >