Search Results

Search found 24073 results on 963 pages for 'mount point'.


  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic.

    From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted:

    I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.

    Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example:

    A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ...

    A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ...

    A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ...

    The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments:

    When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

    I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system.

        GET /foos/{id}  # read a Foo
        POST /foos/{id} # create a Foo
        PUT /foos/{id}  # update a Foo

    Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment.
    Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich:

    Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck.

    So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying, and how do you put it into practice when documenting/implementing REST APIs?

    Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).
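
    To make the "entered with no prior knowledge beyond the initial URI" idea concrete, here is a minimal C# sketch of what a hypermedia-driven client might look like. The base URI and the "links"/"items"/"self" property names are purely illustrative assumptions - a real media type would define its own link conventions - but the point is that the client hard-codes only the bookmark URI and the link relation names, and discovers every other URI from responses.

        // Minimal sketch (modern .NET, hypothetical media type): the client knows only the
        // entry URI and the link relation names; every other URI is discovered from responses.
        using System;
        using System.Net.Http;
        using System.Text.Json;
        using System.Threading.Tasks;

        class HypermediaClient
        {
            static async Task Main()
            {
                using var http = new HttpClient();

                // 1. Enter the API at the bookmark URI - the only URI the client is allowed to know.
                var rootJson = await http.GetStringAsync("https://api.example.com/");
                using var root = JsonDocument.Parse(rootJson);

                // 2. Discover the Foo collection URI by link relation, not by a documented path template.
                var foosUri = root.RootElement.GetProperty("links").GetProperty("foos").GetString();

                // 3. Fetch the collection; each item carries its own URI and the actions it supports.
                var foosJson = await http.GetStringAsync(foosUri);
                using var foos = JsonDocument.Parse(foosJson);
                foreach (var item in foos.RootElement.GetProperty("items").EnumerateArray())
                {
                    Console.WriteLine(item.GetProperty("links").GetProperty("self").GetString());
                }
            }
        }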


  • How to delete/edit files from readonly filesystem

    - by Santosh Linkha
    I am having problem with my memory device (actually a memory card that act external memory device like pendrive). experimentx@workmateX:/var/www/zendtest$ sudo rm /media/A88F-8788/python-2.7.1-docs-html.zip rm: cannot remove `/media/A88F-8788/python-2.7.1-docs-html.zip': Read-only file system I tried to change the file permission of the system but that doesn't work experimentx@workmateX:/var/www/zendtest$ sudo chmod 0777 /media/A88F-8788/python-2.7.1-docs-html.zip chmod: changing permissions of `/media/A88F-8788/python-2.7.1-docs-html.zip': Read-only file system But it perfectly works on windows. UPDATE On opening the drive and running command sudo mount -o remount,rw /media/A88F-8788 /var/log/syslog: Mar 23 15:29:48 workmateX kernel: [18042.257407] fat_get_cluster: 11 callbacks suppressed Mar 23 15:29:48 workmateX kernel: [18042.257414] FAT: Filesystem error (dev sdb1) Mar 23 15:29:48 workmateX kernel: [18042.257418] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:29:48 workmateX kernel: [18042.257425] FAT: Filesystem has been set read-only Mar 23 15:29:48 workmateX kernel: [18042.258187] FAT: Filesystem error (dev sdb1) Mar 23 15:29:48 workmateX kernel: [18042.258194] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.333787] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.333795] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.335949] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.335957] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.354903] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.354911] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.357213] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.357221] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.359547] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.359555] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.361929] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.361936] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.377416] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.377424] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.379384] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.379392] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.381898] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.381906] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:35 workmateX kernel: [18149.383764] FAT: Filesystem error (dev sdb1) Mar 23 15:31:35 workmateX kernel: [18149.383772] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.569747] fat_get_cluster: 11 callbacks suppressed Mar 23 15:31:40 workmateX kernel: [18154.569754] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.569758] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.569765] FAT: Filesystem has been set read-only Mar 23 15:31:40 workmateX kernel: [18154.572022] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.572029] 
fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.582933] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.582941] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.585921] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.585929] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.587819] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.587827] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.597547] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.597555] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.599503] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.599511] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.602896] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.602905] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.615338] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.615346] fat_get_cluster: invalid cluster chain (i_pos 0) Mar 23 15:31:40 workmateX kernel: [18154.618574] FAT: Filesystem error (dev sdb1) Mar 23 15:31:40 workmateX kernel: [18154.618581] fat_get_cluster: invalid cluster chain (i_pos 0) var/log/message: Mar 23 15:29:48 workmateX kernel: [18042.257407] fat_get_cluster: 11 callbacks suppressed Mar 23 15:31:40 workmateX kernel: [18154.569747] fat_get_cluster: 11 callbacks suppressed


  • Crash when trying to get NSManagedObject from NSFetchedResultsController after 25 objects?

    - by Jeremy
    Hey everyone, I'm relatively new to Core Data on iOS, but I think I've been getting better with it. I've been experiencing a bizarre crash, however, in one of my applications and have not been able to figure it out. I have approximately 40 objects in Core Data, presented in a UITableView. When tapping on a cell, a UIActionSheet appears, presenting the user with a UIActionSheet with options related to the cell that was selected. So that I can reference the selected object, I declare an NSIndexPath in my header called "lastSelection" and do the following when the UIActionSheet is presented: // Each cell has a tag based on its row number (i.e. first row has tag 0) lastSelection = [NSIndexPath indexPathForRow:[sender tag] inSection:0]; NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection]; BOOL onDuty = [[managedObject valueForKey:@"onDuty"] boolValue]; UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@"Status" delegate:self cancelButtonTitle:nil destructiveButtonTitle:nil otherButtonTitles:nil]; if(onDuty) { [actionSheet addButtonWithTitle:@"Off Duty"]; } else { [actionSheet addButtonWithTitle:@"On Duty"]; } actionSheet.actionSheetStyle = UIActionSheetStyleBlackOpaque; // Override the typical UIActionSheet behavior by presenting it overlapping the sender's frame. This makes it more clear which cell is selected. CGRect senderFrame = [sender frame]; CGPoint point = CGPointMake(senderFrame.origin.x + (senderFrame.size.width / 2), senderFrame.origin.y + (senderFrame.size.height / 2)); CGRect popoverRect = CGRectMake(point.x, point.y, 1, 1); [actionSheet showFromRect:popoverRect inView:[sender superview] animated:NO]; [actionSheet release]; When the UIActionSheet is dismissed with a button, the following code is called: - (void)actionSheet:(UIActionSheet *)actionSheet willDismissWithButtonIndex:(NSInteger)buttonIndex { // Set status based on UIActionSheet button pressed if(buttonIndex == -1) { return; } NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection]; if([actionSheet.title isEqualToString:@"Status"]) { if([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"On Duty"]) { [managedObject setValue:[NSNumber numberWithBool:YES] forKey:@"onDuty"]; [managedObject setValue:@"onDuty" forKey:@"status"]; } else { [managedObject setValue:[NSNumber numberWithBool:NO] forKey:@"onDuty"]; [managedObject setValue:@"offDuty" forKey:@"status"]; } } NSError *error; [self.managedObjectContext save:&error]; [tableView reloadData]; } This might not be the most efficient code (sorry, I'm new!), but it does work. That is, for the first 25 items in the list. Selecting the 26th item or beyond, the UIActionSheet will appear, but if it is dismissed with a button, I get a variety of errors, including any one of the following: [__NSCFArray section]: unrecognized selector sent to instance 0x4c6bf90 Program received signal: “EXC_BAD_ACCESS” [_NSObjectID_48_0 section]: unrecognized selector sent to instance 0x4c54710 [__NSArrayM section]: unrecognized selector sent to instance 0x4c619a0 [NSComparisonPredicate section]: unrecognized selector sent to instance 0x6088790 [NSKeyPathExpression section]: unrecognized selector sent to instance 0x4c18950 If I comment out NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:lastSelection]; it doesn't crash anymore, so I believe it has something do do with that. Can anyone offer any insight? 
Please let me know if I need to include any other information. Thanks! EDIT: Interestingly, my fetchedResultsController code returns a different object every time. Is this expected, or could this be a cause of my issue? The code looks like this: - (NSFetchedResultsController *)fetchedResultsController { /* Set up the fetched results controller. */ // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Employee" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; // Set the batch size to a suitable number. [fetchRequest setFetchBatchSize:80]; // Edit the sort key as appropriate. NSString *sortKey; BOOL ascending; if(sortControl.selectedSegmentIndex == 0) { sortKey = @"startTime"; ascending = YES; } else if(sortControl.selectedSegmentIndex == 1) { sortKey = @"name"; ascending = YES; } else { sortKey = @"onDuty"; ascending = NO; } NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:sortKey ascending:ascending]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; // Edit the section name key path and cache name if appropriate. NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:self.managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; NSError *error = nil; if (![fetchedResultsController_ performFetch:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. */ //NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return fetchedResultsController_; } This happens when I set a breakpoint: (gdb) po [self fetchedResultsController] <NSFetchedResultsController: 0x61567c0> (gdb) po [self fetchedResultsController] <NSFetchedResultsController: 0x4c83630>


  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
    This is probably the longest blog post I’ve written in a LONG time. And if you’re used to coming here for the Oracle stuff, this post is not about that. It’s about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down How I Moved to WP Engine How WP Engine ‘Hooked’ Me Why WP Engine? I started thatJeffSmith.com on May 28th, 2010. I had been already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with bluehost. Bluehost makes setting up a WordPress site very, very easy. And, they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year’s worth of service and my domain name registration – a very good value. And then last year I paid $107.40 for another year’s services. And when that year expired I paid another $190.80 for an additional two year’s service in advance. I had been up to that point, getting my money’s worth. And then, just a few weeks ago… My Site Slowed to a Crawl That spike was from an April Fool's Day Post, I think Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their ‘Chat now’ technology – much nicer than message boards only or pay-to-talk phone support. In the past 6 months however, I noticed a couple of things: daily traffic was increasing – woohoo! my service was experiencing severe CPU throttling – doh! To be honest, I wasn’t aware the throttling was occuring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren’t you blogging on Blogger? Ugh. I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify’s Grido (thanks to @SQLRockstar for the heads-up) I installed a couple of WP caching plugins I read every WP optimization blog post I could get my greedy little eyes on However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result I had at one point about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called every page load meant that as more people clicked on my site, the more CPU I needed. I’m not stupid, so I eventually figured out that maybe less plugins was better, and was able to go down to just 20. But still, the site was running like a dog. CPU Throttling, makes MySQL wait to run a query Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) 
other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA ‘throttling.’ This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn’t. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them were being held in check. The blog visitor notice this as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn’t offer dedicated server hosting, so there was no real upgrade path for me follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask for people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved ThatJeffSmith.com The first 2 months are free, and after that it’s about $29/month to run my site on their system. My math tells me that’s a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog even though I talk about work stuff here. I don’t get paid for blogging, I don’t sell ads, and I don’t expense the service fees – this is my personal passion. So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic as Google penalizes you in search results if your site is too slow, and of course some folks won’t even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won’t have a good reason not to write How I Moved to WP Engine I signed up for the service and registered my domain. I then took a full export of my ‘old’ site by doing a FTP GET of all my files, then did a MySQL database backup, exported my WordPress Theme settings to a .zip file, and then finally used the WordPress ‘Export’ feature. I then used the WordPress ‘Import’ on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP’d the ‘wp-content’ directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!) Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I’ll detail the challenges and issues of fixing the content next – but then it was time to ‘flip the switch.’ I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated and it was time to see the new site! But It Was ‘Broken’ I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I had needed to set the CNAME (Alias) ‘www’ record to point to a different URL than the ‘www.thatjeffsmith.com’ entry I had set. Once corrected the site was up and running in less than a minute. 
Then It Was Only Mostly Broken Many of my plugins weren’t working. Apparently just ftp’ing the wp-content directory up wasn’t the proper way to re-install the plugin. I suspect file permissions or file ownership wasn’t proper. Some plug-ins were working, many had their settings wiped to the defaults, and a few just didn’t work again. I had to delete the directory of the plug-in manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first ‘lesson’ – don’t switch the DNS records until you’ve completely tested your new site. I wasn’t able to navigate the old WP console to review my plug-in settings. Thankfully I was able to use the Wayback Machine to reverse engineer some things, and of course most plug-ins aren’t that complicated to setup to begin with. An example of one that I had to redo from scratch is the ‘Twitter @Anywhere Plus’ plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story. How WP Engine ‘Hooked’ Me I actually signed up with another provider first. They ranked highly in Google searches and a few Tweeps recommended them to me. But hours after signing up and I still didn’t have sever reyady, I was ready to give up on them. They offered no chat or phone support – only mail and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email as their website ‘cancel’ button was non-existent. Within minutes of activating my WP Engine account I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding. I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evident by their 60-day money back guarantee. And if I understand it correctly, they don’t even take your money until after that 60 day period is over. After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It’s not just an account that’s setup for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. ‘Partner’ – that’s a lot nicer word than just ‘service provider,’ isn’t it? Oh, and they offered me a t-shirt. Don’t ever doubt the power of a ‘free’ t-shirt! How awesome is this e-mail, from a customer perspective? I wasn’t really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty. But Jeff, You Skipped a Piece Here, Why WP Engine? I found them on one of those ‘Top 10′ list posts, and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that’s it. Their servers are tuned specifically for running WordPress. They had in bolded text, things like ‘INSANELY FAST. 
INFINITELY SCALABLE.’ and ‘LIGHTNING SPEED.’ And then they offered insurance against hackers and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been ‘blacklisted.’ In order to guarantee that ‘lightning’ speed, they have banned the use of the CPU-suckiest plugins. One of those is the ‘Related Posts’ plugin. So if you are a subscriber and are reading this in your email, you’ll notice there’s no links back to my blog to continue reading other related stories. Since that referral traffic is very small single-digit for my site, I decided that I’m OK with that. I’d rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I’ll be sure to update this post when that time comes and let y’all know how it turns out.


  • How to install Huawei Mobile broadband EC306?

    - by serviteur
    How do I install the Huawei Mobile Broadband EC306 EVDO Rev B modem on Ubuntu 12.04 LTS 64-bit? (Excuse my poor English.) When I connect the modem on Ubuntu, it fails to mount and is not recognized as a CD-ROM. I don't have Windows installed on my computer, but I tried the modem under Windows on a friend's PC; the bundled software contains no script called "Linux", only Windows files.

    lsusb:

        serviteur@creation:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 15d9:0a4c Trust International B.V. USB+PS/2 Optical Mouse
        Bus 001 Device 007: ID 12d1:1506 Huawei Technologies Co., Ltd. E398 LTE/UMTS/GSM Modem/Networkcard

    dmesg (truncated):

        Q: 0 ANSI: 2
        [16619.060771] sr1: scsi-1 drive
        [16619.060955] sr 13:0:0:0: Attached scsi CD-ROM sr1
        [16619.061099] sr 13:0:0:0: Attached scsi generic sg3 type 5
        [16619.061358] sd 14:0:0:0: Attached scsi generic sg4 type 0
        [16619.063654] sd 14:0:0:0: [sdc] Attached SCSI removable disk
        [16634.224923] usb 1-6: USB disconnect, device number 6
        [16638.468041] usb 1-6: new high-speed USB device number 7 using ehci_hcd
        [16638.586210] option 1-6:1.0: GSM modem (1-port) converter detected
        [16638.586316] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB0
        [16638.586435] option 1-6:1.1: GSM modem (1-port) converter detected
        [16638.586517] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB1
        [16638.586607] option 1-6:1.2: GSM modem (1-port) converter detected
        [16638.586676] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB2
        [16638.586752] option 1-6:1.3: GSM modem (1-port) converter detected
        [16638.586828] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB3
        [16638.586929] option 1-6:1.4: GSM modem (1-port) converter detected
        [16638.586997] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB4
        [16638.587114] option 1-6:1.5: GSM modem (1-port) converter detected
        [16638.587187] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB5
        [16638.646686] option1 ttyUSB5: GSM modem (1-port) converter now disconnected from ttyUSB5
        [16638.646706] option 1-6:1.5: device disconnected
        [16638.660755] scsi15 : usb-storage 1-6:1.5
        [16638.663284] option1 ttyUSB4: GSM modem (1-port) converter now disconnected from ttyUSB4
        [16638.663301] option 1-6:1.4: device disconnected
        [16638.689043] scsi16 : usb-storage 1-6:1.4


  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
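
    As a footnote to the error-handling section above, here is a minimal, illustrative sketch of the controller-side OnException override that returns the slimmed-down ErrorModel as JSON for AJAX requests. The base controller name, the "LogAndReplace" policy name, and the error message are assumptions standing in for whatever is configured in the real Web.config and Enterprise Library settings.

        // Illustrative sketch only - names of the policy and error text are placeholders.
        using System.Web.Mvc;
        using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

        public class ErrorModel
        {
            // Keep the payload as small as possible: expose no internals to the client.
            public string Error { get; set; }
        }

        public abstract class BaseController : Controller
        {
            protected override void OnException(ExceptionContext filterContext)
            {
                // Hand the exception to the Enterprise Library policy configured in Web.config.
                ExceptionPolicy.HandleException(filterContext.Exception, "LogAndReplace");

                if (filterContext.HttpContext.Request.IsAjaxRequest())
                {
                    // Return a terse JSON payload instead of ASP.NET's Yellow Screen Of Death.
                    filterContext.Result = Json(
                        new ErrorModel { Error = "An unexpected error occurred." },
                        JsonRequestBehavior.AllowGet);
                    filterContext.ExceptionHandled = true;
                }

                base.OnException(filterContext);
            }
        }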


  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T> .  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it’s possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don’t care what item it is or how long its been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding or removing to a thread-local list requires very low synchronization.  The problem comes in where a thread goes to consume an item but it’s local list is empty.  In this case the bag performs “work-stealing” where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less-than-idea in situations where some threads are dedicated producers and the other threads are dedicated consumers because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items. 
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock collection Each of these needs to take a snapshot of whole bag to determine if empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because it is based on a snapshot, will not show subsequent updates after snapshot. Add() is lightweight Since adding to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items in thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(), however if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations, it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bare with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag, this Tuple will contain two ints.  The first int is the original number, and the second int is the last multiple of that number.  So we could load our bag with the initial values (let’s say we want to know the last multiple of each of 2, 3, 5, and 7 under 100. 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that given the bag, will take out an item, apply the multiplier again, 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around any of the concurrent queue, stack, and bag collections that lets you add producer/consumer semantics easily, including blocking when the bin is full or empty. That's the end of my dive into the concurrent collections.  I'd also strongly recommend, once again, that you read the excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain by using these collections judiciously (here). Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders
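One member the article lists but never demonstrates is GetConsumingEnumerable(). As a minimal sketch of how it could simplify the consumer above (my own illustration, not code from the original post; the class name ConsumingEnumerableSketch is made up), a foreach over GetConsumingEnumerable() blocks until an item is available and ends on its own once CompleteAdding() has been called and the bin drains, so no IsCompleted/TryTake polling loop is needed:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ConsumingEnumerableSketch
{
    public static void Run()
    {
        var bin = new BlockingCollection<int>(5);

        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 20; i++)
            {
                bin.Add(i);              // blocks while the bin is full
            }
            bin.CompleteAdding();        // lets the consumer's foreach complete
        });

        var consumer = Task.Factory.StartNew(() =>
        {
            // Blocks waiting for items; exits once adding is complete and the bin is empty.
            foreach (var item in bin.GetConsumingEnumerable())
            {
                Console.WriteLine("Consuming item {0}.", item);
            }
        });

        Task.WaitAll(producer, consumer);
    }
}

Run against the same bounded bin as in the article, this produces the same kind of interleaved output without the explicit IsCompleted bookkeeping.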

    Read the article

  • HttpModule Init method is called several times - why?

    - by MartinF
I was creating an HTTP module, and while debugging I noticed something which at first (at least) seemed like weird behaviour. When I set a breakpoint in the Init method of the HttpModule, I can see that the Init method is being called several times even though I have only started up the website for debugging and made one single request (sometimes it is hit only 1 time, other times as many as 10 times). I know that I should expect several instances of the HttpApplication to be running, and that the HTTP modules will be created for each of them, but when I request a single page it should be handled by a single HttpApplication object and therefore only fire the associated events once. Still, it fires the events several times for each request, which makes no sense - other than that the handler must have been added several times within that HttpApplication - which means it is the same HttpModule's Init method that is being called every time, and not a new HttpApplication being created each time it hits my breakpoint (see my code example at the bottom). What could be going wrong here? Is it because I am debugging and have set a breakpoint in the HTTP module? I have noticed that if I start up the website for debugging and quickly step over the breakpoint in the HttpModule, it will only hit the Init method once, and the same goes for the event handler. If I instead let it hang at the breakpoint for a few seconds, the Init method is called several times (it seems to depend on how long I wait before stepping over the breakpoint). Maybe this is some built-in feature to make sure that the HttpModule is initialised and the HttpApplication can serve requests, but it also seems like something that could have catastrophic consequences. This could seem logical, as it might be trying to finish the request, and since I have set the breakpoint it thinks something has gone wrong and tries to call the Init method again so it can handle the request? But is this what is happening and is everything fine (I am just guessing), or is it a real problem? What I am especially concerned about is that if something makes it hang on the "production/live" server for a few seconds, a lot of event handlers are added through Init, and therefore each request to the page suddenly fires the event handler several times. This behaviour could quickly bring any site down. I have looked at the "original" .NET code used for the HTTP modules for forms authentication and the RoleManagerModule etc., but my code isn't any different from what those modules use. My code looks like this:

public void Init(HttpApplication app)
{
    if (CommunityAuthenticationIntegration.IsEnabled)
    {
        FormsAuthenticationModule formsAuthModule = (FormsAuthenticationModule) app.Modules["FormsAuthentication"];
        formsAuthModule.Authenticate += new FormsAuthenticationEventHandler(this.OnAuthenticate);
    }
}

Here is an example of how it is done in the RoleManagerModule from the .NET Framework:

public void Init(HttpApplication app)
{
    if (Roles.Enabled)
    {
        app.PostAuthenticateRequest += new EventHandler(this.OnEnter);
        app.EndRequest += new EventHandler(this.OnLeave);
    }
}

Does anyone know what is going on? (I just hope someone out there can tell me why this is happening and assure me that everything is perfectly fine) :) UPDATE: I have tried to narrow down the problem, and so far I have found that the Init method being called is always on a new object of my HTTP module (contrary to what I thought before).
It seems that for the first request (when starting up the site) all of the HttpApplication objects being created, and their modules, are all trying to serve the first request and therefore all hit the event handler that is being added. I can't really figure out why this is happening. If I request another page, all the HttpApplications created (and their modules) will again try to serve the request, causing it to hit the event handler multiple times. But it also seems that if I then jump back to the first page (or another one), only one HttpApplication takes care of the request and everything is as expected - as long as I don't let it hang at a breakpoint. If I let it hang at a breakpoint, it begins to create new HttpApplication objects and starts adding HttpApplications (more than one) to serve/handle the request (which is already in the process of being served by the HttpApplication that is currently stopped at the breakpoint). I guess, or hope, that it might be some intelligent "behind the scenes" way of helping to distribute and handle load and/or errors. But I have no clue. I hope someone out there can assure me that this is perfectly fine and how it is supposed to be?
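A quick way to check what is actually going on (a rough diagnostic sketch of my own, not code from the question; the class name DiagnosticModule is made up) is to log which module instance and which HttpApplication instance each Init call and each request event belongs to. If every Init call reports a different pair of hash codes, ASP.NET is simply creating one module instance per pooled HttpApplication - which matches the UPDATE above and is normal - rather than calling Init repeatedly on the same instance:

using System;
using System.Diagnostics;
using System.Web;

public class DiagnosticModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Each pooled HttpApplication gets its own module instance, so several
        // Init calls at startup are expected; the hash codes tell instances apart.
        Debug.WriteLine(string.Format(
            "Init: module {0} attached to application {1}",
            this.GetHashCode(), app.GetHashCode()));

        app.PostAuthenticateRequest += (sender, e) =>
        {
            var application = (HttpApplication)sender;
            Debug.WriteLine(string.Format(
                "Event: module {0}, application {1}, url {2}",
                this.GetHashCode(), application.GetHashCode(),
                application.Context.Request.RawUrl));
        };
    }

    public void Dispose() { }
}

If the same module instance ever logged Init twice, that really would mean duplicate event subscriptions for one application and would be worth worrying about.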

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]

    Read the article

  • Ubuntu 11.10 running in windows 7 (wubi) AND on a separate partition

    - by Pareen
I am in a very strange situation and need some help: I installed Ubuntu 11.10 through Wubi a while back so that I can use it alongside Windows 7. I was running out of space on my disk when trying to install applications. Without understanding how Wubi worked, I partitioned my C drive (creating a new 90 GB partition) in Windows, booted from the Ubuntu 11.10 install/live disk, and used the "something else" option to create an ext4 partition (setting the mount point to root) and a swap space partition (/sda5 and /sda6). After the install, my computer no longer boots with the previous Wubi menu and is now using the Linux GRUB. The options I have are /sda2, which boots Windows 7; /sda1, which doesn't do anything and reloads the same menu; and the options to run Linux. So I now have Ubuntu running on a separate partition, as well as the original Wubi install. I want to delete the separate partition and go back to running Ubuntu on Wubi... if I remove the partition, will I need the Windows 7 disk to restore the boot loader? I don't have the Windows 7 disk on me, so what is the best way to clean this up so I get rid of the separate partition? UPDATE: Thank you so much for your response. Actually, it would be fantastic if I could migrate my Wubi install into the new partition, because I had downloaded the AOSP on the Wubi install (as well as other files) and would love to preserve them. If I can do that and work on the new partition with the old files then that would be great, and I can worry about wiping out the partition completely later on, i.e. when I have the Windows disk or something. Can you tell me how to do this migration? So when I select /sda2, it loads up my Windows. If I click on the Linux option, it loads up the newly installed Linux (my files that were on the Wubi install aren't there) fine. If I click on /sda1 (SYSTEM_DRIVE... this is what Wubi was using to boot the menu that let me select Windows 7 or Ubuntu)... it fails and just reloads the original menu. Here is the link to my boot info script http://pastebin.com/dMrY0NL3

    Read the article

  • rkhunter 1.4 different results than version before?

    - by dschinn1001
    with rkhunter version before ubuntu-update from 12.04 to 12.10 I had NOT these warnings like listed here: Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ Warning ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/tcpd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/curl [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ Warning ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ Warning ] /usr/bin/locate [ Warning ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ Warning ] /usr/bin/lynx [ Warning ] /usr/bin/mail [ Warning ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ Warning ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ Warning ] /usr/bin/rkhunter [ Warning ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ Warning ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/gawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ Warning ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ Warning ] /bin/egrep [ Warning ] /bin/fgrep [ Warning ] /bin/fuser [ Warning ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ Warning ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ 
Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ Warning ] /bin/dash [ Warning ] It seems that rkhunter 1.4 is oversensitive somehow about changed bin-files ? chkrootkit finds nothing and no warnings too.

    Read the article

  • How to Format a USB Drive in Ubuntu Using GParted

    - by Trevor Bekolay
If a USB hard drive or flash drive is not properly formatted, then it will not show up in the Ubuntu Places menu, making it hard to interact with. We'll show you how to format a USB drive using the tool GParted. Note: Formatting a USB drive will destroy any data currently stored on it. If you think that your USB drive is already properly formatted, but Ubuntu just isn't picking it up, try unplugging it and plugging it back in to a different USB slot, or restarting your machine with the device plugged in on start-up. Open a terminal by clicking on Applications in the top-left of the screen, then Accessories > Terminal. GParted should be installed by default, but we'll make sure it's installed by entering the following command in the terminal: sudo apt-get install gparted To open GParted, enter the following command in the terminal: sudo gparted Find your USB drive in the drop-down box at the top right of the GParted window. The drive should be unallocated – if it has a valid partition on it, then you may be looking at the wrong drive. Note: Make sure you're on the correct drive, as making changes on the wrong hard drive with GParted can delete all data on a hard drive! Assuming you're on the right drive, right-click on the unallocated grey block and click New. In the window that pops up, change the File System to fat32 for USB Flash Drives, NTFS for USB Hard Drives that will be used in Windows, or ext3/ext4 for USB Hard Drives that will be used exclusively in Linux. Add a label if you'd like, and then click Add. Click the green checkmark and then the Apply button to apply the changes. GParted will now format your drive. If you're formatting a large USB Hard Drive, this can take some time. Once the process is done, you can close GParted, and the drive will now show up in the Places menu. Clicking on the drive will mount it and open it in a File Browser window. It will also add a shortcut to the drive on the Desktop by default. Your USB drive is now ready to store your files!

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04, it is running BackupPC and collects backups from various machines / servers around the building. On the 8th minute (12:08, 12:18, 12:28 etc) the backups are transferred to an external hard drive, we have three and rotate one drive for another everyday. The problem we are having is we are randomly experiencing input / output errors, when this happens you cannot read / write to the drive, it hasn't unmounted so I can cd to the mount point /media/backup1. The drives are not faulty as it's happening on all of them, so I'm at a loss as to what the problem could be, here is an example of the many errors we get: gzip: stdout: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error EDIT » Resolved So it turns out Quamis was right, even though I didn't think it was possible it was actually a problem with the drive. You see we have three drives all formatted to ext2, on two of them we were getting I/O errors frequently, I cam back to Quamis' answer and discovered the fsck command, so ran it against the problems drives: fsck /dev/sdb1 This found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of drives etc, as the drives are in the xt2 format they aren't journalled and thus aren't protected against such issues. Drives are now working beautifully, thanks all! :D

    Read the article

  • Did I lost my RAID again?

    - by BarsMonster
    Hi! A little history: 2 years ago I was really excited to find out that mdadm is so powerful so it even can reshape arrays so you can start with a smaller array and the grow it as you need. I've bought 3x1Tb drives and made RAID-5. It was fine for a year. Then I bought 2x more, and tried to reshape to RAID-6 out of 5 drives, and due to some mess with superblock versions, lost all content. Had to rebuild it from scratch, but 2Tb of data were gone. Yesterday I bought 2 more drives, and this time I had everything: properly built array, UPS. I've disabled write intent map, added 2 new drives as a spare and run a command to grow array to 7-disk. It started working, but speed was ridiculously slow, ~100kb/sec. AFter processing first 37Mb at such an amasing speed, one of old HDDs fails. I properly shutdown PC and disconnected failed drive. After bootup it appeared it recreated intent map as it was still in mdadm config, so I removed it from config and rebooted again. Now all I see is that all mdadm processes deadlocks, and don't do anything. PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1937 root 20 0 12992 608 444 D 0 0.1 0:00.00 mdadm 2283 root 20 0 12992 852 704 D 0 0.1 0:00.01 mdadm 2287 root 20 0 0 0 0 D 0 0.0 0:00.01 md0_reshape 2288 root 18 -2 12992 820 676 D 0 0.1 0:00.01 mdadm And all I see in mdstat is: $ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5] 2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU] [>....................] reshape = 0.0% (37888/976561152) finish=567604147.2min speed=0K/sec I've already tried mdadm 2.6.7, 3.1.4 and 3.2 - nothing helps. Did I lost my data again? Any suggestions how can I make it work? OS is Ubuntu Server 10.04.2... PS. Needless to say that data is unaccessible - I cannot mount /dev/md0 as save the most valuable data. You can see my disappointment - the very specific thing I was excited about failed twice taking 5Tb of my data with it.

    Read the article

  • Crash Report in Ubuntu... hardware problem?

    - by Andrew
    Got this on my machine. I was just browsing the web on Chrome and my computer froze. I recently just built this machine. I have a feeling it is a hardware problem... Possibly one of my parts arrived broken in some way.... Starting anac(h)ronistic cron Stopping anac(h)ronistic cron Stopping cold plug devices Stopping log initial device creation Starting enable remaining boot-time encrypted block devices Starting configure network device security Starting configure virtual network devices Starting save udev log and update rules Stopping configure virtual network devices Stopping save udev log and update rules Checking battery state... Stopping System V runlevel compatibility Stopping enable remaining boot-time encrypted block devices Stopping Mount filesystems on boot 91.573384] BUG: unable to handle kernel NULL pointer dereference at (null) 91.573437] IP: [<ffffffff81313514>] strcmp+0x14/0x30 91.573470] PGD 1f7822067 PUD 1ed7a6067 PMD 0 91.573498] Oops: 0000 [#1] SMP 91.573519] CPU 3 91.573531] Modules linked in: dm_crypt bnep snd_hda_codec_realtek rfcomm bluetooth parport_pc ppdev arc4 fglrx(P) rt2800usb rt2800lib crc_ccitt rt2x00usb rt2x00lib mac0021 cfg80211 psmouse snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer send_seq_device snd joydev mac_hid mei(C) soundcore serio_raw snd_page_alloc lp parport ses enclosure usbhid hid i915 drm_kms_helper drm i2c_algo_bit mxm_umi tg_video wmi usb_storage 91.573826] 91.573837] Pid: 2297, comm: update-notifier Tainted: P C O 3.2.0-29-generic #46-Ubuntu To Be Filled By O.E.M. To Be Filled By O.E.M./Z77 Extreme4 91.573912] RIP: 0010:[<ffffffff81313514>] [<ffffffff81313514>] strcmp+0x14/0x30 91.573954] RSP: 0018:ffff8801f83f5bb8 EFLAGS: 00010246 91.573982] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 91.574019] RDX: 0000000000000069 RSI: 0000000000000000 RDI: ffff88021adb26f8 91.574056] RBP: ffff8801f83f5bb8 R08: ffff88022f2d6e80 R09: 0000000000000000 91.574093] R10: ffff88021e7dbf00 R11: 0000000000000003 R12: ffff88021c10eb40 91.574130] R13: 0000000000000000 R14: ffff88021adb26f8 R15: ffff8801f83f5d40 91.574168] FS: 00007f958cf53940(0000) GS:ffff88022f2c0000(0000) kn1GS:0000000000000000 91.574210] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 91.574240] CR2: 0000000000000000 CR3: 000000021f6d7000 CR4: 00000000000406e0 91.574277] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 91.574314] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000000 91.574351] Process update-notifier (pid: 2297, threadinfo ffff801f83f4000, task ffff880208fe2e00) 91.574397] Stack: 91.574409] ffff8801f83f5be8 ffffffff811ed509 ffff88021adb26c0 ffff88021b8b7020 91.574453] ffff88021b461c60 fffffffffffffffe ffff8801f83f5c18 ffffffff811ed61f 91.574496] ffff88021adb26c0 ffff88021b8b7020 ffff8801f83f5dc8 0000000000000001 91.574539] Call Trace: 91.574558] [<ffffffff811ed509] sysfs_find_dirent+0x59/0x110 91.574591] [<ffffffff811ed61f] sysfs_lookup+0x5f/0x110 91.574621] [<ffffffff81182745] d_alloc_and_lookup+0x45/0x90 91.574654] [<ffffffff8118fe65] ? d_lookup+0x35/0x60 91.574683] [<ffffffff811848d2] do_lookup+0x202/0x310 91.574712] [<ffffffff8118660c] path_lookupat+0x11c/0x750 91.574744] [<ffffffff81318db7] ? __strncpy_from_user+0x27/0x60 91.574778] [<ffffffff81186c71] do_path_lookup+0x31/0xc0 91.574809] [<ffffffff81187779] user_path_at_empty+0x59/0xa0 91.574842] [<ffffffff81187822] ? 
do_filp_open+0x42/0xa0 91.574872] [<ffffffff811877d1] user_path_at+0x11/0x20 91.574902] [<ffffffff8117c80a] vfs_fstatat+0x3a/0x70 91.574933] [<ffffffff81161cff] ? kmem_cache_free+0x2f/0x110 91.574965] [<ffffffff8117c85e] vfs_lstat+-x31/0x70 91.574993] [<ffffffff8117c9fa] sys_newlstat+0x1a/0x40 91.575022] [<ffffffff81176ee1] ? do_sys_open+0x171/0x220 91.575053] [<ffffffff8117cb1a] ? sys_readlinkat+0x7a/0xb0 91.575086] [<ffffffff81661ec2] system_call_fastpath+0x16/0x1b 91.575118] Code: 83 c1 01 40 84 ff 75 ef 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 00 55 31 c0 48 89 e5 66 2e 0f 1f 84 00 00 00 00 00 0f b6 14 07 <3a> 14 06 75 0f 48 83 c0 01 84 d2 75 ef 31 c0 5d c3 0f 1f 00 19 91.577243] RIP [<ffffffff81313514>] strcmp+0x14/0x30 91.579314] RSP <ffff8801f83f5bb8> 91.581385] CR2: 0000000000000000

    Read the article

  • Using RJS to replace innerHTML with a real live instance variable.

    - by Steve Cotner
    I can't for the life of me get RJS to replace an element's innerHTML with an instance variable's attribute, i.e. something like @thing.name I'll show all the code (simplified from the actual project, but still complete), and I hope the solution will be forehead-slap obvious to someone... In RoR, I've made a simple page displaying a random Chinese character. This is a Word object with attributes chinese and english. Clicking on a link titled "What is this?" reveals the english attribute using RJS. Currently, it also hides the "What is this?" link and reveals a "Try Another?" link that just reloads the page, effectively starting over with a new random character. This is fine, but there are other elements on the page that make their own database queries, so I would like to load a new random character by an AJAX call, leaving the rest of the page alone. This has turned out to be harder than I expected: I have no trouble replacing the html using link_remote_to and page.replace_html, but I can't get it to display anything that includes an instance variable. I have a Word resource and a Page resource, which has a home page, where all this fun takes place. In the PagesController, I've made a couple ways to get random words. Either one works fine... Here's the code: class PagesController < ApplicationController def home all_words = Word.find(:all) @random_word = all_words.rand @random_words = Word.find(:all, :limit => 100, :order => 'rand()') @random_first = @random_words[1] end end As an aside, the SQL call with :limit => 100 is just in case I think of some way to cycle through those random words. Right now it's not useful. Also, the 'rand()' is MySQL specific, as far as I know. In the home page view (it's HAML), I have this: #character_box = render(:partial => "character", :object => @random_word) if @random_word #whatisthis = link_to_remote "? what is this?", :url => { :controller => 'words', :action => 'reveal_character' }, :html => { :title => "Click for the translation." } #tryanother.{:style => "display:none"} = link_to "try another?", root_path Note that the #'s in this case represent divs (with the given ids), not comments, because this is HAML. The "character" partial looks like this (it's erb, for no real reason): <div id="character"> <%= "#{@random_word.chinese}" } %> </div> <div id="revealed" style="display:none"> <ul> <li><span id="english"><%= "#{@random_word.english_name}" %></span></li> </ul> </div> The reveal_character.rjs file looks like this: page[:revealed].visual_effect :slide_down, :duration => '.2' page[:english].visual_effect :highlight, :startcolor => "#ffff00", :endcolor => "#ffffff", :duration => '2.5' page.delay(0.8) do page[:whatisthis].visual_effect :fade, :duration => '.3' page[:tryanother].visual_effect :appear end That all works perfectly fine. But if I try to turn link_to "try another?" into link_to_remote, and point it to an RJS template that replaces the "character" element with something new, it only works when I replace the innerHTML with static text. If I try to pass an instance variable in there, it never works. For instance, let's say I've defined a second random word under Pages#home... I'll add @random_second = @random_words[2] there. Then, in the home page view, I'll replace the "try another?" link (previously pointing to the root_path), with this: = link_to_remote "try another?", :url => { :controller => 'words', :action => 'second_character' }, :html => { :title => "Click for a new character." 
} I'll make that new RJS template, at app/views/words/second_character.rjs, and a simple test like this shows that it's working: page.replace_html("character", "hi") But if I change it to this: page.replace_html("character", "#{@random_second.english}") I get an error saying I fed it a nil object: ActionView::TemplateError (undefined method `english_name' for nil:NilClass) on line #1 of app/views/words/second_character.rjs: 1: page.replace_html("character", "#{@random_second.english}") Of course, actually instantiating @random_second, @random_third and so on ad infinitum would be ridiculous in a real app (I would eventually figure out some better way to keep grabbing a new random record without reloading the page), but the point is that I don't know how to get any instance variable to work here. This is not even approaching my ideal solution of rendering a partial that includes the object I specify, like this: page.replace_html 'character', :partial => 'new_character', :object => @random_second As I can't get an instance variable to work directly, I obviously cannot get it to work via a partial. I have tried various things like: :object => @random_second or :locals => { :random_second => @random_second } I've tried adding these all over the place -- in the link_to_remote options most obviously -- and studying what gets passed in the parameters, but to no avail. It's at this point that I realize I don't know what I'm doing. This is my first question here. I erred on the side of providing all necessary code, rather than being brief. Any help would be greatly appreciated.

    Read the article

  • Using ext4 in VMware machine

First of all, using a journaling filesystem like NTFS, ext4, XFS, or JFS (not to name all of them) is a very good idea, and nowadays it is unthinkable not to. Linux offers a good variety of options for the journaling filesystem on your system. For years I have been using SGI's XFS, and I am pretty confident in the stability, performance, and reliability of the system. In earlier years I had to struggle with incompatibilities between XFS and the boot loader; using an ext2-formatted /boot solved this issue. But, wow, that is ages ago! Lately, I had to set up a fresh Lucid Lynx (Ubuntu 10.04 LTS) system for a change of our internal groupware / messaging system. So I fired up a new virtual machine with an almost standard configuration in VMware Server and ran through our network-based PXE boot and installation procedure. At a certain step in this process, Ubuntu asks you about the partitioning of your hard drive(s). Honestly, I have to say that only out of curiosity did I stick with the "default" suggestion and put my faith and trust in the Ubuntu installation routine... resulting in an ext4-based root mount point ( / ). The rest of the installation went on without further concerns or worries. Note: I really can't remember why I chose to go away from my favourite... Well, it turned out to be the wrong decision after all. OK, let's continue the story about ext4 in a VMware-based virtual machine. After some hours installing additional packages and configuring the new system to use LDAP for general authentication and login, I had an "out-of-the-box" usable enterprise messaging system based on Zarafa 6.40 Community Edition, including a proper SSL-based Webaccess interface and the Z-Push extension for ActiveSync with my Nokia mobile. Straightforward and pretty nice for the time spent on the setup. Having priority on other tasks, I just let the system run and didn't pay any further attention to it at all. Until I ran into an upgrade of "Mail for Exchange" on Symbian OS. My mobile did not bother me at all with the upgrade and everything went smoothly, but trying to re-establish the ActiveSync connection to the Zarafa messaging system resulted in a frustrating situation. So, I shifted my focus back to the Linux system and was amazed to find that the root had been remounted read-only due to hard drive failures, or at least errors reported by ext4. Firing up Google only confirmed my concerns, and to me ext4 does not look like a stable and reliable candidate for VMware-based virtual machines. You might consider reading these external resources: ext4 fs corruption under VMWare Server 2.01, and Bug #389555 - ext4 filesystem corruption. Well, I learned my lesson, and ext{2|3|4}-based filesystems are not going to be used on any of my Linux systems or customer installations in the future. Addendum: I did not try this setup in other virtualization environments like VirtualBox, qemu, kvm, Xen, etc.

    Read the article

  • How do I reinstate my admin user privileges to global read/write

    - by Matt
    I am running Ubuntu 12.04 LTS. I only have the one user which I created when I installed Ubuntu. Everything has been fine - love it - until I updated a software package recently from the command line using sudo (not gksudo). I was having a little bother which did not make sense to me and in a fluff changed my user read/write privileges through the GUI (not even clear how I got there!). After restart I was stuck in a login loop - using the right login password but kept getting looped back to the login and could only login as Guest. I could still login with my user/password via ctrl + alt + f1 Eventually I was able to login again at start up. Not sure exactly what it was I changed that worked but it was one of/or a combination of installing latest security updates, changing login manager from LightDM to DGM and back again, removing the ICE/Xauthority and chown user. Current dilemma is my primary admin user privileges were read only. In the command line ls -ls /home/user returned this value: drwx------ 48 username username 20480 I have since changed this using sudo chmod 0755 /home/username (from my limited understanding 755 should return my user privileges to their original read/write glory). ls -ld /home/user currently shows my user privileges as: drwxr-xr-x 48 username username 20480 I still seem to have only read access permissions. I've been through lots of threads (and the help file) that talk about creating new users/groups permissions etc. but specific info on returning my existing global/admin/primary users privileges to what they were when I first created that user - baffling me. I feel this is something really simple I'm just not getting it. Please help! sudo mount /dev/sda1 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /proc type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusect1 (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=07pe tmpfs55) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw, ,nosuid,nodev,size=5242880 none on /run/shm type tmpfs (rw,nosuid,nodev) gvfs-fuse-daemon on /home/meng/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=meng) none on /tmp/guest-1R2Fi5 type tmpsf (rw,mode=700)

    Read the article

  • Mutating the expression tree of a predicate to target another type

    - by Jon
    Intro In the application I 'm currently working on, there are two kinds of each business object: the "ActiveRecord" type, and the "DataContract" type. So for example, we have: namespace ActiveRecord { class Widget { public int Id { get; set; } } } namespace DataContracts { class Widget { public int Id { get; set; } } } The database access layer takes care of "translating" between hierarchies: you can tell it to update a DataContracts.Widget, and it will magically create an ActiveRecord.Widget with the same property values and save that. The problem I have surfaced when attempting to refactor this database access layer. The Problem I want to add methods like the following to the database access layer: // Widget is DataContract.Widget interface DbAccessLayer { IEnumerable<Widget> GetMany(Expression<Func<Widget, bool>> predicate); } The above is a simple general-use "get" method with custom predicate. The only point of interest is that I 'm not passing in an anonymous function but rather an expression tree. This is done because inside DbAccessLayer we have to query ActiveRecord.Widget efficiently (LINQ to SQL) and not have the database return all ActiveRecord.Widget instances and then filter the enumerable collection. We need to pass in an expression tree, so we ask for one as the parameter for GetMany. The snag: the parameter we have needs to be magically transformed from an Expression<Func<DataContract.Widget, bool>> to an Expression<Func<ActiveRecord.Widget, bool>>. This is where I haven't managed to pull it off... Attempted Solution What we 'd like to do inside GetMany is: IEnumerable<DataContract.Widget> GetMany( Expression<Func<DataContract.Widget, bool>> predicate) { var lambda = Expression.Lambda<Func<ActiveRecord.Widget, bool>>( predicate.Body, predicate.Parameters); // use lambda to query ActiveRecord.Widget and return some value } This won't work because in a typical scenario, for example if: predicate == w => w.Id == 0; ...the expression tree contains a MemberAccessExpression instance which has a MemberInfo property (named Member) that point to members of DataContract.Widget. There are also ParameterExpression instances both in the expression tree and in its parameter expression collection (predicate.Parameters); After searching a bit, I found System.Linq.Expressions.ExpressionVisitor (its source can be found here in the context of a how-to, very helpful) which is a convenient way to modify an expression tree. Armed with this, I implemented a visitor. This simple visitor only takes care of changing the types in member access and parameter expressions. It may not be complete, but it's fine for the expression w => w.Id == 0. 
internal class Visitor : ExpressionVisitor { private readonly Func<Type, Type> dataContractToActiveRecordTypeConverter; public Visitor(Func<Type, Type> dataContractToActiveRecordTypeConverter) { this.dataContractToActiveRecordTypeConverter = dataContractToActiveRecordTypeConverter; } protected override Expression VisitMember(MemberExpression node) { var dataContractType = node.Member.ReflectedType; var activeRecordType = this.dataContractToActiveRecordTypeConverter(dataContractType); var converted = Expression.MakeMemberAccess( base.Visit(node.Expression), activeRecordType.GetProperty(node.Member.Name)); return converted; } protected override Expression VisitParameter(ParameterExpression node) { var dataContractType = node.Type; var activeRecordType = this.dataContractToActiveRecordTypeConverter(dataContractType); return Expression.Parameter(activeRecordType, node.Name); } } With this visitor, GetMany becomes: IEnumerable<DataContract.Widget> GetMany( Expression<Func<DataContract.Widget, bool>> predicate) { var visitor = new Visitor(...); var lambda = Expression.Lambda<Func<ActiveRecord.Widget, bool>>( visitor.Visit(predicate.Body), predicate.Parameters.Select(p => visitor.Visit(p)); var widgets = ActiveRecord.Widget.Repository().Where(lambda); // This is just for reference, see below Expression<Func<ActiveRecord.Widget, bool>> referenceLambda = w => w.Id == 0; // Here we 'd convert the widgets to instances of DataContract.Widget and // return them -- this has nothing to do with the question though. } Results The good news is that lambda is constructed just fine. The bad news is that it isn't working; it's blowing up on me when I try to use it (the exception messages are really not helpful at all). I have examined the lambda my code produces and a hardcoded lambda with the same expression; they look exactly the same. I spent hours in the debugger trying to find some difference, but I can't. When predicate is w => w.Id == 0, lambda looks exactly like referenceLambda. But the latter works with e.g. IQueryable<T>.Where, while the former does not (I have tried this in the immediate window of the debugger). I should also mention that when predicate is w => true, it all works just fine. Therefore I am assuming that I 'm not doing enough work in Visitor, but I can't find any more leads to follow on. Can someone point me in the right direction? Thanks in advance for your help!
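For what it's worth, the most likely culprit in the visitor shown above is VisitParameter: it builds a brand-new ParameterExpression on every call, so the parameter referenced inside the rewritten body is a different object from the one handed to Expression.Lambda, and LINQ to SQL then cannot bind it (w => true works because the body contains no parameter reference at all). A common fix is to create each replacement parameter once, cache it, and hand the very same instance to both the body and the lambda signature. Here is a rough sketch of that approach (my own code, not from the question; TranslatingVisitor, typeConverter, and the Translate helper are made-up names, and the type-mapping delegate plays the role of dataContractToActiveRecordTypeConverter):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

internal class TranslatingVisitor : ExpressionVisitor
{
    private readonly Func<Type, Type> typeConverter;
    private readonly Dictionary<ParameterExpression, ParameterExpression> parameterMap =
        new Dictionary<ParameterExpression, ParameterExpression>();

    public TranslatingVisitor(Func<Type, Type> typeConverter)
    {
        this.typeConverter = typeConverter;
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        // Create the replacement parameter once and reuse the same instance,
        // so the rewritten body and the lambda signature share one object.
        ParameterExpression replacement;
        if (!parameterMap.TryGetValue(node, out replacement))
        {
            replacement = Expression.Parameter(typeConverter(node.Type), node.Name);
            parameterMap.Add(node, replacement);
        }
        return replacement;
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        // Re-point the member access at the corresponding property of the mapped type.
        var targetType = typeConverter(node.Member.ReflectedType);
        return Expression.MakeMemberAccess(
            Visit(node.Expression),
            targetType.GetProperty(node.Member.Name));
    }

    public Expression<Func<TTarget, bool>> Translate<TSource, TTarget>(
        Expression<Func<TSource, bool>> predicate)
    {
        // Visit the body first so the parameter map is populated, then map the
        // original parameters through the same cache.
        var body = Visit(predicate.Body);
        var parameters = predicate.Parameters
            .Select(p => (ParameterExpression)Visit(p))
            .ToArray();
        return Expression.Lambda<Func<TTarget, bool>>(body, parameters);
    }
}

With something like this, GetMany could build its lambda in one call, for example var lambda = new TranslatingVisitor(map).Translate<DataContract.Widget, ActiveRecord.Widget>(predicate); and pass it straight to the ActiveRecord repository's Where.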

    Read the article

  • How to fix 'grub error file not found' when installing 12.04?

    - by Tomasz Grabowski
I'm trying to install Ubuntu. I don't know if it is important, but I'm trying to install it on an external HDD. In the end I have a bootable external HDD which only displays: error: file not found grub recovery> From the beginning: I've downloaded ubuntu-12.04-desktop-i386.iso. I've used LiLi USB Creator (LinuxLive) to create a bootable pendrive from that image. I've booted from it, and it works. I've clicked "Try Ubuntu", and it works too. I've used GParted to look over the drives (disks). My primary embedded disk is seen as /dev/sda, my attached external disk as /dev/sdb, and my pendrive as /dev/sdc. I've created partitions on /dev/sdb: the first partition for the system (over 200 GiB); the second was there already (it's xfs, and I don't want to touch it :P); the third is an extended partition, with one logical partition (10 GiB) for swap. I've started the installation and chose "something else" in... I believe the second screen. Then I selected /dev/sdb as the boot disk; for the first partition of /dev/sdb I set the file system to ext3, checked the "formatting" checkbox, and set the mount point to "/"; the first logical partition was set as the swap partition. After the installation finished, I restarted my computer. When I boot from my primary disk it works OK; my previous operating system - Vista - works OK. When I set my BIOS to boot from my external disk, I only get that message: error: file not found grub recovery> I've tried to reinstall it, but it didn't help... In desperation, I've tried to read a bit about that "grub recovery" command line and experimented a bit... I'm not sure if this has had any point, or if it gives you some information (note that I don't know what I'm doing :P ). When I typed the command: insmod (hd1,1)/boot/grub/linux.mod I got the message: unknown filesystem. The same with: insmod (hd1,msdos1)/boot/grub/linux.mod and the same with: insmod ext3. But I get no message after the command: insmod ext2 ... Note that I really don't know what this command does exactly, but then I thought that maybe if I reinstalled Ubuntu with an ext2 filesystem, it would work. I've done that, but the symptoms are the same. I've gone back to the Live version of Ubuntu; the filesystem and basic directories seem to be present on /dev/sdb1 ... I'm completely unfamiliar with GRUB. I also don't know which version of GRUB it is; I hope there is only one version on ubuntu-12.04-desktop-i386.iso. Any help? Thanks

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line:

        int rawDepth = depth[offset];

    The depth array is created in this line:

        int[] depth = kinect.getRawDepth();

    I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the sketch runs fine about 70% of the time and fails with this error unpredictably the rest of the time. Could the hardware itself be affecting it? Here's the whole example if it helps:

        // Daniel Shiffman
        // Kinect Point Cloud example
        // http://www.shiffman.net
        // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing

        import org.openkinect.*;
        import org.openkinect.processing.*;

        // Kinect Library object
        Kinect kinect;

        float a = 0;

        // Size of kinect image
        int w = 640;
        int h = 480;

        // We'll use a lookup table so that we don't have to repeat the math over and over
        float[] depthLookUp = new float[2048];

        void setup() {
          size(800, 600, P3D);
          kinect = new Kinect(this);
          kinect.start();
          kinect.enableDepth(true);
          // We don't need the grayscale image in this example
          // so this makes it more efficient
          kinect.processDepthImage(false);

          // Lookup table for all possible depth values (0 - 2047)
          for (int i = 0; i < depthLookUp.length; i++) {
            depthLookUp[i] = rawDepthToMeters(i);
          }
        }

        void draw() {
          background(0);
          fill(255);
          textMode(SCREEN);
          text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate, 10, 16);

          // Get the raw depth as array of integers
          int[] depth = kinect.getRawDepth();

          // We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
          int skip = 4;

          // Translate and rotate
          translate(width/2, height/2, -50);
          rotateY(a);

          for (int x = 0; x < w; x += skip) {
            for (int y = 0; y < h; y += skip) {
              int offset = x + y*w;

              // Convert kinect data to world xyz coordinate
              int rawDepth = depth[offset];
              PVector v = depthToWorld(x, y, rawDepth);

              stroke(255);
              pushMatrix();
              // Scale up by 200
              float factor = 200;
              translate(v.x*factor, v.y*factor, factor - v.z*factor);
              // Draw a point
              point(0, 0);
              popMatrix();
            }
          }

          // Rotate
          a += 0.015f;
        }

        // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
        float rawDepthToMeters(int depthValue) {
          if (depthValue < 2047) {
            return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
          }
          return 0.0f;
        }

        PVector depthToWorld(int x, int y, int depthValue) {
          final double fx_d = 1.0 / 5.9421434211923247e+02;
          final double fy_d = 1.0 / 5.9104053696870778e+02;
          final double cx_d = 3.3930780975300314e+02;
          final double cy_d = 2.4273913761751615e+02;

          PVector result = new PVector();
          double depth = depthLookUp[depthValue]; // rawDepthToMeters(depthValue);
          result.x = (float)((x - cx_d) * depth * fx_d);
          result.y = (float)((y - cy_d) * depth * fy_d);
          result.z = (float)(depth);
          return result;
        }

        void stop() {
          kinect.quit();
          super.stop();
        }

    And here are the errors:

        processing.app.debug.RunnerException: NullPointerException
          at processing.app.Sketch.placeException(Sketch.java:1543)
          at processing.app.debug.Runner.findException(Runner.java:583)
          at processing.app.debug.Runner.reportException(Runner.java:558)
          at processing.app.debug.Runner.exception(Runner.java:498)
          at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367)
          at processing.app.debug.EventThread.handleEvent(EventThread.java:255)
          at processing.app.debug.EventThread.run(EventThread.java:89)
        Exception in thread "Animation Thread" java.lang.NullPointerException
          at org.openkinect.processing.Kinect.enableDepth(Kinect.java:70)
          at PointCloud.setup(PointCloud.java:48)
          at processing.core.PApplet.handleDraw(PApplet.java:1583)
          at processing.core.PApplet.run(PApplet.java:1503)
          at java.lang.Thread.run(Thread.java:637)
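
    A likely explanation (an assumption on my part, not something the library documents) is that kinect.getRawDepth() returns null, or an incomplete array, before the device has delivered its first depth frame, so indexing it throws the NullPointerException. A minimal defensive sketch for the top of draw():

        // Minimal guard in draw() -- a sketch, assuming getRawDepth() can
        // return null or a short array before the first depth frame arrives
        int[] depth = kinect.getRawDepth();
        if (depth == null || depth.length < w * h) {
          return; // skip drawing this frame; try again on the next one
        }

    The second trace (the NullPointerException inside Kinect.enableDepth() during setup()) suggests the device itself is sometimes not ready when the sketch starts, which would also explain why the failure appears random.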

    Read the article

  • CodePlex Daily Summary for Sunday, July 08, 2012

    CodePlex Daily Summary for Sunday, July 08, 2012Popular ReleasesBlackJumboDog: Ver5.6.7: 2012.07.08 Ver5.6.7 (1) ????????????????「????? Request.Receve()」?????????? (2) Web???????????FlMML customized: FlMML customized ??: FlMML customized ????。 ??、PCM??????????、??????。ecBlog: ecBlog 0.2: ecBlog alpha realaseTaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...DotNetNuke Persian Packages: ??? ?? ???? ????? ???? 6.2.0: *????? ???? ??? ?? ???? 6.2.0 ? ??????? ???? ????? ???? ??? ????? *????? ????? ????? ??? ??? ???? ??? ??????? ??????? - ???? *?????? ???? ??? ?????? ?? ???? ???? ????? ? ?? ??? ?? ???? ???? ?? *????? ????? ????? ????? ????? / ??????? ???? ?? ???? ??? ??? - ???? *???? ???? ???? ????? ? ??????? ??? ??? ??? ?? ???? *????? ????? ???????? ??? ? ??????? ?? ?? ?????? ????? ????????? ????? ?????? - ???? *????? ????? ?????? ????? ?? ???? ?? ?? ?? ???????? ????? ????? ????????? ????? ?????? *???? ?...testtom07052012git02: r1: r1Cypher Bot: Cypher Bot 4.1: Cypher Bot is the most advanced encryption tool on the planet.... and now it actually works. That's right we fixed the bugs! For a full program summary go to the Home Page or visit www.iRareMedia.com So what's new? We've pretty much fixed all the bugs, but here's a run down if you wanna know exactly what's different: Fixed Installation / Setup Error, where an error message would display: "No Internet Connection, Try Again Later" Fixed File Encryption / Decryption error where the file exten...Coding4Fun Kinect Service: Coding4Fun Kinect Service v1.5: Requires Kinect for Windows SDK v1.5 Minor bug fixes + Kinect for Windows SDK v1.5 Aligning version with the Kinect for Windows SDK requiredtedplay: tedplay 1.1: tedplay 1.1 source and Win32 binary is out now. Changes are: SID card support Commodore 64 PSID music format support optimized FIR filter global hotkeys for skipping tracks (Windows only) module properties window (Windows only) mutable noise channel via GUI button (Windows only) disable SID card from the menu (Windows only) bugfixes PSID tunes are played on the C64 clock frequency but in a Commodore plus/4 virtual machine. The purpose is not to have yet another SID player, but t...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. 
The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1) Also note that all builds work against ALL VERSIONS...Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...IronPython: 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. 
Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsNew ProjectsAdventures of Adventure Land: Adventures of Adventure Land is a new text based adventure. You will be able to level up, fight challenging enemies, use magical spells, and simply adventure.AdventureWorks Silverlight samples: AdventureWorks Silverlight samplesAFS.Collab.Duplex: my duplexarmmychan: ????????C# - WPF - .Net - MSSQL - Open Source Restaurant POS System: A C# .Net / WPF / MSSQL Restaurant Point of Sale (POS) Software that is PCI-DSS 2.0 Compliant running stable in many restaurants / integrated with MercuryPay.dcview: dcinside.com? ??? ? ?? ?? ??? ???? ???.DotNetNuke Contact Form: simple yet effective DotNetNuke contact formISBNdb.com Helper Library: A .net helper library that encapsulates all the functionality of the API at www.isbndb.com in .NET CLR objects.JPO Class Register: Simple class register.NACHA C# Class with WPF Test App: Do you need to generate a NACHA PPD file? This is great starting point for you! Actually, it's a great, almost finished, point for you! More info below.NotificationsWidget In ASP MVC: SummaryPayPal Manager: I decided to make a basic (for now) desktop client for PayPal to get better at WebRequests in VB.NET. I will be adding much more such as sending payments, etc.Pomodoro Timer Count Down App: Pomodoro count down timer Application Features ------------------------------- Pomodoro Mode Count down timer mode Start stop pause Notification Powershell HTML Highlight: Powershell html syntax highlightingProject F10_P1: F10 p1Razor-sharp your skills: This project will have details about the C# 2.0 C# 3.0 C# 4.0 C# 5.0 Salaria: Bienvenue sur notre projet "Salaria".SharePoint 2010 - Unlock SP Files: Unlock any file in SharePoint or get lock information of a SharePoint file.( Compatible with office 365) ""The file "" is locked for exclusive use by""SharePoint 2010 Metro Masterpage: This project will give you a full metro masterpage for sharepoint 2010SharePoint Cache Refresh Framework: This project's aim is build a small and easy to use framework for SharePoint developers to be able to control cached objects across servers in a SharePoint FarmTFSProjectTest: A test project.uhimania test project: testUpgrade SPSolution: This is a Sharepoint 2010 Management Shell cmdlet, which upgrades a sharepoint solution and installs/activates any new features added in the package.Video Frame Explorer: Video Frame Explorer allows you to make thumbnails (caps, previews) of video files. 
It supports of practically any videos-formats (even MP4, MKV, MOV if you havWML: WML

    Read the article

  • changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently attempting to create a command that runs at startup to kill the power on two of my very noisy hard drives. I have edited the /etc/rc.local file to include these commands:

        sudo hdparm -y /dev/sdc
        sudo hdparm -y /dev/sdd
        exit 0

    While I think this should work, the allocated drive letters keep switching around every time I reboot. I have sda, sdb, sdc, sdd, and sde, but they keep getting jumbled, so the drive I wish to shut down is not always sdd, which makes shutting down the right drive on start-up quite cumbersome. I had a perfectly functioning fstab file which disappeared, but I restored it from the backup into /etc:

        # <file system> <mount point> <type> <options> <dump> <pass>
        # Entry for /dev/sda1 :
        UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1
        # Entry for /dev/sdd1 :
        UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        # Entry for /dev/sdb1 :
        UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        # Entry for /dev/sdf1 :
        UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        # Entry for /dev/sdc1 :
        UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0

    It seems that every time I boot the order of the drives changes, and I do not know how to resolve this. A quick workaround was to address the drives by UUID instead of by device letter, by editing /etc/rc.local to include:

        hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
        hdparm -y /dev/disk/by-uuid/7ABB49654B799D40

    So I thought I was in the clear, as I heard both hard drives die down during the boot sequence, BUT as soon as I log in both drives start up again! Now I have to figure out what is making them start up again after log in, or perhaps find another way to get them to turn off. Is there some kind of command I can get to execute after log in? I tried editing the startup applications to include entries running:

        sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40
        sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945

    but this did not seem to turn off the disks after log in.
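
    One approach worth trying is a small script that resolves the drives by UUID and spins them down a short while after login. This is only a sketch under assumptions: the script name and the delay are placeholders, and hdparm needs root, so either allow it through a sudoers entry or call the script from /etc/rc.local instead of a Startup Applications entry.

        #!/bin/sh
        # spin-down.sh -- hypothetical helper, not from the original post.
        # Add it as a Startup Applications entry (with sudo rights for hdparm)
        # or call it from /etc/rc.local. UUIDs are the ones quoted above.
        sleep 60    # wait for the desktop session to finish probing the disks
        for dev in /dev/disk/by-uuid/443AFBAD7FE50945 \
                   /dev/disk/by-uuid/7ABB49654B799D40
        do
            hdparm -y "$dev"    # -y puts the drive into standby immediately
        done

    If the drives still wake up afterwards, something in the desktop session (an indexer or file manager polling the disks) is probably touching them, in which case a standby timeout set with hdparm -S may hold up better than a one-shot -y.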

    Read the article

  • Media server...serving files...............for a limited time only

    - by Craig
    I’m new to Ubuntu and am seeking help with a media server I have built. I have a couple of HTPCs in my house running XBMC. I wanted to build one for the family room working double duty as an HTPC and media server, to share movies, TV shows, music, etc. on my Windows network. So, using some spare old parts I had lying around, I decided to go CRAZY and build my first Linux box. I used Ubuntu because it seemed to be the most user-friendly variant, especially for people who are new to Linux. I had to do a few things to get the media files shared properly on my network:

    1. Made sure my two media drives auto-mount every time I boot the computer by editing the fstab file – “sudo nano /etc/fstab”
    2. Installed Samba – “sudo apt-get install samba”
    3. Set a password for Samba – “sudo smbpasswd –a USERNAME”
    4. Edited the Samba configuration file to make sure the computer was in my network's workgroup – “sudo nano /etc/samba/smb.conf”
    5. In the file manager (not sure if that’s the right name for it), right-clicked my media folders and set the sharing and permissions. The sharing was done without guest access, and permissions were set to: Owner, Group, and Others – Access: Create and Delete Files.
    6. Adjusted the Power Management settings to never put the system into sleep mode.

    I checked to see if I had access to the files from a Windows 7 machine and I did (Woo Hoo!). But when I tried to play any of my video files from the Windows machine (using VLC media player), they would only play for about 2-5 minutes and then they would stop with an error message saying that the file could not be accessed (Booo...). I tried playing some files through XBMC running in Windows and they worked for a bit longer (about 10-15 mins), but they also stopped playing. I installed the Linux version of XBMC on the server and played the files locally with no problems. It doesn’t seem to be an issue with the files themselves; it seems to be a sharing problem on my network. So my questions for the Ubuntu gurus out there are: Did I miss adding/editing something in the Samba configuration file? Did I use the right method to share my media files (file manager vs. using the terminal)? Is it possible for the computer to still go to sleep without the screen going black (does that even make sense?)? Are there any special settings in Ubuntu that I should be using since this computer is acting as a media server (is there a media server mode?)? Any help on this matter would be greatly appreciated. Thanks
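
    For what it's worth, an explicit share definition in /etc/samba/smb.conf often behaves more predictably than the file-manager sharing applet. The block below is only a sketch: the share name, path, and user name are placeholders, not values from the question.

        [media]
            comment = HTPC media share          ; free-form description
            path = /media/movies                ; placeholder -- point this at your media folder
            browseable = yes
            read only = yes                     ; clients only need to stream, not write
            guest ok = no
            valid users = USERNAME              ; the account you gave to smbpasswd -a

    After editing, testparm checks the syntax and sudo service smbd restart (or sudo /etc/init.d/samba restart on older releases) reloads the shares. If playback still drops after a fixed interval, it is worth checking whether power management is spinning down the drives or the network card, since "works for a few minutes then fails" looks more like a timeout than a missing Samba option.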

    Read the article

  • Error installing RVM

    - by Dbugger
    I am following this guide, but this is the output I receive. What is the problem?

        dbugger@mercury:~$ \curl -sSL https://get.rvm.io | bash -s stable --rails
        Downloading https://github.com/wayneeseguin/rvm/archive/stable.tar.gz
        Upgrading the RVM installation in /home/dbugger/.rvm/
            RVM PATH line found in /home/dbugger/.profile /home/dbugger/.bashrc /home/dbugger/.zshrc.
            RVM sourcing line found in /home/dbugger/.bash_profile /home/dbugger/.zlogin.
        Upgrade of RVM in /home/dbugger/.rvm/ is complete.

        # Enrique,
        #
        #   Thank you for using RVM!
        #   We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
        #
        # ~Wayne, Michal & team.

        In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

        Upgrade Notes:
          * No new notes to display.

        rvm 1.25.27 (stable) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/]
        Searching for binary rubies, this might take some time.
        No binary rubies available for: ubuntu/14.04/x86_64/ruby-2.1.2.
        Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
        Checking requirements for ubuntu.
        Installing requirements for ubuntu.
        Updating system..........
        Installing required packages: gawk, libreadline6-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3....
        Error running 'requirements_debian_libs_install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3',
        showing last 15 lines of /home/dbugger/.rvm/log/1401804140_ruby-2.1.2/package_install_gawk_libreadline6-dev_libssl-dev_libyaml-dev_libsqlite3-dev_sqlite3.log
        ++ /scripts/functions/utility : __rvm_try_sudo()  405 > sudo -p '%p password required for '\''apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3'\'': ' apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3
        Reading package lists...
        Building dependency tree...
        Reading state information...
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         libssl-dev : Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2) but 1.0.1f-1ubuntu2.1 is to be installed
        E: Unable to correct problems, you have held broken packages.
        ++ /scripts/functions/utility : __rvm_try_sudo()  405 > return 100
        ++ /scripts/functions/requirements/ubuntu : requirements_debian_libs_install()  36 > return 100
        Requirements installation failed with status: 100.
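
    The telling line is the unmet dependency: the system already has (or is about to install) libssl1.0.0 1.0.1f-1ubuntu2.1 from a security update, while apt's cached index still offers libssl-dev built against the older 1.0.1f-1ubuntu2. A plausible sequence to try, hedged because the exact versions depend on your mirrors:

        sudo apt-get update                           # refresh the package index first
        sudo apt-get install libssl1.0.0 libssl-dev   # pull the matching pair from the updated index
        sudo apt-get -f install                       # only if apt still reports held/broken packages
        # then re-run the step that failed:
        rvm requirements
        rvm install 2.1.2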

    Read the article
