Search Results

Search found 2624 results on 105 pages for 'annoying beep'.


  • How to Quickly Add Multiple IP Addresses to Windows Servers

    - by Sysadmin Geek
    If you have ever added multiple IP addresses to a single Windows server, you know that going through the graphical interface is an incredible pain: each IP must be added manually, each in a new dialog box. Needless to say, this is incredibly monotonous and time consuming if you are adding more than a few addresses. Thankfully, there is a much easier way that lets you add an entire subnet (or more) in seconds.

    Adding an IP Address from the Command Line

    Windows includes the "netsh" command, which allows you to configure just about any aspect of your network connections. If you view the accepted parameters using "netsh /?", you will be presented with a list of commands, each of which has its own list of sub-commands (and so on). For the purpose of adding IP addresses, we are interested in this string of parameters:

        netsh interface ipv4 add address

    Note: For Windows Server 2003/XP and earlier, replace "ipv4" with "ip" in the netsh command.

    The help information shows the full list of accepted parameters, but for the most part what you will want is something like this:

        netsh interface ipv4 add address "Local Area Connection" 192.168.1.2 255.255.255.0

    The above command adds the IP address 192.168.1.2 (with subnet mask 255.255.255.0) to the connection titled "Local Area Connection".

    Adding Multiple IP Addresses at Once

    When we combine a netsh command with the FOR /L loop, we can quickly add multiple IP addresses. The syntax of the FOR /L loop looks like this:

        FOR /L %variable IN (start,step,end) DO command

    So we could add every IP address from an entire subnet using this command:

        FOR /L %A IN (0,1,255) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

    This command takes about 20 seconds to run, where adding the same number of IP addresses manually would take significantly longer.

    A Quick Demonstration

    Here is the initial configuration on our network adapter:

        ipconfig /all

    Now run netsh from within a FOR /L loop to add IPs 192.168.1.10-20 to this adapter:

        FOR /L %A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

    After the command finishes, viewing the IP configuration of the adapter shows the new addresses.
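    If you would rather keep this around as a reusable script than retype the loop, the same command works in a batch file, with one catch: inside a .bat file the FOR variable needs a doubled percent sign (%%A instead of %A). The connection name and address range below are just the values from the example above; adjust them for your server. netsh also provides a matching delete verb, shown here as a sketch for undoing the change:

        @echo off
        REM add-ips.bat - adds 192.168.1.10 through 192.168.1.20 to one adapter.
        REM Note the doubled %%A, which is required inside batch files.
        FOR /L %%A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%%A 255.255.255.0

        REM To remove the same range again:
        REM FOR /L %%A IN (10,1,20) DO netsh interface ipv4 delete address "Local Area Connection" 192.168.1.%%A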

    Read the article

  • Visual Studio 2010 plus Help Index : have your cake and eat it too

    - by Adrian Hara
    Although the team's intentions might have been good, the new help system in Visual Studio 2010 is a huge step backwards (more like a cannonball-shot kind of leap, really) from the one we all know (and love?) in Visual Studio 2008 and 2005 (and heck, even VS6).

    Its biggest problem, from my point of view, is the total and complete lack of the Help Index feature: you know, the thing where you just type in what you're looking for and it filters down the list of results automatically. For me this was the number one productivity feature in the "old" help system, allowing me to find stuff very quickly. Number two is that the new system is entirely web based and runs, by default, in the browser. So imagine: when you press F1, a new tab opens in your default browser pointing to the help entry. While this is wrong in many ways, it's also extremely annoying; cleaning up tabs in the browser becomes a chore, which is a serious productivity hit.

    These and many other problems were discussed extensively (and rather vocally) on Connect, but MS seemed to ignore them and opted to release the new help system anyway, with the promise that more features would be added in a later release. Again, it kind of amazes me that they chose to ship a product with FEWER features than the previous one and, what's worse, missing KEY features, just so it's "standards based" and "extensible". To be honest, I couldn't care less about the help system's implementation; I just want it to be usable, and I would've thought that by now the software community, and especially MS, would've learned this lesson.

    In the end, what kind of saddens me is that MS regards these basic features as ones for the "power help user". Come on! a) It's not like my aunt is using Visual Studio 2010 as the regular user, b) all software developers are, by definition, power users, and c) it's a help system, not rocket science! As you can tell, I'm pretty pissed. Even more so because I really feel that the VS2010 & co. release is a great one, with a lot of effort going into the various platforms and frameworks, most (if not all) of them being really, REALLY good products. And then they go and screw up the help! How lame is that?

    Anyway, it's not all gloom and doom. Luckily there is a desktop app by Robert Chandler (to whom I hereby declare eternal gratitude) that presents a UI over the new help system very close to what was there in VS2008. It still has some minor issues, but I'll take it over the browser version of the help any day. It's free, pretty quick (on my machine ;)) and nicely usable. So, if you hate the new help system (passionately) like I do, download H3Viewer now.

    Read the article

  • Find a Faster DNS Server with Namebench

    - by Mysticgeek
    One way to speed up your Internet browsing experience is to use a faster DNS server. Today we take a look at Namebench, which compares your current DNS server against others out there and helps you find a faster one.

    Namebench

    Download the file and run the executable (link below). Namebench starts up and includes the current DNS server configured on your system. In this example we're behind a router and using the DNS server from the ISP. Include the global DNS providers and the best available regional DNS servers, then start the benchmark.

    The test starts to run and you'll see the queries it's working through. The benchmark takes about 5-10 minutes to complete, after which you'll get a report of the results. Based on its findings, it will show you which DNS server is fastest for your system. It also displays several types of graphs so you can get a better feel for the results. You can export the results to a .csv file as well, so you can present them in Excel.

    Conclusion

    This is a free project in continuing development, so results might not be perfect, and more features may be added in the future. If you're looking for a way to find a faster DNS server for your system, Namebench is a cool free utility to help you out. If you're looking for a public DNS server that is customizable and includes filters, you might want to check out our article on helping to protect your kids from questionable content using OpenDNS. You can also check out how to speed up your web browsing with Google Public DNS.

    Links

    Download Namebench for Windows, Mac, and Linux from Google Code
    Learn more about the project on the Namebench wiki page

    Read the article

  • Impulsioned jumping

    - by Mutoh
    There's one thing that has been puzzling me, and that is how to implement a 'faux-impulse' jump in a platformer. If you don't know what I'm talking about, think of the jumps of Mario, Kirby, and Quote from Cave Story. What do they have in common? The height of your jump is determined by how long you keep the jump button pressed.

    These characters' 'impulses' are built not before their jump, as in actual physics, but rather while in mid-air: you can lift your finger midway to the max height and the rise will stop (with some deceleration between that and the full stop), which is why you can tap for a hop or hold for a long jump. Knowing that, I am mesmerized by how they keep their trajectories as arcs.

    My current implementation works as follows: while the jump button is pressed, gravity is turned off and the avatar's Y coordinate is decremented by the constant value of the gravity. For example, if things fall at Z units per tick, it rises Z units per tick. Once the button is released or the limit is reached, the avatar decelerates by an amount that makes it cover X units before its speed reaches 0; once it does, it accelerates until its speed matches gravity, again covering X units.

    This implementation, however, makes jumps too diagonal, and unless the avatar's speed is faster than the gravity, it also makes them more vertical than horizontal. Making the avatar that fast would be way too much for my current project (it moves at about 4 pixels per tick and gravity is 10 pixels per tick, at a framerate of 40 FPS). Those familiar with platformers will notice that an arc'd jump almost always lets the character jump further even when they aren't as fast as the game's gravity, and when it doesn't, it proves very counter-intuitive if not played just right. I know this because I can attest that my implementation is very annoying.

    Has anyone ever attempted similar mechanics, and maybe even succeeded? I'd like to know what's behind this kind of platformer jumping. If you haven't had any experience with this and want to give it a go, then please don't try to correct or enhance my implementation unless I was on the right track - try to build your solution from scratch. I don't care if you use gravity, physics or whatnot; as long as it shows how these pseudo-impulses work, it does the job.

    Also, I'd like the presentation to avoid language-specific code, such as a C++ or Delphi example. As much as I'm using the XNA framework for my project and wouldn't mind C# stuff, I don't have much patience for reading others' code, and I'm certain game developers of other languages would be interested in what we achieve here, so don't mind sticking to pseudo-code.

    Thank you beforehand.
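    For anyone sketching an answer: the standard trick is to keep gravity on at all times, give the avatar a single upward impulse when the jump starts, and cut the remaining upward velocity when the button is released early. That keeps the trajectory a parabola no matter when the player lets go. Below is a minimal C#-flavoured sketch (read it as pseudo-code; every name and constant is made up for illustration):

        // Per-tick update for variable-height jumping.
        class Avatar
        {
            const float Gravity = 0.5f;       // downward acceleration per tick
            const float JumpVelocity = -8f;   // takeoff speed (negative = up)
            const float CutFactor = 0.5f;     // how hard an early release cuts the rise

            float y;             // vertical position
            float velocityY;     // vertical speed
            bool onGround;       // maintained by collision code (omitted)

            public void Update(bool jumpPressed, bool jumpHeld)
            {
                if (onGround && jumpPressed)
                    velocityY = JumpVelocity;   // the impulse happens once, at takeoff

                // Releasing early cuts the upward speed, so a tap gives a hop
                // and a hold gives a full jump; gravity is never switched off,
                // which is what keeps the arc parabolic.
                if (!jumpHeld && velocityY < 0)
                    velocityY *= CutFactor;

                velocityY += Gravity;           // gravity always applies
                y += velocityY;
            }
        }

    A common variant applies reduced gravity while the button is held and the avatar is still rising, instead of cutting velocity on release; both give the same tap-for-hop, hold-for-long-jump feel.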

    Read the article

  • Subterranean IL: Fault exception handlers

    - by Simon Cooper
    Fault handlers are one of the two handler types that aren't available in C#. A fault handler behaves exactly like a finally, except it only runs if control flow exits the block due to an exception being thrown. As an example, take the following method:

        .method public static void FaultExample(bool throwException) {
            .try {
                ldstr "Entering try block"
                call void [mscorlib]System.Console::WriteLine(string)

                ldarg.0
                brfalse.s NormalReturn

            ThrowException:
                ldstr "Throwing exception"
                call void [mscorlib]System.Console::WriteLine(string)
                newobj instance void [mscorlib]System.Exception::.ctor()
                throw

            NormalReturn:
                ldstr "Leaving try block"
                call void [mscorlib]System.Console::WriteLine(string)
                leave.s Return
            }
            fault {
                ldstr "Fault handler"
                call void [mscorlib]System.Console::WriteLine(string)
                endfault
            }
        Return:
            ldstr "Returning from method"
            call void [mscorlib]System.Console::WriteLine(string)
            ret
        }

    If we pass true to this method, the following gets printed:

        Entering try block
        Throwing exception
        Fault handler

    and the exception gets passed up the call stack. So the exception gets thrown, the fault handler runs, and the exception propagates up the stack afterwards in the normal way. If we pass false, we get the following:

        Entering try block
        Leaving try block
        Returning from method

    Because we are leaving the .try using a leave.s instruction, and not throwing an exception, the fault handler does not get called.

    Fault handlers and C#

    So why were these not included in C#? It seems a pretty simple feature: one extra keyword that compiles in exactly the same way, and with the same semantics, as a finally handler. If you think about it, though, the same behaviour can be replicated using a normal catch block:

        try {
            throw new Exception();
        } catch {
            // fault code goes here
            throw;
        }

    The catch block only runs if an exception is thrown, and the exception gets rethrown and propagates up the call stack afterwards - exactly like a fault block. The only complication occurs when you want to add a fault handler to a try block with existing catch handlers. Then you either have to wrap the try in another try:

        try {
            try {
                // ...
            } catch (DirectoryNotFoundException) {
                // ...
                // leave.s as normal...
            } catch (IOException) {
                // ...
                throw;
            }
        } catch {
            // fault logic
            throw;
        }

    or separate out the fault logic into another method and call that from the appropriate handlers:

        try {
            // ...
        } catch (DirectoryNotFoundException) {
            // ...
        } catch (IOException ioe) {
            // ...
            HandleFaultLogic();
            throw;
        } catch (Exception e) {
            HandleFaultLogic();
            throw;
        }

    To be fair, the number of times I would have found a fault handler useful is minimal. Still, it's quite annoying knowing such functionality exists but not being able to access it from C#. Fortunately, there are some easy workarounds one can use instead. Next time: filter handlers.
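    If you emulate fault blocks this way in more than a couple of places, the catch-and-rethrow pattern can be wrapped once in a helper so call sites stay tidy. This is not from the article, just a sketch of the same idea (TryWithFault is a made-up name, and note that a real fault block differs subtly from catch/rethrow, for example in the presence of exception filters):

        static class FaultHelper
        {
            // Runs body; faultHandler runs only if an exception escapes body,
            // after which the original exception continues up the stack.
            public static void TryWithFault(Action body, Action faultHandler)
            {
                try
                {
                    body();
                }
                catch
                {
                    faultHandler();
                    throw;
                }
            }
        }

        // Usage:
        // FaultHelper.TryWithFault(
        //     () => DoWork(),
        //     () => Console.WriteLine("Fault handler"));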

    Read the article

  • Installing Ubuntu 12.04 on an Intel i3 NUC DC3217IYE

    - by Kieron
    System: NUC i3, 2 HDMI ports, Ethernet, no wireless, UEFI boot, 2x2 GB RAM, 30 GB mSATA internal drive, Ubuntu 64-bit 12.04.3.

    Hello, I am having much trouble loading an OS on my NUC. I started out attempting another OS with various loaders (beast/hack) without much success (various panics on boot, endless reboot loops, or graphics failures), so after many tries I decided to attempt Ubuntu. Many years ago I loaded Ubuntu on an e-machine without an issue, so I figured it would go smoothly; nothing could be further from the truth.

    The live USB stick loads, but when I am installing the OS it always fails to install GRUB 2. Obviously it won't boot from the SSD without GRUB 2. I searched and found that Boot-Repair should fix it, so I created a live USB with Boot-Repair. It refuses to repair GRUB because the install never finished and the needed partitions are not fully created and flagged. It also demands an internet connection, which I am unable to provide.

    I then ran GParted in an attempt to create the needed partitions manually, then re-ran Boot-Repair with the "check internet" option turned off and the GRUB reinstall disabled, hoping it would create the missing directories and/or fix the flags. It appeared to run successfully, but upon returning to the Ubuntu live USB it still fails at the GRUB 2 install. Also, GParted doesn't offer the same choices the Ubuntu installer has when creating partitions, so it would not recognize that I already had a root or EFI partition, or sometimes couldn't tell what the format of a partition was - all very annoying.

    The reason I can't connect to the internet (and can't upload the error logs) is that the NUC only has Ethernet, and the location where I have to set it up is too far away from the modem. I cannot move the monitor closer to the modem, as it is a 50-inch LCD. I just want to do a basic install with one user account and remote desktop (VNC) turned on, so I can move the NUC next to the modem, connect via Ethernet, and then finish setting it up via remote desktop/VNC chicken from my Mac.

    While I await any assistance you may be able to provide, I am going to attempt to switch to the 32-bit version and legacy boot to see if that can load GRUB. Thanks again to anyone who can come up with a possible solution; I would love to hit "erase and install Ubuntu" if anyone can figure out what is stopping that simple answer from working. Also, discs (CD/DVD) are not an option, as neither my Mac mini nor my NUC has an optical drive, and I have no desire to buy one for a single task.
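    For reference - this is not from the original question, just a sketch of one way to pre-create a UEFI-friendly layout from the live session before running the installer. /dev/sda is an assumption for the mSATA drive; confirm the device name with lsblk or sudo parted -l first, as this wipes the disk:

        # Create a GPT with an EFI System Partition, a root partition, and swap.
        sudo parted /dev/sda mklabel gpt
        sudo parted /dev/sda mkpart ESP fat32 1MiB 513MiB
        sudo parted /dev/sda set 1 boot on   # on GPT this marks the EFI System Partition
        sudo parted /dev/sda mkpart root ext4 513MiB 26GiB
        sudo parted /dev/sda mkpart swap linux-swap 26GiB 100%
        sudo mkfs.vfat -F 32 /dev/sda1

    Then choose "Something else" in the installer and assign /dev/sda1 as the EFI System Partition, /dev/sda2 as / (ext4), and /dev/sda3 as swap, with the boot loader installed to /dev/sda.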

    Read the article

  • Stop YouTube Videos from Automatically Playing in Chrome

    - by The Geek
    If you've actually used the internet before, you've probably come across a page with an auto-playing YouTube clip, and chances are good it was a rather annoying one. Here's how to stop them from starting automatically in Chrome. We've already told you how to stop them from automatically playing if you're a Firefox user (best answer: use Flashblock!), but now it's time for Chrome users to get their turn.

    Use the Stop Autoplay for YouTube Extension

    The great thing about this extension is that it stops the video from playing but allows it to continue buffering, so when you do feel like playing the video, it'll already be downloaded - really useful for people with slower internet connections. There's no UI or anything fancy; just head to the extension page and click the Install button. If you want to get rid of it later, use the Tools -> Extensions menu (or type chrome://extensions/ into your address bar), and then click the Uninstall link for that add-on.

    Download Stop Autoplay for YouTube [Google Chrome Extensions]

    Using FlashBlock for Chrome

    If you really wanted to, you could just disable Flash across the board using FlashBlock for Chrome. Once you've installed the extension, you won't see any Flash elements anywhere, and you'll have to move your mouse over them and click to enable them each time. When I installed the extension the first time, I noticed that YouTube was already in the allow list. I'm not sure if that's the default setting or not, but you can use the icon in the address bar, or the Options from the Extensions panel, to get to the settings page; from there you can remove anything from the white list that you don't want. Another nice feature of FlashBlock is that it can also block Silverlight, or you could simply uninstall or remove unnecessary Chrome plug-ins.

    Download FlashBlock for Chrome

    Read the article

  • To Serve Man?

    - by Dave Convery
    Since the announcement of Windows 8 and its 'Metro' interface, the .NET community has wondered if the skills they've spent so long developing might be swept aside in favour of HTML5 and JavaScript. Mercifully, that only seems to be true of Silverlight (as Simon Cooper points out), but it did leave me thinking how easy it is to impose a technology upon people without directly serving their needs.

    Case in point: QR codes. Once, probably, benign in purpose, they seem to have become a marketer's tool for determining when someone has engaged with an advert in the real world, with the same certainty as is possible online. Nobody really wants to use QR codes - it's far too much hassle. But advertisers want that data - they want to know that someone actually read their billboard / poster / cereal box - and so this flawed technology is suddenly everywhere, providing little to no value to the people who are actually meant to use it.

    What about 3D cinema? Profits from the film industry have been steadily increasing throughout the period in which digital piracy and mass sharing have been possible, yet the industry and cinema chains have forced 3D films upon a broadly uninterested audience as a way of giving more purpose to going to a cinema rather than watching at home. Despite advances in digital projection, 3D cinema is scarcely more immersive to us than William Castle's hoary old tricks of skeletons on wires and buzzing chairs were to our grandparents.

    iTunes: originally just a piece of software that catalogued and ripped music for you, but now multi-purpose bloatware; a massive, system-hogging behemoth. If it were being built for the people who use it, it would have been split into three or more separate pieces of software long ago. But as bloatware it serves Apple primarily rather than us, with music, video, various stores and phone / iPad management all bolted into one. Why? Because that way you're more likely to bump into something you want to buy.

    You can't even buy a new laptop without finding that a significant chunk of your hard drive has been sold to 'select partners' - advertisers, suppliers of virus-busting software, and endless bloatware-flogging pop-ups that make using a new laptop without reformatting the hard drive feel like stepping back in time. The product you want is not the one you paid for.

    This is without even looking at services like Facebook and Klout, which provide a notional service with the intention of slurping up as much data about you as possible (in Klout's case, whether you create an account with them or not).

    What technologies do you find annoying or intrusive, and who benefits from keeping them around?

    Read the article

  • CRM vs VRM

    - by David Dorf
    In a previous post, I discussed the potential power of combining social, interest, and location graphs in order to personalize marketing and shopping experiences for consumers. Marketing companies have been trying to collect detailed information for that very purpose, a large majority of which comes from tracking people on the internet. But their approaches stem from the one-way nature of traditional advertising. With TV, radio, and magazines there is no opportunity to truly connect with customers, which has trained marketing companies to [covertly] collect data and segment customers into easily identifiable groups. To a large extent, we think of this as CRM.

    But what if we turned this viewpoint upside-down to accommodate the two-way nature of social media? The notion of marketing as conversation was the basis for the Cluetrain, an early attempt at drawing attention to the fact that customers are actually unique humans. A more practical implementation is Project VRM, which is a reverse CRM of sorts: instead of vendors managing their relationships with customers, customers manage their relationships with vendors.

    Your shopping experience is not really controlled by you; rather, it's controlled by the retailer and advertisers. And unfortunately, they typically don't give you a say in the matter. Yes, they might tailor the content for "female, age 25-35, interested in shoes", but that's not really the essence of you, is it? A better approach is to let consumers volunteer information about themselves. And why wouldn't they, if it means a better, more relevant shopping experience? I'd gladly list out my likes and dislikes in exchange for getting rid of all those annoying cookies on my hard drive.

    I really like this diagram from Beyond SocialCRM, as it captures the differences between CRM and VRM. The closest thing to VRM I can find is Buyosphere, a start-up that allows consumers to track their shopping history across many vendors, then share it appropriately. Also, Amazon does a pretty good job of allowing its customers to edit their profile, which includes everything you've ever purchased from Amazon. You can mark items as gifts, or explicitly exclude them from its recommendation engine. This is a win-win for both the consumer and the retailer.

    So here is my plea to retailers: instead of trying to infer my interests from snapshots of my day, please just ask me. We'll both have a better experience in the long run.

    Read the article

  • How To Completely Disable Subtitles in VLC

    - by The Geek
    If you watch a lot of videos using VLC, you might have noticed that it enables subtitles by default if they are there, which can be pretty annoying at times. Here's the quick tip to disable them entirely. Of course, you can always turn them back on for an individual video if you want.

    Disable Subtitles

    Head into the VLC preferences, and then click the All button at the bottom of the screen. On the left-hand side, choose Video -> Subtitles/OSD, and then uncheck the boxes for "Autodetect subtitle files", "Enable sub-pictures", and "On Screen Display". That should do it, unless the subtitles are forced in the video for some reason.

    Note: Certain video formats like MKV can have subtitles enabled even though there isn't a separate subtitles file. This is why you need to uncheck "Enable sub-pictures" as well, which totally disables the on-screen text. You can choose to only uncheck the autodetection of subtitles instead if you'd prefer.

    And of course, you can simply right-click on the video, head to Video -> Subtitles Track, and then choose the subtitles if you still want them. Note: this only works if the "Enable sub-pictures" option is still enabled.

    And thus ends the tale of disabling those fracking subtitles. Starbuck approves.
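    If you start VLC from a shortcut or script, the same three settings can be passed as command-line options instead of changed in the preferences. This is a sketch; option names can vary between VLC versions, so check vlc --help (or vlc -H for the full list) on your build:

        vlc --no-sub-autodetect-file --no-spu --no-osd video.mkv

    The three flags correspond to the three checkboxes above: subtitle autodetection, sub-pictures, and the on-screen display.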

    Read the article

  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this.

    I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service-oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months.

    Here's where the problem comes in. Basically, a couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking about that kind of support; I'm talking more of a second-level support role (for when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of workplace this is, I wouldn't be surprised if these support issues take up most of my days and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code; they are more or less just knowing the system architecture, making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. And even when I do have time to work on maintenance, it's basically just bug fixes/improving bad code, so this sucks as well; however, at least it's related to coding.

    Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything. I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy who knows it's not that great of an opportunity.

    I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being - no point making a bad impression on anybody - but I'd like to know how you guys would approach this situation, how you think I should be feeling about it, or how you guys would feel. Thanks guys.

    Read the article

  • How do I prevent having to log in on 3 separate prompts every time I start my machine?

    - by JC
    Ubuntu 11.04 Natty Narwhal, Ubuntu Classic desktop. Each time I start my machine, I have to log in 3 times. I spent a week in Freenode's #ubuntu IRC channel and got nothing but condescension. I've searched the official Ubuntu forums for similar problems, tried every recommendation, and still get 3 login screens. As a workaround, I have reset login such that I get a login screen at startup, which I'd prefer not to get, since no one but me has physical access to this machine.

    I have gone into System > Preferences > Passwords and Encryption Keys, set 'Passwords: default' to 'Default' and unlocked it, and unlocked the 'Passwords: login' key too. Next, since that changed nothing, I set 'Passwords: login' to 'Default' and checked to make sure it was still unlocked. Again, no change; I still get 3 login prompts at startup. I've checked twice to ensure that I am the owner of the files; I am. At the suggestion of several people in #ubuntu, I've deleted first one, then the other password key in 'Passwords and Encryption Keys'. Still 3 login prompts.

    I changed from the Unity desktop to Ubuntu Classic. While that didn't fix the above problem, it is a much more elegant desktop than Unity, and I'll keep it. From what I've read, this seems to be a Seahorse issue, but beyond that, no one seems to have a solution that works. I'm lost. This shouldn't be this difficult or annoying.

    I'm trying to help our local old-time music collective get their machines switched over to Ubuntu in order to save them some money, which they can use to promote their DRM-free music. But from what I've seen of Ubuntu so far on my own machine, I can't really recommend that they make this switch. I hope to be proved wrong on that point. But as it stands, if I were out of town or out of the country and they ran into a problem, they'd have no way of fixing it, as they're all less experienced than even I am.

    I'm not trying to cast aspersions on Ubuntu or Linux, but it seems pretty clear that knowledgeable, helpful support for Ubuntu is hard to come by unless the user experiencing the problem is willing to put up with condescension. Having worked with, and run, several non-profits over the past 20 years, I know that getting volunteers to act professionally can be like herding cats. But an organization's reputation can be damaged by sarcastic behavior on the part of those who serve, effectively, as its public face.

    Thank you all for your help and support. Now... does anyone have a solution to my problem?

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles. I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all. So I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me:

        http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying - I would like to decide whether I want a browser opened - but thankfully the guide is wrong in that the browser can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called, letting me drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous.

    I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings.

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago - I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly, you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again.

    Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase, I can see enormous benefit in getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best - and shows you exactly where to look if you haven't.

    Next I'm going to profile a load test.
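    For anyone curious what that fix looks like: the idea is to register the open generic validator type with the container once, so Validate<T>() resolves without any reflection hack. The sketch below uses StructureMap's open-generics registration; the exact method names differ between StructureMap versions (the article's ForGenericType is an older API), and IValidator<>/Validator<> are assumed names, not from the article:

        // Register the open generic once; any IValidator<Foo> request
        // then closes to Validator<Foo> automatically.
        public class ValidationRegistry : Registry
        {
            public ValidationRegistry()
            {
                For(typeof(IValidator<>)).Use(typeof(Validator<>));
            }
        }

        // Resolution:
        // var validator = container.GetInstance<IValidator<Customer>>();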

    Read the article

  • NVIDIA 550M Drivers

    - by DOOM
    First off, I am a newbie to the Linux world and in a situation where I have to get used to using Ubuntu. My system: i7-2630QM, 8 GB RAM, 750 GB HD, NVIDIA 550M (1 GB).

    Since I was facing problems with Wubi ("slow Ubuntu"), I installed Ubuntu on a separate partition of 80 GB (10 GB swap, 40 GB home, 30 GB root). The system was running fine before I started using a CFD (graphics-dependent) application. I needed to use ParaView, a graphics package, to design some engineering stuff. Following the installation problems, I installed something called "mesa" on my system. The software works, but now my system is dead slow. Even with the computer at an idle state, the laptop is pretty slow, with huge, annoying fan noise. I tried running some similar applications on Windows, and it turns out that it's not a hardware problem but has to do with the GPU drivers.

    Following some forums, I installed nvidia-current, and now everything is the same. I know there are many solutions on this forum for NVIDIA driver updates, but as you see, nothing is working my way. Please, someone tell me what I am doing wrong :(

    This is the output from my terminal for the command lspci | grep -i intel:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
        00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
        00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
        00:1d.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM67 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1030 (rev 34)
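    Worth noting for anyone in the same spot: this machine pairs the Intel integrated GPU with the NVIDIA 550M (Optimus), and on 12.04-era releases plain nvidia-current did not handle Optimus switching; Bumblebee was the commonly recommended route. A sketch of that setup (verify against current Bumblebee documentation before running):

        # Remove the plain NVIDIA driver, then install Bumblebee to manage
        # the Intel/NVIDIA switching on Optimus laptops.
        sudo apt-get purge nvidia-current
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia

        # After a reboot, run GPU-heavy applications on the NVIDIA card explicitly:
        optirun paraview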

    Read the article

  • Registering a domain during the Christmas holidays

    - by arkascha
    One of the domain names I tried to register previously was blocked by a domain grabber two days prior to my own attempt. That was about 1 year ago. An attempt to buy the domain from that person failed due to a totally exaggerated price, so I dropped the issue and watched the domain (offered at sedo.com). As expected, there were no more offers and the domain was not sold.

    Now I learn from the whois database that the registration of that domain name ends on 25 Dec 2012 (a Christmas holiday). This raises two questions for me, and I fail to find reliable answers on the internet. Maybe someone experienced here can drop a statement or a hint:

    Is it reasonable to expect that the domain name in question will really be free again once the expiry date listed in the whois database has passed? I certainly know that the registration can be prolonged; that is not what I mean. I expect (hope) that the domain grabber does not extend the registration, since it costs money and effort and he failed to sell the domain. Provided this is the case and the registration is not prolonged, is the date mentioned reliable? Or might it just be some 'default' date?

    I would like to try to register that domain name as soon as it is unregistered. Since the domain grabber registered the domain only two days before my own registration attempt, I would like to prevent such annoying interference next time. So I ask myself: is it possible to register a domain name on a holiday? I mean not sending an email to my provider to do so on that day or before, but actually having the process take place then, so as not to wait 1-2 days after the unregistration. My own provider, which I am very happy with, does not offer such a service on a holiday (which is perfectly understandable); they are 'still checking' whether they can offer something automatic. I researched and did not find an answer to the question of whether that is possible at all. Is an automatic registration attempt on a holiday possible? Where can I do that? Is it reliable?

    Thanks for any reply!

    Read the article

  • Make the Taskbar Buttons Switch to the Last Active Window in Windows 7

    - by The Geek
    The new Windows 7 taskbar's Aero Peek feature, with live thumbnails of every window, is awesome... but sometimes you just want to be able to click the taskbar button and have the last open window show up instead. Here's a quick hack to make it work better.

    To better understand the problem, imagine having nine windows of the same type open on your screen, but you are primarily working in just one of the windows at a time. Every time you want to switch back, you have to click the taskbar button and then choose the window you are using from the list, which can be pretty annoying...

    If you know your Windows 7 shortcuts, you'd know that you can simply hold down the Ctrl key while clicking on the taskbar button, and the last window will show up. In fact, you can keep holding down the Ctrl key and keep clicking, and Windows will cycle through the open windows. It's a useful shortcut, but hardly something you want to do every single time. Instead, we'll use a quick registry hack to make a normal click switch to the last open window - if you still want to see the thumbnail list, just hover your mouse over the button for half a second.

    Manual Registry Hack for Last Active Window

    Open up regedit.exe through the start menu search or run box, and then head down to the following registry key:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced

    Once you're there, create a new 32-bit DWORD value on the right-hand side, give it the name LastActiveClick, and set the value to 1. Once you are done, you'll have to log off and back on, or you can kill Explorer.exe through Task Manager and re-open it.

    Download the Registry Hack Instead

    Since you probably don't feel like registry hacking, we've provided an easy downloadable version. Simply download the file, extract it, and then double-click on the LastActiveClick.reg file. Once you are done, you'll have to log off and back on, just like with the manual registry hack.

    Download LastActiveClick Registry Hack from howtogeek.com
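    If you'd rather write the .reg file yourself than download one, the hack above translates directly into this (the key path, value name, and data are exactly those from the manual steps):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
        "LastActiveClick"=dword:00000001

    Save it with a .reg extension, double-click it, and confirm the prompt; then log off and back on as described above.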

    Read the article

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256 GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.

    /dev/sda1 (NTFS), 66.92 out of 200.00 MB used. I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (Edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus, it also appears to be a major pain to remove anyway.)

    /dev/sda2 (NTFS), 116.35 out of 339.06 GB used (boot). This partition is the C: drive on my Windows installation. I don't use it on my Ubuntu installation, except that it is the boot partition and thus has GRUB on it.

    /dev/sda4 (extended), containing /dev/sda5 (ext4), 14.49 out of 91.34 GB used, and /dev/sda6 (linux-swap), 5.92 GB. These are my Ubuntu partitions. sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the live-DVD installer). sda6 is, of course, the swap partition, which I only need for hibernation (6 GB of RAM).

    /dev/sda3 (NTFS), 9.89 out of 14.75 GB used. This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKey Recovery for a quick factory recovery if absolutely necessary (not sure if that'll work on an SSD). It also contains not-so-important files for bloatware installation.

    In total, my HDD only has about 150 GB of files on it, so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD, and I'm not quite sure how to do this. I've seen Clonezilla referenced before, but I'm not all too experienced, and its documentation quite frankly seems a bit like a foreign language to me. So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process, due to the lack of two hard drive slots in the machine.
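    One wrinkle before any cloning: the partitions listed above (339.06 GB for sda2 alone) add up to more than the 256 GB SSD, so a raw whole-disk copy cannot fit as-is. The usual route is to shrink the Windows partition below the SSD's capacity first (GParted can do this), then clone either with Clonezilla (its expert mode has an option for smaller destination disks) or partition by partition. As a sketch only - /dev/sda as the HDD and /dev/sdb as the SSD are assumptions, so verify with lsblk, because dd silently overwrites whatever target you give it:

        # Copy the Ubuntu root partition into an equal-or-larger partition
        # prepared on the SSD, then install GRUB to the SSD's MBR.
        sudo dd if=/dev/sda5 of=/dev/sdb5 bs=4M
        sudo mount /dev/sdb5 /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sdb
        sudo chroot /mnt update-grub

    dd preserves filesystem UUIDs, so /etc/fstab usually keeps working; if you recreate rather than copy the swap partition, update its UUID in /etc/fstab on the SSD.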

    Read the article

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, as with every commit we have to run new procs. I have used some basic ORMs in the past, and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else).

    The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets which are looped over in code-behind files. The classes work something like this:

        class Foo
        {
            public bool LoadFoo()
            {
                bool blnResult = false;
                if (this.FooID == 0)
                {
                    throw new Exception("FooID must be set before calling this method.");
                }
                DataSet ds = // ... call to sproc
                if (ds.Tables[0].Rows.Count > 0)
                {
                    this.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                    // other properties set
                    blnResult = true;
                }
                return blnResult;
            }
        }

        // Consumer
        Foo foo = new Foo();
        foo.FooID = 1234;
        foo.LoadFoo();
        // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have: 199 tables, 13 views, a whopping 926 stored procedures and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application.

    Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code because "it works", so we cannot change the existing classes to use an ORM. But I don't know how often we add brand new modules instead of adding to or fixing current modules, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets).

    If it is the right choice, how should I present the case for using one? Off the top of my head, the only benefits I can think of are cleaner code (although it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones still use DataSets) and less hassle remembering which procedure scripts have been run and which still need to be run. I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to care about.
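    To make the contrast concrete, here is roughly the same lookup written against an ORM. Entity Framework's DbContext API is shown purely as an example (on .NET 3.5 it would be LINQ to SQL or NHibernate instead, but the shape is the same), and all mapping names are assumptions:

        // Typed entities instead of untyped DataSets.
        public class Foo
        {
            public int FooID { get; set; }
            public string FooName { get; set; }
            // other properties...
        }

        public class AppContext : DbContext
        {
            public DbSet<Foo> Foos { get; set; }
        }

        // Consumer: the query is type-checked at compile time.
        using (var db = new AppContext())
        {
            Foo foo = db.Foos.SingleOrDefault(f => f.FooID == 1234);
            if (foo != null)
            {
                // do stuff with foo...
            }
        }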

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the East Coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the Summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event.

    Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people also think it's somewhat or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and want a well-laid-out conference.

    What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs, etc. at the conference - only questions about Microsoft speakers. I know survey writing is very difficult without biasing the answers one way or another. There was also no choice to show people's preference: would people rather have Microsoft speakers, or the Summit held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers to the SQLCAT and CSS teams; surely that indicates another issue - a lack of understanding of what these teams do.

    All in all, it is clear that people showed they want an event outside of Seattle and don't want PASS putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting towards certain questions, which has prioritised them over the huge desire for a PASS Summit outside of Seattle. Let's see where we will be in 2013 - or maybe they will rethink 2012, who knows.

    Read the article

  • How to Remove Extensions From, and Force the Trailing Slash at the End of URLs?

    - by Kronbernkzion
    Example of current file structure:

        example.com/foo.php
        example.com/bar.html
        example.com/directory/
        example.com/directory/foo.php
        example.com/directory/bar.html
        example.com/cgi-bin/directory/foo.cgi

    I would like to remove the HTML, PHP and CGI extensions from URLs, and then force a trailing slash at the end. So it would look like this:

        example.com/foo/
        example.com/bar/
        example.com/directory/
        example.com/directory/foo/
        example.com/directory/bar/
        example.com/cgi-bin/directory/foo/

    I am very frustrated because I've searched for 17 hours straight for a solution and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$
        RewriteRule (.*)$ /$1/ [R=301,L]

    As you can see, this code only removes .html (and I'm not very happy with it because I think it could be done a lot more simply). I can remove the extension from PHP files if I rename them to .html through .htaccess, but that's not what I want; I want to remove it directly. This is the first thing I don't know how to do.

    The second thing is actually very annoying. My .htaccess file, with the code above, appends .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist, since foo is a file), instead of just displaying a message that the page is not found, it converts the request to example.com/directory/foo/bar.html/, searches for a file for a few seconds, and then displays the not-found message. This, of course, is bad behavior.

    So, once again, I need the code in .htaccess to do the following things:

    1. Remove the .html extension
    2. Remove the .php extension
    3. Remove the .cgi extension
    4. Force a trailing slash at the end of URLs
    5. Handle requests correctly (no adding trailing slashes or extensions to strings if the file or directory doesn't exist on the server)
    6. Be as simple as possible

    I would very much appreciate any help. And to the first person who gives me the solution, I'll send two $50 iTunes Store gift cards for the US store. If this offends anyone, I am truly sorry and I apologize. Thanks in advance.
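    Not a tested drop-in (mod_rewrite behaviour is notoriously environment-sensitive), but one sketch that extends the same approach to all three extensions while avoiding the phantom bar.html/ problem looks like this - the -f checks against %{DOCUMENT_ROOT} are what stop nonexistent paths from being rewritten:

        Options +FollowSymLinks
        RewriteEngine On

        # 1) Redirect direct requests for .html/.php/.cgi to the
        #    extensionless, trailing-slash form.
        RewriteCond %{THE_REQUEST} \s/+(.+?)\.(html|php|cgi)[\s?]
        RewriteRule ^ /%1/ [R=301,L]

        # 2) Add a trailing slash to extensionless requests that are not
        #    real files or directories.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule ^(.*)$ /$1/ [R=301,L]

        # 3) Internally map /foo/ back to whichever file actually exists,
        #    checking each extension in turn.
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{DOCUMENT_ROOT}/$1.html -f
        RewriteRule ^(.+?)/$ /$1.html [L]

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+?)/$ /$1.php [L]

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{DOCUMENT_ROOT}/$1.cgi -f
        RewriteRule ^(.+?)/$ /$1.cgi [L]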

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT: I have been able to determine that the problem is not Compiz but actually Xorg. I don't know why, but by quickly maximizing a terminal and taking a screenshot with top running before the problem went away, I can see Xorg takes up 72% of the CPU, with bamfdaemon taking up 18% and compiz taking up 14%. It seems the NVIDIA drivers are to blame; I will play more with the settings and perhaps do a clean nvidia-current install to try to fix the problem.

    I'm having a very annoying problem with high CPU usage. I'm running 12.04 with the latest drivers and nvidia-current installed. I had no issues for days; now I have a strange problem. Unity 3D runs great most of the time: 1-2% CPU usage with only Transmission running in the background, and windows open and close smoothly. However, no matter what programs are open, if I minimize all open programs to the Unity bar on the left, my CPU jumps to about 80% and all maximize and minimize effects slow down. Mouse movement stays smooth the whole time, but Unity becomes unresponsive for up to 30 seconds at times.

    Hitting Alt+Tab to bring up even a single window fixes the problem. The window I bring back up doesn't even have to be maximized to solve it. Hitting the Super key to bring up the Dash makes the CPU drop back to idle until I close it; then the high CPU usage resumes.

    I believe the problem is Compiz, but even with only a terminal running top, I have to minimize it to the tray for the problem to show, so I can't see the problem process; I can only tell about the high CPU usage using indicator-sysmonitor. I even tried quitting the indicator, but I can still tell performance is very poor in all applications when everything is minimized. I have reset Compiz back to defaults, tried the post-release update NVIDIA drivers, and played with vsync settings in both the NVIDIA settings and Compiz. I even forced the refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D.

    Specs: Core 2 Duo 2.0 GHz, 4 GB DDR2 RAM, 2x 320 GB HDDs in RAID 0, and an NVIDIA GTX 260M graphics card.

    Read the article

  • Alternatives to multiple inheritance for my architecture (NPCs in a Realtime Strategy game)?

    - by Brettetete
    Coding isn't actually that hard. The hard part is writing code that makes sense and is readable and understandable. So I want to become a better developer and create some solid architecture. Specifically, I want to design an architecture for NPCs in a video game - a real-time strategy game like StarCraft, Age of Empires, Command & Conquer, etc.

    I'll have different kinds of NPCs. An NPC can have one to many abilities (methods) out of these: Build(), Farm() and Attack(). Examples:

    Worker can Build() and Farm()
    Warrior can Attack()
    Citizen can Build(), Farm() and Attack()
    Fisherman can Farm() and Attack()

    I hope everything is clear so far. So now I have my NPC types and their abilities. But let's come to the technical/programmatic aspect. What would be a good programmatic architecture for my different kinds of NPCs? I could have a base class; actually, I think this is a good way to stick to the DRY principle, since I can put methods like WalkTo(x, y) in the base class - every NPC will be able to move.

    But now comes the real problem: where do I implement the abilities (remember: Build(), Farm() and Attack())? Since the abilities will consist of the same logic, it would be annoying and break the DRY principle to implement them for each NPC type (Worker, Warrior, ...). I could implement the abilities within the base class, but that would require some kind of logic that verifies whether an NPC can use ability X - IsBuilder, CanBuild, and so on. I think it is clear what I want to express. But I don't feel comfortable with this idea; it sounds like a bloated base class with too much functionality.

    I use C# as the programming language, so multiple inheritance isn't an option here. That means having extra base classes like Fisherman : Farmer, Attacker won't work.
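    One standard way out, sketched below (all names are illustrative): model each ability as its own class and give the NPC a collection of them - composition instead of multiple inheritance. The shared logic for Build(), Farm() and Attack() then lives exactly once, in the ability classes, and an NPC type is just a particular combination of abilities:

        using System.Collections.Generic;
        using System.Linq;

        interface IAbility { }

        class BuildAbility : IAbility { public void Build() { /* shared build logic */ } }
        class FarmAbility : IAbility { public void Farm() { /* shared farm logic */ } }
        class AttackAbility : IAbility { public void Attack() { /* shared attack logic */ } }

        class Npc
        {
            readonly List<IAbility> abilities = new List<IAbility>();

            public void AddAbility(IAbility ability) { abilities.Add(ability); }

            // Returns the ability if this NPC has it, otherwise null.
            public T GetAbility<T>() where T : class, IAbility
            {
                return abilities.OfType<T>().FirstOrDefault();
            }

            public void WalkTo(int x, int y) { /* shared movement logic */ }
        }

        // A Worker is simply an Npc composed with Build and Farm:
        // var worker = new Npc();
        // worker.AddAbility(new BuildAbility());
        // worker.AddAbility(new FarmAbility());
        // worker.GetAbility<BuildAbility>()?.Build();   // null if the NPC lacks it

    The same shape extends naturally into a full entity-component system later, if the abilities grow state of their own.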

    Read the article

  • Fuji camera "mounts" but folder not in Dolphin After Kubuntu 13.10 upgrade

    - by user207207
    The Fuji camera mount is reported in attached devices but is not visible in Dolphin after the Kubuntu 13.10 upgrade. I have reinstalled the driver and tried a few other suggestions made for other camera mounts that failed on previous Ubuntu upgrades. I have already spent a couple of hours trying to get my photos off the camera, which is very annoying. It worked perfectly in 11.04, 11.10, 12.04, 12.10 and 13.04.

    dmesg | tail; lsusb; lsb_release -a

    [ 6181.858786] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90009400000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
    [17261.396236] CPUM: APIC 00 at 00000000fee00000 (mapped at ffffc90000c6a000) - ver 0x80050010, lint0=0x10700 lint1=0x00400 pc=0x00400 thmr=0x10000
    [17261.396239] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90000c72000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
    [17261.396241] CPUM: APIC 02 at 00000000fee00000 (mapped at ffffc90000c70000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
    [17261.396255] CPUM: APIC 01 at 00000000fee00000 (mapped at ffffc90000c6e000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
    [32456.884907] usb 2-5: new high-speed USB device number 2 using ehci-pci
    [32457.654046] usb 2-5: New USB device found, idVendor=04cb, idProduct=01e8
    [32457.654050] usb 2-5: New USB device strings: Mfr=0, Product=2, SerialNumber=3
    [32457.654052] usb 2-5: Product: Digital Camera
    [32457.654053] usb 2-5: SerialNumber: 4C3230302020091117CAA59WP18548
    Bus 002 Device 002: ID 04cb:01e8 Fuji Photo Film Co., Ltd
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 003: ID 2013:024f PCTV Systems nanoStick T2 290e
    Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 13.10
    Release:        13.10
    Codename:       saucy

    Installing gvfs-bin and mounting by hand also fails:

    sudo apt-get install gvfs-bin
    gvfs-mount gphoto2://[usb:002,002]
    Error mounting location: Error initializing camera: -108: No such file or directory

    I have reported a bug in Dolphin, which has been transferred to Solid. Further information: I ran solid-hardware list details:

    udi = '/org/kde/solid/udev/sys/devices/pci0000:00/0000:00:04.1/usb2/2-5'
    parent = '/org/kde/solid/udev' (string)
    vendor = '04cb' (string)
    product = 'Digital Camera' (string)
    description = 'Camera' (string)
    Block.major = 189 (0xbd) (int)
    Block.minor = 137 (0x89) (int)
    Block.device = '/dev/bus/usb/002/010' (string)
    Camera.supportedProtocols = {'ptp'} (string list)
    Camera.supportedDrivers = {'gphoto'} (string list)

    I still can't get my photos off, though I can see the folders using the Gimp menu. If anyone has any ideas, I'm willing to try them.

    Read the article

  • Registering domain during Christmas holidays

    - by arkascha
    One of the domain names I tried to register previously was blocked by a domain grabber two days prior to my own attempt. That was about a year ago. An attempt to buy the domain from that person failed due to a totally exaggerated price, so I dropped the issue and watched the domain (offered at sedo.com). As expected there were no more offers, and the domain was not sold.

    Now I learn from the whois database that the registration of that domain name ends on 25 Dec 2012 (a Christmas holiday). This raises two questions for me, and I fail to find reliable answers on the internet, so maybe someone experienced here can drop a statement or a hint.

    First: is it reasonable to expect that the domain name in question will really be free again once the registration end date shown in the whois database has passed? I certainly know that the registration can be prolonged; that is not what I mean. I expect (hope) that the domain grabber does not extend the registration, since it costs money and effort and he failed to sell the domain. Provided this is the case and the registration is not prolonged, is the date mentioned reliable, or might it just be some 'default' date?

    Second: I would like to try to register that domain name as soon as it becomes unregistered. Since the domain grabber registered it only two days before my own registration attempt, I would like to prevent such annoying interference next time. So I ask myself: is it possible to register a domain name on a holiday? I don't mean sending an email to my provider on or before that day, but actually having the registration take place, so as not to wait 1-2 days after the domain is released. My own provider, which I am very happy with, does not offer such a service on holidays (which is perfectly understandable); they are 'still checking' whether they can offer something automatic. My research did not turn up an answer to whether this is possible at all. Is an automatic registration attempt on a holiday possible? Where can I do that? Is it reliable?

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call, which performs a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes" to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time; simplify the log to include only the last applicable change for each object; attach the serialized values for the affected objects; and return all of this to the client.

    My question, then, is: how do I unit-test this entry-point method? Obviously each class will have thorough unit tests to ensure its own functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not pass each of these internal classes in IoC-style, because they're small and are only classes at all to satisfy the single-responsibility principle. I see a couple of possibilities:

    1. Create a "private" interface header for the tests with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out, and just test that the methods are called with the right arguments.

    2. Write a series of fatter tests for the entry point without mocking anything out, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.

    3. Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making the construction of the service annoying, and it leaks internal details to the client.

    4. Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer delegate to this one. This would ensure that client code still sees the easy/pretty initializer and no internals are leaked. (See the sketch below.)

    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what is the most appropriate approach according to unit-testing best practices? Especially considering that I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.
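    To make option 4 concrete, here is a rough sketch of the pattern. The post targets Objective-C; C# is used below purely to keep the illustration compact, and every type and member name in it is an assumption, not taken from the actual service:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public record Change(string ObjectId, DateTime When, string Payload);

    public interface ILogStore      { IEnumerable<Change> FetchSince(DateTime since); }
    public interface ILogSimplifier { IReadOnlyList<Change> Simplify(IEnumerable<Change> log); }

    public class ChangeLogService
    {
        private readonly ILogStore _store;
        private readonly ILogSimplifier _simplifier;

        // Public initializer: clients never see the internal collaborators.
        public ChangeLogService()
            : this(new InMemoryLogStore(), new LastChangeSimplifier()) { }

        // "Private" initializer: tests inject stubbed collaborators here
        // (reachable from a test assembly via InternalsVisibleTo).
        internal ChangeLogService(ILogStore store, ILogSimplifier simplifier)
        {
            _store = store;
            _simplifier = simplifier;
        }

        // The single entry point that clients call.
        public IReadOnlyList<Change> ChangesSince(DateTime since)
            => _simplifier.Simplify(_store.FetchSince(since));
    }

    // Default collaborators, stubbed here; the real ones hold the actual logic.
    public class InMemoryLogStore : ILogStore
    {
        public IEnumerable<Change> FetchSince(DateTime since) => Enumerable.Empty<Change>();
    }

    public class LastChangeSimplifier : ILogSimplifier
    {
        // Keep only the last applicable change per object, as the post describes.
        public IReadOnlyList<Change> Simplify(IEnumerable<Change> log)
            => log.GroupBy(c => c.ObjectId)
                  .Select(g => g.OrderByDescending(c => c.When).First())
                  .ToList();
    }

    A test constructs the service through the internal initializer with stub collaborators, while application code keeps using the parameterless one. In Objective-C the same shape falls out of a designated initializer declared in a class extension whose header only the test target imports.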

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >