Search Results

Search found 12264 results on 491 pages for 'drive letter'.


  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
    So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once.

    Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page.

    First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple.

    So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.

    Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.

    Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than halfway full.

    That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a quota on the share of 300MB.

    So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the quota to be on the SHARE, not on a person.

    Note that back in the share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that.

    Ok, here is where it gets FUN... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that. Hmmmmmm... No worries, mate. We have a handy-dandy script that can do this for us.

    Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.

    Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. Go back to any share or project. You will see that you now have four new, custom properties on the bottom. Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the root user.

    After I hit Apply on the share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.

    Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one for 39.1MB. Hmmm... I did my math wrong, didn't I? That's OK, I'll just change the number of the custom default quota again. Here, I am adding a zero on the end. After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB.

    You can customize a person at any time. Here, I took the Steve user, and specifically gave him a quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no quota at all. I can do this all day long.

    Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.

    For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them there. There's no need to, however; they don't hurt a thing if you just don't use them.

    I hope these tips have helped you out there. Quotas can be fun.

    Read the article

  • Restore Files from Backups on Windows Home Server

    - by Mysticgeek
    If you use Windows Home Server to back up the machines on your network, you're in luck if you accidentally delete important files or they become corrupted. Today we take a look at getting your data back from backups on your home server. Open the Windows Home Server Console and select the Computers & Backup tab. Right-click on the computer you need to restore files for and select View Backups. This will open a list of your recent backups. Highlight the one you want to open, then click the Open button in the Restore or View Files section. If this is the first time you're restoring a file, you'll be asked to verify installation of the device software. Check the box next to Always trust software from Microsoft Corporation and click Install. Now wait while the backup data is retrieved. After the backup data has been retrieved, an Explorer window opens up to drive (Z:), which contains the backup data. It's just as if you were opening a drive on your local machine. Now you can browse through the backup and find the files you're missing. You can open the files directly, or drag them onto your machine to the location where you want to restore them. Restoring your data is actually a very easy process with Windows Home Server. Of course you'll want to make sure the computers on your network are being backed up to WHS. If you need help with that, check out our article on how to configure your computer to back up to WHS. If you want to back up your home server shares, check out our article on how to back up WHS folders to an external drive.

    Read the article

  • Attempted to dual boot with Windows and now can only run Ubuntu

    - by Zeusoflightning125
    Very recently, I decided to attempt to dual boot Ubuntu with my already-installed Windows 8. Everything worked perfectly: I manually set up disk partitions (this is all on one hard drive), and it loaded up Ubuntu fine. However, now when I try to start my computer it only has two options in the boot menu, and both just load Windows (both were something related to the hard disk). I can also only boot from legacy hard disk options (which was already the case, aside from the USB that I installed Windows from). The Windows files are still accessible from Ubuntu, but I cannot just load Windows; there is no option to. I also don't see the two entries, one for each operating system, that I was expecting. I can only select what to load from the BIOS. So, my question is: how do I load the Windows partition on my hard drive? I'm sorry if I'm a bit clueless; I am just new to both Linux and dual-booting.

    Read the article

  • Ubuntu 12.10 MBR not loading from dual boot selector

    - by Justin Holmes
    Since I am new to the forums here I cannot post pictures, but hopefully this link to my error screen will work: Error when selecting to boot the Ubuntu MBR. Basically, no matter what I do, Windows Boot Manager wants to load Windows and Windows only. I tried running from my hard drive and also from a USB drive that had a live image installed. Both times I ended up with this message. I am running Windows 7 with all the newest updates, on a Samsung Series 7 laptop. I have had many dual boot machines in the past and never seen this issue. I even had a dual-booting Windows 7 machine a few months ago and had no issues at all. Any help would be appreciated! Thanks! Justin

    Read the article

  • Clean up after Visual Studio

    - by psheriff
    As programmers, we know that if we create a temporary file while our application runs, we need to make sure it is removed when the application or process is complete. We do this, but why can't Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete.

    .NET Left Overs
    Visual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer. Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry.

    Each version of Visual Studio will create "temporary" files in different folders. The problem is that the files created are not always "temporary". Most of the time these files do not get cleaned up like they should. Let's look at some of the folders that you should periodically review, deleting the files within them.

    Temporary ASP.NET Files
    As you create and run ASP.NET applications from Visual Studio, temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath it. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files/folders under this folder. IIS will sometimes keep files in this folder in use, and iisreset will release the locks so the files/folders can be deleted.

    Website Cache
    This folder is similar to the Temporary ASP.NET Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each user's local settings folder, and the location will be a little different on each operating system. For example, on Windows Vista/Windows 7 the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.

    Visual Studio Backup
    This backup folder is used by Visual Studio to store temporary files while you develop in Visual Studio. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.

    Assembly Cache
    No, this is not the global assembly cache (GAC). It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.

    Project Assemblies
    This is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Windows Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.

    Remember, not all of these folders will appear on your particular machine. Which ones show up will depend on what version of Visual Studio you are using, whether you are doing desktop or web development, and the operating system you are using.

    Summary
    Taking the time to periodically clean up after Visual Studio will help keep your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in there that are not in use. I routinely clean up the files and folders described in this blog post, and I find that I actually eliminate errors in Visual Studio and increase my hard disk space.

    NEW! PDSA has just published a "pre-release" of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that get Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code DUNOV11 to get them for only $39. This is $40 off the regular price.

    NOTE: You can download this article and many samples like the one shown in this blog entry at my website, http://www.pdsa.com/downloads. Select "Tips and Tricks", then "Developer Machine Clean Up" from the drop down list.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
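    To give a rough picture of what this kind of cleanup looks like under program control, here is a minimal C# sketch. This is not PDSA's Computer Cleaner; the folder list is illustrative (it uses a hard-coded framework version and a couple of the Vista/Windows 7 paths from above), so adjust it for your own machine before running it:

      // Minimal sketch: clear the subfolders under a few of the Visual Studio
      // temp locations named in this post. The folder list is illustrative only.
      using System;
      using System.IO;

      class VsTempCleaner
      {
          static void Main()
          {
              string windir = Environment.GetFolderPath(Environment.SpecialFolder.Windows);
              string local  = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

              string[] tempRoots =
              {
                  // <vernum> is hard-coded here; pick the version(s) present on your machine
                  Path.Combine(windir, @"Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files"),
                  Path.Combine(local,  @"Microsoft\WebsiteCache"),
                  Path.Combine(local,  @"Microsoft\VisualStudio\9.0\ProjectAssemblies")
              };

              foreach (string root in tempRoots)
              {
                  if (!Directory.Exists(root)) continue;  // not every folder exists on every machine

                  // Delete the folders *under* the root, never the root folder itself.
                  foreach (string dir in Directory.GetDirectories(root))
                  {
                      try { Directory.Delete(dir, true); }
                      catch (IOException) { }                 // file still in use (e.g. an IIS lock) - skip it
                      catch (UnauthorizedAccessException) { } // no rights - skip it
                  }
              }
          }
      }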

    Read the article

  • A Small Utility to Delete Files recursively by Date

    - by Rick Strahl
    It's funny, but for me the following seems to be a recurring theme: every few months or years I end up with a host of files on my server that need pruning selectively and often under program control. Today I realized that the SQL Server logs on my server were really piling up and nearly ran my backup drive out of drive space. So occasionally I need to check on that server drive and clean out files. Now with a bit of work this can be done with PowerShell or even a complicated DOS batch file, but heck, to me it's always easier to just create a small Console application that handles this sort of thing with a full command line parser and a few extra options. Plus, in the end I end up with code that I can actually modify and add features to, as is invariably the case. No more searching for a script each time :-) So for my typical cleanup needs the requirements are:

    - Need to recursively delete files
    - Need to be able to specify a filespec (ie. *.bak)
    - Be able to specify a cut-off date before which to delete files
    - And it'd be nice to have an option to send files to the Recycle Bin, just in case of operator error :-) (and yes, that came in handy as I blew away my entire database backup folder by accident - oops!)

    The end result is a small Console file-deletion utility that I popped up on Github: https://github.com/RickStrahl/DeleteFiles. The source code is up there along with the binary file you can just run.

    Creating DeleteFiles
    It's pretty easy to create a simple utility like DeleteFiles of course, so I'm not going to spend much time talking about how it works. You can check it out in the repository or download and compile it. The nice thing about using a full programming language like C# over something like PowerShell or a batch file is that you can make short work of the recursive tree walking that's required to make this work. There's very little code, but there's also a very small, self-contained command line parser in there that might be useful and can be plugged into any project - I've been using it quite a bit for just about any Console application I've been building. If you're like me and don't have the patience or the persistence (that funky syntax requires some 'sticking with it' that I simply can't get over) to get into PowerShell coding, having an executable file that I can just copy around or keep in my Utility directory is the only way I'll ever get to reuse this functionality without going on a wild search each time :-)

    Anyway, hope some of you might find this useful.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, CSharp.
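    If you'd rather not grab the utility, the core idea is small enough to sketch. This is not the actual DeleteFiles source (see the Github repository for that), just a rough approximation of the recursive, filespec- and date-filtered delete it describes:

      // Rough sketch of the idea behind DeleteFiles (not the actual utility's source):
      // recursively delete files matching a filespec that are older than a cutoff date.
      using System;
      using System.IO;

      class DeleteFilesSketch
      {
          // Hypothetical usage: DeleteFilesSketch <path> <filespec> <daysToKeep>
          static void Main(string[] args)
          {
              string root = args[0];                                // e.g. D:\SqlBackups
              string spec = args[1];                                // e.g. *.bak
              DateTime cutoff = DateTime.Now.AddDays(-int.Parse(args[2]));

              // SearchOption.AllDirectories does the recursive tree walk for us.
              foreach (string file in Directory.EnumerateFiles(root, spec, SearchOption.AllDirectories))
              {
                  if (File.GetLastWriteTime(file) < cutoff)
                  {
                      Console.WriteLine("Deleting {0}", file);
                      File.Delete(file);  // the real utility can send files to the Recycle Bin instead
                  }
              }
          }
      }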

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework's String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should be, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories:

    - Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
    - Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
    - Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

    There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you're at all interested in the topic, these two MSDN articles are worth a read:

    - Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
    - How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx

    Those articles cover the technical details of string comparison well enough that I'm not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you're comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The "Best Practices For Using Strings in the .NET Framework" article has the following to say:

    "On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting."

    I don't know about you, but I feel like that paragraph is a bit lacking. Are there really any "real world" reasons to use the invariant culture comparison? I think the answer to this question is "yes", but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

    Example: Prototype Names
    Let's say that you work for a large multi-national toy company with branch offices in 10 different countries.
    Each year the company works on 15-25 new toy prototypes, each of which is assigned a "code name" while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype will get a code name that begins with the letter following the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at "A". The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese.) To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location's company intranet site.

    Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year's list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed?

    Sorting the list with a culture-sensitive comparison using the default configured culture on each country's intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there's no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn't always be consistent between different users.

    Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let's say that the current year's prototype name list looks like this:

    Antílope (Spanish)
    Babouin (French)
    Čahoun (Czech)
    Diamond (English)
    Flosse (German)

    If you were to sort this list using ordinal rules you'd end up with:

    Antílope
    Babouin
    Diamond
    Flosse
    Čahoun

    This sort is no good because the entry for "C" appears at the bottom of the list, after "F". This is because the Czech entry for the letter "C" makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the "C" with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison.
    The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we've walked through this example, the following line from the "Best Practices For Using Strings in the .NET Framework" makes a lot more sense: "One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display." That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
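    To see the difference concretely, here is a small self-contained C# sketch that sorts the example list from above all three ways (the exact output of the current-culture sort will depend on the machine it runs on):

      // Sort the prototype-name list three ways to compare the comparison families.
      using System;

      class SortDemo
      {
          static void Main()
          {
              string[] names = { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

              var ordinal = (string[])names.Clone();
              Array.Sort(ordinal, StringComparer.Ordinal);            // "Čahoun" sorts last

              var invariant = (string[])names.Clone();
              Array.Sort(invariant, StringComparer.InvariantCulture); // same on every machine

              var current = (string[])names.Clone();
              Array.Sort(current, StringComparer.CurrentCulture);     // depends on the thread's culture

              Console.WriteLine("Ordinal:   " + string.Join(", ", ordinal));
              Console.WriteLine("Invariant: " + string.Join(", ", invariant));
              Console.WriteLine("Current:   " + string.Join(", ", current));
          }
      }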

    Read the article

  • Changing the default installation path to a newly installed hard disk

    - by mgj
    Hi, I am currently working on a dual-boot PC. I am using Windows XP and Ubuntu 10.04 Lucid Lynx, released in April 2010. The partition allocated to Ubuntu is almost exhausted. Current disk allocations on the PC for the Ubuntu OS look like this:

    bodhgaya@pc146724-desktop:~$ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2             8.6G  8.0G  113M  99% /
    none                  998M  268K  998M   1% /dev
    none                 1002M  580K 1002M   1% /dev/shm
    none                 1002M  100K 1002M   1% /var/run
    none                 1002M     0 1002M   0% /var/lock
    none                 1002M     0 1002M   0% /lib/init/rw
    /dev/sda1              25G   16G  9.8G  62% /media/C
    /dev/sdb1              37G  214M   35G   1% /media/ubuntulinuxstore
    bodhgaya@pc146724-desktop:~$ cd /tmp

    I am trying to mount a new 40GB hard disk (/dev/sdb1, shown below) along with my existing Ubuntu system to overcome the hard disk space issues. I referred to the following tutorial to mount a new hard disk onto the system: http://www.smorgasbord.net/how-to-in...untu-linux%20/ I was able to successfully mount this hard disk for the Ubuntu OS. I have this new hard disk set up in the /media/ubuntulinuxstore directory. The current partitioning in my system looks like this:

    bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$ sudo fdisk -l
    [sudo] password for bodhgaya:

    Disk /dev/sda: 40.0 GB, 40000000000 bytes
    255 heads, 63 sectors/track, 4863 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x446eceb5

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           2        3264    26210047+   7  HPFS/NTFS
    /dev/sda2            3265        4385     9004432+  83  Linux
    /dev/sda3            4386        4863     3839535   82  Linux swap / Solaris

    Disk /dev/sdb: 40.0 GB, 40000000000 bytes
    255 heads, 63 sectors/track, 4863 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xfa8afa8a

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1        4862    39053983+   7  HPFS/NTFS
    bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$

    Now, I have a concern about the "location" where new software will be installed. Software is generally installed via the terminal, and by default a fixed path is used for where the post-installation setup files can be found (I am talking in the context of the drive). This is like the typical case of Windows, where software is installed on the C: drive by default. These days people customize their installations to a drive which they find apt to serve their purpose (generally based on availability of hard disk space). I am trying to figure out how to do the same for Ubuntu. As we all know, most software is installed via commands given from the terminal. My roadblock is how to redirect the default path, where files installed from the terminal are placed, to this new hard disk. Doing this would help me overcome the space constraints I am currently facing with the partition on which my Ubuntu was initially installed. It would also save me the time of formatting my system and reinstalling Ubuntu and other software all over again. Please help me with this; your suggestions are much appreciated.

    Read the article

  • Darth Vader Wins Big [Humorous Comic]

    - by Asian Angel
    Everyone’s favorite Star Wars villain receives a notice in the mail saying he won a contest, but did he really hit it big or is karma dishing out some payback? Note: Make sure to take a close look at the letter shown in the second panel for an additional laugh! Darth Vader Wins Big (Dorkly) [via Neatorama]

    Read the article

  • NEC Corporation uPD720200 USB 3.0 controller doesn't run at full speed

    - by Radek Zyskowski
    I have a fresh install of Ubuntu 10.10. I have an external HD on USB 3.0, and I'm trying to connect it via a PCI Express NEC controller. dmesg shows:

    [ 8966.820078] usb 6-3: new high speed USB device using xhci_hcd and address 0
    [ 8966.839831] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.840580] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.841329] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.842079] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.843343] scsi8 : usb-storage 6-3:1.0
    [ 8967.847144] scsi 8:0:0:0: Direct-Access SAMSUNG HD204UI 1AQ1 PQ: 0 ANSI: 5
    [ 8967.847589] sd 8:0:0:0: Attached scsi generic sg2 type 0
    [ 8967.847923] sd 8:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [ 8967.848341] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.850959] sd 8:0:0:0: [sdb] Write Protect is off
    [ 8967.850963] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00
    [ 8967.850966] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.851818] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.852365] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.852370] sdb: sdb1
    [ 8967.871315] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.871853] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.871856] sd 8:0:0:0: [sdb] Attached SCSI disk
    [ 8967.950728] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.951355] sd 8:0:0:0: [sdb] Sense Key : Recovered Error [current] [descriptor]
    [ 8967.951361] Descriptor sense data with sense descriptors (in hex):
    [ 8967.951363] 72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
    [ 8967.951375] 00 00 00 00 00 50
    [ 8967.951380] sd 8:0:0:0: [sdb] ASC=0x4 ASCQ=0x1d
    [ 8968.790076] xhci_hcd 0000:02:00.0: HC died; cleaning up
    [ 8968.790076] usb 6-3: USB disconnect, address 2
    [ 8999.008554] scsi 8:0:0:0: [sdb] Unhandled error code
    [ 8999.008558] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
    [ 8999.008562] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 39 00 00 3e 00
    [ 8999.008573] end_request: I/O error, dev sdb, sector 1953535801
    [ 8999.008578] Buffer I/O error on device sdb1, logical block 1953535738
    [ 8999.008582] Buffer I/O error on device sdb1, logical block 1953535739
    [ 8999.008585] Buffer I/O error on device sdb1, logical block 1953535740
    [ 8999.008589] Buffer I/O error on device sdb1, logical block 1953535741
    [ 8999.008592] Buffer I/O error on device sdb1, logical block 1953535742
    [ 8999.008595] Buffer I/O error on device sdb1, logical block 1953535743
    [ 8999.008600] Buffer I/O error on device sdb1, logical block 1953535744
    [ 8999.008603] Buffer I/O error on device sdb1, logical block 1953535745
    [ 8999.008606] Buffer I/O error on device sdb1, logical block 1953535746
    [ 8999.008609] Buffer I/O error on device sdb1, logical block 1953535747
    [ 8999.008642] scsi 8:0:0:0: rejecting I/O to offline device
    [ 8999.008747] scsi 8:0:0:0: [sdb] Unhandled error code
    [ 8999.008749] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    [ 8999.008752] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 77 00 00 3e 00
    [ 8999.008760] end_request: I/O error, dev sdb, sector 1953535863

    sudo lspci -v

    2:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30)
    Physical Slot: 32
    Flags: bus master, fast devsel, latency 0, IRQ 16
    Memory at fe9fe000 (64-bit, non-prefetchable) [size=8K]
    Capabilities: [50] Power Management version 3
    Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
    Capabilities: [90] MSI-X: Enable- Count=8 Masked-
    Capabilities: [a0] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff
    Capabilities: [150] #18
    Kernel driver in use: xhci_hcd
    Kernel modules: xhci-hcd

    If I put any USB 2.0 device into this controller, it works fine, but USB 3.0 doesn't. Any idea?

    Read the article

  • Gawker Passwords

    - by Nick Harrison
    There has been much news about the hack of the Gawker web sites. There has even been an analysis of the common passwords found. This list is embarrassing in many ways. The most common password was "123456". The second most common password was "password". Much has also been written providing advice on how to create good passwords. This article provides some interesting advice, none of which should be taken. Anyone reading my blog probably already knows the importance of strong passwords, so I am not going to reiterate the reasons here. My target audience is more the folks defining password complexity requirements. A user cannot come up with a strong password if we have complexity requirements that don't make sense. With that in mind, here are a few guidelines:

    Long Passwords
    Insist on long passwords. In some cases, you may need to make changes to allow a long password. I have seen many places that cap passwords at 8 characters. Passwords need to be at least 8 characters, minimum. Consider how much stronger the passwords would be if you doubled the length. Passwords that are 15-20 characters will be that much harder to crack. There is no need to limit passwords to 8 characters.

    Don't Require Special Characters
    Many complexity rules will require that your password include a capital letter, a lower case letter, a number, and one of the "special" characters, the shifted symbols above the number keys. The problem with such rules is that the resulting passwords are harder to remember. It also means that you will have a smaller set of characters in the resulting passwords. If you must include one of the 10 digits and one of the 10 "special" characters, then you have dramatically reduced the character set that will make up the final password. Two characters will be one of 10 possible values instead of one of 70. Two additional characters will be one of 26 possible characters instead of a 70-character potential character set. If you limit passwords to 8 characters, you are left with only 4 characters having the full set of 70 potential values. With these character restrictions in place, there are 1.6x10^12 possible passwords. Without these special character restrictions, but allowing numbers and special characters, you get a total of 5.76x10^14 possible passwords. Even if you only allowed upper and lower case letters and digits, you would still have 2.18x10^14 passwords. You can do the math any number of ways; requiring special characters will always weaken passwords. Now imagine the number of passwords when you require more than 8 characters.

    If you are responsible for defining complexity rules, I urge you to take these guidelines into account. What other guidelines do you follow?
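    If you want to check that arithmetic yourself, here is a quick C# sketch using the same simplified counting as above (fixed character-class positions and an assumed 70-character full set):

      // Quick check of the password-space arithmetic used above.
      using System;

      class PasswordSpace
      {
          static void Main()
          {
              // 8 characters, each from the full ~70-character set:
              double unrestricted = Math.Pow(70, 8);                    // ~5.76e14

              // 8 characters of letters and digits only (26 + 26 + 10 = 62 values):
              double alphanumeric = Math.Pow(62, 8);                    // ~2.18e14

              // One capital (26), one lower (26), one digit (10), one special (10),
              // with the remaining 4 positions keeping the full 70-value set:
              double restricted = 26d * 26 * 10 * 10 * Math.Pow(70, 4); // ~1.6e12

              Console.WriteLine("No restrictions:     {0:E2}", unrestricted);
              Console.WriteLine("Letters+digits only: {0:E2}", alphanumeric);
              Console.WriteLine("Forced char classes: {0:E2}", restricted);
          }
      }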

    Read the article

  • Managing Multiple dedicated servers centrally using a Web GUI tool?

    - by Sampath
    Application Architecture: I have a single Ruby on Rails application running as multiple instances (i.e. each client has an identical subdomain) on multiple dedicated servers, using Phusion Passenger + nginx. The subdomain setup is done using the vhost option in the nginx Passenger module. For example:

    - Server 1 serves clients 1-100 with identical subdomains www.client1.product.com up to www.client100.product.com
    - Server 2 serves clients 101-200 with identical subdomains www.client101.product.com up to www.client200.product.com
    - Server 3 serves clients 201-300 with identical subdomains www.client201.product.com up to www.client300.product.com

    My question: I need to centrally manage all my N dedicated servers using a GUI tool. I am looking for a web GUI tool to manage tasks like:

    1) backing up all MySQL databases automatically from all dedicated servers and sending them to an FTP backup drive
    2) backing up files and folders from all dedicated servers and sending them to an FTP backup drive
    3) managing the firewall (CSF http://configserver.com/cp/csf.html) centrally for all dedicated servers
    4) viewing server load and bandwidth usage graphically for all N dedicated servers

    Note: I would prefer an open source solution.

    Read the article

  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by J Swaroop
    Re-posting Bruce Tierney's original post - albeit a day late: I reckon this is Oracle's most interesting launch this year. Enjoy! The World’s First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps April 1, 2012 Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, “I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!” Thousands of meek and mild individuals eagerly lined up outside Oracle Corporation’s Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for the individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still spineless Exaskeleton hopefuls. Supporting Quotes “Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle’s traditional enterprise focus into new markets”, said Oracle representative Sabrina Twich, “Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies.” “With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong” said the Director of Oracle Exaskeleton, “Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn’t get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for… nevermind, I don’t want to be in this stupid press release anyway” Releases An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs) since technologies vendors have used up almost all of the 17,576 TLA permutations (TLAPs). About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this is true in some secret Oracle lab, but I doubt it. Trademarks Really…you’re still reading this? Cool! Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • Make Chrome’s New Tab Page More Useful and Artistic

    - by Asian Angel
    Are you tired of the default New Tab Page in Google Chrome and want something more useful and artistic? Then join us as we look at the Incredible StartPage extension.

    Before
    Here is the default “New Tab Page” in our Chrome Browser…it looks rather plain and boring. How about something better?

    Incredible StartPage in Action
    This is what our “New Tab Page” looked like after installing the extension. As you can see there is a “Note Section”, “Closed Tabs Section”, “All Bookmarks Section”, and a “Bookmarks Toolbar (links only) Section”. Note: Clicking on links in Incredible StartPage will open them in the current tab. If you want you can easily modify how Incredible StartPage looks using the “Options” in the upper right corner. After only a couple of minutes our “New Tab Page” was looking nice…new background color, image, and altered note. A very useful feature of the “Note Section” is that you can add your notes to an e-mail by clicking on the “Post to Gmail” link just below the note. Note: Special “Chrome Pages” (i.e. Extensions) will not open from the “Closed Tabs Section”. When you click on “Post to Gmail” a new tab will be opened with your notes pre-pasted into the main letter body. All that is left for you to do is select the appropriate e-mail address(es) and to make any desired modifications to the “Subject & Letter”. Going back to the “New Tab Page”, you can trade bookmarks back and forth between the “All Bookmarks Section” and the “Bookmarks Toolbar Section”. Simply drag-and-drop as desired…but keep in mind that any changes made here will also be reflected in your “Bookmarks Toolbar & Other Bookmarks”. There is our bookmark freshly traded over to the “Bookmarks Toolbar Section”…looking very nice.

    Conclusion
    If you are tired of the default “New Tab Page” in Google Chrome then the Incredible StartPage extension will make for a refreshing change.

    Links
    Download the Incredible StartPage extension (Google Chrome Extensions)

    Read the article

  • Installing Ubuntu 10.04 to external HDD overwrites the MBR of the internal HDD

    - by zkrpar
    I have an Asus A42F laptop which has Windows 7 32-bit installed on its internal HDD. I have just installed Ubuntu 10.04 on a portable HDD using the laptop. Now my laptop does not boot Windows 7 if the portable HDD is disconnected; I can only get the boot menu when the portable HDD is connected. The portable HDD does not boot when connected to another computer. Please help me, I want to:

    - Boot Windows from the internal drive, without GRUB
    - Boot Ubuntu from the external drive via the BIOS boot menu (F8 or F12)

    Read the article

  • How to boot Ubuntu 12.04 64-bit from a USB on a Compaq CQ58

    - by user208092
    I am trying to boot Ubuntu 12.04 64-bit on my Compaq CQ58 laptop from a USB stick, but it is not working. I've correctly installed Ubuntu on my pen drive following the instructions on the Ubuntu website (http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows). These are my BIOS settings:

    Post Hotkey Delay (sec) <0
    CD-ROM Boot
    Internal Network Adapter Boot
    Network Boot Protocol
    Legacy Support
    Secure Boot
    Platform Key Enrolled
    Pending Action None
    Clear All Secure Boot
    UEFI Boot Order:
      USB Diskette on Key/USB Hard Disk
      OS Boot Manager
      Internal CD/DVD ROM Drive
      ! Network Adapter

    With these settings, when I restart my computer what shows up is: "Boot Device Not Found." This is what I get in the Boot Manager:

    Boot Option Menu
      OS Boot Manager
      Boot From EFI File

    (Arrow Up) and (Arrow Down) to change option, ENTER to select an option. Press F10 for BIOS Setup Options, ESC to exit.

    PLEASE HELP... P.S. My laptop has Windows 8.

    Read the article

  • Mounting with Nautilus works but fstab gives "Host is down" error?

    - by Annan
    I'm connecting to my university's VPN so I can connect to the network drive. The VPN seems to be working fine and I can connect to the drive by typing the address into Nautilus and entering my login details:

    smb://139.___.___.140/home

    However, this fstab entry doesn't work:

    //139.___.___.140/home /media/___ cifs domain=CS,username==___,password=___,uid=sai,gid=sai 0 0

    Nor does manually mounting it:

    sudo mount -t cifs //139.___.___.140/home /media/___ -o domain=CS,username=___,password=___,uid=sai,gid=sai,user

    The only error it gives is:

    mount error(112): Host is down
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    It's obvious the host isn't down, since I can view the share from Nautilus. Why is Nautilus mounting it fine but not the normal mount command? What could cause this error?

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) – Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm one day, users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody who should have noticed the pattern in the DBCC output did.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere, not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
    Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.  The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows minus the text fields from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us. All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one: Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear? CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series) Ta da! Emergency mode repair (we didn’t have to resort to this one, thank goodness)   Dave Just because I can…   [1] Names have been changed to protect the guilty. [2] I'm Batman. [3] And if I'm the coolest head in the room, you've got even bigger problems...

    Read the article

  • Create bootable USB install image from command line?

    - by j-g-faustus
    I'm trying to create a bootable USB image to install Ubuntu on a new computer. I have done this before following the "create USB drive" instructions for Ubuntu desktop, but I don't have an Ubuntu desktop available. How can I do the same using only the command line? Things I've tried:

    - Create a bootable USB on Mac OS X following the ubuntu.com "create USB drive" instructions for Mac: doesn't boot.
    - usb-creator: according to apt-cache search usb-creator and Wikipedia, usb-creator only exists as a graphical tool.
    - "Create manually" instructions at help.ubuntu.com: none of the files and directories described (e.g. casper, filesystem.manifest, menu.lst) exist in the ISO image, and I don't know what has replaced them.

    (At my disposal are Mac OS X and Ubuntu Server; I have neither Ubuntu desktop nor Windows.)

    Read the article

  • Ubuntu 10.10 Netbook Remix on Asus Eee PC 701 4G - boot process hangs up

    - by Andrew
    I've got an old Eee PC 701 4G with the following specifications: 512 MB RAM, 4GB SSD drive SM223AC, 8GB SD card extension, 800 x 480 screen resolution, BIOS revision 1101 (05/16/2008), EC firmware version EPC-079. Windows XP SP3 works fine on it, but I decided to switch my OS to Ubuntu. I downloaded an Ubuntu 10.10 Netbook Remix ISO and wrote it to my FAT32 SD card using Universal-USB-Installer-1.8.3.3, as described on ubuntu.com. During a standard load from the SD card, the boot process hangs with a black screen. If I press F6 while Ubuntu is preloading, it successfully displays the boot menu, lets me select a language, and shows two main commands: "Run Ubuntu from USB drive" and "Install Ubuntu". Selecting either of these commands leads to the same result: after some background work, the main loading indicator is displayed (the "Ubuntu" text with a dotted progress bar under it), and it progresses forever to no effect. Is Ubuntu 10.10 compatible with my Eee PC at all? How do I boot it correctly?
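    A reasonable first check for a hang like this (my suggestion, not part of the original question) is to rule out a corrupted download or a bad write before blaming the hardware, and then to retry with more conservative kernel options from the F6 prompt:

        # Verify the image against the MD5SUMS file published on
        # releases.ubuntu.com (the filename here is an example)
        md5sum ubuntu-10.10-netbook-i386.iso

        # At the F6 boot prompt, kernel parameters commonly tried for
        # hangs on older Intel graphics / ACPI implementations:
        #   nomodeset acpi=off noapic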

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification.

    LTFS is really a wonderful feature, and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment -- not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out.

    LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations.

    You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. Here are a few example use cases where LTFS can be put to work.

    Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.
    Tape storage should be the preferred format because, as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk” (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read.

    Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts.

    So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly, and it’s free, which allows tape to maintain all of its cost advantages over disk!

    Note 1 - IDC Report. April 2011. “IDC’s Archival Storage Solutions Taxonomy, 2011”

    - Brian Zents
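    Since the article notes that a mounted LTFS cartridge behaves like any other filesystem, here is a rough sketch of that workflow using the open-source LTFS tools. Treat it as an illustration under assumptions: the /dev/sg3 device node, the mount point, and the data filename are placeholders, and exact command names and options vary by platform and LTFS release.

        # Format a cartridge with the two LTFS partitions (index + data)
        mkltfs -d /dev/sg3

        # Mount the cartridge as an ordinary filesystem via FUSE
        ltfs -o devname=/dev/sg3 /mnt/ltfs

        # From here, plain POSIX file operations work as on any disk
        cp seismic_survey_042.dat /mnt/ltfs/
        ls -l /mnt/ltfs

        # Unmount before ejecting so the index partition is updated
        umount /mnt/ltfs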

    Read the article

  • Searching for the Perfect Developer’s Laptop

    - by mbcrump
    I have been in the market for a new computer for several months. I set out with a budget of around $1200. I knew up front that the machine would be used for developing applications and maybe some light gaming. I kept switching between buying a laptop or a desktop, but the laptop won because I can carry a laptop everywhere, and with a desktop I can’t. I searched for about two weeks and narrowed it down to a list of must-haves:

      - i7 processor. I wasn’t going to settle for an i5 or AMD; I wanted a true quad-core machine, not two dual-cores fused together.
      - 15.6” monitor
      - SSD, 128GB or larger. It’s almost 2011 and I don’t want an old standard HDD in this machine.
      - 8GB of DDR3 RAM. The more the better, right?
      - 1GB video card (prefer NVIDIA). I might want to play games with this.
      - HDMI port. Almost a standard on new machines. This would be used when I am on the road and want to stream Netflix to the HDTV in the hotel room.
      - Built-in webcam. This would be to video chat with the wife and kids if I am on the road.
      - 6-cell battery. I’ve read that an i7 in a laptop really kills the battery. A 6-cell or 9-cell is even better.

    That is a pretty long list for a budget of around $1200. I searched around the internet and could not buy this machine prebuilt for under $1200, even with coupons and my company’s 10% Dell discount. The only way I would get a machine like this was to buy a prebuilt and replace parts. I chose the Lenovo Y560 on Newegg as my base. Below is a top-down picture of it.

    Part 1: The Hardware

    The specs for this machine:

      - Color: Gray
      - Operating System: Windows 7 Home Premium 64-bit
      - CPU Type: Intel Core i7-740QM (1.73GHz)
      - Screen: 15.6” WXGA
      - Memory Size: 4GB DDR3
      - Hard Disk: 500GB
      - Optical Drive: DVD±R/RW
      - Graphics Card: ATI Mobility Radeon HD 5730
      - Video Memory: 1GB
      - Communication: Gigabit LAN and WLAN
      - Card Slot: 1 x ExpressCard/34
      - Battery Life: Up to 3.5 hours
      - Dimensions: 15.20” x 10.00” x 0.80” - 1.30”
      - Weight: 5.95 lbs.

    This computer met most of the requirements above, except that it didn’t come with an SSD or 8GB of DDR3 memory. So I needed to start shopping, this time for an SSD. I asked around on Twitter and other hardware forums, and everyone pointed me to the Crucial C300 SSD. After checking prices, the drive was going to cost an extra $275, and I was going from a spacious 500GB drive to 128GB. After watching some of the SSD videos on YouTube, I started feeling better. Below is a pic of the Crucial C300 SSD.

    The second thing I needed to upgrade was the RAM. The machine came with 4GB of DDR3 RAM, but it was slow. I decided to buy the Crucial 8GB (4GB x 2) kit from Newegg. This RAM cost an extra $120 and had a CAS latency of 7. In the end this machine delivered everything that I wanted, and it cost around $1300. You are probably saying: well, your budget was $1200. I have spare parts that I’m planning on selling on eBay or Anandtech. =) If you are interested, shoot me an email and I will give you a great deal: mbcrump[at]gmail[dot]com.

      - 500GB laptop 7200RPM HDD
      - 4GB of DDR3 RAM (2GB x 2)
      - faceVision HD 720p camera – unopened

    In the end, my Windows Experience rating was 7.7 for the SSD and 7.1 for the CPU. The max you can get is 7.9.

    Part 2: The Software

    I’m very lucky that I get a lot of software for free. When choosing a laptop, the OS really doesn’t matter, because I would never keep the pre-installed bloatware, or Windows 7 Home Premium, on my main development machine.
    Matter of fact, as soon as I got the laptop, I immediately took out the old HDD without booting into it. After I got the SSD into the machine, I installed Windows 7 Ultimate 64-bit. The BIOS was out of date, so I updated it to the latest version and started downloading drivers off of Lenovo’s site. I had to download the wireless networking drivers to a USB key before I could get the machine on my wireless network. I also discovered that if the date on your computer is off, you cannot join a Windows 7 Homegroup until you fix it. I’m aware that most people like peeking into what programs other software developers use, so I went ahead and listed my “essentials” for a fresh build. I am a big Silverlight guy, so naturally some of the software listed below is specific to Silverlight. You should also check out my master list of Tools and Utilities for the .NET Developer. See a killer app that I’m missing? Feel free to leave it in the comments below.

    My software essentials list:

      - CPU-Z
      - Dropbox
      - Everything Search Tool
      - Expression Encoder Update
      - Expression Studio 4 Ultimate
      - Foxit Reader
      - Google Chrome
      - Infragistics NetAdvantage Ultimate Edition
      - KeePass
      - Microsoft Office Professional Plus 2010
      - Microsoft Security Essentials 2
      - Mindscape Silverlight Elements
      - Notepad 2 (with shell extension)
      - Precode Code Snippet Manager
      - RealVNC
      - Reflector
      - ReSharper v5.1.1753.4
      - Silverlight 4 Toolkit
      - Silverlight Spy
      - Snagit 10
      - SyncFusion Reporting Controls for Silverlight
      - Telerik Silverlight RadControls
      - TweetDeck
      - Virtual Clone Drive
      - Visual Studio 2010 Feature Pack 2
      - Visual Studio 2010 Ultimate
      - VS KB2403277 update (needed to get Feature Pack 2 to work)
      - Windows 7 Ultimate 64-bit
      - Windows Live Essentials 2011
      - Windows Live Writer Backup
      - Windows Phone Development Tools

    That is pretty much it. I have a new laptop and am happy with the purchase. If you have any questions, feel free to leave a comment below.

    Read the article

  • Culture Shmulture?

    - by steve.diamond
    I’ve been thinking about “Customer Experience Management” lately. Here at Oracle, we arguably have the most complete suite of applications for managing the customer experience across and in the context of multiple channels -- from marketing to loyalty to contact center to self-service to analytics offerings, and more. And stay tuned, because in coming months, let’s just say we’ll have even more to talk about on this front. But that said...

    Last weekend my wife and I stayed at one of the premiere hotel chains on the planet. I won’t name it, but we all know the short list. It could have been the St. Regis or the Ritz-Carlton or the Four Seasons or the Park Hyatt or... This stay, at this particular hotel, was simply outstanding. Within a chain known for providing “above and beyond” levels of service, this particular hotel, under this particular manager, exceeded expectations on so many fronts. For example, at the spa we mentioned to the two attendants that my wife is seven months pregnant and that we had previously had a lot of trouble conceiving. We then went to our room. Ten minutes later we heard a knock at the door and received a plate of chocolate-covered strawberries with a heartfelt note and an inspiring quote, signed by the two spa attendants.

    The following day we arranged to have a bellhop drive us to the beach. Although they had a pre-arranged beach shuttle service with time limits, he greeted us by saying, “I’m yours for the day until 4 p.m. Whatever you want to do is fine by me, as long as it’s legal!” The morning we left, we arranged to have a taxi drive us to the airport -- a nearly 40-mile drive. What showed up was a private coach, complete with navy-blue-suited driver dude, and we were charged the taxi fare. And there were many other awesome exchanges I won’t mention here, although I did email the GM of this hotel two nights ago and expressed our effusive praise and gratitude.

    I’d submit that this hotel chain would have a definitive advantage using even more Oracle software to manage and optimize its customer interactions (yes, they are a customer). But WITHOUT the culture -- that management team -- and that instillation of aligned values across all employees of exemplifying “the golden rule,” I wonder how much technology really matters in providing a distinctively positive and memorable customer experience.

    Lest you think I’m alone in these pontifications, have you read Paul Greenberg’s blog lately? Have you seen one of his most recent posts? Now that SPECIFIC post is NOT about customer service per se. But it is about people. So yes, please think long and hard about the technology you seek to deploy. But never forget who will be interacting with your systems, and your customers.

    Read the article
