Search Results

Search found 22380 results on 896 pages for 'hard drive failure'.


  • External hard disk not working on Windows after I formatted it on Ubuntu

    - by nav_jan
    I formatted my Western Digital 500 GB hard disk on Ubuntu 10.04 and now I want to use it on Windows 7, but Windows cannot detect it. I formatted it with the FAT ("applicable to all") option. I Googled the problem a bit and, as suggested by one site, tried formatting the drive with NTFS instead. Windows still cannot detect it. The USB ports in Windows 7 are not the problem, because a different USB drive works fine on them. I can see the drive's LED glow when I connect it, and the "Safely Remove Hardware" option appears in the lower right corner, but there is no entry in "My Computer" to access the disk. I am new to Ubuntu. Any help is appreciated.

    Read the article

  • New 2.5" hard drive for laptop - What to compare?

    - by TFM
    I'm having trouble finding a new (bigger) hard drive for my laptop. While checking a price comparison site I came across some criteria I had never thought about before, which of course only made me more confused. First of all, I will probably go with something above 250 GB and at least 16 MB of cache. Now the confusing part: most new drives are 7200 RPM, as opposed to the good old 5400 RPM. 7200 RPM used to mean extra heat, but suddenly it's almost impossible to find a 5400 RPM drive in 2.5". What did I miss? Second question: internal data transfer rate. My old drive has a rate of around 60 MB/s, but new drives list values like 100 MB/s or more (e.g. 150 MB/s). How important is this "internal data transfer rate"?

    Read the article

  • Server 2008 Task Scheduler Mapped Drive Access C#

    - by user219313
    I'm trying to get Server 2008's Task Scheduler to run a C# console app which backs up data to a mapped backup drive somewhere on Fasthosts' network. I've written a test app which simply does this: Directory.CreateDirectory(@"Z:\" + DateTime.Now.Ticks.ToString()); i.e. it just creates a directory in the root of this Z: drive. This works fine when I run the .exe directly, but when I schedule it in Task Scheduler it says the task has completed with return code 3762507597 - I can't find any info on what this means. I'm running the task with the highest Admin privileges as far as I can see.
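
    For what it's worth, 3762507597 is 0xE0434F4D in hex, the SEH code the CLR reports for an unhandled .NET exception, and drive letters mapped in an interactive session are generally not visible to a task running under Task Scheduler's own logon session, so the CreateDirectory call most likely throws. A minimal sketch of a workaround that writes to the share's UNC path instead; \\backuphost\backup is a hypothetical name standing in for the real Fasthosts share:

        // Sketch (untested): write to the UNC path directly instead of a per-user
        // mapped drive letter, and surface any exception so the task's return code
        // is meaningful. "\\backuphost\backup" is a made-up share name.
        using System;
        using System.IO;

        class Backup
        {
            static int Main()
            {
                try
                {
                    string target = Path.Combine(@"\\backuphost\backup", DateTime.Now.Ticks.ToString());
                    Directory.CreateDirectory(target);
                    return 0;
                }
                catch (Exception ex)
                {
                    // Visible in redirected output / the task history instead of a bare SEH code.
                    Console.Error.WriteLine(ex);
                    return 1;
                }
            }
        }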

    Read the article

  • USB flash drive serial number specification

    - by clyfe
    I retrieve a USB flash drive's serial number by means of the ioctl HDIO_GET_IDENTITY, as described here. Yet for some flash drives there is no serial number (for example my SanDisk Cruzer). Why do some drives not return a serial number? a) HDIO_GET_IDENTITY is not implemented in the driver? b) They just don't have one? c) Something else? (What?) Is there a specification (like an IEEE standard) that describes where and how the serial number is stored inside the flash drive?

    Read the article

  • Can a power failure or forceful shutdown damage hardware?

    - by Vilx-
    In an unrelated Internet forum I got into a discussion about hardware damage from forceful shutdowns (holding the power button for 5 seconds) and power failures. I was of the opinion that normal PC hardware does not suffer from this - after all, it's not much different from what it experiences during a standard shutdown. But another person thought that it could do physical harm to the hard drive and possibly other components as well. He also said that the journaling features of filesystems are useless in the face of power failures and were only intended to help mitigate damage from system crashes. Now... I think this is nonsense, but then again I lack the experience and knowledge to say so with certainty. Perhaps someone else is more knowledgeable in this area and can shed light on this burning issue? :)

    Read the article

  • Booting from an eSATA drive

    - by petersohn
    I have an HP EliteBook laptop (I don't know the exact product number), which has an internal hard drive with Windows installed on it. I also have an external hard drive with eSATA and USB ports, with Linux installed on it. When I try to boot from the external drive, it works over USB but not over eSATA. In the BIOS setup I have the following boot order set: External SATA drive, USB Hard Drive, Notebook Upgrade Bay, Notebook Hard Drive, etc. When I boot from another drive (such as the internal hard drive or from CD-ROM) with the eSATA cable connected, it works perfectly. Is there any way to boot from the eSATA drive?

    Read the article

  • SMART: DISK FAILURE IS IMMINENT (under 24 hours?)

    - by flix
    I have two OSes on my hard drive: Ubuntu 12.04 and Windows Vista (which I keep just for school). Everything was OK on both OSes, but one day on Ubuntu I started getting awkward noises from my notebook's hard drive, and then everything stopped and I couldn't do anything. On Windows everything was fine. Every time I boot into Ubuntu I get about 5 minutes of normal use without problems; after that the hard drive sounds crazy and nothing works. I ran S.M.A.R.T. tests from an older Ubuntu CD (10.04), both from the GUI (Disk Utility, or something like that) and from the terminal. The GUI told me that DISK FAILURE IS IMMINENT and that I have ~700 bad blocks (or broken blocks - I ran that test a while ago). From the terminal (I don't remember if it was fsck or a SMART test command) I was told the HDD will fail in under 24 hours. That was 2-3 weeks ago. I've tried "badblocks", but after 10 hours it was still running and I had to stop it. Now I have to use Cygwin and other alternatives to my Linux apps on Windows. PLEASE HELP! How can I separate out the bad blocks in Ubuntu so it won't use them?

    Read the article

  • 12.04 doesn't boot anymore after a power failure

    - by Felix
    I'm a Windows user and I have no experience with Linux and Ubuntu. I installed Ubuntu 12.04 on my netbook (Asus 1215B) and everything worked fine. Yesterday I ran the update application and updated over 120 "things" (I have no idea what exactly). After that I was asked to reboot, and I did. Ubuntu started again, and at the load screen with the 5 dots that normally change color, it froze. After 20 minutes I took out the battery to try another reboot (yes, not the best idea), and now nothing happens. I boot from the HDD and I get the error "BOOTMGR is missing". I have important data on the hard drive. Is there a way to get this fixed? Or if not, to at least recover the data from the hard drive? Ubuntu 12.04 64-bit. Edit: Ubuntu is the ONLY OS on this netbook, which uses the whole 500 GB HDD as one partition. The filesystem is NTFS. The hardware seems okay. The USB drive I used to install the OS was formatted as FAT32.

    Read the article

  • How to let hard drive sleep in RAID1 configuration?

    - by Al Kepp
    Normally in Windows 7 a hard drive stops spinning when it is not used for a while. This can be configured in Windows, and I use it on computers which are turned on 24/7 but not used very often. My problem is on a computer with an Intel X79 chipset with an integrated RAID controller. Windows 7 is installed on an SSD, and there is a RAID1 array with two SATA HDDs for data. Those SATA drives aren't used much, so I'd like to let them sleep (i.e. stop spinning), but they ignore the settings in Windows. How can I let them sleep when using RAID1? It seems to me that those drives are "unstoppable": they spin 24/7 even when they aren't used at all. Maybe they would behave normally if I used Windows-based software RAID, but I use the hardware RAID controller. Is there a way to let them stop spinning and sleep after, for example, 3 or 5 hours of inactivity (i.e. the same way they would behave in Windows without RAID)?

    Read the article

  • smartctl not returning on HBA that's secure-erasing a different drive

    - by Stu2000
    Whenever I run smartctl -i /dev/sd*, where * is a drive that is plugged into the same host bus adapter as another drive that is currently being erased with an hdparm secure-erase command, the smartctl command just hangs (blocked) and does not return until the erasure of the other drive is finished. To make matters worse, you can't Ctrl-C out of it. Has anyone else had this issue? Is there another way to retrieve SMART data from a drive that doesn't block? I noticed that I can still use the udevadm command to retrieve the serial and model of the drive, which is useful but doesn't appear to include any SMART data. Any information relating to this matter is appreciated, especially if you can tell me another way to retrieve the S.M.A.R.T. data that might work. Regards, Stuart

    Read the article

  • Installing Ubuntu 12 on SATA III drive

    - by Jared
    I am trying to install Ubuntu 12.04 on a SATA III drive, but the installer will not recognize the drive in the guided (dual-boot) install. I have changed the controller from IDE to AHCI to no avail; the guided install still only recognizes my very small second drive, which is plugged into a SATA II port. The thing is, the unguided (manual) install sees the SATA III drive just fine - I'm just not sure enough of what I'm doing to feel safe installing via that method. Is there a fix for this beyond plugging the drive into a SATA II port? I would really like to avoid that, because with my terrible cable management skills it would be a huge pain to switch it over.

    Read the article

  • How can I write directly to my Zune HD hard drive?

    - by iamgoat
    When syncing photos to the Zune HD, it resizes them down to a much lower resolution, which means I cannot load a high-res picture on it (a comic book) and zoom in to read it. This defeats the whole purpose of having a zoom feature. There is a registry hack you can make to get the Zune to show up under My Computer. Then, if you killed the Zune process while it was syncing, you could access it like a hard drive and copy files to it. It seems the more recent firmware and/or Zune software version now prevents this. How can I treat it like an HDD and copy files to it? I simply want to take my original pictures folder and copy it over the low-resolution versions the Zune software created. An alternative option would be to remove the hard drive and see if I can connect it to a computer directly, but I just got this device and don't want to disassemble it yet. Note to Microsoft: why do you allow me to set the encoding quality of music, but not of photos?

    Read the article

  • Google I/O 2012 - What's Possible with the Google Drive SDK

    Presented by Nicolas Garnier. Partners of Google Drive have already implemented a number of extremely compelling applications that use Google Drive for file storage. Building on the Google Drive SDK enables developers to distribute the cost of storage, while also removing the pain of reimplementing file management. In this session, we'll take a look at a number of existing Google Drive SDK implementations in popular apps. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 56:25.

    Read the article

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video. Any ideas as to how I can fix it? My iPod classic 160 GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7. iTunes is on the latest version (9.1.1.12), the iPod software is up to date (1.1.2), and Windows 7 is fully up to date and patched. The symptoms: the iPod starts to sync, all audio (music and podcasts) syncs successfully, but the sync then just appears to continue forever - iTunes shows "Syncing iPod. Do not disconnect." and never completes; I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync from the default options and again syncs audio but nothing else. I suspect that something has gone wrong on the hard drive - either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod and work as normal after the format? Any advice on what to try next is much appreciated. Update: I have eliminated problems with the files, PC, or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables which work with other iPods. What I'd really like to know is whether there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Has anyone ever managed this?

    Read the article

  • Can an OS be copied from one hard drive to another and still boot?

    - by AlexMorley-Finch
    Background: My computer gets stuck on the make-and-model screen after the BIOS screen, aka the Toshiba screen. After some research I've realized that the problem is the hard drive. I'm using an old 250 GB model that USED to be used for backup purposes; however, I loaded Windows 7 Ultimate onto it. This hard drive has trouble getting up to full RPM and therefore cannot boot correctly until it's warmed up, meaning that my PC needs to be restarted several times before it boots (once it took me 13 reboots to get my PC on!). From my research it's either that or a lack of power, and I've tried multiple PSUs.

    Question: I have my OS and all my files on this 250 GB HDD. If I were to literally open Explorer and copy EVERYTHING (including hidden files, obviously) from this 250 GB drive to a spare 500 GB drive I've got knocking about, will it boot if I just copy everything? I cannot be bothered to load another OS onto my PC, so if there is a way to just copy the existing one over from one HDD to another and have it boot normally, that would be epic! I've heard about HDD cloning software, but before I purchase and/or download such software, I need to know if I can just copy the OS over through Windows Explorer.

    Read the article

  • Creating an install drive - can not open output file autorun.inf

    - by user226881
    I am trying to make an install/boot drive for a computer that has no operating system and no optical drive. I used the ISO from ubuntu.com and the USB creator from pendrivelinux.com. When the program starts writing to the flash drive, it displays an error that says: "0 can not open output file E:\autorun.inf", but it continues to write data. After it has finished, I remove the drive, insert it into the other computer, and turn it on, but it never finds a drive to boot from. What is causing this problem and how can I fix it?

    Read the article

  • Ubuntu not saving files and settings when running from flash drive

    - by user81217
    How can I make Ubuntu run completely off of a flash drive? I have put Ubuntu onto a 4 GB flash drive, but no changes I make are saved between sessions. I want to be able to run from, and save everything I do to, the flash drive; I don't want it touching my hard drive at all. I just want to be able to plug my flash drive into a computer, boot Ubuntu, and have it save my changes. E.g. when I install Google Chrome, it should still be there after I reboot.

    Read the article

  • https://www.googleapis.com/drive/v2/files/<fileid>/comments?alt=json returned "Not Found" on a file that can't be opened

    - by Kartik Ayyar
    More details below. https://www.googleapis.com/drive/v2/files/1iNMGIAFXuhS_CO_hnEO0_EJ9PAgT-hXYqWYv0MPGUTI/comments?alt=json returned "Not Found". The file is present in Drive and shows up in drive.changes.list, but it can't be opened in Google Drive either. There are two problems here: a) the file is somehow corrupt (it was a document imported into Drive, so the import failed, but that isn't something I care about for the purposes of this question); b) the file shows up as existing in some API calls, but calls to read its comments with the Drive SDK comments API fail. Here are results from an API call showing that the file does indeed exist:

        "file": {
          "kind": "drive#file",
          "id": "1iNMGIAFXuhS_CO_hnEO0_EJ9PAgT-hXYqWYv0MPGUTI",
          "etag": "\"o35FABD0TC3H-Up3OL3UA9kEB2w/MTM3MTc2NzU5NzEyNA\"",
          ....
          ....
          "iconLink": "https://ssl.gstatic.com/docs/doclist/images/icon_11_document_list.png",
          "title": "<removed>",
          "mimeType": "application/vnd.google-apps.document",
          "labels": {
            "starred": false,
            "hidden": false,
            "trashed": true,
            "restricted": false,
            "viewed": true
          },
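
    Worth noting: the metadata above reports "trashed": true, which may be related to the Not Found responses. A minimal defensive check before calling comments.list, sketched with the Google.Apis.Drive.v2 .NET client (the same library that appears in the invalid_grant question further down this page); the CommentCheck class and method names are made up for illustration:

        // Sketch (untested): inspect the file's metadata before asking for comments.
        // "service" is assumed to be an already-authorized DriveService instance.
        using System;
        using Google.Apis.Drive.v2;

        static class CommentCheck
        {
            public static void PrintComments(DriveService service, string fileId)
            {
                var file = service.Files.Get(fileId).Execute();

                // The question's metadata shows "trashed": true; a trashed or broken
                // import is a plausible reason other calls return Not Found.
                if (file.Labels != null && file.Labels.Trashed == true)
                {
                    Console.WriteLine("File is in the trash; skipping comments.list.");
                    return;
                }

                var comments = service.Comments.List(fileId).Execute();
                foreach (var comment in comments.Items)
                {
                    Console.WriteLine(comment.Content);
                }
            }
        }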

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately cause code to be reusable or not, regardless of the language or methodology.

    But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be that I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse - it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet it can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense, that is, in the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times where I want to kick off a thread to handle a task, and when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        using System;
        using System.Threading;

        public static class ThreadExtensions
        {
            // Joins the thread for up to timeToWait; aborts it if the join times out.
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }

                return isJoined;
            }
        }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath it, which in turn is largely dependent on the business definition of an account, which can be inherently unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in the chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate it as needed.

    It's no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity Framework for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't "smell" right. We ended up having session after session of design meetings to try to find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: "Reuse is hard..." And that's when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

    Think about the times you've worked in a company where, in the design session, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, "Yes, this is the best design, I can extend it easily!" only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc).
    - Simplification and Uniformity Frameworks – to some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way. Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?

    - Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, and so tend to be an unstable dependency. They may be reused across applications, but are also very volatile.
    - Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what's the big lesson? Reuse is hard. In fact it's damn hard. And much of the time I'm not convinced we should focus too hard on it. If you're designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • NASM - Load code from USB Drive

    - by new123456
    Hi, would any assembly gurus know the value (in register dl) that signifies the first USB drive? I'm working through a couple of NASM tutorials and would like to get a physical boot (I can already get a clean boot with qemu). This is the section of code that loads the "kernel" data from disk:

        loadkernel:
            mov si, LMSG       ;; 'Loading kernel',13,10,0
            call prints        ;; ex puts()

            mov dl, 0x00       ;; The disk to load from
            mov ah, 0x02       ;; Read operation
            mov al, 0x01       ;; Sectors to read
            mov ch, 0x00       ;; Track
            mov cl, 0x02       ;; Sector
            mov dh, 0x00       ;; Head
            mov bx, 0x2000     ;; Buffer segment
            mov es, bx
            mov bx, 0x0000     ;; Buffer offset
            int 0x13
            jc loadkernel

            mov ax, 0x2000
            mov ds, ax
            jmp 0x2000:0x00

    If it makes any difference, I'm running a stock Dell Inspiron 15 BIOS. Apparently, the correct value for me is 0x80: the BIOS enumerates hard drives starting at 0x80 according to this answer, and my particular BIOS decides to present the USB drive as the first one, so I can boot from there.

    Read the article

  • Google Drive API invalid_grant after removing access

    - by Sparafusile
    I have been writing a desktop application that uses the Google Drive API v2. I have the following code:

        var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            new ClientSecrets
            {
                ClientId = ClientID,
                ClientSecret = ClientSecret
            },
            new[] { DriveService.Scope.Drive },
            "user",
            CancellationToken.None
        ).Result;

        this.Service = new DriveService(new BaseClientService.Initializer()
        {
            HttpClientInitializer = credential,
            ApplicationName = "My Test App",
        });

        var request = this.Service.Files.List();
        request.Q = "title = 'foo' and trashed = false";
        var result = request.Execute();

    The first time I ran this code it opened a browser and asked me to grant permissions to the app, which I did. Everything worked successfully until I realized I was using the wrong Google account. At that point I logged into that wrong Google account and revoked the app's access. Now, whenever I run the same code, it throws an exception:

        Error:"invalid_grant", Description:"", Uri:""

    When I examine the service and request objects, it looks like the oauth_token isn't getting created any more. I know what I did to mess things up, but I can't figure out how to correct it so I can use a different Google account for testing. What do I need to do?
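
    A likely cause: revoking the app's access invalidates the refresh token that GoogleWebAuthorizationBroker cached on disk, and later AuthorizeAsync calls keep replaying that dead token, producing invalid_grant. A minimal sketch of one way out, assuming the same Google.Apis.Auth .NET library; the FileDataStore folder name and the "test-account-2" user key are made-up values:

        // Sketch (untested), assuming the Google.Apis.Auth / Google.Apis.Drive.v2
        // NuGet packages from the question. "MyTestApp.Auth.Store" and the
        // "test-account-2" user key are hypothetical names.
        using System.Threading;
        using System.Threading.Tasks;
        using Google.Apis.Auth.OAuth2;
        using Google.Apis.Drive.v2;
        using Google.Apis.Services;
        using Google.Apis.Util.Store;

        static class DriveAuth
        {
            public static async Task<DriveService> AuthorizeAsync(string clientId, string clientSecret)
            {
                // Keep the cached tokens in an explicit store instead of the library default.
                var dataStore = new FileDataStore("MyTestApp.Auth.Store");

                // After revoking the app's access, the cached refresh token is dead and
                // keeps producing invalid_grant; clear it so the broker re-runs the
                // browser consent flow on the next call.
                await dataStore.ClearAsync();

                var credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
                    new ClientSecrets { ClientId = clientId, ClientSecret = clientSecret },
                    new[] { DriveService.Scope.Drive },
                    "test-account-2",               // a distinct key keeps tokens per account
                    CancellationToken.None,
                    dataStore);

                return new DriveService(new BaseClientService.Initializer
                {
                    HttpClientInitializer = credential,
                    ApplicationName = "My Test App",
                });
            }
        }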

    Read the article

  • SSD drive not being recognized in BIOS

    - by chobo2
    Well, I bought my first SSD, a Mushkin Chronos 180 GB, installed it in my computer, and booted up. I went into Windows 7 and initialized the drive, then installed "SSDlife Free" and ran it; the SSD came up and it said it was "powered on 3 times" (I thought that was odd, but figured maybe it was from some testing?). I then restarted my computer and loaded into Acronis, went to the SSD, and made a partition called "windows" (a basic logical partition). I then loaded up Norton Ghost and wanted to copy my current Windows onto the partition I had made on the SSD. I found out I could not do it through the recovery disk, so I made a backup of my Windows drive and planned to restore it onto the SSD. I came back an hour later when the backup was done, tried to restore it onto the SSD, and could not find the partition, so I loaded up Acronis again and it did not see the drive. I then went into the BIOS and saw only my other hard drive. What I tried:

    - Unplugging and replugging both the SATA and power cables.
    - Using the power and SATA cables from the working drive for the SSD.
    - SATA AHCI Mode (Intel ICH9 Southbridge).
    - SATA PORT0-1 NATIVE MODE (Intel ICH9 Southbridge).

    Nothing worked. Software/hardware: Windows 7 Ultimate; Gigabyte S-Series GA-P35-DS3L motherboard. I hope someone has some ideas on why it is not being recognized.

    Read the article

  • Why are there hard faults when my RAM is not 100% used?

    - by Vilx-
    I've got 2 GB of RAM and Resource Monitor shows that only about 75% of it is used. However, there are some apps (NetBeans, Visual Studio) that every once in a while start generating a lot of hard faults (up to and over 2000/min), predictably slowing to a crawl. How is this so? The memory usage during these "fits" doesn't change. Perhaps hard faults also count memory-mapped files or something?

    Read the article

  • Which Ubuntu-like Linux OSs work well on a flash drive?

    - by Evan Kroske
    I want a Linux OS that I can load onto a flash drive, but I don't want to relearn an entire operating system, so I want to know which tiny Linux installations are most like Ubuntu. For example, I'd like to use the apt-get package manager, the Gedit text editor, and the bash shell. I'd like to use something that's already popular, stable, and highly compatible, but it needs to fit comfortably in one gigabyte of my four-gigabyte flash drive (just the essentials; I'll use the remaining three gigabytes to store installed programs and files). I have no preference for window managers; I just want something small and fast that works like Ubuntu. What is the most popular Ubuntu-like OS that can easily be run from a thumb drive? Edit: I'm not sure I understand how this works. I don't want to use a USB drive as a live CD; I want to plug in a USB stick and use the computer as if it were my own. In other words, I want to be able to install programs on the drive on one computer and use them on another. Do any of these OSs let me do that? Please forgive my ignorance.

    Read the article

  • All my folders and files on my flash drive have been renamed automatically and I can no longer open them... I need those files

    - by jennifer
    I opened up my flash drive this morning and all of my folders and files were normal, except for one folder and all of its contents, which is the most important folder. Its subfolders and files have been renamed with bizarre characters, and when I click to open them a pop-up appears saying the item is not accessible and the filename or directory name is incorrect. I don't want to reformat the flash drive because I'd lose all those files. Is there a way for me to restore them or something? I would attach a screenshot, but apparently new users do not have that privilege. If you have a vague idea of what I'm talking about, let me know and I can email you screenshots so you can have a better understanding. Any help is greatly appreciated!

    Read the article
