Search Results

Search found 15376 results on 616 pages for 'once'.


  • DRS: Unknown JNLP Location

    - by Joe
    We are using Deployment Rule Sets to limit access to the older JRE to a list of well-known applications, but we are running into a problem. One business-critical application has the following properties (*s to protect info):

        title: Enterprise Services Repository
        location: null
        jar location: http://app.*.com:52400/rep/repository/*.jar
        jar version: null
        isArtifact: true

    The application downloads a .jnlp file and uses Java Web Start to execute. Since the location is null, this application cannot be targeted by a location rule, and the certificate hash method only works when the application is cached (i.e., it has been run more than once). If cache storing is off, which is the case in some situations, how can this application be targeted, or at least told to run with an older JRE on start? This problem is specifically noted in this bug. Thanks!
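
    For reference, the two ways a DRS ruleset.xml can target an app look roughly like the sketch below. This is only an illustration: the hostname and certificate hash are placeholders, and as noted above the location rule fails for this app because its location is null.

        <ruleset version="1.0+">
          <rule>
            <!-- Location rule: does not work here, since the app reports location: null -->
            <id location="http://app.example.com:52400" />
            <action permission="run" version="SECURE-1.6" />
          </rule>
          <rule>
            <!-- Certificate rule: matches the signing certificate, but only once the app is cached -->
            <id>
              <certificate algorithm="SHA-256" hash="...placeholder hash..." />
            </id>
            <action permission="run" version="SECURE-1.6" />
          </rule>
        </ruleset>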


  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT) (2 answers)

    A few months ago I received the following notification in Google Webmaster Tools for a website I look after:

        Unnatural links to your site—impacts links
        Google has detected a pattern of unnatural artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster's control, so for this incident we are taking targeted action on the unnatural links instead of on the site's ranking as a whole. Learn more.

    The question here is, should we actively attempt to disavow these links, given that the action is seemingly targeted at just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster Tools, and so far I've been through the disavow and reconsideration request process 6 times, each round taking 2-3 weeks only to be supplied with just 2 more links that Google doesn't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! It seems disavowing is futile, as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above, however, is a reconsideration request button that seems to imply I should be actively doing something here?

    UPDATE 14th October -- I have since created a small .NET application that you can feed the CSV sample links file from Google Webmaster Tools. The tool crawls all the links and looks for specific linking patterns, as per some configurable match strings. I realised that many of the links Google is taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with 1 of 5 different descriptions. The application uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow .txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.
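
    In the meantime, here is a minimal sketch of the approach described above -- not the author's actual tool. It assumes the WMT export contains one link URL per line and that the five appended descriptions are known (placeholder patterns and file names used here).

        // Sketch only: hypothetical match strings and file names.
        using System;
        using System.IO;
        using System.Linq;
        using System.Net.Http;
        using System.Text.RegularExpressions;

        class DisavowBuilder
        {
            static readonly HttpClient Http = new HttpClient();

            static void Main()
            {
                // The boilerplate descriptions the rogue SEO firm appended (placeholders).
                var patterns = new[] { "best plumber in town", "cheap widgets online" }
                    .Select(p => new Regex(Regex.Escape(p), RegexOptions.IgnoreCase))
                    .ToArray();

                var badDomains = File.ReadLines("wmt-links.csv")
                    .Where(url => PageMatches(url, patterns))
                    .Select(url => new Uri(url).Host)
                    .Distinct();

                // Google's disavow file format: one "domain:" directive per line.
                File.WriteAllLines("disavow.txt", badDomains.Select(d => "domain:" + d));
            }

            static bool PageMatches(string url, Regex[] patterns)
            {
                try
                {
                    string html = Http.GetStringAsync(url).Result;
                    return patterns.Any(rx => rx.IsMatch(html));
                }
                catch { return false; } // dead or unreachable link: skip it
            }
        }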


  • Visual Studio 2010 Service Pack 1, now available for download

    - by Harish Ranganathan
    Visual Studio 2010 Service Pack 1 (SP1) has now been available for general download for almost a week.  The Beta of SP1 came a couple of months back; it brought a lot of performance enhancements, added support for HTML5 tags, and included a few other improvements related to web development.  Now, the final release of SP1 is available.  The good part is that, if you had installed the SP1 Beta, you don't have to remove the Beta and start all over again.  You can apply the final release on top of the Beta and it works like a charm. So, in simplified terms, what is new in Visual Studio 2010 SP1? Before I start listing it down, I was checking if there was an MSDN article available on this and found http://msdn.microsoft.com/en-us/library/gg442059.aspx  While its title reads (Beta), the same holds good for the final release as well.  Unlike VS 2008 SP1 and .NET 3.5 SP1 (which came together), this release doesn't add any new project templates/item templates. However, there are a lot of enhancements related to Web Deployment, Debugging and Unit Testing for .NET 3.5 applications. So, how does one find out whether you are running the final release of SP1? While the SP1 Beta (Help – About Visual Studio) reads Microsoft Visual Studio 2010 Version 10.0.3118.1 SP1 Rel, once you install the SP1 RTM release the version string changes accordingly (screenshot omitted). The download link for SP1 is here. Cheers!!!


  • IT Optimization Plan Pays Off For UK Retailer

    - by Brian Dayton
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things:

    1) Morrisons truly has a long-term strategy for IT -- in this case, modernizing and optimizing how it uses IT for business advantage.

    2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit.."

    3) The phased, 3-year "Optimization Plan" took a holistic approach to the business -- from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and documenting who has access to what.

    Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.


  • How would I add a second physical hard drive to Proxmox?

    - by Cygnus X
    I installed Proxmox on a single 250GB hard drive, and I would like to add a second identical hard drive to put more VMs on. I already tried once and didn't get very far. I added the drive and formatted it as ext4, but when I went to use the disk, it said only 8GB was available. That's not quite right. So I did some searching and found that I had to set the partition type to 8e (Linux LVM). After I did this, it said I had to restart, so I did... and it wouldn't boot!!! What did I do wrong? And how do I do it right? (I know I could throw in a RAID card and do a RAID 0, but I'd rather not.)
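
    For reference, the usual sequence for bringing up a second disk as LVM storage looks roughly like this. It is a sketch only: it assumes the new drive is /dev/sdb, and the classic mistake that makes a box unbootable is running these steps against the boot drive instead.

        # Double-check which device is the NEW disk first (e.g. with lsblk or fdisk -l).
        fdisk /dev/sdb              # n = new partition, t = type 8e (Linux LVM), w = write
        pvcreate /dev/sdb1          # initialise the partition as an LVM physical volume
        vgcreate vmdata /dev/sdb1   # create a volume group just for guest disks

    After that, the 'vmdata' volume group can be added as storage in the Proxmox web UI (Datacenter > Storage > Add > LVM in current versions).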


  • I just ordered 70/10 internet service, and I think I need a new router?

    - by data_jepp
    Before, I had 25/5 service and my 802.11n router did just fine. Now it doesn't do the job. An online speed test reads 82 Mb/s, so the line is fine, but my laptop is getting less than 30 Mb/s in my room. My laptop has the following Wi-Fi card: http://www.intel.com/content/www/us/en/wireless-products/centrino-advanced-n-6205.html What is this talk about 2.4 and 5GHz? Can my laptop be connected over both bands at once? And would that let me use the full 70 Mb/s over Wi-Fi?


  • NetBSD as a VMware Workstation guest: `startx` hangs and maxes out all CPUs

    - by Howard Guo
    I am using VMware Workstation 8. I have attempted to install and run NetBSD 5.1.2 and 6.0. The installations all went OK, and the system was usable until I installed a window manager. After installing xfce4 on NetBSD 5.1.2, I could run startx and use xfce4 twice; after that, every startx would hang and max all CPUs at 100%. On NetBSD 6.0 RC2, I could not even start xfce4 once: startx hangs and maxes all CPUs at 100%. I have tried both the vmwlegacy and vmware device drivers; they don't help. I have also tried both 32-bit and 64-bit NetBSD; they behave the same way. I also tried to capture the output of startx, but the system was already hung before the output got flushed. Apparently no one else has encountered this trouble, judging by a Google search. Did I miss any configuration piece? Any other suggestions, please?


  • What, if anything, to do about bow-shaped burndowns?

    - by Karl Bielefeldt
    I've started to notice a recurring pattern in our team's burndown charts, which I call a "bowstring" pattern. The ideal line is the "string", and the actual line starts out relatively flat, then curves down to meet the target like a bow. My theory on why they look like this is that toward the beginning of a story, we are doing a lot of debugging or exploratory work for which it is difficult to estimate the remaining effort. Sometimes the line even goes up a little as we discover a task is more difficult once we get into it. Then we get into implementation and test, which are more predictable, hence the downward-curving graph. Note I'm not talking about a big scale like BDUF, just the natural short-term constraint that you have to find the bug before you can fix it, coupled with the fact that stories are most likely to start toward the beginning of a two-week iteration. Is this a common occurrence among scrum teams? Do people see it as a problem? If so, what is the root cause, and what are some techniques to deal with it?


  • "Give with Bing" - Help raise money for Sports relief while searching for whatever you want

    - by Testas
    While Sport Relief drives fundraising by challenging people to do physical activities such as running a mile, we’re introducing the ‘Bing Search Mile’, which gives people the ability to search using Bing and raise money for charity. For every 10 searches made, Bing.com will donate 5p to Sport Relief 2010, enabling you, and your friends and family, to raise money just by searching with Bing until the end of March. With the average mile taking about 10 minutes to run, in the same time you can make up to 150 searches online - that’s 75p raised for a good cause per ‘search mile’. And while you’re at it, why not step it up a gear and aim to complete a ‘Search Mile’ each day, or even a ‘Search Marathon’ over the 5-week campaign with your colleagues, friends and family? How to get involved:

    1. Visit GiveWithBing.com and download the Official Sport Relief Bing Counter. Once downloaded, the Sport Relief counter will count all the searches you do on Bing from that point on.

    2. Now that you’re registered (and signed in), invite your friends, family, colleagues or classmates to join in the fundraising with you – GiveWithBing.com automatically generates an email explaining how it works for you to send them – the more people who search with you, the more money you raise. People can also register a school.

    3. Run your ‘search mile’ every day and watch how your searches turn into life-changing cash for charity, with every 10 searches equalling 5p for Sport Relief. You can check your progress by visiting your individual page (more info here).

    This is such a positive initiative, and I challenge everyone in the UK to invite their key contacts to be part of Give with Bing.   Chris


  • Duplicate content issue after URL change with 301 redirects

    - by David
    We have the following problem:

    - We changed all URLs on our page from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs).
    - Google re-crawled our page and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at them anymore after the URL change.
    - This resulted in massive ranking drops, etc., because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed.

    Now we have reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for all redirects to be noticed. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it." Possible solutions we have thought of are:

    - Submitting as many of the old URLs as possible via Webmaster Tools, to manually trigger a crawl (we are doing that already).
    - Submitting a sitemap with all old URLs - but we are not sure if that is a good idea, because Google does not seem to like 301 redirects in a sitemap.

    Both solutions are not perfect, and we cannot wait for three months just to regain our old rankings. What are your ideas? Best, David
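
    For reference, the redirect side of this is simple to express; on Apache, for example, the per-URL 301s would look something like the sketch below (hypothetical paths - the poster's actual server isn't stated).

        # One permanent redirect per renamed page:
        Redirect 301 /oldURL.html /newURL.html

        # Or, if the rename follows a pattern, a single rewrite rule:
        RewriteEngine On
        RewriteRule ^old/(.*)$ /new/$1 [R=301,L]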


  • Where Is the Silverlight Toolkit Installed On My PC?

    - by Gopinath
    This is the first question that ran through my mind once I finished installing the Silverlight Toolkit today. When we install the toolkit, the installation wizard does not ask for any installation folder options, and after installation completes there are no entries in the All Programs section of the Start menu. After going through the documents, I found that the installer silently places all the binaries, themes, and sample documents under the Program Files folder, depending on the version of the toolkit. If you installed version 4.0 of the toolkit, it will be placed in the folder C:\Program Files\Microsoft SDKs\Silverlight\v4.0. Here is the list of other useful Silverlight Toolkit folders that we refer to often:

        Bin      C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Bin
        Samples  C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Samples
        Themes   C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Themes
        Source   C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Source

    Please note that the folder names listed above will not be exactly the same on your computer, as they vary from one version to another. First open the base folder C:\Program Files\Microsoft SDKs\Silverlight and then navigate through the available folders to locate the ones you need. Hope this helps you.


  • PSU requirement question for my PC setup.

    - by user69474
    I understand that sometimes the PSU may be way more powerful than required, but in my case I'm not too sure. Sometimes when I play games, my computer crashes and restarts itself 10 minutes into the game. Once I received a message saying something like the power supply was overheating. OK, so I have a 500W PSU, and I have:

        1x internal DVD writer
        1x SATA 250GB HD
        1x Nvidia 8500 GT
        2GB RAM

    As I'm planning to get an additional 250GB SATA HD, I wonder if I need to upgrade my PSU as well, given the crashes I've already experienced. Should I upgrade my PSU to 650W perhaps, or is that excessive?


  • Office Communicator voice chat

    - by Gareth Simpson
    My company recently mandated a switch from Skype to Office Communicator for IM / voice chat. While Skype was never the be-all and end-all of VoIP, it was at least usable. With Communicator, if one person is talking, everyone else is basically silenced (or as good as), so a normal conversation is impossible. No one can interrupt anyone else; if two people start talking at once, it's a crap shoot who gets to be heard, and often the person who is inaudible doesn't know it. There do not seem to be any client-side settings to fix this. Is there anything that can be done server-side, or is it just rubbish?


  • Oracle IRM video demonstration of separating duties in document security

    - by Simon Thorpe
    One thing an Information Rights Management technology should do well is separate out three main areas of responsibility:

    - The business process of defining and controlling the classifications to which content is secured, and the definition of the roles that employees, customers, partners and contractors have when accessing secured content.
    - Allowing IT to manage the server and perform the role of authorizing the creation of new classifications to meet business needs; yet once a classification has been created and handed off to the business, IT no longer plays a role in its ongoing management.
    - Empowering the business to take ownership of the classifications to which its own content is secured. For example, an employee who is leading an acquisition project should be responsible for defining who has access to confidential project documents. This person should be able to manage the rights users have in the classification and also be the point of contact for those wishing to gain rights.

    Oracle IRM has, since its creation in the late 1990s, had this core model at the heart of its design. Due in part to this important separation of rights from the documents themselves, Oracle IRM places the right functionality within the right parts of the business. For example, some IRM technologies allow the end user to make decisions about who can print, edit or save a secured document. In practice this results in a wide variety of content secured with a plethora of options that don't conform to any policy. With Oracle IRM, users choose from a list of classifications against which they have been given the ability to secure information. Their role in the classification was given to them by the business owner of the classification, yet the definition of the role resides within the realm of corporate security, who own the overall business classification policies. It is this type of design and philosophy in Oracle IRM that makes it an enterprise solution that works beyond a few users and a few secured documents, scaling to hundreds of thousands of users and millions of documents. The following video shows how Oracle IRM 11g, the market-leading document security solution, lets the security organization manage and create classifications whilst the business owns and manages them. If you want to experience using Oracle IRM secured content and the effects of the different roles users have, why not sign up for our free demonstration.


  • Major increase in Google "not able to follow" errors since introducing 301s to site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case-sensitive and our app was not, we implemented a 301 in Varnish to redirect to lowercase. Example: if you search for PlumBer StockHOLM, you get a 301 redirect to plumber stockholm, and then plumber stockholm is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy number of "Status - Not able to follow" errors (screenshot omitted). This of course stirred up some panic, and I started to read up on the documentation once again. If I clicked one of the links, I got to the help section, where I found an explanation (quote omitted). Well, this is strange, but as the day progressed more and more errors were thrown by Google. We took the decision to make Varnish return a 200 instead of the 301. Now when testing the links that appear in the "Not able to follow" section, I get a 200 back. I have tested with Chrome, curl and the Lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish. Why do I get these errors, and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
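
    For context, the lowercasing redirect described above can be expressed in VCL roughly as in the sketch below (Varnish 4 syntax using the std vmod - not necessarily the poster's exact configuration).

        import std;

        sub vcl_recv {
            # Any URL containing an uppercase letter gets a 301 to its lowercase form.
            if (req.url ~ "[A-Z]") {
                return (synth(751, std.tolower(req.url)));
            }
        }

        sub vcl_synth {
            if (resp.status == 751) {
                set resp.status = 301;
                set resp.http.Location = resp.reason;  # the lowercased URL stashed above
                return (deliver);
            }
        }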


  • Proper Use Of HTML Data Attributes

    - by VirtuosiMedia
    I'm writing several JavaScript plugins that run automatically when the proper HTML markup is detected on the page. For example, when a tabs class is detected, the tabs plugin is loaded dynamically and automatically applies the tab functionality. Any customization options for the JavaScript plugin are set via HTML5 data attributes, very similar to what Twitter's Bootstrap framework does. The appeal of the above system is that, once you have it working, you don't have to worry about manually instantiating plugins; you just write your HTML markup. This is especially nice if people who don't know JavaScript well (or at all) want to make use of your plugins, which is one of my goals. This setup has been working very well, but for some plugins I'm finding that I need a more robust set of options. My choices seem to be having an element with many data attributes, or allowing for a single data-options attribute with a JSON options object as its value. Having a lot of attributes seems clunky and repetitive, but going the JSON route makes it slightly more complicated for novices, and I'd like to avoid full-blown JavaScript in the attributes if I can. I'm not entirely sure which way is best. Is there a third option that I'm not considering? Are there any recommended best practices for this particular use case?
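
    Concretely, the two choices look something like this (hypothetical options for the tabs plugin mentioned above):

        <!-- Option 1: one data-* attribute per option (verbose, but novice-friendly) -->
        <div class="tabs" data-active-tab="2" data-animate="true" data-duration="300">...</div>

        <!-- Option 2: a single data-options attribute holding a JSON object -->
        <div class="tabs" data-options='{"activeTab": 2, "animate": true, "duration": 300}'>...</div>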


  • How to prevent Outlook from receiving multiple copies of the same email

    - by martani_net
    This question may have been asked already, but it's different from this one. My boss has Outlook 2003. When he synchronizes his email while connected through the local server (using Exchange, I guess), he gets his email normally. Once he is outside (not connected to our LAN), he gets each email duplicated 3 or 4 times. Has anyone experienced this before, and how can we fix it? Please, no links to FAQ pages. For info, we are using Kaspersky antivirus and Windows Server 2003, with Windows XP clients.

    [Update] Actually we have a bunch of 5 or 6 email accounts in Outlook, and only one of them receives duplicated copies of the same email; all the others are fine. Furthermore, all these accounts use the same service, Gmail for example.

    [Update 2] I just found out that Outlook is already configured to remove emails from the server. Also, some emails exceed 5 copies! Thank you


  • How should code reviews be carried out?

    - by Graviton
    My previous question had to do with how to promote code reviews among developers. Here I am interested in how a code review session should be carried out so that both the reviewer and the person being reviewed feel comfortable with it. I have done some code reviews before, and the experience has been very unpleasant. My previous manager would come to us -- on an ad hoc basis -- and tell us to explain our code to him. Since he wasn't very familiar with the code base, whenever he asked me to explain my code, I'd find myself spending a huge amount of time explaining its most basic structure. As a result, each review lasted much too long, and the process left both of us exhausted. Once I was done explaining my work, he would continue by raising issues with it. Most of the issues he raised were cosmetic in nature (e.g., don't use region for this code block, change the variable name from xxx to yyy even though the latter makes even less sense, and so on). After trying this process for a few rounds, we found the review sessions didn't provide much benefit for either of us, and we stopped. How would you go about making each code review a natural, enjoyable, thought-stimulating, bug-fixing and mutual-learning experience? Also, how frequently do you do your code reviews - as soon as the code is checked in? Do you allocate a fixed time every week for this? What guidelines do you follow during your code reviews?


  • Calculating Screen Resolutions Using WPF

    - by Jeff Ferguson
    WPF measures all elements in device-independent pixels (DIPs). These DIPs equate to device pixels if the current display monitor is set to the default of 96 DPI. However, for monitors set to a DPI setting other than 96, WPF DIPs will not correspond directly to monitor pixels. Consider, for example, the WPF properties SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth. If your monitor resolution is set to 1024 pixels wide by 768 pixels high, and your monitor is set to 96 DPI, then WPF will report the value of SystemParameters.PrimaryScreenHeight as 768 and the value of SystemParameters.PrimaryScreenWidth as 1024. No problem. This aligns nicely because the WPF device-independent pixel value (96) matches your monitor's DPI setting (96). However, if your monitor is not set to display pixels at 96 DPI, then SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth will not return what you expect. The values returned by these properties may be greater than or less than what you expect, depending on whether your monitor's DPI value is less than or greater than 96. Since the SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth properties are WPF properties, their values are measured in WPF DIPs rather than taking the monitor's DPI into account. Once again: WPF measures all elements in device-independent pixels (DIPs). To combat this issue, you must take your monitor's DPI settings into account if you're looking for the monitor's width and height in physical pixels. The handy code block below will help you calculate these values regardless of the DPI setting on your monitor:

        Window MainWindow = Application.Current.MainWindow;
        PresentationSource MainWindowPresentationSource = PresentationSource.FromVisual(MainWindow);

        // TransformToDevice maps DIPs to device pixels: M11/M22 hold the horizontal
        // and vertical scale factors (monitor DPI divided by 96).
        Matrix m = MainWindowPresentationSource.CompositionTarget.TransformToDevice;
        double DpiWidthFactor = m.M11;
        double DpiHeightFactor = m.M22;

        // Scale the DIP-based screen dimensions to physical pixels.
        double ScreenHeight = SystemParameters.PrimaryScreenHeight * DpiHeightFactor;
        double ScreenWidth = SystemParameters.PrimaryScreenWidth * DpiWidthFactor;

    The values of ScreenHeight and ScreenWidth should, after this code executes, match the resolution you see in the display's Properties window.


  • Where can I compare monitors with a given VESA mount?

    - by Dan Rasmussen
    I am looking into a dual-monitor setup and need to purchase two monitors with VESA MIS-D mounts. My only problem is that this information doesn't seem to be readily available on most shopping websites. Neither Amazon nor Newegg seems to have it searchable or filterable. I could shop for monitors, then Google around to see if each supports VESA MIS-D, but is there a better way? Is there a resource (not necessarily a store - once I find a monitor I can shop elsewhere) where I can browse a variety of monitor specs and reviews while only looking at monitors with a certain VESA mount?


  • ODI 11g - Scripting a Reverse Engineer

    - by David Allan
    A common question is how to script the reverse engineer using the ODI SDK. This follows on from some of my posts on scripting in general and accelerated model and topology setup. Check out this viewlet here to see how to define a reverse-engineering process using ODI's package. Using the ODI SDK, you can script this up with the OdiPackage and StepOdiCommand classes as follows:

        // Create a package to hold the three reverse-engineering steps.
        OdiPackage pkg = new OdiPackage(folder, "Pkg_Rev" + modName);

        // Step 1: reset the model's tables.
        StepOdiCommand step1 = new StepOdiCommand(pkg, "step1_cmd_reset");
        step1.setCommandExpression(new Expression("OdiReverseResetTable \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));

        // Step 2: read the metadata from the source.
        StepOdiCommand step2 = new StepOdiCommand(pkg, "step2_cmd_reset");
        step2.setCommandExpression(new Expression("OdiReverseGetMetaData \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));

        // Step 3: apply the metadata to the model.
        StepOdiCommand step3 = new StepOdiCommand(pkg, "step3_cmd_reset");
        step3.setCommandExpression(new Expression("OdiReverseSetMetaData \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));

        // Chain the steps and set the entry point of the package.
        pkg.setFirstStep(step1);
        step1.setNextStepAfterSuccess(step2);
        step2.setNextStepAfterSuccess(step3);

    The biggest leap of faith for users is getting to know which SDK classes have to be used to build the objects in the design; using StepOdiCommand isn't necessarily obvious, but once you see it in action it is very simple to use. The above snippet uses an OdiModel variable named mod; it's a snippet I added to the accelerated model creation script in the post linked above.


  • HPLIP GUI required plugin

    - by Terence Stamp
    I downloaded the HPLIP GUI to manage my printer, but in order to set it up correctly, you must click the green puzzle piece labeled "install required plugin." Once you do, you are presented with two options: download it from HP's server, or locate the file locally on your hard disk. In the past, I have had success downloading it from HP's server. Currently, my luck is not as good. My question is simple: where can I find the plugin on the Internet so that I can download it and install it using the second option, installing from my hard drive?


  • Embedded Linux development learning

    - by user1797375
    I come from a Windows background and am proficient with the .NET platform. For work, I need to bring up a custom embedded system platform. We have bought the PandaBoard ES as the test platform. The application is to stream images over Wi-Fi. If you think about it, we are building something similar to a Netgear router - the only difference being that when you log into the device, it serves images. Because my background is in Windows, I am not quite sure how to start off with embedded Linux development. In reading through various sites, I have come to the conclusion that using Linux as the development host is the best option. Can someone point me in the right direction regarding the setup? I have a Windows machine that will be used for development purposes. I can either use VirtualBox or set up a partition for Linux, but the finer details are what's throwing me off. What I need to know is:

    1) Once I install Linux, what other software do I need - Code::Blocks?
    2) What about the toolchain?
    3) How do I debug - through the serial port?
    4) Is there a way to send the built image directly to the CF card?
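
    For reference, a common Ubuntu starting point for questions 2 and 4 looks roughly like the sketch below. It is only a sketch: /dev/sdX is a placeholder for the card reader device, and the package name assumes a reasonably recent Ubuntu host.

        # Install a cross-toolchain targeting the PandaBoard's ARM Cortex-A9 CPU.
        sudo apt-get install gcc-arm-linux-gnueabihf

        # Cross-compile a test program on the development host.
        arm-linux-gnueabihf-gcc -o hello hello.c

        # Write a built system image directly to the CF/SD card (question 4).
        # Check the device name with lsblk first; the wrong device destroys data.
        sudo dd if=system-image.img of=/dev/sdX bs=4M conv=fsync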


  • What would happen in a software RAID 1 of one HDD and one SSD?

    - by Adrian Grigore
    Hi, I'm running my Windows 7 installation and all of my apps from an SSD for performance reasons. Since SSDs can die instantly at any moment, I'm looking for some kind of data backup strategy. Right now I regularly back up the drive image to a hard disk, but that only happens once per day, which is not enough for my taste. So I got an idea: what if I created a software RAID 1 of the SSD and a partition on my hard disk? All data would be mirrored on both drives, making this a lot safer. But what about performance? Will Windows 7 detect that the SSD is faster than the hard drive and always read from the SSD? Or will it read from both at random, thus reducing read performance? Thanks, Adrian

    Edit: I just found this article, which basically answers my question. Feel free to close this post.


  • Read-only filesystem Recovery Mode not working

    - by purbleguy
    I have seen other posts about this before, but they didn't help. In short, today I was trying to play Colobot on my Ubuntu Trusty computer when I tried to access the directory the game was in from a terminal, and bash warned me that the disk was in a read-only state. I'm like, OK... So I rebooted into recovery mode, ran fsck there, and it found errors but apparently failed to fix them. At that point I was getting annoyed and searched the Internet. Once I found an answer, I ran the grub and dpkg options in recovery mode; recovery mode said the filesystem was read/write, but when I boot in, I get the same thing: read-only. So I reboot into recovery mode, and ta-da! It's read-only again. I can't think of anything else to do, as the other people who had the same problem had it fixed by the steps I took. I have all my important files backed up to both a separate partition and a separate computer, so no worries there. I just need help getting this to work, as my computer might as well be a brick if I can't do f/a on it.
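
    One step not mentioned above that is often worth trying in this situation: run fsck from a live USB session instead of recovery mode, so the root filesystem is not mounted at all while it is being repaired. A sketch, with /dev/sda1 standing in for the root partition:

        # From an Ubuntu live USB session, with the installed system NOT mounted:
        sudo umount /dev/sda1        # ignore any "not mounted" message
        sudo fsck -f -y /dev/sda1    # -f forces a full check, -y auto-repairs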

