Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

  • Firmware/driver for Broadcom wifi card, PowerBook G4 running Ubuntu 12.04 [duplicate]

    - by user107831
    This question already has an answer here: How to Install Broadcom Wireless Drivers (40 answers). This is my first time working with Ubuntu (or Linux), so please be patient. I am running Ubuntu 12.04-powerpc "Precise Pangolin" on a Mac PowerBook G4 with a 1.67 GHz processor. The firmware/driver for the wifi card is missing. For reasons not worth explaining, I cannot physically plug the computer into the network. I have another computer, a MacBook Pro running OS X, from which I can download files and carry them over on a USB thumb drive. The wifi card in the PowerBook G4 is by Broadcom: the chip is BCM4306, rev. 3, and the PCI ID is 14e4:4302.

    I have downloaded b43-fwcutter_015-14_powerpc.deb and dropped it into the Home folder on the Ubuntu machine. However, it will not install. When I double-click, it opens in Ubuntu Software Center, but the "Install" button is inactive: I can't click it. There's a message beside the inactive button saying, "An older version of 'b43-fwcutter' is available in your normal software channels. Only install this file if you trust the origin." If I right-click the .deb file and open it with Archive Manager, it shows me the "DEBIAN" and "usr" folders, but I'm unsure what to do from there, and fairly certain this is not the right way to do things. Maybe I have the wrong version of b43-fwcutter for my machine or version of Ubuntu?

    The documentation for this problem is a mess: it refers to all sorts of out-of-date Ubuntu versions and to an array of different "cutter" and firmware files. Maybe I'd be able to figure this out if I were a more seasoned Ubuntu user, but I have no idea why Software Center won't let me do the install. I would be very grateful for an explanation of how to get the wifi card working on this machine again. Thank you!
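
    Installing a local .deb from the command line sidesteps the Software Center version check. A minimal sketch, assuming the package really is sitting in the Home folder:

    ```
    cd ~
    sudo dpkg -i b43-fwcutter_015-14_powerpc.deb
    sudo apt-get -f install   # fix up any missing dependencies
    ```

    Note that b43-fwcutter only extracts the firmware from a Broadcom driver file; that driver file would also have to come over on the thumb drive, since the PowerBook has no network.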

  • libgdx texture loading fails [duplicate]

    - by Chris
    This question already has an answer here: Why do I get this file loading exception when trying to draw sprites with libgdx? (4 answers). I'm trying to load my texture with:

    ```java
    playerTex = new Texture(Gdx.files.internal("player.jpg"));
    ```

    player.jpg is located under my-gdx-game-android/assets/data/player.jpg. I get an exception like this:

    Full code:

    ```java
    @Override
    public void create() {
        camera = new OrthographicCamera();
        camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch = new SpriteBatch();

        FileHandle file = Gdx.files.internal("player.jpg");
        playerTex = new Texture(file);

        player = new Rectangle();
        player.x = 800 - 20;
        player.y = 250;
        player.width = 20;
        player.height = 80;
    }

    @Override
    public void dispose() {
        // dispose of all the native resources
        playerTex.dispose();
        batch.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        batch.draw(playerTex, player.x, player.y);
        batch.end();

        if (Gdx.input.isKeyPressed(Keys.DOWN)) player.y -= 50 * Gdx.graphics.getDeltaTime();
        if (Gdx.input.isKeyPressed(Keys.UP))   player.y += 50 * Gdx.graphics.getDeltaTime();
    }
    ```
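
    Since player.jpg actually sits under assets/data/, one hedged guess (in line with the linked duplicate) is that the internal path is missing the data/ prefix; on Android, Gdx.files.internal() resolves relative to the assets folder:

    ```java
    // Sketch: include the subfolder in the internal path.
    playerTex = new Texture(Gdx.files.internal("data/player.jpg"));
    ```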

  • How to create basic Adobe Illustrator files programmatically?

    - by Jonas Follesø
    I need to create a really basic Adobe Illustrator file on the clipboard that I can paste into Adobe Illustrator or Expression Design. I'm looking for code samples on how to programmatically generate Adobe Illustrator files, preferably from C# or some other .NET language (but at the moment any language goes). I have found the Adobe Illustrator 3 File Format documentation online, but it's a lot to digest for this simple scenario. I don't want to depend on the actual Adobe Illustrator program (COM interop, for instance) to generate my documents; it must be pure code. The code is for an Expression Studio add-in, and I need to be able to create something on the clipboard I can paste into Expression Design. After looking at the formats Expression Design puts on the clipboard when copying a basic shape, I've concluded that ADOBE AI3 is the best one to use (the others are either rendered images, or cfXaml that you cannot paste INTO Design). So based on this I can't use SVG, which would probably have been easier. Another idea might be to use a PDF component, as the AI and PDF formats are supposed to be compatible? I'm also finding some references to a format called "Adobe Illustrator Clipboard Format" (AICB), but can't find a lot of documentation about it.
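
    A hedged C# sketch of the clipboard half of the problem, using the "ADOBE AI3" format name observed above. The header bytes here are a placeholder, not a valid AI3 document; the real content would have to follow the Adobe Illustrator 3 File Format spec:

    ```csharp
    using System;
    using System.IO;
    using System.Text;
    using System.Windows.Forms;

    class AiClipboardSketch
    {
        [STAThread] // the clipboard requires an STA thread
        static void Main()
        {
            // Placeholder AI3-style content (an assumption, not a valid document).
            string ai = "%!PS-Adobe-3.0 EPSF-3.0\n%%Creator: my add-in\n%%EOF\n";

            var data = new DataObject();
            data.SetData("ADOBE AI3", new MemoryStream(Encoding.ASCII.GetBytes(ai)));
            Clipboard.SetDataObject(data, true); // keep data after the app exits
        }
    }
    ```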

  • File Handler returning garbled files.

    - by forcripesake
    For the past 3 months my site has been using a PHP file handler in combination with .htaccess. Users accessing the uploads folder of the site are redirected to the handler like so:

    ```
    RewriteRule ^(.+)\.*$ downloader.php?f=%{REQUEST_FILENAME} [L]
    ```

    The purpose of the file handler is pseudocoded below, followed by the actual code:

    ```php
    // Check if file exists and user is downloading from uploads directory; true.
    // Check against a file type whitelist and set the MIME type.
    $ctype = /* MIME type from the whitelist */;

    header("Pragma: public"); // required
    header("Expires: 0");
    header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
    header("Cache-Control: private", false); // required for certain browsers
    header("Content-Type: $ctype");
    header("Content-Disposition: attachment; filename=\"" . basename($filename) . "\";");
    header("Content-Transfer-Encoding: binary");
    header("Content-Length: " . filesize($filename));
    readfile("$filename");
    ```

    As of yesterday, the handler started returning garbled files and unreadable images, and had to be bypassed. I'm wondering what settings could have gone awry to cause this.
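
    Since the handler itself didn't change, a hedged guess is stray output being prepended to the stream: a BOM, whitespace, or a PHP notice emitted before the headers will corrupt every download. A minimal defensive sketch before the readfile() call:

    ```php
    // Discard anything already buffered (stray whitespace, BOM, notices)
    // and stop right after streaming, so nothing is appended either.
    while (ob_get_level()) {
        ob_end_clean();
    }
    readfile($filename);
    exit;
    ```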

  • VBScript - Checking each subfolder for files

    - by Kenny Bones
    Ok, this is a script that's supposed to basically mirror two sets of folders. I managed to get it to work, but it seems like it didn't check each subfolder. So, I added this line to the code:

    ```vbscript
    Set colFolders = objFolder.SubFolders
    ```

    Then I added this:

    ```vbscript
    For Each subFolder In colFolders
        For Each objFile In colFiles
            Dim DateModified
            DateModified = objFile.DateLastModified
            ReplaceIfNewer objFile, DateModified, strSourceFolder, strDestFolder
        Next
    Next
    ```

    This however does not seem to be working. This is the full code below.

    ```vbscript
    Dim strSourceFolder, strDestFolder
    Dim fso, objFolder, colFiles, colFolders

    strSourceFolder = "c:\users\vegsan\desktop\Source\"
    strDestFolder = "c:\users\vegsan\desktop\Dest\"

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set objFolder = fso.GetFolder(strSourceFolder)
    Set colFiles = objFolder.Files
    Set colFolders = objFolder.SubFolders

    For Each subFolder In colFolders
        For Each objFile In colFiles
            Dim DateModified
            DateModified = objFile.DateLastModified
            ReplaceIfNewer objFile, DateModified, strSourceFolder, strDestFolder
        Next
    Next

    Sub ReplaceIfNewer(sourceFile, DateModified, SourceFolder, DestFolder)
        Const OVERWRITE_EXISTING = True
        Dim fso, objFolder, colFiles, sourceFileName, destFileName
        Dim DestDateModified, objDestFile

        Set fso = CreateObject("Scripting.FileSystemObject")
        sourceFileName = fso.GetFileName(sourceFile)
        destFileName = DestFolder & sourceFileName

        If fso.FileExists(destFileName) Then
            Set objDestFile = fso.GetFile(destFileName)
            DestDateModified = objDestFile.DateLastModified
            If DateModified <> DestDateModified Then
                fso.CopyFile sourceFile, destFileName
            End If
        End If

        If Not fso.FileExists(destFileName) Then
            fso.CopyFile sourceFile, destFileName
        End If
    End Sub
    ```
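
    For what it's worth, a minimal recursive sketch (reusing ReplaceIfNewer and the globals from the question; note it still copies everything flat into strDestFolder, so truly mirroring the subfolder structure would need the destination path adjusted per folder):

    ```vbscript
    ' Walk a folder's own files, then recurse into each subfolder.
    Sub ProcessFolder(folder)
        Dim objFile, subFolder
        For Each objFile In folder.Files
            ReplaceIfNewer objFile, objFile.DateLastModified, strSourceFolder, strDestFolder
        Next
        For Each subFolder In folder.SubFolders
            ProcessFolder subFolder
        Next
    End Sub

    ProcessFolder fso.GetFolder(strSourceFolder)
    ```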

  • Preserve name of file using cURL to transfer files

    - by Toby
    I'm transferring files from an existing HTTP request using cURL like so:

    ```php
    $postargs = array(
        'nonfilefield' => 'nonfilevalue',
        'fileentry' => '@' . $_FILES['thefile']['tmp_name'][0]
    );

    $ch = curl_init('http://localhost/curl/rec.php');
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, TRUE);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $postargs);
    curl_exec($ch);
    curl_close($ch);
    ```

    The only way I can get this to work is by using tmp_name; without it, the file won't send. However, I then lose the name value for when I want to name the file later. Is there some way to do this while preserving the $_FILES array as it normally would be without cURL? I'm also using an array of file fields in my script, so at the moment I have to convert my multidimensional array into a single dimension for this to work.
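
    On PHP 5.5 and later, a hedged sketch using CURLFile keeps the client's original name while still uploading from tmp_name:

    ```php
    // Upload from the temp file but tell the receiver the original name.
    $file = new CURLFile(
        $_FILES['thefile']['tmp_name'][0],   // actual file on disk
        $_FILES['thefile']['type'][0],       // MIME type reported by the browser
        $_FILES['thefile']['name'][0]        // original file name, preserved
    );
    $postargs = array(
        'nonfilefield' => 'nonfilevalue',
        'fileentry'    => $file,
    );
    ```

    On older versions, appending `;filename=` to the @-string is reported to work, but that is version-dependent, so treat it as an experiment.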

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, doesn't matter) which is spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc.). So a URL might look like this (I use the query style here to simplify things):

    /items?id=1234&page=42&sort=popularity
    /items?id=1234&page=5&sort=date

    Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. That is not the case here, because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with a non-canonical sort order, and not using it on pages with the canonical sort order. However, if I use that method, what will happen with backlinks that point to non-canonical pages: will they still spread their PageRank juice, even though the first page googlebot (or any other crawler) encounters is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use Javascript for this, and 2) I do not want all the items on one page. Thank you.
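
    For reference, the noindex variant described above is usually written with "follow", which asks crawlers to keep following (and passing value through) links on the page even though the page itself stays out of the index; whether link juice keeps flowing through long-noindexed pages is not guaranteed, so treat this as a sketch:

    ```html
    <!-- On pages with a non-canonical sort order: -->
    <meta name="robots" content="noindex, follow">
    ```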

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We have the following problem: we changed all URLs on our page from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our page and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at those URLs anymore after the change. This resulted in massive ranking drops, etc., because (i) Google thought oldURL.html had exactly the same content as newURL.html, causing duplicate content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed. Now we have reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for Google to notice all the redirects. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it."

    Possible solutions we thought of are:
    - Submitting as many of the old URLs to the index as possible via Webmaster Tools, to manually trigger a crawl. (Doing that already.)
    - Submitting a sitemap with all the old URLs; but we're not sure that's a good idea, because Google does not seem to like 301 redirects in a sitemap.

    Both solutions are not perfect, and we cannot wait three months just to regain our old rankings. What are your ideas? Best, David
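
    For reference, a per-URL 301 looks like this in Apache's mod_alias (a sketch; example.com stands in for the real domain):

    ```apache
    # One line per renamed page:
    Redirect 301 /oldURL.html http://www.example.com/newURL.html
    ```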

  • Remote Access to MSSQL Database From 1&1 Hosting [duplicate]

    - by Zerkey
    This question already has an answer here: How to find web hosting that meets my requirements? (5 answers). I just paid ($6/month) for shared Windows hosting through 1&1 Hosting. I was having trouble connecting to my database from home, so I sent an email to support. I received the following response: "As we checked your concern here in our end, please be advised that due to limitation of Shared Hosting services, there is no option to connect the database to your SQL Management Studio or through Visual Studio. It is only possible for Dedicated Server package. You may only access the database using MyLittleAdmin at the Control Panel." A dedicated server is like $200 per month! What is the point of having database access only through a web console? I feel I am missing something here, or maybe the support agent is. Is there a way to access my MS SQL database on their servers through Visual Studio or SQL Management Studio from my machine? If not, is there a web host who allows this for less than $200 a month? EDIT: Marked as duplicate... I'm not asking for a list of web hosts, I'm asking how to remotely connect to my MSSQL database through 1&1's services.

  • Dependencies lib32asound2 [duplicate]

    - by The Mini John
    This question already has an answer here: 'teamviewer depends on (…)' while trying to install TeamViewer (4 answers). I was trying to install TeamViewer, but I was getting a dependency error. I tried to install the dependencies, but with no luck. (I think mods are not reading the questions through when they mark them as duplicate.) I'm getting this error:

    ```
    Unpacking teamviewer (from teamviewer_linux_x64.deb) ...
    dpkg: dependency problems prevent configuration of teamviewer:
     teamviewer depends on lib32asound2; however:
      Package lib32asound2 is not installed.
     teamviewer depends on lib32z1; however:
      Package lib32z1 is not installed.
     teamviewer depends on ia32-libs; however:
      Package ia32-libs is not installed.
    dpkg: error processing teamviewer (--install):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     teamviewer
    ```

    I tried sudo apt-get -f install, but I am getting:

    ```
    Package ia32-libs is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source
    However the following packages replace it:
      lib32z1 lib32ncurses5 lib32bz2-1.0

    Package lib32asound2 is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source

    E: Package 'lib32asound2' has no installation candidate
    E: Package 'ia32-libs' has no installation candidate
    ```

    I can't even get sudo dpkg -i teamviewer_linux_x64.deb to work. If I force installation with sudo dpkg --force-depends -i teamviewer_linux_x64.deb, it does say "Setting up teamviewer", but then it gives me this:

    I'm fairly new to Ubuntu; can anyone help me out? I'm on Ubuntu 13.10.
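
    A hedged sketch of the route usually suggested for 13.10, where ia32-libs no longer exists: enable i386 multiarch and install the 32-bit TeamViewer package (teamviewer_linux.deb from their site) instead of the x64 one:

    ```
    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo dpkg -i teamviewer_linux.deb   # 32-bit build
    sudo apt-get -f install             # pull in the remaining i386 dependencies
    ```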

  • Web Hosting Advice for Project [duplicate]

    - by Lea Hayes
    Possible Duplicate: How to find web hosting that meets my requirements?

    I am working on a project that will be released as open source in the latter part of the year. I am starting to think about how the accompanying website will be hosted and would greatly appreciate some advice. Requirements:

    Domain #1:
    - Information about the project itself (just pages and pictures)
    - Documentation / wiki
    - Forums
    - Download of project source (approx. 3 MB archive)
    - Download of various themes and community-contributed content (est. sizes 10 KB ~ 512 KB)

    Domain #2:
    - Primary company website that offers products and services. This will be primarily pictures and pages.

    What kind of web hosting would be best for a project like this? I am working on a very tight budget and can only afford to spend up to £250 per year on hosting. I was considering using some sort of VPS hosting. I found the following companies, which seem to offer around this price range, but they have very mixed reviews:

    - http://www.webhosting.uk.com/
    - http://www.eukhost.com/
    - GoDaddy UK
    - uk2.net

    My company is based in the UK; how important is it for me to use UK-based hosting? There are plenty of overseas hosting companies that are considerably cheaper. When it comes to bandwidth, how many downloads will 100 GB of bandwidth get me? Any advice would be very greatly appreciated!
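
    On the bandwidth question, a rough worked example using the numbers above: 100 GB is about 102,400 MB, so the 3 MB source archive alone could be downloaded roughly 34,000 times per month before hitting that cap, ignoring pages, pictures, and forum traffic.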

  • How to write a blog for SEO purposes

    - by Mathieu Imbert
    I have a photo-sharing website which provides very little textual content. Users can add tags to photos and a description, but this creates a lot of duplicate content, because most of the descriptions will be 'wow', 'lol', ... I don't think I should rely on users to build my SEO. I think it would be a great idea to write a blog and use it to describe the best photos, start contests, explain themes; in short, to create original content that search engines will love. Our website's main URL is like www.domain.com, and our new blog is hosted on blog.domain.com. From an SEO perspective, is it a good idea to keep the blog separate from the main site? This has the advantage of leaving the original site unchanged, but will it add any page rank to www.domain.com? If the blog ranks well, it will obviously pass some page rank to the original site through links. What do you think is the best option from an SEO perspective? Include the blog in www.domain.com? Or leave it at blog.domain.com?

  • Google indexed the same page under two URLs (despite rel-canonical)

    - by unor
    The Super User question "Playing mp3 in quodlibet displays 'GStreamer output pipeline could not be initialized' error" is indexed under two URLs in Google:

    - http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia
    - http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058

    The first one is the canonical one; the corresponding rel-canonical is included in both pages:

    ```html
    <link rel="canonical" href="http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia" />
    ```

    Google also indexed http://superuser.com/a/652058, which redirects to the answer: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058#652058. Now, the second URL from above is the same as this one minus the fragment #652058. So Google seems to strip the fragment, which results in exactly the same page under another URL (containing the answer ID /652058 as suffix), and indexes it too, despite rel-canonical and duplicate content. Shouldn't Google recognize this and only index the canonical variant? And what could be the reason why Stack Exchange includes the answer ID in the URL path, and not only in the fragment (resulting in various URL variants for the same page)?

  • SEO considerations for duplicate sites

    - by Malk
    I am building a brochure-ware website for a company that sells products all across the world. They need the site to ask the user what region they are in before using the site; there are 5 regions. This is because different products are offered to different regions, and each region may or may not want to customize its own content. However, at launch and likely forever, most of the pages will be exactly the same minus what is listed in the footer and in the product selection menu. My question is: how should I structure the sitemap for this site for best SEO? Should I be concerned with duplicate content penalties and/or cannibalizing the site's presence on the SERP?

    Some considerations:
    - The client wants to be able to print links directly to region-specific content, bypassing any prompt for the user to select a region (to ensure they land on the target page).
    - The client cannot have a 'default' region, so the user must have a region specified.
    - "Clean" URLs are important, but there is wiggle room.
    - The client does not want each region to have its own domain.
    - There will be a link on the page to allow users to specify a different region.
    - The client is not concerned with localization ... at this time.
    - Some products are available in multiple regions.

    A quick list of options I am considering:
    - www.site.com/region/page
    - region.site.com/page
    - www.site.com/page?region (no cookie; pages require the parameter. If visited without it, the user must select a region)
    - www.site.com/page (using a cookie and a splash screen if needed; could pass a parameter in to set the region for direct linking)

    Thanks in advance for your advice.

  • Can you share the Offline Files cache between two user accounts?

    - by Joel Coehoorn
    I have a new laptop that I use for both home and work. It runs Windows 7 Ultimate and is joined to the domain at work. It is okay to use this laptop for both work and personal activities, and I even have an account set up on the local machine, in addition to the work domain account, specifically to help keep the two separate. At home, I have a file server that I use to share files and printers with my wife's laptop, this new laptop, and my old desktop, which will now become the family machine. My mp3 library is on there, among other things. What I want to do is use the Windows Offline Files feature to keep a synced copy of my music library on the laptop. That part is easy. What's tricky is that I want to share this offline cache between both the local account on the laptop and my work domain account. I could sync them both separately, but then I have two copies of a very large music library stored locally. This also means twice the sync burden, when the domain account is rarely connected to the file share. I really want to be able to sync from the local machine account only, and have the domain account be able to use the synced files. I know where the offline file cache is kept (\Windows\CSC) and I can find the cached files (not encrypted), but permissions on the cache are set up oddly, so using that cache directly is not trivial. Any ideas appreciated.

  • How can I change the name of the Program Files (x86) folder?

    - by Madrauk
    Let me stop you before the "you shouldn't do that" and "you will corrupt your file paths": I know what I'm doing, but I'll give you the story to convince you. Basically, my hard drive is failing, to the point where programs installed on it are not responding consistently. So, in preparation for the replacement, I'm moving as many files as possible to my secondary drive, because the new drive will be smaller (it's an emergency; I can't buy a fancy expensive new drive) and so I can actually use my computer until the new drive comes in. The basic idea is this: I've copied the contents of the Program Files (x86) folder to another spot on my other drive, and I want to replace the original folder on the C drive with a symbolic link that points to the other drive. That way the programs can run from that drive and be fine, but it will save space on my C drive and simplify the moving process when the new drive comes in. To do that, I need to rename the Program Files (x86) folder, make the symbolic link, hopefully delete the original folder, then restart my computer so all the programs are running properly and I can confirm it works. Now that I've told you why, can anyone help me out?
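
    For what it's worth, a minimal sketch of the link step from an elevated command prompt, assuming the copy already lives at D:\Program Files (x86) (the drive letter is an assumption):

    ```
    :: Rename the original folder; this often fails while programs hold files open,
    :: so it may have to be done from safe mode or a recovery environment.
    ren "C:\Program Files (x86)" "Program Files (x86).old"

    :: Create a junction at the old path pointing at the copy on the other drive.
    mklink /J "C:\Program Files (x86)" "D:\Program Files (x86)"
    ```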

  • Microsoft Office Excel in Ubuntu [duplicate]

    - by user204653
    This question already has an answer here: Is it possible to run Microsoft Office 2010? (9 answers). Some of my files are not working in LibreOffice and require Microsoft Office. Please help me install it. I have Ubuntu 12.04, 786 MB RAM, and a 3.0 GHz Pentium dual processor.
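
    A hedged pointer rather than a definitive recipe: Office doesn't run natively on Ubuntu, so the usual route on 12.04 is Wine, often driven through PlayOnLinux:

    ```
    sudo apt-get install playonlinux
    # then use PlayOnLinux's "Install a program" wizard with an Office installer
    ```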

  • Generating 8000 text files from XML files

    - by Ray
    Hi all, I need to generate the same number of text files as the XML files I have. Within each text file, I need the title and maybe some other tags. I can generate text files with the elements I want, but not all of the XML files get converted; only some of them are generated. Something might be wrong with my parser, so please help out, thanks. This is my code; please have a look and give me suggestions. Thanks in advance.

    ```java
    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.*;
    import java.io.*;

    public class AccessingXmlFile1 {
        public static void main(String argv[]) {
            try {
                // create a file that is really a directory
                File aDirectory = new File("C:/Documents and Settings/I2R/Desktop/test");

                // get a listing of all files in the directory
                String[] filesInDir = aDirectory.list();
                System.out.println("" + filesInDir.length);

                // sort the list of files (optional)
                // Arrays.sort(filesInDir);

                for (int a = 0; a < filesInDir.length; a++) {
                    String xmlFile = filesInDir[a];
                    String newLine = System.getProperty("line.separator");

                    File file = new File(xmlFile);
                    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                    DocumentBuilder db = dbf.newDocumentBuilder();
                    Document document = db.parse(file);
                    document.getDocumentElement().normalize();

                    NodeList node = document.getElementsByTagName("metadata");
                    System.out.println("Information of Xml File");
                    System.out.println(xmlFile.substring(0, xmlFile.length() - 4));

                    String titleStoreText = "";
                    String descriptionStoreText = "";
                    String collectionStoreText = "";
                    String textToWrite = "";

                    for (int i = 0; i < node.getLength(); i++) {
                        Node firstNode = node.item(i);
                        if (firstNode.getNodeType() == Node.ELEMENT_NODE) {
                            Element element = (Element) firstNode;

                            NodeList titleElementList = element.getElementsByTagName("title");
                            Element titleElement = (Element) titleElementList.item(0);
                            NodeList title = titleElement.getChildNodes();
                            if (titleElement == null)
                                titleStoreText = " There is no title for this file." + newLine;
                            else
                                titleStoreText = titleStoreText + ((Node) title.item(0)).getNodeValue() + newLine;
                            System.out.println("Title : " + titleStoreText);

                            NodeList collectionElementList = element.getElementsByTagName("collection");
                            Element collectionElement = (Element) collectionElementList.item(0);
                            NodeList collection = collectionElement.getChildNodes();
                            if (collectionElement == null)
                                collectionStoreText = " There is no collection for this file." + newLine;
                            else
                                collectionStoreText = collectionStoreText + ((Node) collection.item(0)).getNodeValue() + newLine;
                            System.out.println("Collection : " + collectionStoreText);

                            NodeList descriptionElementList = element.getElementsByTagName("description");
                            Element descriptionElement = (Element) descriptionElementList.item(0);
                            NodeList description = descriptionElement.getChildNodes();
                            if (descriptionElement == null)
                                descriptionStoreText = " There is no description for this file." + newLine;
                            else
                                descriptionStoreText = descriptionStoreText + ((Node) description.item(0)).getNodeValue() + newLine;
                            System.out.println("Description : " + descriptionStoreText);

                            textToWrite = "=====Title=====" + newLine + titleStoreText + newLine
                                    + "=====Collection=====" + newLine + collectionStoreText + newLine
                                    + "=====Description=====" + newLine + descriptionStoreText;
                        }
                    }

                    // write to file part is here
                    Writer output = null;
                    File file2 = new File(xmlFile.substring(0, xmlFile.length() - 4) + ".txt");
                    output = new BufferedWriter(new FileWriter(file2));
                    output.write(textToWrite);
                    output.close();
                    System.out.println("Your file has been written");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    ```
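
    One hedged observation on why only some files come out: the whole for loop runs inside a single try/catch, so the first file that throws silently aborts every remaining file. Two likely throwers are non-XML entries in the folder (aDirectory.list() returns bare names, resolved against the working directory) and files missing a tag, since titleElement.getChildNodes() is called before the titleElement == null test. A sketch of both fixes, shown for the title (the same reordering applies to collection and description):

    ```java
    // Resolve against the source directory and skip non-XML entries.
    File file = new File(aDirectory, filesInDir[a]);
    if (!file.getName().toLowerCase().endsWith(".xml")) continue;

    // Check for a missing element *before* dereferencing it.
    Element titleElement = (Element) element.getElementsByTagName("title").item(0);
    if (titleElement == null) {
        titleStoreText = " There is no title for this file." + newLine;
    } else {
        titleStoreText = titleStoreText + titleElement.getChildNodes().item(0).getNodeValue() + newLine;
    }
    ```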

  • Fixing a corrupt AVG vault? All files on the USB drive are locked out.

    - by Kelsey
    I was doing a virus scan on an external USB drive, and while AVG was scanning, my system locked up and required a reboot. Since that time, all data on the external drive is no longer accessible. I can see all the files in the root and the directories, but I cannot browse into any of them, as Windows 7 gives an error stating they are corrupt. If I show hidden files, there is a hidden AVG directory that I know was not there to begin with, and I am assuming it is some type of vault to protect files while they are being scanned. So maybe the drive's contents are not truly gone: I think whatever manages the scan failed during the reboot and left the headers or something in a corrupt state. Does anyone know how to 'unlock' or recover this data? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.

  • .php files see Apache environment variables, .html files don't

    - by dotancohen
    Consider this environment:

    ```
    $ cat .htaccess
    AddType application/x-httpd-php .php .html
    SetEnv Foo Bar

    $ cat test.php
    <?php
    echo "Hello: ";
    echo $_SERVER['Foo'];
    echo $_ENV['Foo'];
    echo getenv('Foo');
    ?>

    $ cat test.html
    <?php
    echo "Hello: ";
    echo $_SERVER['Foo'];
    echo $_ENV['Foo'];
    echo getenv('Foo');
    ?>
    ```

    This is the output of test.php:

    ```
    Hello BarBarBar
    ```

    This is the output of test.html:

    ```
    Hello
    ```

    Why might that be? How might I fix it? Here is phpinfo.php: http://pastebin.com/rgq7up61 and here is phpinfo.html: http://pastebin.com/VUKFNZ36. If anyone knows where I can host a real webpage instead of just the HTML for one, please let me know and I'll move the content there. Thanks.
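
    A hedged experiment rather than a diagnosis: on CGI/FastCGI builds of PHP, extension mapping is typically done with AddHandler rather than AddType, and environment propagation differs between mod_php and CGI, which could explain .php and .html behaving differently:

    ```apache
    AddHandler application/x-httpd-php .php .html
    SetEnv Foo Bar
    ```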

  • How important is it to install in the Program Files folder?

    - by eran
    In a proper installation of an average piece of software, its executables would be in the Program Files folder; its user data in the user's application data folder; its non-user-specific data in the all-users application data folder; and it should usually be able to run under non-administrative privileges. These guidelines could easily be ignored on XP, but they are an issue on Vista and 7 due to UAC. We're on the verge of releasing a major version of our software. It's a CMS, used by our clients as their main work tool, and their IT staff are well familiar with it. If we want to be fully compatible with Windows 7, we have to make quite a few changes, and we're already on a tight schedule. The question is: we could easily have our clients install our software outside of Program Files, or have them run it as administrators. I think that's wrong, but I need some ammunition: why should we install under Program Files, with all the limitations that come with it?
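
    As ammunition of the practical kind, the proper per-user and shared data locations are one API call away, so honoring them is cheap; a minimal C# sketch (assuming .NET, which the question doesn't specify):

    ```csharp
    using System;

    class DataPaths
    {
        static void Main()
        {
            // Per-user data, writable without elevation:
            string userData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
            // Shared, non-user-specific data:
            string sharedData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
            Console.WriteLine(userData);    // e.g. C:\Users\<name>\AppData\Roaming
            Console.WriteLine(sharedData);  // e.g. C:\ProgramData
        }
    }
    ```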

  • Where is the best location to keep shared-developer website files in the Linux hierarchy?

    - by Tchalvak
    I just started hosting files for a website on my server, and I'm not sure where the appropriate place to keep them is. At the moment, I have them in /var/www/name.of.virtualhost.site/www/. That's obviously not secure, because anything alongside the final public /www/ folder is also available, since the /var/www/ contents are already being served. For example, /var/www/name.of.virtualhost.site/docs/site_policies.txt is accessible via something like defaultsite.com/name.of.virtualhost.site/docs/site_policies.txt. So where is a good place to store the files that make up a website? (When it's a site that only I'm developing, I can obviously just stick them in /home/my_username/sites/name.of.virtualhost.site/, but that doesn't work well when I want other developers to be working on the site's files as well.) I'm running a LAMP stack, not that I expect it to matter.
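
    One conventional layout, sketched for Apache (paths are illustrative; /srv is what the FHS suggests for data served by the system): give each site its own tree with a dedicated public folder as the DocumentRoot, so siblings like docs/ are never reachable, and grant the other developers access via a group on that tree:

    ```apache
    <VirtualHost *:80>
        ServerName name.of.virtualhost.site
        # Only the public folder is served; ../docs etc. stay private.
        DocumentRoot /srv/www/name.of.virtualhost.site/public
    </VirtualHost>
    ```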

  • Files hidden on USB hard disk because of a virus; how to clean?

    - by hammad
    I have a 1 GB hard drive. First, all of its contents were gone after a virus got in (when I gave the drive to someone else). After research, I found that it may have been a virus. I ran Norton antivirus and removed a certain virus, then ran Malwarebytes, and now Ad-Aware and AVG 2011 antivirus. At that point I could see the files on my other computer. But now I have hooked the drive up to my new MacBook Pro (Windows installation) and I can't see the files, even though the antivirus does find all of them. Mac OS X wrote a 400 MB directory there, which I can see, but nothing else. How do I fix this? Which program will fix it?
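
    Many USB-spread infections don't delete anything; they just mark every file hidden+system so Explorer hides them. A hedged one-liner from a Windows command prompt, assuming the drive mounts as E: (the letter is an assumption):

    ```
    attrib -h -r -s /s /d E:\*.*
    ```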

  • Does Win 7 still require copying all files over before burning to a DVD-R or BD-R?

    - by Jian Lin
    It seems that Win 7 still needs to copy all files over to a staging folder before it burns them to a DVD-R or BD-R. I think since XP or Vista, Windows has always copied everything over to a temporary folder before burning to an empty DVD-R. So if you just want to burn a 4 GB file to an empty DVD-R, it will first make a copy of that file and then burn it, instead of just burning it without making a copy first. And now on Win 7, it seems this is still the case. Most other third-party burning tools won't make an extra copy of the files first; Win 7 is the exception. Is there a way around it? (To avoid copying over 25 GB or 50 GB of data before burning.)
