Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • Windows Search Indexing SSD

    - by user654628
    I just bought a new SSD (a 256 GB Vertex 4 from OCZ) and installed Windows 8 on it on my laptop. I am not using an external hard drive for extra data (paging, temp files, etc.) because this is a laptop and I do not want to carry a drive around with me. My question: if I disable the Windows indexer (the Windows Search service), can Windows still find files through the built-in search (the Windows 7 Start menu search / the Windows 8 Metro UI search)? Since the indexer is meant to speed up searches by indexing files, does disabling it mean that Windows will not find new files at all? Thanks in advance.


  • CDN recommendation

    - by michaeld79
    Hey all, I am looking for a CDN service that can update the endpoint files on demand via an API within at most 10 minutes, or that supports an expiration time (TTL) for the files of 10 minutes or less. In addition, the CDN must offer an option to upload files via an API (I am working with PHP in my project). Thanks in advance, Michael D


  • Javascript loading never completes on many sites

    - by Joe
    I recently moved country and have found that on many websites the page never finishes loading. In some cases, no content is ever displayed, but the loading will never time out. Loading Developer Tools in Chrome shows me that it is the Javascript files which never load. For example, this BBC article will never load compatability.js, though will load all the other JS files perfectly. Google Maps often fails to finish loading, meaning it's impossible to make searches. There seems to be no pattern to which files will fail to load (i.e. they don't come from the same CDN). I have tried Chrome, Safari and Firefox on OSX 10.8, and Chrome on my girlfriend's OSX 10.7. I have similar issues on the iPad. In many cases, if I can go to the mobile version of the page that seems to load fine. I have run the browsers in private mode, disabled plugins, updated flash, cleared the cache, flushed the DNS cache - though it would seem that if this is happening on other devices, none of this would work anyway. Is this an ISP issue? And if so, why would it be limited to certain JS files and not all? JS files from the same domain work fine, so I'm not really sure what I should be looking for.


  • Terminal Server 2003 changed location of user profile "Desktop" items

    - by Bamse
    I have a Terminal Server 2003 machine that about 50 users use, and users save documents on the desktop. Today users complained that documents on the desktop were missing, although no upgrades or changes had been made. The error is that the server reverted to loading items from the folder named "Desktop" instead of the localized name "Skrivbord" (Swedish for desktop). So the files are still on the server, located under the user's Swedish folder name, but the server does not load them; it does, however, load files located under the English folder name. How can the terminal server, from one day to the next, simply change where it loads the user profile desktop files from?


  • No compatibility tab for Devenv.exe (VS 2010 and VS 2012) on Windows 8

    - by mkinkade
    I've tried checking "Run as Administrator" on the shortcut, but that doesn't always seem to work, for example when I open the solution through the jump list. I browsed to the devenv.exe file, but when I open the properties for the file, the Compatibility tab is not there. It is there for other executables in the same directory. Does anyone know how I can get the tab back so that I can set the Run as Administrator option on the executable?


  • Proving file creation dates

    - by Nils Munch
    In a weird case surrounding the copyright of a software system I have developed, I rely on the fact that I have all the source files of the system in question, created long before I joined the company that claims to own the system. The company being sued by yours truly says that I have simply manipulated the files to appear to be from that date. Is it even possible to fake or manipulate creation dates? And if so, how can I "prove" that the files really are that old? Luckily, I stored my project on GitHub, which confirms that the files are from that era, but that is beside the point. I run purely Apple OS X.
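
    For illustration, a small sketch (the filename SomeSource.m is hypothetical) of why bare filesystem dates prove little, while a Git history is much stronger evidence:

        # Filesystem timestamps are trivially forgeable: this back-dates a
        # file's modification time to noon on 1 Jan 2005 (BSD/OS X touch).
        touch -t 200501011200 SomeSource.m
        stat -f "%Sm %N" SomeSource.m

        # Git commit dates are harder to forge convincingly: each commit hash
        # covers the file contents, the timestamps, and all parent hashes.
        git log --format="%H %ad %s" -- SomeSource.m

    Commit dates can also be set to arbitrary values, but a fabricated history would have to stay consistent across every hash in the chain, and GitHub's server-side record of when the commits were pushed is outside your control, which is what makes it useful here.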


  • Which Linux distro to use for Hyper-V hosting?

    - by TomTom
    Not being a Linux geek, I am looking for a recommendation on which Linux distro to use for a Hyper-V based hosting environment (so easy access to the enlightenment/integration components is important). I would also love something that allows me, without too much tinkering, to split the read-only operating system files and the user files onto two discs, so that the boot disc can be read-only. (Reasoning: this would allow me to set up a read-only disc that is shared between multiple server instances, with each server's own disc containing basically just the user files.)
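
    A sketch of that split in /etc/fstab terms (device names and sizes are placeholders; many recent distro kernels, e.g. in current Ubuntu or CentOS releases, already ship the Hyper-V enlightened drivers such as hv_vmbus):

        # /etc/fstab sketch: OS on a shared read-only disc, mutable state elsewhere.
        /dev/sda1  /      ext4   ro,noatime           0 1
        /dev/sdb1  /var   ext4   defaults,noatime     0 2
        /dev/sdb2  /home  ext4   defaults,noatime     0 2
        tmpfs      /tmp   tmpfs  size=256m,mode=1777  0 0

    A truly read-only root also needs tmpfs or bind mounts for the handful of paths a distro insists on writing (e.g. /etc/mtab), so expect a little distro-specific tinkering regardless.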


  • Windows 7 Explorer shortcut for "Replace All"

    - by chris
    If you copy/paste some files into a directory that already contains files with the same names, you'll get a confirmation dialog: "There is already a file with the same name in this location..." To replace all files you have to check "Do this for the next N conflicts" and then click "Copy and Replace". Is there a keyboard shortcut for this (like in XP, where you could simply press 'a')?


  • Accidental Extract Location - How to Clean Up?

    - by Gordon
    Sometimes I will do a command such as unzip tons_of_files.zip And I will forget to put a -d to point to a subdirectory. This causes the current folder to get filled with tons of files that are intermixed with the existing files. What is the best way to remove all these new files and/or move them to a new directory? I want to avoid having to manually examine the directory and determine if the file was part of the archive or was already present.
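
    A minimal sketch of one approach (assuming an Info-ZIP unzip, whose -Z1 mode lists member names one per line, and filenames without embedded newlines): ask the archive what it contains, then remove exactly those paths:

        # Remove every path that came out of the archive; rm refuses
        # directory entries, which conveniently leaves shared folders alone.
        unzip -Z1 tons_of_files.zip | while read -r f; do rm -v -- "$f"; done

    The same loop with mv -v -- "$f" some_dir/ relocates the extracted files into a subdirectory instead of deleting them.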


  • VirtualBox error with Ubuntu virtual machine

    - by user2985363
    I am trying to work on a coding project and cannot open my Ubuntu virtual machine with Oracle VM VirtualBox. I took a snapshot yesterday at about 11, and it was working fine; several times I closed and reopened it. Today when I tried to open it, I kept getting the error below. Failed to open a session for the virtual machine Ubuntu 12.04 32-bit. VM cannot start because the saved state file 'C:\Users\Tyler\VirtualBox VMs\Ubuntu 12.04 32-bit\Snapshots\2014-01-30T19-59-05-976647800Z.sav' is invalid (VERR_FILE_NOT_FOUND). Deleted the saved state prior to starting the VM. I tried deleting the file as it said, but the snapshots still would not open. The file is still in my recycle bin. What can I do? Also, I took the 1/31 snapshot today, before I deleted the previous one.
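
    A hedged sketch (VBoxManage ships with VirtualBox and is run from a command prompt; the VM name is taken from the error message above): discarding the broken saved state makes the VM cold-boot instead of trying to resume from the missing .sav file:

        # Drop the saved ("paused") state so the VM boots fresh.
        VBoxManage discardstate "Ubuntu 12.04 32-bit"

        # Then start it normally.
        VBoxManage startvm "Ubuntu 12.04 32-bit"

    Restoring the .sav file from the recycle bin back into the Snapshots folder before starting the VM may also work, since the error is literally FILE_NOT_FOUND.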


  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to be loading the application, not the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
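
    A minimal sketch of the startup-script idea (the paths are hypothetical; tmpfs is the usual Linux choice, since it lives in the page cache and needs no fixed block device):

        # Create a RAM-backed mount and copy the application into it at boot.
        mkdir -p /mnt/phpapp
        mount -t tmpfs -o size=256m,mode=0755 tmpfs /mnt/phpapp
        cp -a /srv/phpapp/. /mnt/phpapp/
        # Then point Apache's DocumentRoot at /mnt/phpapp.

    That said, with enough free RAM the kernel's page cache usually keeps hot files in memory anyway, and for PHP the bigger win is typically an opcode cache (e.g. APC), because the cost is usually parsing and compiling the files rather than reading them.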


  • rsync windows to linux permission denied

    - by user64908
    Using the command

        rsync -avzP --delete --omit-dir-times ../../ [email protected]:/var/www/mysite/

    I'm getting:

        rsync: mkstemp "/var/www/mysite/.." failed: Permission denied (13)

    If ext is in the www-data group, should I still set all the files to be owned by user www-data? I am trying to publish the files with rsync and then set the permissions using

        sudo chown -R www-data doc
        sudo chgrp -R www-data doc

    but I can't even rsync because of the permission denied. SSH works fine, and rsync does too, except when it tries to write over or update some of the files in /var/www.

    Client:
    - Windows 7
    - Cygwin 1.7.16 (GNU bash, version 4.1.10(4)-release (i686-pc-cygwin))
    - rsync version 3.0.9, protocol version 30

    Server:
    - Ubuntu 12.04
    - Apache2
    - root accounts: [ubuntu, ext]
    - groups: [www-data]
    - sudo vigr shows www-data:x:33:ubuntu,ext

    I have already configured this: http://stackoverflow.com/questions/2124169/cwrsync-ignores-nontsec-on-windows-7 and this article has also managed to confuse me: http://unix.stackexchange.com/questions/41687/how-should-i-rsync-files-in-var-www-if-i-want-them-to-be-owned-by-www-data What is the right procedure?
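
    One common arrangement, sketched under the assumption that ext should own the tree and www-data only needs group access (the host name server is a placeholder for the redacted address): make the tree group-writable with the setgid bit so new files inherit the www-data group, then rsync as ext without trying to replicate ownership:

        # On the server: ext owns everything, www-data gets group access,
        # and the setgid bit makes new files inherit the www-data group.
        sudo chown -R ext:www-data /var/www/mysite
        sudo chmod -R g+rwX /var/www/mysite
        sudo find /var/www/mysite -type d -exec chmod g+s {} +

        # From the Cygwin client: don't try to push Windows ownership/perms.
        rsync -rltvzP --delete --omit-dir-times ../../ ext@server:/var/www/mysite/

    Using -rlt instead of -a drops the -p/-o/-g parts of the archive flag, which is what usually trips up Cygwin-to-Linux transfers.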


  • Minimal backup for Windows 7 system recovery

    - by JIm
    There might not be an answer to this, but for a home Win7 system, what files/directories must be backed up to recover after a Windows crash? I can reinstall software, and I keep data files elsewhere. When I use Acronis home backup software to back up my "critical" files, it seems to choose the entire partition. Updates are mostly browser cache files and the like. Or, after a crash, should I just reinstall Windows? I dread the hours of Windows updates that would require. Thanks.


  • Firefox URL / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together. When (re-)accessing this group, the 'cascading' bookmark menu shows each individual bookmark and, under a separator line, an entry that says "Open All in Tabs". I'm looking for a way to launch those tabs without going up through the bookmark menu. Possible options: A) record a simple macro with any number of "superuser" utilities ('A' is not the preferred option, since many little macros are hard to keep track of); B) use AutoHotkey (similar to option 'A', and more flexible once you learn the basics); C) figure out how Firefox loads all those tabs: the info must be stored somewhere (as a type of URL?). Quick summary: the moment I click "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I find the content (exact code) of that 'hyperlink', and/or how do I easily launch the tabs?

    Edit 1: I'm looking for a way to launch those tabs without going up through the bookmark menu or cluttering the bookmarks toolbar, which I hide anyway :o)

    Edit 2: I tried to keep the question simple and avoid mentioning AutoHotkey programming. The objective is to launch all the tabs using a button on an AHK GUI. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," he reminded me that I can easily find the folder; now, how do I launch the URLs inside that folder? FYI, basic-level AHK works like this:

        ; Open one folder
        ButtonWinMerge_Files:
        Run, C:\Program Files\WinMerge\
        Return

        ; Use default web browser for one link
        ButtonGoogle:
        Run, http://google.com
        Return

    Question still open: how do I 'replicate', with one click, the way Firefox launches the tabs?


  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd to 0777, but I am not comfortable with it being this way. So, at the advice of a Server Fault specialist, I had my hosting provider install ACL support on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so), because all my dirs and files are owned by "abc", the group is "abc", and the first octal digit is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP runs as the user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octal digit (the "other" permissions) controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last digit were a 4 (r--), the server would let the browser read the file. If it were a 6 (rw-), the server would also let the browser write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? How can I use ACLs to avoid setting the permission to 6 (rw-) or even 7 (rwx)? [Not sure what execute does or means.] I'm just looking for some sort of policy settings to best lock down my dirs and files while still allowing my PHP scripts to do uploads and write to files (so my users don't call to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
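
    A hedged sketch of the ACL approach (the path /home/abc/public_html is hypothetical; on directories, "execute" means permission to enter and traverse them, which is why directories generally need x where plain files need only r): keep the "other" digit at 0 and grant the nobody user exactly what PHP needs:

        # Lock out "other" entirely; abc keeps full control as owner.
        chmod -R o-rwx /home/abc/public_html

        # Let the nobody user (PHP) read files and traverse directories
        # (capital X grants x only on directories and already-executable files).
        setfacl -R -m u:nobody:rX /home/abc/public_html

        # Allow writes only inside the uploads directory, and make that the
        # default for files created there later (-d sets the default ACL).
        setfacl -R -m u:nobody:rwX /home/abc/public_html/uploads
        setfacl -R -d -m u:nobody:rwX /home/abc/public_html/uploads

    For static documents (not ending in .php), the reader is whatever user the web server itself runs as, so the same setfacl pattern applies to that account.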


  • How to copy Windows Explorer file selections and paste filenames as text

    - by Ricky Williams
    Win7. How can I select multiple files in Explorer and get a string of text listing the file names, like I would get if I selected the files from within an application's "Open File" window? For example, if I have a dir that contains 100 JPEGs and I select 01.jpg, 05.jpg, and 10.jpg in Windows Explorer via Ctrl+clicks or Shift+click, how can I get a text string that reads "01.jpg" "05.jpg" "10.jpg" (with or without the selected files' full paths) without dragging the selected files to another window? I've tried copying and pasting into Notepad and WordPad, and it either pastes the pictures into the application or an empty clipboard.


  • How to save transferred MP3s on Centova Cast?

    - by cHAKE
    We have uploaded and transferred the MP3 files to Centova Cast successfully with FileZilla. We know how to update the media library. We know how to create playlists. We know how to drag songs into different playlists. We know how to save the media and the playlists. BUT we lose all the saved media library files (songs) the moment we get new files into the media library, and we want the media library and playlists to stay saved.


  • Windows Network copy and access denied randomly

    - by The King
    I have a Windows 2008 R2 server into which I have now installed a new, bigger HDD. I wanted to copy big AVI files to the new server HDD, which is shared on the local network. I have write access to the server's HDD and can successfully copy smaller files to it. But when I copy bigger files, more than 500 MB, I randomly get an Access Denied message during the copy. If I use RDP, I can copy the files through the RDP client. I checked the error messages on the server but found nothing about this access denial. Because the RDP copy works, I don't think this is a hardware error; I think it is some kind of software setting error. Has anyone faced this kind of error, or does anybody have an idea what could cause it or how to find the root of the problem?


  • Seagate 3TB hard drive loses format information

    - by Victor Bugarin
    I have Windows 7 x64 Ultimate, 6 GB memory, and a 1 TB HD, plus a 3 TB Barracuda XT HDD. The HDD is installed in a StarTech 4-bay external enclosure. I had trouble, so I converted it to GPT, created one partition, and formatted it as NTFS. I can write to and read from the hard drive, but it becomes unreadable at some point while I am copying files, or after I have copied files to it. I have copied large Blu-ray movies and diverse video files, 32 GB of pictures, and about 86 thousand music files in different formats. At some point the partition becomes unreadable, and I have to format it again (all files lost) and start the whole process over. At some point I was unable to copy large ISO (Blu-ray movie) images at all. I have also partitioned the HDD into two partitions (P1 - 2 TB, P2 - 1 TB) and lost every single file in either partition the same way. I reformat the HDD and it seems fine. I have run SeaTools to check the hard drive, and it reports the drive is OK. What gives?


  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:

        ls -recurse d:\dir | export-clixml dir-20100129.xml

    Then, later, I take the second snapshot and load both of them:

        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)

    Next, I try to compare with Compare-Object, like this:

        diff $a $b

    What I get is, in some places, files that were added to $b since $a; but in other places it lists files that were in both snapshots, and some files that were added to $b are not given in the Compare-Object output at all. Puzzling, but $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -Property parameter. I try to use that:

        diff -property fullname $a $b

    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:

        A\1.txt
        A\2.txt
        A\3.txt

    And $b contains:

        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt

    The diff output is something like this:

        X\2.mp3 =>
        A\1.txt <=
        X\3.mp3 =>
        A\2.txt <=
        X\4.mp3 =>
        A\3.txt <=
        A\1.txt =>
        A\2.txt =>
        A\3.txt =>

    Weird. I think I don't understand something crucial about Compare-Object usage, and the manuals are scarce... Please help me get the DIFFERENCE between two directory snapshots. Thanks in advance.

    UPDATE: I've saved the data as plain strings, like this:

        import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt

    And the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip


  • Opening a Corrupted winmail.dat file

    - by tearman
    I have a set of winmail.dat files that apparently come from a set of emails with corrupted headers. It looks like Exchange 2010 changed the headers around sometime last September and basically rendered the exported .eml files unopenable. The HTML/plain-text emails seem to be OK, but the ones that use Rich Text (specifically Microsoft's TNEF format) will not open in any program, Microsoft or not. I've attempted to use many different non-Microsoft converters, and they see the message as corrupted as well. If I remove the headers of an email and rename it as a winmail.dat file, some emails will open in Word, but most won't. If you look at such an email in a text editor, there are null characters EVERYWHERE that distort the email itself. Does anybody have any experience with this and/or suggestions on how to at least open them?
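
    For what it's worth, a sketch using the open-source tnef utility (packaged for most Unix-likes, and for Windows via Cygwin; this assumes the mail headers have already been stripped so winmail.dat is a bare TNEF stream), which is sometimes more forgiving of damage than Outlook:

        # List what the TNEF stream contains without extracting anything.
        tnef -t winmail.dat

        # Extract whatever can be salvaged into ./out, including the RTF
        # message body if one is present.
        mkdir -p out
        tnef --save-body -C out winmail.dat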


  • SFTP over double server hop

    - by josh.trow
    I'm trying to work out a method to let me access files on an SFTP server that I cannot reach from my local machine. Currently, I have to SSH to a remote server (it is in a certain IP block that the final SFTP server will accept connections from), and from there SFTP to the destination server. From there, I get the files I am interested in, thereby dropping them onto the middleman server, from which I can fetch them over a Samba share or with a direct scp. I also work in reverse: I drop the files onto the middleman, SSH to it, then SFTP to the destination and put them into the appropriate folders. My goal is to shorten this. The unfortunate restrictions are that my machine is Windows (I use KiTTY and/or Cygwin) and I cannot modify the middleman server (or the destination server) in any way. I am willing to use command-line or GUI programs as long as they work and are free.
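
    A sketch of the usual single-hop trick from the Cygwin side (host names are placeholders; it assumes the client's OpenSSH is 5.4 or newer, which introduced -W): tunnel the SFTP session through the middleman so the files move directly between your machine and the destination, never landing on the middle server:

        # ~/.ssh/config on the Cygwin side
        Host sftp-dest
            HostName destination.example.com
            User me
            ProxyCommand ssh middleman.example.com -W %h:%p

        # Then transfers feel direct:
        sftp sftp-dest

    Nothing needs to change on either server; ProxyCommand runs entirely on the client.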


  • Adding default command line options when opening a particular filetype

    - by dbdkmezz
    I'd like to make it so that whenever I open a particular file type (by double-clicking it in Explorer), it always opens the associated program with particular command-line options. So, for example, when double-clicking a .tex file I want it to not only open in emacs (which is easy to set up just by going into "Open With"), but to run emacs with the command-line option "-fs". What's the easiest way to do this? Thanks


  • How do I use XQuartz with ssh on OS X?

    - by cwd
    I've downloaded the latest stable version of XQuartz on my Snow Leopard machine, and I'm trying to make an SSH connection with X forwarding, but X11 keeps opening. How can I get OS X to use XQuartz?

    - I had X11 installed
    - I downloaded and installed XQuartz
    - X11 is not open / running
    - XQuartz is open and running

    I try to connect to a remote system using iTerm2:

        ssh user@remote -X

    X11 opens. XQuartz is still open, but I doubt it is doing anything. I also tried moving X11 to the trash, but then the SSH connection will not complete, even though XQuartz is open. I also get these two warnings, which I don't understand how to fix even after reading the ssh man page:

        Warning: untrusted X11 forwarding setup failed: xauth key data not generated
        Warning: No xauth data; using fake authentication data for X11 forwarding.
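
    A sketch of the usual workaround (assuming XQuartz installed its xauth at /opt/X11/bin/xauth, the default for recent releases): point ssh at XQuartz's xauth and use trusted forwarding, either one-off or via ~/.ssh/config:

        # One-off: trusted X11 forwarding using XQuartz's xauth.
        ssh -Y -o XAuthLocation=/opt/X11/bin/xauth user@remote

        # Or persistently in ~/.ssh/config:
        Host remote
            ForwardX11 yes
            ForwardX11Trusted yes
            XAuthLocation /opt/X11/bin/xauth

    Note that XQuartz only becomes the default X server for sessions started after it is installed, so a log-out/log-in (so that $DISPLAY points at XQuartz's launchd socket) is usually required before X11.app stops being launched.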


  • Remote I/O costs with a Content Delivery Network

    - by x711Li
    As far as I know, the time it takes to scan a directory is correlated with the number of files in that directory, due to I/O costs. Would the administrative cost of placing the files in a hashed directory tree for uploading/downloading files through a CDN API be worth the added efficiency? For instance, given the filename foo.mp3, its MD5 hash is 10ebb1120767e9de166e0f5905077cb1. Thus, storing foo.mp3 as ./10/eb/foo.mp3 allows for fewer files per directory (since MD5 output is hexadecimal, this allows for 16^2 = 256 root directories with 256 subdirectories each, and little chance of hash collision). Considering the directories themselves are not loaded, would the I/O costs of directory scanning still exist with direct uploading/downloading?
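
    A small sketch of the layout computation (assuming GNU coreutils' md5sum; the hash is taken over the file name, matching the example above):

        # Derive the two nesting levels from the MD5 of the file name.
        name="foo.mp3"
        hash=$(printf '%s' "$name" | md5sum | cut -c1-32)
        dir="${hash:0:2}/${hash:2:2}"     # 10/eb for foo.mp3

        # 256 directories per level keeps each directory small.
        mkdir -p "$dir"
        mv "$name" "$dir/$name"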

