Search Results

Search found 45987 results on 1840 pages for 'copy files'.


  • Force Windows 8 to search indexed files

    - by Hrvoje
    When searching for files in Windows 8 (Win+F) I don't get the expected results. For example, I installed VLC; it's in the Program Files (x86) folder, and that folder is selected for indexing. A file search (Win+F) gives 0 results. If I pin that exe to Start, then it's found - but I don't want to do this, that's not the point. Where does it search for files? Is there any way to specify search locations? It doesn't seem to use the Indexing Options settings. Also, searching from an Explorer window is kinda slow - I tried entering VLC.EXE in the search box (while in the c:\ root), and it takes some time to return correct results. It works, but it looks like it doesn't use the index and instead scans all files/folders, which is slow.

    Read the article

  • Nginx, logrotate and empty files

    - by tzulberti
    I have a problem with nginx and logrotate. nginx is logging access to two files (main and data), and I have the following crontab entry: 0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx The file "logrotate-nginx" has the following content:

        /tmp/data.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

        /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

    The rotation is done on both files, but the problem is that nginx then stops logging into them. Both files are created, but they stay empty. Any ideas why nginx stops logging to both files?
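
    A minimal sketch of a consolidated configuration, assuming both logs really do live in /tmp and nginx writes its pid file to /tmp/nginx.pid (paths taken from the question; the create owner/mode is a placeholder): listing both logs in one stanza means sharedscripts sends the USR1 reopen signal once, after both files have rotated, and create guarantees fresh, writable files exist immediately:

        /tmp/data.log /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            create 0640 www-data adm
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
            endscript
        }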

    Read the article

  • How do I set the TEMP environment variable for the "Network Service" user?

    - by Chris Phillips
    We have a system that uses Path.GetTempFileName and Path.GetTempPath calls to work with temporary files fairly frequently. This system also runs as the "Network Service" user. We're finding that we're running out of room on the C drive (due to other issues - our temp files are cleaned up correctly) and would like to be able to move the temp directory to a different drive. The easiest solution seems to be to change the TMP or TEMP environment variables for the Network Service user, but I only seem to be able to set my own user variables or the "system" variables, which are overridden by the Network Service user profile. How do I set these variables for the Network Service user?
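
    A hedged sketch: the built-in Network Service account has the well-known SID S-1-5-20, so its per-user environment block lives under HKEY_USERS\S-1-5-20\Environment in the registry (the target path below is a placeholder):

        reg add "HKU\S-1-5-20\Environment" /v TMP /t REG_EXPAND_SZ /d "D:\SvcTemp" /f
        reg add "HKU\S-1-5-20\Environment" /v TEMP /t REG_EXPAND_SZ /d "D:\SvcTemp" /f
        rem restart the affected service (or reboot) so the new values are picked up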

    Read the article

  • Linux program unable to access files in group

    - by user1064665
    I'm having trouble configuring things on Linux so that a program can access certain files. Let's call it pgm A. It has uid uA and gid gA. In addition, uid uA is listed in /etc/group as a member of group gX. The problem is that pgm A cannot access files whose uid is root and whose gid is gX - but only when pgm A is called from another program, pgm B, which also runs as user uA. If I su to user uA and run pgm A from bash, it has no problem accessing files in group gX. But if another program, pgm B, which also runs as user uA, forks and execs pgm A, pgm A cannot access the files. I've verified that pgm A is indeed running as user uA, group gA, when launched from pgm B. So, if uA is a member of group gX, why can't the program access files which are readable by group gX? It's as if the operating system is ignoring the fact that user uA is also in group gX.
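
    A sketch of a quick check (names taken from the question): supplementary groups are attached to a process when a fresh session logs in, and fork/exec only inherits the parent's existing credential set - so if pgm B was started before uA was added to gX, its children never see gX. Comparing the user database with the live process shows this directly:

        # groups according to the user database
        id uA

        # groups the running pgm B process actually holds
        grep Groups /proc/$(pgrep -u uA pgmB)/status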

    Read the article

  • Get Zipped Logs from a Remote Server

    - by Jonathan
    I am tasked with finding a way to download zipped logs from a remote server. There are quite a few of these logs and they are constantly being created. I do have limited ssh access to the remote server and can scp or rsync the files. However, due to the sheer size of these log files, I do not want to rsync all of them; the logs could grow to terabytes, and having rsync compare them all may take some time. I only want to fetch files that were created/last updated within the last hour. I am also worried about rsyncing logs that are still in the process of being created, so I was thinking of only taking files that were last modified more than 3-5 minutes ago. Would anyone be so kind as to help me with such a process? Thank you in advance.
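
    A minimal sketch of that "recent but settled" window, assuming the logs live under /var/log/app on the remote host (host and paths are placeholders): find selects files modified less than 60 minutes but more than 5 minutes ago, and the NUL-separated list is fed straight into rsync:

        ssh user@remote 'cd /var/log/app && find . -type f -mmin -60 -mmin +5 -print0' \
          | rsync -av --from0 --files-from=- user@remote:/var/log/app/ /local/logs/

    Run from cron every hour, the two time bounds can overlap slightly so nothing falls between runs; rsync skips anything it has already copied.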

    Read the article

  • How to rename multiple files in multiple folders with 1 command

    - by Charles
    We want to rename our *.html files to *.php but (sadly enough) don't have enough knowledge to do it with a DOS batch file and/or a cmd prompt command. The problem is that each file is in a separate folder - and yes, we're talking about 1500+ different folder names. I know '*' is the wildcard for files, but using a wildcard for folders as well is unknown to me. We probably need to use the (MS-DOS) 'FOR' command, but that's where I'm stuck. The folder structure we use is: parent-folder/child-folder/grandchild-folder/file.html Sample: games/A/game_name/file.html, games/B/game_name/file.html, games/C/game_name/file.html and so on. The parent folder is the same for all files; the child & grandchild folders differ for most files. After renaming these files to .php, I assume the following in .htaccess will make a permanent redirect: RedirectMatch 301 ^(.*)\.html$ http://oursite.com$1.php Looking forward to suggestions/answers, thanks in advance.
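
    A minimal sketch using cmd's recursive FOR (the path is a placeholder): for /R walks every subfolder, and %%~nF expands to the file name without its extension. Use single % signs if you type this at the prompt instead of in a .bat file:

        for /R "C:\sites\games" %%F in (*.html) do ren "%%F" "%%~nF.php"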

    Read the article

  • Hadoop Rolling Small files

    - by Arenstar
    I am running Hadoop on a project and need a suggestion. By default Hadoop has a "block size" of around 64MB, and the usual advice is to avoid many small files. I currently have very, very small files being put into HDFS due to the application design of Flume. The problem is that Hadoop <= 0.20 cannot append to files, so I end up with too many files for my map-reduce jobs to function efficiently. There must be a correct way to simply roll/merge roughly 100 files into one, so that Hadoop is effectively reading 1 large file instead of 100. Any suggestions?
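
    A minimal sketch of one merge pass with stock HDFS commands (the /flume paths are placeholders): getmerge concatenates every file in a directory into a single local file, which is then pushed back as one large file:

        hadoop fs -getmerge /flume/incoming /tmp/merged-$(date +%Y%m%d%H).log
        hadoop fs -put /tmp/merged-$(date +%Y%m%d%H).log /flume/rolled/
        hadoop fs -rm /flume/incoming/*    # only after verifying the merged copy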

    Read the article

  • Converting Visio (.vsd) files to pdf automatically

    - by Aseques
    I am trying to create a scheduled task to convert all my .vsd files to PDF so all of our devices can read them (Linux, Mac, smartphones, etc.), and I would prefer not to pay for something that can be done with Visio + PDFCreator. The OpenOffice approach doesn't work with .vsd files since it's not a supported format (Method/tools for batch-converting Microsoft Word files into PDF?). What I currently have is this: 'C:\Program Files\Microsoft Office\Visio11\VISIO.EXE' /pt "Z:\Archive\Files.vsd",-PPDFCREATORPRINTER /nologo That is able to open the document automatically and prepare it for printing; the only missing part is that it requires me to confirm the printing dialog. There's some information here: http://support.microsoft.com/kb/314392 but it doesn't explain non-interactive printing.

    Read the article

  • rename multiple files with unique name

    - by psaima
    I have a tab-delimited list of hundreds of names in the following format:

        old_name  new_name
        apple     orange
        yellow    blue

    All of my files have unique names and end with the *.txt extension, and they are all in the same directory. I want to write a script that will rename the files by reading my list, so apple.txt should be renamed to orange.txt. I have searched around but couldn't find a quick way to do this. I can change one file at a time with 'rename' or with perl ("perl -p -i -e 's///g' *.txt"), and a few files with sed, but I don't know how to use my list as input in a shell script that makes the changes for all files in the directory. I don't want to write hundreds of rename commands in a shell script. Any suggestions will be most welcome!
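
    A minimal sketch, assuming the list is saved as rename_list.txt with the header row shown above and one tab-separated pair per line:

        tail -n +2 rename_list.txt | while IFS=$'\t' read -r old new; do
            [ -f "$old.txt" ] && mv -- "$old.txt" "$new.txt"
        done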

    Read the article

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with the Windows Encrypting File System: I have a machine that is in workgroup mode (not joined to a domain). I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application). My application writes and reads files from the encrypted file hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine. I then join the PC to a domain (let's call it 'TEST'). Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are listed as "read only" now, and I cannot unset this flag via Windows Explorer or the command line, even though it looks like it succeeds.) I then 'un-join' the PC from the domain and everything works again. Is there something about changing the domain membership of a PC that changes the behavior of EFS, so that previously encrypted files cannot be read, even by the originating user? Thanks in advance
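
    Two read-only checks that may help narrow it down, as a sketch (the file path is a placeholder): cipher /c lists which user certificates can decrypt a given file, and whoami /user shows the SID the account currently resolves to. Comparing both before and after the join would show whether the domain profile is presenting a different identity or certificate store:

        cipher /c "C:\AppData\SecureData\somefile.dat"
        whoami /user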

    Read the article

  • Does Windows 8 include the Windows Help program (WinHlp32.exe)?

    - by amiregelz
    In 2011, Symantec reported on the use of the Windows Help File (.hlp) extension as an attack vector in targeted attacks. The functionality of the help file permits a call to the Windows API which, in turn, permits shell code execution and the installation of malicious payload files. This functionality is not an exploit, but there by design. (Symantec published a detection heat map for the malicious WinHelp files Bloodhound.HLP.1 & Bloodhound.HLP.2.) I would like to know if the Windows Help program exists on my Windows 8 machine by default, because if it does I might need to remove it for security reasons. Does Windows 8 include the Windows Help program (WinHlp32.exe)?
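
    A quick hedged check from a command prompt: when the legacy Help viewer is present it lives in the Windows directory, so simply looking for the binary answers the question for a given machine:

        dir %WINDIR%\winhlp32.exe
        where /R %WINDIR% winhlp32.exe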

    Read the article

  • How to track network users' access to a document?

    - by BigBoy
    We have a document in a common folder (Linux as the server) on a network, which can be accessed by logged-in users. Is there a way to track the users who actually accessed the document (read/copy operations only)? We can log the users who accessed the network, but we're not sure whether they really viewed/copied the document. How can we check this?
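
    A minimal sketch using the Linux audit subsystem, assuming auditd is installed and running (the path and key name are placeholders): a read watch on the file records every open for reading, and ausearch then reports which user did it:

        # watch the document for read access, tagged with a searchable key
        auditctl -w /srv/common/document.odt -p r -k docwatch

        # later: list the recorded accesses, with uids resolved to names
        ausearch -k docwatch -i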

    Read the article

  • Access to certain files but not others

    - by ADW
    Hoping someone can help me, as I have thus far been unable to solve the issue. I am running a media center on Ubuntu 12.04. I was initially able to access media files on the Ubuntu desktop from my Windows 7 laptop and a Roku device. I then started backing up a new batch of DVDs I had (into MKV files, like everything else in my media folders) and noticed I cannot access the new files from either the Roku or the laptop. I have not changed any settings on the media folder and have verified the share permissions. The parent folder (Media) is shared (with permission flow-down) while the subfolders (Movies, TV Shows, Music) are not. When the access problem arose I changed the permissions on these to shared as well, but with no success. I can only access the original files, not the newly added ones. Any suggestions??? Thanks in advance for any and all help.
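
    One thing worth checking from a terminal, as a sketch (the path is a placeholder): newly written files often get more restrictive permissions than the originals, so comparing an old file with a new one, and then re-opening read access across the share, may be enough:

        ls -l /srv/Media/Movies/          # compare owner/permissions of old vs new files
        sudo chmod -R o+rX /srv/Media     # re-grant world read (and directory traversal)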

    Read the article

  • TrueCrypt Corrupted Files

    - by B. Knight
    Several months ago, I needed to reorganize my data across multiple external hard drives, with my laptop's primary hard drive as the go-between. My external hard drives are all encrypted with TrueCrypt. It appears to me that somehow, during the transfer of my files between the encrypted external drive and the unencrypted internal drive, the files were transferred "as-is" (in their encrypted state). The files range from very small to very large, and it appears this may have happened during one consecutive transfer session. Has anyone ever experienced this problem, and if so, were you able to fix it? Is there a way to recreate the encrypted partition, transfer the files, and then decrypt them to their usable state? Or can the files somehow be decrypted through other means? UPDATE: I am running Windows 7 (x64) Home Premium now, but may have been running Enterprise then. Toshiba laptop, 650GB HDD / 4GB memory. Latest version of TrueCrypt.

    Read the article

  • Move files contained in a certain dir up to the parent dir (CentOS)

    - by Alex
    I will try to explain my problem (sorry for my bad English). I have an image gallery with a directory structure like this:

        images/dir1/subdir1/IMG/files.jpg
        images/dir1/subdir2/IMG/files.jpg
        images/dir1/subdir3/IMG/files.jpg
        images/dir2/subdir1/IMG/files.jpg
        .......
        images/dir109/subdir1/IMG/files.jpg

    The directory named images contains 109 dirs (dir1, dir2, ... dir109), which between them hold 1200 subdirs; every subdir contains a dir named IMG with images in it (file1.jpg, file2.jpg, etc.). I would like to move all the images contained in every dir named IMG up into its parent subdir, to end up with something like this:

        images/dir1/subdir1/file1.jpg
        images/dir1/subdir1/file2.jpg
        images/dir1/subdir2/file1.jpg
        ........
        images/dir109/subdir1/file.jpg
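
    A minimal sketch in bash, run from the directory containing images/ (the glob matches the exact dir/subdir/IMG depth from the question; rmdir only succeeds once each IMG dir is empty):

        for d in images/*/*/IMG; do
            mv -- "$d"/* "${d%/IMG}"/    # move the images up one level
            rmdir "$d"                   # remove the now-empty IMG directory
        done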

    Read the article

  • Cannot apply unity --reset after modifying files

    - by Alex Cline
    So I have an idea of what I did wrong, I am just not sure how to fix it. I used the Unity Glass mod: http://www.omgubuntu.co.uk/2012/07/unity-glass-offers-refined-new-look-for-the-unity-launcher After removing it, I cannot reset Unity and it does not work. Even after purging Unity and reinstalling it, I cannot seem to replace the missing files.

        $ unity --reset
        WARNING: Unity currently default profile, so switching to metacity while resetting the values
        unity-panel-service: no process found
        Checking if settings need to be migrated ...no
        Checking if internal files need to be migrated ...no
        Backend     : gconf
        Integration : true
        Profile     : unity
        Adding plugins
        Initializing core options...done
        compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1c00027
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/arcline/.compiz/session/10b624e5c8f98c5325134625607758338300000051770001"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        (compiz:7038): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'.
        (compiz:7038): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Segmentation fault (core dumped)
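
    Since the output shows the gconf backend, one hedged avenue on this release is clearing the stored Compiz/Unity keys and restarting the shell (a sketch only - this discards all Compiz settings, so note anything you want to keep first):

        gconftool-2 --recursive-unset /apps/compiz-1    # wipe Compiz/Unity gconf settings
        unity --replace &                               # restart the shell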

    Read the article

  • Uploading.to Uploads Files to Multiple File Hosts Simultaneously

    - by Jason Fitzpatrick
    If you're looking to quickly share a file across a variety of file hosting services, Uploading.to makes it a cinch to share up to 10 files across 14 hosts. The upload process is simple. Visit Uploading.to, select your files, check the hosts you want to share the file across (by default all 14 are checked), add a description to the collection, and hit the Upload button. Uploading.to will upload your file to the various hosts; during the process you'll see which hosts are confirmed and which have failed. We had 2 failures among the 14 hosts, which still left the file mirrored across a sizable 12-host spread - not bad at all. When you're ready to share the file, hit the Copy Link button at the bottom of the screen and share it with your friends. They'll be directed to Uploading.to and will be able to select from any of the hosts the file was successfully mirrored across. Uploading.to is a free service and requires no registration. Uploading.to [via Addictive Tips]

    Read the article

  • HTML Parsing for multiple input files using java code [closed]

    - by mkp
        import java.io.*;
        import org.jsoup.Jsoup;

        // Read one HTML file, strip the markup with Jsoup, write plain text out
        FileReader f0 = new FileReader("123.html");
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(f0);
        String temp1;
        while ((temp1 = br.readLine()) != null) {
            sb.append(temp1);
        }
        br.close();

        String para = sb.toString().replaceAll("<br>", "\n");
        String textonly = Jsoup.parse(para).text();
        System.out.println(textonly);

        FileWriter f1 = new FileWriter("123.txt");
        char buf1[] = new char[textonly.length()];
        textonly.getChars(0, textonly.length(), buf1, 0);
        for (int i = 0; i < buf1.length; i++) {
            if (buf1[i] == '\n') f1.write('\r');   // emit Windows-style \r\n line endings
            f1.write(buf1[i]);
        }
        f1.close();   // without close(), the buffered output may never reach disk

    I have this code, but it processes only one file at a time. I want to select multiple files: I have 2000 files, named 1.html through 2000.html, so I want to wrap this in a loop like for (i = 1; i <= 2000; i++), and after execution a separate .txt file should be generated for each input.
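
    A minimal sketch of that loop, assuming the conversion above is pulled out into a helper method (convertFile is a hypothetical name, parameterized on the two file names):

        // convert 1.html .. 2000.html into 1.txt .. 2000.txt
        for (int n = 1; n <= 2000; n++) {
            convertFile(n + ".html", n + ".txt");
        }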

    Read the article

  • Apache htaccess results in files being downloaded instead of displayed

    - by chrissik
    So I had this "beautiful" website that did exactly what I wanted it to do. Then I shut down my PC, rebooted and... the pages just download now instead of being displayed. I re-installed XAMPP and launched Apache again, and I was able to identify the .htaccess file as the cause of the problem:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{QUERY_STRING} !^desktop
        RewriteCond %{HTTP_USER_AGENT} "android|blackberry|googlebot-mobile|iemobile|iphone|ipod|#opera mobile|palmos|webos" [NC]
        RewriteRule ^/?$ /mobile/index [L,R=302]
        RewriteRule ^/?$ /de/index [R]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(.*)$ $1.html

    Here is the problem, I guess:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(.*)$ $1.html

    This should make it possible to use /de/index instead of /de/index.html - but somehow it causes the page to download if I open localhost/de/index (with localhost/de/index.html it works fine...). I'm using HTML pages with SSI elements on an Apache web server. The only other file that differs from the out-of-the-box ones is httpd.conf, where I enabled SSI:

        AddType text/html .shtml
        AddHandler server-parsed .shtml
        AddHandler server-parsed .html
        AddHandler server-parsed .htm
        Options Indexes FollowSymLinks Includes
        AddOutputFilter INCLUDES .shtml
        Options +Includes

    I hope somebody among you can help me with this annoying problem, as I'm quite desperate... for some reason, even without the problematic lines, Chrome keeps downloading the files (even if I delete the .htaccess file), while IE and Opera display the pages. Edit: Now Opera also wants to download files (whether index.html or index is called).

    Read the article

  • How do I load tmx files with Slick2d?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tilemap and display it. I tried at least a dozen different tmx files from numerous examples to see if it was the actual file that was corrupted. Every time I get this error:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx
            at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
            at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101)
            at game.Game.init(Game.java:17)
            at game.Tunneler.initStatesList(Tunneler.java:37)
            at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at game.Tunneler.main(Tunneler.java:29)

    Here is my Game class:

        package game;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;
        import org.newdawn.slick.tiled.TiledMap;

        public class Game extends BasicGameState {
            private int stateID = -1;
            private TiledMap map = null;

            public Game(int stateID) {
                this.stateID = stateID;
            }

            public void init(GameContainer container, StateBasedGame game) throws SlickException {
                map = new TiledMap("data/maps/desert.tmx", "maps"); // ERROR
            }

            public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException {
                //map.render(0,0);
            }

            public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException {
            }

            public int getID() { return stateID; }
        }

    I've tried to see if anyone else has had similar problems but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue. My menu class can load images and display them just fine. Also, the file path is correct. Please let me know if you have any pointers that might help me sort this out.
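
    Slick's ResourceLoader looks the path up both on the classpath and on the file system relative to the working directory, so a quick hedged check is whether the JVM actually sees the file from where it runs (plain java.io, path taken from the stack trace):

        java.io.File f = new java.io.File("data/maps/desert.tmx");
        System.out.println(f.getAbsolutePath() + " exists: " + f.exists());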

    Read the article

  • pdflatex reads .eps files saved in OS/X, but not in Ubuntu

    - by David B Borenstein
    Sorry if this is a stupid question; I'm a newbie. I am preparing a manuscript in LaTeX. The journal (Physical Biology, an IOP publication) requires that figures be saved in .eps format, so I am trying to do that. However, I cannot get my LaTeX file to build when I have generated the .eps files on my Ubuntu computer. If I save the images on my Mac, the file builds just fine. So far, I have tried saving images in ImageJ, Fiji and Inkscape; the same problem occurs in all three. When using Kile, I get the following error:

        /usr/share/texmf-texlive/tex/latex/oberdiek/epstopdf-base.sty:0: Shell escape feature is not enabled.

    In TeXworks, the error is different, but still there:

        Package pdftex.def Error: File `./figures4/figure4a-eps-converted-to.pdf' not found.

    Now, if I fire up Inkscape, Fiji or ImageJ on OS X, everything works fine. The Mac also can't build with the Ubuntu-saved images. The images generated on the Ubuntu machine open fine using Document Viewer. I am building the same LaTeX file on both computers, with the exact same results. The header of my LaTeX file is:

        \documentclass[12pt]{iopart}
        \usepackage{graphicx}
        \usepackage{epstopdf}
        \usepackage{parskip}
        \usepackage{color}
        \usepackage{iopams}

    And then the code for the figure is:

        \begin{figure}
        \center{\includegraphics[width=4in] {./figures4/figure4a.eps}}
        \footnotesize{\caption{ \label{fig:4a} (4a) lorem ipsum dolor sic amet.}}
        \end{figure}

    I'd be happy to send an example of both .eps files. Again, sorry if this is a dumb question. I tried everything I could think of before posting here. Thanks, David
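
    For what it's worth, the first error is epstopdf saying it isn't allowed to run its converter. A hedged pair of workarounds (file names taken from the question): enable shell escape for the build, or pre-convert the figures so nothing needs converting at build time:

        # let epstopdf invoke the converter during the build
        pdflatex -shell-escape manuscript.tex

        # or convert the figure up front and \includegraphics the PDF
        epstopdf figures4/figure4a.eps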

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it was handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.

    Read the article

  • Search inside Xournal files (.xoj)

    - by Javad Sadeqzadeh
    I'm a big fan of Evernote and I use it regularly. However, it has a 60MB storage limit (although text files are not going to occupy much space, the concern about the limit still remains). Today, I installed Xournal, which has great features like annotating, nice backgrounds, free-hand shapes and notes, saving in PDF format, and many more. But the big problem is that, as far as I've noticed, there is no built-in feature for searching inside the notes (created using Xournal, with the .xoj suffix). I used the Catfish file-search application (which builds shell commands for full-text search), but it couldn't help either. Is there any way to search inside a .xoj file at all? If so, Xournal could be a suitable alternative to Evernote, if you put your .xoj files on a cloud (which certainly offers you much more storage space than 60MB). If not, is there any other convenient app similar to Evernote, but with a higher storage limit or without a limit? Somebody suggested the Zim desktop wiki app, which looks great, but I'm not sure if I could copy and paste everything there (a mixture of photos, tables and text with various formats and highlights), like I do with Evernote. And a very useful tool I use is the Evernote Web Clipper (browser extension). Of course, having a desktop client like Everpad is a plus, but not an absolute need. PS: I use Pocket, so please don't suggest that (it only preserves links (which might change over time), not the actual text). I also use Google Drive/Docs, and I don't like it for this purpose either; it's too slow and doesn't have a browser extension or a desktop client. Thank you so much in advance.
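
    One hedged possibility from the command line: Xournal's .xoj files are gzip-compressed XML, so the ordinary z-tools can search the text layer directly (the term and path are placeholders; only text typed with Xournal's text tool ends up as searchable XML, not handwriting):

        zgrep -l "search term" ~/notes/*.xoj    # list the .xoj files containing the term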

    Read the article

  • Removing specific part of filename (what's after the second dash) for all files in a folder

    - by Bodo
    I use the command-line utility youtube-dl to download videos from YouTube and make mp3s from them with avconv. I'm doing this under Ubuntu 14.04 and am very happy with it. The utility downloads the files and saves them with the following naming scheme: TITLE(artist-track)-ID.mp3 So an actual filename looks like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3

    Some other file names in the folder look like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3
        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3
        Stromae - Papaoutai-oiKj0Z_Xnjc.mp3

    At first, this was no problem; it didn't bother me while listening to my music in Rhythmbox. But when moving to a phone or other devices it is pretty confusing to see such a long name, and some players, like the Samsung ones, treat that last part (the ID after the second dash) of the name as the album or something. I'd like to create a bash script that removes what's after the second dash in the name for all files, turning:

        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3

    into:

        Martin Garrix - Animals (Official Video).mp3

    Is it also possible to instruct youtube-dl to exclude the ID from now on? I am currently downloading with the command:

        youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 URL
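
    A minimal sketch of both halves (run in the music directory): ${f%-*} strips everything from the last dash onward, which in these names is exactly the ID, and youtube-dl's -o output template leaves the ID out of future downloads:

        # one-off cleanup of the existing files
        for f in *.mp3; do
            [[ "$f" == *-* ]] || continue    # skip names without a dash
            mv -- "$f" "${f%-*}.mp3"
        done

        # future downloads without the ID in the name
        youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 \
                   -o '%(title)s.%(ext)s' URL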

    Read the article
