Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.

Page 141/1877 | < Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >

  • Windows Search not searching in files

    - by Cylindric
    I am trying to get Windows Search to work on my Windows Server 2008 SP2 file server, so I can search for content inside files. I have added the Windows Search Service role to the server and, using the right-click properties in Explorer, set some folders to "Index this location". The problem is that I cannot search inside the files, neither on the server nor remotely.

    I seem to get some inconsistencies in the GUIs. For example, the "Indexing Options" panel shows me just 6 locations indexed, but if I click "Modify" I see nearly everything ticked. For example, the "SearchTest" folder under "infrastructure" has the "index this location" option ticked, but the "Projects" folder does not. I assume this is why some are grey and some are not, but they are all ticked. The "SearchTest" folder contains some files that have nothing but the text PurpleOrange in them, so I should be able to find those.

    So, to summarise:

    1. Which locations are indexed? The ones in the "Index these locations" list, the ones ticked, or the ones not greyed-out in the list?
    2. How do I get to the state where I can click in the search box, type PurpleOrange, and see the files?

    Read the article

  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OSX 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm now seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs, such as zip, take the files at face value and report no permission problems, but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, none of which changes the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, though renaming them -- which works -- does not change anything). What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
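
    A minimal Python sketch of a diagnostic pass over the affected directories follows; the root path is a placeholder. It reports size, owner, mode and BSD flags for each file inside a .git directory, which helps separate truly zero-byte files from permission, flag or ACL problems (on OS X, ACLs show up with ls -le and can be cleared with chmod -N).

        import os
        import stat
        import sys

        ROOT = sys.argv[1] if len(sys.argv) > 1 else "."  # e.g. /eraseme/blah (placeholder path)

        for dirpath, dirnames, filenames in os.walk(ROOT):
            if ".git" not in dirpath.split(os.sep):
                continue  # only look inside .git directories, where the problem shows up
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.lstat(path)
                flags = getattr(st, "st_flags", 0)  # BSD file flags; present on OS X
                print("%10d bytes  uid=%-5d mode=%s flags=%#x  %s" % (
                    st.st_size, st.st_uid, oct(stat.S_IMODE(st.st_mode)), flags, path))
                if st.st_size == 0:
                    print("  -> zero-byte file: the content is gone, not just the permissions")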

    Read the article

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.

    Background: In my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.

    My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them (with sed, for example) to replace the placeholder value with the intended value? The online documentation for Perforce that I have found so far has not been very explicit about whether this is possible or how the mechanics of a submission at this stage would work.
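
    A minimal Python sketch of the reading half of such a change-content trigger, assuming the p4 command-line client is available on the server and the trigger passes %changelist% as its argument; the placeholder token and depot path filter are illustrative only. Perforce exposes the in-flight content of a changelist to change-content triggers through the @=change revision specifier on p4 print; rewriting that content in place before the commit is not something the trigger interface exposes directly, so this sketch only locates the placeholder.

        # Hedged sketch of a Perforce change-content trigger, registered e.g. as:
        #   doc-stamp change-content //depot/docs/... "python stamp.py %changelist%"
        import subprocess
        import sys

        PLACEHOLDER = "#####PERFORCE##CHANGELIST##NUMBER#####"   # illustrative token

        def p4(*args):
            """Run a p4 command and return its stdout as text."""
            return subprocess.check_output(("p4",) + args).decode("utf-8", "replace")

        def main():
            change = sys.argv[1]                       # %changelist% from the trigger definition
            # List the files in the in-flight changelist.
            described = p4("-ztag", "describe", "-s", change)
            depot_files = [line.split(" ", 2)[2].strip()
                           for line in described.splitlines()
                           if line.startswith("... depotFile")]
            for depot_file in depot_files:
                if not depot_file.endswith(".txt"):
                    continue
                # @=change reads the content submitted with this changelist before it is committed.
                content = p4("print", "-q", "%s@=%s" % (depot_file, change))
                if PLACEHOLDER in content:
                    print("placeholder found in %s; would become changelist %s" % (depot_file, change))
            return 0

        if __name__ == "__main__":
            sys.exit(main())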

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all of these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately.

    The pages run from archive_bookname/index.1.htm to archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip and <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info.

    I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
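
    In the absence of a directory listing, a small Python sketch with the requests library can walk the index pages and pull every .jpg.zip link; the base URL, the login form fields and the link pattern below are placeholders that would have to be adapted to the real site. Logging in through a requests.Session keeps whatever cookie the login wall sets.

        import os
        import re
        import requests

        BASE = "https://example.com/archive_bookname"        # placeholder for the real site
        session = requests.Session()
        # Field names here are hypothetical; copy them from the real login form.
        session.post("https://example.com/login", data={"user": "me", "password": "secret"})

        os.makedirs("downloads", exist_ok=True)
        link_re = re.compile(r'href="([^"]+\.jpg\.zip)"')

        for page in range(1, 178):                            # index.1.htm .. index.177.htm
            html = session.get("%s/index.%d.htm" % (BASE, page)).text
            for href in link_re.findall(html):
                url = href if href.startswith("http") else BASE + "/" + href.lstrip("/")
                name = os.path.join("downloads", os.path.basename(href))
                if os.path.exists(name):
                    continue                                  # already fetched on an earlier run
                with open(name, "wb") as out:
                    out.write(session.get(url).content)
                print("saved", name)

    wget can do a similar job with --load-cookies, but exporting the session cookie from the browser is usually the awkward part, which is why a scripted session tends to be simpler here.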

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures from a folder – with subfolders which are updated automatically – keeping their extensions. These files have to be copied into a folder where a PHP-based website will edit them (by renaming them and creating an XML file) so they are downloadable and integrated in an XML feed. Because of the rename step in the script, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because rsync alone can't keep an external "history" of what has already been copied.

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
        done
        wget --spider 'http://myscript.php' ;
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a space in the file names just after the *.jpg copy; my PHP script has problems identifying files with extra dots because of the extension. I'm thinking about a find command – like the one above – combined with sed. Is that a good idea?
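
    One way to get that external "history" is to keep a plain manifest of source paths that have already been handed over, so reruns skip them even after the PHP script has renamed the copies; a minimal Python sketch follows, where the source and destination paths come from the question but the manifest location is made up. It also replaces the extra dots at copy time, which removes the need for a separate sed pass.

        import os
        import shutil

        SRC = "/home/name/picture"
        DST = "/var/www/media"
        MANIFEST = "/var/lib/picture-sync/copied.txt"      # hypothetical location for the history

        os.makedirs(os.path.dirname(MANIFEST), exist_ok=True)
        copied = set()
        if os.path.exists(MANIFEST):
            with open(MANIFEST) as f:
                copied = set(line.strip() for line in f)

        with open(MANIFEST, "a") as log:
            for dirpath, _, filenames in os.walk(SRC):
                for name in filenames:
                    if not name.lower().endswith(".jpg"):
                        continue
                    src = os.path.join(dirpath, name)
                    if src in copied:
                        continue                           # already handed over once; don't duplicate
                    base, ext = os.path.splitext(name)
                    # Replace the extra dots the PHP script chokes on (see the note above).
                    safe = base.replace(".", " ") + ext
                    shutil.copy2(src, os.path.join(DST, safe))
                    log.write(src + "\n")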

    Read the article

  • Copy-paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test: I created a folder containing 15,000 files of 400 bytes using this batch file:

        @ECHO off
        SET times=15000
        FOR /L %%i IN (1,1,%times%) DO (
            fsutil file createnew filename%%i.txt 400
        )

    Then I copy-paste it on my Windows computer using this command:

        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\

    After it has completed, I can see that the transfer rate was 915810 bytes/sec, which is less than 1 MB/s. It took several seconds to copy 7 MB. Please note that this is very slow. I've tried the same with a folder containing a single file of 50 MB, and the transfer rate is 1219512195 bytes/sec (over 1 GB/s) – effectively instantaneous.

    Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Please note that I've tried the same on a Linux system which runs on the same computer in a virtual machine (VMware Player) with an ext3 filesystem; I used the cp command and the copy is instantaneous.

    Please also note the following:

    - no antivirus
    - I've tested this behaviour on multiple Windows computers (always NTFS) and I always get comparable results (transfer rate under 1 MB/s, on average 7-8 seconds to copy 7 MB)
    - I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes)

    The question is about understanding what makes the Windows filesystem so slow at copying a large number of files compared to a Linux one, for instance.

    Read the article

  • Memory Usage by Mapped Files (Win7 64Bit)

    - by Dexter
    When copying files from/to my external USB3 HDD, memory usage in Win7 goes up to 100% and remains there. I'm not sure whether this is a problem caused by faulty drivers or not, but I already have the current version of them (Etron USB3 controller on a Gigabyte 990FXA board). Using RAMMap it becomes obvious that the files that are to be copied are mapped into memory.

    Clicking Empty > Empty System Working Set seems to temporarily fix the problem (without causing any trouble with the file copy process), but it needs to be done every few seconds. Is there any way to schedule this operation to happen every 10 or so seconds on its own? What underlying system command is RAMMap using? Or, alternatively, is there any way to limit how much RAM mapped files may use in Windows 7?

    I know mapped files would usually be removed from memory if other programs need the memory, but while memory usage is at 100% the system starts freezing up for half a second or so every time I click anything, so the automatic removal of unused memory contents seems to be failing here.
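
    Sysinternals RAMMap accepts command-line switches for the same menu entries (-Ew should correspond to Empty System Working Set, though that is worth confirming against RAMMap's own usage output), so a small Python loop can fire it on a schedule; the path to RAMMap.exe is a placeholder. Task Scheduler's repeat granularity is minutes rather than seconds, hence the loop.

        import subprocess
        import time

        RAMMAP = r"C:\Tools\RAMMap.exe"   # placeholder path to the Sysinternals tool

        while True:
            # -Ew is assumed to be RAMMap's "Empty System Working Set" switch; check RAMMap's help.
            subprocess.run([RAMMAP, "-Ew"], check=False)
            time.sleep(10)                # repeat every ten seconds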

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with.

    Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology descriptions. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user granted access exceptions within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, once I have a users table in the DB anyway (and am managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to code the authentication if it is done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so there is no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
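
    For the "lookup of a password hash" part, the piece that absolutely has to be done right when skipping AD is salted, deliberately slow hashing and constant-time comparison. A minimal sketch using only the Python standard library follows; the iteration count is illustrative, and a vetted library such as bcrypt or argon2 would be the more usual choice in production.

        import hashlib
        import hmac
        import os

        ITERATIONS = 200_000   # illustrative; tune to your hardware

        def hash_password(password):
            """Return (salt, hash) to store alongside the user record."""
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
            return salt, digest

        def verify_password(password, salt, stored):
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
            return hmac.compare_digest(digest, stored)   # constant-time comparison

        # Usage sketch
        salt, stored = hash_password("correct horse battery staple")
        assert verify_password("correct horse battery staple", salt, stored)
        assert not verify_password("wrong", salt, stored)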

    Read the article

  • Git: Removing carriage returns from source-controlled files

    - by Blixt
    I've got a Git repository that has some files with DOS format (\r\n line endings). I would like to just run the files through dos2unix (which would change all files to UNIX format, with \n line endings), but how badly would this affect history, and is it recommended at all? I assume that the standard is to always use UNIX line endings for source-controlled files, and optionally switch to OS-specific line endings locally?
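
    A one-shot normalization can be done as a single ordinary commit, so history is not rewritten, only one new revision is added; diffs that cross that commit will be noisy, but that is the usual trade-off. A minimal Python sketch of the sweep follows; it only touches files git already tracks and skips anything that looks binary.

        import subprocess

        # Files currently tracked by git, NUL-separated so odd names survive.
        tracked = subprocess.check_output(["git", "ls-files", "-z"]).split(b"\x00")

        for name in filter(None, tracked):
            with open(name, "rb") as f:
                data = f.read()
            if b"\x00" in data:
                continue                      # crude binary check: leave binaries alone
            fixed = data.replace(b"\r\n", b"\n")
            if fixed != data:
                with open(name, "wb") as f:
                    f.write(fixed)
                print("converted", name.decode("utf-8", "replace"))

        # Afterwards: git add -u && git commit -m "Normalize line endings to LF"

    The longer-term fix is usually a .gitattributes entry such as * text=auto (or core.autocrlf on Windows clients), so the normalization is enforced on future commits rather than repeated by hand.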

    Read the article

  • How to decrypt encrypted files using a PEM private key

    - by Phil Cole
    I have files which have been encrypted either with a public key and the Blowfish algorithm, or with a public key and the AES-256 algorithm. I'm looking to put together a Perl script that would be able to use the private keys (which I do have) to decrypt the files. The public and private key files are all in PEM format, and while I can find ways of reading the PEM files, and ways of decrypting data with a key, I haven't yet found a way of going from a PEM file to a usable key. Any suggestions?
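
    If the files are a conventional hybrid envelope (an RSA-encrypted session key followed by the symmetrically encrypted payload), the steps are: load the PEM private key, unwrap the session key, then decrypt the payload. The Python sketch below shows those steps with the cryptography package; the envelope layout (256-byte wrapped key, 16-byte IV, then AES-256-CBC data) is purely an assumption and has to be matched to however the files were actually produced. The same steps translate to Perl with Crypt::OpenSSL::RSA and Crypt::CBC.

        # Assumed layout: [RSA-2048-wrapped AES key | 16-byte IV | AES-256-CBC ciphertext].
        # Sketch only; confirm the real envelope format before relying on this.
        import sys

        from cryptography.hazmat.primitives import serialization
        from cryptography.hazmat.primitives import padding as sym_padding
        from cryptography.hazmat.primitives.asymmetric import padding as rsa_padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def decrypt_file(key_pem_path, in_path, out_path):
            with open(key_pem_path, "rb") as f:
                private_key = serialization.load_pem_private_key(f.read(), password=None)

            with open(in_path, "rb") as f:
                blob = f.read()
            # Assumed split points -- adjust to the real layout.
            wrapped, iv, ciphertext = blob[:256], blob[256:272], blob[272:]

            session_key = private_key.decrypt(wrapped, rsa_padding.PKCS1v15())

            # For the Blowfish variant, algorithms.Blowfish with an 8-byte IV would replace AES here.
            decryptor = Cipher(algorithms.AES(session_key), modes.CBC(iv)).decryptor()
            padded = decryptor.update(ciphertext) + decryptor.finalize()

            unpadder = sym_padding.PKCS7(128).unpadder()
            with open(out_path, "wb") as f:
                f.write(unpadder.update(padded) + unpadder.finalize())

        if __name__ == "__main__":
            decrypt_file(sys.argv[1], sys.argv[2], sys.argv[3])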

    Read the article

  • SVNKit: show list of files to commit

    - by Jam
    Hi, I am using the SVNKit API to write my own client, and I cannot find a way to show the files that can be committed. In clients such as TortoiseSVN there is a commit dialog with a list of files that have been modified, and the user can choose which files to commit. How can I extract the names/paths of these files? Does the API allow this? Thank you in advance.

    Read the article

  • git difftool, open all diff files immediately, not in serial

    - by Seba Illingworth
    The default git difftool behavior is to open each diff file in series (it waits for the previous file to be closed before opening the next). I'm looking for a way to open all the files at once – in Beyond Compare, for example, this would open all the files in tabs within the same BC window. This would make it easier to review a complex set of changes: flick back and forth between the diff files and ignore unimportant files.

    Read the article

  • Finding duplicate files by content across multiple directories

    - by gagneet
    I have downloaded some files from the internet related to a particular topic. Now I wish to check whether the files contain any duplicates. The issue is that the names of the files differ, but the content may match. Is there any way to implement some code which will iterate through the multiple folders and report which of the files are duplicates?
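
    A minimal Python sketch of the usual approach: group files by size first, then confirm with a content hash, so only candidate files get read in full. The folder list is a placeholder.

        import hashlib
        import os
        from collections import defaultdict

        FOLDERS = ["downloads/topic-a", "downloads/topic-b"]   # placeholder paths

        def sha256_of(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        return h.hexdigest()
                    h.update(block)

        by_size = defaultdict(list)
        for folder in FOLDERS:
            for dirpath, _, filenames in os.walk(folder):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    by_size[os.path.getsize(path)].append(path)

        for size, paths in by_size.items():
            if len(paths) < 2:
                continue                       # a unique size cannot have a duplicate
            by_hash = defaultdict(list)
            for path in paths:
                by_hash[sha256_of(path)].append(path)
            for digest, dupes in by_hash.items():
                if len(dupes) > 1:
                    print("duplicates (%d bytes):" % size, *dupes)

    On Linux, the fdupes utility does the same job from the command line.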

    Read the article

  • Commenting C code, header and source files

    - by pygabriel
    I'm looking for a "best practice" for documenting my C code. Like in any project, I have some header files (".h") and the respective source files (".c"). What kind of comments do you put in the header files, and what kind in the source files? The question arises because, since I commented my header files thoroughly, the .c files look like a mess. What are your best practices for keeping the code well commented?

    Read the article

  • Why doesn't git commit -a add new files?

    - by splintor
    I'm a bit new to git, and I fail to understand why git commit -a only stages changed and deleted files but not new files. Can anyone explain why it is like this, and why there is no other commit flag to enable adding new files and committing in one command? By the way, hg commit -A adds both new and deleted files to the commit.

    Read the article

  • Prevent command "del /s" from entering a folder

    - by jzuniga
    I need to recursively remove unnecessary files from an SVN repository, and I have the following batch file to do this:

        @echo on
        del /s ~*.*
        del /s *.~*
        del /s Thumbs.db

    However, this also deletes the entries under the .svn/ subfolders. Is there any way to prevent these commands from being executed under the .svn/ folders so that they don't mess things up? Thanks in advance!

    EDIT: A solution using Bash (Cygwin) would also work for me, since I just need to do this once.
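
    A Python sketch of one way to do it: walk the working copy, prune every .svn directory from the walk so nothing inside it is ever touched, and remove the same three patterns as the batch file. The root path is a placeholder; the Cygwin route would be find with -name .svn -prune to the same effect.

        import fnmatch
        import os

        ROOT = r"C:\path\to\working-copy"          # placeholder
        PATTERNS = ["~*.*", "*.~*", "Thumbs.db"]   # same patterns as the del /s commands

        for dirpath, dirnames, filenames in os.walk(ROOT):
            if ".svn" in dirnames:
                dirnames.remove(".svn")            # pruning here keeps os.walk out of .svn entirely
            for name in filenames:
                if any(fnmatch.fnmatch(name, pat) for pat in PATTERNS):
                    os.remove(os.path.join(dirpath, name))
                    print("deleted", os.path.join(dirpath, name))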

    Read the article

  • where to find Ubercart translation files

    - by ernie
    I am trying to update language-specific text files ("po files") for Ubercart, but it is unclear who maintains these files and where. There are several places cited, but I am not sure which one is maintained:

    http://ftp.drupal.org/files/translations/6.x/ubercart/
    http://l10n.privnet.biz/translation_group/

    I would also appreciate a description of how to do this in Drupal. In Drupal (admin/build/translate/import) there are several text groups to select. Do I have to repeat the update for each group?

    Read the article

  • How do web browser games access temporary files

    - by Phenom
    There is a web game that I play, and I used Fiddler to see what temporary files it downloaded. While I was playing, I deleted all of those temporary files, including the sound and Flash files, but it didn't affect the game at all. Why is that? I checked in Fiddler and it doesn't look like the files were re-downloaded.

    Read the article

  • Log4Net & RollingFileAppender to generate Xml files

    - by SaguiItay
    I've managed to configure Log4Net with a RollingFileAppender in order to generate XML files. However, the generated files are not valid XML until a "roll" is performed – the XML doesn't have a closing root tag. Basically, this prevents the files from being read until they are "closed"/"rolled". Has anyone else encountered this issue? In my previous (custom) solution I had to write the closing tag after writing each entry, and overwrite it with the next entry... :(

    Read the article

  • How to search Jar files using Windows Search?

    - by Marcus
    I believe that back when we were on Win2K, Windows Search would search through jar files to locate specific classes, but this doesn't appear to work in XP. Does anyone know how to enable this in XP? Note: to do the search in Win2K, we just entered *.jar for the file name and "ClassABC" for the search text, and the search would return any jar files containing class files whose name contained "ClassABC".
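
    Since jar files are just zip archives, a small Python sketch can reproduce the old Win2K behaviour outside of Windows Search: scan every *.jar under a root and print the ones containing a class whose name matches. The root path and search term are placeholders.

        import os
        import zipfile

        ROOT = r"C:\projects"      # placeholder root to scan
        NEEDLE = "ClassABC"        # class name fragment to look for

        for dirpath, _, filenames in os.walk(ROOT):
            for name in filenames:
                if not name.lower().endswith(".jar"):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with zipfile.ZipFile(path) as jar:
                        hits = [entry for entry in jar.namelist()
                                if entry.endswith(".class") and NEEDLE in os.path.basename(entry)]
                except zipfile.BadZipFile:
                    continue                       # skip corrupt or non-zip jars
                if hits:
                    print(path)
                    for entry in hits:
                        print("   ", entry)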

    Read the article

  • FLEX components: updating import statements to move the component into another folder

    - by Patrick
    Hi, I've just imported a Flex component into my project, and I have a theory question about importing. All the import statements in the component source files started with "com.subFolder.etc", but I preferred to move the component folders into "componentName" and to replace all import statements with "componentName.com.subFolder.etc". Is this OK? Everything works perfectly, but I was wondering if the method is correct. Thanks.

    Read the article

  • Creating .lib files in CUDA Toolkit 5

    - by user1683586
    I am taking my first faltering steps with CUDA Toolkit 5.0 RC using VS2010. Separate compilation has me confused. I tried to set up a project as a Static Library (.lib), but when I try to build it, it does not create a device-link.obj and I don't understand why.

    For instance, there are 2 files. A caller function that uses a function f:

        #include "thrust\host_vector.h"
        #include "thrust\device_vector.h"
        using namespace thrust::placeholders;

        extern __device__ double f(double x);

        struct f_func
        {
            __device__ double operator()(const double& x) const
            {
                return f(x);
            }
        };

        void test(const int len, double * data, double * res)
        {
            thrust::device_vector<double> d_data(data, data + len);
            thrust::transform(d_data.begin(), d_data.end(), d_data.begin(), f_func());
            thrust::copy(d_data.begin(), d_data.end(), res);
        }

    And a library file that defines f:

        __device__ double f(double x)
        {
            return x + 2.0;
        }

    If I set the option generate relocatable device code to No, the first file will not compile due to an unresolved extern function f. If I set it to -rdc, it will compile, but it does not produce a device-link.obj file and so the linker fails. If I put the definition of f into the first file and delete the second, it builds successfully, but now it isn't separate compilation anymore. How can I build a static library like this with separate source files?

    [Updated here] I called the first caller file "caller.cu" and the second "libfn.cu". The compiler lines that VS2010 outputs (which I don't fully understand) are (for caller):

        nvcc.exe -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" -clean

    and the same for libfn, then:

        nvcc.exe -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2010 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -rdc=true -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu"

    and again for libfn.

    Read the article

  • Using find and tar with files with special characters in the name

    - by Costi
    I want to archive all .ctl files in a folder, recursively.

        tar -cf ctlfiles.tar `find /home/db -name "*.ctl" -print`

    The error messages:

        tar: Removing leading `/' from member names
        tar: /home/db/dunn/j: Cannot stat: No such file or directory
        tar: 74.ctl: Cannot stat: No such file or directory

    I have these files: /home/db/dunn/j 74.ctl and j 75. Notice the extra space. What if the files have other special characters? How do I archive these files recursively?
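
    The shell answer is to keep the file list out of the shell's word splitting, for example find /home/db -name "*.ctl" -print0 piped into tar -cf ctlfiles.tar --null -T - (GNU tar). A Python sketch with the tarfile module sidesteps quoting entirely, because paths never pass through a shell at all:

        import os
        import tarfile

        ROOT = "/home/db"
        with tarfile.open("ctlfiles.tar", "w") as tar:
            for dirpath, _, filenames in os.walk(ROOT):
                for name in filenames:
                    if name.endswith(".ctl"):
                        path = os.path.join(dirpath, name)
                        # Paths go straight to tarfile, so spaces or other odd characters are harmless.
                        tar.add(path)
                        print("added", path)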

    Read the article
