Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.

  • Real-time writing to disc

    - by Jesper
    Hi, is there any software out there that can help me track, in real time, files being changed and/or created on my Windows 7 system? I'm trying to figure out all the files that change when setting up Windows Live Mail, as I want to sync all relevant files between two computers. And no, the storage folder is not enough. Grateful for any help, Jesper
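
    A hedged sketch of one crude approach, assuming the forfiles tool that ships with Windows 7 and a US-style date format (the date below is a placeholder): note the date, run the Live Mail setup, then list everything under the profile modified on or after that day. It only has day-level granularity, so for a true live, write-by-write view a tool such as Sysinternals Process Monitor is the better fit.

        rem list files under the profile modified on or after the given date (date format is locale-dependent)
        forfiles /P "%USERPROFILE%" /S /D +06/01/2012 /C "cmd /c echo @path @fdate @ftime" > changed-files.txt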

  • Get TortoiseSVN to give me filenames with the build number embedded

    - by EricJLN
    I am on a Windows 7 box, and I have TortoiseSVN on my machine. After getting a little familiar with svn and TortoiseSVN on a code repository, I set up a local repository to manage revisions of some Word and PowerPoint documents. I want to figure out some scripted way to output a set of files with the build/revision number embedded in the filename. I will then email the files to some business people to review. For example, say I have a group of files in my working directory - PresentA.pptx, PresentA-notes.docx and PresentB.pptx - and the TortoiseSVN repo browser tells me that I am currently at revision 21 for PresentA.pptx and PresentA-notes.docx but at revision 25 for PresentB.pptx. I would like some way to get three files with the following names: PresentA-r21.pptx, PresentA-notes-r21.docx and PresentB-r25.pptx. Alternatively, if revision 25 is the current value for the repository, having all the names appended with -r25 would work, too.
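
    A hedged batch sketch, assuming a command-line Subversion client is on the PATH (recent TortoiseSVN installers can optionally add one): pull the last-changed revision of each file from svn info and copy the file with -rNN inserted before the extension. The file names are the examples from the question.

        @echo off
        for %%F in (PresentA.pptx PresentA-notes.docx PresentB.pptx) do (
            for /f "tokens=4" %%R in ('svn info "%%F" ^| findstr /b /c:"Last Changed Rev:"') do (
                copy "%%F" "%%~nF-r%%R%%~xF"
            )
        )

    For the alternative (stamping every file with the repository-wide revision), SubWCRev.exe, which ships with TortoiseSVN, substitutes $WCREV$ into a template and could drive the same copy loop.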

  • What alternatives are available for shared-folder encryption on Windows Server 2003?

    - by snakepitar
    People in our company have asked about encrypting some of the shared folders published on a local Windows 2003 file server. The requirements are:

    - Encrypt the files, so only a user or group of users can open them
    - Avoid password-protected files; the encryption process should be transparent to the users
    - Though files are encrypted, the backup software (BackupExec) must be able to copy them and access the binary content for verification
    - We cannot install tools/software on users' PCs; they want this to work automatically

    As we have very little experience managing servers, we'll be grateful for any help or suggestions offered.
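
    One hedged candidate that meets most of these requirements is NTFS EFS, which Windows Server 2003 supports natively: it is transparent to users once their certificates are authorised on the files, and backup software that uses the Windows backup APIs copies EFS files in their encrypted form. A minimal sketch (the path is a placeholder):

        rem encrypt an existing shared folder and everything beneath it with EFS
        cipher /E /S:"D:\Shares\Confidential"

    Additional users can be authorised per file via the Advanced Attributes dialog. Whether this satisfies BackupExec's verification step is worth testing first, and note that EFS on remote shares needs the server trusted for delegation to work smoothly.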

  • "There is not enough space on the disk" when there is?

    - by Lee Tickett
    Permissions are fine (inherited), and checking effective permissions everything is OK. As you can see, I can make a file in the docs folder but not in the pdf_docs subfolder. The folder has a lot of files and is quite large - I wonder if I've reached a limit? I couldn't find anything on Google.

        Size: 51.0 GB (54,819,804,885 bytes)
        Size on disk: 52.0 GB (55,925,719,040 bytes)
        Contains: 554,697 files

    EDIT: I've just checked and I can delete files... and for every file I delete I appear to be able to create a new one. This definitely points toward a limit on the number of files?
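
    Two things worth ruling out, offered speculatively. First, an NTFS per-user disk quota on the volume would produce exactly this symptom: deleting one file frees just enough quota to create one more. Second, 8.3 short-name generation can struggle in a folder holding half a million similarly named files. Both can be checked from an elevated prompt (the drive letter is a placeholder):

        fsutil quota query D:
        fsutil behavior query disable8dot3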

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache. The setup:

    - PHP 5.4.6 (php54w)
    - CentOS 6.2
    - Joomla 2.5.6
    - php54w-fpm.i386 (FastCGI process manager)
    - php -m shows the mysql and mysqli modules loaded

    nginx installed fine via yum and can process a phpinfo file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available", of course! Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference. I have a pretty standard nginx default.conf:

        server {
            listen 80;
            server_name www.MYDOMAIN.com;
            server_name_in_redirect off;
            access_log /var/log/nginx/localhost.access_log main;
            error_log /var/log/nginx/localhost.error_log info;
            root /var/www/html/MYROOT_DIR;
            index index.php index.html index.htm default.html default.htm;

            # Support Clean (aka Search Engine Friendly) URLs
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            # deny running scripts inside writable directories
            location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
                return 403;
                error_page 403 /403_error.html;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
            }

            # caching of files
            location ~* \.(ico|pdf|flv)$ {
                expires 1y;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
                expires 14d;
            }
        }

    Snip of output from phpinfo under nginx:

        Server API: FPM/FastCGI
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /etc
        Loaded Configuration File: /etc/php.ini
        Scan this dir for additional .ini files: /etc/php.d
        Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

        Server API: Apache 2.0 Handler
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: /etc
        Loaded Configuration File: /etc/php.ini
        Scan this dir for additional .ini files: /etc/php.d
        Additional .ini files parsed: /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that under Apache, PHP parses substantially more additional .ini files, including the MySQL-related ones (mysql.ini, mysqli.ini, pdo_mysql.ini), than under nginx. Any ideas how I get nginx to also pick up these additional .ini files? Thanks in advance, Steve
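
    A hedged diagnostic, assuming the Webtatic php54w packages: php-fpm only scans /etc/php.d when it starts, so if the MySQL extension package was installed after the daemon came up - or is missing for the FPM build - those .ini files never get parsed. Worth confirming and then restarting the daemon:

        # is the MySQL extension package present? (package name assumed from Webtatic naming)
        yum list installed | grep -i php54w
        ls /etc/php.d    # mysql.ini, mysqli.ini and pdo_mysql.ini should be listed here
        # install if missing, then restart so FPM re-scans /etc/php.d
        yum install php54w-mysql
        service php-fpm restart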

  • Joomla 1.5 Media Manager sets incorrect file permissions when uploading

    - by Scott Mayfield
    Howdy all, I have a Joomla 1.5 installation running on Windows Server 2008, installed via the Web Platform Installer. When uploading images with the media manager (the native uploader, not the Flash bulk uploader), the files arrive on the server correctly but are given incorrect permissions. Specifically, the IIS_IUSRS group is not given access to the file. I might be incorrect about which group/user is SUPPOSED to get access to the files, but so far I've found that unless I give IIS_IUSRS access to the uploaded files, they won't appear on the site or in the media manager (they appear as broken images). Once I give IIS_IUSRS permission to the files, they work fine. So far, all the research I've done has led me to Linux-specific fixes that involve either changing the umask on the server or directly modifying the Joomla codebase to add an appropriate chmod command to the upload process, but I really don't want to modify Joomla directly. I have to believe there's a setting here somewhere that will do the job, either on the Joomla or the Windows side of the equation. Any thoughts? Scott
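
    A plausible angle, offered as an assumption rather than a confirmed fix: on Windows, a file moved within the same volume keeps its original ACL, so uploads that PHP stages in upload_tmp_dir and then moves into the Joomla tree never inherit the web root's permissions. Granting IIS_IUSRS an inheritable ACE on the temp directory (or on the media folder) sidesteps this without touching Joomla; the paths below are placeholders:

        rem grant inheritable read access on PHP's temp upload directory
        icacls "C:\Windows\Temp" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)"
        rem or on the Joomla media folder itself
        icacls "C:\inetpub\wwwroot\images" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)"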

  • Robocopy hiding folders on backup drives

    - by Neil Barnwell
    I have a backup batch file that uses robocopy to back up my files:

        robocopy "C:\" "G:\Default\RoboCopyBackup\C" /XF Pagefile.sys /XD "System Volume Information" "Recycler" "Temporary Internet Files" "Installer Cache" "Temp" /E /R:1 /W:0 /TEE /XJ

    This should create a folder structure on the external backup drive like so: G:\Default\RoboCopyBackup\C\... However, G: appears totally empty. What is weird is that the folders and files are there: if I type the above path into the address bar, I see all the files and folders! Can anyone help me work out why? I think it might be some NTFS-based ownership/permissions thing, but I'm not sure.
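
    A likely cause, offered as a common robocopy gotcha rather than a certainty: the root of C:\ carries the hidden and system attributes, and robocopy copies them onto the destination folder, so Explorer hides it even though the path still resolves. Clearing the attributes should make the backup visible:

        attrib -s -h "G:\Default\RoboCopyBackup\C"

    Adding /A-:SH to the robocopy command strips those attributes from copied files on future runs.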

  • Radeon HD4850 serious issues when using DirectX 10

    - by ricsmania
    Hello, I have a problem with my video card. Whenever I run a DirectX 10 game, it works for a few seconds (10 or so) and then starts displaying nothing but big polygons. I have tested this with Crysis and Resident Evil 5; both have the same problems. The same games running under DirectX 9 work fine, except for some small black squares once in a while. I have the following specs: Asus P7P55D LE, Intel Core i5 750, Sapphire Radeon HD4850 1GB, 2x2GB Patriot Viper II Sector 5 DDR3 1600 MHz, OCZ Stealth X Stream 500SXS 500W. At first I thought it could be the video card overheating (it has stock cooling), but the game crashes even when it's running at 50 degrees C, and it's never been higher than 70. I also thought it could be the PSU, but as far as I know 500W is enough for this computer, especially because I haven't overclocked anything. My OS is Windows 7 x64 and I am using Catalyst 10.10, but I have also tried many older versions with no success. I don't think there is a problem with the card itself, or else it wouldn't run DirectX 9 games, I believe. I have spent many hours searching for a solution but couldn't find one, so any help is appreciated. Thank you. EDIT: I did some further investigation of the problem, and it seems taspeotis was right, it might be related to memory. I slightly underclocked the memory from 993 to 965 MHz and the problem went away completely - both the black squares using DirectX 9 and the weird polygons using DirectX 10. I was using the RE DirectX 10 benchmark, as it consistently crashed around the same point, and now I can play the full benchmark with no artifacts at all. Unfortunately, the underclock has an obvious hit on performance. Although it's not critical, it's definitely noticeable. So, if the video memory test software showed no errors, but the card needs an underclock to work, what might be the problem? Temperature? Voltage? By the way, I couldn't find what the default voltage for this card is. And what is a good tool to try to increase it? I tried ATI Tray Tools, but it has a bug that increases the clock speed dramatically whenever I change something in the Overclock tab, so I'm afraid it might fry my card. Worst-case scenario, if I don't find a solution, I will try to slightly increase the GPU clock to compensate for the memory clock. Thank you again.

  • What benefit do I get from using a 64-bit server?

    - by blockhead
    I bought a small 256MB slice from Slicehost and installed Ubuntu 10.04 64-bit and WordPress on it. Performance was dismal, as Apache was eating up all my memory. Once I did some taming of Apache and switched to FastCGI, things ran fine. Next I rebuilt it as a 32-bit server, and performance was much better. What benefit would I get from a 64-bit server? Is it all about the memory?

  • Synchronising a remote folder with a local one

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files. However, some security issues prevent this application from accessing the files directly. (Actually, these restrictions are built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method which will copy the files to a local folder where my application can access them and, once they are modified, send the modified files back to the NAS disk. I have two options:

    1) Build a second application to do my own synchronisation.

    2) Find some built-in function inside Windows 7 Ultimate which can do this for me.

    Option 2 is preferred. Option 1 is something I can do easily, if need be. I don't need third-party tools. (Still, feel free to add some references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7 and if so, how?
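
    For the built-in route, Windows 7 Ultimate's Offline Files feature can keep a network share cached locally and synchronised automatically, though it targets SMB shares rather than arbitrary folders. A cruder scheduled-task sketch with robocopy (paths are placeholders; newest file wins, deletions do not propagate, and simultaneous edits on both sides will silently lose one copy):

        rem pull files that are newer on the NAS, then push files that are newer locally
        robocopy "\\NAS\share\data" "C:\LocalData" /E /XO /R:1 /W:1
        robocopy "C:\LocalData" "\\NAS\share\data" /E /XO /R:1 /W:1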

  • Converting ScreenFlow files to AVI?

    - by Dave
    I've got a couple of large files, 2-3 GB each, which were recordings of a training course where the instructor used ScreenFlow on the Mac to record all his keypresses. I'm currently on a PC. Problem 1: how do I convert from .screenflow (and the associated .scc files) to AVI or something a PC can play? Problem 2: if I borrow a Mac, can I download http://www.telestream.net/screen-flow/overview.htm (which I think was the package used) and convert the files?

  • Read floppy from OpenVMS machine

    - by Goyuix
    I have a floppy whose contents I need to read - unfortunately, it was formatted, and the data written, on an OpenVMS server. I believe the floppy is formatted "Files-11", and I can see parts of the MFT [equivalent] and file contents through a hex editor, but I would love to be able to mount this and actually read the files off it. Is there a Files-11 FUSE module or other kernel module I can install to read this format? Or any standalone utilities that can understand a floppy image taken with dd?

  • Deleting another user's directories from my own

    - by kwatford
    I am a non-root user, and have made a directory into which other users in my group can write. The directory is setgid, so files and directories within it have the same group. I can delete files placed into this directory, but if a user creates a subdirectory with files in it, I can't seem to delete those. Is there something special I can do (other than, say, bothering the user in question or the sysadmin about it) to get rid of this subdirectory?
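
    Background, as a hedged explanation: deleting a file requires write permission on the directory that contains it, so the user's subdirectory - which they own and you cannot write to - blocks you even inside your setgid tree. The existing one needs the owner or root, but if the filesystem supports POSIX ACLs, a default ACL makes future subdirectories group-writable from the start (the path is a placeholder):

        # make the top-level dir group-writable, and give it a default ACL so
        # new subdirectories created inside it are group-writable too
        setfacl -m g::rwx -m d:g::rwx /path/to/shared/dir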

  • Why does running "sudo chmod -R 664 ." cause me to get access denied on all affected directories?

    - by Codemonkey
    I have a project folder with messy permissions on all files. I've had the bad tendency of setting everything to octal permissions 777 because it solved all non-security-related issues. Then FTP uploads, files created by text editors etc. have their own sets of permissions, making everything a mess. I've decided to pull myself together and start using permissions the way they were meant to be used. I figured 664 was a good default for all my files and folders; I'd just remove permissions for others on private files, and add +x for executable files. The second I changed my project folder to 664, however:

        $ sudo chmod -R 664 .
        $ ls
        ls: cannot open directory .: Permission denied

    Which makes no sense to me. I have read/write permissions, and I'm the owner of the project folder. The leftmost part of ls -l in my project folder looks like this:

        -rw-rw-r--  1 codemonkey codemonkey ...
        drw-rw-r--  5 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        drw-rw-r--  3 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        -rw-rw-r--  1 codemonkey codemonkey ...
        drw-rw-r--  4 codemonkey codemonkey ...
        drw-rw-r--  5 codemonkey codemonkey ...

    I assume this has something to do with the permissions on the directories, but what?
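
    The explanation, for what it's worth: directories need the execute (search) bit to be entered or listed with metadata, and chmod -R 664 stripped it from every directory (visible in the listing above, where the d entries have no x). The usual repair uses capital X, which applies execute to directories only:

        # restore the search bit on directories, leaving regular files at 664
        chmod -R a+X .
        # or set the two kinds explicitly
        find . -type d -exec chmod 775 {} +
        find . -type f -exec chmod 664 {} +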

  • Windows file association for README, INSTALL, LICENSE and the like [closed]

    - by Lumi
    Possible Duplicate: How to set the default program for opening files without an extension in Windows?

    Many files originating in the UNIX world come without a file extension. Popular examples include README, INSTALL and LICENSE. We know for a fact that these are text files. It is therefore a bit disappointing not to be able to just double-click them open in Explorer and see them in Notepad (actually, Notepad2, because of the UNIX line endings which silly Microsoft Notepad doesn't render correctly). Does anyone know of a way to create a file association for, say, README files without an extension? This could then be replicated to cover the most frequently occurring file types, and then double-clicking them open would work.

    Update (sort of in response to all your comments): Thanks, folks, your comments and answers have helped me. @Indrek, yes, I was under the assumption that you could somehow create an association for just README or Makefile, and couldn't do so for files without an extension. Turns out the contrary is true, and yes, that is a workaround that neatly solves the issue. Ultimately, I just want to be able to double-click to open a README or Makefile, that's all. @Sampo, the Send To trick is also useful, although usability is not as great as a straight double-click. (I'm really lazy sometimes.) Turns out the following trick using assoc and ftype from an Administrator prompt does the double-click-enabling job:

        assoc .=no_ext
        ftype no_ext=%SystemRoot%\system32\NOTEPAD.EXE %1
        :: You can see it created some entries in the registry:
        reg query hkcr\no_ext /s
        reg query hkcr\. /s

  • Linux filesystem with inodes close together on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    - Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

    - Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

    - Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

    - Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count more than 1, but that's not a problem since I have only a few dozen such files in my use case.

    - Adjust some settings in /proc or sysctl so that inodes are locked to system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

    - Use a filesystem which has an online defragmentation tool, which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?

  • How to convert tar files from GNU format to pax format

    - by nosid
    On the one hand I have a lot of tar files created in GNU format, and on the other hand I have a tool that only supports pax (aka POSIX) format. I am looking for an easy way to convert the existing tar files to pax format - without extracting them to the file system and re-creating the archives. GNU tar supports both formats; however, I haven't found an easy way to do the conversion. How can I convert the existing GNU tar files to pax?
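
    GNU tar can do this conversion directly: an argument of the form @archive tells tar to read that archive and include its members in the one being created, so each file can be rewritten in pax format without touching the file system. A bash sketch:

        # rewrite each gnu-format archive as a pax-format copy in a subdirectory
        mkdir -p pax
        for f in *.tar; do
            tar --format=pax -cf "pax/$f" @"$f"
        done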

  • RAM cache on Windows Server 2008

    - by Jonas Lincoln
    Scenario: we have a file cluster on a UNC share, and a couple of IIS web servers serve files from this share. This is done through an IIS module, and this module does not use the built-in IIS caching feature. We'd like to cache the files from the UNC share in a RAM disk. So far, we've found this product: http://www.superspeed.com/servers/supercache.php. Are there other products that can help us cache the files from the UNC share in RAM?

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
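
    One hedged way to test the round-trip theory from a Linux client (the mount point is a placeholder): compare a large-block sequential write against the same volume of data written in small blocks. A big gap between the two timings points at per-write latency rather than raw throughput:

        # 1 GiB in 1 MiB blocks vs 1 GiB in 4 KiB blocks, both flushed at the end
        time dd if=/dev/zero of=/mnt/share/big.bin bs=1M count=1024 conv=fsync
        time dd if=/dev/zero of=/mnt/share/small.bin bs=4k count=262144 conv=fsync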

  • Should this folder called Data be indexed?

    - by panny
    In the indexing options of Windows 7 there is a folder called Data which is excluded from indexing on the C:\ drive by default. Can someone confirm this, please? I was not able to locate that folder on my drive, nor include it in the search index. The difference in the number of indexed files is unsatisfying: the native Windows 7 indexing service indexes 377,703 files on six drives; a third-party desktop search indexing service indexes 698,654 files on the same number of drives. Files under UAC control seem not to be indexed without proper privileges. How can this be circumvented?

  • How do ulimit -n and /proc/sys/fs/file-max differ?

    - by bantic
    I notice that on a new CentOS image that I just booted up off of EC2 that the ulimit default is 1024 open files, but /proc/sys/fs/file-max is set at 761,408 and I'm wondering how these two limits work together. I'm guessing that ulimit -n is a per-user limit of number of file descriptors while /proc/sys/fs/file-max is system-wide? If that's the case, say I've logged in twice as the same user -- does each logged-in user have a 1024 limit on number of open files, or is it a limit of 1024 combined open files between each of those logged-in users? And is there much performance impact to setting your max file descriptors to a very high number, if your system isn't ever opening very many files?
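
    A few commands make the two limits visible side by side. Note that ulimit -n is strictly per-process, inherited from the parent shell - so two logins as the same user each get their own 1024 per process, not a combined pool - while file-max caps the whole system:

        ulimit -n                    # soft per-process limit in the current shell
        ulimit -Hn                   # hard per-process limit
        cat /proc/sys/fs/file-max    # system-wide ceiling
        cat /proc/sys/fs/file-nr     # allocated, free, max: live system-wide usage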

  • Filesystem to quickly get recent modifications

    - by liori
    Hello, I've got a relatively big filesystem (ext4) with lots of small files and I'd like to back it up. Making full backups often is not feasible for me, so I want a way to make differential/incremental backups (differential preferred). But... this is a laptop, and scanning for changed files takes a lot of time. My questions:

    1) Is it possible to get a list of files changed since some date from ext4's journal? I know it wasn't designed with this idea in mind, and it might be too small for bigger timespans, but maybe it is somehow possible?

    2) Is it possible to monitor filesystem modifications and maintain a list of changed files reliably? I think I could use inotify, but this might be too slow to monitor the full filesystem and might be unreliable. (By reliable I mean either I get all modifications since the last backup - with the list not missing anything - or an error message.)

    The laptop runs Debian unstable.
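
    On question 2, a minimal inotify sketch (assumes the inotify-tools package; paths are placeholders). One watch per directory is required, so the fs.inotify.max_user_watches sysctl usually needs raising for a whole filesystem, and events are silently dropped if the queue overflows - which is exactly the reliability caveat to plan around:

        # append every change under /home to a log
        inotifywait -m -r -e modify,create,delete,move --format '%w%f' /home >> /var/log/changed-files.log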

  • Windows Vista - overlay icon with two people

    - by abcdefghijkl
    I had to save data from my hard disk to an external drive (with Linux) and after reinstalling Windows Vista (and copying the files back) there is a strange overlay icon with two people. How do I get rid of this? First I thought the files could be shared, but they are not shared. The user is the owner of all those files and they are accessible to everyone. Any ideas what Vista is trying to tell me with these icons, and how do I get rid of them?

  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it with another, and so on. I.e. I have multiple strings to replace with other multiple strings. It may be like:

    - "abc" in a filename replaced with "def" (the string "abc" may be in many files)
    - "jkl" in a filename replaced with "srt" (the string "jkl" may be in many files)
    - "pqr" in a filename replaced with "xyz" (the string "pqr" may be in many files)

    I am currently using an Excel macro to get the file names into Excel, preserving the original names in one column and putting the replacement names in another column, and then I create a batch file of commands like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with this procedure is that it takes a lot of time, as the files are many, and as I am using Excel 2003 there is a limitation on the number of rows as well. I need a script like:

        replacestr abc with def
        replacestr jkl with srt

    in a single directory. Would it be better to do this in a Unix script?
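
    In a Unix shell this is one short loop per mapping; a bash sketch using the example substitutions above (run it in the directory in question, and dry-run with echo in place of mv first):

        for f in *abc*; do mv -- "$f" "${f//abc/def}"; done
        for f in *jkl*; do mv -- "$f" "${f//jkl/srt}"; done
        for f in *pqr*; do mv -- "$f" "${f//pqr/xyz}"; done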
