Search Results

Search found 45804 results on 1833 pages for 'large files'.

  • Option to save project files for later use in Dreamweaver?

    - by Lup T. Ma
    Does anyone know of an extension or another way to save a set of files in a project for later use? Example: while working on site A, I have HTML files A1-A15 (15 files) open. I then receive a request to work on site B, with new files (the number is unimportant). I would like DW to remember that I was working on files A1-A15, so I can close the site A files and focus on just the site B files, complete the site B work, and then reopen the site A files all together. Suggestions are greatly appreciated. Thanks!

  • Dealing with large flat data files with a very big record length

    - by gsp
    I have a large data file that is created by a shell script. The next script processes it by sorting and reading it several times, which takes more than 14 hours; that is not viable. I want to replace this long-running script with a program, probably in Java, C, or COBOL, that can run on Windows or on Sun Solaris. Each time, I have to read a group of records, sort and process them, write them to the output sort file, and at the same time insert them into DB2/SQL tables.
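
    If the sort itself is the bottleneck, it may be worth measuring GNU sort before writing a custom program: it performs an external merge sort and spills to temporary files, so memory is not the limit. A minimal sketch, assuming a pipe-delimited file keyed on its first field (the separator and keys are assumptions to adjust to the actual record layout; on Solaris this also assumes GNU coreutils is installed):

        # Byte-wise collation is much faster than locale-aware sorting.
        export LC_ALL=C
        # External merge sort: bounded memory, spill files in /var/tmp.
        sort --buffer-size=2G \
             --temporary-directory=/var/tmp \
             --field-separator='|' --key=1,1 \
             bigfile.dat > bigfile.sorted

    The per-group processing and DB2 inserts would still need code, but sorting once up front usually removes the repeated full-file passes.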

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed this up, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf:

        tmpdir=/dev/shm/mysqltmp

    I can create /dev/shm/mysqltmp just fine and do:

        chown mysql:mysql /dev/shm/mysqltmp
        chcon --reference /tmp/ /dev/shm/mysqltmp

    I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL writes its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages:

        SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t).

    SELinux is a hard mistress. Details:

        Source Context       root:system_r:mysqld_t
        Target Context       system_u:object_r:tmpfs_t
        Target Objects       /dev/shm [ dir ]
        Source               mysqld
        Source Path          /usr/libexec/mysqld
        Port                 <Unknown>
        Host                 db.example.com
        Source RPM Packages  mysql-server-5.0.77-3.el5
        Target RPM Packages
        Policy RPM           selinux-policy-2.4.6-255.el5_4.1
        Selinux Enabled      True
        Policy Type          targeted
        MLS Enabled          True
        Enforcing Mode       Enforcing
        Plugin Name          catchall_file
        Host Name            db.example.com
        Platform             Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP
                             Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64
        Alert Count          46
        First Seen           Wed Nov 4 14:23:48 2009
        Last Seen            Thu Nov 5 09:46:00 2009
        Local ID             e746d880-18f6-43c1-b522-a8c0508a1775

    ls -lZ /dev/shm shows:

        drwxrwxr-x mysql mysql system_u:object_r:tmp_t mysqltmp

    and the permissions for /dev/shm itself are:

        drwxrwxrwt root root system_u:object_r:tmpfs_t shm

    I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql, with no better results. Shouldn't it be enough to tell SELinux: hey, this is a temp directory just like the one MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
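
    A couple of avenues that may be worth trying, sketched under the assumption that the RHEL5 targeted policy defines a mysqld_tmp_t type and that policycoreutils (semanage, audit2allow) is installed:

        # Option 1: give the directory MySQL's own temp-file type and make
        # the labeling rule persistent.
        semanage fcontext -a -t mysqld_tmp_t "/dev/shm/mysqltmp(/.*)?"
        restorecon -Rv /dev/shm/mysqltmp

        # Option 2: build a local policy module from the logged denials.
        grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmp
        semodule -i mysqltmp.pp

    Keep in mind that /dev/shm is a tmpfs recreated empty at boot, so the directory has to be recreated (and relabeled) at startup, e.g. from an init script, before mysqld starts.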

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then do the 6 or 7 consecutive reboots Adobe expects you to be OK with, I decided I'd integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order Adobe requires, extracted my CD to an administrative install point, and ran the patches against it:

        msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"

    Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to cab it) but is absolutely huge (1.2 GB). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command:

        cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S

    per the guide I was using. While this did produce the cab file I wanted, the resulting MSI fails to install with error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.
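
    For reference, Windows Installer error 2602 generally indicates a File table entry whose Sequence number has no covering row in the Media table; patching an AIP renumbers file sequences, so the Media table's LastSequence often needs raising and the new cabinet registering. A hedged sketch using msidb.exe from the Windows SDK (verify the tables with Orca first; the column values below are assumptions about this particular MSI):

        :: Export the File and Media tables as text for inspection/editing.
        msidb.exe -d AcroPro.msi -f C:\Acrobat8 -e File Media

        :: Edit Media.idt so one row has Cabinet = Data1.cab (prefix with #
        :: only if the cab is embedded as a stream) and LastSequence >= the
        :: highest Sequence value in File.idt, then import it back.
        msidb.exe -d AcroPro.msi -f C:\Acrobat8 -i Media.idt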

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'Desktop', 'My Documents' and 'Application Data'. As our network is split across two sites, we have one file server at each site, configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replicated folder is:

        \\domain\folderredirection$\<username>\<redirected-folder-name>

    The real paths are:

        \\site-1-server\folderredirection$\<username>\<redirected-folder-name>
        \\site-2-server\folderredirection$\<username>\<redirected-folder-name>

    As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than being hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops we use offline files for the redirected folders, and this also works fine. However, the problem is as follows: when conflicts occur in offline files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'); however, not one of these options will resolve any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before.

  • Nginx: redirect all requests that do not match a file to a PHP file

    - by cyrbil
    I'm trying to get all requests to http://mydomain.com/downloads/* to redirect to http://mydomain.com/downloads/index.php, except when the requested file exists in /downloads/. For example:

        http://mydomain.com/downloads              => /downloads/index.php
        http://mydomain.com/downloads/unknownfile  => /downloads/index.php
        http://mydomain.com/downloads/existingfile => /downloads/existingfile

    My current problem is that I can get either the redirection to PHP working with static files not served, or the opposite. Here is my current vhost conf (which redirects fine, but static files are sent to PHP and fail):

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            server_name domain.com;
            root /data/www;
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.html;
            }

            error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ^~ /downloads {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                include fastcgi_params;
                try_files $uri @downloads;
            }

            location @downloads {
                rewrite ^ /downloads/index.php;
            }

            # pass the PHP scripts to FastCGI server
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    One precision: the static files are symlinks created by /downloads/index.php. Thank you for your help.
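
    A plausible fix, sketched for reference and untested against this exact setup: because the ^~ /downloads location contains fastcgi_pass, every request it matches is handed to PHP-FPM no matter what try_files decides. Keeping that location free of FastCGI directives, and letting the named location rewrite to index.php so the request falls through to the existing ~ \.php$ block, should give the behavior described:

        # Serve existing files under /downloads directly; anything else is
        # rewritten to /downloads/index.php and picked up by the PHP block.
        location /downloads {
            try_files $uri @downloads;
        }

        location @downloads {
            rewrite ^ /downloads/index.php last;
        }

    Dropping the ^~ modifier matters here: with it, the rewritten /downloads/index.php would match this prefix location again instead of the ~ \.php$ regex location, and the PHP source would be served as a static file.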

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried these approaches so far:

    1) Make a hash with the source IP/port and destination IP/port as keys. Each position in the hash is a list of packets. The hash is then saved to a file, with each flow separated by some special characters/line. Problem: not enough memory for large traces.

    2) Make a hash with the same key as above, but only keep the file handles in memory. Each packet is then put into the hash[key] that points to the right file. Problems: too many flows/files (~200k), and it might run out of memory as well.

    3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because too many are open at the same time. Problems: WAY too slow, and the same number of files as 2, so also impractical.

    4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: suppose I have a 60 MB trace that has 200k flows. This way, I would process a 60 MB file 200k times. Maybe removing the packets as I iterate would make it not so painful, but so far I'm not sure this would be a good solution.

    5) Split them by IP source/destination and then create a single file for each one, separating the flows by special characters. Still too many files (+50k).

    Right now I'm using Ruby to do it, which might've been a bad idea, I guess. Currently I've filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything in memory as described in 1) using C#/Java/C++, but I was wondering if there wouldn't be a better approach here, especially since I might also run out of memory later on, even with a more efficient language, if I have to use larger traces. In summary, the problem I'm facing is that I either have too many files or I run out of memory. I've also tried searching for some tool to filter the info, but I don't think there is one. The ones I've found only return some statistics and wouldn't scan for every flow as I need.
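
    One tool-based angle that may spare the custom code entirely: Wireshark/tshark assigns every TCP conversation a tcp.stream index, which can drive a per-flow split from the shell. A sketch, assuming a TCP-only trace and a reasonably recent tshark (older builds spell the display-filter flag -R instead of -Y, and some versions need -2 for two-pass analysis when writing filtered output):

        # Enumerate flow ids, then extract each flow into its own capture.
        for s in $(tshark -r trace.pcap -T fields -e tcp.stream | sort -nu); do
            tshark -r trace.pcap -Y "tcp.stream == $s" -w "flow_$s.pcap"
        done

    This re-reads the trace once per flow, so it is closest to approach 4 in cost, but it avoids writing any parsing code and is fine for traces in the tens of megabytes.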

  • How to hide files in Apache 2.2 WebDAV Directory listings

    - by mdornsf
    I use Apache 2.2 as a WebDAV file server for a bunch of Mac and MS Windows clients. Unfortunately, both clutter the filesystem with files like .DS_Store or thumbs.db. Since the files distract my users, I want to hide them from directory listings. Unfortunately, the standard way of hiding files in Apache (via IndexIgnore) seems not to work via WebDAV. Is there any other way to hide files?
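
    For context, IndexIgnore only affects the HTML listings produced by mod_autoindex; WebDAV clients enumerate directories with PROPFIND, which does not consult it. A common workaround is to deny access to the offending name patterns outright, so clients can neither create nor fetch them; a sketch in Apache 2.2 syntax (note this blocks the files rather than truly hiding them, so some clients may log errors for ones that already exist):

        <FilesMatch "^(\.DS_Store|\._.*|[Tt]humbs\.db|desktop\.ini)$">
            Order allow,deny
            Deny from all
        </FilesMatch>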

  • How can I access files on a shared drive from a Windows 2008 server configured with SFTP?

    - by communicator
    I have installed OpenSSH on my Windows 2008 server by following the user guide here. Now I have some files on a Windows network share with the UNC path \\corp\test\testdata. I want to map this file system on the network share to my Windows 2008 server, which is configured with SFTP, so that I can access these files from my Java program by doing SFTP to the Windows 2008 server. Is there any way I can map the network share to C or another drive on the server so that all the files on the share will be available as local files on the server?
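
    One caveat worth planning around: drive letters mapped with net use belong to a logon session and are generally not visible to services such as the OpenSSH daemon. On Server 2008, a common workaround is an NTFS directory symlink inside the SFTP-visible tree that points at the UNC path; a sketch (the local paths are hypothetical, and the account the SSH service runs as still needs share and NTFS permissions on \\corp\test\testdata):

        :: Ensure local-to-remote symlink evaluation is enabled (the default).
        fsutil behavior set SymlinkEvaluation L2R:1

        :: Expose the share under the SFTP root as if it were a local folder.
        mklink /D C:\sftproot\testdata \\corp\test\testdata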

  • Emacs open files from a filename list

    - by crasic
    I have a largish TeX project that is separated into several .tex files. Every time I want to work on it, I open Emacs and manually C-x C-f all the files that I want to work on. I was wondering if there is a way to open files (from the command line) from a file containing a list of filenames, something like filelist.txt:

        file1.tex
        file2.tex
        file3.tex

    and then do:

        cat filelist.txt | emacs -nw

    except that Emacs doesn't support being used this way, as it doesn't like that stdin is reassigned. Any ideas?
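
    A shell-side workaround that may do the trick: let the shell expand the list into arguments instead of piping it through stdin, which terminal Emacs needs for the tty. A sketch, assuming one filename per line (the second form handles names with spaces and relies on GNU xargs):

        # Filenames become arguments; stdin stays attached to the terminal.
        emacs -nw $(cat filelist.txt)

        # Space-safe variant: feed the list via xargs and reopen the tty.
        xargs --arg-file=filelist.txt -d '\n' sh -c 'emacs -nw "$@" < /dev/tty' emacs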

  • View/Find all compressed files on the server?

    - by Volodymyr
    I need to find all compressed files/folders, regardless of file format, on a Windows Server 2003 machine. The built-in search options do not provide this capability. Is there a way to list/view all compressed files? Perhaps this can be done with PowerShell, using file/folder attributes, and written to a txt file with each file's location. Update: by compressed files/folders I mean files that appear in blue in Explorer after the compression attribute has been set.
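
    A sketch of the attribute-based approach in PowerShell, assuming PowerShell is installed on the Server 2003 machine (the built-in compact.exe can produce a similar, if noisier, listing without it):

        # List NTFS-compressed files recursively and save their paths.
        Get-ChildItem -Path C:\ -Recurse -Force -ErrorAction SilentlyContinue |
            Where-Object { $_.Attributes -band [IO.FileAttributes]::Compressed } |
            ForEach-Object { $_.FullName } |
            Out-File C:\compressed-files.txt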

  • Junk files appeared in my OSX root folder?

    - by user68732
    I see a bunch of junk files in the root folder of my hard drive running Snow Leopard. Here is a screenshot of the files: [screenshot omitted]. If I inspect these files, they appear to contain XML with DeviceCertificate, DevicePublicKey, and SystemBUID information, among other XML elements. I do iOS development on this machine. Can anyone explain where these files came from, and whether they are an indication of anything serious, such as malware or spyware?

  • Cannot share files on USB drive between Windows 98 and Windows 2000

    - by Ken Pespisa
    I've run into a strange situation where I can't share files between Windows 98 and Windows 2000 using a USB flash drive. Files I put on the Win98 machine can be read by that machine, but not by the Win2k machine. And likewise, I can add and read files on that drive from the Win2k machine, but those files don't appear on the drive when accessed from the Win98 machine. Anyone have ideas as to what could be the cause of this?

  • How to search inside files in Windows 7?

    - by Revolter
    In Windows XP we can search for files that contain a given keyword, inside all file types. Windows 7 can look inside files for keywords, okay, but only for text-like files (*.doc, *.txt, *.inf, ...), not arbitrary types (*.conf, *.dat, *.*, ...). Microsoft's search filters don't contain any filter I can use for this. Any idea?

  • WINDOWS - Deleting Temporary Internet Files through Group Policy

    - by Muhammad Ali
    I have a domain controller running Windows Server 2008 R2, and users log in to application servers on which Windows Server 2003 SP2 is installed. I have applied a Group Policy to clean temporary internet files on exit, i.e. to delete all temporary internet files when users close the browser. But the Group Policy doesn't seem to work: user profile sizes keep increasing, and most of the space is occupied by temporary internet files, which drives up disk usage. How can I enforce automatic deletion of temporary internet files?

  • Using find and tar with files with special characters in the name

    - by Costi
    I want to archive all .ctl files in a folder, recursively:

        tar -cf ctlfiles.tar `find /home/db -name "*.ctl" -print`

    The error messages:

        tar: Removing leading `/' from member names
        tar: /home/db/dunn/j: Cannot stat: No such file or directory
        tar: 74.ctl: Cannot stat: No such file or directory

    I have these files: /home/db/dunn/j 74.ctl and j 75. Notice the extra space. What if the files have other special characters? How do I archive these files recursively?
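
    For reference, the backquoted command substitution splits filenames on whitespace, which is exactly what mangles "j 74.ctl". GNU find and GNU tar can pass the list NUL-separated instead, which survives spaces, newlines, and any other special characters; a sketch assuming GNU versions of both tools:

        # -print0 emits NUL-terminated names; --null -T - has tar read them.
        find /home/db -name '*.ctl' -print0 | tar -cf ctlfiles.tar --null -T -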

  • Sort files in folders by size (Mac OS X)

    - by Željko Filipin
    I have a folder full of folders and files. I want to sort the files by size (so I can remove the largest ones). I know how to do that in Windows Explorer, but I cannot find a way to do it in Mac OS X Finder. In Windows 2003 I would:

        1. Open the folder in Windows Explorer.
        2. Click the Search button.
        3. Leave the "Search for files or folders named" and "Containing text" fields empty.
        4. Click Search Now.
        5. Sort the results by size.

    Is there a way to do something like this in Finder on Mac OS X?
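
    If a Terminal detour is acceptable, the same report can come from the command line; a sketch using the BSD userland that ships with OS X (sizes in kilobytes, the 20 largest files under the current folder):

        # Recursively list files with their sizes, largest first.
        find . -type f -exec du -k {} + | sort -rn | head -n 20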

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this? I'm leaning towards a Perl script to populate templates to generate shell scripts, config files, XML config files, etc. (see the sketch after this list). Looking briefly at CFengine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. That doesn't seem to be much of a gain over touching the config files directly.

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define:
       - the instance (customer) name
       - the TCP listener port for this instance (not one currently used)
       - the DB2 database name (a serial numeric identifier that already exists; they get prestaged for us by the DBAs)
       - three sub-config files, by name; they need to be created from 3 templates and be named after the instance
       The sub-config files define:
       - the filepath for the DB2 volumes
       - the filepath for the storage of objects
       - the filepath for just one of the DB2 volumes (yes, redundant to the first item)
    2. We run some application commands and start the instance.
    3. We do some LDAP thingies (make an OU for the instance, etc.).
    4. We add a stanza to the config file for our security listener, which acts as a passthrough to LDAP:
       - instance name
       - LDAP OU
       - TCP port for the instance
       - DB2 database name
    5. We restart the security listener (off-hours), change the main config file from item 1, and stop and restart the instance. It is now authenticating via LDAP.
    6. We add the stop and start commands for this instance to the HA failover scripts.
    7. We import an XML config file into the instance that defines things for the actual application for the customer: user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    8. Now we configure the dataloading application. We add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes:
       - the instance (customer) name
       - the DB2 database name
       - an arbitrary number of sub-config files, by name
       Each of the sub-config files defines:
       - filepaths to the directories for ingestion, feedback, backup, and failure
       - those filepaths share a common path to a customer-specific folder, plus one folder per sub-config file
       - each of those filepaths needs to be created
    9. We add this customer instance to our monitoring scripts, which confirm the proper processes are running and can be logged into. Those monitoring config files also include the instance name, the TCP port, the DB2 database name, etc.
    10. A reporting application also needs to be configured for the new instance.

    You get the idea. There's also XML that is loaded into WAS by the middleware team. We give them the values to plug into the XML; they could very easily hand us the template and we could give them back completed XML.
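
    Since the leaning is toward scripted templating, here is a minimal shell sketch of the idea, with every name (instance, port, database, paths) hypothetical; a Perl Template Toolkit version would be structurally identical. The point is that each onboarding step reads from one set of variables, so adding a customer means editing a single definition file:

        #!/bin/sh
        # Hypothetical per-instance variables; in practice these would be
        # sourced from one definition file per customer.
        INSTANCE=custA
        PORT=51001
        DB2NAME=000123
        BASE=/data/customers/$INSTANCE

        # Create the directory tree the dataloading sub-configs reference.
        mkdir -p "$BASE/ingestion" "$BASE/feedback" "$BASE/backup" "$BASE/failure"

        # Render one sub-config file from an inline template.
        cat > "/etc/coreapp/${INSTANCE}-volumes.conf" <<EOF
        instance = $INSTANCE
        tcp_port = $PORT
        db2_name = $DB2NAME
        volumes  = $BASE/db2
        objects  = $BASE/objects
        EOF

    The same variables would then feed the security-listener stanza, the monitoring config, and the XML handed to the middleware team.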

  • Force-refreshing only JavaScript files in Firefox and Chrome

    - by Graviton
    I want to clear only JavaScript files from my web browsers (Firefox and Chrome). I am doing JavaScript debugging, and it's annoying that my JS just won't get updated whenever I change my JS files. The only thing I can do now is to clear my cookies, but doing that erases all of my browsing history. How can I clear/refresh the JavaScript files that have been loaded into my browsers without clearing out other files?
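
    Two approaches that commonly help during development, for reference: a hard reload (Ctrl+Shift+R in both Firefox and Chrome) re-fetches the current page's resources without touching history or cookies, and a cache-busting query string makes the browser treat every edit as a brand-new resource. A sketch of the latter (the parameter name and value are arbitrary):

        <!-- Bump the value whenever the file changes; the browser sees a new URL. -->
        <script src="app.js?v=20120101"></script>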

  • Recovering files using Recuva

    - by Nev Meek
    I'm currently using Recuva to recover some files from an external NTFS disk. It finds the files I'm interested in during its analysis phase (when tools like TestDisk fail to find them at all) and reports them as "Not deleted", with a big green marker to signify a 100% chance of recovery. However, when it tries to recover the files, I get a "the system could not find the file specified" message. Is there an easy way to recover non-deleted files from a disk that I can no longer access through Explorer?

  • mpasdlta files -- what are they?

    - by Tmdean
    I noticed a bunch of folders in the root of my hard drive named with a string of hex digits that contain files named with a GUID ending with "mpasdlta.vdm" and "mpavdlta.vdm". From some Googling, I've determined that these files are spyware and virus definition files used by Microsoft Security Essentials. Are these files safe to delete? (Why doesn't Microsoft follow their own guidelines and store application data in the folders intended for that purpose? grumble grumble)

  • force unzip to also delete any missing files

    - by Magnus
    Currently, when I unzip an archive into a directory with pre-existing files to update them, I use -f, -u or -o to overwrite any clashes. However, I would also like the unzip process to delete any files that were not part of the archive, so that the unzipped version fully matches what was in the archive. (Why not just replace the directory with a fresh unzip? Because I still want to preserve the .svn files and wipe everything else.)
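
    As far as I know, unzip itself has no delete-missing mode, but extracting to a staging directory and mirroring it across with rsync gets the same result while sparing .svn; a sketch (rsync's --delete removes receiver-side files absent from the source, and excluded patterns are protected from deletion by default):

        # Stage the archive, then mirror it over the target, preserving .svn.
        tmp=$(mktemp -d)
        unzip -q archive.zip -d "$tmp"
        rsync -a --delete --exclude='.svn/' "$tmp"/ target/
        rm -rf "$tmp"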

  • Open dialog box won't show files in Libraries, but Explorer will

    - by Alex
    I have the weirdest problem when trying to open or save files. When I try to get to "My Documents" through the "Libraries" side link, it won't show any of my files. It will show them if I go in from the C:\ drive to the user files, though. I thought it was because I didn't have the right location defined for the "Libraries" shortcut, but when I use Explorer to open my Libraries, it shows all the files. Any ideas?
