Search Results

Search found 69503 results on 2781 pages for 'file listing'.


  • How to track down a file descriptor leak?

    - by cclark
    I have a java process (Glassfish) which is leaking file descriptors. I know this because I get the helpful java.io.IOException: Too many open files exception. I can look in /proc/PID#/fd and see all the open file descriptors. When I use lsof I get a very large number of entries like this:

        java 18510 root 8811u sock 0,4 1576079 can't identify protocol
        java 18510 root 8812u sock 0,4 1576111 can't identify protocol
        java 18510 root 8813u sock 0,4 1576150 can't identify protocol

    I see 12 new ones created per minute. What options can I use on lsof or what other tools are available to me to help track down socket file descriptors where the protocol can't be identified? thanks, chuck
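
    A minimal sketch for narrowing this down (the PID and interval are placeholders, and nothing here is Glassfish-specific): sample the number of unidentifiable sockets over time so the growth can be correlated with application activity, and trace socket-related syscalls to see which code paths create them.

        # in one terminal: watch the leak grow
        PID=18510        # the leaking Java process
        while sleep 60; do
            n=$(lsof -p "$PID" 2>/dev/null | grep -c "can't identify protocol")
            echo "$(date '+%F %T')  unidentifiable sockets: $n"
        done

        # in another terminal: trace socket-related calls for a while
        # (strace -f follows all JVM threads, so expect some overhead)
        strace -f -p "$PID" -e trace=socket,connect,accept,shutdown,close -o /tmp/glassfish-sockets.trace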

    Read the article

  • Keyboard Navigation of File Open Dialog in Windows 7

    - by dkusleika
    In the Windows XP standard File > Open dialog, the top has a "Look in" box. I can press Alt+I to drop down a tree of the disks and folders and easily navigate to other folders or network shares. In Windows 7, I can't seem to navigate the File > Open dialog as easily. The best I've been able to muster is to press Tab five times (in Excel 2007, but I assume it's a Windows standard), then use the arrow keys, or Alt+arrow keys like a browser, to get around. It's simply not as good, because I can't see the whole tree at once. Is there a way to see the whole folder tree? If not, do you have any other tips for keyboard navigation of the File Open dialog in Windows 7?

    Read the article

  • MS Word reports files read-only on Win Server 2003 file server

    - by Larry Hamelin
    I'm not a sysadmin, but I play one on TV: I'm trying to fix a problem on my mom's tiny non-profit's server. I set up a Windows Server 2003 machine as a domain controller and file server. Everything has been working well for a few months, but lately, when she tries to save changes to a Word (Office XP) document stored on the server, Word will intermittently report that the file is read-only. Saving to an alternate file in the same directory works, and when she closes Word and re-opens the original document, it will save changes just fine. No one else ever has these files open. I've checked security and share permissions, and everything's OK. We've tried rebooting the server, but the problem continues, just intermittently. I have no clue what's going on. Help!

    Read the article

  • How to recover unsaved PSD file on MacOSX

    - by cenk
    Adobe Photoshop creates temporary *.psb files for emergency recovery at this path: ~/Library/Application Support/Adobe/Adobe Photoshop CS6/AutoRecover. The files created have names like _Untitled-10FDB62ECBABBFF5C8EAD958EBC9CFAE2E.psb, with the current user:group as the designated owner. If you save the file you are working on, OR you hit "don't save" when prompted, the temporary files are deleted. So the system both creates and deletes these files. I am trying to recover the emergency file, but I think the "undelete" utilities were written assuming the "user" deletes the file - like going into the trash bin and then emptying the trash... Does anyone have experience with this? Thanks.

    Read the article

  • 403 Forbidden when trying to download file that was uploaded using SSH

    - by Simon Hartcher
    I have FTP access to an Apache server on Linux to upload files so that they can be downloaded from the web. I was recently granted SSH access for extra permissions and figured that it would be quicker to download the files directly to the server, instead of downloading them to my machine and then FTPing them to the server. When I downloaded a file straight to the server over SSH and then placed it in the public_html directory, it was not visible from the web. The permissions (as shown over SSH and in the FTP client) were the same as all the other files that are visible, but it was not visible in the directory listing, and if I typed the filename into my browser I would get a 403 error. Obviously, when I FTP a file to the server, something else happens to make it web-visible that I am not currently privy to. What am I missing that is causing the file to be invisible from the web?
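
    Things worth comparing between a file uploaded over FTP and the invisible one are the owner and group (not just the permission bits), the execute bit on every parent directory, and, on distributions that use SELinux, the security context. A hedged sketch; the file names are placeholders:

        cd ~/public_html
        stat -c '%A %U:%G %n' visible-file.zip invisible-file.zip   # ownership often differs
        namei -l ~/public_html/invisible-file.zip                   # perms on every path component
        chown youruser:yourgroup invisible-file.zip                 # match what FTP uploads get
        chmod 644 invisible-file.zip
        ls -Z invisible-file.zip && restorecon -v invisible-file.zip   # only relevant if SELinux is enforcing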

    Read the article

  • Old hard drive file permissions still there

    - by blsub6
    I have a new hard drive, put Windows 7 on it, and want to get all the files off of my old hard drive. I put the old drive in as a slave drive. I can see the files, but when I try to move them, it tells me that I'm not the owner of the file. I try to take ownership of the file and it doesn't work (it doesn't tell me that I can't take ownership; the change appears to go through, but I get the same error when I try to open the file again). I've tried modifying the permissions, no dice. Anything else I can try?

    Read the article

  • Fixing mac user file permissions, not the system

    - by Cawas
    Usually those files get the wrong permissions when they come from the network (even when I copy them from it, but mostly through "file sharing"). So I'm definitely not talking about Disk Utility's permission repair here, please. Regardless of how a file got the wrong permissions, I know of two bad ways to fix them: one is Cmd+I and the other is chown/chmod. The command line isn't all bad, but it isn't practical either. Sometimes it's just one file I need to repair, sometimes it's a bunch of them. By "repair" I mean 644 for files, 755 for folders, and the current user:group for all of them. Isn't there an app, script, or Automator workflow out there to do that?
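
    For the command-line half of this, a small reusable script is about as close to "repair" as it gets. This is only a sketch (it assumes 644/755 and the current user and group are always what you want), and it can be pasted into an Automator "Run Shell Script" action so it also works from Finder:

        #!/bin/sh
        # usage: fixperms.sh /path/one /path/two ...
        me="$(id -un):$(id -gn)"
        for target in "$@"; do
            # chown needs sudo if someone else currently owns the files
            chown -R "$me" "$target"
            find "$target" -type d -exec chmod 755 {} +
            find "$target" -type f -exec chmod 644 {} +
        done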

    Read the article

  • File apparently doesn't exist when attempting to delete it

    - by Alex Yan
    A month or so back, I untarred the Linux source into a folder in Cygwin (I was curious whether it would compile with MinGW, because my other computer running Linux is a slow single-core Sempron). I tried deleting it, but there's one file left that will not delete. Cygwin resides in C:\cygwin, and I untarred the source into C:\cygwin\src\linux-3.7.1. It didn't compile, so I tried deleting the folder. That was going well until the end, when I realized not all of the files had been deleted. I tried deleting the linux-3.7.1 folder again, and an error popped up (screenshot not shown). I opened the folder and found that there's one source file left: aux.c, at C:\cygwin\src\linux-3.7.1\drivers\gpu\drm\nouveau\core\subdev\i2c\aux.c. It will not delete, open, or move, and neither its General nor its Security properties will display (screenshots not shown). How do I remove this file?
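
    aux (like con, prn, nul and com1) is a reserved device name on Windows, which is why Explorer and most Windows tools refuse to delete, open or move aux.c; plain cmd.exe can still do it if you prefix the full path with \\?\, which turns off the reserved-name parsing. The simpler route here is Cygwin itself, since its tools use POSIX paths and don't care about reserved names. A sketch, assuming the default C:\cygwin mount:

        # from a Cygwin terminal
        rm /src/linux-3.7.1/drivers/gpu/drm/nouveau/core/subdev/i2c/aux.c
        # or just remove whatever is left of the tree
        rm -rf /src/linux-3.7.1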

    Read the article

  • Recover open but deleted file on Linux using ln instead of cp

    - by Yang
    Say I have a file that's downloading (from a source that's hard to re-download from), but it has accidentally been deleted from the filesystem namespace (/tmp/blah), and I'd like to recover this file. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Any way to recover by inode/create a hard link? Any other solutions? If it makes any difference, I'm mainly concerned with ext4. Thanks in advance.
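
    You generally can't resurrect it with ln: the kernel refuses to create a new hard link to an inode whose link count is already zero. What does work (a sketch; the PID and FD are placeholders you read off /proc) is to keep your own reader attached to the still-open descriptor and let it copy bytes as the download appends them:

        # find the descriptor pointing at the deleted file (sudo if it's not your process)
        ls -l /proc/[0-9]*/fd/* 2>/dev/null | grep '(deleted)'

        PID=1234   # values found above
        FD=42

        # copy from byte 0 and keep following as data arrives; stop it yourself
        # once the download has finished, before Chrome closes the descriptor
        tail -c +1 -f "/proc/$PID/fd/$FD" > /tmp/blah.recovered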

    Read the article

  • how to run an AFS file server on a specific ethernet card (in Debian)

    - by listboss
    I have a Linux box running Debian server with a minimal number of packages (so no GUI for network management). The box has two Ethernet cards, one of which (eth0) is connected to a Mac OS X computer using a crossover cable. I can bring up eth0 and assign a static IP (10.10.11.16) to it; this way I can SSH to the box through the crossover cable. This is what I run on the Linux box:

        ifconfig eth0 10.10.11.16 netmask 255.255.255.0 up

    I also installed and started a file server (AFS) on Debian. So far, the file server can only be accessed through eth1, which is exposed to my home LAN and the internet. My goal is to set up the file server so that it's only visible through eth0. Is this possible, and if yes, how can I do it?
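
    If this is OpenAFS, the usual mechanism is the NetInfo/NetRestrict pair of server files, which list the local addresses the server processes should (and should not) advertise. The paths, the awk parsing and the restart command below are assumptions based on a stock Debian openafs-fileserver install, so double-check them on your box:

        # advertise only eth0's address
        echo 10.10.11.16 | sudo tee /var/lib/openafs/local/NetInfo

        # and explicitly exclude whatever address eth1 carries
        ip -4 -o addr show eth1 | awk '{print $4}' | cut -d/ -f1 | sudo tee /var/lib/openafs/local/NetRestrict

        sudo /etc/init.d/openafs-fileserver restart

        # belt and braces: drop AFS fileserver traffic (7000/udp) arriving on eth1
        sudo iptables -A INPUT -i eth1 -p udp --dport 7000 -j DROP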

    Read the article

  • Why is my hosts file not working?

    - by elliot100
    I've been using the hosts file for local website development, and it's recently stopped working. No entries other than localhost resolve. I've simplified it to test, so it now contains only:

        127.0.0.1 localhost
        ::1       localhost
        127.0.0.1 test.dev

    localhost responds to ping; test.dev does not. Things I've checked:

        The file is called hosts, with no extension.
        It has no trailing spaces.
        It's saved in C:\WINDOWS\System32\drivers\etc, which matches the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath.
        Oddly, despite UAC being on, I can edit, delete and save the file without admin permissions.
        No proxy is being used, and the PC is not connected to a network for testing.
        Stopping the DNS Client service seemed to resolve the issue for a few minutes; test.dev briefly resolved but doesn't any more.
        The only firewall is Windows'.
        The machine has been restarted.

    Is there anything else I should try?

    Read the article

  • apache/debian squeeze server loading directory listing instead of website

    - by Diego
    When you navigate to mywebsite.com/ you see an Apache index page showing a folder called mywebsite.com/; clicking there then takes me to mywebsite.com/mywebsite.com, which doesn't exist, so WordPress shows me a 404 error. I'm trying to host a WordPress site at mywebsite.com/ but I think I have some kind of directory listing wrong somewhere, though I'm pretty sure I've set up my /etc/apache2/sites-available/mywebsite.com correctly:

        <VirtualHost *:80>
            ServerName mywebsite.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/mywebsite.com/
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            ErrorLog /var/log/apache2/error.log
            CustomLog /var/log/apache2/access.log combined
            LogLevel warn
        </VirtualHost>
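
    An index page listing a folder called mywebsite.com usually means the request is being answered by Apache's default site (whose DocumentRoot on Debian is /var/www) rather than by this VirtualHost, so the vhost is either not enabled or losing to the default. A quick, hedged checklist from the shell (the site name "default" is squeeze's stock default vhost):

        apache2ctl -S                      # which vhosts are loaded, and which answers *:80 first?
        grep NameVirtualHost /etc/apache2/ports.conf   # should contain: NameVirtualHost *:80
        sudo a2ensite mywebsite.com        # enable the site if it only lives in sites-available
        sudo a2dissite default             # or make sure your vhost beats the default site
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload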

    Read the article

  • Linux - File was deleted and then reappeared when folder was zipped

    - by davee9
    Hello, I am using BackTrack 4 Final, which is a Linux distro that is Ubuntu-based. I had a directory that contained around 5 files. I deleted one of the files, which sent it to the trash. I then zipped the directory up (now containing 4 files), using this command:

        zip -r directory.zip directory/

    When I then unzipped directory.zip, the file I deleted was in there again. I couldn't believe this, so I zipped up the directory again, and the file reappeared again, but this time it could not be opened because the operating system said it didn't exist or something. I don't remember the exact error, and I cannot make this happen again. Would anyone happen to know why a file that was deleted from a directory would reappear in that directory after it was zipped up? Thank you.
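
    One plausible explanation (an assumption on my part, and it only applies if directory.zip already existed from an earlier run) is that zip -r against an existing archive updates it in place: it adds and refreshes entries but never removes entries for files that have since been deleted, so a copy archived before the deletion would survive inside directory.zip. Checking and avoiding that looks like this:

        unzip -l directory.zip                                   # list what the archive actually contains
        rm -f directory.zip && zip -r directory.zip directory/   # rebuild from scratch
        # or, with Info-ZIP 3.x, drop entries that no longer exist on disk:
        zip -r -FS directory.zip directory/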

    Read the article

  • Expression web ftp: Stuck at "Listing subsites"

    - by FrankPython
    When I try to use Expression Web 4's built-in FTP, I see the message "Listing subsites in..." and soon afterward "passive FTP not available". If I switch to active, I get "active FTP not available". There are no subsites; it is a simple directory with one HTML page. The backend is a normal IIS 6 server. FTP to the same IP with other FTP clients works fine! Any idea whether Expression Web has some specific requirements? It is our own dedicated server. (Please, no tips to use another tool; for this specific project Expression Web is a requirement.)

    Read the article

  • Opening NBF backup file?

    - by ellisgeek
    I have a backup file from before I reinstalled Windows, but I am unable to open it because the file is an NBF. It was created with Acer Backup Manager, which is a proprietary version of NTI's backup software. Is there any way to open this? I have tried using NTI Backup Now! 4.x, but it says the file is invalid. Acer Backup Manager will only let me restore the ENTIRE image (not what I want), and many hours of googling have left me empty-handed.

    Read the article

  • Batch edit (not rename) file properties in windows

    - by Jay
    I have a large directory of downloaded shareware. I keep track of what I have by individually editing the properties of each program. However, some of the programs are multipart .rar types, and I have at least a few hundred programs so far. I am looking for a utility that will let me batch edit file properties such as Title, Author, Summary, and Comments, so I don't have to edit each file or file part individually. Windows doesn't let me do this in Explorer. PowerDesk has a proprietary system, but it isn't preserved when moving or copying files. Any suggestions?

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share which several frontend servers are connected to, making the files stored on it available for HTTP downloads. It looks like I have problems with the way Apache is serving the files: there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory. With this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for it, because the documentation says it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
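
    nginx has a few per-location directives that control how much it reads ahead and buffers when serving static files. The values below are only illustrative starting points to benchmark against the NFS mount, and the server name and root are placeholders; something like this could go in /etc/nginx/conf.d/static-files.conf, followed by nginx -t && nginx -s reload:

        server {
            listen 80;
            server_name files.example.com;      # placeholder
            root /mnt/nfs/files;                # placeholder

            location / {
                sendfile        on;     # kernel streams file -> socket
                output_buffers  2 4m;   # larger userland buffers per response
                read_ahead      4m;     # ask the kernel to read ahead on the file
                directio        8m;     # O_DIRECT above this size (disables sendfile for those files)
            }
        }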

    Read the article

  • Firewall still blocking port 53 despite listing otherwise?

    - by Tom
    I have 3 nodes with virtually the same iptables rules loaded from a bash script, but one particular node is blocking traffic on port 53 despite its rules listing that it accepts it:

        $ iptables --list -v
        Chain INPUT (policy DROP 8886 packets, 657K bytes)
         pkts bytes target prot opt in   out source        destination
            0     0 ACCEPT all  --  lo   any anywhere      anywhere
            2   122 ACCEPT icmp --  any  any anywhere      anywhere      icmp echo-request
        20738 5600K ACCEPT all  --  any  any anywhere      anywhere      state RELATED,ESTABLISHED
            0     0 ACCEPT tcp  --  eth1 any anywhere      node1.com     multiport dports http,smtp
            0     0 ACCEPT udp  --  eth1 any anywhere      ns.node1.com  udp dpt:domain
            0     0 ACCEPT tcp  --  eth1 any anywhere      ns.node1.com  tcp dpt:domain
            0     0 ACCEPT all  --  eth0 any node2.backend anywhere
           21  1260 ACCEPT all  --  eth0 any node3.backend anywhere
            0     0 ACCEPT all  --  eth0 any node4.backend anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

        Chain OUTPUT (policy ACCEPT 15804 packets, 26M bytes)
         pkts bytes target prot opt in out source destination

    And from a remote server:

        $ nmap -sV -p 53 ns.node1.com
        Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2011-02-24 11:44 EST
        Interesting ports on ns.node1.com (1.2.3.4):
        PORT   STATE    SERVICE VERSION
        53/tcp filtered domain
        Nmap finished: 1 IP address (1 host up) scanned in 0.336 seconds

    Any ideas? Thanks
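
    Since the two ACCEPT rules for dpt:domain show zero packets, the DNS traffic apparently never matches them, which usually means it arrives on a different interface or with a different destination address than the rules expect, or that something upstream filters it first. A few hedged checks, reusing the names from the listing above:

        # numeric output avoids misleading reverse-DNS names in the rule listing
        iptables -L INPUT -v -n --line-numbers | grep -E 'dpt:53|policy'

        # does the query even reach this box, and on which interface / destination IP?
        tcpdump -ni eth1 port 53

        # log whatever falls through to the DROP policy while probing from outside
        iptables -A INPUT -p udp --dport 53 -j LOG --log-prefix 'DNS53-udp: '
        iptables -A INPUT -p tcp --dport 53 -j LOG --log-prefix 'DNS53-tcp: '
        tail -f /var/log/kern.log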

    Read the article

  • Reset Photoshop File Associations

    - by Rev
    Is there a way to reset Photoshop's file associations without having to reinstall? I had CS6 and CS5.5 installed side by side, and when I uninstalled CS5.5 it removed the file associations. I tried searching around, but everyone seems to have the opposite problem (wanting to remove Photoshop's file associations). Oh, and just doing Open With > Photoshop and setting that as the default doesn't really work right: it displays the wrong icons (which really gets on my nerves). I'm running Windows 8 RP (but the fix should be the same as in Windows 7).

    Read the article

  • How to download a URL as a file?

    - by Michelle
    A website URL has "hidden" some MP3 files by embedding them as Shockwave files, as follows. <span class="caption"><!-- Odeo player --><embed src="http://odeo.com/flash/audio_player_tiny_gray.swf"quality="high" name="audio_player_tiny_gray" align="middle" allowScriptAccess="always" wmode="transparent" type="application/x-shockwave-flash" flashvars="valid_sample_rate=true external_url=http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3" pluginspage="http://www.macromedia.com/go/getflashplayer"></embed></span> How can I download the files for off-line listening? I've found two methods: 1. The Stack Overflow Method Create a new local HTML file with just the links, for example: <a href="http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3">Sunday Edition 25Nov2008</a> Open the file in the browser, right click the link and File Save Link As. 2. The Super User Method Install the Firefox addin Iget. (Be sure to use the right version for your Firefox version.) Tools Downloads Enter URL in the field. Are there any other ways?
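
    A third option is to skip the browser entirely: the real MP3 address is sitting in the flashvars attribute after external_url=, and any command-line downloader can fetch it directly. A sketch (the grep pattern assumes the markup looks like the snippet above):

        # download one file directly
        curl -L -O "http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3"

        # or pull every embedded external_url out of a saved copy of the page
        grep -o 'external_url=[^" ]*' page.html | cut -d= -f2- | xargs -n1 wget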

    Read the article

  • .htaccess - Simulating virtual host wrong link to Parent Directory in Directory Listing

    - by ?????? ?????
    I have a domain dedicated to my local server (.dev), and an .htaccess file which redirects requests like http://folder.dev/subfolder/ to /htdocs/folder/subfolder. It works great and all, except for one minor issue. When I have Directory Listing enabled, I can access all the folders, subfolders and files properly, except when I click on the Parent Directory link, which, for example, should lead to http://folder.dev but redirects to http://folder.dev/folder/ and consequently throws a 404 Not Found. Similarly, if Parent Directory should link to http://folder.dev/subfolder/, it links to http://folder.dev/folder/subfolder/. Here's what my .htaccess looks like:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} !(/$|\.)
        RewriteRule (.*) %{REQUEST_URI}/ [R=301,L]
        RewriteCond %{ENV:REDIRECT_SUBDOMAIN} =""
        RewriteCond %{HTTP_HOST} ^(www\.)?([a-z0-9][-a-z0-9]+)\.dev\.?(:80)?$ [NC]
        RewriteCond %2 !^www|ftp|mail|pop3|localhost$
        RewriteCond %{DOCUMENT_ROOT}/%2 -d
        RewriteRule ^(.*) %2/$1 [E=SUBDOMAIN:%2,L]
        RewriteRule ^ - [E=SUBDOMAIN:%{ENV:REDIRECT_SUBDOMAIN}]

    Apart from that one thing, everything else works fine (e.g. relative links in documents etc.)

    Read the article

  • Show symbolic links AND their targets in web directory listing (apache)

    - by Erwan Queffélec
    Listing a directory's content with ls -l shows this output:

        total 12
        drwxr-xr-x 3 root root 4096 Dec 11 16:38 2.3
        drwxr-xr-x 5 root root 4096 Dec 11 16:38 2.4
        drwxr-xr-x 2 root root 4096 Dec 11 16:38 archive
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 current -> 2.4/2.4.1/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 next -> 2.4/2.4.2/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 previous -> 2.4/2.4.0/

    Notice how it shows the symbolic links and their respective targets. I need to know if there is a way of getting the same behaviour in Apache directory browsing. If Apache is not capable of it, as I suspect, is there an application (FLOSS) providing that kind of behaviour?
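
    As far as I know mod_autoindex has no option for printing a symlink's target, so one workaround is to generate the listing yourself, for example with a tiny CGI wrapper around ls -l. The script location, the directory and the idea of enabling CGI for it are all assumptions, not an Apache feature:

        #!/bin/bash
        # e.g. /usr/lib/cgi-bin/lslisting -- make it executable and enable CGI
        # (ScriptAlias or Options +ExecCGI) for the location that serves it
        echo "Content-Type: text/plain"
        echo
        cd /var/www/downloads || exit 1     # the directory whose links you want to show
        ls -l --time-style=long-iso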

    Read the article

  • Windows HOSTS File

    - by jdp
    I have a network of computers and a server. I often need to edit the hosts file of all the computers to add a new entry or to change one. e.g. 192.168.1.101 temporary-internal-site How can I get all the machines (all windows) to 'import' or use a hosts file on a server in addition to the normal one. This means that I can just edit this one file with the new entry and all the machines will check it. Feel free to ask questions if you don't understand.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). [Process diagram omitted.]

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB.

    [Chart omitted.] The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    [Chart omitted.] The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. (A rough command-line sketch of the same block-then-commit sequence follows the resource list below.)

    Related Resources
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests: Experiment Metadata; Experiment Datasets; 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB Uploads; Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata; Experiment Datasets; Raw Data; 256KB, 512KB, 1MB, 2MB, and 4MB Blocks
    - Excel worksheet showing summarizations and comparisons
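
    For readers without the .NET client library handy, the same block-then-commit idea can be sketched against the Blob storage REST API directly. This is not the post's code: the account, container and SAS token are placeholders, every block ID must have the same length, a real script would cap the number of parallel transfers, and error handling is omitted entirely.

        FILE=bigfile.bin
        URL="https://myaccount.blob.core.windows.net/uploads/${FILE}?${SAS_TOKEN}"   # placeholder

        split -b 1M "$FILE" blk_                    # 1MB blocks, per the findings above
        ids=(); i=0
        for part in blk_*; do
            id=$(printf 'b%05d' "$i" | base64)      # fixed-length block IDs, base64-encoded
            ids+=("$id")
            curl -s -X PUT --data-binary @"$part" "${URL}&comp=block&blockid=${id}" &
            i=$((i+1))
        done
        wait                                        # all blocks upload in parallel

        { printf '<?xml version="1.0" encoding="utf-8"?><BlockList>'
          printf '<Latest>%s</Latest>' "${ids[@]}"
          printf '</BlockList>'; } > blocklist.xml
        curl -s -X PUT --data-binary @blocklist.xml "${URL}&comp=blocklist"          # commit
        rm -f blk_* blocklist.xml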

    Read the article
