Search Results

Search found 69503 results on 2781 pages for 'file listing'.

  • Read a local file

    - by user246114
    Hi, Is there no way for JavaScript hosted on a web server to read a file on a client's local machine? (This has obvious security risks.) I guess I'm wondering if there's any access granting a user can do, like dragging and dropping a file into the browser, or explicitly selecting a file from a popup, to get around this? I know Flash 10 allows reading a local file; I'm just wondering if there's any method to do this in JavaScript. Thanks

    Read the article

  • How to Retrieve a File's "Product Version" in VBScript

    - by Aaron Alton
    I have a VBScript that checks for the existence of a file in a directory on a remote machine. I am looking to retrieve the "Product Version" for said file (NOT the "File Version"), but I can't seem to figure out how to do that in VBScript. I'm currently using Scripting.FileSystemObject to check for the existence of the file. Thanks much.

    Read the article

  • Log rotation with automatic *.log file discovery

    - by Mikko Ohtamaa
    I am hosting several websites, each of which runs its own Python process and writes *.log output files, but the directory structure is not standardized. Example:

        -rw-r--r-- 1 plone plone 125M 2012-08-29 11:35 ./x/var/log/instance-Z2.log
        -rw-r--r-- 1 plone plone  19M 2012-08-29 00:07 ./zope2.9/y/log/event.log
        -rw-r--r-- 1 plone plone 188M 2012-08-13 00:09 ./zope2.9/y/log/Z2.log
        -rw-r--r-- 1 plone plone 137M 2010-11-16 09:41 ./zope2.9/y/log/event.log

    I'd like logrotate to autodiscover these log files and rotate them, as opposed to manually typing every log file into the logrotate conf. Do any existing tools offer this kind of log file discovery and rotation capability, without manually specifying each file? If not... should I just write a shell script which generates the logrotate conf?
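
    Logrotate itself accepts shell-style globs in the path list, which can cover this kind of layout without naming each file. A minimal sketch, assuming the sites live under a common prefix such as /srv (the glob patterns are placeholders and would need adjusting to the real tree):

        # /etc/logrotate.d/plone-sites -- hypothetical paths, adjust to your layout
        /srv/*/var/log/*.log /srv/*/log/*.log {
            weekly
            rotate 8
            compress
            missingok
            notifempty
            copytruncate
        }

    copytruncate is used here because the Python/Zope processes keep their log files open and would otherwise continue writing to the rotated file.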

    Read the article

  • Check if NSURL is Local File

    - by golfromeo
    This is a pretty simple question: how can I check whether an NSURL links to a local file? I know, RTFM, but I checked the documentation and I don't see any methods related to this. The only methods I did find were -isFileReferenceURL and -isFileURL, but I think these only check whether the URL directly links to a file. Note: I'm making an iPhone app, so by "local file" I mean a .html file stored in the project's resources. Thanks for any help in advance.

    Read the article

  • Are hash collisions with different file sizes just as likely as with the same file size?

    - by rwmnau
    I'm hashing a large number of files, and to avoid hash collisions, I'm also storing each file's original size - that way, even if there's a hash collision, it's extremely unlikely that the file sizes will also be identical. Is this sound (i.e. a collision is equally likely between files of any sizes), or do I need another piece of information (i.e. a collision is more likely between files of the same length)? Or, more generally: is every file just as likely to produce a particular hash, regardless of its original file size?

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and the string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command-line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better way to do this (compiling takes ages with these options activated)? Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
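
    A sketch of the objcopy step described in the update, with the exact target names offered as assumptions (they vary by toolchain; the question reports needing pe-i386 on Win32):

        # Turn the raw data file into a linkable object (targets are assumptions)
        objcopy -I binary -O pe-i386 -B i386 \
            inbuilt_training_data.bin inbuilt_training_data.obj

        # On an ELF/x86-64 toolchain the equivalent would look more like:
        # objcopy -I binary -O elf64-x86-64 -B i386:x86-64 \
        #     inbuilt_training_data.bin inbuilt_training_data.o

        # Inspect the generated symbols; this shows whether the leading
        # underscore (_binary_...) is present on your target or not
        nm inbuilt_training_data.obj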

    Read the article

  • Will unbinding a server from an Open Directory Master remove its own file shares

    - by scape
    According to this article: http://support.apple.com/kb/TS3180?viewlocale=en_US I am required to remove the LDAP binding of my second Mac OS X Lion server before I set it up as a replica server. I initially set the server up as a replica, or so I thought, and created file shares (they refer to the first server's ACLs) before I realized it was never promoted to a replica server. So as of now it's running and shares files with correct ACL permissions, but if the Master goes down all the file shares seize up. I want to set it up as a replica so this is not an issue; however, I don't want to lose the file shares and their permissions as I remove the binding and restart the server. Apparently I must remove the LDAP binding to the OD Master (also a Mac OS X Lion server) before setting it up as a replica.

    Read the article

  • Makefile fails to install file correctly, installing HPL

    - by zarose
    I started installing HPL a while ago, and had a related question. I've been following along with this guide from Intel. I figure this warrants a whole new one. When I try to make the archive, the output seems fine until the end, where it gives an error:

        make[2]: Entering directory `/hpl-2.0/src/auxil/intel64'
        Makefile:47: Make.inc: No such file or directory
        make[2]: *** No rule to make target `Make.inc'. Stop.
        make[2]: Leaving directory `/hpl-2.0/src/auxil/intel64'
        make[1]: *** [build_src] Error 2
        make[1]: Leaving directory `/hpl-2.0'
        make: *** [build] Error 2

    Going to the directory /hpl-2.0/src/auxil/intel64 shows a file, "Make.inc", but it's highlighted red, and the white text blinks. Is there a way to manually make that file? What do I need to do to get the makefile to do this for me?
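
    In ls --color output, a red, blinking entry usually marks a dangling symbolic link, so Make.inc most likely points at a top-level Make.<arch> file that has not been created yet. A hedged way to check and fix this, assuming the arch name intel64 implied by the directory names above:

        # Where does the symlink point? (expected: something like ../../../Make.intel64)
        ls -l /hpl-2.0/src/auxil/intel64/Make.inc

        # If that target is missing, create the top-level arch makefile from one of
        # the templates shipped in setup/ (or the one from the Intel guide),
        # edit its BLAS/MPI paths, and rebuild with the matching arch name.
        cd /hpl-2.0
        cp setup/Make.Linux_PII_CBLAS Make.intel64
        make arch=intel64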

    Read the article

  • Question about the Linux root file system

    - by smwikipedia
    I read the manual page of the "mount" command, and it reads as below: All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree. My questions are: Where is this "big tree" located? Suppose I have 2 disks; if I mount them onto some point in the "big tree", does Linux place some "special marks" in the mount points to indicate that these 2 "mount directories" are indeed separate disks?
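
    One way to see that the "big tree" is a structure the kernel maintains in memory (a mount table) rather than marks written into the directories themselves is to query that table directly; a small sketch, with /mnt/disk1 as a placeholder path:

        # The kernel's mount table: which device backs which part of the single tree
        cat /proc/mounts        # or: findmnt

        # Test whether a given directory is a mount point
        mountpoint /mnt/disk1 && echo "a separate filesystem is mounted here"

        # stat reports a different device ID on either side of a mount boundary
        stat -c '%n is on device %D' / /mnt/disk1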

    Read the article

  • Need to run a command against multiple lines in a file that start with ica-tcp

    - by Nick Parsells
    I want to run a command on each line of a file I have; however, it's a bit more complicated than I originally thought. The file contents typically look like this, although there are sometimes more connections:

        SESSIONNAME       USERNAME         ID  STATE   TYPE    DEVICE
        services                            0  Disc
        console                             1  Conn
                          t-rpal           48  Disc
        ica-tcp#0         bpofiretest      50  Active  wdica
        rdp-tcp#2         a-nparsells      51  Active  rdpwd
        ica-tcp                         65536  Listen
        rdp-tcp                         65537  Listen

    The command I want to run is reset session ica-tcp#0. I also want to run the same command on any additional connections starting with ica-tcp that the script finds in the file. How can I write a script like that in PowerShell? Thanks!

    Read the article

  • opening offline sync files from a .CAB file

    - by Rob
    OK, I have downloaded from Windows Live Spaces (don't know if this is useful, but it might be) a .CAB file containing an Index.XML file and package.cab, plus package01.cab through package12.cab. The Index.XML simply has the names of all the subsequent package cab files and their offsets. The first package.cab has a single 26 MB XML file which appears to be an OfflineSyncFile definition, which I am guessing is the metadata for all the other packageXX.cab files. Now the question I have is how I should go about extracting these things and piecing it all back together again. I have tried WinRAR, which extracts all 800 MB for me into unnamed files and randomly named directories. I have also tried the standard extract in Windows Explorer, with much the same results.

    Read the article

  • non-interactively upload file to sftp server, using password

    - by matt
    Hello guys, I know this is not the recommended way to do this, but I do not have another choice: I've got to set up a cron job that will regularly upload a file to an external SFTP server (no FTP available, and I only have a username/password for it, no key). Still, I need to set up a cron job that will regularly connect to that SFTP server and upload a file. The following will not work, because sftp asks for the password before STDIN is evaluated:

        sftp <<EOF
        put filename
        exit
        EOF

    What can I do to pass the password to sftp? Again: I am aware of the potential security risk, but I really do not have any choice here, and the server from which the file is uploaded is protected rather well.
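
    A few hedged options for scripting the password, assuming the relevant client is installed (hostnames and paths below are placeholders); reading the password from a root-only file is preferable to putting it on the command line, where it shows up in ps:

        # Option 1: sshpass feeds the password to the stock sftp client
        sshpass -f /etc/sftp-pass sftp -oBatchMode=no user@sftp.example.com <<'EOF'
        put /path/to/file
        bye
        EOF

        # Option 2: lftp speaks the sftp protocol and accepts credentials directly
        lftp -u user,"$(cat /etc/sftp-pass)" sftp://sftp.example.com -e "put /path/to/file; bye"

        # Option 3: curl built with libssh2 can upload over sftp in one line
        curl -T /path/to/file --user "user:$(cat /etc/sftp-pass)" sftp://sftp.example.com/remote/dir/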

    Read the article

  • PHP fopen fails - does not have permission to open file in write mode

    - by George
    I have an Apache 2.17 server running on Fedora 13. I want to be able to create a file in a directory, but I cannot do that. Whenever I try to open a file with PHP for writing, fopen(..., 'w'), it tells me that I don't have permission to do that. So I checked the httpd.conf file in /etc/httpd/conf/. It says user apache, group apache. So I changed ownership (chown -R apache:apache .*) of my whole /www directory to apache:apache. I also ran chmod -R 777 *. Apart from knowing how terribly dangerous this is, it actually still gives me the same error, even though I allow public write!
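
    On Fedora, a write that still fails after ownership and mode changes is frequently being blocked by SELinux rather than by classic permissions. A hedged check, assuming the default targeted policy (the uploads path is a placeholder):

        getenforce                       # "Enforcing" means SELinux is active
        ls -Zd /www/uploads              # show the SELinux context on the target directory

        # Label the directory tree as writable content for the httpd domain;
        # httpd_sys_rw_content_t is the stock writable type in the targeted policy
        chcon -R -t httpd_sys_rw_content_t /www/uploads

        # If it still fails, look for AVC denials in the audit log
        ausearch -m avc -ts recent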

    Read the article

  • Finding the file that is on a bad block on an HFS+ volume (debugfs for HFS+)

    - by Blair Zajac
    I have a drive in our iMac that has bad blocks; booting from an Ubuntu 11.10 live CD and running ddrescue -f /dev/sda /dev/null finds them. I'd like to get the drive to remap them by writing to the blocks, say using hdparm --write-sector, but I don't want to do this without knowing what's in those blocks and finding the file that owns them, so I can restore the file from another source. I found fileXray but don't feel like spending $79 to map a block to a file, and hfsdebug has been taken offline. Are there suggestions for a tool or technique to use? I looked at all the Ubuntu HFS+ packages to see if they could provide this info, but nothing jumped out at me. BTW, I used Disk Utility to erase the empty space, but it didn't get any of the bad blocks remapped, according to smartctl -A.

    Read the article

  • Bash script to replace spaces in file names

    - by armandino
    Can anyone recommend a safe solution to recursively replace spaces with underscores in file and directory names, starting from a given root directory? For example,

        $ tree .
        |-- a dir
        |   `-- file with spaces.txt
        `-- b dir
            |-- another file with spaces.txt
            `-- yet another file with spaces.pdf

    becomes

        $ tree .
        |-- a_dir
        |   `-- file_with_spaces.txt
        `-- b_dir
            |-- another_file_with_spaces.txt
            `-- yet_another_file_with_spaces.pdf
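
    A minimal sketch of one common approach, using find -depth so that files are renamed before their parent directories; it assumes GNU find and bash and is run from the root directory in question:

        #!/bin/bash
        # Rename files and directories containing spaces, deepest entries first.
        find . -depth -name '* *' -print0 |
        while IFS= read -r -d '' path; do
            dir=$(dirname "$path")
            base=$(basename "$path")
            mv -- "$path" "$dir/${base// /_}"
        done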

    Read the article

  • File creation time on Windows vs Linux

    - by Sergei
    We have the following setup:

        mountserver - Debian Linux
        fileserver1 - Windows 2008 R2 storage server
        fileserver2 - Celerra NS20 exporting a CIFS share
        workstation - Windows 7 with a mapped drive to the share on fileserver2

    What we are doing: mounted the share from fileserver1 on mountserver, e.g. /shared/fileserver1; mounted the share from fileserver2 on mountserver, e.g. /shared/fileserver2; ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime as the parameter to sync data not older than X; after a while, tried to delete data older than Y on /shared/fileserver2. From what I see, the Linux stat command on mountserver returns the following when querying a file on /shared/fileserver2: [screenshot]. At the same time, when I open the properties for the same file using the mapped drive connected to fileserver2, I see the following: [screenshot]. As you can see, the Created date of 12 August shown in Windows Explorer is nowhere to be seen in the stat output. Am I missing something here?
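
    For what it's worth, the mismatch is consistent with the classic Linux stat interface exposing only access, modify, and change times; the NTFS/CIFS creation timestamp that Explorer labels "Created" has no counterpart there (newer kernels expose it via statx, but many tools still report it as empty). A small check, with the path as a placeholder:

        stat /shared/fileserver2/somefile           # Access/Modify/Change, plus Birth (often just "-")
        stat -c '%w' /shared/fileserver2/somefile   # GNU stat: birth time only; "-" means not available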

    Read the article

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions; however, nothing starts up except fluxbox. It doesn't matter what app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such; nothing works. It makes no difference whether the file is executable or not.
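
    One thing worth checking, offered as an assumption since the session setup isn't shown: ~/.fluxbox/startup is only run when the session is launched through the startfluxbox wrapper; starting the fluxbox binary directly (for example with exec fluxbox in ~/.xinitrc, or a display-manager session that runs the bare binary) skips the startup file entirely. A minimal ~/.xinitrc for the wrapper case:

        #!/bin/sh
        # ~/.xinitrc -- startfluxbox sources ~/.fluxbox/startup and then starts fluxbox
        exec startfluxbox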

    Read the article

  • Access is denied while moving a file between volumes

    - by logeeks
    I have a portable 500 GB HDD plugged into my Dell XPS system. The system has Windows 7 Professional Edition. The problem is that when I try to open a file (a Visual Studio .sln file) it says that access is denied. I cannot copy this file to a different location (within my local HDD); it says that I need permission for the task to complete. I've checked and confirmed the following things: 1) I've logged into an admin account before attempting these operations; 2) my admin account has 'Full Control'; 3) I have full control over the portable HDD; 4) I changed the 'UAC' settings to 'Never notify'. Please help.

    Read the article

  • Execute an Application on the Server Using JavaScript

    - by Nathan Campos
    I have an application on my server called leaf.exe that needs two arguments to run: inputfile and outputfile, as in this example: pnote.exe input.pnt output.txt. They are all in the same directory as my home page file (the executable and the input file). But I need JavaScript to be able to run the application like that, so I want to know how I could do this.

    Read the article

  • IE8 rendering of local files is wrong

    - by Eric
    It appears that IE8 is not rendering a local file properly. Consider this simple webpage: http://sayang.free.fr/ie8render.html (HTML code below), extracted from a w3c tutorial on opacity. Save it locally and display it again: the local file has no opacity! That's very annoying, especially when one wants to design complex pages on prototypes placed in local files. Do you have a solution to that?

        <html>
        <head>
        <title>IE8 Local File</title>
        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
        <meta http-equiv="pragma" content="no-cache" />
        <meta http-equiv="cache-control" content="no-cache" />
        <meta http-equiv="expires" content="-1" />
        <style type="text/css">
        div.background {
          width: 500px;
          height: 250px;
          background: url(http://www.w3schools.com/css/klematis.jpg) repeat;
          border: 2px solid black;
        }
        div.transbox {
          width: 400px;
          height: 180px;
          margin: 30px 50px;
          background-color: #ffffff;
          border: 1px solid black;
          /* for IE */
          filter: alpha(opacity=60);
          /* CSS3 standard */
          opacity: 0.6;
        }
        div.transbox p {
          margin: 30px 40px;
          font-weight: bold;
          color: #000000;
        }
        </style>
        </head>
        <body>
        <h2>Save this file locally and open it to see the difference</h2>
        <div class="background">
          <div class="transbox">
            <p>This is some text that is placed in the transparent box. This is some text that is placed in the transparent box. This is some text that is placed in the transparent box. This is some text that is placed in the transparent box. This is some text that is placed in the transparent box.</p>
          </div>
        </div>
        </body>
        </html>

    Read the article

  • Generate .h and .cpp from .ui file

    - by Lpcnew
    Hey guys, I have the file about.ui. As you know, inside Qt Designer I can create the ui.h file... But how can I make "about.h" and "about.cpp" from my .ui file? Do I have to create a .moc file too? How can I compile this afterwards to see whether it was all done correctly? Thanks from Brazil! :-) *I'm using Qt 3.2
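
    A sketch of the Qt 3.x command-line steps, written from memory of the Qt 3 tooling, so the exact option spellings should be checked against uic -help on your installation (main.cpp and the library name are assumptions):

        # Generate the declaration and the implementation from the .ui file
        uic about.ui -o about.h
        uic about.ui -impl about.h -o about.cpp    # some uic versions spell this -i about.h

        # Run moc on the generated header so the Q_OBJECT machinery compiles
        moc about.h -o moc_about.cpp

        # Quick test build (qt-mt is the usual threaded Qt 3 library name)
        g++ main.cpp about.cpp moc_about.cpp -I"$QTDIR/include" -L"$QTDIR/lib" -lqt-mt -o about_test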

    Read the article

  • pure-ftpd: one readonly/non-deletable file in home directory

    - by Bram Schoenmakers
    Is there a way to have a file in the user's FTP home directory without the ability to modify/remove it from that directory over FTP? So the user has write permissions on his own home folder, and thus the ability to remove files; an exception should be made for a single file, which has the same filename and contents for each account. The solution I'm thinking of right now is to run a periodic script that checks for the presence of that file and, if it is missing, puts it back. But I wonder whether there's a better solution than this.
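
    One hedged alternative, if the home directories live on an ext2/3/4 filesystem: the immutable attribute blocks writes, renames, and deletion of the file even though the user owns the surrounding directory. It has to be set as root, and the path below is a placeholder:

        # Make the file immutable so the FTP user cannot modify, rename, or delete it
        chattr +i /home/ftpuser/welcome.txt

        # Verify, or later undo, the attribute
        lsattr /home/ftpuser/welcome.txt
        chattr -i /home/ftpuser/welcome.txt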

    Read the article

  • VMWare Server lck file keeps coming back

    - by muncherelli
    I am running VMware Server 2.0 on a Debian Lenny system as the host OS. I am getting this error when I try to start a virtual machine:

        Cannot open the disk '/var/lib/vmware/Virtual Machines//.vmdk' or one of
        the snapshot disks it depends on.
        Reason: Failed to lock the file.

    So I looked around on the web and found that I need to delete the .lck folder and file in order to get rid of this error. This seems to happen any time I reboot my Debian server. The virtual machines sometimes do not recover, and this lck file is causing problems. Should I create a cron script that does an rm *.lck on each of my machines on reboot? Looking for any direction on how to resolve this. It seems that when I do a "reboot" command it is maybe not gracefully shutting down the VMware containers, so the lock files are still intact?
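
    A sketch of the cleanup the question proposes, intended to run at boot before the VMware services start; it assumes no virtual machine is running at that point, since removing lock files from under a live VM can corrupt its disk (gracefully shutting the VMs down before the reboot is the cleaner fix):

        #!/bin/sh
        # Remove stale VMware lock directories/files left over from an unclean shutdown.
        # Only safe while no virtual machine is running.
        find "/var/lib/vmware/Virtual Machines" -name '*.lck' -prune -exec rm -rf {} +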

    Read the article
