Search Results

Search found 68617 results on 2745 pages for 'file deletion'.


  • problem with the deletion of uploaded images

    - by tibin
    I got this error when I tried to delete an image which I had uploaded: "Forbidden You don't have permission to access /act-photo-delete.php on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at www.friendsjoint.com Port 80". What could be the reason for this? I tried changing the chmod permissions on the files, but nothing worked. Does anyone have an idea? Please help me.


  • Seeking to a line in a file in g++

    - by Phenom
    Is there a way that I can seek to a certain line in a file to read or write data? Let's say I want to write some data starting on the 10th line in a text file. There might be some data already in the first few lines, or the file could even be empty. Is there a way I can seek directly to the line I want without having to worry about what's already in the file?
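
    C++ streams seek by byte offset, not by line, so reaching line 10 means scanning for newlines from the start. A minimal sketch (the file name is hypothetical); note that seekp followed by a write overwrites bytes in place, it does not insert a new line:

        #include <fstream>
        #include <string>

        // Returns the byte offset at which line n (1-based) starts,
        // or -1 if the file has fewer than n lines.
        std::streamoff offset_of_line(std::istream& in, int n) {
            in.clear();
            in.seekg(0);
            std::string line;
            for (int i = 1; i < n; ++i)
                if (!std::getline(in, line)) return -1;
            in.clear();               // clear eofbit if line n-1 ended at EOF
            return in.tellg();
        }

        int main() {
            std::fstream f("data.txt", std::ios::in | std::ios::out);
            std::streamoff pos = offset_of_line(f, 10);
            if (pos >= 0) {
                f.seekp(pos);
                f << "new data for line 10\n";  // overwrites, does not insert
            }
        }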


  • How can I run a batch file silently?

    - by Mike Pateras
    I have a batch file with some commands that I need to run with my installer, but I'd rather a console not appear (in Windows). I'm executing the batch file from a WiX installer, via a custom action. I tried adding an @ECHO OFF to the top of the file, but that didn't seem to do anything. Is there a way that I can run this batch file silently?
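
    For what it's worth, @ECHO OFF only suppresses command echoing inside a console that already exists; it cannot stop cmd.exe from creating a window. One common workaround, sketched here with a hypothetical path, is to launch the batch file through a WScript.Shell wrapper with a hidden window style and point the custom action at the .vbs file instead:

        ' silent.vbs: run the batch file with no visible console window
        ' 0 = hidden window style, True = wait for the batch file to finish
        CreateObject("WScript.Shell").Run "cmd /c ""C:\path\to\install.bat""", 0, True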


  • Check directory for files, retrieve first file

    - by Lowgain
    I'm writing a small Ruby daemon that I'm hoping will do the following:

    - Check if a specific directory has files (in this case, .yml files)
    - If so, take the first file (numerically sorted, preferably) and parse it into a hash
    - Do a 'yield' with this hash as the argument

    What I have right now is like:

        loop do
          get_next_in_queue { |s| THINGS }
        end

        def get_next_in_queue
          queue_dir = Dir[File.dirname(__FILE__)+'/../queue']
          info = YAML::load_file(queue_dir[0]) # not sure if this works or not
          yield info
        end

    I'd like to make the yield conditional if possible, so it only happens if a file is actually found. Thanks!
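
    A sketch of one way to get the conditional yield, assuming the ../queue layout from the snippet above: Dir[] needs a file glob, since '../queue' alone matches the directory itself rather than the .yml files inside it:

        require 'yaml'

        def get_next_in_queue
          files = Dir[File.join(File.dirname(__FILE__), '..', 'queue', '*.yml')]
          # numeric sort, assuming numerically named files like 12.yml
          file = files.min_by { |f| File.basename(f, '.yml').to_i }
          yield YAML.load_file(file) if file  # only yields when a file exists
        end

        loop do
          get_next_in_queue { |info| p info } # placeholder for the real work
          sleep 1
        end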


  • file sharing over external and internal networks

    - by pradvk
    Dear friends, please tell me whether it is possible for someone on the external network to see the network folder shares of an internal LAN on DHCP (192.168.x.x) through Comp-A, a machine on an external network on DHCP (123.123.x.x). The two networks are on different subnets, but Comp-A has access to both networks and can see all the shared drives of both the internal and external networks. Using a firewall is not an option because file sharing is required everywhere. Care is taken against viruses/trojans, and remote desktop etc. is disabled on Comp-A. Please let me know. Thanks and regards, pk


  • Java: Check if file is already open

    - by hello_world_infinity
    I need to write a custom batch file renamer. I've got the bulk of it done, except I can't figure out how to check if a file is already open. I'm just using the java.io.File class, and there is a canWrite() method, but that doesn't seem to test whether the file is in use by another program. Any ideas on how I can make this work?
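
    Plain java.io has no portable "is this file open elsewhere?" test. A sketch of one common heuristic: try to take an exclusive lock, which fails on Windows while another program has the file open (on Unix, file locks are advisory, so this is not a guarantee there):

        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        public class FileInUse {
            static boolean isInUse(File f) {
                try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
                     FileChannel ch = raf.getChannel()) {
                    FileLock lock = ch.tryLock();
                    if (lock == null) return true;  // another process holds a lock
                    lock.release();
                    return false;
                } catch (IOException e) {
                    return true;                    // could not even open it for writing
                }
            }

            public static void main(String[] args) {
                System.out.println(isInUse(new File("test.txt")));
            }
        }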


  • How can I reset windows 7 file permissions?

    - by ssb
    I looked at this post and it seemed to be close to what I want, but my case might be a little worse: "How can I reset my Windows 7 file permissions to a rational state?" Basically, a while back I (very stupidly) changed the permissions on all sorts of system folders and eventually rendered my computer virtually unusable. I managed to hack administrator privileges back onto key folders to get it working, but in doing so I only modified the permissions further away from their natural state. I'm looking at this icacls stuff, but ultimately I need to reset EVERYTHING back to what it was in The Beginning, before I messed with it, from the C: directory all the way down. Right now Application Data is what's giving me problems, and I can't get it to work no matter how much I fiddle with those specific permissions. I will be forever grateful for help on how to do this without having to reformat.
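
    For reference, icacls can reset ACLs on a whole tree back to the inherited defaults. A sketch (the path is hypothetical; run it from an elevated prompt, and try it on the problem folder before anything as drastic as C:\):

        :: /reset replaces the ACLs with inherited defaults, /T recurses
        :: into subfolders, /C continues past errors
        icacls "C:\Users\ssb\AppData" /reset /T /C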


  • Setup a local file server for two networks

    - by rzlines
    Hi, I would like to set up a local Linux file server (using CentOS and Samba) that can be accessed by two independent local networks in the same house. I have two networks on the same floor which have Windows 7 computers, but the networks are split because we have two internet connections. How do I go about this? Additional info: currently both networks use Dropbox to send files to each other, but that happens via the internet and hence it's slow. I would like to achieve the same locally to increase the speed.
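
    A minimal smb.conf sketch for a server with one interface on each network (the subnets, group, and share path are hypothetical); each Windows 7 machine then reaches the share via the server's address on its own network:

        [global]
           workgroup = WORKGROUP
           security = user
           interfaces = 192.168.1.0/24 192.168.2.0/24
           bind interfaces only = yes

        [shared]
           path = /srv/share
           read only = no
           valid users = @users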


  • Solution for file store needing large number of simultaneous connections

    - by Tennyson H
    So I'm fairly new to large-scale architectures. We're currently using Linode instances for our project, but we're brainstorming about scaling. We need a file store system that can deliver ~50MB folders (user data) to our computing instances in a reasonable amount of time (<20 sec), scale to 10000+ total users, and handle perhaps 100+ simultaneous transfers. We are also unsure whether to network-mount (sshfs/nfs) or just do a full transfer from store to instance at the beginning and an rsync from instance back to store at the end. I've experimented with SSHFS between our little Linode instances, but it seems to be bottlenecked at 15mb/s total bandwidth, which wouldn't do under the stress of 10+ transfers, let alone scale very large. I also tried to investigate NFS but couldn't get it working, and I have little hope that it'll do within our Linode network. Are there tools on other cloud providers that match our needs? Should we be mounting, or should we be transferring? Thanks very much!
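
    For the transfer variant, the pattern would look something like this sketch (host and paths are hypothetical; -a preserves metadata, -z compresses, trailing slashes sync directory contents, and the push back only sends deltas):

        # at job start: pull the user's folder from the store to the instance
        rsync -az store:/data/users/1234/ /scratch/1234/
        # ... compute ...
        # at job end: push the changes back to the store
        rsync -az /scratch/1234/ store:/data/users/1234/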


  • Naming a file downloaded from url in iPhone

    - by hgpc
    I would like to save a file downloaded from the internet on the iPhone. Can I use the URL as the file name? If not, what transformation should I apply to the URL to obtain a valid file name? I need to find the local copy of the file later using its URL.
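
    A URL is not directly usable as a file name because '/' (and the legacy ':') are path separators. One sketch, assuming the lookup always derives the name from the URL with the same transformation; note it can collide if URLs themselves contain '_', and hashing the URL (e.g. with MD5) is the collision-safe alternative:

        #import <Foundation/Foundation.h>

        // hypothetical helper: flatten the URL into a legal file name
        NSString *fileNameForURL(NSString *urlString) {
            NSString *name = [urlString stringByReplacingOccurrencesOfString:@"/"
                                                                  withString:@"_"];
            return [name stringByReplacingOccurrencesOfString:@":" withString:@"_"];
        }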


  • mysql codeigniter active record m:m deletion

    - by sea_1987
    Hi there, I have two tables with an m:m relationship. What I am wanting is that when I delete a row from one of the tables, the row in the joining table is deleted as well. My SQL is as follows.

    Table 1:

        CREATE TABLE IF NOT EXISTS `job_feed` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `body` text NOT NULL,
          `date_posted` int(10) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;

    Table 2:

        CREATE TABLE IF NOT EXISTS `job_feed_has_employer_details` (
          `job_feed_id` int(11) NOT NULL,
          `employer_details_id` int(11) NOT NULL,
          PRIMARY KEY (`job_feed_id`,`employer_details_id`),
          KEY `fk_job_feed_has_employer_details_job_feed1` (`job_feed_id`),
          KEY `fk_job_feed_has_employer_details_employer_details1` (`employer_details_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    So what I am wanting is: if a row with an id of 1 is deleted from table 1, I want the row in table 2 that has that id as part of the relationship to be deleted also. I want to do this in keeping with CodeIgniter's Active Record class. I currently have this:

        public function deleteJobFeed($feed_id)
        {
            $this->db->where('id', $feed_id)
                     ->delete('job_feed');
            return $feed_id;
        }
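
    A sketch of the usual fix: the join table only declares KEY indexes, not real foreign keys, so InnoDB never learns about the relationship. With a constraint in place, deleting the job_feed row removes the join rows automatically (alternatively, stay in Active Record by adding $this->db->where('job_feed_id', $feed_id)->delete('job_feed_has_employer_details'); before the existing delete):

        ALTER TABLE job_feed_has_employer_details
          ADD CONSTRAINT fk_feed_cascade
            FOREIGN KEY (job_feed_id) REFERENCES job_feed (id)
            ON DELETE CASCADE;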


  • Oracle checking existence before deletion in a trigger

    I have analyzed a Hibernate-generated Oracle database and discovered that a delete of a row from a single table will spawn the firing of 1200+ triggers in order to delete the related rows in child tables. The triggers are all auto-generated the same way: an automatic delete of a child row without checking for existence first. As it is impossible to predict which child tables will actually have related rows, I think a viable solution to preventing the firing of the cascaded delete down a deeply branched, completely empty limb would be to check for the existence of a related row before attempting to delete. In other DBMSs, I could simply state "if exists ..." before deleting. Is there a comparable way to do this in Oracle?
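
    Oracle has no "if exists" clause, but the same guard can be written in PL/SQL by probing for one child row first. A sketch with a hypothetical child table, as it might appear inside a delete trigger:

        DECLARE
          l_found INTEGER;
        BEGIN
          SELECT COUNT(*) INTO l_found
            FROM child_table            -- hypothetical child table
           WHERE parent_id = :old.id
             AND ROWNUM = 1;            -- stop scanning at the first match

          IF l_found > 0 THEN
            DELETE FROM child_table WHERE parent_id = :old.id;
          END IF;
        END;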


  • C++ write to front of file

    - by user231536
    I need to open a file as an ofstream and write to the front of the file, while preserving the remaining contents of the file, which will be "moved" back. Similar to "prepending" to a file. Is this possible using the STL or Boost?
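
    Neither the standard library nor Boost can insert bytes at the front of a file in place; a file can only be overwritten or extended at the end. The usual sketch is to write the new content to a temporary file, append the old contents behind it, and swap the files (the temporary file name here is hypothetical):

        #include <cstdio>
        #include <fstream>
        #include <string>

        void prepend(const char* path, const std::string& text) {
            std::ifstream in(path, std::ios::binary);
            std::ofstream out("prepend.tmp", std::ios::binary);
            out << text;            // new content goes first
            out << in.rdbuf();      // then the original contents
            in.close();
            out.close();
            std::remove(path);
            std::rename("prepend.tmp", path);
        }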


  • Settings what-opens-what once and for all (Backing up File Associations)

    - by ldigas
    Every time I switch machines (as in, get a new one, or reinstall an OS, or something like that), my precious file associations get lost, and the next six months pass slowly until I have set them all up right again. Is there a program that allows me to:

    - Set all the extensions I would like to open with, let's say, Vim, without setting each one of them individually. Something of the kind: "Vim opens: ... list of extensions ..."

    and/or

    - Back up my current settings, so that when I copy them to a new machine I just modify the paths where I put the applications in question, and it does the rest (again, associates each program with all the extensions it opened before).
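
    As a stopgap, the per-user extension choices live in the registry and can be exported and re-imported by hand; a sketch (Windows protects some UserChoice subkeys, so treat this as a starting point rather than a complete backup):

        :: export the per-user extension choices to a .reg file
        reg export "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts" fileexts.reg
        :: on the new machine
        reg import fileexts.reg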


  • Error while executing (.exe ) from windows command script (.cmd) file

    - by mahesh
    I have the following in the .cmd file, where PathList is a console application with an .exe extension:

        cd D:\Sample
        D:
        PathList 2> file.txt

    This works fine if the file is saved with a .bat extension, but if I save it with a .cmd extension it throws an error saying: 'PathList' is not recognized as an internal or external command, operable program or batch file. Please can I know what the issue is with saving it with a .cmd extension?
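
    One way to sidestep the difference, sketched here: cd /d changes drive and directory in a single step, and calling the program by an explicit path removes any reliance on how the interpreter resolves bare names against the working directory:

        cd /d D:\Sample
        .\PathList.exe 2> file.txt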


  • safe dereferencing and deletion

    - by serejko
    Hi, I'm relatively new to C++ and OOP in general, and I'm currently trying to make a class that allows dereferencing and deleting a dead or invalid pointer without any care about undefined behavior or a program fault as a result. I want to ask you: is this a good idea, and is there something similar already implemented by someone else? Or maybe I'm doing something completely wrong? I've just started making it, and here is the code I currently have:

        template<class T>
        class SafeDeref {
        public:
            T& operator *() {
                hash_set<T*>::iterator it = theStore.find(reinterpret_cast<T*>(ptr));
                if (it != theStore.end())
                    return *this;
                return theDefaultObject;
            }

            T* operator ->() {
                hash_set<T*>::iterator it = theStore.find(reinterpret_cast<T*>(ptr));
                if (it != theStore.end())
                    return this;
                return &theDefaultObject;
            }

            void* operator new(size_t size) {
                void* ptr = malloc(size * sizeof(T));
                if (ptr != 0)
                    theStore.insert(reinterpret_cast<T*>(ptr));
                return ptr;
            }

            void operator delete(void* ptr) {
                hash_set<T*>::iterator it = theStore.find(reinterpret_cast<T*>(ptr));
                if (it != theStore.end()) {
                    theStore.erase(it);
                    free(ptr);
                }
            }

        protected:
            static bool isInStore(T* ptr) {
                return theStore.find(ptr) != theStore.end();
            }

        private:
            static T theDefaultObject;
            static hash_set<T*> theStore;
        };

    The idea is that each class with the safe dereference should be inherited from it like this:

        class Foo : public SafeDeref<Foo> {
            void doSomething();
        };

    So... any advice? Thanks in advance.

    P.S. If you're wondering why I need this: I'm creating a set of native functions for some scripting environment, and all of them use pointers to internally allocated objects as handles to them, and they're able to delete them as well (input data can be wrong), so this is kind of a protection from damaging the host application's memory. And I'm really sorry for my bad English.
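
    For comparison, the standard library models "a pointer that may have died" as std::weak_ptr: lock() hands out an owning pointer only while the object is still alive, so a dead handle can never be dereferenced. A minimal sketch:

        #include <iostream>
        #include <memory>

        int main() {
            std::weak_ptr<int> handle;
            {
                auto obj = std::make_shared<int>(42);
                handle = obj;
                if (auto p = handle.lock()) std::cout << *p << "\n";  // prints 42
            }   // obj destroyed here
            if (auto p = handle.lock()) std::cout << *p << "\n";
            else std::cout << "object is gone\n";                     // this branch runs
        }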


  • MySQL log files deletion

    - by aneez
    I have a master and slave database running on different nodes. The master DB is subjected to a huge number of inserts/updates. The master DB size is close to 6 GB, while the log files now occupy more than 120 GB. I am running out of disk space and need to get rid of the log files. Will deleting the log files affect the slave DB in any way? Presently, the slave is just a couple of seconds behind the master. Is there someplace where I can see what steps I need to follow to delete those files? E.g.:

    1) Shut down the slave
    2) Shut down the master
    3) Delete the log files
    4) Start the master
    5) Start the slave

    Do I need to inform the slave that the log files have been deleted? If yes, what is the way to do it? Any help would be appreciated. Thanks
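
    If these are the master's binary logs, MySQL can trim them itself without any shutdown; a sketch (the log file name is hypothetical; check which log the slave is still reading before purging anything):

        -- on the slave: note the Master_Log_File it is currently reading
        SHOW SLAVE STATUS\G

        -- on the master: purge everything older than that file...
        PURGE BINARY LOGS TO 'mysql-bin.000120';

        -- ...or everything older than a given age
        PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;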


  • Nautilus file share for multiple users is not working. Only owner gets access.

    - by Niklas
    I have always had trouble setting up Samba shares with Ubuntu. In the past I've tried getting it to work by configuring /etc/samba/smb.conf but never achieved what I wanted. Last time I managed to get it working by making a share with Nautilus' built-in file sharing (which utilises Samba). Now when I try to do it again it doesn't work. (Running Ubuntu 10.10 Desktop x64.) What I'm trying to achieve is a share which is available to multiple users (those who are in the same group) and not just the owner (who is also included in the group). As it is now, I can connect only as the owner; the others get an error when they try to connect from Windows 7. All the users are within the same group and the folder permissions are 770. The files and folders have the correct group settings. I think there are no restrictions in the User Settings for the other users blocking them, and I marked "make available to other users (or whatever it says)" in the file sharing dialog. What can I do?
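
    Nautilus sharing is built on Samba "usershares", so what it registered can be inspected and widened from the command line; a sketch with hypothetical names (the share's own ACL must allow the other users in addition to the filesystem's 770 permissions):

        # show the shares Nautilus created and their ACLs
        net usershare info --long
        # re-add the share with full access on the Samba side; the 770
        # filesystem permissions still restrict access to the group
        net usershare add shared /home/niklas/shared "" Everyone:F guest_ok=n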


  • What is the best practice for reading a large number of custom settings from a text file?

    - by jawilmont
    So I have been looking through some code I wrote a few years ago for an economic simulation program. Each simulation has a large number of settings that can be saved to a file and later loaded back into the program to re-run the same/similar simulation. Some of the settings are optional or depend on what is being simulated. The code to read back the parameters is basically one very large switch statement (with a few nested switch statements). I was wondering if there is a better way to handle this situation. One line of the settings file might look like this:

        #RA:1,MT:DiscriminatoryPriceKDoubleAuction,OF:Demo Output.csv,QM:100,NT:5000,KP:0.5 //continues...

    And some of the code that would read that line:

        switch( Character.toUpperCase( s.charAt(0) ) ) {
            case 'R': randSeed = Integer.valueOf( s.substring(3).trim() );
                      break;
            case 'M': marketType = s.substring(3).trim();
                      System.err.println("MarketType: " + marketType);
                      break;
            case 'O': outputFileName = s.substring(3).trim();
                      break;
            case 'Q': quantityOfMarkets = Integer.valueOf( s.substring(3).trim() );
                      break;
            case 'N': maxTradesPerRound = Integer.valueOf( s.substring(3).trim() );
                      break;
            case 'K': kParameter = Float.valueOf( s.substring(3).trim() );
                      break;
            // continues...
        }
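
    One common alternative is a map from the two-letter key to a setter, so each new setting is a single table entry rather than a new case; a sketch reusing the field names visible above (Java 8+):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        public class Settings {
            int randSeed;
            String marketType;
            float kParameter;

            private final Map<String, Consumer<String>> parsers = new HashMap<>();
            {
                parsers.put("RA", v -> randSeed = Integer.parseInt(v));
                parsers.put("MT", v -> marketType = v);
                parsers.put("KP", v -> kParameter = Float.parseFloat(v));
                // one entry per setting; unknown keys can be logged or skipped
            }

            void parseLine(String line) {
                // assumes the line starts with '#' as in the sample above
                for (String field : line.substring(1).split(",")) {
                    String[] kv = field.split(":", 2);
                    Consumer<String> p = parsers.get(kv[0].trim());
                    if (kv.length == 2 && p != null) p.accept(kv[1].trim());
                }
            }
        }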


  • Delphi7 - How can i copy a file that is being written to

    - by Simon
    I have an application that logs information to a daily text file every second on a master PC. A slave PC on the network using the same application would like to copy this text file to its local drive. I can see there are going to be file access issues. These files should be no larger than 30-40MB each, and the network will be 100Mb Ethernet. I can see there is potential for the copying process to take longer than 1 second, meaning the logging PC will need to open the file for writing while it is being read.

    What is the best method for the file writing (logging) and file copying procedures? I know there is the standard Windows CopyFile() procedure, however this has given me file access problems. There is also TFileStream using the fmShareDenyNone flag, but this also very occasionally gives me an access problem too (like 1 per week). What is the best way of accomplishing this task? My current file logging:

        procedure FSWriteline(Filename, Header, s: String);
        var
          LogFile: TFileStream;
          line: String;
        begin
          if not FileExists(Filename) then
          begin
            LogFile := TFileStream.Create(FileName, fmCreate or fmShareDenyNone);
            try
              LogFile.Seek(0, soFromEnd);
              line := Header + #13#10;
              LogFile.Write(line[1], Length(line));
              line := s + #13#10;
              LogFile.Write(line[1], Length(line));
            finally
              LogFile.Free;
            end;
          end
          else
          begin
            line := s + #13#10;
            LogFile := TFileStream.Create(Filename, fmOpenWrite or fmShareDenyNone);
            try
              LogFile.Seek(0, soFromEnd);
              LogFile.Write(line[1], Length(line));
            finally
              LogFile.Free;
            end;
          end;
        end;

    My file copy procedure:

        procedure DoCopy(InFile, OutFile: String);
        begin
          ForceDirectories(ExtractFilePath(OutFile)); // ensure folder exists
          if FileAge(InFile) = FileAge(OutFile) then
            Exit; // they have the same modified time
          try
            { open existing destination }
            fo := TFileStream.Create(OutFile, fmOpenReadWrite or fmShareDenyNone);
            fo.Position := 0;
          except
            { otherwise create destination }
            fo := TFileStream.Create(OutFile, fmCreate or fmShareDenyNone);
          end;
          try
            { open source }
            fi := TFileStream.Create(InFile, fmOpenRead or fmShareDenyNone);
            try
              cnt := 0;
              fi.Position := cnt;
              max := fi.Size;
              { start copying }
              repeat
                dod := BLOCKSIZE; // block size
                if cnt + dod > max then
                  dod := max - cnt;
                if dod > 0 then
                  did := fo.CopyFrom(fi, dod);
                cnt := cnt + did;
                Percent := Round(cnt / max * 100);
              until (dod = 0);
            finally
              fi.Free;
            end;
          finally
            fo.Free;
          end;
        end;
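
    One hedge against the weekly failure, sketched below: keep fmShareDenyNone on both sides, but wrap the opens in a short retry loop so the rare moment when the other side holds the file exclusively (for example, during creation) is ridden out rather than raised immediately (assumes uses SysUtils, Classes, Windows):

        function OpenWithRetry(const FileName: string; Mode: Word): TFileStream;
        var
          Attempt: Integer;
        begin
          for Attempt := 1 to 5 do
          begin
            try
              Result := TFileStream.Create(FileName, Mode);
              Exit;
            except
              on EStreamError do
                Sleep(100); // the other side briefly held the file; try again
            end;
          end;
          raise EFOpenError.CreateFmt('Could not open %s after retries', [FileName]);
        end;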


  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server... a simple Windows Server box with a few terabytes of disk space to share on the LAN. Pain of current hard drive prices aside :( I would like to get some benchmarks for this device under load, compared to our old server. The old server was installed in 2005 and had five 136GB 10K disks in RAID 5. The new server has eight 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K RPM, and of course with a much larger cache size. I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be driven more by the network connection? I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when. FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better RAID 10 performance vs RAID 5, in spite of using slower disks, but I'd like to get an idea of how much.
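
    A starting point for the counter question, sketched with typeperf (which ships with Windows): the latency and queue-length counters expose storage saturation directly, while the NIC counter shows whether the network is the ceiling instead; sample during a representative workday and compare the two servers:

        typeperf "\PhysicalDisk(_Total)\Avg. Disk sec/Read" ^
                 "\PhysicalDisk(_Total)\Avg. Disk sec/Write" ^
                 "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
                 "\Network Interface(*)\Bytes Total/sec" -si 5 -o perf.csv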


  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop sometimes warns about corrupted files on the hard drive (Samsung SSD PB22-JS3 TM). This has only happened so far when updating (or checking out) an SVN repository with either TortoiseSVN or the command-line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough, which should be the case with SVN). However, when looking into the warned-about directory I notice nothing strange or unusual and don't get any more warnings about it, and another try at updating the working copy works (well, mostly; sometimes it happens again, albeit with a different directory). SVN stops updating once that error occurs; TortoiseSVN even shows an appropriate error message. Since the laptop is only a few months old I doubt the SSD is failing already; five months of normal usage shouldn't be too surprising. Also, it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time, and some part between the software and the hardware doesn't quite catch up fast enough? I don't know enough about this to actually make an informed guess. Anyone know what's up here? ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.


