Search Results

Search found 70199 results on 2808 pages for 'file monitoring'.

Page 42/2808

  • repeated entries in website log file

    - by Reza
    I am writing an ad hoc log analyser for my website's log file. The excerpt below shows file1.pdf apparently being downloaded twice; looking carefully, the timestamp and IP address are exactly the same in both entries. How can the same person download the file twice at the same instant? Should I count it as 2 in my programme or as 1? Any reply is appreciated.

      name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 3706 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
      name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 425462 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
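
    A note and a rough sketch (not from the original post): status code 206 means Partial Content, so two 206 entries with the same timestamp, IP and URL are typically two byte-range responses belonging to a single download. Below is a minimal Java sketch that counts downloads by collapsing entries sharing IP, timestamp and request; the log file name and the regular expression's field layout are assumptions based only on the excerpt above.

      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.util.HashSet;
      import java.util.Set;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class LogDedup {
          // subdomain, IP, ident, user, [timestamp], "request" -- assumed from the excerpt above
          private static final Pattern LINE = Pattern.compile(
                  "^\\S+ (\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"([^\"]+)\"");

          public static void main(String[] args) throws Exception {
              Set<String> seen = new HashSet<>();
              try (BufferedReader in = new BufferedReader(new FileReader("access.log"))) { // hypothetical file name
                  String line;
                  while ((line = in.readLine()) != null) {
                      Matcher m = LINE.matcher(line);
                      if (m.find()) {
                          // one logical download per unique IP + timestamp + request
                          seen.add(m.group(1) + "|" + m.group(2) + "|" + m.group(3));
                      }
                  }
              }
              System.out.println("Distinct downloads: " + seen.size());
          }
      }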

    Read the article

  • Updating the grub.cfg file to load a new kernel image in Ubuntu 11.04

    - by user1627657
    I am new to Linux. I am compiling the Linux kernel (version 2.6.34.12) with gcc in the traditional manner, inside a VMware machine running Ubuntu 11.04 (kernel 2.6.38-8-generic). I cannot figure out where in the grub.cfg file I should register the newly compiled kernel. I edited the existing entry to use the new image's version name, but then VMware was unable to load the new kernel. I have searched the internet without finding an answer, so could someone explain how to update grub.cfg so that the new kernel loads successfully? A few things about what I have done so far: make bzImage to create the image file; make modules_install && make install to install the modules; then sudo mkinitramfs -o initramfs.img-2.6.34 2.6.34; then sudo gedit grub.cfg, where in the menuentry I changed the vmlinuz and initrd versions from 2.6.38-8 to 2.6.34.12. That is everything I have done.

    Read the article

  • Load/Store Objects in file in Java

    - by brain_damage
    I want to store an object from my class in a file, and afterwards be able to load the object back from that file. But somewhere I am making a mistake (or several) and cannot figure out where. May I receive some help?

      public class GameManagerSystem implements GameManager, Serializable {
          private static final long serialVersionUID = -5966618586666474164L;

          HashMap<Game, GameStatus> games;
          HashMap<Ticket, ArrayList<Object>> baggage;
          HashSet<Ticket> bookedTickets;
          Place place;

          public GameManagerSystem(Place place) {
              super();
              this.games = new HashMap<Game, GameStatus>();
              this.baggage = new HashMap<Ticket, ArrayList<Object>>();
              this.bookedTickets = new HashSet<Ticket>();
              this.place = place;
          }

          public static GameManager createManagerSystem(Place at) {
              return new GameManagerSystem(at);
          }

          public boolean store(File f) {
              try {
                  FileOutputStream fos = new FileOutputStream(f);
                  ObjectOutputStream oos = new ObjectOutputStream(fos);
                  oos.writeObject(games);
                  oos.writeObject(bookedTickets);
                  oos.writeObject(baggage);
                  oos.close();
                  fos.close();
              } catch (IOException ex) {
                  return false;
              }
              return true;
          }

          public boolean load(File f) {
              try {
                  FileInputStream fis = new FileInputStream(f);
                  ObjectInputStream ois = new ObjectInputStream(fis);
                  this.games = (HashMap<Game, GameStatus>) ois.readObject();
                  this.bookedTickets = (HashSet<Ticket>) ois.readObject();
                  this.baggage = (HashMap<Ticket, ArrayList<Object>>) ois.readObject();
                  ois.close();
                  fis.close();
              } catch (IOException e) {
                  return false;
              } catch (ClassNotFoundException e) {
                  return false;
              }
              return true;
          }
          . . .
      }

      public class JUnitDemo {
          GameManager manager;

          @Before
          public void setUp() {
              manager = GameManagerSystem.createManagerSystem(Place.ENG);
          }

          @Test
          public void testStore() {
              Game g = new Game(new Date(), Teams.LIONS, Teams.SHARKS);
              manager.registerGame(g);
              File file = new File("file.ser");
              assertTrue(manager.store(file));
          }
      }
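
    One point worth illustrating, since the catch block in store() turns any IOException into a plain false: ObjectOutputStream can only serialize object graphs in which every reachable object implements Serializable, and a NotSerializableException (itself an IOException) would be swallowed silently here. Below is a minimal sketch of what a contained type such as Game would need to declare; the fields shown are hypothetical, inferred only from the test above.

      import java.io.Serializable;
      import java.util.Date;

      // Assuming Teams is an enum, as Teams.LIONS in the test suggests; enum constants
      // are Serializable by default.
      enum Teams { LIONS, SHARKS }

      // Every type reachable from the serialized collections (Game, GameStatus,
      // Ticket, Place, ...) must itself implement Serializable, otherwise
      // ObjectOutputStream.writeObject() throws NotSerializableException.
      public class Game implements Serializable {
          private static final long serialVersionUID = 1L;

          private final Date date;   // java.util.Date is already Serializable
          private final Teams home;
          private final Teams away;

          public Game(Date date, Teams home, Teams away) {
              this.date = date;
              this.home = home;
              this.away = away;
          }
      }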

    Read the article

  • How to search for duplicate values in a huge text file with around half a million records

    - by Shibu
    I have an input txt file which holds data in the form of records (each row is a record and represents, more or less, a DB table row), and I need to find duplicate values. For example:

      Rec1: ACCOUNT_NBR_1*NAME_1*VALUE_1
      Rec2: ACCOUNT_NBR_2*NAME_2*VALUE_2
      Rec3: ACCOUNT_NBR_1*NAME_3*VALUE_3

    In the above set, Rec1 and Rec3 are considered duplicates because their ACCOUNT NUMBERs are the same (ACCOUNT_NBR_1). Note: the input file shown above is a delimited file (the delimiter being *), but the file type can also be fixed-length, where each column starts and ends at specified positions. I am currently doing this with the following logic:

      Loop thru each ACCOUNT NUMBER
          Loop thru each line of the txt file and check if it is repeated.
          If repeated, record it in a hashtable.
      End
      End

    I am using the 'Pattern' and 'BufferedReader' Java APIs to perform this task, but since it takes a long time, I would like to know a better way of handling it. Thanks, Shibu
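
    The nested loops above are roughly O(n²) over the file. A single pass that groups line numbers by account number brings this down to about O(n). The sketch below assumes a '*'-delimited file with the account number in the first field (for a fixed-length file you would take a substring of the key columns instead); the file name is hypothetical.

      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.util.*;

      // Single-pass duplicate detection: group line numbers by account number.
      public class DuplicateAccounts {
          public static void main(String[] args) throws Exception {
              Map<String, List<Integer>> byAccount = new HashMap<>();
              try (BufferedReader in = new BufferedReader(new FileReader("records.txt"))) {
                  String line;
                  int lineNo = 0;
                  while ((line = in.readLine()) != null) {
                      lineNo++;
                      String account = line.split("\\*", 2)[0];   // first field is the key
                      byAccount.computeIfAbsent(account, k -> new ArrayList<>()).add(lineNo);
                  }
              }
              byAccount.forEach((account, lines) -> {
                  if (lines.size() > 1) {
                      System.out.println(account + " duplicated on lines " + lines);
                  }
              });
          }
      }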

    Read the article

  • Elevating UAC via .bat file?

    - by jslaker
    Pretty straightforward one that I'm having trouble finding an answer to. Server Fault previously helped me find a way to automate Windows updates without using WSUS. It's working fantastically, but to run it over the network you first have to mount a shared drive. That's pretty simple on XP, since you just mount the drive and run the updater. On Vista and W7, though, this all has to be done with elevated privileges to work correctly. The UAC account can't see network drives mounted by the regular user, so in order to get everything working I have to mount the share via net use from an elevated shell. I'd like to automate mounting this share and launching the updater via a simple .bat file. I could probably just instruct everybody to right-click and "Run as Administrator" on the .bat file, but I'd like to keep things as simple as possible and have the .bat automatically prompt the user to elevate their privileges. Since these computers don't belong to us, I can't count on anything like PowerShell being installed, so that rules out any solution along those lines, and I pretty much have to rely on things that would be included in an RTM Vista install. I'm hoping I'm mostly missing something obvious here. :)

    Read the article

  • How to check that all of my files copied correctly with a batch file?

    - by rima
    Dear friends, I have a batch file that copies all the files from a source location to a destination location using the xcopy command. Now I want to make sure all of my files copied correctly and then delete all the files in the source folder; do you have any ideas? Also, is there a command to delete a folder together with all the files and folders inside it? Please advise me. My source folder has the structure below:

      root
        [sub folder1]
          filex.s
          filei.z
        [sub folder2]
          filep.a
          fileq.q
        [sub folder3]
          filex.s
          filei.z
          filsi.w
          file1.xx
          file2.cc
          file3.ss

    Read the article

  • Java File manipulation

    - by user69514
    So I have an application with a JFileChooser from which I select a file to read. Then I change some words and write a new file. The problem I am having is that when I write the new file, it gets saved in the project directory. How do I save it in the same directory as the file that I chose using the JFileChooser? Note: I don't want to use the JFileChooser to choose the location. I just need to save the file in the same directory as the original file that I read.
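
    A minimal sketch of the usual approach: take the parent directory of the file the user picked and build the output path from it. The output file name used here is hypothetical.

      import java.io.File;
      import javax.swing.JFileChooser;

      public class SaveBesideOriginal {
          public static void main(String[] args) {
              JFileChooser chooser = new JFileChooser();
              if (chooser.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
                  File selected = chooser.getSelectedFile();
                  // Reuse the chosen file's parent directory so the new file lands
                  // next to the original instead of in the project directory.
                  File output = new File(selected.getParentFile(), "modified-" + selected.getName());
                  System.out.println("Would write to: " + output.getAbsolutePath());
              }
          }
      }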

    Read the article

  • In which document do file specifications belong?

    - by Andrew
    In which document would a file specification belong? Perhaps the file is used as an input to a third-party system. Would it belong in its own document, or would it be better to put it in the functional or design spec, or somewhere else? When I say file specification, I mean a description of what format the file is in (CSV, fixed width, etc.), its columns, data types, and so on. Also, where should you document how the file is generated, i.e. the business rules/algorithms used to generate the file?

    Read the article

  • Windows batch/script code to conditionally process based on date value in text file

    - by CarolinaJay65
    I am using Windows XP. The code can be used in a batch file or a VBS script; I intend to use Windows Scheduler to run the program. I need code to read a date from a text file (it could be the only line in the text file, or the date could be included in the filename; I control the process that generates the file). The code would then need to evaluate the text file's date against the current date to confirm that the text file's date is from the prior month. I'm starting to build a process for running 1st-of-the-month jobs once the monthly data has been refreshed. I'm new to building this kind of process using batch/script files. Thanks for your time.

    Read the article

  • Split a binary file into chunks in C++

    - by L4nce0
    I've been bashing my head against trying to divide a file into chunks, for the purpose of sending them over sockets. I can read/write a file easily without splitting it into chunks. The code below runs and sort of works: it will write a text file back out, but with a garbage character. If this were just for txt files, no problem, but JPEGs don't survive said garbage. I've been at it for a few days, so I've done my research, and it's time to get some help. I do want to stick strictly to binary reads, as this needs to handle any file. I've seen a lot of slick examples out there (none of which worked for me with jpgs), mostly something along the lines of while(file)...; I subscribe to the "if you know the size, use a for-loop, not a while-loop" camp. Thank you for the help!!

      vector<char*> readFile(const char* fn) {
          vector<char*> v;
          ifstream::pos_type size;
          char* memblock;
          ifstream file;
          file.open(fn, ios::in | ios::binary | ios::ate);
          if (file.is_open()) {
              size = fileS(fn);
              file.seekg(0, ios::beg);
              int bs = size / 3;  // arbitrary. Actual program will use the socket send size
              int ws = 0;
              int i = 0;
              for (i = 0; i < size; i += bs) {
                  if (i + bs > size)
                      ws = size % bs;
                  else
                      ws = bs;
                  memblock = new char[ws];
                  file.read(memblock, ws);
                  v.push_back(memblock);
              }
          } else {
              exit(-4);
          }
          return v;
      }

      int main(int argc, char** argv) {
          vector<char*> v = readFile("foo.txt");
          ofstream myFile("bar.txt", ios::out | ios::binary);
          for (vector<char*>::iterator it = v.begin(); it != v.end(); ++it) {
              myFile.write(*it, strlen(*it));
          }
      }
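
    The question is about C++, but as a language-neutral illustration of the bookkeeping involved, here is a rough Java sketch of the same chunking idea: each chunk's actual byte count travels with its bytes, so nothing has to be inferred later when the chunks are written back out or sent over a socket. The file names and chunk size are assumptions.

      import java.io.*;
      import java.util.*;

      // Split a file into fixed-size chunks, remembering each chunk's real length,
      // then reassemble it. Binary-safe because the length is stored with the bytes.
      public class ChunkCopy {
          public static void main(String[] args) throws IOException {
              final int chunkSize = 4096;                    // stand-in for the socket send size
              List<byte[]> chunks = new ArrayList<>();

              try (InputStream in = new FileInputStream("foo.jpg")) {       // hypothetical input
                  byte[] buf = new byte[chunkSize];
                  int n;
                  while ((n = in.read(buf)) > 0) {
                      chunks.add(Arrays.copyOf(buf, n));     // keep only the bytes actually read
                  }
              }
              try (OutputStream out = new FileOutputStream("bar.jpg")) {    // hypothetical output
                  for (byte[] chunk : chunks) {
                      out.write(chunk);                      // chunk.length is exact, no guessing
                  }
              }
          }
      }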

    Read the article

  • What file manager/uploader do you use with your embedded wysiwyg editor?

    - by Wookai
    I am looking for a wysiwyg editor to allow users to edit parts of their website. I have already worked with TinyMCE and have heard about CKEditor, and both look great. However, they both lack a free (PHP) file manager/uploader, as they sell their own tools to make some money. I found a few free alternatives for TinyMCE, the best one (for my needs) being PHP Letter's, but I was wondering what the community uses. Do you buy the "official" file manager? Do you code your own? Or do you have a great (free) alternative?

    Read the article

  • What is the ideal file size for a web page? [closed]

    - by Rob
    Possible Duplicate: Is there a maximum size that web pages should be kept under? What is the ideal file size for a web page? Specifically, when it comes to image sizes, what should the total file size be for a web page that includes several images? I tend to compress images down as much as possible before they start to visibly lose quality. We run several CMS websites, and clients tend to ask this question a lot! I'd love to hear another view on it.

    Read the article

  • Uploading multiple files asynchronously by blueimp jquery-fileupload

    - by Ryo
    I'm using the jQuery File Upload library (http://github.com/blueimp/jQuery-File-Upload), and I'm stuck figuring out how to use it while satisfying the following conditions:

      The page has multiple file input fields surrounded by a form tag.
      Users can attach multiple files to each input field.
      All files are sent to the server when a button is clicked, not when files are attached to the input fields.
      The upload is done asynchronously.
      Say the page has 3 input fields whose name attributes are "file1[]", "file2[]" and "file3[]"; the request payload should then look like {file1: [ array of files on file1[] ], file2: [ array of files on file2[] ], ...}.

    Here's a jsFiddle; it is behaving oddly so far in that it sends the POST request twice and the first one is cancelled: http://jsfiddle.net/BAQtG/24/ The core part of the js code looks like this:

      $(document).ready(function(){
          var filesList = []
          var elem = $("form")
          file_upload = elem.fileupload({
              formData: {extra: 1},
              autoUpload: false,
              fileInput: $("input:file"),
          }).on("fileuploadadd", function(e, data){
              filesList.push(data.files[0])
          });
          $("button").click(function(){
              file_upload.fileupload('send', {files: filesList})
          })
      })

    Does anybody have an idea how to get this to work? Update: thanks to @CBroe's comment, the issue of the request being sent twice is fixed. However, the keys of the request parameters are still not set correctly. Here's the updated jsFiddle: http://jsfiddle.net/BAQtG/27/

    Read the article

  • Download a File in a Batch File

    - by Cristian
    I've never done any scripting on Windows, but now I need to write a batch file that downloads a file off the internet (amongst other things). If it were Linux I would use wget. Is there a built-in executable that will download a file to a given directory? This needs to run on XP.

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it is also periodically backed up to an external hard drive to make sure I don't lose anything. However, I'm always accessing the files from other computers on the home network via an SMB share, and this incurs a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example.

    I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up file access. The client should preferably not keep a copy of all the data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should cache only the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer that they be handled properly (with a notification to the user). I've come across the following options so far:

      something similar to Dropbox; iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off
      a distributed file system such as OpenAFS; I'm not sure this will speed up file access, and it is probably overkill for what I need
      maybe NFS or even Samba offer these possibilities; I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP)

    As this is just for personal use, I'm not willing to spend a lot of money; a free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.

    Read the article

  • Windows: File copy/move with filename regular expressions?

    - by Ian Boyd
    I basically want to run:

      C:\>xcopy [0-9]{13}\.(gif|jpg|png) s:\TargetFolder /s

    I know xcopy doesn't support regular-expression filename searches. I can't work out whether PowerShell has a cmdlet to copy files, and if it does, whether it supports regular-expression filename matching. Can anyone think of a way to perform a recursive file copy/move with regex filename matching?

    Read the article

  • Duplicate file finder

    - by Andrija
    I need a free duplicate file finder/remover app, with the ability to find duplicate files/folders by name and/or by size and to remove one of the duplicates. Can you please recommend one, and say why? Thanks. EDIT: Changed to CW. Please add more apps to the list if you know of any.

    Read the article

  • File associations in Windows 8

    - by soandos
    When I set a program to open a specific file type by default, it works (like Word 2013 for PDFs). However, when I change the default back to Reader, I lose Word as an option that I can pick in the Open With sub-menu. How can I get it to stick? Update: it seems that there is something special about Word. Adding Notepad++ as the default and then changing it back keeps Notepad++ in the sub-menu. The Word version is 2013.

    Read the article

  • What do .# file names mean in Linux?

    - by Martin Wiboe
    This is probably trivial, but I'm quite new to Linux and I was unable to find any info online. In a folder, I can execute the command

      find . -regex '.*py'

    and get the following result:

      ./.#netMHC3.2.py

    Is this a file in the current directory? What can I do to display its contents? Thank you

    Read the article

  • File Transfer with a double SSH login.

    - by Harpal
    I'm having trouble transferring files from my work PC, which is a Linux machine, to my home Windows PC. My work has changed things so that I now need to SSH twice before I can access my PC. So I need to:

      ssh [email protected]
      password: xxxxx

    I then need to do it again:

      ssh computer_name
      password: xxxxx

    I've tried accessing my computer directly via its IP, but to no avail. Is there a way I can use pscp or FileZilla to SSH twice so I can transfer files?

    Read the article

  • Exchange backup size larger than file size

    - by bladefist
    My Backup Exec is set up to integrate with Exchange, backing up the information store rather than just backing up the data file path. My Exchange mdbdata folder is 17 GB, but Backup Exec is backing up 40 GB worth of data. I have gone through it a million times, and it is strictly backing up the Exchange information store. I deleted all my backups and started over, to clear out the old incremental backup data. Where is all this extra data coming from?

    Read the article
