Search Results

Search found 69664 results on 2787 pages for 'file copying'.


  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    OK, that title is going to be a little confusing, so let me try to explain it better. I am building a logging program. The program will have three main states:

    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, start a new one with the past 10 minutes of data, and change state to 1.

    Now, the use case is this. I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system that records TCP traffic when it detects a bottleneck (detection via Nagios). However, by the time the bottleneck is detected, most of the useful data has already been transmitted. So what I'd like is a daemon that runs something like dumpcap all the time. In normal mode, it keeps only the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). When Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file.

    The problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones when in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out where in the file the alert happened). Ideally, the saved file would be such that the alert is always at the 10:00 mark. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point. Any ideas? Should I just use a rotating file scheme and combine the pieces into one at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly so that no post-processing is needed? Thanks. Oh, and the language doesn't matter much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design).
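
    For what it's worth, the rotating-chunk scheme the question half-dismisses can be kept fairly clean if the chunks are small (say one minute each): normal mode prunes to the last ten, an alert simply stops the pruning, and recovery concatenates everything once. A minimal Python sketch of that idea; ChunkedRingLogger and the file layout are hypothetical, and real pcap chunks would need mergecap rather than raw concatenation:

        import glob, os, shutil, time

        class ChunkedRingLogger:
            def __init__(self, spool_dir, keep=10):
                self.spool_dir = spool_dir
                self.keep = keep        # chunks retained in normal mode
                self.alerting = False   # True between Nagios alert and recovery

            def _chunks(self):
                return sorted(glob.glob(os.path.join(self.spool_dir, "chunk-*.dat")))

            def rotate(self):
                # Called once a minute: start a new chunk; prune only in normal mode.
                path = os.path.join(self.spool_dir, "chunk-%d.dat" % int(time.time()))
                open(path, "wb").close()      # the capture process would write here
                if not self.alerting:
                    for old in self._chunks()[:-self.keep]:
                        os.remove(old)        # round-robin: drop data older than ~10 min

            def flush(self, save_path):
                # Called on Nagios recovery: stitch chunks oldest-first into one file.
                with open(save_path, "wb") as out:
                    for c in self._chunks():
                        with open(c, "rb") as f:
                            shutil.copyfileobj(f, out)
                        os.remove(c)
                self.alerting = False

    Because pruning stops the moment the alert arrives, the alert always sits about ten chunks into the stitched file, which approximates the 10:00 mark to within one chunk's duration.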

    Read the article

  • \n not working in my fwrite()

    - by brett
    Not sure what could be the problem. I'm dumping data from an array $theArray into theFile.txt, each array item on a separate line:

        $file = fopen("theFile.txt", "w");
        foreach ($theArray as $arrayItem) {
            fwrite($file, $arrayItem . '\n');
        }
        fclose($file);

    The problem is that when I open theFile.txt, I see the \n output literally. Also, if I try to read the file back line by line programmatically (just in case the lines are there), it comes out as one line, meaning the \n really isn't having its desired effect.
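
    The culprit here is a PHP quoting rule: escape sequences such as \n are only interpreted inside double-quoted strings, while single quotes write the two characters backslash and n literally. A minimal corrected sketch (PHP_EOL is the usual alternative when the platform's native line ending is wanted):

        $file = fopen("theFile.txt", "w");
        foreach ($theArray as $arrayItem) {
            fwrite($file, $arrayItem . "\n");   // double quotes: \n is a real newline
            // or: fwrite($file, $arrayItem . PHP_EOL);
        }
        fclose($file);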

    Read the article

  • Android ACTION_SEND Attached File

    - by Sean
    When you attach a file to an e-mail using the ACTION_SEND intent (with the extra EXTRA_STREAM), does the e-mail app copy that attached file to its own location? My app creates a file and attaches it to an e-mail, but this can happen many times, and I would like to be able to delete the file when it is no longer needed (so it doesn't flood the user's storage with junk data). Is the file safe to delete after the e-mail intent has started?
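
    For reference, the pattern the question describes looks roughly like this, the classic pre-FileProvider form (the MIME type and path are hypothetical):

        Intent intent = new Intent(Intent.ACTION_SEND);
        intent.setType("application/octet-stream");
        intent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(new File(path)));
        startActivity(Intent.createChooser(intent, "Send e-mail"));

    Nothing in the ACTION_SEND contract obliges the receiving app to copy the stream before using it, so deleting the file immediately after startActivity() returns is racy; a safer, commonly used compromise is to sweep stale files on the app's next launch.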

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly; a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.

        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print an error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if (!file.Open(_T("c:\\sample.txt"), CFile::modeRead, &exp))
                {
                    exp.ReportError();
                    cout << '\n' << "Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout << "Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer = (BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout << "length of string : " << strlen(str) << '\n';
                cout << "string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str << '\n';

                file.Close();
                cout << "File was closed\n";
                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }
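
    The junk is a symptom worth spelling out: CFile::Read returns the number of bytes actually read but never null-terminates the buffer, and the code streams str as a C string regardless. A sketch of the usual repair, reading the whole file in one go (this assumes the file fits in memory):

        UINT len = (UINT)file.GetLength();
        char* buffer = new char[len + 1];
        UINT bytesRead = file.Read(buffer, len);  // returns the count actually read
        buffer[bytesRead] = '\0';                 // terminate, or cout runs into junk
        cout << "Data : " << buffer << '\n';
        delete[] buffer;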

    Read the article

  • How do I get the file size of a large (> 4 GB) file?

    - by endeavormac
    How can I get the file size of a file in C when the file size is greater than 4 GB? ftell returns a 4-byte signed long, limiting it to 2 GB. stat has a field of type off_t, which is also 4 bytes (not sure of the sign), so at most it can tell me the size of a 4 GB file. What if the file is larger than 4 GB?
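
    On POSIX systems the usual answer is to compile with 64-bit file offsets so that off_t (and the return of ftello) is 8 bytes even on a 32-bit host; on Windows the equivalents are _stati64 or GetFileSizeEx. A sketch assuming a glibc-style system:

        /* build with: cc -D_FILE_OFFSET_BITS=64 filesize.c */
        #include <stdio.h>
        #include <sys/stat.h>

        int main(int argc, char **argv)
        {
            struct stat st;
            if (argc < 2 || stat(argv[1], &st) != 0) {
                perror("stat");
                return 1;
            }
            /* with _FILE_OFFSET_BITS=64, st_size is a 64-bit off_t */
            printf("%lld bytes\n", (long long)st.st_size);
            return 0;
        }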

    Read the article

  • visual c# open own file extension

    - by ecross
    Hello everybody. First: I'm Dutch, so sorry if my English is not so good. I have made my own file type (.ddd) and a simple program to open it, but when I click on a .ddd file (on my desktop), my program opens without the file being automatically loaded inside it. How do I get the file opened directly in my program when the program starts?
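
    Some context that may help: when Windows launches a program through a file association, it hands the clicked file's path to the program as its first command-line argument, so the program has to look there at startup. A minimal sketch (the load step is hypothetical, standing in for whatever the program does with a .ddd file):

        using System;
        using System.IO;

        static class Program
        {
            static void Main(string[] args)
            {
                // args[0] holds the path of the .ddd file that was double-clicked.
                if (args.Length > 0 && File.Exists(args[0]))
                {
                    string contents = File.ReadAllText(args[0]);
                    Console.WriteLine("Loaded " + args[0]);
                }
            }
        }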

    Read the article

  • How can the second application read a file only while the first application is not modifying it?

    - by soField
    I have two applications: the first is a bash script, the second is Java. One of them (the first) periodically deletes and recreates a specific file; the other (the second) periodically reads this file and processes it with its own logic. How can I ensure the second application reads the file only when the first application is not modifying it? My aim is to force the second app to read the file only once its content has been fully written. How can I achieve this goal?
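
    The classic answer is to never let the reader see the file mid-write: the writer builds the content under a temporary name on the same filesystem and then renames it into place. Since rename() is atomic on POSIX filesystems, the reader always opens either the old complete file or the new complete file. A sketch of the bash-side writer (the paths and the generating command are hypothetical):

        # Build privately, then publish atomically.
        tmp=$(mktemp /var/spool/myapp/data.XXXXXX)
        generate_data > "$tmp"                 # whatever produces the content
        mv "$tmp" /var/spool/myapp/data.txt    # atomic on the same filesystem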

    Read the article

  • uWSGI log file...permission denied to read file

    - by bkev
    I have a server running Django/Nginx/uWSGI with uWSGI in emperor mode, and the error log for it (the vassal-level error log, not the emperor-level log) has a continual permissions error every time it spawns a new worker, like so:

        Tue Jun 26 19:34:55 2012 - Respawned uWSGI worker 2 (new pid: 9334)
        Error opening file for reading: Permission denied

    Problem is, I don't know what file it's having trouble opening; it's not the log file, obviously, since I'm looking at it and it's writing to that without issue. Any way to find out? I'm running the apt-get version of uWSGI 1.0.3-debian through Upstart on Ubuntu 12.04. The site is working successfully, aside from what seems like a memory leak...hence my looking at the log file.

    My Upstart conf file:

        description "uWSGI"
        start on runlevel [2345]
        stop on runlevel [06]
        respawn
        env UWSGI=/usr/bin/uwsgi
        env LOGTO=/var/log/uwsgi/emperor.log
        exec $UWSGI \
            --master \
            --emperor /etc/uwsgi/vassals \
            --die-on-term \
            --auto-procname \
            --no-orphans \
            --logto $LOGTO \
            --logdate

    My vassal ini file:

        [uwsgi]
        # Variables
        base = /srv/env/mysiteenv

        # Generic Config
        uid = uwsgi
        gid = uwsgi
        socket = 127.0.0.1:5050
        master = true
        processes = 2
        reload-on-as = 128
        harakiri = 60
        harakiri-verbose = true
        auto-procname = true
        plugins = http,python
        cache = 2000
        home = %(base)
        pythonpath = %(base)/mysite
        module = wsgi
        logto = /srv/log/mysite/uwsgi_error.log
        logdate = true
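
    One way to catch the culprit, offered as a guess rather than a uWSGI-specific recipe: attach strace to the spawning process and watch for open() calls that fail with EACCES around a respawn, e.g.:

        sudo strace -f -e trace=open -p <master-pid> 2>&1 | grep EACCES

    The failing path shows up in the traced call, which pins down whether it is a Python egg cache, a socket, or something else the uwsgi user cannot read.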

    Read the article

  • Trying to attach a database file but can't browse the folder which contains the file

    - by Chadworthington
    I am trying to attach a database file (*.mdf, *.ldf) that I placed in the same folder as all my other SQL Server databases. I begin the attach by attempting to browse to the folder which contains the db files as well as all of my active database files. I select "Attach Database" and click the "Add" button to add a database to the list of databases to attach. When I do so, I get this error:

        TITLE: Locate Database Files - BESI-CHAD
        ------------------------------
        D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA
        Cannot access the specified path or file on the server. Verify that you have
        the necessary security privileges and that the path or file exists.
        If you know that the service account can access a specific file, type in the
        full path for the file in the File Name control in the Locate dialog box.
        ------------------------------
        BUTTONS: OK
        ------------------------------

    The path is correct and, as I mentioned, it contains all of my other database files, so I wouldn't think that permissions should be an issue, but here is what I see for that folder. Any idea why I cannot browse to that folder and attach the db files that I have placed there?
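
    Two details may be relevant here. First, the Locate dialog browses using the SQL Server service account rather than the logged-in user, so that account needs rights on D:\SQLdata\...\DATA even if the admin can see the folder. Second, the error text itself suggests bypassing the browse entirely; a hedged T-SQL sketch of attaching by explicit path (the database and file names are hypothetical):

        CREATE DATABASE MyDb
            ON (FILENAME = 'D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA\MyDb.mdf'),
               (FILENAME = 'D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA\MyDb_log.ldf')
            FOR ATTACH;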

    Read the article

  • copying folder and file permissions from one user to another after switching domains [closed]

    - by emptyspaces
    Please excuse the title; this was the best way I could think to describe this scenario without an entire paragraph. I am using C#.

    Currently I have a file server running Windows Server 2003 set up on a domain, which we will call oldDomain, and I have about 500 user accounts with various permissions on this server. Because of restrictions out of my control, we are abandoning this domain and using another one that is more dominant within the organization, which we will call newDomain. All of the users that have accounts on oldDomain also have accounts on newDomain, but the usernames are completely different and there is no link between the two.

    What I am hoping to do is generate a list of all user accounts and their SIDs from AD on oldDomain; I already have this part done using dsquery and dsget. Then I will have someone go through and match all of the accounts from oldDomain to the correct username on newDomain, ultimately leaving me with a list of SIDs from oldDomain and the appropriate username from newDomain. Now I am hoping to copy the file and folder permissions from the old user on oldDomain to the new user on newDomain once I join the server to newDomain.

    Can anyone tell me the best way to copy permissions from the old SID to the user on newDomain? There are a bunch of articles out there about copying permissions from user A to user B, but I wanted to check and see what the recommended practice is here, since there are a ton of directories.
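
    Since the question mentions C#, here is a minimal sketch of the per-directory step using .NET's System.Security.AccessControl; it copies only explicit (non-inherited) rules, and the recursion over the tree and the SID-to-account mapping are left out as assumptions:

        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        static class AclCopier
        {
            // Mirror every explicit rule held by oldSid onto newAccount for one directory.
            public static void CopyRules(string path, SecurityIdentifier oldSid, NTAccount newAccount)
            {
                var dir = new DirectoryInfo(path);
                DirectorySecurity acl = dir.GetAccessControl(AccessControlSections.Access);
                foreach (FileSystemAccessRule rule in
                         acl.GetAccessRules(true, false, typeof(SecurityIdentifier)))
                {
                    if (rule.IdentityReference.Equals(oldSid))
                    {
                        // Same rights, inheritance, and allow/deny; new identity.
                        acl.AddAccessRule(new FileSystemAccessRule(
                            newAccount, rule.FileSystemRights,
                            rule.InheritanceFlags, rule.PropagationFlags,
                            rule.AccessControlType));
                    }
                }
                dir.SetAccessControl(acl);
            }
        }

    Because only explicit rules are touched and inherited ones propagate on their own, running this over the top-level directories is usually enough; subinacl with /replace is the classic non-code alternative.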

    Read the article

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6 with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD hard disk. I often get failures when I attempt to copy a large file or large collection of files from an external USB drive (typically a "FireWire" generation Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer: I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the Finder, and I have to unplug and reinsert the USB cable to mount the drive again.

    I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB Compact Flash card reader. The import will fail and I will have to unplug the card reader and import the missing items.

    Oddly, reversing the direction of the copy seems pretty reliable. I've never had a problem copying a large file to a USB device, meaning that I have quite a few large files which are stranded on my Drobo.

        Model Identifier: MacBookAir3,2
        Boot ROM Version: MBA31.0061.B01

    I have seen a similar issue reported on Apple's website: http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0

    The only suggested resolutions there seem to be switching to another form of connectivity (e.g. FireWire, which does not exist on the MacBook Air), downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions: http://discussions.apple.com/message.jspa?messageID=12566073#12582956

    I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?

    Read the article

  • Windows Server NTFS volume list file name encodings and any illegal file names

    - by benbradley
    I'm having to deal with a Windows Server (NTFS) file server and our backup application appears to be failing with certain files. According to this https://en.wikipedia.org/wiki/NTFS#Internals NTFS apparently supports file names encoded in UTF-16 but according to their support team, our backup application only supports UTF-8. I'd like to confirm whether this is actually the problem by seeing the file name encoding for myself. The files that are failing appear to be using plain English A-Z letters and other ASCII characters. No accents or non-English letters etc. I suppose even though the letters appear to be plain A-Z the file name could still be encoded in UTF-16. Does anyone know of a utility or script that can recursively go through all files in a directory and show the encoding of the file name? Then I could try renaming to UTF-8 to see if the backup can proceed. I'm not a Windows developer so can't write this up myself. Presumably the encoding of the file name should be stored in the FS somewhere and therefore it should be possible to expose this.
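
    A point that may save some digging: NTFS stores every file name as UTF-16 internally, so there is no per-file name encoding to discover; what usually trips up UTF-8-only tools is a name containing characters beyond ASCII or an unpaired surrogate. A hedged PowerShell sketch that lists names with any code unit above 127 (the share path is hypothetical):

        Get-ChildItem -Recurse -Path D:\Share |
            Where-Object { $_.Name.ToCharArray() | Where-Object { [int]$_ -gt 127 } } |
            Select-Object FullName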

    Read the article

  • Create "raw disk file" from WIM file

    - by Joe Baltimore
    First-timer here. I've searched around here but haven't found a question like the one I have; apologies if I missed it. The challenge at hand: produce a "raw disk image file" from a given WIM file. What I am pursuing so far is to use imagex.exe with the "/apply" operation to take the WIM and lay it down in a directory on a server. That seems to produce all the necessary "stuff" I need in that directory. How would I take that content and produce a "raw disk image file"? I'm told the definition of "raw disk image file" is a block-by-block copy of the disk image, which I hope is the output of the "imagex.exe /apply" command I use currently, but stored in a single file I can hand back to another system in our solution.

        imagex.exe /apply image.wim 1 R:\WimImagePoint

    I would like to take the contents of R:\WimImagePoint and produce the elusive (to me) "raw disk image file". ISO is not what they want, nor is anything requiring WinPE. Any pointers? References to external utilities are welcome. I would like to avoid unmanaged-code solutions as much as possible, but will entertain them if that's the only route. Also, I am not married to the idea of imagex /apply as the starting point; it's just the comfort zone so far.
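
    One route worth considering, sketched here as an assumption rather than a tested recipe: apply the WIM into a fixed-size VHD instead of a directory. A fixed VHD is a block-by-block image of the virtual disk with a 512-byte footer appended, so it is either directly acceptable as "raw" or trivially truncatable to raw. Assuming a Windows 7 / Server 2008 R2 era toolchain:

        rem contents of makevhd.txt (a diskpart script):
        rem   create vdisk file=C:\img\disk.vhd maximum=20480 type=fixed
        rem   select vdisk file=C:\img\disk.vhd
        rem   attach vdisk
        rem   create partition primary
        rem   format fs=ntfs quick
        rem   assign letter=V

        diskpart /s makevhd.txt
        imagex.exe /apply image.wim 1 V:\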

    Read the article

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason behind the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The single server is an HP ProLiant DL360 G7 with a 10 Gbps Ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (that's about what I want), but through ssh with arcfour I can only do ~100 MB/sec while having 100% CPU usage. That's why I want file transfers not to be encrypted.

    The alternatives I found are not really suitable:

    - rcp: no authentication, forget it.
    - FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a way to force an FTP daemon to encrypt the control channel (for the authentication) but not the data channel (for data transfers).
    - SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs.
    - rsync over ssh: same problems as with SCP/SFTP.
    - plain rsync: from the documentation of rsyncd: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.

    Is there a protocol/program that can do exactly what I want? (A big plus would be if it could work on Windows as well and/or if it supported rsync-style copying/synchronization, e.g. copying only the differences.)
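
    For completeness, one known answer to exactly this split is the HPN-SSH patch set: key exchange and authentication stay encrypted, but the bulk data stream can be switched to the NONE cipher. A hedged sketch, assuming HPN-patched OpenSSH on both ends:

        # HPN-SSH only: the login stays encrypted, the payload does not
        scp -oNoneEnabled=yes -oNoneSwitch=yes backup.tar backupserver:/srv/backups/

    It does not help with the Windows wish, but it keeps rsync-over-ssh on the table with most of the CPU cost removed.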

    Read the article

  • Long 'pause' after copying large files on Windows 2008

    - by Ian
    I have a mystery regarding pauses after file copies on Windows Server 2008 (and other releases). When copying large files, like VHDs, to locally attached USB disks, I often see a long pause after the copy has completed 100%. As an example: robocopying VHD files. The bytes read/written count matches the VHD file size and robocopy shows 100%, but it pauses for several minutes. If I do nothing it will continue, but I will have to wait for quite some time - about the same amount of time as it took to get to 100%. The bytes read/bytes written counters for robocopy do not change. My first thought was that the AV had to scan it, but I'm looking at a machine right now which doesn't have an AV installed and this is occurring, so that's impossible. No other processes are showing read/write byte counts going up. The behavior is the same if I use the copy command or xcopy. I've seen this on other systems but have never worked out what the cause is. Anyone got any suggestions as to what might be going on?

    Read the article

  • Can Linux file permissions be fooled?

    - by puk
    I came across this example today and I wondered how reliable Linux file permissions are for hiding information:

        $ mkdir fooledYa
        $ mkdir fooledYa/ohReally
        $ chmod 0300 fooledYa/
        $ cd fooledYa/
        $ ls
        ls: cannot open directory .: Permission denied
        $ cd ohReally
        $ ls -ld .
        drwxrwxr-x 2 user user 4096 2012-05-30 17:42 .

    Now, I am not a Linux OS expert, so I have no doubt that someone out there will explain to me that this is perfectly logical from the OS's point of view. However, my question still stands: is it possible to fool, not hack, the OS into letting you view files/inode info which you are not supposed to? What if I had issued the command chmod 0000 fooledYa - could an experienced programmer find some roundabout way to read a file such as fooledYa/ohReally/foo.txt?
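
    To make the transcript less surprising: mode 0300 gives the owner write and execute but not read, and on a directory the execute bit alone allows traversal (resolving names you already know), while the read bit governs listing. A small sketch separating the two, safe to try in a scratch directory:

        $ mkdir -p d/sub && chmod 0100 d   # execute only
        $ ls d                             # fails: listing needs read
        $ ls -ld d/sub                     # works: traversal needs only execute

    With chmod 0000 the owner loses traversal too, so ordinary processes are locked out entirely; only root (strictly, a process with CAP_DAC_OVERRIDE or CAP_DAC_READ_SEARCH) can bypass the mode bits.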

    Read the article

  • How to assign more than one open action to one file type

    - by Martin
    All operating systems I use apart from Windows have an "Open with…" option in their Explorer, Finder, whatever. This is very useful, as often more than one program can handle a given file extension. With the exception of zip files, I have generally not seen such a function on Windows. However, since there is an exception, it must be possible. The questions I have are: how can an "Open with…" be achieved on Windows? Is there perhaps a tool which can do it?
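
    Background that may help frame answers: Windows keys the context menu off the registry, where a file type's ProgID can carry any number of verbs under its shell key, each becoming a menu entry. A hedged .reg sketch adding a second open action for .txt files (the editor path and verb name are hypothetical):

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\txtfile\shell\OpenInOtherEditor]
        @="Open in Other Editor"

        [HKEY_CLASSES_ROOT\txtfile\shell\OpenInOtherEditor\command]
        @="\"C:\\Tools\\OtherEditor.exe\" \"%1\""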

    Read the article

  • IPv6 local address in hosts file

    - by Dan
    I have set up a local domain on my Apache server, then added the following line to my /etc/hosts file:

        ::1 exampledomain.local

    After that, trying to navigate to it (I tried Firefox and Chromium) gave me a "server not found" error. Then I tried ping6 and it worked:

        dan@danny:~$ ping6 exampledomain.local
        PING exampledomain.local(exampledomain.local) 56 data bytes
        64 bytes from exampledomain.local: icmp_seq=1 ttl=64 time=0.032 ms

    If I replace ::1 with 127.0.0.1 in my hosts file, it works fine. I'm not sure if this is relevant, but this is my virtual host configuration in Apache2:

        <VirtualHost *:80>
            ServerAlias exampledomain.local
            DocumentRoot /home/dan/sites/exampledomain
            <Directory /home/dan/sites/exampledomain>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/exampledomain-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug
            CustomLog ${APACHE_LOG_DIR}/exampledomain-access.log combined
        </VirtualHost>

    My question is: how can I make it work with the IPv6 address?
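
    A guess worth ruling out: with the hosts entry pointing at ::1, the browsers connect over IPv6, so Apache must be bound to the IPv6 loopback as well. Checking the Listen directive (usually in ports.conf on Debian/Ubuntu) is cheap:

        # A bare "Listen 80" normally binds both stacks on a dual-stack system;
        # an explicit "Listen 0.0.0.0:80" is IPv4-only and would explain this.
        Listen 80
        # or, explicitly:
        Listen [::]:80

    Running netstat -tlnp | grep :80 will show whether anything is listening on tcp6.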

    Read the article

  • tag based file organizer

    - by Richie
    I'm finishing professional school and over the years have acquired a pile of notes and articles that I'd like to hang onto. I'd like to add to them and create a sort of archive of articles and files that may be useful down the road. I'd also like to organize this collection of files not only by simple grouping but also with tags; I feel that will make searching through them years later much easier. Suggestions on software that would be good for this? Just a general file manager, something that uses tags that can be attached to each file? Thanks

    Read the article

  • Run a batch file silently, executed at remote desktop login

    - by ILMV
    In our office we are using Linux thin client machines. They work very well except for the lack of IE, which is a pain because the corporations we deal with are too stupid to update their web apps (no flame wars, please). To solve this problem we have a machine in our computer room which users remote-desktop into to access Internet Explorer; this is achieved by running a batch script which opens IE and logs them off when it closes. This setup works well for us.

    Even though I have @echo off and the cmd window isn't displaying anything, I would really like that batch file to be executed silently, so the cmd window doesn't appear at all. Is this possible? The Ubuntu terminal server client has an option to launch a file/app at login; is there a command I can use to run this batch silently? I have tried these:

        C:\my_batch.bat /NOCONSOLE
        C:\my_batch.bat /NOWINDOW
        C:\my_batch.bat /B
        C:\my_batch.bat /Q

    ...with no success; perhaps it's the way I am doing it? Cheers :-)

    Edit: The remote desktop platform is a Windows XP machine, nothing entirely special, but not a Windows Server setup.
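
    Appended after the .bat path like that, those switches just arrive as %1 inside the batch; they don't control the console window. The usual workaround is to start the batch through a tiny Windows Script Host wrapper, since WScript.Shell's Run method takes a window-style argument where 0 means hidden. A sketch (save as run_hidden.vbs, a hypothetical name, and point the login action at it instead of the .bat):

        ' run_hidden.vbs - start the batch with no visible window
        Set shell = CreateObject("WScript.Shell")
        shell.Run """C:\my_batch.bat""", 0, False   ' 0 = hidden, False = don't wait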

    Read the article

  • Browsers ignoring hosts file

    - by madkris
    Just recently, my browsers started to ignore my hosts file. I have Windows 7 installed, with this entry in the hosts file:

        192.168.0.5 livesite.com

    I have tried:

    - Clearing the browser cache
    - Issuing "ipconfig /flushdns" from the command line
    - Issuing "ping livesite.com" from the command line (response was "Reply from 192.168.0.5: bytes=32 time=1ms TTL=128")
    - Restarting the machine
    - Backing up the original hosts file and making a new one
    - Checking lmhosts.sam (everything is commented out)
    - Connecting directly to the modem using a cable
    - Checking \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath
    - Trying it on another laptop with exactly the same specs as mine

    Then I tried:

    - Changing the entry to "127.0.0.1 livesite.com" (ping OK, browser OK)
    - Changing the entry to "192.168.0.5 livesite.com" (ping OK, browser OK but only for a second)
    - Issuing "ipconfig /flushdns" from the command line (ping OK, browser not OK)
    - Changing the entry to "127.0.0.1 livesite.com" (ping OK, browser OK)
    - Changing the entry to "192.168.0.5 livesite.com" (ping OK, browser not OK)
    - Issuing "ipconfig /flushdns" from the command line (ping OK, browser not OK)

    Any idea why it worked for a moment? Or better yet, anything I haven't tried, or some error I may have overlooked?

    Read the article

  • Should you disable page file with SSD?

    - by Pyrolistical
    I've been reading http://serverfault.com/questions/23621/any-benefit-or-detriment-from-removing-a-pagefile-on-an-8gb-ram-machine, and it has a lot of great information. But assuming you have more than enough RAM, I think the page file should be disabled on an SSD to extend its lifetime. I know you would lose the core dump on a crash, but not many people need that information. From my understanding, with a page file, reaching the limit of your RAM might trigger thrashing on disk. But for SSDs there is no concept of thrashing; reads are fast. What do you guys think?

    Read the article
