Search Results

Search found 85647 results on 3426 pages for 'file write'.


  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit's end with file server clustering under Hyper-V. I am hoping that someone might be able to help me untangle this Gordian knot of a technology, which seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a solution.

    My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will act as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file server VM. Please note, these are each the secondary (non-preferred) nodes of a cluster; the primary nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually I will be adding additional VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, "wait, what about a SAN or a NAS for the file servers?", well, too bad. What exists on the main machine is what I have to deal with.

    I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine and attached one to each file server VM. The end result is that I want FS01 to sit on the heavy lifter, along with its iSCSI "drive", and FS02 to sit on the secondary machine with its own iSCSI "drive" there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. The clustered FS should then fully replicate the contents of the iSCSI drives between the nodes, so that if one physical machine (or the FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive.

    My problem occurs when I try to apply the file server role within Failover Cluster Manager. Actually, it occurs even before that -- when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct "owner node". I mean, really -- WTF? Of course it doesn't have the right owner node; both drives are showing the same node name! I cannot seem to have one drive show up with one node name as owner and the other drive show up with the other node name as owner. And because both drives are not "online", I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!

    I've got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, with the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn't care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • Problem with squid log files

    - by Gatura
    I am using SARG to get a report on the squid log files. I run:

        /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log

    and get this result:

        SARG: Records in file: 0, reading: 0.00%
        [the line above is repeated several dozen times]
        sort: open failed: +6.5nr: No such file or directory
        SARG: (index) Cannot open file: /Applications/Sarg/reports/index.sort
        SARG: Records in file: 0, reading: 0.00%

    What could be the problem?

    Read the article

  • Expanding iSCSI LUNs (NTFS)

    - by Fatih
    I have a 4TB iSCSI LUN formatted as NTFS in Windows 2008, and I've shared this volume as a folder over SMB. When the capacity of this volume is no longer enough, I will have to add more iSCSI LUNs, but the end users must still see only the folder that I shared before. So if I expand the NTFS volume that is currently 4TB with more iSCSI LUNs (for example, two more 4TB LUNs), and one of the LUNs fails or goes missing, will all of my data in the folder be lost? I imagine that an expanded NTFS volume is like RAID 0 (striped); if it is, then all my data will be lost when one of the LUNs fails or goes missing. In brief, there are two questions here: 1. What happens if one of the LUNs goes missing in an expanded NTFS volume? 2. Is there another way to merge all of the iSCSI LUNs into a single folder, so that the users don't see any extra folder even if I add extra iSCSI LUNs to the file server? (I am not talking about DFS.) Regards.

    Read the article

  • nginx: rewrite a non-existent php-file to another php-file with all arguments

    - by at0m33
    I really need help here. I have been sitting on this for some time now and haven't figured it out. I want to accomplish a very simple task -- rewrite a non-existent PHP file to another, existing PHP file with all arguments, like this: http://example.com/nonexistent.php?url=google.com should go to http://example.com/existent.php?url=google.com I tried something like this: rewrite ^/nonexistent.php /existent.php; which doesn't work ("File not found"). But redirecting a non-existent HTML file to a PHP file like this: rewrite ^/nonexistent.html /existent.php; works. I don't want to rewrite an HTML file, but this is still confusing behaviour. I therefore also tried something like this (and some variations): rewrite ^/nonexistent.php?url=^(.*)$ /existent.php?url=$1; which is also not working (maybe the syntax is bad). Any help here? It would be very nice!
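    A minimal sketch of one possible configuration, assuming PHP-FPM on 127.0.0.1:9000 (the upstream address and document root below are assumptions, not from the question). Two details matter: an nginx rewrite pattern never matches the query string (the arguments are carried over automatically unless the replacement itself contains a "?"), and the rewrite has to take effect before the PHP location hands SCRIPT_FILENAME to FastCGI:

        server {
            root /var/www/example.com;

            # server-level rewrite; ?url=google.com is appended automatically
            rewrite ^/nonexistent\.php$ /existent.php last;

            location ~ \.php$ {
                include fastcgi_params;
                # use the rewritten script name, not $request_uri
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;
            }
        }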

    Read the article

  • How to create a text file from a column and FTP that text file to a server

    - by addi
    I have a workbook with 2 sheets (Sheet1 and Sheet2). On Sheet1 the user will enter the data, which will populate column B, and then column C on Sheet2 will hold the values from columns A and B. I need to create a text file from the values in column C of Sheet2 at the click of a button, and then upload (FTP) that file to a server. So Sheet1 will have 2 buttons. Button 1 will save the Excel file and create the text file in the Windows temp directory, e.g. text.xls and text.prop (a text file which has all the values in column C of Sheet2). Button 2 will upload (FTP) the text file (.prop) to a server. Can anyone please send me the steps and VB code to achieve the above tasks? Thanks in advance, Addi
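    A rough VBA sketch of both buttons, under stated assumptions: the sheet name, FTP host, and credentials below are placeholders, and the upload drives the stock Windows ftp.exe with a generated script file:

        Sub ExportColumnC()
            ' Button 1: save the workbook and dump column C of Sheet2 to %TEMP%\text.prop
            Dim ws As Worksheet, cell As Range, f As Integer
            Set ws = ThisWorkbook.Worksheets("Sheet2")
            f = FreeFile
            Open Environ$("TEMP") & "\text.prop" For Output As #f
            For Each cell In ws.Range("C1", ws.Cells(ws.Rows.Count, "C").End(xlUp))
                If Len(cell.Value) > 0 Then Print #f, cell.Value
            Next cell
            Close #f
            ThisWorkbook.Save
        End Sub

        Sub UploadColumnC()
            ' Button 2: write an ftp.exe command script, then run it (-n disables auto-login)
            Dim f As Integer
            f = FreeFile
            Open Environ$("TEMP") & "\upload.ftp" For Output As #f
            Print #f, "open ftp.example.com"          ' placeholder host
            Print #f, "user myuser mypassword"        ' placeholder credentials
            Print #f, "put " & Environ$("TEMP") & "\text.prop"
            Print #f, "bye"
            Close #f
            Shell "ftp -n -s:" & Environ$("TEMP") & "\upload.ftp", vbHide
        End Sub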

    Read the article

  • iPhone file corruption

    - by sfider
    Is it possible (on iPhone/iPod Touch) for a file written like this:

        if (FILE* file = fopen(filename, "wb")) {
            fwrite(buf, buf_size, 1, file);
            fclose(file);
        }

    to get corrupted, e.g. when the app is forced to terminate? From what I know, fwrite should be an atomic operation, so when I write the whole file with one call no corruption should occur. I could not find any information on the net that would say otherwise.
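    For what it's worth, a common defensive pattern here (a sketch, not specific to the iPhone SDK) is to write to a temporary file and then rename() it over the target; rename() is atomic on POSIX, so a forced termination leaves either the old file or the complete new one, never a half-written mix:

        #include <stdio.h>

        static int write_file_atomically(const char *filename, const void *buf, size_t buf_size)
        {
            char tmp[1024];
            snprintf(tmp, sizeof(tmp), "%s.tmp", filename);

            FILE *file = fopen(tmp, "wb");
            if (!file)
                return -1;
            if (fwrite(buf, buf_size, 1, file) != 1) {
                fclose(file);
                remove(tmp);
                return -1;
            }
            fclose(file);
            return rename(tmp, filename);  /* atomic replace on POSIX */
        }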

    Read the article

  • Is O_NONBLOCK being set a property of the file descriptor or underlying file?

    - by Daniel Trebbien
    From what I have been reading on The Open Group website on fcntl, open, read, and write, I get the impression that whether O_NONBLOCK is set on a file descriptor, and hence whether non-blocking I/O is used with the descriptor, should be a property of that file descriptor rather than the underlying file. Being a property of the file descriptor means, for example, that if I duplicate a file descriptor or open another descriptor to the same file, then I can use blocking I/O with one and non-blocking I/O with the other. Experimenting with a FIFO, however, it appears that it is not possible to have a blocking I/O descriptor and a non-blocking I/O descriptor to the FIFO simultaneously (so whether O_NONBLOCK is set is a property of the underlying file [the FIFO]):

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int fds[2];
            if (pipe(fds) == -1) {
                fprintf(stderr, "`pipe` failed.\n");
                return EXIT_FAILURE;
            }

            int fd0_dup = dup(fds[0]);
            if (fd0_dup <= STDERR_FILENO) {
                fprintf(stderr, "Failed to duplicate the read end\n");
                return EXIT_FAILURE;
            }
            if (fds[0] == fd0_dup) {
                fprintf(stderr, "`fds[0]` should not equal `fd0_dup`.\n");
                return EXIT_FAILURE;
            }

            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should not have `O_NONBLOCK` set.\n");
                return EXIT_FAILURE;
            }
            if (fcntl(fd0_dup, F_SETFL, fcntl(fd0_dup, F_GETFL) | O_NONBLOCK) == -1) {
                fprintf(stderr, "Failed to set `O_NONBLOCK` on `fd0_dup`\n");
                return EXIT_FAILURE;
            }
            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should still have `O_NONBLOCK` unset.\n");
                return EXIT_FAILURE; // RETURNS HERE
            }

            char buf[1];
            if (read(fd0_dup, buf, 1) != -1) {
                fprintf(stderr, "Expected `read` on `fd0_dup` to fail immediately\n");
                return EXIT_FAILURE;
            } else if (errno != EAGAIN) {
                fprintf(stderr, "Expected `errno` to be `EAGAIN`\n");
                return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
        }

    This leaves me thinking: is it ever possible to have a non-blocking I/O descriptor and a blocking I/O descriptor to the same file, and if so, does it depend on the type of file (regular file, FIFO, block special file, character special file, socket, etc.)?
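    One detail worth separating out (an illustration, not from the original post): dup() copies the descriptor, but both copies share a single open file description, and O_NONBLOCK is a file status flag stored in that shared description -- so the experiment above tests dup() sharing rather than the FIFO itself. Two independent open() calls create two descriptions, and the flag can then differ. A minimal sketch against a named FIFO (both ends opened with O_NONBLOCK first, since a plain O_RDONLY open of a FIFO would block waiting for a writer):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            if (mkfifo("testfifo", 0600) == -1) {
                perror("mkfifo");
                return 1;
            }
            /* two independent open file descriptions for the same FIFO */
            int a = open("testfifo", O_RDONLY | O_NONBLOCK);
            int b = open("testfifo", O_RDONLY | O_NONBLOCK);
            if (a == -1 || b == -1) {
                perror("open");
                return 1;
            }

            /* clear O_NONBLOCK on b only; a is unaffected */
            fcntl(b, F_SETFL, fcntl(b, F_GETFL) & ~O_NONBLOCK);

            printf("a: %s\n", (fcntl(a, F_GETFL) & O_NONBLOCK) ? "non-blocking" : "blocking");
            printf("b: %s\n", (fcntl(b, F_GETFL) & O_NONBLOCK) ? "non-blocking" : "blocking");

            close(a);
            close(b);
            unlink("testfifo");
            return 0;
        }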

    Read the article

  • File system query

    - by Balaji
    Is there an easy way to query data stored in a file system? We are storing data in the file system (instead of a database), in XML format, and since the data is growing day by day we are finding it difficult to query the content of the files. Can anyone suggest a tool or method to query the data in the existing file system?
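    For what it's worth, a minimal sketch of one lightweight approach in Python: walk the files with glob and query each with ElementTree's XPath subset (the directory layout and tag names below are hypothetical):

        import glob
        import xml.etree.ElementTree as ET

        def query(pattern, xpath):
            """Yield (filename, matching element) across every XML file."""
            for path in glob.glob(pattern):
                tree = ET.parse(path)
                for elem in tree.getroot().iterfind(xpath):
                    yield path, elem

        # e.g. every <name> under <customer>, across all files in data/
        for fname, elem in query("data/*.xml", ".//customer/name"):
            print(fname, elem.text)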

    Read the article

  • Disabling file permissions on NFS share

    - by user41377
    I am trying to set up an NFS share for use by non-technical users, and I really don't want file permissions messing things up. Everyone should be able to read/write anything on this share without even knowing what file permissions are, much less adjusting them. Is there a way I can just make the NFS server write everything as 770 or something? It seems like this should be easy, but the best I can come up with is a cron job that periodically sets the permissions, which is far from ideal. The server is a Netgear ReadyNAS. Ideally I would like to stick to doing stuff from within its web interface, but I can just root it if needed.
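    A sketch of what this usually looks like on a stock Linux NFS server (whether the ReadyNAS web UI exposes these options is an open question; the export path and network below are placeholders). all_squash maps every client to a single anonymous uid/gid, so ownership and permission fights between users stop mattering:

        # /etc/exports
        /shares/media  192.168.1.0/24(rw,sync,all_squash,anonuid=1000,anongid=1000)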

    Read the article

  • Creating .ics file from code sometimes gives "The file <filename>[<index>].ics is not a valid internet Calendar file"

    - by KaJo
    Hi! I am building an ASP.NET site where I use Response.Write to create an .ics file, for which the user gets an open-or-save dialog for the event. When the user chooses Open, Outlook sometimes gives the error "The file [].ics is not a valid internet Calendar file". It is always the same event that is written to the response. The code looks like this:

        this.Response.ContentType = "text/calendar";
        this.Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
        this.Response.AddHeader("Pragma", "no-cache");
        this.Response.Expires = -1;
        this.Response.AddHeader("Content-disposition", string.Format("attachment; filename={0}.ics", filename));
        this.Response.Write("BEGIN:VCALENDAR");
        this.Response.Write("\nVERSION:2.0");
        this.Response.Write("\nMETHOD:PUBLISH");
        this.Response.Write("\nBEGIN:VEVENT");
        this.Response.Write("\nType:Single Meeting");
        this.Response.Write("\nORGANIZER:MAILTO:" + organizer);
        this.Response.Write("\nDTSTART:" + startDate.ToUniversalTime().ToString(DateFormat));
        this.Response.Write("\nDTEND:" + endDate.ToUniversalTime().ToString(DateFormat));
        this.Response.Write("\nLOCATION:" + location);
        this.Response.Write("\nUID:" + Guid.NewGuid().ToString());
        this.Response.Write("\nDTSTAMP:" + DateTime.Now.ToUniversalTime().ToString(DateFormat));
        this.Response.Write("\nSUMMARY:" + summary);
        this.Response.Write("\nDESCRIPTION:" + description);
        this.Response.Write("\nPRIORITY:5");
        this.Response.Write("\nCLASS:PUBLIC");
        this.Response.Write("\nEND:VEVENT");
        this.Response.Write("\nEND:VCALENDAR");
        this.Response.End();

    I usually get the error the first time I try to open the event in Outlook, and then it works the second time. Does anyone know how to solve this problem?
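    One observation, offered as a possible lead rather than a confirmed fix: RFC 5545 requires iCalendar lines to be delimited by CRLF ("\r\n"), while the code above emits bare "\n". A variant of the same write-out with compliant line endings would look like:

        this.Response.Write("BEGIN:VCALENDAR\r\n");
        this.Response.Write("VERSION:2.0\r\n");
        this.Response.Write("METHOD:PUBLISH\r\n");
        // ... every remaining property terminated with "\r\n" in the same way ...
        this.Response.Write("END:VCALENDAR\r\n");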

    Read the article

  • Including a PHP file from another server with PHP

    - by ermac2014
    Hi, I have 2 PHP files, each on a different server. Let's say the first file is called includeThis.php and the second is called main.php. The first file is located at (http://)www.sample.com/includeThis.php and the second at (http://)www.mysite.com/main.php. What I want is to include the first file into my second file. The contents of the first file are like:

        <?php
        $foo = "this is data from file one";
        ?>

    and the second file like:

        <?php
        include "http://www.sample.com/includeThis.php";
        echo $foo;
        ?>

    Is there any way I can do this? Thanks in advance.
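    A caveat and a sketch of one alternative (the data.php file name below is made up): with an HTTP include, www.sample.com executes includeThis.php and returns only its output, so $foo would never be defined on mysite.com -- and remote includes also require allow_url_include to be enabled, which is widely discouraged. A common workaround is to expose the data rather than the code:

        <?php
        // on www.sample.com -- data.php
        echo json_encode(array('foo' => 'this is data from file one'));

        // on www.mysite.com -- main.php
        $data = json_decode(file_get_contents('http://www.sample.com/data.php'), true);
        echo $data['foo'];
        ?>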

    Read the article

  • How to work with files and streams in PHP, case: if we open a file in Class A and pass the open stream to Class B

    - by Rachel
    I have two classes: one is a business logic class {BLO} and the other is a data access class {DAO}, and my BLO class depends on my DAO class. Basically, I am opening a CSV file to write into in my BLO class's constructor, as I create an object of BLO and pass in a file name from the command prompt:

        $this->fin = fopen($file, 'w+') or die('Cannot open file');

    Now inside BLO I have one function, notify, which depends on the DAO class: it calls getCurrentDBSnapshot from the DAO and passes the open stream so that data gets populated into it.

    BLO class constructor:

        public function __construct($file)
        {
            // Open Unica file for parsing.
            $this->fin = fopen($file, 'w+') or die('Cannot open file');
            // Initialize the repository DAO.
            $this->dao = new Dao('REPO');
        }

    BLO class method that calls the DAO's getCurrentDBSnapshot:

        public function notify()
        {
            $data = $this->fin;
            var_dump($data); // resource(9) of type (stream)
            // Basically I am passing in $data, which is resource(9) of type (stream).
            $outputFile = $this->dao->getCurrentDBSnapshot($data);
        }

    DAO function getCurrentDBSnapshot, which gets the current state of a database table:

        public function getCurrentDBSnapshot($data)
        {
            $query = "SELECT * FROM myTable";
            // Prepare and execute the query.
            $stmt = $this->connection->prepare($query);
            $stmt->execute();
            $header = array();
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                if (empty($header)) {
                    // Get the header values from the table (column names)
                    $header = array_keys($row);
                    // Populate the CSV file with header information from myTable
                    fputcsv($data, $header);
                }
                // Export every row to the file
                fputcsv($data, $row);
            }
            var_dump($data); // resource(9) of type (stream)
            return $data;
        }

    So basically I am getting back resource(9) of type (stream) from getCurrentDBSnapshot, and I am storing that stream into $outputFile in the BLO class method notify. Now I want to close the file which I opened for writing, so it should be fclose of $outputFile or $data -- but in both cases it gives me:

        var_dump(fclose($outputFile)) as bool(false)
        var_dump(fclose($data)) as bool(false)

    and

        var_dump($outputFile) as resource(9) of type (Unknown)
        var_dump($data) as resource(9) of type (Unknown)

    My question: if I open a file using fopen in Class A, call a Class B method from a Class A method and pass the open stream (in our case $data), and Class B does some work and returns the open stream, how can I close that stream in Class A's method? Or is it OK to keep that stream open and not call fclose? I would appreciate input, as I am not very sure how this should be implemented.
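    For what it's worth, a short sketch of one ownership convention that avoids this tangle (the CsvWriter class name is hypothetical): the object that fopen()s the stream is the only one that fclose()s it, and it does so in its destructor, so collaborators like a DAO only ever write to the handle:

        <?php
        class CsvWriter
        {
            private $fin;

            public function __construct($file)
            {
                $this->fin = fopen($file, 'w+');
                if ($this->fin === false) {
                    die('Cannot open file');
                }
            }

            public function handle()
            {
                return $this->fin;   // hand the open stream to collaborators (e.g. a Dao)
            }

            public function __destruct()
            {
                if (is_resource($this->fin)) {
                    fclose($this->fin);  // closed exactly once, by the owner
                }
            }
        }

        $writer = new CsvWriter('snapshot.csv');
        fputcsv($writer->handle(), array('col1', 'col2'));  // a callee would do this
        ?>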

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a largish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small: anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
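    A minimal sketch of the hashmap-of-streams idea from the question (how messages are framed after the 6-byte key is an assumption, so only the fan-out is shown). One practical caveat: ~6,000 simultaneously open files can exceed the OS file-descriptor limit, so that limit may need raising:

        import java.io.BufferedOutputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class MessageSplitter {
            // one lazily created, buffered stream per 6-byte type key
            private final Map<String, BufferedOutputStream> outputs =
                    new HashMap<String, BufferedOutputStream>();

            public void write(String type, byte[] payload) throws IOException {
                BufferedOutputStream out = outputs.get(type);
                if (out == null) {
                    out = new BufferedOutputStream(
                            new FileOutputStream(type + ".dat", true), 64 * 1024);
                    outputs.put(type, out);
                }
                out.write(payload);
            }

            public void closeAll() throws IOException {
                for (BufferedOutputStream out : outputs.values()) {
                    out.close();  // close() also flushes the buffer
                }
            }
        }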

    Read the article

  • Saving file to phone instead of SD card

    - by Galip
    Hi guys, in my app I save an XML file to the user's SD card by doing:

        File newxmlfile = new File(Environment.getExternalStorageDirectory() + "/Message.xml");

    But not all users have SD cards in their phones, and therefore my app is likely to crash. How must I change my file-creating method in order to save the file to the phone's internal memory instead of the SD card? Also, how must I change the loading of the file? (Currently: new InputSource(new FileInputStream(Environment.getExternalStorageDirectory() + "/Message.xml")))
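    A sketch of the internal-storage equivalents, assuming the code runs somewhere with access to a Context (e.g. inside an Activity); openFileOutput/openFileInput read and write the app's private internal storage, and the xmlBytes name below is assumed:

        // saving to internal storage instead of the SD card
        FileOutputStream fos = context.openFileOutput("Message.xml", Context.MODE_PRIVATE);
        fos.write(xmlBytes);   // xmlBytes: the serialized XML document
        fos.close();

        // loading it back
        InputSource source = new InputSource(context.openFileInput("Message.xml"));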

    Read the article

  • Intercept Windows open file

    - by HyLian
    Hello, I'm trying to make a small program that could intercept the opening of a file. The purpose is that when a user double-clicks a file in a given folder, Windows would inform the software, which would then process that request and return the data of the file to Windows. Maybe another solution would be to monitor open messages and force Windows to wait while the program prepares the contents of the file. One application of this concept could be to manage decryption of a file in a way that is transparent to the user. In this context, the encrypted file would sit on the disk, and when the user opens it (by double-clicking it or with some application such as Notepad), the background process would intercept that open event, decrypt the file, and give the contents of the file to the asking application. It's a slightly strange concept; it's like the "man in the middle" network concept, but with files instead of network packets. Thanks for reading.

    Read the article

  • DatabaseName.bak File Transfer Problem

    - by Jordon
    I have downloaded a databasename.bak file from my hosting company. When I tried to restore that DB file in SQL Server 2008, it kept giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) This error matches the description at http://www.sqlcoffee.com/Troubleshooting047.htm. When I later tried to restore the file directly on the server, it restored correctly, but when I transferred the same file using the FTP software FileZilla and tried to restore the downloaded copy, it gave the above error. That is, the file is getting corrupted in transit over FTP. Any idea how I can prevent the file from getting corrupted? Note: the file was downloaded using FileZilla with transfer type set to Binary. Thank you.

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to duplicate the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for this type of structure? Perl for flat files and PHP for the db? Additional info: say I want to find all the cities that have the pattern "cis" in their names. Which is better/faster, using regex or string functions? Please recommend a strategy. TIA
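    As an illustration of the flat-file side (sketched in Python for brevity, under assumptions: the cities.txt name is made up, and 50,000 lines easily fit in memory): load the lines once at startup and serve every query from memory; for a literal pattern like "cis", a plain substring test also beats a regex:

        import re

        with open("cities.txt") as f:
            cities = f.read().splitlines()   # loaded once, ~50,000 lines

        # literal pattern: the substring test is the cheap path
        matches = [c for c in cities if "cis" in c]

        # real pattern: compile once, reuse across requests
        pattern = re.compile(r"ci.")
        regex_matches = [c for c in cities if pattern.search(c)]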

    Read the article

  • Show differences between a file and the file in a (compressed) tar archive

    - by Kyss Tao
    Say I have unpacked a gz-compressed tar file and do not remember what changes I made to the unpacked files, or I archived a folder a while ago and want to know what has changed in the files since. I can use tar -zd to get an overview. Then, say it shows me that file foo has changed. How can I see the changes in this file, i.e. the difference between the file on my file system and the (older) file in the archive (ideally in vimdiff, but diff output would be fine too)?
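    One possible invocation (a sketch assuming GNU tar and a shell with process substitution, e.g. bash; the archive name is made up): tar's -O flag extracts the archived copy to stdout, so it can be compared without unpacking to a temp file. Use the member path exactly as tar -tzf lists it:

        # plain diff output
        diff foo <(tar -xzOf archive.tar.gz foo)

        # or interactively in vimdiff
        vimdiff foo <(tar -xzOf archive.tar.gz foo)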

    Read the article

  • cat contents of one file into another file

    - by Attila O.
    I have a large (binary) file that has some corruption near the beginning. Then I have a second, smaller file that I obtained by starting to download the same file again, interrupting it after I had enough bytes to fix the original one. My question is: how do I simply overwrite the beginning of the large file with the contents of the second, smaller file? I could use cat, tail and head, but that would create a copy of the file. There must be a more efficient way. Oh yes, and I'm looking for a Linux command-line solution, if that wasn't obvious. I'm using bash, but I have other shells if that helps.
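    For reference, a sketch of the usual in-place approach (file names made up): dd with conv=notrunc writes the small file over the start of the large one without truncating the rest, so no copy of the large file is ever made:

        # overwrites large.bin from byte 0; add bs=4M for speed if desired
        dd if=small.bin of=large.bin conv=notrunc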

    Read the article

  • Read random lines from huge CSV file in Python

    - by jbssm
    I have this quite big CSV file (15 GB) and I need to read about 1 million random lines from it. As far as I can see -- and implement -- the CSV utility in Python only allows iterating sequentially over the file. It's very memory-consuming to read the whole file into memory in order to do some random choosing, and very time-consuming to go through the whole file and discard some values while choosing others. So, is there any way to choose a random line from the CSV file and read only that line? I tried without success:

        import csv
        with open('linear_e_LAN2A_F_0_435keV.csv') as file:
            reader = csv.reader(file)
            print reader[someRandomInteger]

    A sample of the CSV file:

        331.093,329.735
        251.188,249.994
        374.468,373.782
        295.643,295.159
        83.9058,0
        380.709,116.221
        352.238,351.891
        183.809,182.615
        257.277,201.302
        61.4598,40.7106
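    A sketch of one common workaround, offered with a caveat: seek to a random byte offset, discard the partial line there, and take the next full line. It never loads the file, but each line's probability is weighted by the length of the line before it, so sampling is only approximately uniform:

        import os
        import random

        def random_line(f, size):
            while True:
                f.seek(random.randrange(size))
                f.readline()          # discard the (probably partial) current line
                line = f.readline()
                if line:              # empty means we hit EOF; retry
                    return line

        filename = 'linear_e_LAN2A_F_0_435keV.csv'
        size = os.path.getsize(filename)
        with open(filename, 'rb') as f:
            samples = [random_line(f, size) for _ in range(1000000)]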

    Read the article
