Search Results

Search found 75840 results on 3034 pages for 'file servers'.


  • Which encoding (code page) is used for file names in a ZIP archive under Mac OS X 10.6?

    - by bao
    I have a ZIP library, SharpZipLib, which is intended to work with ZIP archives from C#. It has a parameter, ICSharpCode.SharpZipLib.Zip.ZipConstants.DefaultCodePage, which specifies the encoding of file names in a ZIP archive. I know that Windows and OS X use different encodings to store file names. 1) Which encodings (code pages) are used on each? 2) How can I determine programmatically which encoding was used? When I open a ZIP file packed under Mac OS X in Windows 7, I see files with garbled names (originally Cyrillic) and a folder called __MACOSX, so I can tell the archive was prepared on a Mac box. Is there any other way? What about other UNIX-like systems?


  • Store data in the file system rather than a SQL or Oracle database

    - by nunu
    Hi all, I am working on an Employee Management system and have two tables (for example) in the database, as given below.
    EmployeeMaster (DB table structure): EmployeeID (PK) | EmployeeName | City
    MonthMaster (DB table structure): Month | Year | EmployeeID (FK) | PresentDays | BasicSalary
    Now my question is: I want to store the data in the file system rather than in SQL Server or Oracle. I want file-system storage that supports Insert, Edit and Delete operations while keeping the relations between objects. I am a C# developer; does anybody have thoughts or ideas on how to store data in the file system while preserving the relations between records? Thanks in advance.


  • How would I read a file that has 3 columns, each containing 100 numbers, into arrays?

    - by user320950
    int exam1[100]; // array that can hold 100 numbers for 1st column
    int exam2[100]; // array that can hold 100 numbers for 2nd column
    int exam3[100]; // array that can hold 100 numbers for 3rd column
    void main()
    {
        ifstream infile;
        int num;
        infile.open("example.txt"); // file containing numbers in 3 columns
        if (infile.fail()) // checks to see if file opended
        {
            cout << "error" << endl;
        }
        while (!infile.eof()) // reads file to end of line
        {
            for (i = 0; i < 100; i++); // array numbers less than 100
            {
                while (infile >> [exam]); // while reading get 1st array or element
                ??? // how will i go read the next number
                infile >> num;
            }
        }
        infile.close();
    }
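
    For what it's worth, here is a minimal sketch of one way to read such a file, assuming the three columns are whitespace-separated and example.txt is the file used above; reading all three values of a row in a single expression avoids the nested loops entirely:

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            int exam1[100], exam2[100], exam3[100]; // one array per column
            ifstream infile("example.txt");
            if (infile.fail())
            {
                cout << "error" << endl;
                return 1;
            }
            int rows = 0;
            // each iteration reads one whole row: column 1, column 2, column 3
            while (rows < 100 && infile >> exam1[rows] >> exam2[rows] >> exam3[rows])
            {
                rows++;
            }
            cout << "read " << rows << " rows" << endl;
            return 0;
        }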


  • Is PNG the most economically sound file format to store pictures in?

    - by raoulsson
    I am looking for an economically sound solution for storing pictures long term. I have read that the PNG file format has superior characteristics compared to JPEG, namely in these categories: no patents, no licenses, no royalties; no quality loss, yet compressed. I have a lot of big ESPs from Photoshop that contain tons of metadata, like layers and color profiles, that I don't need to store (those were handy for the designer when he worked with them). I want to convert these images, without that hidden data, to a new target file format. Another side condition to my question is that the target file format has to be displayable in the browser, so I guess my options are limited anyway: GIF, JPEG, PNG. Am I missing something, or is PNG the best fit for my case?


  • Which of FILE* or ifstream has better memory usage?

    - by Viet
    I need to read a fixed number of bytes at a time from files whose sizes are around 50MB. To be more precise, I read a frame from YUV 4:2:0 CIF/QCIF files (~25KB to ~100KB per frame). Not a huge amount, but I don't want the whole file to be in memory. I'm using C++; in such a case, which of FILE* and ifstream has better (less/minimal) memory usage? Please kindly advise. Thanks! EDIT: I read a fixed number of bytes: 25KB or 100KB (depending on QCIF/CIF format). The reading is in binary mode and forward-only. No seeking needed. No writing needed, only reading. EDIT: If identifying the better of the two is hard, which one does not require loading the whole file into memory?
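
    For what it's worth, neither API loads the whole file on its own: both read only into the buffer you pass, plus a small internal stdio/stream buffer (typically a few KB). A minimal sketch of frame-by-frame reading with ifstream, where the file name and frame size are illustrative assumptions:

        #include <fstream>
        #include <vector>
        #include <iostream>

        int main()
        {
            const std::size_t frameSize = 25 * 1024;   // ~25KB per QCIF frame (assumed)
            std::vector<char> frame(frameSize);        // only one frame is held in memory

            std::ifstream in("clip.yuv", std::ios::binary);
            if (!in)
            {
                std::cerr << "cannot open file" << std::endl;
                return 1;
            }
            while (in.read(&frame[0], frame.size()))   // forward-only, fixed-size reads
            {
                // process one frame here
            }
            return 0;
        }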


  • How to run a set of SQL queries from a file, in PHP?

    - by Harish Kurup
    I have a set of SQL queries in a file (i.e. query.sql), and I want to run those queries from the file using PHP. The code that I have written is not working:
        //database config's...
        $file_name = "query.sql";
        $query == file($file_name);
        $array_length = count($query);
        for ($i = 0; $i < $array_length; $i++) {
            $data .= $query[$i];
        }
        echo $data;
        mysql_query($data);
    It echoes the SQL query from the file but throws an error at the mysql_query() function...


  • How to get the size of a file in Visual C++?

    - by karikari
    Below is my code. My problem is that my destination file always ends up with a lot more strings than the originating file. Then, inside the for loop, instead of using i < sizeof more, I realized that I should use i < sizeof file2. Now my problem is: how do I get the size of file2?
        int i = 0;
        FILE *file2 = fopen(LOG_FILE_NAME, "r");
        wfstream file3 (myfile, ios_base::out);
        // char more[1024];
        char more[SIZE-OF-file2];
        for (i = 0; i < SIZE-OF-file2; i++)
        {
            fgets(more, SIZE-OF-file2, file2);
            file3 << more;
        }
        fclose(file2);
        file3.close();
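
    One common way to get the size in bytes of a file opened with FILE* is to seek to the end and read back the offset; a minimal sketch (the file name and error handling are illustrative):

        #include <cstdio>

        long file_size_bytes(FILE *fp)
        {
            long current = ftell(fp);       // remember the current position
            fseek(fp, 0, SEEK_END);         // jump to the end of the file
            long size = ftell(fp);          // offset at the end == size in bytes
            fseek(fp, current, SEEK_SET);   // restore the original position
            return size;
        }

        int main()
        {
            FILE *file2 = fopen("log.txt", "rb");   // binary mode so ftell() reports a byte offset
            if (file2 == NULL)
                return 1;
            printf("%ld bytes\n", file_size_bytes(file2));
            fclose(file2);
            return 0;
        }

    Note, though, that the loop above copies lines, so the byte size is only an upper bound on the number of iterations; looping while fgets() returns non-NULL sidesteps the need to know the size at all.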


  • Which file types are worth compressing (zipping) for remote storage? For which of them is the compressed size close to the original?

    - by user193655
    I am storing documents in SQL Server in varbinary(max) fields; I optionally use FILESTREAM when a user has: (DB_Size + Docs_Size) ~> 0.8 * ExpressEdition_Max_DB_Size. I am currently zipping all the files, but this is done because the document read/write code was developed 10 years ago, when storage was more expensive than it is now. Many files, when zipped, are almost as big as the original (a zipped PDF is about 95% of the original size). And anyway unzipping has some overhead, which doubles when I also need to "check in"/update the file, because then I need to zip it again. So I was thinking of giving users the option to choose whether each file type will be zipped or not, by providing some meaningful default values. From my experience I would impose the following rules: 1) zip by default: txt, bmp, rtf; 2) do not zip by default: jpg, jpeg, Microsoft Office files, OpenOffice files, png, tif, tiff. Could you suggest other file types, chosen among the most common, or comment on the ones I listed here?


  • How can I locally debug file permission issues in Visual Studio?

    - by robertc
    I want to debug an ASP.NET website as it attempts to write a file to a directory. When actually deployed, this file would possibly not be writable by the worker process, so an error would be thrown; this is not a problem, as I just want to catch the error, inform the user and move on. Of course, if I'm debugging on my local machine then I'm an administrator and I have permission to write the file, so I can't check that I've trapped the correct errors, and I can't step through and see where it goes wrong if I haven't. Is there a standard approach to this sort of thing?


  • [C] Read a line from a file without knowing the line length

    - by ryyst
    Hi, I want to read in a file line by line, without knowing the line length beforehand. Here's what I've got so far:
        int ch = getc(file);
        int length = 0;
        char buffer[4095];
        while (ch != '\n' && ch != EOF) {
            ch = getc(file);
            buffer[length] = ch;
            length++;
        }
        printf("Line length: %d characters.", length);
    I can now figure out the line length, but only for lines that are shorter than 4095 characters. Is there a better way to do this (I already used fgets() but got told it wasn't the best way)? --Ry
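
    One common approach, sketched below, is to grow the buffer with realloc as the line gets longer; the starting capacity of 128 bytes is arbitrary. (On POSIX systems, getline() does essentially this for you.)

        #include <stdio.h>
        #include <stdlib.h>

        /* Reads one line of any length from fp; the caller must free the result.
           Returns NULL at end of file (with nothing read) or on allocation failure. */
        char *read_line(FILE *fp)
        {
            size_t capacity = 128;                 /* arbitrary starting size */
            size_t length = 0;
            char *buffer = (char *)malloc(capacity);
            if (buffer == NULL)
                return NULL;

            int ch;
            while ((ch = getc(fp)) != EOF && ch != '\n')
            {
                if (length + 1 >= capacity)        /* keep room for the terminator */
                {
                    capacity *= 2;
                    char *bigger = (char *)realloc(buffer, capacity);
                    if (bigger == NULL) { free(buffer); return NULL; }
                    buffer = bigger;
                }
                buffer[length++] = (char)ch;
            }
            if (length == 0 && ch == EOF) { free(buffer); return NULL; }
            buffer[length] = '\0';
            return buffer;
        }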


  • How to open a text file that's not in the same folder?

    - by nunos
    Since C is not a language I am used to programming in, I don't know how to do this. I have a project folder where I have all the .c and .h files, and a conf folder under which there is a config.txt file to read. How can I open that?
        FILE* fp = fopen("/conf/config.txt");
        if (fp != NULL)
        {
            //do stuff
        }
        else
            printf("couldn't open file\n");
    I keep getting the error message. Why? Thanks.
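
    For what it's worth, two things stand out: a path that starts with "/" is absolute (resolved from the filesystem root, not the project folder), and fopen() also takes a mode argument such as "r". A sketch assuming the program is run from the project folder, so that conf/ is reachable by a relative path:

        #include <stdio.h>

        int main(void)
        {
            /* relative path: resolved against the current working directory */
            FILE *fp = fopen("conf/config.txt", "r");
            if (fp != NULL)
            {
                /* do stuff */
                fclose(fp);
            }
            else
            {
                printf("couldn't open file\n");
            }
            return 0;
        }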


  • Parallel WCF calls to multiple servers

    - by gregmac
    I have a WCF service (the same one) running on multiple servers, and I'd like to call all instances in parallel from a single client. I'm using ChannelFactory and the interface (contract) to call the service. Each service has a local <endpoint> client defined in the .config file. What I'm trying to do is build some kind of generic framework to avoid code duplication. For example, a synchronous call in a single thread looks something like this:
        Dim remoteName As String = "endpointName1"
        Dim svcProxy As ChannelFactory(Of IMyService) = New ChannelFactory(Of IMyService)(remoteName)
        Try
            svcProxy.Open()
            Dim svc As IMyService = svcProxy.CreateChannel()
            nodeResult = svc.TestRemote("foo")
        Finally
            svcProxy.Close()
        End Try
    The part I'm having difficulty with is how to specify and actually invoke the remote method (e.g. "TestRemote") without having to duplicate the above code, and all the thread-related stuff that invokes it, for each method. In the end, I'd like to be able to write code along the lines of (consider this pseudo-code):
        Dim results As Dictionary(Of Node, ExpectedReturnType)
        results = ParallelInvoke(IMyService.SomeMethod, parameter1, parameter2)
    where ParallelInvoke() will take the method as an argument, as well as the parameters (ParamArray or Object() ... whatever), run the request on each remote node, block until they all return an answer or time out, and then return the results in a Dictionary with the node as the key and whatever value it returned as the value. I can then (depending on the method) pick out the single value I need, or aggregate the values from all the servers together, etc. I'm pretty sure I can do this using reflection and InvokeMember(), but that requires passing the method as a string (which can lead to errors, like calling a non-existent method, that can't be caught at compile time), so I'd like to see if there is a cleaner way to do this. Thanks


  • PHP PCRE differences on testing and hosting servers

    - by Gary Pearman
    Hi all, I've got the following regular expression that works fine on my testing server, but just returns an empty string on my hosted server:
        $text = preg_replace('~[^\\pL\d]+~u', $use, $text);
    Now I'm pretty sure this comes down to the hosting server version of PCRE not being compiled with Unicode property support enabled. The differences in the two versions are as follows:
    My server:
        PCRE version 7.8 2008-09-05
        Compiled with UTF-8 support
        Unicode properties support
        Newline sequence is LF
        \R matches all Unicode newlines
        Internal link size = 2
        POSIX malloc threshold = 10
        Default match limit = 10000000
        Default recursion depth limit = 10000000
        Match recursion uses stack
    Hosting server:
        PCRE version 4.5 01-December-2003
        Compiled with UTF-8 support
        Newline character is LF
        Internal link size = 2
        POSIX malloc threshold = 10
        Default match limit = 10000000
        Match recursion uses stack
    Also note that the version on the hosting server (the same version PHP is compiled against) is pretty old. What confuses me though, is that pcretest fails on both servers from the command line with:
        re> ~[^\\pL\d]+~u
        ** Unknown option 'u'
    although this regexp works fine when run from PHP on my server. So, I guess my questions are: does the regular expression fail on the hosting server because of the lack of Unicode properties? Or is there something else that I'm missing? Thanks all, Gaz.


  • Sharing storage between servers

    - by El Yobo
    I have a PHP-based web application which is currently only using one web server but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:
    Simple network storage: NFS, SMB/CIFS
    Clustered filesystems: Lustre, GFS/GFS2, GlusterFS, Hadoop DFS, MogileFS
    What I want is for a file uploaded via one web server to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?


  • Web application architecture and application servers?

    - by seanieb
    Hi, I'm building a web application, and I need to use an architecture that allows me to run it on two servers. The application scrapes information from other sites periodically, and on input from the end user. To do this I'm using PHP + cURL to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if a result is specific to the user, skip storing the data and just serve it to the user. I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB and Python on another server. As you can see, I'm fairly clueless. I'm familiar with using PHP, MySQL and the basics of Python, but bringing this all together using something more complex than a cron job is new to me. How do I go about implementing this? What framework(s) should I use? Is MVC a good architecture for this? (I'm new to MVC, architectures, etc.) Is CakePHP a good solution? If so, will I be able to control and monitor the Python code using it?


  • Get Active Directory Attributes for Users on Legacy Exchange Servers

    - by Jason Hindson
    I would like to create a CSV file of the users on our Exchange 2003 servers, and include some attributes from their AD accounts. In particular, I would like to pull certain AD values for the users with RecipientTypeDetails = LegacyMailbox. I have tried a few different methods for targeting and filtering these users (ldapfilter, filter, objectAttribute, etc.), with little success. The Exchange 2003 PowerPack for PowerGUI was helpful, but permissions issues and using the Exchange_Mailbox class are not challenges I want to overcome. I was finally able to create a working script, but it is very slow; it is on track to take about 4+ hours to complete. I am looking for suggestions for improving the efficiency of my script, or otherwise obtaining this data in a quicker manner. Here is the script:
        $ADproperties = 'City','Company','department','Description','DistinguishedName','DisplayName','FirstName','l','LastName','msExchHomeServerName','NTAccountName','ParentContainer','physicaldeliveryofficename','SamAccountName','useraccountcontrol','UserPrincipalName'
        get-user -ResultSize Unlimited -ignoredefaultscope -RecipientTypeDetails LegacyMailbox |
            foreach {Get-QADUser $_.name -DontUseDefaultIncludedProperties -IncludedProperties $ADproperties} |
            select $ADproperties |
            epcsv C:\UserListBuilder\exchUsers.csv -notype
    Any help you can provide will be greatly appreciated!


  • Hosting images from unsecured servers (travelnow.com)

    - by i.am.not.aids
    Hi, my application needs to serve images hosted on travelnow.com (i.e. this image), but the application only allows images hosted on a secure server (i.e. HTTPS). What are my options? TravelNow's suggestion is as follows; how do I do this?
    "Akamai image servers are not secure. Therefore you are unable to serve any of the image URLs with a secure HTTPS URL. If you need to serve an image with HTTPS, you must temporarily save the image to your own secure server. This is suggested only for images to be saved as you use them or need them temporarily on the secure page. The hotel images file available from the Affiliate Center provides up to 1.5 million URLs at any time for all properties storing images in the Akamai system. It is not recommended or advised to store all files in advance on your own system since properties change and update images frequently. Although we are not responsible for the images each property stores on the Akamai system, YOU will be responsible for any customer issues arising from displaying outdated or saved image files on your own pages."
    Thanks! Adrian


  • Drupal themes: .info file: how do I add more than one CSS file / JS file to my theme?

    - by egarcia
    I'm creating a new Drupal theme. Until now, I only needed to include a single CSS file and a single JS file, so my theme.info file had something like this:
        stylesheets[all][] = css/style.css
        scripts[] = js/script.js
    Now I must include jQuery and jQuery UI in order to use a calendar date picker. These come with 2 new JavaScript files and 1 additional CSS file that I must add to the site. The calendar input form is going to be used on all pages (in a side block), so it is OK for me to load the extra CSS/JavaScript on all pages. I think the easiest thing would be to reference them in the .info file itself. At first I tried to just put them there, separated by spaces:
        stylesheets[all][] = css/style.css css/ui-lightness/jquery-ui-1.8.1.custom.css
        scripts[] = js/jquery-1.4.2.min.js js/jquery-ui-1.8.1.custom.min.js js/reservations.js
    I emptied Drupal's cache and... none of them loaded. I then tried separating each file with a comma, and flushing the cache again. Same result. I've browsed some Drupal pages, but could not find how to add several JavaScript/CSS files to one theme (they always seem to add just 1 of each). So, how do I include several CSS/JavaScript files in the .info file?


  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically have to get the data from a multi-table join into a table on my side. The current query is something like this:
        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...
    db1 = linked server, db2 = my own database. After this one, I am using an insert into + select to add this data to my table, which is located in db2 (usually a few hundred records, with this import running once a minute). My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge tables, with millions of records, and this is slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then, even if the query looks the same, it would run faster. Is that right? This is kinda hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use in order to make this run faster? Thanks


  • PHP: parse $_FILES[] data in a multidimensional array

    - by superUntitled
    Example form here: http://jsfiddle.net/superuntitled/uaTtx/1/
    I have a form that allows for dynamic duplication of the form fields. The form allows for file uploads and text input, so the data is sent in both the $_POST and $_FILES arrays. The initial set of inputs looks like this:
        <input type="text" name="question[1][text]" />
        <input type="file" name="question[1][file]" />
        <input type="text" class="a" name="answer[1][text][]" />
        <input type="file" name="answer[1][file][]" />
    When duplicated, the field names are incremented; they look like this:
        <input type="text" name="question[2][text]" />
        <input type="file" name="question[2][file]" />
        <input type="text" class="a" name="answer[2][text][]" />
        <input type="file" name="answer[2][file][]" />
    To complicate matters, the "answer" form fields can also be duplicated (thus the [] at the end of the 'answer' name array). How can I parse the posted $_FILES array? I have tried something like this:
        foreach ($_FILES['question'] as $p_num) {
            echo $p_num['file']['name'];
            foreach ($_FILES['answer'] as $a_num) {
                echo $a_num['file']['name'];
            }
        }
    but I get an "Undefined index: file..." error. How can I parse out the posted values?


  • Reading and writing to files simultaneously?

    - by vipersnake005
    Suppose I want to store 1,000,000,000 integers and cannot keep them all in memory. I would use a file (which can easily handle so much data). How can I read and write it at the same time? Using fstream file("file.txt", ios::out | ios::in ); doesn't create a file, in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: let the contents of the file be 111111. Then if I run:
        #include <fstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            fstream file("file.txt", ios:in | ios::out);
            char x;
            while (file >> x)
            {
                file << '0';
            }
            return 0;
        }
    Shouldn't the file's contents now be 101010? Read one character and then overwrite the next one with 0? Or, in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file? 1111110? But the contents remain unaltered. Please explain. Thank you.
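
    For what it's worth, a sketch of the read-then-overwrite pattern is below. Two things seem to matter here: ios::in | ios::out will not create a missing file (opening it once for output first, or adding ios::trunc, is one way around that), and when you alternate reads and writes on the same fstream you need an intervening seek (or flush) before switching direction, otherwise the writes may silently not land:

        #include <fstream>
        #include <iostream>

        int main()
        {
            std::fstream file("file.txt", std::ios::in | std::ios::out);
            if (!file)
            {
                std::cerr << "file.txt must already exist when opening with in|out\n";
                return 1;
            }
            char x;
            while (file.get(x))                  // read one character
            {
                file.seekp(file.tellg());        // intervening seek before writing
                file.put('0');                   // overwrite the character after the one read
                file.seekg(file.tellp());        // intervening seek before the next read
            }
            return 0;
        }

    With exactly 111111 as the contents (no trailing newline), this should leave 101010 behind, which is the behaviour the question expects.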


  • Windows 7: Can't see ISO file in C:\

    - by cbp
    I used DVD Shrink to create an ISO file and saved it into C:\. The ISO file is visible with some programs but not with others. The file is not hidden as far as I am aware, but it cannot be seen by Windows Explorer, DVD Decrypter or a bunch of other programs. If I search for the file using Windows 7's Start Menu search tool, I can see the file and I can right-click and select Properties. The Properties window appears OK, but if I try to change tabs on the Properties window, I receive an error message as though the file is not there. DVD Shrink can still open the file OK. I can also find the file using Agent Ransack (a file searching tool), but then I cannot open it. What gives?


  • Optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed; nothing other than SQL Server 2005 on the server.) In one scenario, when accessing it directly (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through a UNC path (\\server\folder\file.xxx), we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at to bring the UNC path closer in performance to the direct-access case? Thanks, Kirk (corrected: it is CIFS not SMB, but generalized the title to 'file share'.) (additional info: this happens during the read from a single file, not across multiple connections. The file is on the local machine, but exposed via the file share, so client and file server are the same Windows 2008 server.)


  • Undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst (6.7 GB) file while there was only 400 MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 pst files (complete scan mode), while "Recover My Files" in sector scan mode found 5 pst's, but 4 of them were from 3 to 28 KB in size, while the 5th one was 1 GB. I managed to successfully recover the 1 GB pst file, which was a 1-year-old copy (the one used after the latest Windows reinstall). Now I'm frustrated and confused. Why was the 1-year-old file successfully recovered if there was only 400 MB left on the primary partition? Where has the 6.7 GB file gone? I did some reading (i.e. here), and it seems that there's almost no chance of retrieving the file I'm looking for. But wait - none of the recovery tools I've used found a zero-sized pst file; moreover, if the file were corrupted due to fragmentation, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance


  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.

