Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • Process keeps creating dump files

    - by Pieter
    We have a Delphi application running on a terminal server that keeps generating dump files. For the same PID, it keeps creating dump files at an interval of around 1 second until the process is killed manually. Another weird thing is the names of the dump files: ±_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_40.dmp ÷_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_42.dmp k_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_39.dmp Ô_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_41.dmp Ž_minidump_default_pid_7916_tid_x6590_2012_6_18_13_48_40.dmp The dump files aren't telling us much, and we would like a suggestion as to where we should start looking.

    Read the article

  • How can I transfer metadata from several flac files to aac (m4a) files?

    - by abckookooman
    Suppose I have two folders, dir1 and dir2, with several files in each of them. All the files in dir1 are named like "ExampleFileName.flac" and all the files in dir2 are named "ExampleFileName.m4a" - basically their names are the same except for the extension. What I need to do is transfer all of the metadata for each of the files somehow - even though their codecs are different. It would be great if I could do this via the command line, but anything is appreciated. Thank you.
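
    One hedged way to do this from the command line, assuming ffmpeg is installed and the tags you care about are the container-level ones, is to remux each m4a with the metadata of the matching flac (the file names and the .tagged suffix below are illustrative):

        for f in dir1/*.flac; do
          base=$(basename "$f" .flac)
          # audio comes from the existing m4a (input 0), tags from the flac (input 1), no re-encoding
          ffmpeg -i "dir2/$base.m4a" -i "$f" -map 0 -map_metadata 1 -c copy "dir2/$base.tagged.m4a"
        done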

    Read the article

  • Rescuing files and commits from "no branch" in git

    - by Xeoncross
    I started working on some files I had in a git submodule under another project. However, since it was a git submodule, it never checked out "master" and instead just checked out the pinned commit, leaving all the files in the folder on "no branch" (a detached HEAD). Now that I've accidentally made some changes to these files, I've realized I was working on "no branch" in a submodule of my project. How do I get those files onto a branch (like master) so I can rescue them?
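
    A hedged sketch of one way out, assuming the accidental changes are still uncommitted in the submodule's working tree (the branch name is a placeholder): creating a branch at the detached HEAD keeps the working tree intact, after which the work can be committed and merged into master.

        cd path/to/submodule
        git checkout -b rescue-work     # new branch at the detached HEAD; uncommitted edits come along
        git add -A
        git commit -m "Rescue changes made while on no branch"
        git checkout master             # switch to (or create a tracking) master branch
        git merge rescue-work           # bring the rescued commits onto master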

    Read the article

  • Does Windows 7 delete files generated during hibernation?

    - by Koffeehaus
    Somebody was using my Windows 7 and she hibernated it instead of shutting down. Later, I booted up Ubuntu and moved about 2GB worth of files from the Ubuntu partition to the Windows partition. After booting up Windows (from hibernation), I couldn't find any of the files. Then I restarted the PC, and the files showed for a second or two and then disappeared. Did Windows delete all the files I put on it while it was hibernating?

    Read the article

  • Robocopy. Delete files from source

    - by kurresmack
    Hey, we copied all our files to a new storage server recently. We didn't want to move them at the time because we weren't sure whether files would get lost. The problem now is that we have files in both places! How can we move only the files that do not exist on the target, and, for those that exist in both places, delete them from the source? It is Windows Server 2008. Thanks
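
    A hedged starting point with robocopy itself (paths are placeholders; a dry run with the /L list-only switch first is advisable): /MOV deletes source files after copying them, and /XO /XN /XC skip files that already exist on the target, so only the missing files get moved. Files present on both sides are left untouched in the source and still need a separate cleanup pass.

        robocopy \\oldserver\share \\newserver\share /E /MOV /XO /XN /XC /R:1 /W:1 /LOG:C:\robocopy-move.log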

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1'000 - 30'000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0 - 300 downloads / file). This is why a single file must be saved on 2 or more servers. My servers are placed in different datacenters (France), with different-sized HDDs (750GB to 4TB). Currently I share the files using PHP and ncftpget / ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.

    Read the article

  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate which has elements that surpass the amount of memory allowed for the process? I am trying to save a Video object with nhibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after nhibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case, I need to save the collection because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it was a part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created: public class Video { public long Id { get; set; } public string Name { get; set; } public Video() { Screenshots = new ArrayList(); } public IList Screenshots { get; set; } } public class Screenshot { public long Id { get; set; } public byte[] Data { get; set; } } And mappings: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" default-access="property"> <class name="Screenshot" lazy="false"> <id name="Id" type="Int64"> <generator class="hilo"/> </id> <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" /> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" > <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true"> <id name="Id" type="Int64" access="property"> <generator class="hilo"/> </id> <property name="Name" /> <list name="Screenshots" cascade="all-delete-orphan" lazy="false"> <key column="VideoId" /> <index column="SortOrder" /> <one-to-many class="Screenshot" /> </list> </class> </hibernate-mapping> When I try to save a Video with 10000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using: using (var session = CreateSession()) { Video video = new Video(); for (int i = 0; i < 10000; i++) { video.Screenshots.Add(new Screenshot() {Data = camera.TakeScreenshot(resolution)}); } session.SaveOrUpdate(video); }

    Read the article

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction that comprises getting lots of data from database A, doing some manipulation on this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes. How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought of is to write a script that commits per group, meaning I've got to track which groups have already been inserted into database B. The only ways I can think of to track which groups have or have not been processed are a log file or a table in database B. A second way I thought could work is to dump all the fields needed for loading the classes for manipulation and insertion into a flat file, read the file to initialize the classes, and insert into database B. This also means I've got to do some logging, but it should narrow things down to the exact row in the flat file if any error occurs. The script will look something like this: use strict; use warnings; use DBI; #connect to database A my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password, { RaiseError => 1, AutoCommit => 0 }); #statement to get data based on group unique key my $sth = $dbh->prepare($my_sql); my @groups; #I have a list of this already open my $fh, '>>', 'my_logfile' or die "can't open logfile $!"; eval { foreach my $g (@groups){ #subroutine to check if group has already been processed, either from log file or from database table next if is_processed($g); $sth->execute($g); my $data = $sth->fetchall_arrayref; #manipulate $data, then use it to load perl classes for insertion into database B #. #. #. print $fh "$g\n"; } }; if ($@){ $dbh->rollback; die "something wrong...rollback"; } So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.
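
    As a hedged illustration of the progress-table variant (the load_progress table and the process_group subroutine are made-up names, and $dbh_b is assumed to be a handle to database B opened with AutoCommit => 0), committing once per group and recording the group key makes a rerun skip finished groups automatically:

        my %done = map { $_ => 1 }
            @{ $dbh_b->selectcol_arrayref('SELECT group_key FROM load_progress') };

        for my $g (@groups) {
            next if $done{$g};
            eval {
                process_group($g);   # fetch from A, manipulate, insert into B
                $dbh_b->do('INSERT INTO load_progress (group_key) VALUES (?)', undef, $g);
                $dbh_b->commit;
                1;
            } or do {
                $dbh_b->rollback;
                die "group $g failed: $@";   # safe to rerun; finished groups are skipped next time
            };
        }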

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then, using the SelectNodes method, I create an XmlNodeList from which I read the data I want. The process is like this - our worker takes the person's ID card (a simple card with a barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file, and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode and the value in the string variable match, the application makes a record that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code, it works perfectly for now, and this is how it looks: On the textBox1_TextChanged event (worker reads the person's ID): foreach(XmlNode node in NodeList){ if(String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(),textBox1.Text)==0) { ControlString = node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(); break; } } textBox2.Focus(); And on the textBox2_TextChanged event (worker reads the document's barcode): if(String.Compare(textBox2.Text,ControlString)==0) { //Create a record and insert it into a SQL database } My question is - how will my application perform with larger XML files (I was told that the XML file might be up to 500,000 records large)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this, reading an entire record and storing it in a string: private void WriteXml(XmlNode record) { tempXML = record.InnerXml; temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine; temp += tempXML + Environment.NewLine; temp += "</" + record.Name + ">"; SmallerXMLDocument += temp + Environment.NewLine; temp = ""; i++; } tempXML, temp and SmallerXMLDocument are all string variables. And then in the button_Click method I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and I try to create one big string value that would hold all the records, like this: foreach(XmlNode node in nodes) { if(String.Compare(node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(),doctype1)==0) { WriteXml(node); } } My idea was to create a string value (in this case called SmallerXMLDocument), and once I have passed through the entire XML file, to simply copy the value of that string into a new file. This works, but only for files that have up to 2000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keep in mind that there could be up to half a million records in an XML file)? Thanks
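
    Before splitting the file, it may be worth trying a streaming reader: a hedged C# sketch (the element and attribute names "record", "ID", "document" and "doctype" are guesses based on the description) that scans the file with XmlReader instead of XmlDocument, so memory use stays flat no matter how many records there are.

        using System.Xml;

        private string FindDocType(string path, string personId)
        {
            using (XmlReader reader = XmlReader.Create(path))
            {
                while (reader.ReadToFollowing("record"))            // assumed name of a record element
                {
                    if (reader.GetAttribute("ID") == personId)
                    {
                        // assumed child element carrying the doctype attribute
                        return reader.ReadToDescendant("document")
                            ? reader.GetAttribute("doctype")
                            : null;
                    }
                }
            }
            return null;
        }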

    Read the article

  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I’m doing this by running var_export() on get_object_vars()). But what I actually want to see is, in most cases, only some of the properties, and at the moment I have to use Ctrl+F and hunt for the variable I want instead of just seeing it right after the page finishes loading. So I’m embedding buttons to show/hide large arrays etc., but I thought: ‘What if the thing I’m building already exists?’ So does it? Update: What would your ideal interface look like? First of all, dumped models fit on the first screen. All the properties can be seen at first glance (there are not many of them, around 10 each, three models total, so it is possible). Small arrays can be shown unrolled too. The size at which an array counts as ‘small’ should be definable. Ideally, the user can see the values of the properties without any clicking, scrolling or typing. There should be some improvements to how the values are represented: say, if an array is empty, show array ‘My_big_array’ is empty, and if a boolean variable starting with is_, has_ or had_ has false as its value, show the variable (take is_available for example) as is_NOT_available in red; if it has true as its value, show is_available in green, without any value shown. The same goes for defined constants. That would be ideal. I want to focus on this kind of switch. Krumo seems useful, but since it always collapses the variable regardless of how large it is, I cannot use it as is; something similar might appear on GitHub soon, though :) Second update starts here: Any programmer who sees is_available = false will know what it means, no need to show more. Bringing in color indication, I forgot about one thing: the ‘switches’, let’s call them that, may be important or not. So right now I have some that show in green or red; this is for something global, like caching, which is shown as Caching is… ON with ‘ON’ written in green (and ‘OFF’ in red when disabled), while the words describing what it is, i.e. ‘Caching is… ’, are written in black. And some are not so important; for example, when I haven’t defined REVEAL_TIES, it shows REVEAL_TIES is… not set with ‘not set’ written in gray, while the words describing what it is stay in black. And if it were set, the whole phrase would be in black, since there is nothing important there: if this small utility for showing some behind-the-scenes things is working, I will see some messages after it; if it isn’t, the site will keep working independently of its state. Dividing switches into important and unimportant ones, with matching colors, should improve readability, especially for users who are not programmers and just enabled debug mode because some guy on Bugzilla told them to; for them it would help to understand what is important and what is not.

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI, i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the originals. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: We need to support 'chunk' manipulation rather than an entire DB at once (e.g. LIMIT in JDBC gives very poor performance). We do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand JDBC alternatives, with respect to advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices to follow depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!
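
    Before replacing JDBC, it may be enough to make it stream: a hedged Java sketch (connection details, query and table are placeholders) that sets a fetch size so the driver returns rows in chunks instead of buffering the whole result set. How well this works is driver-specific; the MySQL driver, for example, needs a fetch size of Integer.MIN_VALUE to stream, while PostgreSQL streams only when auto-commit is off.

        import java.sql.*;

        public class ChunkedScan {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:postgresql://host/bigdb";              // placeholder connection string
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = conn.createStatement()) {
                    conn.setAutoCommit(false);                            // needed for streaming on PostgreSQL
                    stmt.setFetchSize(10_000);                            // ask the driver for 10k rows at a time
                    try (ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table")) {
                        while (rs.next()) {
                            // handle one row at a time; the full table never sits in memory
                        }
                    }
                }
            }
        }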

    Read the article

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history? Here is an example of what our single repository (OurPlatform) currently looks like: /OurPlatform ---- Core ---- Core.Tests ---- Database ---- Database.Tests ---- CMS ---- CMS.Tests ---- Product1.Domain ---- Product1.Stresstester ---- Product1.Web ---- Product1.Web.Tests ---- Product2.Domain ---- Product2.Stresstester ---- Product2.Web ---- Product2.Web.Tests ==== Product1.sln ==== Product2.sln All of those are folders containing VS Projects except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders, and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, If someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests. So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
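
    A hedged sketch of the usual approach: the convert extension with a filemap per new repository (file and repository names below are illustrative). Each filemap keeps one project's directory and renames it to the repository root, so its history comes along; afterwards the new parent repos (Product1, Product2) just need .hgsub entries pointing at the resulting repositories.

        # core.filemap
        include Core
        rename Core .

        hg convert --filemap core.filemap OurPlatform OurPlatform-Core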

    Read the article

  • Unable to browse some pdfs and docs.

    - by JamesEggers
    I have a web site that uses Microsoft Indexing Service to index and query a directory that holds various documents of type pdf, rtf, mht, and doc. The indexing and querying works well (for the most part); however, some files will load while others will not. This is a Windows Server 2003 box running the site using IIS 6. The indexed directory is a sub directory off of the site's root directory (i.e. http://my.domain.com/files/). The file paths are accurate in the URL; however, I can only access some of the files of each file type. The files that I cannot access give a 404 File Not Found. I am able to open all files via windows explorer;however, attempting to open them via a browser over http is hit and miss. Has anyone experienced this issue and know how to resolve it? Anyone have any idea why I could access some files but not others? Does anyone have any recommendations on what to look into to try this (i.e. does owner matter or something like that?)? EDIT: Here is the Request and Response Headers for a bad file: GET /files/file1.pdf HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, / Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Host: my.domain.com HTTP/1.1 404 Not Found Content-Length: 1635 Content-Type: text/html Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Mon, 01 Jun 2009 15:38:54 GMT [typical 404 page markup excluded] Here is the Request/Response headers for the good file: GET /files/file2.pdf HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, / Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Host: my.domain.com HTTP/1.1 200 OK Content-Length: 352464 Content-Type: application/pdf Last-Modified: Tue, 13 Jan 2009 15:27:35 GMT Accept-Ranges: bytes ETag: "74ccc5759375c91:2a47" Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Mon, 01 Jun 2009 15:50:33 GMT

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized into bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/numpy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, one user could search for all items that are blue and shipped between week x and week y, while another user could sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. This seemed inefficient. I'm looking forward to hearing your ideas. Thanks
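
    For the histogram part specifically, one hedged option is to let PostgreSQL do the binning so that only bin counts travel to the PHP layer; a sketch using width_bucket (the items table name is made up, the column names are taken from the description above):

        -- 20 equal-width weight bins for blue items collected in a given date range
        SELECT width_bucket(item_weight, 0, 100, 20) AS bin,
               count(*)                              AS n
        FROM   items
        WHERE  item_color = 'blue'
          AND  date_collected BETWEEN '2009-01-01' AND '2009-03-31'
        GROUP  BY bin
        ORDER  BY bin;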

    Read the article

  • Fastest way to write large STL vector to file using STL

    - by ljubak
    I have a large vector (10^9 elements) of chars, and I was wondering what the fastest way is to write such a vector to a file. So far I've been using the following code: vector<char> vs; // ... Fill vector with data ofstream outfile("nanocube.txt", ios::out | ios::binary); ostream_iterator<char> oi(outfile, '\0'); copy(vs.begin(), vs.end(), oi); With this code it takes approximately two minutes to write all the data to the file. The actual question is: "Can I make it faster using the STL, and how?"
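
    A hedged comparison point: writing the buffer with a single ostream::write call instead of one character at a time through an ostream_iterator avoids per-character overhead and is usually dramatically faster (and it stays within the standard library):

        #include <fstream>
        #include <vector>

        int main() {
            std::vector<char> vs(1000000000, 'x');           // stand-in for the real data
            std::ofstream outfile("nanocube.txt", std::ios::out | std::ios::binary);
            outfile.write(&vs[0], static_cast<std::streamsize>(vs.size()));   // one bulk write
        }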

    Read the article

  • nServiceBus with large XML messages

    - by Sean
    Hello, I have read about true messaging, where instead of sending the payload on the bus, you send an identifier. In our case, we have a lot of legacy apps/services, and those were designed to receive message payloads (XML) that are close to 4MB (close to the MSMQ limit). Is there a way for NServiceBus to handle large payloads and persist messages automatically, or another workaround, so that the publisher/subscriber services don't have to worry about either the payload size or how to dehydrate/rehydrate the payload? Thank you in advance.

    Read the article

  • How to geocode a large number of addresses?

    - by user308569
    I need to geocode, i.e. translate street addresses to latitude/longitude, for ~8,000 street addresses. I am using both the Yahoo and Google geocoding engines at http://www.gpsvisualizer.com/geocoder/, and found that for a large number of addresses those engines (one of them or both) either could not perform the geocoding (i.e. they return latitude=0, longitude=0) or return the wrong coordinates (including cases where Yahoo and Google give different results). What is the best way to handle this problem? Which engine is (usually) more accurate? I would appreciate any thoughts, suggestions, and ideas from people who have previous experience with this kind of task.

    Read the article

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: working on a large volume of voxels, for each voxel I calculate an index i and a value c. After the calculation I need to perform histogram[i] += c. c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capability 1.3, which is what I'm using, I can't even do an atomicAdd() of floats, so how can I accumulate anything reliably? This example by NVIDIA does something somewhat simpler. The histograms are saved in shared memory (which I can't do due to its size) and it only accumulates integers. Can this approach be generalized to my case?
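
    On compute capability 1.3 the usual workaround is to emulate a float atomicAdd with an atomicCAS loop on the value's bit pattern; a hedged sketch of that device function (the surrounding kernel is left out):

        __device__ float atomicAddFloat(float* address, float val)
        {
            int* address_as_int = (int*)address;
            int old = *address_as_int, assumed;
            do {
                assumed = old;
                // reinterpret the bits, add, and only commit if nobody changed the slot meanwhile
                old = atomicCAS(address_as_int, assumed,
                                __float_as_int(val + __int_as_float(assumed)));
            } while (assumed != old);
            return __int_as_float(old);
        }

        // usage inside the kernel: atomicAddFloat(&histogram[i], c);

    As for the NVIDIA sample's per-block shared-memory histograms: 15,000 bins of 4-byte floats is roughly 60 KB per block, which does not fit in the 16 KB of shared memory on that hardware, so accumulating into global memory with a loop like the one above (or splitting the histogram into sub-ranges per pass) is a common fallback.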

    Read the article

  • Large svn external

    - by MPelletier
    I have a project which uses a large library residing in its own repository. We are using Tortoise-SVN, and the server is running an enterprise edition of VisualSVN. The project itself has the "standard" structure: trunk tags branches In each branch, tag, and trunk the library is present, set as an external (svn:externals property). If I get the entire tree, I get the library several times, which is getting ridiculously repetitive. Is there a recommended structure for this? Or perhaps a way not to fetch all externals (because the other externals are much smaller and easier to manipulate)?

    Read the article

  • Large XML files in dataset (outofmemory)

    - by dklein
    Hi folks, I am currently trying to load a fairly large XML file into a DataSet. The XML file is about 700 MB, and every time I try to read the XML it takes plenty of time and after a while throws an "out of memory" exception. DataSet ds = new DataSet(); ds.ReadXml(pathtofile); The main problem is that it is necessary for me to use those DataSets (I use them to import the data from the XML file into a Sybase database (foreach table, foreach row, foreach column)) and that I have no schema file. I have already googled a while, but only found solutions that won't be usable for me.

    Read the article

  • How do I export a large table into 50 smaller csv files of 100,000 records each

    - by Eddie
    I am trying to export one field from a very large table - containing 5,000,000 records, for example - into a csv list - but not all together, rather, 100,000 records into each .csv file created - without duplication. How can I do this, please? I tried SELECT field_name FROM table_name WHERE certain_conditions_are_met INTO OUTFILE /tmp/name_of_export_file_for_first_100000_records.csv LINES TERMINATED BY '\n' LIMIT 0 , 100000 that gives the first 100000 records, but nothing I do has the other 4,900,000 records exported into 49 other files - and how do I specify the other 49 filenames? for example, I tried the following, but the SQL syntax is wrong: SELECT field_name FROM table_name WHERE certain_conditions_are_met INTO OUTFILE /home/user/Eddie/name_of_export_file_for_first_100000_records.csv LINES TERMINATED BY '\n' LIMIT 0 , 100000 INTO OUTFILE /home/user/Eddie/name_of_export_file_for_second_100000_records.csv LINES TERMINATED BY '\n' LIMIT 100001 , 200000 and that did not create the second file... what am I doing wrong, please, and is there a better way to do this? Should the LIMIT 0 , 100000 be put Before the first INTO OUTFILE statement, and then repeat the entire command from SELECT for the second 100,000 records, etc? Thanks for any help. Eddie
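
    A single SELECT can write only one OUTFILE, so the usual hedged workaround is to drive one statement per chunk from a small script. A sketch in bash mirroring the statement from the question (credentials are placeholders, and an ORDER BY on a stable column is added so the chunks do not overlap):

        #!/bin/bash
        for i in $(seq 0 49); do
          offset=$((i * 100000))
          mysql -u user -p'secret' mydb -e "
            SELECT field_name FROM table_name
            WHERE certain_conditions_are_met
            ORDER BY field_name
            LIMIT ${offset}, 100000
            INTO OUTFILE '/tmp/export_part_${i}.csv'
            LINES TERMINATED BY '\n'"
        done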

    Read the article

  • *Client* scalability for large numbers of remote web service calls

    - by Yuriy
    Hey guys, I was wondering if you could share best practices and common mistakes when it comes to making large numbers of time-sensitive web service calls. In my case, I have a SOAP-based and an XML-RPC-based web service to which I'm constantly making calls. I predict that this will soon become an issue as the number of calls per second grows. On a higher level, I was thinking of batching those calls and submitting them to the web services every 100 ms. Could you share what else works? On the lower-level side of things, I use the Apache XML-RPC client and the standard javax.xml.soap.* packages for my client implementations. Are you aware of any client-scalability-related tricks/tips/warnings with these packages? Thanks in advance Yuriy
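
    A hedged sketch of the 100 ms batching idea on the client side (class name and pool size are arbitrary): queue outgoing calls and drain the queue on a fixed schedule onto a bounded worker pool, so a burst of requests never turns into a burst of threads or sockets.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        class CallBatcher {
            private final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>();
            private final ExecutorService workers = Executors.newFixedThreadPool(8);
            private final ScheduledExecutorService timer =
                    Executors.newSingleThreadScheduledExecutor();

            void start() {
                timer.scheduleAtFixedRate(() -> {
                    List<Runnable> batch = new ArrayList<>();
                    pending.drainTo(batch);               // everything queued in the last 100 ms
                    for (Runnable call : batch) {
                        workers.submit(call);             // each Runnable wraps one SOAP or XML-RPC call
                    }
                }, 100, 100, TimeUnit.MILLISECONDS);
            }

            void enqueue(Runnable call) {
                pending.add(call);
            }
        }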

    Read the article

  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now, but have not been able to find a satisfactory solution for the following problem. When a user is uploading a large file from behind a router, sometimes the following happens: the file is uploaded OK, but in the meantime the command channel gets disconnected because of a timeout. Normally this doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and the command channel is closed. Many programs send a NOOP command periodically to keep the command channel alive, even though this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way, like, do I need to process some response? How do I best solve this problem?

    Read the article

  • System.OverflowException - Int32 is too large or small

    - by LonnieBest
    I need a little advice. I've got a Windows service that runs at night. In my development environment it runs without exception, but when running it installed on other machines, I come in in the morning to be welcomed by a System.OverflowException saying that I've set an Int32 to a value that is too large or too small. I've carefully combed the service's C# code, and I have try/catch statements around everything, which should catch any error and write it to a log without completely stopping my service with this overflow exception. But still, it occurs and stops the service. I'd appreciate any conceptual advice on how to pinpoint what's causing an error such as this.
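
    One hedged first step for pinpointing it (the log path is a placeholder): hook AppDomain.UnhandledException when the service starts, so whatever slips past the existing try/catch blocks, including exceptions thrown on timer or worker threads that those blocks never see, gets logged with a full stack trace before the service dies.

        // in OnStart(), before any timers or worker threads are created
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            System.IO.File.AppendAllText(@"C:\Logs\MyService.crash.log",
                DateTime.Now + "  " + e.ExceptionObject + Environment.NewLine);
        };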

    Read the article
