Search Results

Search found 77950 results on 3118 pages for 'large file upload'.


  • Robocopy sync fails: file times differ by 1 sec

    - by csmba
    I am trying to use Robocopy to sync (/IMG) a folder on my PC with a shared network drive. The problem is that the file times differ by 1 sec between the two locations (creation, modified and access), so every time I run robocopy, it syncs the file again. BTW, the problem is the same if I delete the target file and copy it fresh with robocopy: the new file still ends up with timestamps that are 1 sec off. Env details: Source: Win 7 64-bit. Target: WD My Book World Edition NAS, 1 TB, which takes its time from the online NTP server pool.ntp.org (I don't know whether its file system is FAT or not).
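
    A commonly cited workaround for this exact symptom is robocopy's /FFT switch, which compares timestamps with 2-second (FAT-style) granularity, so a 1-second drift between NTFS and a NAS filesystem no longer counts as a difference. A minimal sketch of the invocation (paths are hypothetical, and it assumes the mirror switch /MIR was intended):

        import subprocess

        # /MIR mirrors the source tree; /FFT makes robocopy assume FAT file
        # times (2-second timestamp granularity), which absorbs the 1-second
        # drift between NTFS and many NAS filesystems.
        cmd = ["robocopy", r"C:\Users\me\sync-folder",
               r"\\mybookworld\share\sync-folder", "/MIR", "/FFT"]
        result = subprocess.run(cmd)

        # Robocopy exit codes 0-7 indicate success (files may have been
        # copied); 8 or higher indicates failure.
        print("robocopy exit code:", result.returncode)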

    Read the article

  • Tools for viewing logs of unlimited size

    - by jkff
    It's no secret that application logs can grow well beyond the limits of naive log viewers, and the desired viewer functionality (say, filtering the log on a condition, highlighting particular message types, splitting it into sublogs based on a field value, merging several logs on a time axis, bookmarking, etc.) is beyond the abilities of large-file text viewers. I wonder: whether decent specialized applications exist (I haven't found any), and what functionality one might expect from such an application. (I'm asking because my student is writing such an application, and the functionality above has already been implemented to a reasonably usable extent.)
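
    For what it's worth, a couple of the features listed above (merging logs on a time axis, filtering on a condition) can be done lazily over logs of unlimited size with ordinary sorted-merge machinery. A rough sketch in Python, assuming each line starts with a lexicographically sortable timestamp (file names are hypothetical):

        import heapq

        def lines(path):
            # Generator: yields one line at a time, never the whole file.
            with open(path) as f:
                yield from f

        # heapq.merge lazily interleaves already-sorted streams, so memory
        # use stays constant regardless of log size.
        merged = heapq.merge(lines("app1.log"), lines("app2.log"))

        for line in merged:
            if "ERROR" in line:      # filtering on a condition, also lazily
                print(line, end="")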

    Read the article

  • MySQL Update Statement + File Upload

    - by Jason Sweet
    Greetings! I've been staring at this all day and can't figure out why my update statement fails to update the field 'image_filename':

        $fileName = $_FILES['image_filename'];
        if ($fileName["name"] <> "") {
            $imageFile = $fileName['name'];
            $destination = "../../../../assets/resources/images/" . $fileName['name'];
            // Note: this passes the client-side filename; move_uploaded_file()
            // expects the server-side temporary path, $fileName['tmp_name'].
            move_uploaded_file($fileName['name'], $destination);
        }
        // $imageFile is interpolated into the SQL unescaped; passing it
        // through mysql_real_escape_string() would be safer.
        $updateSQL = sprintf("UPDATE content SET image_filename='$imageFile' WHERE id=%s",
                             GetSQLValueString($_POST['resource_id'], "int"));
        mysql_select_db($database_conn_talent, $conn_talent);
        $Result1 = mysql_query($updateSQL, $conn_talent) or die(mysql_error());

    Can a SQL pro tell me what I'm missing? Much thanks in advance for your feedback!

    Read the article

  • How to manage large numbers of delegates and user callbacks in a C# async HTTP library

    - by Tyler
    I'm coding a .NET library in C# for communicating with XBMC via its JSON-RPC interface over HTTP. I coded and released a preliminary version, but everything was done synchronously. I then recoded the library to be asynchronous for my own purposes, as I was/am building an XBMC remote for WP7. I now want to release the new async library, but want to make sure it's nice and tidy before I do.

    Due to the async nature, a user initiates a request, supplies a callback method that matches my delegate, and then handles the response once it's been received. The problem I have is that within the library I track a RequestState object for the lifetime of the request; it contains the HTTP request/response as well as the user callback etc. as member variables. This would be fine if only one type of object was coming back, but depending on what the user calls, they may be returned a list of songs or a list of movies etc.

    My implementation at the moment uses a single delegate, ResponseDataRecieved, which has a single parameter that is a plain Object. As this has only been used by me, I know which methods return what, and when I handle the response I cast said object to the type I know it really is - List<Song>, List<Movie> etc. A third party shouldn't have to do this, though - the delegate signature should contain the correct type of object. So then I need a delegate for every type of response data that can be returned to the third party. The specific problem is: how do I handle this gracefully internally? Do I have a bunch of different RequestState objects that each have a different member variable for the different delegates? That doesn't "feel" right. I just don't know how to do this gracefully and cleanly.
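
    The usual answer to this shape of problem is to make the request state and the callback generic in the response type, so there is a single RequestState<T> internally while the third party sees a correctly typed delegate (Action<List<Song>>, Action<List<Movie>>, ...). A language-agnostic sketch of the idea, written here in Python with type hints; all names are illustrative:

        from dataclasses import dataclass
        from typing import Callable, Generic, List, TypeVar

        T = TypeVar("T")

        @dataclass
        class RequestState(Generic[T]):
            # One class covers every payload type; the type parameter keeps
            # the callback signature honest, so callers never cast.
            url: str
            callback: Callable[[T], None]

            def complete(self, payload: T) -> None:
                self.callback(payload)

        def on_songs(songs: List[str]) -> None:
            print("got", len(songs), "songs")

        state = RequestState(url="http://xbmc/jsonrpc", callback=on_songs)
        state.complete(["song a", "song b"])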

    Read the article

  • Sending large data: .getJSON or proxy?

    - by numerical25
    Hey guys. I was told that the only trick for sending data to an external server (i.e. cross-domain) is to use getJSON. Well, my problem is that the data I am sending exceeds the getJSON data limit (it's a GET, so everything rides on the query string). I am tracking mouse movements on a screen for analytics. Another option is that I could send a little data at a time, probably every time the mouse moves, but that seems as if it would slow things down. I could set up a proxy server. My question is: which would be better - setting up a proxy server, or just sending bits of information via JavaScript or jQuery? What do the professionals use (Google and other companies that build mash-ups which send a lot of data to cross-domain sites)? I need to know the best practices. Thanks! Also, the data is put into JSON.
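
    If the proxy route wins, it doesn't have to be elaborate: the page POSTs batched events to a same-origin endpoint (no cross-domain restriction, no URL-length limit), and the endpoint forwards them. A minimal sketch of such a forwarder in Python; the analytics URL is hypothetical:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib import request

        ANALYTICS_URL = "http://analytics.example.com/collect"  # hypothetical

        class Proxy(BaseHTTPRequestHandler):
            def do_POST(self):
                # Read the batched JSON the page sent, then forward it.
                body = self.rfile.read(int(self.headers["Content-Length"]))
                request.urlopen(request.Request(
                    ANALYTICS_URL, data=body,
                    headers={"Content-Type": "application/json"}))
                self.send_response(204)   # no body needed in the reply
                self.end_headers()

        HTTPServer(("localhost", 8080), Proxy).serve_forever()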

    Read the article

  • PHP: Fastest possible way to read the contents of a file

    - by SoLoGHoST
    Ok, I'm looking for the fastest possible way to read all of the contents of a file via PHP, given a filepath on the server. Also, these files can be huge, so it's very important that it does a READ ONLY pass over them as fast as possible. Is reading it line by line faster than reading the entire contents at once? Though, I remember reading something about this - that reading the entire contents at once can produce errors for huge files. Is this true?
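
    The trade-off generalizes beyond PHP: reading the whole file in one call is fastest while the file fits comfortably in memory, and the "errors for huge files" are almost always memory-limit errors, which streaming in fixed-size chunks avoids. A sketch of both shapes, in Python for brevity (PHP's file_get_contents vs fread in a loop follow the same pattern):

        def read_whole(path):
            # One slurp: fastest when the file fits in RAM.
            with open(path, "rb") as f:
                return f.read()

        def read_chunks(path, size=64 * 1024):
            # Constant memory: yields fixed-size chunks for huge files.
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(size)
                    if not chunk:
                        return
                    yield chunk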

    Read the article

  • Enumerating large (20-digit) [probable] prime numbers

    - by Paul Baker
    Given A, on the order of 10^20, I'd like to quickly obtain a list of the first few prime numbers greater than A. OK, my needs aren't quite that exact - it's alright if occasionally a composite number ends up on the list. What's the fastest way to enumerate the (probable) primes greater than A? Is there a quicker way than stepping through all of the integers greater than A (other than obvious multiples of, say, 2 and 3) and performing a primality test on each of them? If not, and the only method is to test each integer, what primality test should I be using?
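
    For scale: near 10^20, primes occur on average about every ln(10^20) ≈ 46 integers, so stepping through odd candidates and running a fast probabilistic test on each is entirely practical. A self-contained sketch using Miller-Rabin, which matches the "occasional composite is alright" requirement:

        import random

        def is_probable_prime(n, rounds=20):
            # Miller-Rabin: error probability at most 4**-rounds per call.
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
                if n % p == 0:
                    return n == p
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False
            return True

        def probable_primes_after(a, count=5):
            n = a + 1 if a % 2 == 0 else a + 2   # first odd candidate > a
            found = []
            while len(found) < count:
                if is_probable_prime(n):
                    found.append(n)
                n += 2
            return found

        print(probable_primes_after(10**20))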

    Read the article

  • Documented process for using Facebook Connect on the iPhone to upload photos

    - by Corey Floyd
    After looking, I did come across this post on the Facebook forums: link. They are feeding the Facebook object a UIImage. That seems logical, but where is this documented? The API documentation is generalized to all platforms. Where are the iPhone-specific requirements for arguments and their data types? Thanks.

    Update: I still have not come across any API docs pertaining to Cocoa. I did, however, gather the information I needed by piecing together forum information, Facebook sample code, and some glue. Hopefully they'll issue something a little more concrete over the next few months.

    Read the article

  • Representing a very large array of bits in little memory

    - by user614624
    Hello, I would like to represent a structure containing 250M states (1 bit each) in as little memory as possible (100 kB maximum). The operations on it are set/get. I could not say whether it's dense or sparse; it may vary. The language I want to use is C. I looked at other threads here to find something suitable. A probabilistic structure like a Bloom filter, for example, would not fit because of the possible false answers. Any suggestions, please?
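
    One point worth fixing up front: 250M single-bit states packed flat take about 31 MB (250,000,000 / 8 bytes), so a 100 kB budget cannot hold them losslessly; it forces compression or accepting approximation. For the set/get plumbing itself, the flat packed version looks like this (sketched in Python; the C version is the same shifts and masks over a malloc'd byte buffer):

        class BitArray:
            def __init__(self, n_bits):
                # One byte stores 8 states: ~31 MB for 250M bits.
                self.data = bytearray((n_bits + 7) // 8)

            def set(self, i, value=True):
                if value:
                    self.data[i >> 3] |= 1 << (i & 7)
                else:
                    self.data[i >> 3] &= ~(1 << (i & 7)) & 0xFF

            def get(self, i):
                return (self.data[i >> 3] >> (i & 7)) & 1

        bits = BitArray(250_000_000)
        bits.set(123_456_789)
        print(bits.get(123_456_789))   # 1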

    Read the article

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host, and it indexes all the records in a database table - the index is later used both for the front end of the site and for backend operations as well. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below: First, there is a select query built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to balance the current number of items being indexed and not iterate over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, and then starts from the beginning after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • deleting a large number of rows from a table

    - by Azeem
    We have a requirement to delete rows on the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we are deleting based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking, etc.). I know that in the LUW world we have ALTER TABLE ... NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 on z/OS. Does anyone have any ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
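
    Absent an unlogged-delete option, the standard fallback is to chop the DELETE into bounded batches with a commit between each, so every unit of work keeps log volume and lock time small. A pattern sketch (shown against SQLite through Python's DB-API purely so it runs anywhere; the table and column names are hypothetical, and the same loop applies through any DB2 driver):

        import sqlite3

        conn = sqlite3.connect("example.db")
        conn.execute("CREATE TABLE IF NOT EXISTS events"
                     " (id INTEGER PRIMARY KEY, ts TEXT)")

        CUTOFF, BATCH = "2010-01-01", 10_000
        while True:
            # Delete at most BATCH rows older than the cutoff, then commit,
            # bounding the log and lock footprint of each unit of work.
            cur = conn.execute(
                "DELETE FROM events WHERE id IN "
                "(SELECT id FROM events WHERE ts < ? LIMIT ?)",
                (CUTOFF, BATCH))
            conn.commit()
            if cur.rowcount < BATCH:
                break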

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (adding, removing, or renaming files within the iterated directory) which modify the collection?

    There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example:

        filea.txt
        fileb.txt
        filec.txt

    The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt when yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were able to somehow maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator.

    This is all theoretical, of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: Adjective 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
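
    As an aside for later readers: newer Python versions (3.5+) grew os.scandir, which yields directory entries lazily rather than materializing the whole listing the way os.listdir does; how it behaves under concurrent renames is still left to the OS and filesystem. A sketch of the use case described in the edit, with hypothetical storage paths:

        import os
        import shutil

        def incoming_files(path):
            # Lazily yields regular files; no full in-memory listing.
            with os.scandir(path) as entries:
                for entry in entries:
                    if entry.is_file():
                        yield entry.path

        for path in incoming_files("/var/incoming"):
            # Placeholder processing based on the filename, then move.
            print("processing", os.path.basename(path))
            shutil.move(path, "/var/processed")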

    Read the article

  • Large Table in iFrame crashes IE8

    - by Brian
    I have a page with an iFrame whose source is an ashx page. The handler takes in 3 arguments through the query string and generates a text/html response containing a table. When the table reaches 1700 rows, it crashes the IE8 browser: the browser freezes and returns a null reference error. If I take the HTML that is being rendered and place it inside a DIV on the page, it renders fine in IE8. Any suggestions?

    Read the article

  • Looking for nice Javascript/jQuery code for displaying large tables

    - by misha-moroshko
    I have an HTML table which may contain thousands of rows (the number of columns is not a problem here). I would like to be able to browse this table easily and to do the following:

        - Decide how many rows will be presented
        - Jump to the next/previous X number of rows
        - Scroll the table using the scroll bars to any desired line
        - Be able to customize/extend this Javascript/jQuery code easily

    Has anyone seen something similar? Thank you very much!

    Read the article

  • How to handle a large dataset like SproutCore does

    - by Nik
    Hello all, I really don't have any substantial code to show here; actually, that's kind of why I am writing: I looked at the SproutCore demo, especially the Collection demo, on http://demo.sproutcore.com/sample_controls/, and am amazed by its loading 200,000 records into the page so easily. I tried using Rails to provide 200,000 records in a completely blank HTML page with

        <% @projects.each do |p| %>
          <%= p.title %>
        <% end %>

    and that freezes the browser for seconds on my M1530 laptop with 4 GB RAM, a T7700 CPU and a 256 GB SSD. Yet the SproutCore demo does not freeze and takes less than 3 seconds to load. What do you think is the one technique they are using to enable this? Thanks!

    Read the article

  • Locking DB w/ Large Reads (Ruby-on-Rails/Heroku)

    - by Splashlin
    Currently I have a Web API running on Heroku that is constantly writing information we're collecting from other data sources (currently there's about half a GB of data, and it's growing very quickly). We're looking to add a reporting system on top of the current database that we can use to extract useful information from the DB. The problem is that when we're running reports we're locking the DB, and any other sites communicating with the DB are timing out. Does anyone have any solutions for how to solve this type of issue? Amazon RDS seems to have some interesting stuff with database replication, but I don't know if that will solve my problems. Any advice would be greatly appreciated. Thanks.

    Read the article

  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if it matches a list of terms from my database. I have a fairly huge list of terms - around 6000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database - the majority of which required manual removal. Since then I've lost faith in that idea, and the simpler PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms.

    My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on a page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this:

        var words = [
            { word: 'Something', link: 'http://www.something.com' },
            { word: 'Something Else', link: 'http://www.something.com/else' }
        ];

    When the object has been built, I'd use this kind of code:

        // for each array element
        $.each(words, function() {
            // store it ("this" is gonna become the dom element in the next function)
            var search = this;
            $('.message').each(function() {
                // if it's exactly the same
                if ($(this).text() === search.word) {
                    // do your magic tricks
                    $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
                }
            });
        });

    Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do? One option would possibly be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post, and then the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms. The return call to the JavaScript could then simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most).

    I have no problem with the script taking a few seconds to "load" in the user's browser, as long as it isn't impacting their CPU usage or anything like that. So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance.
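
    The server-side prefilter described in the second-to-last paragraph is cheap to sketch: given the post text and the full term list, return only the terms that actually occur, so the browser loop runs over ~50 candidates instead of 6,000. In Python for brevity (the PHP endpoint would do the same thing; the term data is illustrative):

        def matching_terms(text, terms):
            # Case-insensitive containment scan; O(len(terms) * len(text)),
            # which is fine server-side for one post. An Aho-Corasick
            # automaton would find all 6,000 terms in a single pass if
            # this ever gets slow.
            lowered = text.lower()
            return [t for t in terms if t.lower() in lowered]

        terms = ["Something", "Something Else"]   # loaded from the database
        post = "We saw something else entirely."
        print(matching_terms(post, terms))        # ['Something', 'Something Else']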

    Read the article

  • Ideal way/architecture to deliver large data over Web Services

    - by zengr
    We are trying to design 6 web services, which will serve another client component. The client component requires data from the web services we are implementing. Now, the problem is that we are not implementing a single WS: there is one WS which the client component hits, and this initiates a series of five more WSs which gather data from their respective data stores and finally provide the data back to the original WS, which then delivers the data back to the client component. So, if the requested data becomes huge, this will be a serious problem for our internal communication channel. What do you guys suggest? What can be done to avoid overloading the communication channel between the internal WSs while still delivering the data to the client component?

    Read the article

  • Java: combining the parents of two large inheritance chains

    - by Soylent Green
    I have two parent classes in a huge project, let's say ClassA and ClassB. Each class has many subclasses, which in turn have many subclasses, which in turn have many subclasses, etc. My task is to "marry" these two "families" so that both inherit from a SINGLE parent. I need to essentially make ClassA and ClassB one class (parent) to both of their combined subclasses (children). ClassA and ClassB both currently implement Serializable. I am currently trying to make both inheritance chains inherit from ClassA, and then copy all functions and data members from ClassB into ClassA. This is tedious, and I think a terrible solution. What would be the CORRECT way to solve this problem?

    Read the article
