Search Results

Search found 45804 results on 1833 pages for 'large files'.

Page 50/1833 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Project design / FS layout for large django projects

    - by rcreswick
    What is the best way to lay out a large django project? The tutorials provide simple instructions for setting up apps, models, and views, but there is less information about how apps and projects should be broken down, how much sharing is allowable/necessary between apps in a typical project (obviously that is largely dependent on the project), and how/where general templates should be kept. Does anyone have examples, suggestions, and explanations as to why a certain project layout is better than another? I am particularly interested in the incorporation of large numbers of unit tests (2-5x the size of the actual code base) and string externalization / templates.
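
    One commonly seen layout, sketched below as a starting point rather than a canonical answer (all directory names are illustrative): project-wide templates live at the top, while each app keeps its own templates and its own tests package, so large test suites stay local to the code they cover.

        myproject/
            settings.py
            urls.py
            templates/              # project-wide base templates
            apps/
                blog/
                    models.py
                    views.py
                    tests/          # large suites split into modules here
                    templates/blog/ # app-local templates, namespaced by app
                accounts/
                    ...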

    Read the article

  • Finding Errant Output to System.out in Large Java Program

    - by SvrGuy
    Hi, we have a large Java code base [~1M lines]. Buried (somewhere) in the code base is some old debug output to System.out that we want to remove (it's cluttering things up). The problem is: our code base is so large that we can't easily find where the output is coming from. What we want is a way to see where System.out.println is getting called from (like a stack trace from an exception or some such). It's not suited to debugging -- the errant output is coming from some errant thread somewhere, etc. Any ideas on how to track the source of this errant output down? PS: 99.99% of calls to System.out are legit, and we have thousands of them, so simply searching the code base for System.out calls is not a solution!
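
    One commonly suggested approach (a minimal sketch, not from the original post): swap System.out for a PrintStream subclass that prints the call site before delegating, then grep the log for the errant message. Only println(String) is intercepted here; the other overloads would need the same treatment.

        import java.io.PrintStream;

        public class TracingStdout {
            public static void install() {
                final PrintStream real = System.out;
                System.setOut(new PrintStream(real, true) {
                    @Override
                    public void println(String s) {
                        // dump the call site of every println, then pass it through;
                        // writing the trace to 'real' directly avoids recursion
                        new Throwable("System.out.println called from:").printStackTrace(real);
                        super.println(s);
                    }
                });
            }
        }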

    Read the article

  • How do I define a template class and divide it into multiple files?

    - by hkBattousai
    I have written a simple template class for test purposes. It compiles without any errors, but when I try to use it in main(), it gives some linker errors.

    main.cpp:

        #include <iostream>
        #include "MyNumber.h"

        int wmain(int argc, wchar_t * argv[])
        {
            MyNumber<float> num;
            num.SetValue(3.14);
            std::cout << "My number is " << num.GetValue() << "." << std::endl;
            system("pause");
            return 0;
        }

    MyNumber.h:

        #pragma once

        template <class T>
        class MyNumber
        {
        public:
            MyNumber();
            ~MyNumber();
            void SetValue(T val);
            T GetValue();
        private:
            T m_Number;
        };

    MyNumber.cpp:

        #include "MyNumber.h"

        template <class T>
        MyNumber<T>::MyNumber()
        {
            m_Number = static_cast<T>(0);
        }

        template <class T>
        MyNumber<T>::~MyNumber()
        {
        }

        template <class T>
        void MyNumber<T>::SetValue(T val)
        {
            m_Number = val;
        }

        template <class T>
        T MyNumber<T>::GetValue()
        {
            return m_Number;
        }

    When I build this code, I get the following linker errors:

        Error 7 Console Demo C:\Development\IDE\Visual Studio 2010\SAVE\Grand Solution\X64\Debug\Console Demo.exe 1 error LNK1120: 4 unresolved externals
        Error 3 Console Demo C:\Development\IDE\Visual Studio 2010\SAVE\Grand Solution\Console Demo\main.obj error LNK2019: unresolved external symbol "public: __cdecl MyNumber<float>::~MyNumber(void)" (??1?$MyNumber@M@@QEAA@XZ) referenced in function wmain
        Error 6 Console Demo C:\Development\IDE\Visual Studio 2010\SAVE\Grand Solution\Console Demo\main.obj error LNK2019: unresolved external symbol "public: __cdecl MyNumber<float>::MyNumber(void)" (??0?$MyNumber@M@@QEAA@XZ) referenced in function wmain
        Error 4 Console Demo C:\Development\IDE\Visual Studio 2010\SAVE\Grand Solution\Console Demo\main.obj error LNK2019: unresolved external symbol "public: float __cdecl MyNumber<float>::GetValue(void)" (?GetValue@?$MyNumber@M@@QEAAMXZ) referenced in function wmain
        Error 5 Console Demo C:\Development\IDE\Visual Studio 2010\SAVE\Grand Solution\Console Demo\main.obj error LNK2019: unresolved external symbol "public: void __cdecl MyNumber<float>::SetValue(float)" (?SetValue@?$MyNumber@M@@QEAAXM@Z) referenced in function wmain

    But if I leave main() empty, I don't get any linker errors. What is wrong with my template class? What am I doing wrong?
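
    For reference, the usual diagnosis: template member definitions must be visible to every translation unit that instantiates them, and main.cpp only sees MyNumber.h, so nothing forces MyNumber.cpp to emit code for MyNumber<float>. Two standard fixes, sketched (the second assumes float is the only type ever needed):

        // Fix 1: move all the member definitions from MyNumber.cpp into
        // MyNumber.h, so every includer can instantiate them.

        // Fix 2: keep MyNumber.cpp, but explicitly instantiate the template
        // at the end of that file so its symbols land in the object file:
        template class MyNumber<float>;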

    Read the article

  • Windows Server - share files without access for administrator

    - by Pawel
    We have a MS Windows Server 2008 R2 based server that is administrated by our IT department. We would like to achieve two things simultaneously:

    1. A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible to IT department employees.
    2. IT department employees still maintain the rights to administrate the server, including installing new software and services.

    We already checked some solutions:

    - Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
    - Enabling EFS. Unfortunately, even if you do not allow IT to access files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know, you have to manually add permissions for all users but the owner for each new file - very inconvenient.
    - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
    - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encryption container on the server (and then IT has access to its contents) or mount it locally, but then only one user can mount it for writing.
    - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages though - you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know that one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • Tool to diagonalize large matrices

    - by Xodarap
    I want to compute a diffusion kernel, which involves taking exp(b*A) where A is a large matrix. In order to play with values of b, I'd like to diagonalize A (so that exp(A) runs quickly). My matrix is about 25k x 25k, but is very sparse - only about 60k values are non-zero. Matlab's "eigs" function runs out of memory, as does octave's "eig" and R's "eigen." Is there a tool to find the decomposition of large, sparse matrices? Dunno if this is relevant, but A is an adjacency matrix, so it's symmetric, and it is full rank.
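
    One route worth mentioning alongside Matlab/Octave/R: SciPy's sparse symmetric eigensolver computes the top-k eigenpairs without ever densifying the matrix, which yields a low-rank approximation of exp(b*A) rather than a full diagonalization. An illustrative sketch, with a random toy matrix standing in for the real data:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # toy stand-in for the 25k x 25k sparse symmetric adjacency matrix
        n = 2000
        r = sp.random(n, n, density=0.001, format='csr')
        A = r + r.T                              # symmetrize

        vals, vecs = eigsh(A, k=50, which='LA')  # top-50 eigenpairs, sparse-aware

        def diffusion_kernel(b):
            # exp(b*A) restricted to the computed eigenspace (low-rank approximation)
            return (vecs * np.exp(b * vals)) @ vecs.T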

    Read the article

  • Java regex replace multiple file paths in a large String

    - by Joe Goble
    So a regex pro I am not, and I'm looking for a good way to do this. I have a large string which contains a variable number of <img> tags. I need to change the path on all of these images to images/. The large string also contains other stuff, not just these img tags.

        <img src='http://server.com/stuff1/img1.jpg' />
        <img src='http://server.com/stuff2/img2.png' />

    I could replace the server name with replaceAll(); it's the variable path in the middle that I'm clueless about how to include. It doesn't necessarily need to be a regex, but looping through the entire string just seems wasteful.
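
    A minimal sketch of the replaceAll() route, assuming the src values are single-quoted as in the samples above (the pattern and class name are illustrative):

        import java.util.regex.Pattern;

        public class ImgPathRewriter {
            // group 1: everything up to and including src=' ; group 2: the bare
            // file name after the last slash ; group 3: the closing quote
            private static final Pattern IMG_SRC =
                    Pattern.compile("(<img[^>]*src=')[^']*/([^'/]+)(')");

            public static String rewrite(String html) {
                return IMG_SRC.matcher(html).replaceAll("$1images/$2$3");
            }
            // rewrite("<img src='http://server.com/stuff1/img1.jpg' />")
            //   -> "<img src='images/img1.jpg' />"
        }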

    Read the article

  • Looking for easy way to analyze var_dump (PHP) on large objects

    - by sdek
    I know (PHP's) var_dump is supposed to be "human readable" and all, but analyzing large objects with it is just a pain in the neck. I am struggling to make sense of a few of the large objects that are being passed around in a script that we are running. (I know that using xdebug with an IDE is a good idea, but I have not been able to get xdebug to run on this project for some reason - several days lost, ugh.) Any ideas on how I can easily digest the contents of a really big var_dump? Any ideas are welcome... although I am hoping that there is something similar to Thomas Frank's JSON tool (where you just put some code in and it gives a nice graphical representation).
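
    One low-tech option in the spirit of the JSON-tool idea above (a sketch; the variable name is illustrative, json_encode only sees an object's public properties, and JSON_PRETTY_PRINT needs PHP 5.4+):

        <?php
        // dump to a file and open it in any JSON viewer/formatter
        file_put_contents('/tmp/dump.json', json_encode($bigObject, JSON_PRETTY_PRINT));

        // or keep it in the browser, but readable and safely escaped
        echo '<pre>' . htmlspecialchars(print_r($bigObject, true)) . '</pre>';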

    Read the article

  • Javascript: Passing large objects or strings between functions considered a bad practice

    - by Mr. Smee
    Is it considered a bad practice to pass around a large string or object (let's say from an ajax response) between functions? Would it be beneficial in any way to save the response in a variable and keep reusing that variable? So in the code it would be something like this:

        var response;
        $.post(url, function(resp){
            response = resp;
        })

        function doSomething() {
            // do something with the response here
        }

    vs.

        $.post(url, function(resp){
            doSomething(resp);
        })

        function doSomething(resp) {
            // do something with the resp here
        }

    Assume resp is a large object or string and it can be passed around between multiple functions.
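
    For what it's worth, JavaScript passes objects by sharing a reference, so handing resp from function to function copies a pointer, not the payload; a quick illustrative check:

        function tag(resp) { resp.seen = true; }          // mutates the caller's object

        var big = { payload: new Array(1e6).join('x') };  // ~1MB string
        tag(big);
        console.log(big.seen);   // true -- same object, nothing was copied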

    Read the article

  • logrotate: neither rotate nor compress empty files

    - by Andrew Tobey
    I have just set up an (r)syslog server to receive the logs of various clients, which works fine. Only logrotate is still not behaving as intended. I want logrotate to create a new logfile for each day, but only to keep and compress non-empty files. My logrotate config currently looks like this:

        # sample configuration for logrotate being a remote server for multiple clients
        /var/log/syslog {
            rotate 3
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
            postrotate
                reload rsyslog >/dev/null 2>&1 || true
            endscript
        }

        # local i.e. the system's very own logs: keep logs for a whole month
        /var/log/kern.log
        /var/log/kernel-info
        /var/log/auth.log
        /var/log/auth-info
        /var/log/cron.log
        /var/log/cron-info
        /var/log/daemon.log
        /var/log/daemon-info
        /var/log/mail.log
        /var/log/rsyslog
        /var/log/rsyslog-info
        {
            rotate 31
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
            sharedscripts
            postrotate
                reload rsyslog >/dev/null 2>&1 || true
            endscript
        }

        # received i.e. logs from the clients
        /var/log/path-to-logs/*/* {
            rotate 31
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
        }

    What I end up with are some sort of "summarized" files such as filename-datestampDay-Day and corresponding .gz files. What I do have are empty files, which are eventually zipped. So is the notifempty directive in fact responsible for these DayX-DayY files, for days on which really nothing happened? What would be an efficient way to drop both the empty log files and their .gz files, so that I eventually only keep logs/compressed files that truly contain data?
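
    For the existing clutter, a hedged one-off cleanup sketch (paths are illustrative; the commands only -print so you can inspect the matches before adding -delete, and the 40-byte cutoff is a rough heuristic for a gzipped empty file):

        find /var/log/path-to-logs -type f -empty -print                     # empty rotated logs
        find /var/log/path-to-logs -type f -name '*.gz' -size -40c -print   # ~empty archives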

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow:

    1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
    2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
    3. Take the files, open FileZilla (FTP client) and upload the files to a remote server.

    I was wondering if there was a way in which I could omit step 3 from my workflow. Basically I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64, Notepad++ text editor; files edited are PHP, CSS, JS, HTML, etc. The server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.
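
    One way to automate step 3, sketched under assumptions: TortoiseSVN's client-side post-commit hook (Settings > Hook Scripts) runs a script after each commit, and WinSCP is installed for scripted SFTP. Host, key, and paths below are illustrative, and WinSCP scripting normally also wants a -hostkey fingerprint on the open command.

        :: post-commit-hook.bat -- mirror the working copy to the server over SFTP
        @echo off
        "C:\Program Files (x86)\WinSCP\WinSCP.com" /command ^
          "open sftp://user@example-server/ -privatekey=C:\keys\id.ppk" ^
          "synchronize remote C:\Users\Me\web /var/www/site" ^
          "exit"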

    Read the article

  • Downloading large files with AFNetworking

    - by goodfella
    I'm trying to implement downloading of a large file and showing the user the current progress, but the block in -[AFURLConnectionOperation setDownloadProgressBlock:] returns incorrect bytesRead and totalBytesRead values (they are smaller than they should be). For example: if I have a 90MB file, when it downloads completely the last block invocation in setDownloadProgressBlock: gives me a totalBytesRead value of about 30MB. On the other hand, if the file is 2MB, the last block invocation gives the correct totalBytesRead value of 2MB. AFNetworking is updated to the latest version from GitHub. If AFNetworking can't do this correctly, what solution can I use? Edit: I've determined that even if the file is not downloaded completely (and this happens every time with a relatively big file), AFNetworking calls the success block in -[AFHTTPRequestOperation setCompletionBlockWithSuccess:failure:]. I asked a similar question here about this situation but didn't get any answers. I can check the downloaded and real file sizes in code, but AFNetworking has no API for continuing a partial download.

    Read the article

  • large test data for knapsack problem

    - by user347918
    I am a research student searching for large test data for the knapsack problem, to test my algorithm, but I couldn't find large data sets. I need data with 1000 items; the capacity doesn't matter. The point is, the more items, the better for my algorithm. Is there any huge data set available on the internet? Does anybody know? Please, guys, I need it urgently.
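
    Two hedged pointers: David Pisinger's published knapsack benchmark instances are a commonly cited source of large test sets; failing that, a minimal generator for a random uncorrelated 1000-item instance (output format and constants below are illustrative, not a standard):

        import java.util.Random;

        public class KnapsackGen {
            public static void main(String[] args) {
                int n = 1000, max = 10000;
                Random rnd = new Random(42);          // fixed seed -> reproducible instance
                long totalWeight = 0;
                StringBuilder out = new StringBuilder(n + "\n");
                for (int i = 0; i < n; i++) {
                    int profit = 1 + rnd.nextInt(max);
                    int weight = 1 + rnd.nextInt(max);
                    totalWeight += weight;
                    out.append(profit).append(' ').append(weight).append('\n');
                }
                out.append(totalWeight / 2);          // capacity: half the total weight
                System.out.print(out);
            }
        }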

    Read the article

  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. So, for example, the user selects a large list of list items (say 2,000 or significantly more), each of which has a numeric value as its key; then they save that selection, go off to another page, do something else, come back to the original page, and need their selections loaded into that page. What is the quickest and most efficient way of storing and reusing that data? In a text file saved with the session id? In a temp db table? In the session data itself (db sessions, so size isn't a limit), using a serialised string or using gzcompress or gzencode? Any advice or insight would be great! Thank you!!!!
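
    If the keys really are plain integers, one compact option is to store the packed, compressed key list rather than a serialized PHP array. A sketch with illustrative names (pack() with argument unpacking needs PHP 5.6+):

        <?php
        session_start();

        // save: thousands of integer keys -> packed 32-bit big-endian ints, compressed
        $keys = array_map('intval', $_POST['selected']);
        $_SESSION['selection'] = gzcompress(pack('N*', ...$keys));

        // restore on the original page (unpack() returns a 1-indexed array)
        $keys = array_values(unpack('N*', gzuncompress($_SESSION['selection'])));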

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned about finding files whose names match a search term, not ALL files containing the search term.

    1. Is there a way to search just for files? When I use the search it seems to be searching within files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the javascript files. But if I enter "myjavascript.js" in the search, it also returns all the html files which reference the javascript file. This is both slow and makes it difficult to actually find the file itself.

    2. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the search box it returns instances as if I were using a wildcard, file1*.txt.

    I miss XP!!!!
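
    For what it's worth, the Windows 7 search box accepts Advanced Query Syntax, which covers both points; two hedged examples, worth verifying on your own build:

        name:myjavascript.js       searches file names only, not file contents
        name:="file1.txt"          exact name match, no implicit wildcards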

    Read the article

  • setting syntax on in vim with a large C file makes completion very slow

    - by skeept
    When I have syntax highlighting on in a large C file (about 8,000 lines), completion with ctrl-p and ctrl-n is very slow (more than 20 seconds). When I turn syntax off, completion takes less than a second. Any ideas on how to solve this? Thanks! EDIT: I figured out a minimal way of reproducing this behaviour: with an empty .vimrc and .vim folder, the only changed settings are

        :syntax on
        :set foldmethod=syntax

    and a large C file to edit; completion (and even general editing) becomes very, very slow.
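
    A commonly shared .vimrc workaround for exactly this combination (a sketch; it keeps syntax-based folds but stops re-evaluating them on every change, which is usually what crawls):

        " recompute syntax folds only when leaving insert mode
        augroup FastSyntaxFolds
          autocmd!
          autocmd InsertEnter * setlocal foldmethod=manual
          autocmd InsertLeave * setlocal foldmethod=syntax
        augroup END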

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guildlines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        #Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        #Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when create new tables when
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        #performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actual executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if your
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to disk
        # based table This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
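
    Not a drop-in answer, but a hedged starting point for a MyISAM-only dedicated 16G box; the values below are illustrative and worth changing one at a time while watching the status counters. A large query cache in particular is a classic CPU-burner on write-heavy loads, since every write invalidates cache entries for its table.

        key_buffer_size = 4096M     # main MyISAM index cache; compare Key_reads
                                    # against Key_read_requests to size it
        query_cache_type = 0        # try disabling the query cache entirely and
        query_cache_size = 0        # measure -- invalidation can dominate CPU
        tmp_table_size = 256M       # raise together with max_heap_table_size to
        max_heap_table_size = 256M  # keep big GROUP BY results off disk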

    Read the article

  • Javascript large number array compression

    - by gatapia
    Hi all, I've got a javascript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, the database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:

    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out)
    - Be relatively fast; I'm not expecting the same performance as I have now, but I also don't want gzip compression either

    Any ideas would be greatly appreciated. Thanks, Guido Tapia
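
    One caveat before implementing: Number.prototype.toString and parseInt cap their radix at 36, so base 62 needs a small hand-rolled codec. A sketch (alphabet and function names are illustrative):

        var ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';

        function encode62(n) {
            var s = '';
            do {
                s = ALPHABET.charAt(n % 62) + s;
                n = Math.floor(n / 62);
            } while (n > 0);
            return s;
        }

        function decode62(s) {
            var n = 0;
            for (var i = 0; i < s.length; i++) {
                n = n * 62 + ALPHABET.indexOf(s.charAt(i));
            }
            return n;
        }

        // encode62(123456) === 'W7E'; decode62('W7E') === 123456
        // arrays round-trip as arr.map(encode62).join(',') and
        // str.split(',').map(decode62)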

    Read the article

  • Should '#include' and 'using' statements be repeated in both header and implementation files (C++)?

    - by Dr. Monkey
    I'm fairly new to C++, but my understanding is that a #include statement will essentially just dump the contents of the #included file into the location of that statement. This means that if I have a number of #include and using statements in my header file, my implementation file can just #include the header file, and the compiler won't mind if I don't repeat the other statements. What about people, though? My main concern is that if I don't repeat the #include, using, and also typedef (now that I think of it) statements, it takes that information away from the file in which it's used, which could lead to confusion. I am just working on small projects at the moment where it won't really cause any issues, but I can imagine that in larger projects with more people working on them it could become a significant issue. An example follows:

        //Unit.h
        #include <string>
        #include <ostream>
        #include "StringSet.h"

        using std::string;
        using std::ostream;

        class Unit {
        public:
            //public members
        private:
            //private members
            //unrelated side-question: should private members
            //even be included in the header file?
        };

        //Unit.cpp
        #include "Unit.h"

        //The following are all redundant from a compiler perspective:
        #include <string>
        #include <ostream>
        #include "StringSet.h"

        using std::string;
        using std::ostream;

        //implementation goes here
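
    A hedged variant of Unit.h for comparison (member names are illustrative): the usual convention is to keep using-declarations out of headers, since they leak into every file that includes the header, and to qualify names instead. As for the side question, private members do belong in the class definition, because the compiler needs them to know the object's size.

        //Unit.h -- no using-declarations; includers keep a clean namespace
        #pragma once
        #include <iosfwd>    // lightweight forward declarations for std::ostream
        #include <string>
        #include "StringSet.h"

        class Unit {
        public:
            std::string Name() const;
            friend std::ostream& operator<<(std::ostream& os, const Unit& u);
        private:
            std::string m_Name;  // needed here: the compiler must see the full layout
        };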

    Read the article

  • Download Large Files using java

    - by angelina
    I am building an application in which I want to download large files to a handset (mobile), but if the size of the file is large I am getting a SocketException: broken pipe. The download loop, cleaned up below (the original looped while bytesRead == buffer.length, which both ends early on a short read and writes a -1 length at end of stream):

        InputStream inputStream = new FileInputStream(path);
        try {
            OutputStream out = resp.getOutputStream();
            byte[] buffer = new byte[1024];
            int bytesRead;
            // read() may return fewer bytes than requested; loop until EOF (-1)
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                out.write(buffer, 0, bytesRead);
            }
            out.flush();
        } finally {
            inputStream.close();
        }

    (A broken pipe during large transfers usually means the client side closed the connection mid-download, e.g. a handset timeout, so the fixed loop alone may not make the exception disappear.)

    Read the article

  • Streaming large result sets with MySQL

    - by configurator
    I'm developing a Spring application that uses large MySQL tables. When loading large tables, I get an OutOfMemoryError, since the driver tries to load the entire table into application memory. I tried using

        statement.setFetchSize(Integer.MIN_VALUE);

    but then every ResultSet I open hangs on close(). Looking online, I found that this happens because it tries loading any unread rows before closing the ResultSet, but that is not the case, since I do this:

        ResultSet existingRecords = getTableData(tablename);
        try {
            while (existingRecords.next()) {
                // ...
            }
        } finally {
            existingRecords.close(); // this line hangs, and there was no exception in the try clause
        }

    The hangs happen for small tables (3 rows) as well, and if I don't close the RecordSet (which happened in one method) then connection.close() hangs.
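
    For reference, MySQL Connector/J's documented streaming recipe is sketched below. The documented restriction is that the connection can run nothing else until the streaming ResultSet is fully read or closed, and close() itself drains any remaining rows, which is consistent with the hang described above if another statement shares the connection.

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public final class StreamingScan {
            public static void scan(Connection conn, String sql) throws SQLException {
                Statement st = conn.createStatement(
                        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                st.setFetchSize(Integer.MIN_VALUE);  // Connector/J: stream row by row
                ResultSet rs = st.executeQuery(sql);
                try {
                    while (rs.next()) {
                        // process one row at a time; memory use stays flat
                    }
                } finally {
                    rs.close();   // drains unread rows, then frees the connection
                    st.close();
                }
            }
        }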

    Read the article

  • Browser, upload large file

    - by Mike
    I'm looking for a way to allow a user to upload a large file (~1GB) to my unix server using a web page and browser. There are a lot of examples that illustrate how to do this with a traditional post request; however, this doesn't seem like a good idea when the file is this large. I'm looking for recommendations on the best approach. Bonus points if the method includes a way of providing progress information to the user. For now security is not a major concern, as most users who will be using the service can be trusted. We can also assume that the connection between client and host will not be interrupted (or if it is, they have to start over).
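
    On current browsers the usual answer is XMLHttpRequest level 2, which streams a File object and reports upload progress natively; a sketch (the /upload endpoint is an assumption, and the server side is not shown):

        function uploadFile(file, onProgress) {
          var xhr = new XMLHttpRequest();
          xhr.open('POST', '/upload?name=' + encodeURIComponent(, true);
          xhr.upload.onprogress = function (e) {
            if (e.lengthComputable) onProgress(Math.round(100 * e.loaded /;
          };
          xhr.send(file);  // the browser streams the file; it is never read into memory
        }

        // usage: uploadFile(input.files[0], function (pct) { console.log(pct + '%'); });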

    Read the article

  • How do I copy files into an existing JAR file with Ant?

    - by Blue
    I have a project that needs to access resources within its own JAR file. When I create the JAR file for the project, I would like to copy a directory into that JAR file (I guess the ZIP equivalent would be "adding" the directory to the existing ZIP file). I only want the copy to happen after the JAR has been created (and I obviously don't want the copy to happen if I clean and delete the JAR file). Currently the build file looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="foobar" basedir=".." default="jar">

            <!-- project-specific properties -->
            <property name="project.path" value="my/project/dir/foobar" />

            <patternset id="project.include">
                <include name="${project.path}/**" />
            </patternset>

            <patternset id="project.jar.include">
                <include name="${project.path}/**" />
            </patternset>

            <import file="common-tasks.xml" />

            <property name="jar.file" location="${test.dir}/foobar.jar" />
            <property name="manifest.file" location="misc/foobar.manifest" />

        </project>

    Some of the build tasks are called from another file (common-tasks.xml), which I can't display here. Thanks.
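
    For reference, Ant's built-in <jar> task can append to an existing archive via update="true", so a target along these lines (the target name and resource path are illustrative) would run only after the jar target has produced the file:

        <!-- add a resource directory to the already-built JAR -->
        <target name="jar-resources" depends="jar">
            <jar destfile="${jar.file}" update="true">
                <fileset dir="resources" includes="data/**" />
            </jar>
        </target>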

    Read the article
