Search Results

Search found 26517 results on 1061 pages for 'large directory'.


  • Watching a directory using FileSystemWatcher

    - by rashim
    Hello. I'm watching a directory using FileSystemWatcher. When a file is created in that directory, my watcher grabs it and transfers it to a network drive. My problem is that when a Microsoft Office file is opened, a temporary file is created in the watched directory. I can't find a way to ignore these temporary files, and I also can't tell when it is safe to move the real files to the network drive.
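
    A minimal sketch of one common workaround (the paths are placeholders, and the "~$"/".tmp" patterns are assumptions about how Office names its temp files): skip matching names, then retry until the creating process releases its lock before moving.

        using System;
        using System.IO;
        using System.Threading;

        class WatcherSketch
        {
            static void Main()
            {
                var watcher = new FileSystemWatcher(@"C:\watched");
                watcher.Created += (sender, e) =>
                {
                    string name = Path.GetFileName(e.FullPath);
                    // Office typically creates "~$..." owner files and "*.tmp" scratch files.
                    if (name.StartsWith("~$") || name.EndsWith(".tmp", StringComparison.OrdinalIgnoreCase))
                        return;
                    // Retry until the file is no longer locked by the writing process.
                    while (true)
                    {
                        try
                        {
                            using (File.Open(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.None)) { }
                            break;
                        }
                        catch (IOException)
                        {
                            Thread.Sleep(500);
                        }
                    }
                    File.Move(e.FullPath, Path.Combine(@"\\server\share", name)); // placeholder target
                };
                watcher.EnableRaisingEvents = true;
                Console.ReadLine(); // keep the watcher alive
            }
        }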

    Read the article

  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. For example, the user selects a large list of items (say 2,000 or significantly more), each keyed by a numeric value, saves that selection, goes off to another page, does something else, and then comes back to the original page, which needs to load their selections again. What is the quickest and most efficient way of storing and reusing that data? A text file saved under the session id? A temp db table? The session data itself (db-backed sessions, so size isn't a limit), using a serialised string, or gzcompress or gzencode? Any advice or insight would be great. Thank you!
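
    One rough sketch (PHP is assumed from the gzcompress/gzencode mention; $selectedIds and the session key are placeholders): since the keys are plain numbers, pack them into one delimited string and compress it before storing.

        <?php
        // Store: pack the numeric keys into one string, then compress.
        $_SESSION['selection'] = base64_encode(
            gzcompress(implode(',', $selectedIds), 6)  // level 6: middle ground for speed/size
        );

        // Load: reverse the steps.
        $selectedIds = array_map('intval',
            explode(',', gzuncompress(base64_decode($_SESSION['selection']))));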

    Read the article

  • How to change the home directory of hudson?

    - by dfdfd
    I deployed hudson.war on Sun Application Server 9.1. I would like to know how I can change the home directory of Hudson to point to another directory, because using the default .hudson directory is not OK: I have very little disk space left on the main disk drive.
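
    For what it's worth, Hudson resolves its home from the HUDSON_HOME setting; a commonly cited approach (the path below is a placeholder, and how you pass JVM options to this particular container is an assumption to verify) is either a JVM system property on the application server or an environment variable set before startup:

        # as a JVM option for the app server
        -DHUDSON_HOME=/data/hudson
        # or as an environment variable before the server starts
        export HUDSON_HOME=/data/hudson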

    Read the article

  • Copy files in folder up one directory in python

    - by Aaron Hoffman
    I have a folder with a few files that I would like to copy one directory up (this folder also has some files that I don't want to copy). I know there is the os.chdir("..") command to move me up a directory, but I'm not sure how to copy the files I need into that directory. Any help would be greatly appreciated.
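
    A minimal sketch (the source path and the name filter are placeholders for whatever distinguishes the files you want):

        import os
        import shutil

        src_dir = os.path.abspath("some/folder")   # folder holding the files
        parent_dir = os.path.dirname(src_dir)      # one directory up

        for name in os.listdir(src_dir):
            path = os.path.join(src_dir, name)
            if os.path.isfile(path) and name.endswith(".txt"):  # your own filter here
                shutil.copy(path, parent_dir)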

    Read the article

  • setting syntax on in vim with a large C file makes completion very slow

    - by skeept
    When I have syntax highlighting on in a large C file (about 8000 lines), completion with ctrl-p and ctrl-n is very slow (more than 20 seconds). When I turn syntax off, completion takes less than a second. Any ideas on how to solve this? Thanks! EDIT: I figured out a minimal way of reproducing this behaviour: with an empty .vimrc and .vim folder, the only changed settings are

        :syntax on
        :set foldmethod=syntax

    and with a large C file to edit, completion (and even general editing) becomes very, very slow.
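
    A sketch of one workaround people use when foldmethod=syntax is the culprit (these are standard Vim autocommand events; tune to taste): fall back to manual folding while actually typing, and recompute syntax folds afterwards.

        " in .vimrc: keep syntax highlighting, but stop recomputing syntax
        " folds on every keystroke while in insert mode
        autocmd InsertEnter * setlocal foldmethod=manual
        autocmd InsertLeave * setlocal foldmethod=syntax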

    Read the article

  • C#: Custom assembly directory

    - by Svish
    Say we have an application which consists of one executable and 5 libraries. Normally all of these would be contained in one directory, and the libraries would be loaded from there. Is it possible to have, for example, some of the libraries in one directory called Lib and the rest in one called Lib2, so that the application directory contains only the executable itself and the other assemblies live in various logical directories? How can I do this? I would like to know how to do the loading of the assemblies, but also how to make the build put the assemblies in the right directory.
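
    A sketch of the loading half (this is the standard .NET <probing> element; the build half would still need per-project output paths or a post-build copy step): list the subdirectories in app.config so the runtime probes them when resolving assemblies.

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="Lib;Lib2" />
            </assemblyBinding>
          </runtime>
        </configuration>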

    Read the article

  • Maximum number of inodes in a directory?

    - by Dr. UNIX
    Is there a maximum number of inodes in a single directory? I have a directory of 2 million+ files and can't get the ls command to work against it. So now I'm wondering if I've exceeded a limit on inodes in Linux. Is there a practical limit before the 2^64 numerical limit?
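
    As an aside, ls itself often looks like the limit because it sorts (and, with common aliases, stats) every entry before printing anything. A quick sketch for sanity-checking such a directory with standard coreutils/findutils flags:

        ls -1 -f /path/to/dir | head            # -f skips the sort, so output starts immediately
        find /path/to/dir -maxdepth 1 | wc -l   # count entries without sorting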

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

    Read the article

  • Javascript large number array compression

    - by gatapia
    Hi All, I've got a javascript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, the database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:
    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out).
    - Be relatively fast; I'm not expecting the same performance as I have now, but I also don't want gzip compression either.
    Any ideas would be greatly appreciated. Thanks, Guido Tapia
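
    One wrinkle worth noting: Number.prototype.toString and parseInt only accept radixes up to 36, so base 62 needs a small hand-rolled codec. A sketch (the "." separator and the alphabet order are arbitrary choices):

        var ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

        function toBase62(n) {
            var s = "";
            do {
                s = ALPHABET.charAt(n % 62) + s;
                n = Math.floor(n / 62);
            } while (n > 0);
            return s;
        }

        function fromBase62(s) {
            var n = 0;
            for (var i = 0; i < s.length; i++) n = n * 62 + ALPHABET.indexOf(s.charAt(i));
            return n;
        }

        // e.g. pack an array as one delimited string for the JSONP payload
        var packed = [123456, 789, 42].map(toBase62).join(".");   // "w7e.cJ.G"
        var restored = packed.split(".").map(fromBase62);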

    Read the article

  • Download Large Files using java

    - by angelina
    Dear All, I'm building an application in which I want to download large files to a handset (mobile), but if the size of the file is large I get a SocketException ("broken pipe").

        InputStream inputStream = new FileInputStream(path);
        try {
            byte[] buffer = new byte[1024];
            int bytesRead;
            // read() returns -1 at end of stream; looping on
            // bytesRead == buffer.length stops early on a short read
            // and can hand -1 to write()
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                resp.getOutputStream().write(buffer, 0, bytesRead);
            }
            resp.getOutputStream().flush();
        } finally {
            inputStream.close();
        }

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    for example see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail since the module is not on the path. I know I could modify PYTHONPATH and other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but not realistic to expect your users to use if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
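
    One stock answer (a sketch assuming the stdlib unittest discovery from Python 2.7+, plus an antigravity/__init__.py so the package is importable) is to run discovery from the project root:

        cd new_project
        python -m unittest discover -s test -t .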

    Read the article

  • Streaming large result sets with MySQL

    - by configurator
    I'm developing a spring application that uses large MySQL tables. When loading large tables, I get an OutOfMemoryError, since the driver tries to load the entire table into application memory. I tried using

        statement.setFetchSize(Integer.MIN_VALUE);

    but then every ResultSet I open hangs on close(). Looking online I found that this happens because the driver tries loading any unread rows before closing the ResultSet, but that is not the case here, since I do this:

        ResultSet existingRecords = getTableData(tablename);
        try {
            while (existingRecords.next()) {
                // ...
            }
        } finally {
            existingRecords.close(); // this line is hanging, and there was no exception in the try clause
        }

    The hang happens for small tables (3 rows) as well, and if I don't close the ResultSet (which happened in one method) then connection.close() hangs.
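
    For reference, Connector/J's documented recipe for row-by-row streaming pairs the Integer.MIN_VALUE fetch size with a forward-only, read-only statement; a minimal sketch (conn is an open java.sql.Connection, the query is a placeholder):

        Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                            ResultSet.CONCUR_READ_ONLY);
        st.setFetchSize(Integer.MIN_VALUE);  // streaming mode: rows arrive one at a time
        ResultSet rs = st.executeQuery("SELECT * FROM big_table");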

    Read the article

  • Browser, upload large file

    - by Mike
    I'm looking for a way to allow a user to upload a large file (~1 GB) to my unix server using a web page and browser. There are a lot of examples that illustrate how to do this with a traditional post request; however, this doesn't seem like a good idea when the file is this large. I'm looking for recommendations on the best approach. Bonus points if the method includes a way of providing progress information to the user. For now security is not a major concern, as most users who will be using the service can be trusted. We can also assume that the connection between client and host will not be interrupted (or, if it is, that they have to start over).

    Read the article

  • Getting match counts in a query on a large table is very slow

    - by Roy Roes
    I have a mysql table "items" with 2 integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

        seid  tiid
        ----  ----
           1     1
           2     2
           2     3
           2     4
           3     4
           4     1
           4     2

    The table has a composite primary key over both fields, an index on seid, and an index on tiid. Someone types in 1 or more tiid values, and I would like to get the seid values with the most matches. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result: they both have 2 matches on the tiid values. My query so far (with the subquery reduced to a single column so the comparison is valid):

        SELECT COUNT(*) AS c, seid
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) AS c2
                    FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c2 DESC
                    LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?
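
    One hedged suggestion (assuming the time goes into looking up seid rows for each tiid hit): a composite index covering both columns lets the outer and inner aggregation run from the index alone.

        -- sketch: covering index so the GROUP BY scans only the index
        ALTER TABLE items ADD INDEX idx_tiid_seid (tiid, seid);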

    Read the article

  • Solution Output Directory

    - by L.E.O
    The project that I'm currently working on is being developed by multiple teams, where each team is responsible for a different part of the project. They have all set up their own C# projects and solutions with configuration settings specific to their own needs. However, now we need to create another, global solution, which will combine and build all projects into the same output directory. The problem I have encountered, though, is that I have found only one way to make all projects build into the same output directory: modifying the configurations of all of them. That is what we would like to avoid. We would prefer that these projects had no knowledge of this "global" solution; each team must retain the ability to work just with their own sub-solution. One possible workaround is to create a special configuration in all projects just for this "global" solution, but that could create extra problems, since now you have to constantly sync those configuration settings with the regular ones used by that specific team. The last thing we want is to spend hours trying to figure out why something doesn't work when building under the global solution, just because of some check box that developers checked in their configuration but forgot to set in the global configuration. So, to simplify, we need some sort of output directory setting or post-build event that would only be present when building from that global, all-inclusive solution. Is there any way to achieve this without changing anything in the projects' configurations?

    Update 1: Some extra details I guess I need to mention: we need this global solution to be as close as possible to what the end user gets when installing our application, since we intend to use it for debugging the entire application when we need to figure out which part isn't working, before sending the bug to the team working on that part. This means that when building under the global solution, the output directory hierarchy should be the same as it would be in Program Files after installation. So if, for example, we have a Program Files/MyApplication/Addins folder which contains all the addins developed by different teams, we need the global solution to copy the binaries from the addin projects and place them in the output directory accordingly. The thing is, the team developing an addin doesn't necessarily know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to be build/bin/Debug/Addins.
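
    One avenue worth sketching (Global.sln and the path are placeholders): MSBuild can override the output directory from the command line, so a global build script can redirect every project's output without touching any team's project files.

        msbuild Global.sln /p:Configuration=Debug /p:OutDir=C:\build\staging\

    This does not by itself produce the Program Files-style Addins hierarchy; that part would still need a copy step owned by the global build.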

    Read the article

  • Secrets of delivering large, .NET-sized products?

    - by Joan Venge
    In software companies I have seen, it's really hard to work on very large products where everything depends on everything else. For instance, Microsoft works on C#, F#, .NET, WPF, and Visual Studio, and these things are interconnected. I don't know how many people are involved, but if it's in the hundreds, how do they keep in sync with everything, so that they design and implement features without conflicting with other dependencies and the future plans of other products? If MS is able to do this, they must have a very good system. Any guidelines or secrets for delivering very large software products, MS or non-MS?

    Read the article

  • Flattening out a lib directory using jarjar

    - by voodoogiant
    I'm trying to flatten out the lib directory of a project using jarjar, so that it produces a jar file with my main code and all my required libraries inside it. I can get the project code into the jar, but I need to find a way to get every jar in the ./lib/ directory extracted into the base directory of the final output jar. I don't want it flattened in the sense that I still want the package hierarchy preserved. I could manually list every jar file using zipfileset, but I was hoping to do it dynamically. I would also like any *.so files flattened into the base directory of the output jar, so I can extract them into a temp dir easily without having to search through the jar. For example, my lib directory...

        ./lib/library1.jar
        ./lib/library2.jar
        ./lib/foo/library3.jar
        ./lib/foo/bar.so

    would look like this when cracked open in the output jar file...

        /..library1_package_hierarchy../lib1.class
        /..library1_package_hierarchy../lib2.class
        ... (and so on)
        /..library2_package_hierarchy../lib1.class
        /..library2_package_hierarchy../lib2.class
        ... (and so on)
        /..library3_package_hierarchy../lib1.class   <-- foo gone
        /..library3_package_hierarchy../lib2.class   <-- foo gone
        /bar.so                                      <-- foo gone
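
    A sketch in plain Ant terms (jarjar's task extends Ant's <jar>, so these nested resource collections should apply, but treat that as an assumption to verify): <zipgroupfileset> picks up every jar under lib/ dynamically, and a <flattenmapper> strips the directories off the .so files.

        <jar destfile="dist/combined.jar">
            <fileset dir="build/classes"/>
            <!-- merge the contents of every jar under lib/, preserving packages -->
            <zipgroupfileset dir="lib" includes="**/*.jar"/>
            <!-- drop the native libs in at the root, directories stripped -->
            <mappedresources>
                <fileset dir="lib" includes="**/*.so"/>
                <flattenmapper/>
            </mappedresources>
        </jar>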

    Read the article

  • fopen doesn’t create file in the current directory

    - by indira
    I have created a console application in VS2010 and I want to create a file in the current directory where the exe runs. I used the following code:

        fp = fopen("Pkts.csv", "w+");

    but the file is not getting created in the current directory. When I specify the path, as in

        fp = fopen("C:\\Windows\\Pkts.csv", "w+");

    the file gets created in the path specified. How do I create the file in the current directory?
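
    One thing worth noting: the "current directory" is wherever the process was started from; under the VS debugger that is usually the project directory, not the folder holding the exe. A Windows-specific sketch for writing next to the executable instead:

        #include <stdio.h>
        #include <string.h>
        #include <windows.h>

        int main(void)
        {
            char path[MAX_PATH];
            char *slash;
            FILE *fp;

            GetModuleFileNameA(NULL, path, MAX_PATH);   /* full path of the running exe */
            slash = strrchr(path, '\\');
            if (slash != NULL)
                strcpy(slash + 1, "Pkts.csv");          /* swap exe name for the file name */
            fp = fopen(path, "w+");
            if (fp != NULL)
                fclose(fp);
            return 0;
        }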

    Read the article

  • Can you call Directory.GetFiles() with multiple filters?

    - by Jason Z
    I am trying to use the Directory.GetFiles() method to retrieve a list of files of multiple types, such as mp3's and jpg's. I have tried both of the following with no luck:

        Directory.GetFiles("C:\\path", "*.mp3|*.jpg", SearchOption.AllDirectories);
        Directory.GetFiles("C:\\path", "*.mp3;*.jpg", SearchOption.AllDirectories);

    Is there a way to do this in one call?
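
    A common single-pass workaround (a sketch; the extension list is whatever you need): fetch everything once, then filter by extension.

        using System;
        using System.IO;
        using System.Linq;

        class MultiFilter
        {
            static void Main()
            {
                var wanted = new[] { ".mp3", ".jpg" };
                var files = Directory.GetFiles("C:\\path", "*.*", SearchOption.AllDirectories)
                    .Where(f => wanted.Contains(Path.GetExtension(f), StringComparer.OrdinalIgnoreCase));
                foreach (var f in files) Console.WriteLine(f);
            }
        }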

    Read the article

  • Get home directory in Linux, C++

    - by Alex Farber
    I need a way to get the user's home directory in a C++ program running on Linux. If the same code works on Unix, that would be nice. I don't want to use the HOME environment variable. AFAIK, root's home directory is /root. Is it OK to create some files/folders in this directory in case my program is run by the root user?
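
    A sketch of the usual HOME-free route (POSIX, so it should hold on most Unixes too): ask the passwd database for the current user's entry.

        #include <pwd.h>
        #include <unistd.h>
        #include <string>

        std::string home_directory()
        {
            // look up the passwd entry for the user the process runs as
            struct passwd *pw = getpwuid(getuid());
            return pw ? std::string(pw->pw_dir) : std::string();
        }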

    Read the article

  • Comparing large strings in JavaScript with a hash

    - by user4815162342
    I have a form with a textarea that can contain large amounts of content (say, articles for a blog), edited using one of a number of third-party rich text editors. I'm trying to implement something like an autosave feature, which should submit the content through ajax if it has changed. However, I have to work around the fact that some of the editors I have as options don't support an "isdirty" flag or an "onchange" event which I could use to see whether the content has changed since the last save. So, as a workaround, what I'd like to do is keep a copy of the content in a variable (let's call it lastSaveContent) as of the last save, and compare it with the current text when the "autosave" function fires (on a timer) to see if it's different. However, I'm worried about how much memory that could take up with very large documents. Would it be more efficient to store some sort of hash in the lastSaveContent variable instead of the entire string, and then compare the hash values? If so, can you recommend a good javascript library/jquery plugin that implements an appropriate hash for this requirement?
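
    A dependency-free sketch of a 32-bit FNV-1a hash (getEditorContent() and autosave() are hypothetical names, and the interval is a placeholder; a rare collision would only make one autosave tick get skipped):

        function fnv1a(str) {
            var h = 0x811c9dc5;
            for (var i = 0; i < str.length; i++) {
                h ^= str.charCodeAt(i);
                h = Math.imul(h, 0x01000193) >>> 0;  // 32-bit multiply, kept unsigned
            }
            return h;
        }

        var lastSaveHash = fnv1a(getEditorContent());
        setInterval(function () {
            var h = fnv1a(getEditorContent());
            if (h !== lastSaveHash) {
                autosave();          // your ajax submit
                lastSaveHash = h;
            }
        }, 30000);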

    Read the article
