Search Results

Search found 4715 results on 189 pages for 'ram bhat'.


  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running SQL queries on a MySQL table that holds 110 million+ unique records for a whole day.

    Problem: Whenever I run any query with a WHERE clause it takes at least 30-40 minutes. Since I generate most of the data on the next day, I need access to the whole table. Could you please guide me on optimizing / restructuring the deployment model?

    Site description: mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0; 4 GB RAM, dual-core dual CPU at 3 GHz, RHEL 3.

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1
        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/
        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid
        [root@reports root]#

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
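
    One observation: the CREATE TABLE above declares no indexes at all, so every query with a WHERE clause must scan all 110 million rows. A sketch of a likely first step, assuming the queries filter on USER and DT (swap in whichever columns the WHERE clauses actually use):

        -- Build indexes on the columns used in WHERE clauses; on MyISAM this
        -- rewrites the index file, so expect it to take a while on 110M rows.
        ALTER TABLE RAW_LOG_20100504
          ADD INDEX idx_user (`USER`),
          ADD INDEX idx_dt   (DT);

        -- EXPLAIN then confirms whether an index is actually being used:
        EXPLAIN SELECT COUNT(*) FROM RAW_LOG_20100504 WHERE `USER` = 12345;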

    Read the article

  • How to keep Windows from paging a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that infer memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?
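
    The documented Win32 route for pinning pages is VirtualLock, which exempts a region from paging while the process runs. A minimal sketch (sizes are illustrative, not tuned): the working-set quota must be raised first, because VirtualLock fails once locked pages exceed the process's small default minimum working set.

        #include <windows.h>
        #include <iostream>

        int main() {
            const SIZE_T blobSize = 3 * 1024 * 1024;  // one 3 MB blob

            // Raise the working-set quota first: VirtualLock fails once the
            // locked pages exceed the process minimum working set.
            if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                          (SIZE_T)512 * 1024 * 1024,     // minimum
                                          (SIZE_T)1536 * 1024 * 1024)) { // maximum
                std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << "\n";
                return 1;
            }

            void* blob = VirtualAlloc(NULL, blobSize, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (blob != NULL && VirtualLock(blob, blobSize)) {
                // Pages are now resident and exempt from normal paging
                // until VirtualUnlock or process exit.
                VirtualUnlock(blob, blobSize);
            }
            if (blob != NULL) VirtualFree(blob, 0, MEM_RELEASE);
            return 0;
        }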

    Read the article

  • Boost threading/mutexes: why does this work?

    - by Flamewires
    Code:

        #include "stdafx.h"   // precompiled header must come first in VC++
        #include <iostream>
        #include <boost/thread.hpp>
        #include <boost/thread/mutex.hpp>

        using namespace std;

        boost::mutex mut;
        double results[10];

        void doubler(int x) {
            //boost::mutex::scoped_lock lck(mut);
            results[x] = x * 2;
        }

        int _tmain(int argc, _TCHAR* argv[]) {
            boost::thread_group thds;
            for (int x = 9; x >= 0; x--) {   // 9..0 keeps the writes in bounds
                boost::thread *Thread = new boost::thread(&doubler, x);
                thds.add_thread(Thread);
            }
            thds.join_all();
            for (int x = 0; x < 10; x++) {
                cout << results[x] << endl;
            }
            return 0;
        }

    Output:

        0
        2
        4
        6
        8
        10
        12
        14
        16
        18
        Press any key to continue . . .

    So... my question is why does this work (as far as I can tell; I ran it about 20 times), producing the above output, even with the locking commented out? I thought the general idea was, in each thread: calculate 2*x, copy results to CPU register(s), store the calculation in the correct part of the array, copy results back to main (shared) memory. I would think that under all but perfect conditions this would result in some part of the results array having 0 values. Is it only copying the required double of the array to a CPU register? Or is it just too short a calculation to get preempted before it writes the result back to RAM? Thanks.
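
    The short answer is that each doubler call writes a distinct element of results, so no two threads ever touch the same memory and there is nothing for the mutex to protect. For contrast, here is a minimal sketch of a case where the lock actually matters, because every thread updates the same variable (shown with std::thread for self-containment; the Boost version is analogous):

        #include <iostream>
        #include <mutex>
        #include <thread>
        #include <vector>

        int main() {
            long counter = 0;
            std::mutex mut;

            std::vector<std::thread> threads;
            for (int i = 0; i < 8; ++i) {
                threads.emplace_back([&] {
                    for (int j = 0; j < 100000; ++j) {
                        std::lock_guard<std::mutex> lock(mut); // remove this and the total drifts
                        ++counter;
                    }
                });
            }
            for (auto& t : threads) t.join();

            std::cout << counter << "\n"; // 800000 with the lock; usually less without it
            return 0;
        }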

    Read the article

  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of third-party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously results in temporary memory spikes, causing the process's virtual memory size to approach the 2 GB limit. This memory is released back pretty quickly, but occasionally, if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out-of-memory error. I can throttle back the number of threads, but then my CPU usage is not as high as I would like. I am thinking of creating worker processes rather than worker threads to help avoid the out-of-memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). I am wondering: what are the best/easiest methods to communicate the input and output between the processes in .NET? The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows or .NET-specific mechanism I should use?
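
    .NET 3.5 and later ship managed named-pipe wrappers in System.IO.Pipes, which map directly onto the UNIX habits mentioned above. A minimal parent-side sketch (the pipe name and messages are made up for illustration):

        using System.IO;
        using System.IO.Pipes;

        class Parent
        {
            static void Main()
            {
                // One server stream per worker process; "worker_0" is a made-up name.
                using (var pipe = new NamedPipeServerStream("worker_0", PipeDirection.InOut))
                {
                    pipe.WaitForConnection();  // worker connects via NamedPipeClientStream
                    var writer = new StreamWriter(pipe) { AutoFlush = true };
                    var reader = new StreamReader(pipe);

                    writer.WriteLine("input-chunk-1");  // send work
                    string result = reader.ReadLine();  // receive the result
                }
            }
        }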

    Read the article

  • Which way of storing this data is efficient?

    - by Tattat
    I am writing a game which needs a map, and I want to store the map. The first thing I can think of is using a 2D array, but the problem is what data I should store in it. The player can tap different places to get different reactions. So I am thinking of storing objects in the 2D array: when the player clicks some position, I find it in the array and use the object there to execute a command. But I have a concern that storing lots of objects may use lots of memory. So I am thinking of storing only a char/int, but it seems that is not enough for me. I want to store data like this:

        { Type: 1, Color: Green }

    No matter what the color is, if two tiles are both type 1 they have the same reactions in the logic, but the visual effect is based on the color. So it is not easy to store this using a pure char/int, unless I make something like: 1-5 are all type 1 (1 = green, 2 = red, 3 = yellow, ...), 6-10 are all type 2 (6 = green, 7 = red, ...). So, do you have any ideas on how to minimize the RAM use, but keep it easy for me to read? Thx.
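
    One common trick, sketched below in C (the game's language isn't stated, so treat this as illustrative): pack the type into the high bits of a single byte and the color into the low bits. The map stays at one byte per cell, while accessor functions keep the code readable.

        #include <stdint.h>

        /* Hypothetical encoding: up to 16 types and 16 colors per cell. */
        enum { COLOR_GREEN = 0, COLOR_RED = 1, COLOR_YELLOW = 2 };

        static inline uint8_t cell_make(int type, int color) {
            return (uint8_t)((type << 4) | (color & 0x0F));
        }
        static inline int cell_type(uint8_t c)  { return c >> 4;  }
        static inline int cell_color(uint8_t c) { return c & 0x0F; }

        uint8_t map[64][64];   /* 4 KB total for a 64x64 map */

        void example(void) {
            map[3][5] = cell_make(1, COLOR_GREEN);
            /* game logic branches on cell_type(map[3][5]);
               rendering looks at cell_color(map[3][5]) */
        }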

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder, yes, silly me. After a while I managed to get all the .jar files where they belong, and it comes to a whopping 12.5 MB, with more than 30 of them! Which concerns me, though hopefully it shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering that they are compressed, compiled code, so they could quickly fill a lot of RAM. My questions are: Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow, for optimized application performance? When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request; is this assumption correct? When using Spring I had to include all those fiddly .jars; wouldn't I be better off just using plain Java, with at least an ORM solution like Hibernate? So far Spring has just taken extra time configuring, extra hard drive space, extra memory, and extra CPU, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There is still an ORM, a unit testing framework, and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?
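
    On the first question: the JVM does not map whole .jars into memory up front; in practice classes are read and initialized lazily, on first use. A small self-contained demo of that behavior (a class's static initializer only fires when the class is first touched):

        // Demo of lazy class loading/initialization.
        class Lazy {
            static { System.out.println("Lazy initialized"); }
            static void touch() {}
        }

        public class LoadDemo {
            public static void main(String[] args) {
                System.out.println("before");  // nothing from Lazy yet
                Lazy.touch();                  // "Lazy initialized" prints here,
                System.out.println("after");   // not at JVM startup
            }
        }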

    Read the article

  • Handling large (object) datasets with PHP

    - by Aron Rotteveel
    I am currently working on a project that relies extensively on the EAV model. Both entities and their attributes are individually represented by a model, sometimes extending other models (or at least base models). This has worked quite well so far, since most areas of the application only rely on filtered sets of entities, not the entire dataset. Now, however, I need to parse the entire dataset (i.e. all entities and all their attributes) in order to provide a sorting/filtering algorithm based on the attributes. The application currently consists of approximately 2200 entities, each with approximately 100 attributes. Every entity is represented by a single model (for example Client_Model_Entity) and has a protected property called $_attributes, which is an array of Attribute objects. Each entity object is about 500 KB, which results in an incredible load on the server. With 2000 entities, this means a single task would take 1 GB of RAM (and a lot of CPU time) to work, which is unacceptable. Are there any patterns or common approaches to iterating over such large datasets? Paging is not really an option, since everything has to be taken into account in order to provide the sorting algorithm.
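
    One common pattern is to stream the dataset instead of materializing every entity object at once: sort on a slim projection (entity id plus the few attribute values the comparator needs) and hydrate full models only for the rows you finally display. A minimal sketch, assuming PDO and hypothetical table/column names:

        <?php
        // PHP 5.5+ generator: holds one row in memory at a time.
        function attributeRows(PDO $db) {
            $stmt = $db->query(
                'SELECT entity_id, attribute_code, value FROM entity_attributes',
                PDO::FETCH_ASSOC
            );
            foreach ($stmt as $row) {
                yield $row;
            }
        }

        $sortKeys = array();
        foreach (attributeRows($db) as $row) {
            if ($row['attribute_code'] === 'price') {   // keep only what the sort needs
                $sortKeys[$row['entity_id']] = (float)$row['value'];
            }
        }
        asort($sortKeys);   // entity ids ordered by price; hydrate models per page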

    Read the article

  • Periodic GPU performance problem

    - by Peter Lillevold
    Hi folks! I have a WinForms application that uses XNA to animate 3D models in a control. The app has been doing just fine for months, but recently I've started to experience periodic pauses in the animation. Setting out to investigate what is going on, I have established these facts:

    1. It (currently) happens on my machine only
    2. Removing everything from my render loop does not improve the problem

    In 2. I didn't actually remove everything: I limited my loop to set the viewport on my GraphicsDevice and then do a GraphicsDevice.Present. Trying to dig further, I fired up PIX to capture some statistics. Screenshots of two PIX runs can be viewed here (Run6) and here (Run14). Run6 is using my original render loop and Run14 is using the bare-bones Present loop. PIX tells me that the GPU is periodically doing something, and I assume this is causing the pauses. What could be the cause of this? Or how do I go about finding out what the GPU is actually doing?

    Note: I'm using XNA 3.1 on a Windows 7 x64 dual-core machine with 8 GB RAM.
    Note 2: I also posted this question on the XNA Creators forums here.

    Read the article

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server 5.0.75 on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load will rise very high, while the memory usage stays low, and the application server is no longer responsive since it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140 MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either, or also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!

    EDIT: I just discovered this in my slow queries log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
          (SELECT external_id FROM contact_relations
           WHERE external_table = 'contacts'
             AND contact_id IN
               (SELECT contact_id FROM contacts
                WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                       OR city like '%%butan%%%' OR email1 like '%%butan%%%')
                  AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question: if I have a contacts table containing:

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example "actor" should bring up "Factory".
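
    Given that Rows_examined count, the nested IN (SELECT ...) clauses are the prime suspect: MySQL 5.0 frequently executes such subqueries as dependent subqueries, once per outer row. A hedged rewrite of the logged query as plain joins (the doubled %% in the log look like logging artifacts, so single wildcards are used here):

        SELECT DISTINCT c.*
        FROM contacts AS c
        JOIN contact_relations AS cr
          ON cr.external_table = 'contacts' AND cr.external_id = c.contact_id
        JOIN contacts AS c2
          ON c2.contact_id = cr.contact_id
        WHERE (c2.company_name LIKE '%butan%'
            OR c2.country      LIKE '%butan%'
            OR c2.city         LIKE '%butan%'
            OR c2.email1       LIKE '%butan%')
          AND c2.company_name IS NOT NULL AND c2.company_name != '';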

    Read the article

  • Why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine?

    - by Yigang Wu
    It's a really strange issue; the machine information below is from DxDiag. There is no error reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine. It's okay to create CLSID_FilterGraph. Before creating CLSID_CaptureGraphBuilder2, I have called CoInitialize and created CLSID_FilterGraph. Only this machine has the error. What DLL is this interface related to, and is there any function I need to call first to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.
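
    A useful first step is to log the actual HRESULT rather than just noting the failure; the capture graph builder lives in the DirectShow capture DLL (qcap.dll, if memory serves), so a REGDB_E_CLASSNOTREG result would point at re-registering that DLL. A minimal repro sketch:

        #include <dshow.h>
        #include <cstdio>
        #pragma comment(lib, "strmiids")
        #pragma comment(lib, "ole32")

        int main() {
            CoInitialize(NULL);
            ICaptureGraphBuilder2 *builder = NULL;
            HRESULT hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
                                          CLSCTX_INPROC_SERVER,
                                          IID_ICaptureGraphBuilder2,
                                          (void **)&builder);
            if (FAILED(hr)) {
                // 0x80040154 (REGDB_E_CLASSNOTREG) would suggest: regsvr32 qcap.dll
                printf("CoCreateInstance failed: 0x%08lX\n", hr);
            } else {
                builder->Release();
            }
            CoUninitialize();
            return 0;
        }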

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody. In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and as far as I can see there is no other choice (the iteration has to be done), so I'm trying to optimize the file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I can read something like 100 MB once and work with that buffer afterwards, until no element is left in the buffer; then I refill it. But I guess ifstream is already doing that; will it give me more performance, and is there any reason to do it myself? If fstream does buffer, maybe I can change the size of that buffer?

    Added: My current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                input1.seek_back(sizeof(int));
                output.write(i2);
            }
        } else {
            if (input1.eof())
                output.write(i2);
            else if (input2.eof())
                output.write(i1);
        }

    What I don't like here:

    - seek_back: I have to seek back to the previous position, as there is no way to peek 4 bytes
    - too much reading from the file
    - if one of the streams hits EOF, it still continues to check that stream instead of putting the contents of the other stream directly into the output; but this is not a big issue, because the chunk sizes are almost always equal.

    Can you suggest an improvement for that? Thanks.
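
    ifstream does buffer, but the default streambuf is small (implementation-defined, often a few KB). You can hand it a larger buffer with pubsetbuf, which must be installed before the file is opened to be honored reliably. A sketch, with made-up file names:

        #include <fstream>
        #include <vector>

        int main() {
            std::vector<char> buf1(64 * 1024 * 1024);
            std::vector<char> buf2(64 * 1024 * 1024);

            std::ifstream in1, in2;
            in1.rdbuf()->pubsetbuf(&buf1[0], buf1.size());   // install before open()
            in2.rdbuf()->pubsetbuf(&buf2[0], buf2.size());
            in1.open("sorted1.bin", std::ios::binary);
            in2.open("sorted2.bin", std::ios::binary);

            // The seek_back can also be avoided entirely: keep the last value
            // read from each stream in a variable and only read again from the
            // stream whose value was just written out.
            return 0;
        }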

    Read the article

  • Software usage analytics in C#

    - by TiernanO
    I have a project I am working on currently and would like to implement some sort of software tracking in the code. Ideally, stuff like how often it's launched, how long it runs for, feature tracking, etc. I already use Exceptioneer for unhandled exceptions, but would like something similar for usage tracking. This data should all be anonymous and ideally run as a service by someone else, and I would like to give the users the option to turn it off if they so wish. So, is this something I should implement myself, or are there third parties out there that do this sort of thing? I know it might be a sticky area, but I have seen stats about iPhone app usage. They do it, so why can't we? (If the user agrees, of course.)

    [Update] Based on the comments, I should have been more clear: this is a WinForms .NET 4 application, though I am thinking of updating it later with WCF. I would only be tracking my own application, though I would also want to know minor information about the environment (Windows OS version, SP, maybe processor and RAM...).

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 TB of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4 GB RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        #Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        #Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when create new tables when
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        #performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actual executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if your
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to disk
        # based table This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
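
    A hedged starting point rather than a definitive answer: at ~100 queries/second on a MyISAM-only box, the usual CPU-bound suspects in a config like this are the query cache (a single mutex serializes every SELECT through it, and pruning a 168M cache is expensive) and a couple of conflicting or undersized settings. Illustrative values to test one at a time:

        query_cache_size=32M        # try smaller, or 0, and measure; big caches can thrash
        table_cache=3020            # keep one of table_cache / table_open_cache, not both
        tmp_table_size=256M         # 30M forces big sorts into on-disk temp tables...
        max_heap_table_size=256M    # ...and tmp_table_size is capped by this value
        key_buffer_size=3072M       # reasonable for MyISAM-only with 16G of RAM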

    Read the article

  • Emulating a computer running MS-DOS

    - by Richard
    Writing emulators has always fascinated me. Now I want to write an emulator for an IBM PC and run MS-DOS on it (I've got the floppy image files). I have good experience in C++ and C, and basic knowledge of assembler and the architecture of a CPU. I also know that there are thousands of emulators out there doing exactly what I want to do, but I'd be doing this for pure joy only.

    - How much work should I expect? (If my goal is to boot DOS and create a text file with it, all emulated.)
    - What CPU should I emulate? Where can I find documentation on how the machine code is organized and which opcodes mean what, so I can unpack and execute them correctly with my emulator?
    - Does MS-DOS still run on the newest generations of processors? Would it theoretically be able to run natively on a 64-bit AMD Phenom 2 processor with a modern mainboard, HDD, RAM, etc.?
    - What else, besides emulating the CPU, could be an important factor (in terms of difficulty)? I would only aim for outputting/inputting text to the system via the host system's console; no sound or other more advanced I/O.
    - Have you written an emulator yet? What was your first one for? How hard was it? Do you have any special tips for me?

    Thanks in advance.
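
    For a sense of scale: the core of any CPU emulator is a fetch-decode-execute loop over the instruction stream. A toy sketch handling three real 8086 encodings (0xB8 MOV AX, imm16; 0x90 NOP; 0xF4 HLT), with everything else about the machine left out:

        #include <cstdint>
        #include <cstdio>

        int main() {
            // A few bytes of "memory": MOV AX, 0x1234; NOP; HLT.
            uint8_t mem[] = { 0xB8, 0x34, 0x12, 0x90, 0xF4 };
            uint16_t ax = 0, ip = 0;

            for (bool halted = false; !halted; ) {
                uint8_t opcode = mem[ip++];                  // fetch
                switch (opcode) {                            // decode + execute
                case 0x90: /* NOP */                         break;
                case 0xB8: /* MOV AX, imm16 (little-endian) */
                    ax = mem[ip] | (mem[ip + 1] << 8);
                    ip += 2;
                    break;
                case 0xF4: /* HLT */ halted = true;          break;
                default:
                    std::printf("unimplemented opcode %02X\n", opcode);
                    halted = true;
                }
            }
            std::printf("AX = %04X\n", ax);                  // prints AX = 1234
            return 0;
        }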

    Read the article

  • CPU overheating because of Delphi IDE

    - by Altar
    I am using Delphi 7, but I have trialed the Delphi 2005-2010 versions. In all these new versions my CPU utilization is 50% (one core is at 100%, the other is "relaxed") whenever the Delphi IDE is visible on screen. It doesn't happen when the IDE is minimized. My computer is overheating because of this. Any hints why this happens? It looks like if I want to upgrade to Delphi 2010, I need to upgrade my cooling system first. And I am a bit lazy about that, especially since I want to retire this computer and buy a new one (in the next 6 months); probably I will have to buy a Win 7 license too.

    Edit: My CPU is an AMD dual-core 4600+, with 4 GB RAM. Overheating means that the temperature of the HDDs is rising because of the CPU.

    Edit: Coming to a possible solution, I just deleted the whole HKCU/CodeGear key and let Delphi start "fresh". Guess what? The CPU utilization is 0%. I will investigate more.

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit-count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for. I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM. This is why building a suffix trie with counting data is too space-inefficient to work for me. To quote Wikipedia's "Suffix tree":

        storing a string's suffix tree typically requires significantly more
        space than storing the string itself. The large amount of information
        in each edge and node makes the suffix tree very expensive, consuming
        about ten to twenty times the memory size of the source text in good
        implementations. The suffix array reduces this requirement to a factor
        of four, and researchers have continued to find smaller indexing
        structures.

    And those were Wikipedia's comments on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
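
    For intuition, the suffix-array route the Wikipedia quote points at works like this: sort all suffix start positions, and the longest repeated substring is the longest common prefix of some adjacent pair in that order. A toy sketch (fine for small inputs; a 100 MB run needs an O(n log n) construction and careful memory layout):

        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::string s = "banana_bandana";
            const size_t n = s.size();

            std::vector<size_t> sa(n);
            for (size_t i = 0; i < n; ++i) sa[i] = i;
            std::sort(sa.begin(), sa.end(), [&](size_t a, size_t b) {
                return s.compare(a, n - a, s, b, n - b) < 0;  // sort suffixes
            });

            // The longest repeated substring is the longest common prefix of
            // two adjacent suffixes in sorted order.
            size_t bestLen = 0, bestPos = 0;
            for (size_t i = 1; i < n; ++i) {
                size_t a = sa[i - 1], b = sa[i], k = 0;
                while (a + k < n && b + k < n && s[a + k] == s[b + k]) ++k;
                if (k > bestLen) { bestLen = k; bestPos = a; }
            }
            std::cout << s.substr(bestPos, bestLen) << "\n";  // prints "ana"
            return 0;
        }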

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an ajax tic-tac-toe game in PHP/MySQL. The premise of the game is that you can share a URL like mygame.com/123 with your friends and play multiple simultaneous games. The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards, and the output (HTML) replaces their current game board (thus showing games in which it is their turn).

    Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested doing all of the temporary/quick-read information through memcache (storing moves and ID matchups) and then building the game boards from this information. My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2 GHz, RAM: 752 MB). Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit.

    Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, 2, 3, etc.) and direct users to the appropriate file? Would this relieve the pressure? A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • Holding variables in memory, C++

    - by b-gen-jack-o-neill
    Today something strange came to my mind. When I want to hold some string in C (or C++) the old way, without using the string header, I just create an array and store that string in it. But I read that any variable definition in the local scope of a function ends up pushing those values onto the stack. So the string is actually 2x bigger than needed: first the push instructions are located in memory, and then, when they are executed (pushed onto the stack), another "copy" of the string is created. First the push instructions, then the stack space used for the one string. So why is it this way? Why doesn't the compiler just add the string (or other variables) to the program, instead of creating them once again when executed? Yes, I know you cannot just have some data inside the program block, but it could be attached to the end of the program, with some jump instruction before it. And then we would just point to this data, because it is stored in RAM when the program is executed. Thanks.
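
    Compilers actually do what the question suggests: string literals are stored once in the binary's (read-only) data section, and a copy is made only when the program asks for a mutable local array. A small sketch of the two cases:

        #include <cstdio>

        void f() {
            char onStack[] = "hello";       // the literal lives in the binary's data
                                            // section; these 6 bytes are copied onto
                                            // the stack each time f() runs
            const char* inBinary = "hello"; // no copy: just a pointer into the
                                            // read-only data section
            std::printf("%s %s\n", onStack, inBinary);
        }

        int main() { f(); return 0; }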

    Read the article

  • What is the correct install process to set up Node.js with the Windows Azure emulator?

    - by PazoozaTest Pazman
    This question is related to this question: "Node.js running under IIS Express Keeps Crashing", for which I need help with reinstalling and getting Node.js up and running in the Windows emulator.

    Hello, I am reinstalling my machine: Toshiba laptop, 2 GB RAM, 32-bit processor. What is the correct procedure, from start to finish, to get Node.js development working? So far nothing has worked, and the emulator (IIS Express) worker process keeps crashing. No matter how many instances, they all end up crashing. Up until two weeks ago my Node development was working fine, but I had to do a reinstall, and since then I haven't been doing any Node.js development on the Windows emulator, because the latest June 2012 Azure SDK for Node.js is buggy. These are the steps I have taken:

    1) Reformat HD
    2) Insert Windows 7 N SP1 CD
    3) Reboot machine into CD installation
    4) Follow and wait until Windows 7 is installed
    5) Run Add/Remove Programs + enable IIS + IIS management tools
    6) Run Windows Update (installed about 53 updates)
    7) Go here: http://www.windowsazure.com/en-us/develop/nodejs/
    8) Click Windows Installer June 2012 and install Windows Azure SDK for Node.js - June 2012
    9) Run Azure PowerShell
    10) Navigate to c:\node\testSite\webrole1
    11) Launch site: start-azureemulator -launch
    12) Play around on website (then crash!)

        Problem signature:
        Problem Event Name: APPCRASH
        Application Name: iisexpress.exe
        Application Version: 8.0.8298.0
        Application Timestamp: 4f620349
        Fault Module Name: iiscore.dll
        Fault Module Version: 8.0.8298.0
        Fault Module Timestamp: 4f63b65c
        Exception Code: c0000005
        Exception Offset: 00021767
        OS Version: 6.1.7601.2.1.0.256.28
        Locale ID: 1033
        Additional Information 1: f66d
        Additional Information 2: f66d807b515d6b2dc6f28f66db769a01
        Additional Information 3: 7b2f
        Additional Information 4: 7b2f6797d07ebc2c23f2b227e779722e

    Am I missing a step in my reinstall process? Do I have all the required files to do Node.js Windows Azure emulator development? Why is IIS Express crashing all the time? Can I still do Node.js Windows Azure emulator development without using IIS Express, and use the local Windows 7 N (SP1) IIS 7.x that comes shipped?

    Read the article

  • How to check whether a volume is mounted using Python, with a dynamic volume name

    - by SR query
        import subprocess

        def volumeCheck(volume_name):
            """This function will check volume name is mounted or not."""

        volume_name = raw_input('Enter volume name:')
        volumeCheck(volume_name)
        print 'volume_name=', volume_name
        p = subprocess.Popen(['df', '-h'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        p1, err = p.communicate()
        pattern = p1
        if pattern.find(volume_name):
            print 'volume found'
        else:
            print 'volume not found'

    While running, I always get the wrong result, "volume found":

        root@sr-query:/# df -h
        Filesystem              Size  Used Avail Use% Mounted on
        rootfs                  938M  473M  418M  54% /
        /dev/md0                938M  473M  418M  54% /
        none                    250M  4.9M  245M   2% /dev
        /dev/md2                9.7M  1.2M  8.0M  13% /usr/config
        /dev/md7                961M   18M  895M   2% /downloads
        tmpfs                   250M  7.9M  242M   4% /var/volatile
        tmpfs                   250M     0  250M   0% /dev/shm
        tmpfs                   250M     0  250M   0% /media/ram
        /dev/mapper/vg9-lv9    1016M   65M  901M   7% /VolumeData/sp
        /dev/mapper/vg10-lv10  1016M   65M  901M   7% /VolumeData/cp
        root@sr-query:/#

        root@sr-query:/# python volume_check.py
        Enter volume name:raid_10volume
        volume_name= raid_10volume
        volume found
        root@sr-query:/#

    I entered raid_10volume, and it's not listed here; please check the df -h output (only 2 volumes there, sp and cp). So how did it reach the "volume found" branch? What is wrong in my code? Thanks in advance. Is there any other way to do this?
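
    The likely culprit: str.find() returns -1 when the substring is absent, and -1 is truthy in Python, so the if branch runs precisely when the volume is not found (and, conversely, a match at position 0 would be falsy). A hedged fix for the last block:

        # substring membership is the idiomatic test here
        if volume_name in p1:          # or: p1.find(volume_name) != -1
            print 'volume found'
        else:
            print 'volume not found'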

    Read the article

  • Error in my Add button (SQL Server Management Studio and Visual Basic 2010)

    - by user2882523
    Here is the thing: I can't use an insert query in my code. There is an error in my SqlCommand saying that ExecuteNonQuery() does not match the values, blah blah. Here is my code:

        Dim con As New SqlClient.SqlConnection("Server=.\SQLExpress;AttachDBFilename=C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\Finals.mdf;Database=Finals;Trusted_Connection=Yes;")
        Dim cmd As New SqlClient.SqlCommand
        cmd.Connection = con
        cmd.CommandText = "Insert Into [Finals].[dbo].[Nokia] Values ('" & Unit.Text & "'),('" & Price.Text & " '),('" & Stack.Text & "'),('" & Processor.Text & "'),('" & Size.Text & "'),('" & RAM.Text & "'),('" & Internal.Text & "'),('" & ComboBox1.Text & "')"
        con.Open()
        cmd.ExecuteNonQuery()
        con.Close()

    The problem is the cmd.CommandText. Can anyone please help me?
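
    What jumps out: Values ('a'),('b'),... is the multi-row form of INSERT, so this statement tries to insert eight one-column rows rather than one eight-column row, which is exactly the kind of column-count mismatch the error describes. A hedged rewrite using a single tuple and parameters (which also closes the SQL-injection hole from concatenating TextBox values; the parameter names are made up):

        cmd.CommandText = "INSERT INTO [Finals].[dbo].[Nokia] " & _
            "VALUES (@Unit, @Price, @Stack, @Processor, @Size, @RAM, @Internal, @Color)"
        cmd.Parameters.AddWithValue("@Unit", Unit.Text)
        cmd.Parameters.AddWithValue("@Price", Price.Text)
        cmd.Parameters.AddWithValue("@Stack", Stack.Text)
        cmd.Parameters.AddWithValue("@Processor", Processor.Text)
        cmd.Parameters.AddWithValue("@Size", Size.Text)
        cmd.Parameters.AddWithValue("@RAM", RAM.Text)
        cmd.Parameters.AddWithValue("@Internal", Internal.Text)
        cmd.Parameters.AddWithValue("@Color", ComboBox1.Text)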

    Read the article

  • How to optimize this simple function which translates input bits into words?

    - by psihodelia
    I have written a function which reads an input buffer of bytes and produces an output buffer of words, where every word is either 0x0081 for each ON bit of the input buffer or 0x007F for each OFF bit. The length of the input buffer is given. Both arrays have enough space allocated. I also have about 2 KB of free RAM which I can use for lookup tables or similar. Now, I found that this function is my bottleneck in a real-time application; it will be called very frequently. Can you please suggest a way to optimize this function? I see one possibility: use only one buffer and do in-place substitution.

        void inline BitsToWords(int8 *pc_BufIn, int16 *pw_BufOut, int32 BufInLen)
        {
            int32 i, j, z = 0;

            for (i = 0; i < BufInLen; i++) {
                for (j = 0; j < 8; j++, z++) {
                    pw_BufOut[z] = (((pc_BufIn[i] >> (7 - j)) & 0x01) == 1 ? 0x0081 : 0x007f);
                }
            }
        }

    Please do not offer any compiler-specific or CPU/hardware-specific optimization, because it is a multi-platform project.
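
    One portable idea that fits the 2 KB budget: precompute the four output words for every possible nibble (16 entries x 8 bytes = 128 bytes) and emit each input byte as two table copies, trading eight shift-and-branch steps per byte for two memcpys. A sketch, reusing the question's int8/int16/int32 typedefs:

        #include <string.h>

        typedef unsigned short uint16;

        static uint16 lut[16][4];   /* 16 nibbles x 4 words = 128 bytes */

        void BitsToWords_Init(void)
        {
            int n, b;
            for (n = 0; n < 16; n++)
                for (b = 0; b < 4; b++)
                    lut[n][b] = ((n >> (3 - b)) & 1) ? 0x0081 : 0x007F;
        }

        void BitsToWords(const int8 *pc_BufIn, int16 *pw_BufOut, int32 BufInLen)
        {
            int32 i;
            for (i = 0; i < BufInLen; i++) {
                unsigned char c = (unsigned char)pc_BufIn[i];
                memcpy(&pw_BufOut[i * 8],     lut[c >> 4],  sizeof lut[0]); /* high nibble */
                memcpy(&pw_BufOut[i * 8 + 4], lut[c & 0xF], sizeof lut[0]); /* low nibble  */
            }
        }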

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100 MB which is full of arrays (only arrays). I've made a script that includes this file (for processing); first it exhausted the default XAMPP 128 MB memory limit. I've raised it to 1024 MB, but it just takes forever and doesn't do anything. I'm sure the problem is caused by the sheer size of the file, because I've tried removing all lines of code and leaving just the include and an echo (so I know when it finishes executing), and it does the same thing (takes forever). I've also tried to run the 100 MB file on its own, and the same thing again. A 10 MB file takes forever as well, but a similar 1 MB file is almost instantly read and executed, so the problem must be more than just the file size. I was avoiding using C++ for a simple project like this and would rather not, as PHP is easier for me and the task doesn't need the added speed, but if I have no luck solving this problem I guess I'll have to.

    EDIT: Reasons for not using a database:

    1. Whoever made it didn't use a database, and it will be pretty hard to store this in an organized database if I'm not able to do something with it first, like just reading it, copying parts from it, or putting it in memory.
    2. I don't have experience working with databases, as pretty much all the PHP I've ever done didn't need large amounts of stored data, 50 KB at best. If I was thinking about a big project or huge chunks of data like this one, I definitely would, but I didn't make this mess to start with and now I have to undo it.
    3. The logic of having to store a small portion of data like 10 MB on the hard drive, when every computer now has pretty much enough RAM to fit the whole OS in it, is incomprehensible to me unless someone gives a good explanation for it. If I had to access a lot of such files simultaneously I would understand, but like I said, this is a simple project: this is the only file that will be accessed at a given time. This isn't even for some kind of website; it's to run a few times and be done with.
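
    One workaround worth testing, sketched below: the slowdown here is plausibly the PHP compiler choking on one enormous array literal rather than the raw I/O, so converting the file once into a data format and parsing that on later runs may be much faster. The filenames are made up, and the sketch assumes the source file returns its array:

        <?php
        // One-off conversion: let PHP evaluate the 100 MB source a single time...
        ini_set('memory_limit', '1024M');
        $data = include 'huge_arrays.php';      // assumes the file ends with "return $array;"
        file_put_contents('huge_arrays.json', json_encode($data));

        // ...then every later run parses JSON instead of compiling PHP:
        $data = json_decode(file_get_contents('huge_arrays.json'), true);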

    Read the article

  • Developers: How does BitLocker affect performance?

    - by Chris
    I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think. So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad? My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).

    Read the article

  • Declaring two large 2D arrays gives a segmentation fault

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it, since I have 2 GB of RAM and am only using 19 MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns)
        {
            double **array;
            int i;

            array = (double**) malloc(rows * sizeof(double *));
            if (array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
            for (i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if (array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }

        void populateUserFeatureP(double **userFeatureP)
        {
            int x, y;
            for (x = 0; x < CUSTOMERS; x++) {
                for (y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ)
        {
            int x, y;
            for (x = 0; x < FEATURES; x++) {
                for (y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[])
        {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }
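
    Two things stand out in reserveMemory, hedged as the likely cause: on a 32-bit build, sizeof(double *) is 4 while sizeof(double) is 8, so each row is allocated half the space its 8-byte elements need, and writes near the end of a row (like column 16816 of 17770) land past the end of the block. The per-row check also tests array instead of array[i], so a failed row allocation would go unnoticed. A corrected row loop:

        for (i = 0; i < rows; i++) {
            /* allocate room for doubles, not double pointers */
            array[i] = (double*) malloc(columns * sizeof(double));
            if (array[i] == NULL) {   /* check the row just allocated */
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
        }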

    Read the article
