Search Results

Search found 7118 results on 285 pages for 'sam ram san'.


  • Setuptools Python namespace package in /opt

    - by Samuel Taylor
    I'm trying to get my app to install in /opt/[app_name] using setuptools. My app uses a namespace package. To install, I run sudo python setup.py install --prefix=/opt/[app_name]/ --install-lib=/opt/[app_name]/ --install-scripts=/opt/[app_name]/ When I install it this way, setuptools does not copy __init__.py into my namespace package, so when I come to run my app, Python does not treat it as a package and I get import errors. If I create the __init__.py file, my app works fine. How do I get setuptools to copy over the __init__.py file when using --install-lib and --prefix? Thanks, Sam
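
    A minimal sketch of a pkg_resources-style namespace package setup, using a hypothetical top-level namespace mycompany with a subpackage mycompany.app (the names and layout are illustrative, not taken from the question); the key pieces are the namespace_packages entry in setup.py and the namespace declaration in the shared __init__.py:

    # setup.py -- assumes the layout mycompany/__init__.py, mycompany/app/__init__.py
    from setuptools import setup, find_packages

    setup(
        name='mycompany-app',
        version='0.1',
        packages=find_packages(),
        # Declare the shared top-level package as a namespace package so
        # setuptools knows how to handle it during install.
        namespace_packages=['mycompany'],
    )

    # mycompany/__init__.py should contain only the namespace declaration:
    # __import__('pkg_resources').declare_namespace(__name__)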

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date, and upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: Is there any way to not load the whole file into memory, or are there other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

    filedata = open(filename, 'rb').read()
    content_type = mimetypes.guess_type(filename)[0]
    if not content_type:
        content_type = 'text/plain'
    print 'Uploading to S3...'
    response = connection.put(BUCKET_NAME, 'daily/%s' % filename,
                              S3.S3Object(filedata),
                              {'x-amz-acl': 'public-read', 'Content-Type': content_type})
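
    A hedged sketch of one way around loading the whole archive into memory, using boto3's managed transfer (boto3 post-dates this question, and the bucket/key names are assumptions); upload_file streams the file to S3 in multipart chunks rather than buffering it all:

    import mimetypes
    import boto3
    from boto3.s3.transfer import TransferConfig

    def upload_backup(filename, bucket_name, key):
        content_type = mimetypes.guess_type(filename)[0] or 'text/plain'
        # Send the file in 8 MB parts so peak memory stays far below the file size.
        config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                                multipart_chunksize=8 * 1024 * 1024)
        s3 = boto3.client('s3')
        s3.upload_file(filename, bucket_name, key,
                       Config=config,
                       ExtraArgs={'ACL': 'public-read', 'ContentType': content_type})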

    Read the article

  • how to profile multi threaded c++ app on linux?

    - by anon
    I used to do all my Linux profiling with gprof. However, with my multi-threaded app, its output appears to be inconsistent. Now, I dug this up: http://sam.zoy.org/writings/programming/gprof.html However, it's from a long time ago -- and in my gprof output, it appears my gprof is listing functions used by non-main threads. So, my questions are: 1) in 2010, can I easily use gprof to profile multi-threaded Linux C++ apps? (Ubuntu 9.10) 2) what other tools should I look into for profiling? Thanks!

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi all, I am writing an application using Boost threads and using Boost barriers to synchronize the threads. I have two machines to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM) where I am getting the following performance figures:

    Number of threads: 1, TPS: 21
    Number of threads: 2, TPS: 35 (66% improvement)

    Further increase in the number of threads decreases the TPS, but that is understandable as the machine has only two cores. Machine 2 is a two quad-core (Xeon X5355) machine (Windows 2003 Server with 4GB RAM) and has 8 effective cores.

    Number of threads: 1, TPS: 21
    Number of threads: 2, TPS: 27 (28% improvement)
    Number of threads: 4, TPS: 25
    Number of threads: 8, TPS: 24

    As you can see, performance is degrading after 2 threads (though it has 8 cores). If the program had some bottleneck, then it should have degraded for 2 threads as well. Any ideas? Explanations? Does the OS have some role in the performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz) even though the Xeon has the higher clock speed. Thank you -Zoolii

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind. For instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are using right now two HP DLG 380 servers with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious how many and how powerful servers are needed (number of processors/cores, size of RAM); what network equipment should be used (what kind of switches, network cards); and what other hardware, like particular disk storage solutions, is needed. Another thing is how to put everything together, that is, what the optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article

  • postgres counting one record twice if it meets certain criteria

    - by Dashiell0415
    I thought that the query below would naturally do what I explain, but apparently not... My table looks like this:

    id | name  | g | partner | g2
    1  | John  | M | Sam     | M
    2  | Devon | M | Mike    | M
    3  | Kurt  | M | Susan   | F
    4  | Stacy | F | Bob     | M
    5  | Rosa  | F | Rita    | F

    I'm trying to get the id where either the g or g2 value equals 'M'. But a record where both the g and g2 values are 'M' should return two lines, not one. So, with the above sample data and this query:

    $q = pg_query("SELECT id FROM mytable WHERE ( g = 'M' OR g2 = 'M' )");

    I'm trying to return:

    1
    1
    2
    2
    3
    4

    But it always returns:

    1
    2
    3
    4

    Read the article

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem. I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm considering is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
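
    A hedged sketch of one common approach for data of this size: memory-map the 4D array to a file with NumPy so the OS pages slices in and out on demand instead of holding all ~32 GB in RAM (the value of N and the file name below are illustrative assumptions):

    import numpy as np

    N = 256  # illustrative; 256**4 * 8 bytes is roughly 32 GB
    data = np.memmap('array4d.dat', dtype=np.float64, mode='w+',
                     shape=(N, N, N, N))

    # Work on slices without ever loading the whole array into memory.
    data[0, 1] = np.random.rand(N, N)
    block_mean = data[0, 1].mean()
    data.flush()  # push pending changes back to the file on disk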

    Read the article

  • Play Shoutcast MP3 radio stream with Python?

    - by Zachary Brown
    I have managed to create an online radio station using Shoutcast and Sam Broadcaster. Now I am wanting to build my own player for that radio station. I am not sure where to begin; I have googled, but no luck. I am using Python 2.6 on Microsoft Windows. I have managed to capture the stream and save it as an MP3 on the hard disk, just not sure what to do with it next. I tried playback of the file, but it always pulls up errors. This is the code I have so far:

    import urllib

    target = open("broadcast.mp3", "wb")
    conn = urllib.urlopen("http://78.159.104.175:80")
    while True:
        target.write(conn.read(5200))

    Any help would be greatly appreciated!
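
    A hedged sketch of a slightly more defensive capture loop (Python 2 to match the question; the byte cap is an illustrative assumption): it writes binary chunks and stops cleanly when the server closes the stream or the limit is reached, rather than looping forever:

    import urllib

    def capture_stream(url, out_path, max_bytes=10 * 1024 * 1024):
        conn = urllib.urlopen(url)
        written = 0
        target = open(out_path, 'wb')  # binary mode so MP3 frames are not mangled
        try:
            while written < max_bytes:
                chunk = conn.read(8192)
                if not chunk:          # server closed the stream
                    break
                target.write(chunk)
                written += len(chunk)
        finally:
            target.close()
            conn.close()
        return written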

    Read the article

  • Is there an API to remotely read a Windows machine's audit configuration?

    - by JCCyC
    I need to know, for each subcategory, whether it'll be audited on success, on failure, both, or none. Below is an example of the information I need to collect. Can I get this through WMI? Or if not, by other means, assuming I have proper (admin) credentials to the target machine? Again, to clarify, it's not the event log I need to read, it's the logging configuration.

    <security_state_change>AUDIT_SUCCESS</security_state_change>
    <security_system_extension>AUDIT_NONE</security_system_extension>
    <system_integrity>AUDIT_SUCCESS_FAILURE</system_integrity>
    <ipsec_driver>AUDIT_NONE</ipsec_driver>
    <other_system_events>AUDIT_SUCCESS_FAILURE</other_system_events>
    <logon>AUDIT_SUCCESS</logon>
    <logoff>AUDIT_SUCCESS</logoff>
    <account_lockout>AUDIT_SUCCESS</account_lockout>
    <ipsec_main_mode>AUDIT_NONE</ipsec_main_mode>
    <ipsec_quick_mode>AUDIT_NONE</ipsec_quick_mode>
    <ipsec_extended_mode>AUDIT_NONE</ipsec_extended_mode>
    <special_logon>AUDIT_SUCCESS</special_logon>
    <other_logon_logoff_events>AUDIT_NONE</other_logon_logoff_events>
    <file_system>AUDIT_NONE</file_system>
    <registry>AUDIT_NONE</registry>
    <kernel_object>AUDIT_NONE</kernel_object>
    <sam>AUDIT_NONE</sam>
    <certification_services>AUDIT_NONE</certification_services>
    <application_generated>AUDIT_NONE</application_generated>
    <handle_manipulation>AUDIT_NONE</handle_manipulation>
    <file_share>AUDIT_NONE</file_share>
    <filtering_platform_packet_drop>AUDIT_NONE</filtering_platform_packet_drop>
    <filtering_platform_connection>AUDIT_NONE</filtering_platform_connection>
    <other_object_access_events>AUDIT_NONE</other_object_access_events>
    <sensitive_privilege_use>AUDIT_NONE</sensitive_privilege_use>
    <non_sensitive_privlege_use>AUDIT_NONE</non_sensitive_privlege_use>
    <other_privlege_use_events>AUDIT_NONE</other_privlege_use_events>
    <process_creation>AUDIT_NONE</process_creation>
    <process_termination>AUDIT_NONE</process_termination>
    <dpapi_activity>AUDIT_NONE</dpapi_activity>
    <rpc_events>AUDIT_NONE</rpc_events>
    <audit_policy_change>AUDIT_SUCCESS</audit_policy_change>
    <authentication_policy_change>AUDIT_SUCCESS</authentication_policy_change>
    <authorization_policy_change>AUDIT_NONE</authorization_policy_change>
    <mpssvc_rule_level_policy_change>AUDIT_NONE</mpssvc_rule_level_policy_change>
    <filtering_platform_policy_change>AUDIT_NONE</filtering_platform_policy_change>
    <other_policy_change_events>AUDIT_NONE</other_policy_change_events>
    <user_account_management>AUDIT_SUCCESS</user_account_management>
    <computer_account_management>AUDIT_NONE</computer_account_management>
    <security_group_management>AUDIT_SUCCESS</security_group_management>
    <distribution_group_management>AUDIT_NONE</distribution_group_management>
    <application_group_management>AUDIT_NONE</application_group_management>
    <other_account_management_events>AUDIT_NONE</other_account_management_events>
    <directory_service_access>AUDIT_NONE</directory_service_access>
    <directory_service_changes>AUDIT_NONE</directory_service_changes>
    <directory_service_replication>AUDIT_NONE</directory_service_replication>
    <detailed_directory_service_replication>AUDIT_NONE</detailed_directory_service_replication>
    <credential_validation>AUDIT_NONE</credential_validation>
    <kerberos_ticket_events>AUDIT_NONE</kerberos_ticket_events>
    <other_account_logon_events>AUDIT_NONE</other_account_logon_events>

    Read the article

  • How to add SQL elements to an array in PHP

    - by DanLeaningphp
    So this question is probably pretty basic. I am wanting to create an array from selected elements from a SQL table. I am currently using:

    $rcount = mysql_num_rows($result);
    for ($j = 0; $j <= $rcount; $j++) {
        $row = mysql_fetch_row($result);
        $patients = array($row[0] => $row[2]);
    }

    I would like this to return an array like this:

    $patients = (bob=>1, sam=>2, john=>3, etc...)

    Unfortunately, in its current form, this code is either copying nothing to the array or only copying the last element.

    Read the article

  • jQuery - iGoogle style interface

    - by samhamilton
    Hi all, I'm currently working on developing a customizable website layout using jQuery, so I can drag, drop, add and remove blocks of content... much like the iGoogle interface. I have my blocks dragging and dropping working with 3 columns of content. My question is to do with adding and removing blocks. If I use hide(), I can simply hide and show the blocks. If I use remove(), I would have to append a new block from the list of blocks to load it into a column. I'm not sure which is the best approach to take. I'd be grateful for any advice on best practice, i.e. whether to hide() or remove(), and any other advice on building this kind of interface. Thanks, Sam.

    Read the article

  • Is there any reason to use C instead of C++ for embedded development?

    - by Piotr Czapla
    Question: I have two compilers for my hardware, C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are: I prefer to use "inline" functions instead of macro definitions. I'd like to use namespaces, as prefixes clutter the code. I see C++ as a bit more type-safe, mainly because of templates and verbose casting. I really like overloaded functions and constructors (used for automatic casting). Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)? Conclusion: Thank you for your answers, they were really helpful! I thought the subject through and I will stick with C, mainly because: It is easier to predict the actual code in C, and this is really important if you have only 4 KB of RAM. My team consists mainly of C developers, so the advanced features of C++ won't be frequently used. I've found a way to inline functions in my C compiler (C89). It is hard to accept one answer as you provided so many good answers. Unfortunately I can't create a wiki and accept it, so I will choose the one answer that made me think most.

    Read the article

  • C# : Forcing a clean run in a long running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL db table. Once it has done its bit, it then starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column then one of the other cols and running each value in the column through a validation pipeline where the results are stored in another database. My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening for some reason.

    Read the article

  • (Newbie) Amazon Web Services Apache Server

    - by Samnsparky
    Hello! I am trying to get a feel for the costs imposed by running apache on AWS continually. Assuming that the service is scarcely used, does anyone know how many cpu hours that would eat up in a month just by sitting there and running? I understand that this is slightly impractical but I am trying to figure out what the cost of entry is to deploy an application on this platform (as compared to GAE). I suspect it to be small but I would like to know. Thank you for your help, Sam

    Read the article

  • Trying to write up a C daemon, but don't know enough C to continue

    - by JamesM-SiteGen
    Okay, so I want this daemon to run in the background with little to no interaction. I plan to have it work with Apache, lighttpd, etc. to send the session and request information, to allow C to generate a website from an object DB. Saving will have to be an option, so you can start the daemon with an existing DB but it will not save to it unless you log in to the admin area, enable saving, and restart the daemon. Summary of the daemon:

    Load a database from file.
    Have a function to restart the daemon.
    Allow Apache, lighttpd, etc. to get the necessary data about the request and session.
    A variable to allow the database to be saved to the file; otherwise it will only be stored in RAM. If it is set to save back to the file, then only keep the necessary data in RAM.
    Use SQLite for the database file.
    Build a webpage from some template files. $(myVar) for getting variables. Get templates from a directory: ./templates/01-test/{index.html,template.css,template.js}

    Live version of the code and more information: http://typewith.me/YbGB1h1g1p Also, I am working on a website CMS in PHP, but I am trying to switch to C as it is faster than PHP. (PHP is quite fast, but making a few MySQL requests for every webpage is quite inefficient and I'm sure it can be far better, so an object we can recall data from in C would have to be faster.) P.S. I am using Arch Linux, not MS Windows, with the package group base-devel for the common developer tools such as make and makepkg. Edit: Oops, forgot the question ;) Okay, so the question is, how can I turn this basic C daemon into a base for what I am attempting to do here?

    Read the article

  • Possibility to introduce iPad capability for iPhone-App via Update?

    - by samsam
    Hi there. There has been a lot of talk around iPad apps / approval / Store-related questions. I've recently built an app which I'm just about to release / send to Apple for approval. I'm thinking about developing a dedicated iPad app as well. Now, in order not to have two separate apps in the Store (one for the iPhone, one for the iPad), I want to create a universal app for both platforms. However, I couldn't figure out if it is possible to first send in my iPhone-only app and later publish an update that enables my app to run on both platforms. Does anyone have an idea on that topic? Thanks in advance, Sam

    Read the article

  • C++ Sentinel/Count Controlled Loop beginning programming

    - by Bryan Hendricks
    Hello all, this is my first post. I'm working on a homework assignment with the following parameters. Piecework: Workers are paid by the piece. Often workers who produce a greater quantity of output are paid at a higher rate.

    1 - 199 pieces completed: $0.50 each
    200 - 399: $0.55 each (for all pieces)
    400 - 599: $0.60 each
    600 or more: $0.65 each

    Input: For each worker, input the name and number of pieces completed.

    Name              Pieces
    Johnny Begood     265
    Sally Great       650
    Sam Klutz         177
    Pete Precise      400
    Fannie Fantastic  399
    Morrie Mellow     200

    Output: Print an appropriate title and column headings. There should be one detail line for each worker, which shows the name, number of pieces, and the amount earned. Compute and print totals of the number of pieces and the dollar amount earned. Processing: For each person, compute the pay earned by multiplying the number of pieces by the appropriate price. Accumulate the total number of pieces and the total dollar amount paid.

    Sample Program Output:

    Piecework Weekly Report
    Name              Pieces  Pay
    Johnny Begood     265     145.75
    Sally Great       650     422.50
    Sam Klutz         177     88.5
    Pete Precise      400     240.00
    Fannie Fantastic  399     219.45
    Morrie Mellow     200     110.00
    Totals            2091    1226.20

    You are required to code, compile, link, and run a sentinel-controlled loop program that transforms the input to the output specifications shown above. The input items should be entered into a text file named piecework1.dat and the output stored in piecework1.out. The program filename is piecework1.cpp. Copies of these three files should be e-mailed to me in their original form. Read the name using a single variable as opposed to two different variables. To accomplish this, you must use the getline(stream, variable) function as discussed in class, except that you will replace the cin with your text file stream variable name. Do not forget to code the compiler directive #include <string> at the top of your program to acknowledge the utilization of the string variable, name. Your nested if-else statement, accumulators, and count-controlled loop should be properly designed to process the data correctly.

    The code below will run, but does not produce any output. I think it needs something around line 57, like a count control to stop the loop -- something like (and this is just an example, which is why it is not in the code): count = 1; while (count <= 4). Can someone review the code and tell me what kind of count I need to introduce, and if there are any other changes that need to be made? Thanks.

    [code]
    //COS 502-90
    //November 2, 2012
    //This program uses a sentinel-controlled loop that transforms input to output.
    #include <iostream>
    #include <fstream>
    #include <iomanip>   //output formatting
    #include <string>    //string variables

    using namespace std;

    int main()
    {
        double pieces;   //number of pieces made
        double rate;     //amout paid per amount produced
        double pay;      //amount earned
        string name;     //name of worker

        ifstream inFile;
        ofstream outFile;

        //***********input statements****************************
        inFile.open("Piecework1.txt");    //opens the input text file
        outFile.open("piecework1.out");   //opens the output text file

        outFile << setprecision(2) << showpoint;
        outFile << name << setw(6) << "Pieces" << setw(12) << "Pay" << endl;
        outFile << "_____" << setw(6) << "_____" << setw(12) << "_____" << endl;

        getline(inFile, name, '*');       //priming read
        inFile >> pieces >> pay >> rate;  // ,,

        while (name != "End of File")     //while condition test
        {   //begining of loop
            pay = pieces * rate;
            getline(inFile, name, '*');   //get next name
            inFile >> pieces;             //get next pieces
        }   //end of loop

        inFile.close();
        outFile.close();
        return 0;
    }
    [/code]

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: Here are the table definitions for the example I used in my question:

    create table Order (
        Id int identity(1, 1) primary key,
        ProductName ntext null
    )

    create table Customer (
        Id int identity(1, 1) primary key,
        OrderId int null references Order (Id)
    )

    Read the article

  • NSString stringWithContentsOfFile failing with what seems to be the wrong error code

    - by deanWombourne
    Hello. I'm trying to load a file into a string. Here is the code I'm using:

    NSError *error = nil;
    NSString *fullPath = [[NSBundle mainBundle] pathForResource:filename ofType:@"html"];
    NSString *text = [NSString stringWithContentsOfFile:fullPath encoding:NSUTF8StringEncoding error:&error];

    When passed @"about" as the filename, it works absolutely fine, showing the code works. When passed @"eula" as the filename, it fails with 'Cocoa error 258', which translates to NSFileReadInvalidFileNameError. However, if I swap the contents of the files over but keep the names the same, the other file fails, proving there is nothing wrong with the filename; it's something to do with the content. The about file is fairly simple HTML, but the eula file is a massive mess exported from Word by the legal department. Does anyone know of anything inside an HTML file that could cause this error to be raised? Much thanks, Sam

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts.

    A = pd.DataFrame(
        [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
         ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
        columns=['a', 'b', 'ts']).set_index(['a', 'b'])
    AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

    B = pd.DataFrame(
        dict(a=list('aaaaabbcccccc'),
             ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find in B the timestamp of the first occurrence of each event type at or after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

    C:
    ('a', 'x')  4
    ('a', 'y')  7
    ('a', 'z')  5
    ('b', 'x')  7
    ('b', 'z')  7
    ('c', 'y')  8

    I have some working code, but it is terribly slow.

    C = AA.apply(lambda row: (
        row[0], row[1],
        B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
        axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's, assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
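
    A hedged sketch of one way to speed this up (an assumption on my part, not the poster's solution): group B once into plain sorted NumPy arrays so the per-row work is just a binary search, avoiding the repeated B.ix indexing that profiling flagged:

    import numpy as np

    # Build the per-key sorted timestamp arrays once, outside the row loop.
    ts_by_key = {key: grp['ts'].values for key, grp in B.groupby(level=0)}

    def first_at_or_after(key, ts):
        arr = ts_by_key[key]
        # arr is sorted and is guaranteed to contain a value >= ts.
        return arr[np.searchsorted(arr, ts)]

    C = A.copy()
    C['ts'] = [first_at_or_after(a_key, ts)
               for (a_key, _b), ts in zip(A.index, A['ts'].values)]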

    Read the article

  • Which open source repository or version control systems store files' original mtime, ctime and atime

    - by sampablokuper
    I want to create a personal digital archive. I want to be able to check digital files (some several years old, some recent, some not yet created) into that archive and have them preserved, along with their metadata such as ctime, atime and mtime. I want to be able to check these files out of that archive, modify their contents and commit the changes back to the archive, while keeping the earlier commits and their metadata intact. I want the archive to be very reliable and secure, and able to be backed up remotely. I want to be able to check files in and out of the archive from PCs running Linux, Mac OS X 10.5+ or Win XP+. I want to be able to check files in and out of the archive from PCs with RAM capacities lower than the size of the files. E.g. I want to be able to check in/out a 13GB file using a PC with 2GB RAM. I thought Subversion could do all this, but apparently it can't. (At least, it couldn't a couple of years ago and as far as I know it still can't; correct me if I'm wrong.) Is there a libre VCS or similar capable of all these things? Thanks for your help.
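
    A hedged aside, not something the question asks for directly: whatever VCS is chosen, atime and mtime can also be captured and reapplied around check-in/check-out with a small external script (ctime generally cannot be restored portably). A minimal Python sketch:

    import os

    def snapshot_times(path):
        """Record (atime, mtime) for a file before checking it in."""
        st = os.stat(path)
        return (st.st_atime, st.st_mtime)

    def restore_times(path, times):
        """Reapply a recorded (atime, mtime) pair after checkout."""
        os.utime(path, times)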

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it creates a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running every time the X period happens. The X period could be anywhere between 30 minutes and 2 hours. So, for example, it could happen every 30 minutes for the next 12 hours, or it could happen every 2 hours for the next 8 hours.

    key_buffer_size = 256M
    max_allowed_packet = 1M
    table_cache = 1024
    table_open_cache = 1024
    sort_buffer_size = 2M
    read_buffer_size = 2M
    read_rnd_buffer_size = 4M
    myisam_sort_buffer_size = 32M
    thread_cache_size = 128
    query_cache_size= 128M
    log-slow-queries = slow.log
    long_query_time = 5
    log-queries-not-using-indexes
    # Try number of CPU's*2 for thread_concurrency
    thread_concurrency = 4
    max_connections=512
    #innodb_data_file_path = ibdata1:10M:autoextend
    #innodb_log_group_home_dir = /usr/local/mysql/data
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM but beware of setting memory usage too high
    innodb_buffer_pool_size = 1G
    #innodb_additional_mem_pool_size = 20M
    # Set .._log_file_size to 25 % of buffer pool size
    #innodb_log_file_size = 64M
    #innodb_log_buffer_size = 8M
    #innodb_flush_log_at_trx_commit = 0
    #innodb_lock_wait_timeout = 50

    [mysqldump]
    quick
    max_allowed_packet = 16M

    [myisamchk]
    key_buffer_size = 64M
    sort_buffer_size = 64M
    read_buffer = 2M
    write_buffer = 2M

    There are about 200+ tables divided between 3 databases. The most heavily written one is InnoDB; the other ones are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32-bit Ubuntu.

    Read the article

  • How do I replace NOT EXISTS with JOIN?

    - by YelizavetaYR
    I've got the following query:

    select distinct a.id, a.name
    from Employee a
    join Dependencies b on a.id = b.eid
    where not exists (
        select * from Dependencies d
        where b.id = d.id and d.name = 'Apple'
    )
    and exists (
        select * from Dependencies c
        where b.id = c.id and c.name = 'Orange'
    );

    I have two tables, relatively simple. The first, Employee, has an id column and a name column. The second, Dependencies, has 3 columns: an id, an eid (the employee id to link on), and a name (Apple, Orange, etc.). The data looks like this:

    Employee table:

    id | name
    -----------
    1  | Pat
    2  | Tom
    3  | Rob
    4  | Sam

    Dependencies:

    id | eid | Name
    --------------------
    1  | 1   | Orange
    2  | 1   | Apple
    3  | 2   | Strawberry
    4  | 2   | Apple
    5  | 3   | Orange
    6  | 3   | Banana

    As you can see, Pat has both Orange and Apple, so he needs to be excluded; it has to be done via joins, and I can't seem to get it to work. Ultimately the data should only return Rob.

    Read the article

  • How many users are sufficient to make a heavy load for web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application drastically decreases its performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. The administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering 2 ways: my tables are MyISAM, and I heard they use table-level locking which is very slow -- is that right? Could I change them to InnoDB without worry? And what if I take MySQL and move it to a dedicated machine with a lot of RAM?

    Read the article
