Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 520/626 | < Previous Page | 516 517 518 519 520 521 522 523 524 525 526 527  | Next Page >

  • C++ - Totally suspend Windows application

    - by HardCoder1986
    Hello! I am developing a simple WinAPI application and started by writing my own assertion system. I have a macro defined as ASSERT(X) which does pretty much the same thing as assert(X), but with more information, more options, etc. At some point (when the assertion system was already up and running) I realized there is a problem. Suppose I wrote code that performs some action on a timer and (just as a simple example) this action is done while handling the WM_TIMER message. Now the situation changes so that this code starts throwing an assert. That assert message would be shown every TIMER_RESOLUTION milliseconds and would simply flood the screen. Options for solving this situation could be:

    1) Totally pause the application (probably also suspending all threads) while the assertion message box is shown, and continue running after it is closed.
    2) Keep a static counter of shown asserts and don't show a new assert while one is already showing (but this doesn't pause the application).
    3) Group similar asserts and show only one for each assert type (but this also doesn't pause the application).
    4) Modify the application code (for example, the Get/Translate/Dispatch message loop) so that it suspends itself when there are any asserts. This works, but it is not universal and looks like a hack.

    To my mind, option 1 is the best, but I don't know how it can be achieved. What I'm looking for is a way to pause the runtime (something similar to the Pause button in a debugger). Does somebody know how to achieve this? Also, if somebody knows an efficient way to handle this problem, I would appreciate your help. Thank you.
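
    One way to implement option 1 is to walk the process's own thread list with the Toolhelp API and suspend every thread except the one showing the assert dialog. Below is a minimal sketch of that idea; SetOtherThreadsSuspended is an illustrative helper, not code from the question:

    ```cpp
    #include <windows.h>
    #include <tlhelp32.h>

    // Suspend (or resume) every thread in this process except the calling one.
    // Hypothetical helper for the assertion handler described above.
    static void SetOtherThreadsSuspended(bool suspend)
    {
        HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        if (snapshot == INVALID_HANDLE_VALUE)
            return;

        THREADENTRY32 entry;
        entry.dwSize = sizeof(entry);
        if (Thread32First(snapshot, &entry)) {
            do {
                if (entry.th32OwnerProcessID == GetCurrentProcessId()
                    && entry.th32ThreadID != GetCurrentThreadId()) {
                    HANDLE thread = OpenThread(THREAD_SUSPEND_RESUME, FALSE,
                                               entry.th32ThreadID);
                    if (thread) {
                        suspend ? SuspendThread(thread) : ResumeThread(thread);
                        CloseHandle(thread);
                    }
                }
            } while (Thread32Next(snapshot, &entry));
        }
        CloseHandle(snapshot);
    }

    // Inside the ASSERT failure handler, roughly:
    //   SetOtherThreadsSuspended(true);
    //   MessageBoxA(NULL, "Assertion failed", "ASSERT",
    //               MB_ABORTRETRYIGNORE | MB_ICONERROR);
    //   SetOtherThreadsSuspended(false);
    ```

    One caveat: a suspended thread keeps whatever locks it holds, so if the dialog thread later needs one of those locks it will deadlock; this is why debuggers pause a process from the outside. Keeping option 2's "already showing" flag alongside this is a sensible safeguard.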

    Read the article

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible duplicate: Which of these two for loops is more efficient in terms of time and cache performance? Below are two programs that are almost identical, except that I switched the i and j loop variables around. They run in noticeably different amounts of time. Could someone explain why this happens?

    Version 1:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int i, j;
        static int x[4000][4000];

        for (i = 0; i < 4000; i++) {
            for (j = 0; j < 4000; j++) {
                x[j][i] = i + j;
            }
        }
        return 0;
    }
    ```

    Version 2:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int i, j;
        static int x[4000][4000];

        for (j = 0; j < 4000; j++) {
            for (i = 0; i < 4000; i++) {
                x[j][i] = i + j;
            }
        }
        return 0;
    }
    ```
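
    The cause is the memory access pattern, not the arithmetic: C stores x[4000][4000] row by row, so Version 2's inner loop (i varies the last index) writes consecutive addresses and stays in cache, while Version 1's inner loop jumps 16,000 bytes on every iteration and misses constantly. A quick sketch (not from the question) that times both orders in one program:

    ```c
    #include <stdio.h>
    #include <time.h>

    static int x[4000][4000];

    /* Time one traversal order: row_major != 0 walks consecutive addresses. */
    static double fill(int row_major)
    {
        clock_t start = clock();
        for (int a = 0; a < 4000; a++)
            for (int b = 0; b < 4000; b++)
                if (row_major)
                    x[a][b] = a + b;   /* consecutive ints: cache-friendly */
                else
                    x[b][a] = a + b;   /* 16000-byte stride: misses each step */
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        printf("column-order fill: %.3f s\n", fill(0));
        printf("row-order fill:    %.3f s\n", fill(1));
        return 0;
    }
    ```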

    Read the article

  • Outer product using CBLAS

    - by The Dude
    I am having trouble using CBLAS to perform an outer product. My code is as follows:

    ```c
    //===SET UP===//
    double x1[] = {1, 2, 3, 4};
    double x2[] = {1, 2, 3};
    int dx1 = 4;
    int dx2 = 3;
    double X[dx1 * dx2];
    for (int i = 0; i < (dx1 * dx2); i++) { X[i] = 0.0; }

    //===DO THE OUTER PRODUCT===//
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                dx1, dx2, 1, 1.0, x1, dx1, x2, 1, 0.0, X, dx1);

    //===PRINT THE RESULTS===//
    printf("\nMatrix X (%d x %d) = x1 (*) x2 is:\n", dx1, dx2);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 3; j++) {
            printf("%lf ", X[j + i * 3]);
        }
        printf("\n");
    }
    ```

    I get:

        Matrix X (4 x 3) = x1 (*) x2 is:
        1.000000 2.000000 3.000000
        0.000000 -1.000000 -2.000000
        -3.000000 0.000000 7.000000
        14.000000 21.000000 0.000000

    But the correct answer is found here: https://www.sharcnet.ca/help/index.php/BLAS_and_CBLAS_Usage_and_Examples I have also seen Efficient computation of Kronecker products in C, but it doesn't help me because it doesn't actually say how to use dgemm to do this. Any help? What am I doing wrong here?
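
    The stray values come from the leading dimensions. With CblasRowMajor, the leading dimension is the stored row length of each matrix, so x1 viewed as a 4x1 operand needs lda = 1 (not dx1 = 4), and the 4x3 result needs ldc = 3 (not 4). A sketch of a corrected program; the cblas_dger comment shows BLAS's dedicated rank-1 routine, which exists for exactly this job:

    ```c
    #include <stdio.h>
    #include <cblas.h>   /* link with -lcblas or your BLAS of choice */

    int main(void)
    {
        double x1[] = {1, 2, 3, 4};    /* viewed as a 4x1 matrix */
        double x2[] = {1, 2, 3};       /* viewed as a 1x3 matrix */
        double X[4 * 3] = {0};

        /* X(4x3) = x1(4x1) * x2(1x3): M=4, N=3, K=1,
           row-major leading dimensions: lda=1, ldb=3, ldc=3. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    4, 3, 1, 1.0, x1, 1, x2, 3, 0.0, X, 3);

        /* Equivalent rank-1 update (accumulates into X, so zero X first):
           cblas_dger(CblasRowMajor, 4, 3, 1.0, x1, 1, x2, 1, X, 3); */

        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 3; j++)
                printf("%lf ", X[i * 3 + j]);
            printf("\n");
        }
        return 0;
    }
    ```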

    Read the article

  • Update MySQL table onDrop?

    - by dougvt
    Hi all. I am writing a PHP/MySQL application (using CodeIgniter) that uses some jQuery functionality for dragging table rows. I have a table in which the user can drag rows into the desired order (a kind of queue, for which I need to preserve the rank of each row). I've been trying to figure out how to (and whether I should) update the database each time the user drops a row, in order to simplify the UI and avoid a "Save" button. I have the jQuery working and can send a serialized list back to the server on drop, but is it good design practice to run an update query this often? The table will usually have 30-40 rows max, but if the user drags row 1 far down the list, then potentially all the rows would need their rank field updated. I've been wondering whether to send one giant query to the server, to loop through the rows in PHP and update each row with its own UPDATE query, to send a small serialized list to a stored procedure and let the server do all the work, or perhaps a better method I haven't considered. I've read that stored procedures in MySQL are not very efficient and use a separate process for each call. Any advice on the right solution here? Thanks very much for your help!
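
    On the giant-query option: a common pattern is to build a single UPDATE with a CASE expression from the serialized order, so one round trip renumbers every row. A sketch using PDO for brevity; the table queue_items and column rank are assumed names:

    ```php
    <?php
    // Minimal sketch, assuming a table `queue_items` with columns `id` and
    // `rank`, and $order = array of row ids in their new display order.
    function saveOrder(PDO $dbh, array $order) {
        $cases = '';
        $params = array();
        foreach ($order as $rank => $id) {
            $cases .= ' WHEN ? THEN ?';
            $params[] = (int)$id;
            $params[] = $rank + 1;        // ranks start at 1
        }
        $ids = implode(',', array_fill(0, count($order), '?'));
        $sql = "UPDATE queue_items SET `rank` = CASE id{$cases} END "
             . "WHERE id IN ({$ids})";
        $stmt = $dbh->prepare($sql);
        $stmt->execute(array_merge($params, array_map('intval', $order)));
    }
    ```

    At 30-40 rows per drop this query is tiny, so firing it on every drop is reasonable; debouncing rapid drags on the client side is the usual refinement.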

    Read the article

  • How to implement a Linked List in Java?

    - by nbarraille
    Hello! I am trying to implement a simple HashTable in Java that uses a linked list for collision resolution, which is pretty easy to do in C, but I don't know how to do it in Java, as you can't use pointers... First, I know that these structures are already implemented in Java; I'm not planning on using them, just training here... So I created an Element, which holds a string and a reference to the next Element:

    ```java
    public class Element {
        private String s;
        private Element next;

        public Element(String s) {
            this.s = s;
            this.next = null;
        }

        public void setNext(Element e) {
            this.next = e;
        }

        public String getString() {
            return this.s;
        }

        public Element getNext() {
            return this.next;
        }

        @Override
        public String toString() {
            return "[" + s + "] => ";
        }
    }
    ```

    Of course, my HashTable has an array of Element to store the data:

    ```java
    public class CustomHashTable {
        private Element[] data;
    ```

    Here is my problem: for example, I want to implement a method that adds an element AT THE END of the linked list (I know it would be simpler and more efficient to insert the element at the beginning of the list, but again, this is only for training purposes). How do I do that without a pointer? Here is my code (which could work if e were a pointer...):

    ```java
    public void add(String s) {
        int index = hash(s) % data.length;
        System.out.println("Adding at index: " + index);
        Element e = this.data[index];
        while (e != null) {
            e = e.getNext();
        }
        e = new Element(s);
    }
    ```

    Thanks!
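
    For what it's worth, Java references already behave like the pointers this needs: the add() above loses the new node because e = new Element(s) merely rebinds the local variable after the loop has walked off the end of the list. A sketch of a tail insert that works, using the same class members as the question:

    ```java
    // Walk to the last node and link through it, instead of reassigning
    // the local variable e after the loop.
    public void add(String s) {
        int index = hash(s) % data.length;
        Element newElement = new Element(s);
        if (data[index] == null) {          // empty bucket: new head
            data[index] = newElement;
            return;
        }
        Element e = data[index];
        while (e.getNext() != null) {       // stop at the last node...
            e = e.getNext();
        }
        e.setNext(newElement);              // ...and mutate it to link
    }
    ```

    Assigning to e only changes which object the local variable refers to; calling a mutator like setNext() on the last node is the Java equivalent of writing through e->next in C.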

    Read the article

  • Editing XML file content with Python.

    - by Hooloovoo
    Hi, I am trying to use Python to read in an XML file containing some parameter names and values, e.g.

    ```xml
    ...
    <parameter name='par1'>
        <value>24</value>
    </parameter>
    <parameter name='par2'>
        <value>Blue/Red/Green</value>
    </parameter>
    ...
    ```

    and then pass it a dictionary of parameter names and corresponding new values, such as {'par1': '53', 'par2': 'Yellow/Pink/Black', ...}, to replace the old values in the XML file. The output should then overwrite the original XML file. At the moment I am converting the XML to a Python dictionary and, after some element comparison and regular-expression handling, writing the output again in XML format. I am not too happy with this and was wondering whether anyone can recommend a more efficient way of doing it? Thanks.
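
    One stdlib option is xml.etree.ElementTree, which can address the <parameter> elements directly instead of round-tripping through a dictionary and regexes. A sketch, assuming the file is well-formed with a single root around the snippet above; the file name params.xml is illustrative:

    ```python
    import xml.etree.ElementTree as ET

    def update_parameters(path, new_values):
        """Overwrite the <value> of each <parameter name='...'> found in new_values."""
        tree = ET.parse(path)
        for param in tree.getroot().iter('parameter'):
            name = param.get('name')
            if name in new_values:
                param.find('value').text = new_values[name]
        tree.write(path)  # overwrite the original file

    update_parameters('params.xml', {'par1': '53', 'par2': 'Yellow/Pink/Black'})
    ```

    Note that ElementTree rewrites the file from the parsed tree, so comments and the exact whitespace of the original are not preserved.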

    Read the article

  • Iterating Through N Level Children

    - by bobber205
    This seems like something neat that might be "built into" jQuery, but I think it's still worth asking. I have a problem that can easily be solved by iterating through all the children of an element. I've recently discovered I need to account for cases where I have to go a level or two deeper than the "1 level" (just calling .children() once) I am currently doing:

    ```javascript
    jQuery.each(divToLookAt.children(), function (index, element) {
        // do stuff
    });
    ```

    To go a second level deep, I run another loop inside the "do stuff" code for each element:

    ```javascript
    jQuery.each(divToLookAt.children(), function (index, element) {
        // do stuff
        jQuery.each(jQuery(element).children(), function (indexLevelTwo, elementLevelTwo) {
            // do stuff
        });
    });
    ```

    If I want to go yet another level deep, I have to do this all over again. This is clearly not good. I'd love to declare a "level" variable and then have it all taken care of. Anyone have any ideas for a clean, efficient, jQuery-ish solution? Thanks!
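
    A small recursive helper gives exactly that "level" variable. Here is a sketch; walkChildren and doStuff are illustrative names, not jQuery built-ins:

    ```javascript
    // Depth-limited walk: visit each child, then recurse until `levels`
    // runs out.
    function walkChildren($parent, levels, doStuff) {
        if (levels <= 0) return;
        $parent.children().each(function (index, element) {
            doStuff(index, element);
            walkChildren(jQuery(element), levels - 1, doStuff);
        });
    }

    // Three levels deep under divToLookAt:
    walkChildren(divToLookAt, 3, function (index, element) {
        // do stuff
    });
    ```

    When depth doesn't matter at all, divToLookAt.find('*') already visits every descendant in a single call.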

    Read the article

  • How do I set the dimensions of a custom component defined in an ActionScript class?

    - by user339681
    I'm trying to set the height of a vertical bar (activityBar), but it does not appear to do anything. I have tried something similar with the whole component, but setting the dimensions does nothing (even in the MXML used to instantiate the class). Indeed, I've added transparent graphics just to give the component some dimensions. I'm not sure what I'm doing wrong; it's something bad though, as my approach seems dire. FYI: I'm trying to create a mic activity bar that will respond to the mic by simply setting the height of the activityBar child (which seems to me more efficient than redrawing the graphics each time). Thanks for your help!

    ```actionscript
    package components {
        import mx.core.UIComponent;

        public class MicActivityBar extends UIComponent {
            public var activityBar:UIComponent;

            // Constructor
            public function MicActivityBar() {
                super();
                this.opaqueBackground = 0xcc4444;

                // background for bar
                graphics.beginFill(0xcccccc, 0);
                graphics.drawRect(0, -15, 5, 30);
                graphics.endFill();

                activityBar = new UIComponent();
                activityBar.graphics.beginFill(0xcccccc, 0.8);
                activityBar.graphics.drawRect(0, -15, 5, 20);
                activityBar.graphics.endFill();
                activityBar.height = 10;
                addChild(activityBar);
            }
        }
    }
    ```

    Read the article

  • Determining difference in timestamps for two values in the same MySQL table

    - by JayRizzo03
    I am relatively new to programming in PHP, so I apologize if this is a rather simple question. I have a MySQL database table called MachineReports that contains the following columns: ReportNum (primary key, auto increment), MachineID, and Timestamp. Here is some example data:

        | ReportNum | MachineID | Timestamp           |
        | 1         | AD3203    | 2012-11-18 06:32:28 |
        | 2         | AD3203    | 2012-11-19 04:00:15 |
        | 3         | BC4300    | 2012-11-19 04:00:15 |

    What I am attempting to do is find the difference between the timestamps, in seconds, for each MachineID by iterating over each row set. I am getting stuck on the best way to do this, however. Here is the code I've written so far:

    ```php
    <?php
    include '../dbconnect/dbconnect.php';

    $machineID = [];

    // Get a list of all MachineIDs in the database
    foreach ($dbh->query('SELECT DISTINCT(MachineID) FROM MachineReports') as $row) {
        array_push($machineID, $row[0]);
    }

    for ($i = 0; $i < count($machineID); $i++) {
        foreach ($dbh->query("SELECT MachineID FROM MachineReports WHERE MachineID='$machineID[$i]' ORDER BY MachineID") as $row) {
            // code to associate each machineID with two time stamps goes here
        }
    }
    ?>
    ```

    This code just lists out the contents of the table row by row. My ultimate goal is to find the difference in timestamps for a certain MachineID. One of the things I've considered is using a multidimensional array in PHP, using the MachineID as the key and then storing the timestamps inside the array the key points to. However, I'm uncertain how to do that since my query parses row by row. I have a couple of questions:

    1) Is this the most efficient way to be doing this? I suspect my database table design may not be the best.
    2) What would be the best way to determine the difference in timestamps for a certain MachineID? Even just a pointer to a topic that would prompt me to think about this in a different way would be helpful; I'm not afraid to do research. Thanks!
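
    One way to avoid the per-machine query loop is a single query ordered by MachineID and Timestamp, computing the gaps while iterating. A sketch against the table above, reusing the question's $dbh connection:

    ```php
    <?php
    // Sketch: one ordered pass over MachineReports, computing the gap in
    // seconds between consecutive reports of the same machine.
    $sql = 'SELECT MachineID, UNIX_TIMESTAMP(Timestamp) AS ts
            FROM MachineReports
            ORDER BY MachineID, Timestamp';

    $diffs = array();          // MachineID => list of gaps in seconds
    $prev = null;
    foreach ($dbh->query($sql) as $row) {
        if ($prev !== null && $prev['MachineID'] === $row['MachineID']) {
            $diffs[$row['MachineID']][] = $row['ts'] - $prev['ts'];
        }
        $prev = $row;
    }
    ```

    For a single machine, MySQL can also do this server-side with a self-join and TIMESTAMPDIFF(SECOND, earlier.Timestamp, later.Timestamp), but the one-pass version keeps the PHP simple.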

    Read the article

  • A generic Re-usable C# Property Parser utility

    - by Shyam K Pananghat
    This is about a utility I happened to write that can parse through the properties of data contracts at runtime using reflection. The required input is an XPath-like string. Since it uses reflection, you don't have to add a reference to any of your data contracts, making it purely generic and reusable. You can read about it and get the full C# source code here: Property-Parser-A-C-utility-to-retrieve-values-from-any-Net-Data-contracts-at-runtime. Now, about the doubts I have about this utility, which I use extensively in many places in my code:

    1) I am using Regex repeatedly inside a recursive method. Does this affect memory usage or GC collection badly? Do I have to dispose of anything manually? If yes, how?
    2) Statements like obj.GetType().GetProperty() and obj.GetType().GetField() return .NET "object", which makes it difficult or impossible to introduce generics here. Does this cause overheads like boxing?

    Overall, please suggest how to make this utility more performance-efficient and lighter on memory.

    Read the article

  • Help optimizing a query with 16 subqueries

    - by Webnet
    I have indexes/primary keys on all appropriate ID fields for each table. I'm wondering, though, how I could make this more efficient. It takes a while to load the page with only 15,000 rows, and that will quickly grow to 500k. The $whereSql variable simply has a few more parameters for the main ebay_archive_listing table. NOTE: this is all done in a single query because I have ASC/DESC sorting for each subquery value. NOTE: I've converted some of the subqueries to INNER JOINs.

    ```sql
    SELECT
        product_master.product_id,
        (SELECT COUNT(listing_id)
           FROM ebay_archive_product_listing_assoc '.$listingCountJoin.'
          WHERE ebay_archive_product_listing_assoc.product_id = product_master.product_id
        ) as listing_count,
        sku,
        type_id,
        (SELECT AVG(ebay_archive_listing.current_price)
           FROM ebay_archive_listing
          INNER JOIN ebay_archive_product_listing_assoc ON (
                ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
            AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
          WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
        ) as average_bid_price,
        (SELECT AVG(ebay_archive_listing.buy_it_now_price)
           FROM ebay_archive_listing
          INNER JOIN ebay_archive_product_listing_assoc ON (
                ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
            AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
          WHERE '.$whereSql.' AND ebay_archive_listing.buy_it_now_price > 0
        ) as average_buyout_price,
        (SELECT MIN(ebay_archive_listing.current_price)
           FROM ebay_archive_listing
          INNER JOIN ebay_archive_product_listing_assoc ON (
                ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
            AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
          WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
        ) as lowest_bid_price,
        (SELECT MAX(ebay_archive_listing.current_price)
           FROM ebay_archive_listing
          INNER JOIN ebay_archive_product_listing_assoc ON (
                ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
            AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
          WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
        ) as highest_bid_price,
        (SELECT MIN(ebay_archive_listing.buy_it_now_price)
           FROM ebay_archive_listing
          INNER JOIN ebay_archive_product_listing_assoc ON (
                ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
            AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
          WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
        ) as lowest_buyout_price,
        (SELECT MAX(ebay_archive_listing.buy_it_now_price)
           FROM ebay_archive_listing
          INNER JOIN ebay_archive_product_listing_assoc ON (
                ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
            AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
          WHERE '.$whereSql.' AND ebay_archive_listing.current_price > 0
        ) as highest_buyout_price,
        round(((SELECT COUNT(ebay_archive_listing.id)
                  FROM ebay_archive_listing
                 INNER JOIN ebay_archive_product_listing_assoc ON (
                       ebay_archive_product_listing_assoc.listing_id = ebay_archive_listing.id
                   AND ebay_archive_product_listing_assoc.product_id = product_master.product_id)
                 WHERE '.$whereSql.' AND ebay_archive_listing.status_id = 2)
               /
               (SELECT COUNT(listing_id)
                  FROM ebay_archive_product_listing_assoc '.$listingCountJoin.'
                 WHERE ebay_archive_product_listing_assoc.product_id = product_master.product_id)
               * 100), 1) as sold_percent
    FROM product_master '.$joinSql.'
    WHERE product_master.product_id IN (
        SELECT product_id
          FROM ebay_archive_product_listing_assoc
         INNER JOIN ebay_archive_listing ON (
               ebay_archive_listing.id = ebay_archive_product_listing_assoc.listing_id
           AND '.$whereSql.')
    )
    ```
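
    Since every subquery walks the same listing/assoc join, one direction worth trying is collapsing them into a single grouped join with conditional aggregation, so that join is scanned once per product instead of nine times. A sketch in MySQL style; the $whereSql conditions and the ASC/DESC ordering would still need to be carried over, and each alias can be sorted by name as before:

    ```sql
    -- Sketch: fold the per-product subqueries into one grouped join.
    SELECT pm.product_id,
           pm.sku,
           pm.type_id,
           COUNT(l.id) AS listing_count,
           AVG(CASE WHEN l.current_price    > 0 THEN l.current_price    END) AS average_bid_price,
           AVG(CASE WHEN l.buy_it_now_price > 0 THEN l.buy_it_now_price END) AS average_buyout_price,
           MIN(CASE WHEN l.current_price    > 0 THEN l.current_price    END) AS lowest_bid_price,
           MAX(CASE WHEN l.current_price    > 0 THEN l.current_price    END) AS highest_bid_price,
           MIN(CASE WHEN l.buy_it_now_price > 0 THEN l.buy_it_now_price END) AS lowest_buyout_price,
           MAX(CASE WHEN l.buy_it_now_price > 0 THEN l.buy_it_now_price END) AS highest_buyout_price,
           ROUND(SUM(l.status_id = 2) / COUNT(l.id) * 100, 1)                AS sold_percent
    FROM product_master pm
    JOIN ebay_archive_product_listing_assoc a ON a.product_id = pm.product_id
    JOIN ebay_archive_listing l              ON l.id          = a.listing_id
    GROUP BY pm.product_id, pm.sku, pm.type_id;
    ```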

    Read the article

  • DOS "pause" in Linux?

    - by user2930466
    Firstly, I'm REALLY new to programming. I just started my first programming class two weeks ago, and I apologize if I sound newbish. My professor wants me to implement a "press any key to continue..." prompt in my program. Basically, when I run the program, he wants one line to print (like printf("jfdskaljlfja");), then "press any key to continue" should come up before the next line runs. He told us that the DOS equivalent is system("pause"), but he wants us to do it in Linux. This is what my code looks like:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        printf("This is the first line of this program\n");
        system("pause");   /* works on DOS/Windows, not on Linux */
        printf("This is the second line\n");
        return 0;
    }
    ```

    Since he wants us to do this in Linux, system("pause") won't work in this case. Is there a way to do exactly what pause does, but in Linux terms? Again, sorry if I sound newbish; thank you so much! Also, he doesn't really care if the code is efficient or anything, as long as it runs. Since I'm really new to programming, the simplest answer would be much appreciated :)
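
    A sketch of one Linux equivalent (not something from the class): take the terminal out of canonical mode with termios so a single keypress is delivered immediately, read one character, and restore the settings:

    ```c
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Roughly what DOS "pause" does: wait for one keypress, no Enter needed. */
    void press_any_key(void)
    {
        struct termios saved, raw;

        printf("Press any key to continue...");
        fflush(stdout);

        tcgetattr(STDIN_FILENO, &saved);      /* remember terminal settings */
        raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);      /* no line buffering, no echo */
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        getchar();                            /* blocks until any key */

        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
        printf("\n");
    }

    int main(void)
    {
        printf("This is the first line of this program\n");
        press_any_key();
        printf("This is the second line\n");
        return 0;
    }
    ```

    If the professor accepts pressing Enter instead of truly any key, printing a prompt and calling getchar() alone does the job in one line.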

    Read the article

  • Playing a sequence of sounds without gaps (iPhone)

    - by Fiire
    I thought maybe the fastest way was to go with Sound Services. It is quite efficient, but I need to play sounds in a sequence, not overlapped, so I used a callback method to check when each sound has finished. This cycle produces around 0.3 seconds of lag. I know this sounds very strict, but it is basically the main axis of the program.

    EDIT: I have now tried AVAudioPlayer, but I can't play sounds in a sequence without using audioPlayerDidFinishPlaying, since that would put me in the same situation as with the callback method of Sound Services.

    EDIT2: I think that if I could somehow join the parts of the sounds I want to play into one large file, I could get the whole audio file to play continuously.

    EDIT3: I thought this would work, but the audio overlaps:

    ```objc
    waitTime = player.deviceCurrentTime;
    for (int k = 0; k < [colores count]; k++) {
        player.currentTime = 0;
        [player playAtTime:waitTime];
        waitTime += player.duration;
    }
    ```

    Thanks
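
    For reference, playAtTime: only schedules one start per player, which is why reusing a single player in that loop overlaps. A sketch of the usual pattern, with one AVAudioPlayer per sound scheduled on the shared device clock; soundURLs is an assumed array of the pre-cut files, and memory management/release is omitted:

    ```objc
    // One AVAudioPlayer per sound, scheduled back-to-back on the device
    // clock so each starts the moment the previous one ends.
    NSTimeInterval startTime = 0;
    NSMutableArray *players = [NSMutableArray array];

    for (NSURL *soundURL in soundURLs) {
        AVAudioPlayer *p = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL
                                                                  error:NULL];
        [p prepareToPlay];
        if (startTime == 0) {
            startTime = p.deviceCurrentTime + 0.1;  // small scheduling margin
        }
        [p playAtTime:startTime];
        startTime += p.duration;
        [players addObject:p];   // keep strong references while playing
    }
    ```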

    Read the article

  • chkdsk "An unspecified error occurred (696e647863686b2e e19)"

    - by Ex Umbris
    System is Win7 x64 Pro on a Core i7-920, 12GB. I'm experiencing some system flakiness and am trying to pin down the cause. SMART shows zero bad sectors, zero pending reallocations on all drives. Memory tests show no problems. Chkdsk fails in various different ways:

    - When run from a normal command line (no /f option), it gets to 63% and then hangs.
    - When run on boot (autocheck), it hangs immediately on starting. Actually, the countdown timer (Press any key to skip chkdsk) gets to 1 second and the system hangs.
    - When run from the F8 "Repair System" option (the Win7 "recovery console"), with /f, it runs to about 63% (end of stage 2) and then fails as follows:

        Volume label is OS.
        CHKDSK is verifying files (stage 1 of 3)...
        5068288 file records processed.
        File verification completed.
        308 large file records processed.
        0 bad file records processed.
        2 EA records processed.
        77 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 3)...
        63 percent complete. (6078872 of 7562028 index entries processed)
        An unspecified error occurred (696e647863686b2e e19).
        Unable to obtain a handle to the event log.

    Googling and searching on Technet for the error code and "Unable to obtain a handle to the event log" both turn up nothing useful. Anybody have any info on what the problem is?

    Read the article

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes up in CPU usage, and I am trying to use procdump to get a memory dump for examination. I'm running "procdump.exe -64 -mA 9999", where 9999 is the PID of the process. But every time I do it, I get an error: "Only part of a ReadProcessMemory or WriteProcessMemory request was completed." Doing this also recycles the app pool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong?

    EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try; all of them seem to generate the same sort of error. This is 2008 R2 Datacenter running IIS7 with a 64-bit ASP.NET web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually use up resources. If I monitor the worker process using IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files, some for .aspx pages; I cannot see any common reason for them. Every once in a while the app pool starts taking up 100% CPU, and the only remedy is to kill it.

    Read the article

  • Smart Array P400 - Accelerator Replacement Battery Failure

    - by inflammable
    TL;DR: Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator for a Smart Array P400 controller a common occurrence? Or are we likely to have a storage controller with an impending and critical fault?

    We have a slightly confusing situation with a Smart Array P400 storage controller with the 512MB battery-backed accelerator add-on in an HP DL380 server. The storage controller is (AFAIK) running the latest firmware and driver:

        Model: Smart Array P400 Controller
        Status: OK
        Firmware Version: 7.24
        Serial Number: *snip*
        Rebuild Priority: Medium
        Expand Priority: Medium
        Number Of Ports: 2

    The storage diagnostic (both on the boot-up screen for the controller and within the 'Management Homepage' and the 'HP Array Diagnostic Utility') recently started showing a fault for the accelerator's battery:

        Accelerator Status: Temporarily Disabled
        Error Code: Cache Disabled Low Batteries
        Serial Number: *snip*
        Total Memory: 524288 KB
        Read Cache: 25%
        Write Cache: 75%
        Battery Status: Failed
        Read Errors: 0
        Write Errors: 0

    We replaced the battery with a new unit (a visual inspection of the P400 card showing nothing unusual) and saw the same fault, but expected it to disappear over the course of a few hours/days as the battery charged. This didn't happen, and the fault status remains the same as above. Given that the battery is a genuine part from HP, I wouldn't have expected a replacement battery to fail straight away or to be dead on arrival (is that naivety on my part?).

    Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator a common occurrence? Or are we likely to have a storage controller with an impending and critical fault? Is there any diagnostic that could tell me more about the failed battery, without cracking the server open again? Many thanks!

    Read the article

  • Occasional disk I/O errors in SQLite

    - by Alix Axel
    I have a very simple website running PHP and SQLite 3.7.9 (with PDO). After establishing the SQLite connection I immediately execute the following queries:

    ```sql
    PRAGMA busy_timeout=0;
    PRAGMA cache_size=8192;
    PRAGMA foreign_keys=ON;
    PRAGMA journal_size_limit=67110000;
    PRAGMA legacy_file_format=OFF;
    PRAGMA page_size=4096;
    PRAGMA recursive_triggers=ON;
    PRAGMA secure_delete=ON;
    PRAGMA synchronous=NORMAL;
    PRAGMA temp_store=MEMORY;
    PRAGMA journal_mode=WAL;
    PRAGMA wal_autocheckpoint=4096;
    ```

    This website only has one writer and a few occasional readers, so I don't expect any concurrency problems (and I'm even using WAL). Every couple of days, I've seen this error reported by PHP:

        Fatal error: Uncaught exception 'PDOException' with message
        'SQLSTATE[HY000]: General error: 10 disk I/O error' in ...
        Stack trace: #0 ...: PDO->exec('PRAGMA cache_si...')

    There are several things that make this error very weird to me:

    - It's not a transient problem: no matter how many times I refresh the page, it won't go away.
    - The database file is not corrupted: the sqlite3 executable can open the database without problems.

    If the following pragmas are commented out, PHP stops throwing the disk I/O exception:

    ```sql
    PRAGMA cache_size=8192;
    PRAGMA synchronous=NORMAL;
    PRAGMA journal_mode=WAL;
    ```

    Then, after successfully reconnecting to the database, I'm able to reintroduce these pragmas and the code will run smoothly for days, until eventually the same error occurs without any apparent reason. I haven't been able to reproduce this error so far, so I'm clueless about its origin. I'm really curious what may be causing this problem... Any ideas?

    Environment: Ubuntu Server 12.04 LTS, PHP 5.4.15, SQLite 3.7.9. Database size: ~10 MiB. Transaction (write) size: ~1 KiB.

    EDIT: Might these symptoms have something to do with busy_timeout?

    Read the article

  • Cacti rrdtool graph with no values, NaN in .rrd file

    - by beicha
    Cacti 0.8.7h, with the latest RRDTool. I successfully graphed CPU/interface traffic, but got blank graphs when it comes to memory/temperature monitoring. The problem/bug is actually archived here; however, that post didn't help. I can snmpget the value, e.g. SNMPv2-SMI::enterprises.9.9.13.1.3.1.3.1 = Gauge32: 26. However, the problem seems to lie in storing these values to the .rrd file. Output of rrdtool info powerbseipv6testrouter_cisco_memfree_40.rrd (AVERAGE, cisco_memfree) below:

        filename = "powerbseipv6testrouter_cisco_memfree_40.rrd"
        rrd_version = "0003"
        step = 300
        last_update = 1321867894
        ds[cisco_memfree].type = "GAUGE"
        ds[cisco_memfree].minimal_heartbeat = 600
        ds[cisco_memfree].min = 0.0000000000e+00
        ds[cisco_memfree].max = 1.0000000000e+12
        ds[cisco_memfree].last_ds = "UNKN"
        ds[cisco_memfree].value = 0.0000000000e+00
        ds[cisco_memfree].unknown_sec = 94
        rra[0].cf = "AVERAGE"
        rra[0].rows = 600
        rra[0].pdp_per_row = 1
        rra[0].xff = 5.0000000000e-01
        rra[0].cdp_prep[0].value = NaN
        rra[0].cdp_prep[0].unknown_datapoints = 0
        rra[1].cf = "AVERAGE"
        rra[1].rows = 700
        rra[1].pdp_per_row = 6
        rra[1].xff = 5.0000000000e-01
        rra[1].cdp_prep[0].value = NaN
        rra[1].cdp_prep[0].unknown_datapoints = 0
        rra[2].cf = "AVERAGE"
        rra[2].rows = 775
        rra[2].pdp_per_row = 24
        rra[2].xff = 5.0000000000e-01
        rra[2].cdp_prep[0].value = NaN
        rra[2].cdp_prep[0].unknown_datapoints = 18
        rra[3].cf = "AVERAGE"
        rra[3].rows = 797
        rra[3].pdp_per_row = 288
        rra[3].xff = 5.0000000000e-01
        rra[3].cdp_prep[0].value = NaN
        rra[3].cdp_prep[0].unknown_datapoints = 114
        rra[4].cf = "MAX"
        rra[4].rows = 600
        rra[4].pdp_per_row = 1
        rra[4].xff = 5.0000000000e-01
        rra[4].cdp_prep[0].value = NaN
        rra[4].cdp_prep[0].unknown_datapoints = 0
        rra[5].cf = "MAX"
        rra[5].rows = 700
        rra[5].pdp_per_row = 6
        rra[5].xff = 5.0000000000e-01
        rra[5].cdp_prep[0].value = NaN
        rra[5].cdp_prep[0].unknown_datapoints = 0
        rra[6].cf = "MAX"
        rra[6].rows = 775
        rra[6].pdp_per_row = 24
        rra[6].xff = 5.0000000000e-01
        rra[6].cdp_prep[0].value = NaN
        rra[6].cdp_prep[0].unknown_datapoints = 18
        rra[7].cf = "MAX"
        rra[7].rows = 797
        rra[7].pdp_per_row = 288
        rra[7].xff = 5.0000000000e-01
        rra[7].cdp_prep[0].value = NaN
        rra[7].cdp_prep[0].unknown_datapoints = 114

    Read the article

  • uWSGI log file...permission denied to read file

    - by bkev
    I have a server running Django/Nginx/uWSGI with uWSGI in emperor mode, and the error log for it (the vassal-level error log, not the emperor-level log) has a continual permissions error every time it spawns a new worker, like so:

        Tue Jun 26 19:34:55 2012 - Respawned uWSGI worker 2 (new pid: 9334)
        Error opening file for reading: Permission denied

    Problem is, I don't know what file it's having trouble opening; it's not the log file, obviously, since I'm looking at it and it's writing to that without issue. Any way to find out? I'm running the apt-get version of uWSGI 1.0.3-debian through Upstart on Ubuntu 12.04. The site is working successfully, aside from what seems like a memory leak... hence my looking at the log file.

    My Upstart conf file:

    ```
    description "uWSGI"
    start on runlevel [2345]
    stop on runlevel [06]
    respawn

    env UWSGI=/usr/bin/uwsgi
    env LOGTO=/var/log/uwsgi/emperor.log

    exec $UWSGI \
        --master \
        --emperor /etc/uwsgi/vassals \
        --die-on-term \
        --auto-procname \
        --no-orphans \
        --logto $LOGTO \
        --logdate
    ```

    My vassal ini file:

    ```ini
    [uwsgi]
    # Variables
    base = /srv/env/mysiteenv

    # Generic Config
    uid = uwsgi
    gid = uwsgi
    socket = 127.0.0.1:5050
    master = true
    processes = 2
    reload-on-as = 128
    harakiri = 60
    harakiri-verbose = true
    auto-procname = true
    plugins = http,python
    cache = 2000
    home = %(base)
    pythonpath = %(base)/mysite
    module = wsgi
    logto = /srv/log/mysite/uwsgi_error.log
    logdate = true
    ```

    Read the article

  • Why does Mass Effect 1 run so slow on my machine if I have an XFX NVidia 9400GT video card? [closed]

    - by Papuccino1
    I'm so sick and tired of having my components pass the minimum requirements of a game and then getting 15 FPS with everything on low. Shouldn't PC developers say "use at least this video card for a smooth 30 FPS"? Here are my specs:

        Windows 7
        2GB DDR2 RAM
        XFX NVIDIA 9400 GT
        Intel Pentium D dual-core 2.8 GHz

    I should be getting at LEAST 30 FPS with everything on low, right? Please tell me what I can do to make games run as they should, or is my video card not good enough for these games? Here are the recommended requirements from the official site:

        Recommended System Requirements for Mass Effect on the PC
        Operating System: Windows XP or Vista
        Processor: 2.6+ GHz Intel or 2.4+ GHz AMD
        Memory: 2 Gigabytes RAM
        Video Card: NVIDIA GeForce 7900 GTX or higher. ATI X1800 XL series or higher
        Hard Drive Space: 12 Gigabytes
        Sound Card: DirectX 9.0c compatible sound card and drivers - 5.1 sound card recommended

    My video card is a 9400 GT; how is that worse than a 7900 GTX? :S

    Edit 2: I should note that I get poor frame rates when running the game on absolute BOTTOM specs: lowest resolution, no particles, etc. Absolute zero, and still getting poor frame rates.

    Read the article

  • Connectivity with SQL Server Express 2008 r2 and SQL Server 2000 on same machine

    - by Jim R
    At first glance this may seem a duplicate of Installing both SQL Server 2000 and SQL Server 2008 on the same machine, but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation of SQL 2000 uses the default port of 1433. The 2008 R2 named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that through), the service refused to start because 1433 was already in use by the legacy SQL 2000 default instance. Doh! What I want is to have both servers available simultaneously via TCP. The two servers need not be on the same port, but if I cannot run them on the same port, then how do I configure the clients? Is there not some kind of proxy available that can monitor port 1433 and pass the request through to the correct SQL instance by name? Is this capability built into SQL Server already? Thanks, Jim

    Read the article

  • Windows 7 on a 64-bit computer

    - by GetFree
    I read on Wikipedia that Windows 7 on a 64-bit PC needs twice as much RAM as on a 32-bit PC. I understand why that is: every number stored in memory takes 8 bytes rather than just 4. That, in simple terms, means your amount of RAM is effectively reduced to half when you use Windows 7 on a 64-bit computer. Now, I have an Intel Core 2 Duo laptop currently running Windows Vista (2 GB of RAM). My question is: since Core 2 is a 64-bit architecture, if I upgrade to Windows 7, will my laptop work as if it had just 1 GB of RAM? Or, to say it in other words: on a 64-bit PC with Windows 7, do you need twice as much RAM as on a 32-bit PC to get the same performance? If I am right, then I'd say it's a terrible business to have a 64-bit computer and Windows 7 on it (I hope I am mistaken, though).

    Follow-up: After some answers, I'm realizing it's not the same thing to have a 32-bit OS on a 64-bit PC as a 64-bit OS on a 64-bit PC. Apparently, the question of Windows 7 requiring twice as much RAM on 64-bit architectures applies when both the OS and the PC are 64-bit. I'd like new answers to address this issue. Also, is it possible to have more than 4 GB of RAM on a 64-bit PC using a 32-bit version of Windows?

    Read the article

  • OpenSolaris / Nexenta problems with NetXen 4-port NIC card (ntxn driver)

    - by ewwhite
    Hello, I'm running NexentaStor Enterprise on an HP ProLiant DL180 G6 server. The onboard NIC interfaces surface as igb0 and igb1 and work well. However, I've added an HP NC375T 4-port network card using the NetXen 3031 chipset. This card should be handled by the ntxn driver in the SUNWntxn package, but that results in "ntxn0: failed to map doorbell" messages upon boot, and the network interfaces don't show up. After some research, I found HP's driver package for the card. The release notes for the driver package state:

        This version of the Driver is supported only on Oracle Solaris 10 5/09 & 10/09.
        Oracle Solaris 10 5/09 & 10/09 contain an older version of NetXen P3 driver
        package called SUNWntxn. So, adding another version of NetXen P3 driver
        package using pkgadd command might result in conflicts with the NetXen
        driver binary & related files. Users are advised to uninstall native
        SUNWntxn driver package before installing the new package.

    The install completes, but I end up with a different set of errors when initializing the card:

        # ifconfig ntxn0 plumb
        ifconfig: cannot open link "ntxn0": DLPI link does not exist

    dmesg output:

        Jan 29 07:20:17 ch-san2 ntxn: [ID 977263 kern.warning] WARNING: Memory not available
        Jan 29 07:20:17 ch-san2 ntxn: [ID 404858 kern.notice] NOTICE: ntxn0: Mac registration error

    Trying to manually create the device files:

        root@ch-san2:/volumes# add_drv -i "4040,100" ntxn
        ("ntxn") already in use as a driver or alias.

    Updating the driver:

        root@ch-san2:/volumes# update_drv -f ntxn
        devfsadm: driver failed to attach: ntxn
        Warning: Driver (ntxn) successfully added to system but failed to attach

    Any ideas on how to get this driver working, or should I ditch the card and go with an Intel or something else?

    Read the article

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to .csv files data like CPU % time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers are also collecting SQL stats; the web servers are collecting .NET-relevant stuff. I am aware of PAL, and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, though it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data and assemble a graph out of it and look for spikes and trends, but I'm convinced there is some technique that makes this more manageable than staring at a monstrous spreadsheet of numbers and trying to graph it. Plus, that's pretty time-consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that those are 5-minute averages, so a server could have an intermittent problem that simply washes out in the average. What do you do with perfmon data that I could learn from?

    Read the article

  • fedora12, yum not releasing "lock" after performing an action

    - by James.Elsey
    Hello, this problem has been occurring quite frequently recently and I can't seem to find a way of preventing it. Whenever I perform an action with yum, such as installing or removing software, it appears to execute successfully, but then I'm unable to move on to the next yum command. For example, I executed yum remove skype and it appeared to remove the package fine, but when I next try yum search skype, it appears that yum is still processing, and I have to manually kill that process via kill 1234 (or whatever the PID is). My output is as follows:

        [root@nevada james]# yum remove skype
        Loaded plugins: presto, refresh-packagekit
        Setting up Remove Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package skype.i586 0:2.1.0.47-fc10 set to be erased
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package        Arch       Version                Repository             Size
        ================================================================================
        Removing:
         skype          i586       2.1.0.47-fc10          installed              24 M

        Transaction Summary
        ================================================================================
        Remove        1 Package(s)
        Reinstall     0 Package(s)
        Downgrade     0 Package(s)

        Is this ok [y/N]: y
        Downloading Packages:
        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Test Succeeded
        Running Transaction
          Erasing        : skype-2.1.0.47-fc10.i586                             1/1

        Removed:
          skype.i586 0:2.1.0.47-fc10

        Complete!
        [root@nevada james]# yum search skype
        Loaded plugins: presto, refresh-packagekit
        Existing lock /var/run/yum.pid: another copy is running as pid 3639.
        Another app is currently holding the yum lock; waiting for it to exit...
          The other application is: PackageKit
            Memory :  79 M RSS (372 MB VSZ)
            Started: Fri Dec 18 08:39:18 2009 - 00:01 ago
            State  : Sleeping, pid: 3639

    Kernel version: 2.6.31.6-166.fc12.x86_64. Any ideas how I can prevent this behaviour? Thanks.

    Read the article

< Previous Page | 516 517 518 519 520 521 522 523 524 525 526 527  | Next Page >