Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • How to accommodate the iPhone 4 screen resolution?

    - by dontWatchMyProfile
    This is a programming question! Read on before you vote to close! According to Apple, the iPhone 4 has a new and better screen resolution: a 3.5-inch (diagonal) widescreen Multi-Touch display with 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most - if not all - developers do is design everything in such a way that a touchable area is - for example - 50 x 50 pixels big. Just enough to tap it. Things have been positioned relative to the upper left, to reach a specific position on screen - let's say the center, or somewhere at the bottom. Edit: It seems Apple has integrated a switch that lets you tell if an app is highRes or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they do, they would suffer a lot from images four times the size, having to scale them down in memory.

    Read the article

  • Several Objective-C objects become Invalid for no reason, sometimes.

    - by farnsworth
    - (void)loadLocations { NSString *url = @"<URL to a text file>"; NSStringEncoding enc = NSUTF8StringEncoding; NSString *locationString = [[NSString alloc] initWithContentsOfURL:[NSURL URLWithString:url] usedEncoding:&enc error:nil]; NSArray *lines = [locationString componentsSeparatedByString:@"\n"]; for (int i=0; i<[lines count]; i++) { NSString *line = [lines objectAtIndex:i]; NSArray *components = [line componentsSeparatedByString:@", "]; Restaurant *res = [byID objectForKey:[components objectAtIndex:0]]; if (res) { NSString *resAddress = [components objectAtIndex:3]; NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1], [components objectAtIndex:2]]; [res.locationCoords setObject:loc forKey:resAddress]; } else { NSLog([[components objectAtIndex:0] stringByAppendingString:@" res id not found."]); } } } There are a few weird things happening here. First, at the two lines where the NSArray lines is used, this message is printed to the console- *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFDictionary count]: method sent to an uninitialized mutable dictionary object' which is strange since lines is definitely not an NSMutableDictionary, definitely is initialized, and because the app doesn't crash. Also, at random points in the loop, all of the variables that the debugger can see will become Invalid. Local variables, property variables, everything. Then after a couple lines they will go back to their original values. setObject:forKey never has an effect on res.locationCoords, which is an NSMutableDictionary. I'm sure that res, res.locationCoords, and byId are initialized. I also tried adding a retain or copy to lines, same thing. I'm sure there's a basic memory management principle I'm missing here but I'm at a loss.

    Read the article

  • Please critique this method

    - by Jakob
    Hi, I've been looking around the net for some tab button close functionality, but all those solutions had some complicated event handler, and I wanted to try and keep it simple, but I might have broken good coding practice doing so, so please review this method and tell me what is wrong. public void AddCloseItem(string header, object content){ //Create tabitem with header and content StackPanel headerPanel = new StackPanel() { Orientation = Orientation.Horizontal, Height = 14}; headerPanel.Children.Add(new TextBlock() { Text = header }); Button closeBtn = new Button() { Content = new Image() { Source = new BitmapImage(new Uri("images/cross.png", UriKind.Relative)) }, Margin = new Thickness() { Left = 10 } }; headerPanel.Children.Add(closeBtn); TabItem newTabItem = new TabItem() { Header = headerPanel, Content = content }; //Add close button functionality closeBtn.Tag = newTabItem; closeBtn.Click += new RoutedEventHandler(closeBtn_Click); //Add item to list this.Add(newTabItem); } void closeBtn_Click(object sender, RoutedEventArgs e) { this.Remove((TabItem)((Button)sender).Tag); } So what I'm doing is storing the TabItem in the btn.Tag property, and then when the button is clicked I just remove the TabItem from my ObservableCollection, and the UI is updated appropriately. Am I using too much memory saving the TabItem to the Tag property?

    Read the article

  • PHP in Wordpress Posts - Is this okay?

    - by Thomas
    I've been working with some long lists of information and I've come up with a good way to post it in various formats in my WordPress blog posts. I installed the exec-PHP plugin, which allows you to run PHP in posts. I then created a new table (NEWTABLE) in my WordPress database and filled that table with names, scores, and other stuff. I was then able to use some pretty simple code to display the information in a WordPress post. Below is an example, but you could really do whatever you wanted. My question is: is there a problem with doing this, with security or memory? I could just type out all the information in each post, but this is really much nicer. Any thoughts are appreciated. <?php $theResult = mysql_query("SELECT * FROM NEWTABLE WHERE Score < 100 ORDER BY LastName"); while($row = mysql_fetch_array($theResult)) { echo $row['FirstName']; echo " " . $row['LastName']; echo " " . $row['Score']; echo "<br />"; } ?>

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have: Database Server - physical box, 12 GB RAM, 2 X Quad Core CPU (2.13 GHz Xeon E5506) 2 Web / App servers - cloud servers, 2 GB RAM, 2 VCPUs 300 GB monthly bandwidth I am paying around $900 / month for this. My web / app servers are busting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config: Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units 2 Web / App servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units If I go with 1 year reserved instances, my upfront cost would be $4,500 and my monthly would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O. Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • Loading JavaScript files into a Firefox extension

    - by colon3l
    Hi! Let's get straight to the problem: I'm building a Firefox extension in which I would like to implement the jWebSocket API in order to build a small chat. I have my main script file, named test.js, and the jWebSocket lib in a js folder. Just so you know, this is my first Firefox extension ever. So in my XUL file I have this (for the script part only, of course; the interface code is not shown): <overlay id="test-overlay" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"> <script type="application/x-javascript" src="chrome://test/content/test.js" /> <script type="application/x-javascript" src="chrome://test/content/js/jwebsocket.js" /> jwebsocket.js being the file I need to include according to the jWebSocket website. In my main script file test.js I start with: if (jws.browserSupportsWebSockets()) { jWebSocketClient = new jws.jWebSocketJSONClient(); } else { var lMsg = jws.MSG_WS_NOT_SUPPORTED; alert(lMsg); } jws being the namespace created in the jwebsocket.js file. Of course I've got the required stand-alone server running in the background, and working. From what I understood looking at various websites, if a JS file is loaded into the allocated JavaScript memory space (with the script tag), all namespaces/functions should be available between files. But this was mostly for HTML-oriented issues, so I'm not sure whether it applies to the XUL/Firefox environment. But the script keeps failing at the first jws call. Any ideas on what's going wrong here? I've been stuck for 2 days now :/

    Read the article

  • Trouble with unions in a C program.

    - by Jordan S
    I am working on a C program that uses a union. The union definition is in the FILE_A header file and looks like this... // FILE_A.h**************************************************** xdata union { long position; char bytes[4]; }CurrentPosition; If I set the value of CurrentPosition.position in FILE_A.c and then call a function in FILE_B.c that uses the union, the data in the union is back to zero. This is demonstrated below. // FILE_A.c**************************************************** int main(void) { CurrentPosition.position = 12345; SomeFunctionInFileB(); } // FILE_B.c**************************************************** void SomeFunctionInFileB(void) { // After the following lines execute I see all zeros in the flash memory. WriteByteToFlash(CurrentPosition.bytes[0]); WriteByteToFlash(CurrentPosition.bytes[1]); WriteByteToFlash(CurrentPosition.bytes[2]); WriteByteToFlash(CurrentPosition.bytes[3]); } Now, if I pass a long to SomeFunctionInFileB(long temp) and then store it into CurrentPosition.bytes within that function, and finally call WriteByteToFlash(CurrentPosition.bytes[n])... it works just fine. It appears as though the CurrentPosition union is not global. So I tried changing the union definition in the header file to include the extern keyword like this... extern xdata union { long position; char bytes[4]; }CurrentPosition; and then putting this in the source (.c) file... xdata union { long position; char bytes[4]; }CurrentPosition; but this causes a compile error that says: C:\SiLabs\Optec Programs\AgosRot\MotionControl.c:76: error 91: extern definition for 'CurrentPosition' mismatches with declaration. C:\SiLabs\Optec Programs\AgosRot\/MotionControl.h:48: error 177: previously defined here So what am I doing wrong? How do I make the union global?

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - has much more to do w/ colors and their representation. I am working on a very low level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel: // intensity is the weight I talked about: 0 to 255 intensity = glyphs[text[i]][x + GLYPH_WIDTH*y]; if (intensity == 255) continue; // Don't draw it, fully transparent else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color else { // Here's the tricky part // Get the pixel in the destination for averaging purposes pixel = getPixel(x + xi, y + yi, base); // transfer is an int for calculations transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2); // This is my attempt at averaging newPixel.red = (Byte) transfer; transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2); newPixel.green = (Byte) transfer; // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2); transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2); newPixel.blue = (Byte) transfer; // Set the newpixel in the desired mem. position setPixel(x+xi, y+yi, newPixel, base); } The results, as you can see, are less than desirable. That is a very zoomed in image, at 1:1 scale it looks like the text has a green "aura". Any idea for how to properly compute this would be greatly appreciated. Thanks for your time!
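
    For reference (a general note, not taken from the original post, and assuming the stated convention of 0 = fully transparent and 255 = fully opaque), the standard "over" compositing rule blends each channel as a weighted sum rather than an average:

        \[
        \mathrm{out}_c = \frac{\alpha \cdot \mathrm{fg}_c + (255 - \alpha) \cdot \mathrm{bg}_c}{255}
        \]

    Dividing by 2 at the end of the snippet instead gives the background a fixed weight irrespective of coverage, which is one way a colored fringe can appear around glyph edges.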

    Read the article

  • When do instance variables get initialized and values assigned?

    - by AKh
    When does the instance variable get initialized? Is it after the constructor block is done or before it? Consider this example: public abstract class Parent { public Parent(){ System.out.println("Parent Constructor"); init(); } public void init(){ System.out.println("parent Init()"); } } public class Child extends Parent { private Integer attribute1; private Integer attribute2 = null; public Child(){ super(); System.out.println("Child Constructor"); } public void init(){ System.out.println("Child init()"); super.init(); attribute1 = new Integer(100); attribute2 = new Integer(200); } public void print(){ System.out.println("attribute 1 : " +attribute1); System.out.println("attribute 2 : " +attribute2); } } public class Tester { public static void main(String[] args) { Parent c = new Child(); ((Child)c).print(); } } OUTPUT: Parent Constructor Child init() parent Init() Child Constructor attribute 1 : 100 attribute 2 : null When is the memory for attributes 1 & 2 allocated on the heap? Curious to know why attribute 2 is null. Are there any design flaws?
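
    The output above follows from Java's initialization order: the superclass constructor runs first (and here it calls the overridden init(), which sets both fields), and only afterwards do the subclass's instance field initializers run - so the explicit "= null" on attribute2 overwrites 200, while attribute1, which has no initializer expression, keeps 100. A minimal standalone sketch of the same ordering (not taken from the original post):

        class Base {
            Base() {
                hook();                        // runs before Derived's field initializers
            }
            void hook() {}
        }

        class Derived extends Base {
            int a;                             // no initializer expression: keeps the value set in hook()
            int b = 0;                         // explicit initializer: runs after Base(), overwriting hook()'s value
            @Override void hook() { a = 100; b = 200; }

            public static void main(String[] args) {
                Derived d = new Derived();
                System.out.println(d.a + " " + d.b);   // prints: 100 0
            }
        }

    Calling an overridable method from a constructor, as Parent does, is generally discouraged for exactly this reason.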

    Read the article

  • SIGABRT on iPhone when changing xib

    - by Boz
    I've just finished off an app for the iPhone which, until today, ran fine on the iPhone simulator and actual devices. I tried changing the xib which is loaded in the applicationDidFinishLaunching method in my application delegate class - all I did was change the string in initWithNibName. When I launch the app on the simulator, the Default.png image is shown, then the app crashes with an uncaught exception. When running on a device, the Default.png image is shown for about 10 seconds, the UI is never loaded and I get 'GDB: Program received signal: "SIGABRT".' on the Xcode status bar. Debugging shows that applicationDidFinishLaunching is never actually reached before the app crashes. Setting the starting xib back to the original solves the issue, but now I've made a change and saved it in Interface Builder and the app shows the same issues as above - I've made no code changes at all. Is this a memory issue, a known issue, or a common mistake? NOTE: I've made no code changes whatsoever, and the only changes I've made to the xib are cosmetic; the IBOutlets are all intact.

    Read the article

  • Why is my UITableView not being set?

    - by Jamie L
    I checked using the debugger in the viewDidLoad method and tracerTableView is 0x0, which I assume means it is nil. I don't understand. I should go ahead and say yes, I have already checked my nib file, and yes, all the connections are correct. Here is the header file and the beginning of the .m. ///////////// .h file //////////// @interface TrackerListController : UITableViewController { // The mutable (modifiable) dictionary days holds all the data for the days tab NSMutableArray *trackerList; UITableView *tracerTableView; } @property (nonatomic, retain) NSMutableArray *trackerList; @property (nonatomic, retain) IBOutlet UITableView *tracerTableView; //The addPackage: method is invoked when the user taps the addbutton created at runtime. -(void) addPackage : (id) sender; @end /////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////// .m file ////////////////////////////////////// @implementation TrackerListController @synthesize trackerList, tracerTableView; - (void)viewDidLoad { [super viewDidLoad]; self.title = @"Package Tracker"; self.navigationItem.leftBarButtonItem = self.editButtonItem; UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(addPackage:)]; // Set up the Add custom button on the right of the navigation bar self.navigationItem.rightBarButtonItem = addButton; [addButton release]; // Release the addButton from memory since it is no longer needed }

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request and then read the stream into ExpertPDF and then write the bits to file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but oftentimes things are getting cached. Take, for example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broken because the CSS doesn't exist. But if I request that page through the PDF generator, it still looks OK, which means somewhere the CSS is cached. Here's the relevant PDF creation code: // Create a request HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url); request.UserAgent = "IE 8.0"; request.ContentType = "application/x-www-form-urlencoded"; request.Method = "GET"; // Send the request HttpWebResponse resp = (HttpWebResponse)request.GetResponse(); if (resp.IsFromCache) { System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!"); } else { System.Web.HttpContext.Current.Trace.Write("not from cache"); } // Read the response pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf"); When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to do a hard refresh of all resources? UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to use this inelegant solution?

    Read the article

  • Java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16. VM settings: -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024 -Xmx1024M -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also checked) -XX:MaxPermSize=128M. OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2000 MHz. My application description: high intensity of concurrent (Java 5 concurrency used) operations, each completed by a commit to Oracle. It's about 20-30 threads running non-stop, doing tasks. The application runs in the JBoss web container. My GC starts out working normally: I see a lot of small GCs and all that time the CPU shows a good load, like all 4 cores loaded to 40-50%, and the CPU graph is stable. Then, after 1 min of good work, the CPU starts to drop to 0% on 2 of the 4 cores, its graph becomes unstable, goes up and down ("teeth"). I see that my threads work slower (I have monitoring), and I see that the GC starts to produce a lot of full GCs during that time; for the next 4-5 minutes this situation remains as is, then for a short period of time, like 1 minute, it gets back to normal, but shortly after that the whole bad thing repeats. Question: Why do I have such frequent full GCs? How do I prevent that? I played with SurvivorRatio - it does not help. I noticed that the application behaves normally until the first full GC occurs, while I have enough memory. Then it runs badly. My GC log (starts well, then a long period of full GCs - many of them): 1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs] 1029.333: [GC 803279K(991232K), 0.0927470 secs] 1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs] 1030.634: [GC 625957K(991232K), 0.0763656 secs] 1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs] 1033.281: [GC 649899K(991232K), 0.0378358 secs] 1035.910: [GC 813948K(991232K), 0.3540375 secs] 1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs] 1038.435: [GC 710309K(991232K), 0.1370703 secs] 1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs] 1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs] 1044.093: [GC 620103K(991232K), 0.0695221 secs] 1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs] 1053.739: [GC 942140K(991232K), 0.5410483 secs] 1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs] 1061.257: [GC 786274K(991232K), 0.3106603 secs] 1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs] 1071.192: [GC 945999K(991232K), 0.5401515 secs] 1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs] 1079.754: [GC 936641K(991232K), 0.5321197 secs]

    Read the article

  • Is there a difference between starting an application from the OS and starting it from adb?

    - by aruwen
    I do have a curious error in my application. My app crashes (don't mind the crash, I roughly know why - classloader) when I start the application from the OS directly, then kill it from the background via any Task Killer (this is one of the few ways to reproduce the crash consistently - simulating the OS freeing memory and closing the application) and try to restart it again. The thing is, if I start the application via adb shell using the following command: adb shell am start -a android.intent.action.MAIN -n com.my.packagename/myLaunchActivity I cannot reproduce the crash. So is there any difference in how Android OS calls the application as opposed to the above call? EDIT: added the manifest (just changed names) <?xml version="1.0" ?> <manifest android:versionCode="5" android:versionName="1.05" package="com.my.sample" xmlns:android="http://schemas.android.com/apk/res/android"> <uses-sdk android:minSdkVersion="7"/> <application android:icon="@drawable/square_my_logo" android:label="@string/app_name"> <activity android:label="@string/app_name" android:name="com.my.InfoActivity" android:screenOrientation="landscape"></activity> <activity android:label="@string/app_name" android:name="com.my2.KickStart" android:screenOrientation="landscape"/> <activity android:label="@string/app_name" android:name="com.my2.Launcher" android:screenOrientation="landscape"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> </application> <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/></manifest> starting the com.my2.Launcher from the adb shell

    Read the article

  • C# creating a class having objects as member variables - I think the objects are garbage collected

    - by Bryan
    So I have a class that has the following member variables. I have get and set functions for every piece of data in this class. public class NavigationMesh { public Vector3 node; int weight; bool isWall; bool hasTreasure; public NavigationMesh(int x, int y, int z, bool setWall, bool setTreasure) { //default constructor //Console.WriteLine(x + " " + y + " " + z); node = new Vector3(x, y, z); //Console.WriteLine(node.X + " " + node.Y + " " + node.Z); isWall = setWall; hasTreasure = setTreasure; weight = 1; }// end constructor public float getX() { Console.WriteLine(node.X); return node.X; } public float getY() { Console.WriteLine(node.Y); return node.Y; } public float getZ() { Console.WriteLine(node.Z); return node.Z; } public bool getWall() { return isWall; } public void setWall(bool item) { isWall = item; } public bool getTreasure() { return hasTreasure; } public void setTreasure(bool item) { hasTreasure = item; } public int getWeight() { return weight; } }// end class In another class, I have a 2-dim array that looks like this: NavigationMesh[,] mesh; mesh = new NavigationMesh[502,502]; I use a double for loop to assign this; my problem is that I cannot get the data I need out of the Vector3 node object after I create this object in my array with my "getters". I've tried making the Vector3 a static variable; however, I think it then refers to the last instance of the object. How do I keep all of these objects in memory? I think they're being garbage collected. Any thoughts?

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions has many Payments In my result set, I need the following aggregate values: Number of unique contributors (DonorID on the Contribution table) Total contributed (SUM of Amount on Contribution table) Total paid (SUM of PaymentAmount on Payment table) Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options: Using subqueries: SELECT Project.ID AS PROJECT_ID, (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack, (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount, (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived FROM Project; Using a temporary table: DROP TABLE IF EXISTS Project_Temp; CREATE TEMPORARY TABLE Project_Temp (project_id INT NOT NULL, total_payments INT, total_donors INT, total_received INT, PRIMARY KEY(project_id)) ENGINE=MEMORY; INSERT INTO Project_Temp (project_id,total_payments) SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0) FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID GROUP BY 1; INSERT INTO Project_Temp (project_id,total_donors,total_received) SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0) FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID GROUP BY 1 ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received); SELECT * FROM Project_Temp; Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • Very slow guards in my monadic random implementation (Haskell)

    - by danpriduha
    Hi! I tried to write a random number generator implementation, based on a number class. I also added Monad and MonadPlus instances there. What does "MonadPlus" mean, and why do I add this instance? Because I want to use guards, like here: -- test.hs -- import RandomMonad import Control.Monad import System.Random x = Rand (randomR (1 ::Integer, 3)) ::Rand StdGen Integer y = do a <-x guard (a /=2) guard (a /=1) return a here come the RandomMonad.hs file contents: -- RandomMonad.hs -- module RandomMonad where import Control.Monad import System.Random import Data.List data RandomGen g => Rand g a = Rand (g ->(a,g)) | RandZero instance (Show g, RandomGen g) => Monad (Rand g) where return x = Rand (\g ->(x,g)) (RandZero)>>= _ = RandZero (Rand argTransformer)>>=(parametricRandom) = Rand funTransformer where funTransformer g | isZero x = funTransformer g1 | otherwise = (getRandom x g1,getGen x g1) where x = parametricRandom val (val,g1) = argTransformer g isZero RandZero = True isZero _ = False instance (Show g, RandomGen g) => MonadPlus (Rand g) where mzero = RandZero RandZero `mplus` x = x x `mplus` RandZero = x x `mplus` y = x getRandom :: RandomGen g => Rand g a ->g ->a getRandom (Rand f) g = (fst (f g)) getGen :: RandomGen g => Rand g a ->g -> g getGen (Rand f) g = snd (f g) When I run the ghci interpreter and give the following command: getRandom y (mkStdGen 2000000000) I see a memory overflow on my computer (1 GB). This is not expected, and if I delete one guard, it works very fast. Why does it work so slowly in this case? What am I doing wrong?

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong - let's say I have two classes A & B: class A { public: A(){cout<<"A constructor"<<endl;} ~A(){cout<<"A destructor"<<endl;} }; class B:public A { public: B(){cout<<"B constructor"<<endl;} ~B(){cout<<"B destructor"<<endl;} }; 1) The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also the last line in B's destructor will be a call to A's destructor (the compiler does it again). Why was it built this way? 2) When I say A * a = new B(); the compiler creates a new B object and checks to see if A is a base class of B, and if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (with help from @Tyler McHenry, @Konrad Rudolph) 3) When I write delete a the compiler sees that a is an object of type A, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land. Please comment.

    Read the article

  • Can I get rid of this read lock?

    - by Pieter
    I have the following helper class (simplified): public static class Cache { private static readonly object _syncRoot = new object(); private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>(); public static void Add(Type type, string value) { lock (_syncRoot) { _lookup.Add(type, value); } } public static string Lookup(Type type) { string result; lock (_syncRoot) { _lookup.TryGetValue(type, out result); } return result; } } Add will be called roughly 10/100 times in the application and Lookup will be called by many threads, many thousands of times. What I would like is to get rid of the read lock. How do you normally get rid of the read lock in this situation? I have the following ideas: Require that _lookup is stable before the application starts operation. This could be built up from an Attribute. This is done automatically through the static constructor the attribute is assigned to. Requiring the above would require me to go through all types that could have the attribute and call RuntimeHelpers.RunClassConstructor, which is an expensive operation; Move to COW semantics. public static void Add(Type type, string value) { lock (_syncRoot) { var lookup = new Dictionary<Type, string>(_lookup); lookup.Add(type, value); _lookup = lookup; } } (With the lock (_syncRoot) removed in the Lookup method.) The problem with this is that this uses an unnecessary amount of memory (which might not be a problem) and I would probably make _lookup volatile, but I'm not sure how this should be applied. (Jon Skeet's comment here gives me pause.) Using ReaderWriterLock. I believe this would make things worse since the region being locked is small. Suggestions are very welcome.
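
    The copy-on-write idea in option 2 is largely language-agnostic. As a point of comparison only (a minimal sketch, not the asker's code, written in Java, where a volatile reference gives readers a safely published snapshot), the pattern looks like this, with no lock at all on the read path:

        import java.util.HashMap;
        import java.util.Map;

        final class CowCache {
            private static final Object LOCK = new Object();
            // volatile: readers always see a fully built map once the reference is swapped
            private static volatile Map<Class<?>, String> table = new HashMap<Class<?>, String>();

            static void add(Class<?> type, String value) {
                synchronized (LOCK) {                      // writers still serialize
                    Map<Class<?>, String> copy = new HashMap<Class<?>, String>(table);
                    copy.put(type, value);
                    table = copy;                          // publish the new snapshot
                }
            }

            static String lookup(Class<?> type) {
                return table.get(type);                    // lock-free read
            }
        }

    Whether C#'s volatile gives the equivalent guarantee for this exact pattern is what the linked comment questions, so treat this as an illustration of the shape of the solution rather than a drop-in answer.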

    Read the article

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events: 1) A message that triggers a long-running job is published to queue Q1. Call this M1. 2) M1 is dispatched to consumer C1, kicking off a long operation. 3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3. 4) M2 is dispatched to C2 which quickly runs the short job. 5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched. Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.

    Read the article

  • Access class instance "name" dynamically in Python

    - by user328317
    In plain English: I am creating class instances dynamically in a for loop; the class then defines a few attributes for the instance. I need to later be able to look up those values in another for loop. Sample code: class A: def __init__(self, name, attr): self.name=name self.attr=attr names=("a1", "a2", "a3") x=10 for name in names: name=A(name, x) x += 1 ... ... ... for name in names: print name.attr How can I create an identifier for these instances so they can be accessed later on by "name"? I've figured out a way to get this by associating "name" with the memory location: class A: instances=[] names=[] def __init__(self, name, attr): self.name=name self.attr=attr A.instances.append(self) A.names.append(name) names=("a1", "a2", "a3") x=10 for name in names: name=A(name, x) x += 1 ... ... ... for name in names: index=A.names.index(name) print "name: " + name print "attr: " + str(A.instances[index].attr) This has had me scouring the web for 2 days now, and I have not been able to find an answer. Maybe I don't know how to ask the question properly, or maybe it can't be done (as many other posts seemed to be suggesting). Now this 2nd example works, and for now I will use it. I'm just thinking there has to be an easier way than creating your own makeshift dictionary of index numbers, and I'm hoping I didn't waste 2 days looking for an answer that doesn't exist. Anyone have anything? Thanks in advance, Andy Update: A coworker just showed me what he thinks is the simplest way, and that is to make an actual dictionary of class instances using the instance "name" as the key.

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one project for the production code and another project for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find the executable, so it definitely is trying to find it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, as DLLs share memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and then the projects that use the SDK be compiled to an executable?

    Read the article

  • Implementing a Java FixedThreadPool status listener

    - by InsertNickHere
    Hi there, it's about an application which is supposed to process (VAD, loudness, clipping) a lot of sound files (e.g. 100k). At this time, I create as many worker threads (callables) as I can put into memory, then run them all with threadPool.invokeAll(), write the results to the file system, unload the processed files and continue at step 1. Because it's an app with a GUI, I don't want the user to feel like the app "is not responding" while processing all the sound files (which it does at this time, because invokeAll is blocking). It shall not be possible for the user to do other things while processing, but I'd like to show a progress bar like "10 of 100000 sound files are done". So how do I get there? Do I have to create a "watcher thread", so that every worker holds a callback on it? I'm quite new to multithreading and don't get the idea of such a mechanism. If you need to know: I'm using SWT/JFace. Regards, InsertNickHere
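
    One common shape for this (a minimal sketch, not the asker's code: the ProgressBar and Display names are assumptions, the per-file work is wrapped as a plain Runnable for brevity, and the bar is assumed to use its default 0-100 range) is to submit the jobs individually instead of blocking in invokeAll(), and let every finished task bump a shared counter and post a UI update onto the SWT thread with Display.asyncExec:

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicInteger;
        import org.eclipse.swt.widgets.Display;
        import org.eclipse.swt.widgets.ProgressBar;

        public class SoundFileProcessor {
            // Runs all jobs on a fixed pool and reports progress to an SWT ProgressBar.
            public void processAll(List<Runnable> jobs, final Display display, final ProgressBar bar) {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                final AtomicInteger done = new AtomicInteger();
                final int total = jobs.size();
                for (final Runnable job : jobs) {
                    pool.submit(new Runnable() {
                        public void run() {
                            job.run();                                 // the actual VAD/loudness/clipping work
                            final int finished = done.incrementAndGet();
                            display.asyncExec(new Runnable() {         // hop onto the SWT UI thread
                                public void run() {
                                    bar.setSelection(finished * 100 / total);
                                }
                            });
                        }
                    });
                }
                pool.shutdown();   // returns immediately, so the GUI thread never blocks on the pool
            }
        }

    Because submit() returns at once and the counter is updated from the worker threads, the UI stays responsive and the bar advances as files finish; disabling the rest of the UI while the pool drains keeps the user from starting other work in the meantime.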

    Read the article

  • PHP GD imagecreatefromstring discards transparency

    - by meustrus
    I've been trying to get transparency to work with my application (which dynamically resizes images before storing them) and I think I've finally narrowed down the problem after much misdirection about imagealphablending and imagesavealpha. The source image is never loaded with proper transparency! // With this line, the output image has no transparency (where it should be // transparent, colors bleed out randomly or it's completely black, depending // on the image) $img = imagecreatefromstring($fileData); // With this line, it works as expected. $img = imagecreatefrompng($fileName); // Blah blah blah, lots of image resize code into $img2 goes here; I finally // tried just outputting $img instead. header('Content-Type: image/png'); imagealphablending($img, FALSE); imagesavealpha($img, TRUE); imagepng($img); imagedestroy($img); It would be some serious architectural difficulty to load the image from a file; this code is being used with a JSON API that gets queried from an iPhone app, and it's easier in this case (and more consistent) to upload images as base64-encoded strings in the POST data. Do I absolutely need to somehow store the image as a file (just so that PHP can load it into memory again)? Is there maybe a way to create a Stream from $fileData that can be passed to imagecreatefrompng?

    Read the article

  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a list: my @list = (); my $regex = qr/ABC/; push @list, { 'one' => $regex }; push @list, { 'two' => $regex }; use Data::Dumper; print Dumper(\@list); I'd expect: $VAR1 = [ { 'one' => qr/(?-xism:ABC)/ }, { 'two' => qr/(?-xism:ABC)/ } ]; But instead we get a circular reference: $VAR1 = [ { 'one' => qr/(?-xism:ABC)/ }, { 'two' => $VAR1->[0]{'one'} } ]; This will happen with indefinitely nested hash references and shallowly copied $regex. I'm assuming the basic reason is that precompiled regexes are actually references, and references inside the same list structure are compacted as an optimization (\$scalar behaves the same way). I don't entirely see the utility of doing this (presumably a reference to a reference has the same memory footprint), but maybe there's a reason based on the internal representation Is this the correct behavior? Can I stop it from happening? Aside from probably making GC more difficult, these circular structures create pretty serious headaches. For example, iterating over a list of queries that may sometimes contain the same regular expression will crash the MongoDB driver with a nasty segfault (see https://rt.cpan.org/Public/Bug/Display.html?id=58500)

    Read the article
