Search Results

Search found 17550 results on 702 pages for 'real world'.

  • OpenGL ES Polygon with Normals rendering (Note the 'ES'!)

    - by MarqueIV
    Ok... imagine I have a relatively simple solid that has six distinct normals but actually has close to 48 faces (8 faces per direction), and there are a LOT of shared vertices between faces. What's the most efficient way to render that in OpenGL? I know I can place the vertices in an array, then use an index array to render them, but I have to keep breaking my rendering steps down to change the normals (i.e. set normal 1... render 8 faces... set normal 2... render 8 faces, etc.). Because of that I have to maintain an array of index arrays... one for each normal! Not good!

    The other way I can do it is to use separate normal and vertex arrays (or even interleave them), but that means I need a one-to-one ratio of normals to vertices, and that means the normals would be duplicated 8 times more than they need to be! On something with a spherical or even curved surface every normal most likely is different, but for this it really seems like a waste of memory.

    In a perfect world I'd like my vertex and normal arrays to have different lengths, and then, when I go to draw my triangles or quads, to specify the index into each array for that vertex. The OBJ file format lets you specify exactly that: a vertex array and a normal array of different lengths, and when you specify the face you are rendering, you specify a vertex index and a normal index (as well as a UV coordinate if you are using textures), which seems like the perfect solution! 48 vertices but only 8 normals, then pairs of indexes defining the shape's faces. But I'm not sure how to render that in OpenGL ES (again, note the 'ES'). Currently I have to 'denormalize' (sorry for the SQL pun there) the normals back to a one-to-one ratio with the vertex array, then render. That just wastes memory to me. Can anyone help? I hope I'm missing something very simple here. Mark
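    A note for context: OpenGL ES draw calls take a single index per vertex, so OBJ-style separate position/normal indices are usually expanded into unique (position, normal) combinations before upload; a position is only duplicated when it really is used with a different normal, which for a faceted solid like this keeps the expansion close to minimal. A sketch of that expansion in C++ (all names and types here are illustrative, not from the question):

        #include <cstdint>
        #include <map>
        #include <utility>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // OBJ-style input: separate position/normal pools plus per-corner
        // (positionIndex, normalIndex) pairs. Output: one interleaved pool and
        // a single index list, which is what GL ES draw calls expect. Combined
        // index i refers to outInterleaved[2*i] (position) and [2*i+1] (normal).
        void buildSingleIndex(const std::vector<Vec3>& positions,
                              const std::vector<Vec3>& normals,
                              const std::vector<std::pair<int, int>>& corners,
                              std::vector<Vec3>& outInterleaved,
                              std::vector<std::uint16_t>& outIndices)
        {
            std::map<std::pair<int, int>, std::uint16_t> seen; // pair -> combined index
            for (const auto& c : corners) {
                auto it = seen.find(c);
                if (it == seen.end()) {
                    it = seen.emplace(c, static_cast<std::uint16_t>(seen.size())).first;
                    outInterleaved.push_back(positions[c.first]);
                    outInterleaved.push_back(normals[c.second]);
                }
                outIndices.push_back(it->second);
            }
        }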

  • BlackBerry OS 7.1 secured TLS connection is closed after very short time

    - by MrVincenzo
    To make a long story short: same client-server configuration, same network topology, same device (Bold 9900); it works perfectly well on OS 7.0 but doesn't work as expected on OS 7.1, and the secured TLS connection is being closed by the server after a very short time.

    My application opens a secured TLS connection to a server. The connection is kept alive by an application-layer keep-alive mechanism and remains open until the client closes it. Below is a simplified version of the actual code that opens the connection and reads from the socket. The code works perfectly on OS 5.0-7.0 but doesn't work as expected on OS 7.1. When running on OS 7.1, the blocking read() returns with -1 (end of the stream has been reached) after a very short time (10-45 seconds). On OS 5.0-7.0 the call to read() remains blocking until the next data arrives, and the connection is never closed by the server.

        Connection connection = Connector.open(connectionString);
        connInputStream = connection.openInputStream();
        while (true) {
            try {
                retVal = connInputStream.read();
                if (-1 == retVal) {
                    break; // end of stream has been reached
                }
            } catch (Exception e) {
                // do error handling
            }
            // data read from stream is handled here
        }

    UPDATE 1: Apparently, the problem appears only when I use a secured TLS connection (either over the mobile network or WiFi) on OS 7.1. Everything works as expected when opening a non-secured connection on OS 7.1. For TLS on mobile networks I use the following connection string:

        connectionString = "tls://someipaddress:443;deviceside=false;ConnectionType=mds-public;EndToEndDesired";

    For TLS on WiFi I use the following connection string:

        connectionString = "tls://someipaddress:443;deviceside=true;interface=wifi;EndToEndRequired"

    UPDATE 2: The connection is never idle; I am constantly receiving and sending data on it. The issue appears both when using a mobile connection and WiFi, and both on real OS 7.1 devices and simulators. I am starting to suspect that it is somehow related either to the connection string I am using or to the TLS handshake.

    UPDATE 3: According to Wireshark captures that I made with the OS 7.1 simulator, the secured TLS connection is being closed by the server (the client receives FIN). For the moment I don't have the server's private key, therefore I am unable to debug the TLS handshake.

  • Issues accessing an object's array values - returns null or 0s

    - by PhatNinja
    The function below should return an array of objects with this structure:

        TopicFrequency = {
            name: "Chemistry", // This is dependent on topic
            data: [1,2,3,4,5,6,7,8,9,10,11,12] // This would be real data
        };

    So when I do this:

        myData = this.getChartData("line");

    it should return two objects:

        {name : "Chemistry", data : [1,2,3,4,51,12,0,0,2,1,41,31]}
        {name : "Math",      data : [0,0,41,4,51,12,0,0,2,1,41,90]}

    And when I do console.log(myData) it's perfect: it returns exactly this. However, when I do console.log(myData[0].data) it returns all 0s, not the values. I'm not sure what this issue is known as, and my question is simply: what is this problem known as? Here is the full function. Some things were hardcoded, and other variables (notably server and queryContent) were removed. Those parts work fine; it is only when manipulating/retrieving the returned array's values that I run into problems. Note this is async, so I'm not sure if that is also part of the problem.

        getChartData: function (chartType) {
            var TopicsFrequencyArray = new Array();
            timePairs = this.newIntervalSet("Month");
            topicList = new Array("Chemistry", "Math"); // Hard coded for now
            var queryCopy = {
                // sensitive information
            };
            for (i = 0; i < topicList.length; i++) {
                var TopicFrequency = {
                    name: null,
                    data: this.newFilledArray(12, 0)
                };
                j = 0;
                TopicFrequency.name = topicList[i];
                while (j < timePairs.length) {
                    queryCopy.filter = TopicFrequency.name;
                    // additional queryCopy parameter changes made here
                    var request = esri.request({
                        url: server,
                        content: queryCopy,
                        handleAs: "json",
                        load: sucess,
                        error: fail
                    });
                    j = j + 1;
                    function sucess(response, io) {
                        var topicCountData = 0;
                        query = esri.urlToObject(io.url);
                        var dateString = query.query.fromDate.replace("%", " ");
                        dateString = dateString.replace(/-/g, "/");
                        dateString = dateString.split(".");
                        date = new Date(dateString[0]);
                        dojo.forEach(response.features, function (feature) {
                            if (feature.properties.count > 0) {
                                topicCountData = feature.properties.count;
                            }
                            TopicFrequency.data[date.getMonth()] = topicCountData;
                        });
                    }
                    function fail(error) {
                        j = j + 1;
                        alert("There was an unspecified error with this request");
                        console.log(error);
                    }
                }
                TopicsFrequencyArray.push(TopicFrequency);
            }
        },

  • UITableViewCell separator line disappears on scroll

    - by iconso
    I'm trying to have a separator cell with a custom image. I tried something like this in my cellForRowAtIndexPath:

        NSString *cellIdentifier = [NSString stringWithFormat:@"identifier"];
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:cellIdentifier];
        }
        cell.textLabel.font = [UIFont fontWithName:@"Helvetica" size:19];
        cell.textLabel.text = [self.menuItems objectAtIndex:indexPath.row];
        cell.textLabel.textColor = [UIColor colorWithRed:128/255.0f green:129/255.0f blue:132/255.0f alpha:1.0f];
        cell.backgroundColor = [UIColor whiteColor];

        UIImageView *imagView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"reaL.png"]];
        imagView.frame = CGRectMake(0, cellHeight, cellWidth, 1);
        [cell.contentView addSubview:imagView];

        switch (indexPath.row) {
            case 0:
                cell.imageView.image = [self imageWithImage:[UIImage imageNamed:@"img1.png"] scaledToSize:CGSizeMake(27, 27)];
                cell.imageView.highlightedImage = [self imageWithImage:[UIImage imageNamed:@"route.png"] scaledToSize:CGSizeMake(27, 27)];
                break;
            case 1:
                cell.imageView.image = [self imageWithImage:[UIImage imageNamed:@"img.png"] scaledToSize:CGSizeMake(27, 27)];
                cell.imageView.highlightedImage = [self imageWithImage:[UIImage imageNamed:@"money.png"] scaledToSize:CGSizeMake(27, 27)];
                break;
            case 2:
                cell.imageView.image = [self imageWithImage:[UIImage imageNamed:@"auto.png"] scaledToSize:CGSizeMake(27, 27)];
                cell.imageView.highlightedImage = [self imageWithImage:[UIImage imageNamed:@"cars.png"] scaledToSize:CGSizeMake(27, 27)];
                break;
            case 3:
                cell.imageView.image = [self imageWithImage:[UIImage imageNamed:@"impostazioni.png"] scaledToSize:CGSizeMake(27, 27)];
                cell.imageView.highlightedImage = [self imageWithImage:[UIImage imageNamed:@"impostazioni.png"] scaledToSize:CGSizeMake(27, 27)];
                // cell.imageView.contentMode = UIViewContentModeScaleAspectFill;
                break;
            case 4:
                cell.imageView.image = [self imageWithImage:[UIImage imageNamed:@"info.png"] scaledToSize:CGSizeMake(27, 27)];
                cell.imageView.highlightedImage = [self imageWithImage:[UIImage imageNamed:@"info.png"] scaledToSize:CGSizeMake(27, 27)];
                break;
            default:
                break;
        }
        return cell;

    When I launch the app everything is good, but when I scroll the table, or when I select a cell, the separator lines disappear. How can I have a permanent custom line separator?

  • How to replace an object in an NSMutableArray at a given index with a new object

    - by shakeelw
    Hi guys. I have an NSMutableArray object (retained, synthesized as all) that is initiated just fine, and I can easily add objects to it using the addObject: method. But if I want to replace an object at a certain index with a new one in that NSMutableArray, it doesn't work. For example:

    ClassA.h:

        @interface ClassA : NSObject {
            NSMutableArray *list;
        }
        @property (nonatomic, copy, readwrite) NSMutableArray *list;
        @end

    ClassA.m:

        #import "ClassA.h"

        @implementation ClassA
        @synthesize list;

        - (id)init {
            [super init];
            NSMutableArray *localList = [[NSMutableArray alloc] init];
            self.list = localList;
            [localList release];
            // Add initial data
            [list addObject:@"Hello "];
            [list addObject:@"World"];
        }

        // Custom set accessor to ensure the new list is mutable
        - (void)setList:(NSMutableArray *)newList {
            if (list != newList) {
                [list release];
                list = [newList mutableCopy];
            }
        }

        - (void)updateTitle:(NSString *)newTitle:(NSString *)theIndex {
            int i = [theIndex intValue] - 1;
            [self.list replaceObjectAtIndex:i withObject:newTitle];
            NSLog((NSString *)[self.list objectAtIndex:i]); // gives the correct output
        }

    However, the change remains true only inside the method. From any other method, NSLog((NSString *)[self.list objectAtIndex:i]); gives the same old value. How can I actually get the old object replaced with the new one at a specific index so that the change can be noticed from within any other method as well? I even modified the method like this, but the result is the same:

        - (void)updateTitle:(NSString *)newTitle:(NSString *)theIndex {
            int i = [theIndex intValue] - 1;
            NSMutableArray *localList = [[NSMutableArray alloc] init];
            localList = [localList mutableCopy];
            for (int j = 0; j < [list count]; j++) {
                if (j == i) {
                    [localList addObject:newTitle];
                    NSLog(@"j == 1");
                    NSLog([NSString stringWithFormat:@"%d", j]);
                } else {
                    [localList addObject:(NSString *)[self.list objectAtIndex:j]];
                }
            }
            [self.list release];
            //self.list = [localList mutableCopy];
            [self setList:localList];
            [localList release];
        }

    Please help out guys :)

  • Store Observer not always being called

    - by Nixarn
    Has anyone else here experienced problems with their store observer class not always being called when the user, for instance, cancels a request (or purchases something)? We just had our update that brought in-app purchases go live last night, and before that we had obviously tested everything tons of times against the sandbox, and everything was working fine. Now, however, with the update live in a real environment, we keep getting issues with the store.

    For instance, on a freshly booted iPhone / iPod, the first time you run the app, if you try to make a purchase and then immediately cancel it from the first dialog, it seems as if the callback for the cancel is not getting called. If you then restart the app, it seems as if it always works after that. Same thing with other callbacks: it seems as if our store observer isn't listening, as the callbacks aren't being registered on the phone. One example of this is that if you purchase something, nothing will happen (at least if this is the first time the app is launched). You get the purchase-successful dialog from the App Store, but it seems as if our own code isn't called. If you then quit the app and restart it, the callback gets called. The same problem happens if you, for instance, start a request to download all previous purchases and then immediately cancel it as the first dialog pops up; the callback for a failed restore is not called until you restart the app and try it again, at which point it always seems to work.

    The way we have implemented our store observer is by creating a custom class that implements the SKPaymentTransactionObserver protocol:

        @interface StoreObserver : NSObject<SKPaymentTransactionObserver>

    In the class we have implemented the following methods:

        - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions
        - (void)paymentQueueRestoreCompletedTransactionsFinished:(SKPaymentQueue *)queue
        - (void)paymentQueue:(SKPaymentQueue *)queue restoreCompletedTransactionsFailedWithError:(NSError *)error

    The way our restore process works is that if you tap on the button that allows you to download all, we simply run the restoreCompletedTransactions code as follows:

        [[SKPaymentQueue defaultQueue] restoreCompletedTransactions];

    However, the callback, restoreCompletedTransactionsFailedWithError, which has been implemented in the store observer, does not always get called when we try to cancel the request. This happens when you boot the iPhone / iPod and try this for the first time; if you restart the app after that, everything works fine. The StoreObserver class is created when our app is launched, just by running the following code:

        pStoreObserver = [[StoreObserver alloc] init];
        [[SKPaymentQueue defaultQueue] addTransactionObserver:pStoreObserver];

    Has anyone else had any similar experiences? Or does anyone have any suggestions on how to solve this? As I said, in the sandbox environment everything was working fine, no issues whatsoever, but now that it's gone live we're experiencing these.

  • [NHibernate and ASP.NET MVC] How can I implement a robust session-per-request pattern in my project,

    - by Guillaume Gervais
    I'm currently building an ASP.NET MVC project, with NHibernate as its persistence layer. For now, some functionality has been implemented, but it only uses local NHibernate sessions: each method that accesses the database (read or write) needs to instantiate its own NHibernate session, with the "using()" directive.

    The problem is that I want to leverage NHibernate's lazy-loading capabilities to improve the performance of my project. This implies an NHibernate session kept open per request until the view is rendered. Furthermore, simultaneous requests must be supported (multiple sessions at the same time). How can I achieve that as cleanly as possible?

    I searched the Web a little bit and learned about the session-per-request pattern. Most of the implementations I saw used some sort of Http* object (HttpContext, etc.) to store the session. Also, using the Application_BeginRequest/Application_EndRequest functions is complicated, since they get fired for each HTTP request (aspx files, css files, js files, etc.), when I only want to instantiate a session once per request.

    The concern that I have is that I don't want my views or controllers to have access to NHibernate sessions (or, more generally, NHibernate namespaces and code). That means that I do not want to handle sessions at the controller level nor at the view level. I have a few options in mind. Which one seems the best?

    1. Use interceptors (like in Grails) that get triggered before and after the controller action. These would open and close sessions/transactions. Is this possible in the ASP.NET MVC world?
    2. Use the CurrentSessionContext singleton provided by NHibernate in a Web context. Using this page as an example, I think this is quite promising, but it still requires filters at the controller level.
    3. Use HttpContext.Current.Items to store the request session. This, coupled with a few lines of code in Global.asax.cs, can easily provide me with a session at the request level. However, it means that dependencies will be injected between NHibernate and my views (HttpContext).

    Thank you very much!

  • How to reconcile my support of open-source software and need to feed and house myself?

    - by Guzba
    I have a bit of a dilemma and wanted to get some other developers' opinions on it, and maybe some guidance. I have created a 2D game for Android from the ground up, learning and refactoring as I went along. I did it for the experience, and I have been proud of the results. I released it for free, ad-supported with AdMob, not really expecting much out of it, but curious to see what would happen. It's been a few months since release, and it has become very popular (250k downloads!). Additionally, the ad revenue is great and is driving me to make more good games, and it even allows me to work less so that I can focus on my own projects.

    When I originally began working on the game, I was pretty new to concurrency and completely new to Android (though I had Java experience). The standard advice I got for starting an Android game was to look at the sample games from Google (Snake, Lunar Lander, ...), so I did. In my opinion, these Android sample games from Google are decent for seeing in general what your code should look like, but not actually all that great to follow. This is because some of their features don't work (saving game state), and the concurrency is both unexplained and cumbersome (there is no real separation between the game thread and the UI thread, since they sync-lock each other out all the time and the UI thread runs game-thread code). This made it difficult for me, as a newbie to concurrency, to understand how it was organized and what was really running what code.

    Here is my dilemma: after spending these past few months slowly improving my code, I feel that it could be very beneficial to developers who are in the same position that I was in when I started (since it is not a complex game, but clearly coded, in my opinion). I want to open up the source so that others can learn from it, but I don't want to lose my ad revenue stream, which, if I did open the source, I fear I would: people could release versions with the ads stripped, or minor tweaks that would fragment my audience, etc. I am a CS undergrad major in college, and this money is giving me the freedom to work less at my summer job, thus giving me the time and will to work on more of my own projects and improve my own skills while still paying the bills. So what do I do? Open the source at personal sacrifice for the greater good, or keep it closed and be a sort of hypocritical supporter of open source?

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications, I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten-password link, walk them through over the phone, etc.). When I can, I fight bitterly against this practice, and I do a lot of 'extra' programming to make password resets and administrative assistance possible without storing their actual password.

    When I can't fight it (or can't win), I always encode the password in some way so that it at least isn't stored as plaintext in the database, though I am aware that if my DB gets hacked it won't take much for the culprit to crack the passwords as well, so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites; unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don't want to be the one responsible for their financial demise if my DB security procedures fail for some reason.

    Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood, even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single 'best practice' when you have to store them? In almost all cases I am using PHP and MySQL, if that makes any difference in the way I should handle the specifics.

    Additional information for the bounty: I want to clarify that I know this is not something you want to have to do, and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password-recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

    Thanks to everyone: this has been a fun question with lots of debate, and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plaintext or recoverable passwords) and makes it possible for the user base I specified to log into a system without the major drawbacks I have found in normal password recovery. As always, there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one; all the rest got a +1. Thanks everyone!

  • Bug in Safari: options.length = 0; not working as expected in Safari 4

    - by Stefan
    This is not a real question, but rather an answer to save some others the hassle of tracking this nasty bug down. I wasted hours finding this out. When using options.length = 0; to reset all options of a select element in Safari, you can get mixed results depending on whether you have the Web Inspector open or not. If you use myElement.options.length = 0; and after that query options.length, you might get back 1 instead of 0 (expected), but only if the Web Inspector is open (which is often the case when debugging a problem like this).

    Workaround: close the Web Inspector, or call myElement.options.length = 0; twice, like so:

        myElement.options.length = 0;
        myElement.options.length = 0;

    Testcase:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
            <title>Testcase</title>
            <script type="text/javascript" language="javascript" charset="utf-8">
                function test(el) {
                    var el = document.getElementById("sel");
                    alert("Before calling options.length=" + el.options.length);
                    el.options.length = 0;
                    alert("After calling options.length=" + el.options.length);
                }
            </script>
        </head>
        <body onLoad="test();">
            <p>
                Make note of the numbers displayed in the Alert Dialog, then open Web Inspector,
                reload this page and compare the numbers.
            </p>
            <select id="sel" multiple>
                <option label="a----------" value="a"></option>
                <option label="b----------" value="b"></option>
                <option label="c----------" value="c"></option>
            </select>
        </body>
        </html>

  • How do I make a GUI that behaves like this?

    - by Karl Knechtel
    This is difficult to explain without illustration, so - behold, an illustration, cobbled together from screenshots of a few hello-world examples and a lot of Paint work. I have started out using Windows Forms on .NET (via IronPython, but that shouldn't be important), and haven't been able to figure out very much. GUI libraries in general are very intimidating, simply because every class has so many possible attributes. Documentation is good at explaining what everything does, but not so good at helping you figure out what you need. I will be assembling the GUI dynamically, but I'm not expecting that to be the hard part. The sticking points for me right now are:

    - How do I get text labels to size themselves automatically to the width of the contained text (so that the text doesn't clip, and I also don't reserve unnecessary space for them when resizing the window)?
    - How do I make the vertical scrollbar always appear? Setting the VScroll property (why is this protected when AutoScroll is public, BTW?) doesn't seem to do anything.
    - How come the horizontal scrollbar is not added by AutoScroll when contents are laid out vertically (via Dock = DockStyle.Top)? I can use a minimum size for panels to prevent the label and corresponding control from overlapping when the window is shrunk horizontally, but then the scrollbar doesn't appear and the control is inaccessible.
    - How can I put limits on window resizing (e.g. set a minimum width) without disabling it completely? (Just set minimum/maximum sizes for the Form?) Related to that, is there any way to set minimum/maximum widths or heights without setting a minimum/maximum size (i.e. can I constrain the size in only one dimension)?
    - Is there a built-in control suitable for hex editing, or am I going to have to build something myself?

    ... And should I be using something else (perhaps something more capable)? I've heard WPF mentioned, but I understand that this involves XML, and I really don't want to build a GUI from XML - I already have data in an object graph, and doing some kind of weird XML pseudo-serialization (in Python, no less!) in order to create a GUI seems incredibly roundabout.

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.

        void CountByTwosStartingAt(byte i) {
            // If i is even, it never exceeds 254
            for (; i < 255; i += 2) {
                Console.WriteLine(i);
            }
        }

    Sometimes there are edge cases that are extremely unlikely, but technically constitute non-terminating conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:

        int CountZeros(Stream s) {
            int total = 0;
            while (s.ReadByte() == 0) total++;
            return total;
        }

    Now, suppose you feed it this thing:

        class InfiniteEmptyStream : Stream {
            // ... Other members ...
            public override int Read(byte[] buffer, int offset, int count) {
                Array.Clear(buffer, offset, count); // Output zeros
                return count; // Never returns -1 (end of stream)
            }
        }

    Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example from an app I'm writing: an endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source).

    How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness"? Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API?

    Edit: One of my primary concerns is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
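    A common middle ground for the stream example is to make the terminating condition explicit by bounding the loop, so the "theoretically infinite" input becomes a detectable error instead of a hang. A small sketch of the idea, written here in C++ purely for illustration (the bound and error type are invented):

        #include <cstdio>
        #include <stdexcept>

        // Count leading zero bytes, but refuse to loop forever: past maxZeros
        // the "impossible" input is reported instead of spinning. EOF (-1)
        // also ends the loop, since fgetc() then stops returning 0.
        long countZeros(std::FILE* s, long maxZeros)
        {
            long total = 0;
            while (std::fgetc(s) == 0) {
                if (++total > maxZeros)
                    throw std::runtime_error("zero run exceeds expected bound");
            }
            return total;
        }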

  • How to approach copying objects with smart pointers as class attributes?

    - by tomislav-maric
    From the Boost library documentation I read this:

    Conceptually, smart pointers are seen as owning the object pointed to, and thus responsible for deletion of the object when it is no longer needed.

    I have a very simple problem: I want to use RAII for pointer attributes of a class that is copyable and assignable. The copy and assignment operations should be deep: every object should have its own copy of the actual data. Also, RTTI needs to be available for the attributes (their type may also be determined at runtime). Should I be searching for an implementation of a copyable smart pointer (the data are small, so I don't need copy-on-write pointers), or do I delegate the copy operation to the copy constructors of my objects, as shown in this answer? Which smart pointer do I choose for simple RAII of a class that is copyable and assignable? (I'm thinking that unique_ptr with copy/assignment operations delegated to the class copy constructor and assignment operator would be a proper choice, but I am not sure.) Here's pseudocode for the problem using raw pointers; it's just a problem description, not running C++ code:

        // Operation interface
        class ModelOperation {
        public:
            virtual void operate() = 0;
        };

        // Implementation of an operation called Special
        class SpecialModelOperation : public ModelOperation {
        private:
            // Private attributes are present here in a real implementation.
        public:
            // Implement operation
            void operate() {};
        };

        // All operations conform to the ModelOperation interface.
        // These are possible operation names:
        // class MoreSpecialOperation;
        // class DifferentOperation;

        // Concrete model with different operations
        class MyModel {
        private:
            ModelOperation* firstOperation_;
            ModelOperation* secondOperation_;
        public:
            MyModel() : firstOperation_(0), secondOperation_(0) {
                // Forgetting about run-time type definition from input files here.
                firstOperation_ = new MoreSpecialOperation();
                secondOperation_ = new DifferentOperation();
            }

            void operate() {
                firstOperation_->operate();
                secondOperation_->operate();
            }

            ~MyModel() {
                delete firstOperation_;
                firstOperation_ = 0;
                delete secondOperation_;
                secondOperation_ = 0;
            }
        };

        int main() {
            MyModel modelOne;

            // Some internal scope
            {
                // I want modelTwo to have its own set of copied, not referenced
                // operations, and at the same time I need RAII to work for it,
                // as soon as it goes out of scope.
                MyModel modelTwo(modelOne);
            }

            return 0;
        }
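    For what it's worth, the usual value-semantics pattern for exactly this situation is a virtual clone() on the interface plus owning pointers whose copy operations delegate to clone(). A minimal sketch, assuming C++11 and std::unique_ptr (the question uses raw pointers, so this is an illustration rather than the asker's code):

        #include <memory>

        class ModelOperation {
        public:
            virtual ~ModelOperation() = default;
            virtual void operate() = 0;
            virtual std::unique_ptr<ModelOperation> clone() const = 0; // deep-copy hook
        };

        class SpecialModelOperation : public ModelOperation {
        public:
            void operate() override {}
            std::unique_ptr<ModelOperation> clone() const override {
                return std::unique_ptr<ModelOperation>(new SpecialModelOperation(*this));
            }
        };

        class MyModel {
            std::unique_ptr<ModelOperation> first_;
        public:
            explicit MyModel(std::unique_ptr<ModelOperation> op) : first_(std::move(op)) {}
            // Deep copy: each MyModel owns its own operation instance.
            MyModel(const MyModel& other)
                : first_(other.first_ ? other.first_->clone() : nullptr) {}
            // Copy-and-swap assignment; destroying 'other' releases the old object.
            MyModel& operator=(MyModel other) { first_.swap(other.first_); return *this; }
            void operate() { if (first_) first_->operate(); }
        };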

  • C# Type Casting at Runtime for Array.SetValue

    - by sprocketonline
    I'm trying to create an array using reflection and insert values into it. I'm trying to do this for many different types, so I would like a createAndFillArray function capable of this:

        Type t1 = typeof(A);
        Type t2 = typeof(B);
        double exampleA = 22.5;
        int exampleB = 43;
        Array arrA = createAndFillArray(t1, exampleA);
        Array arrB = createAndFillArray(t2, exampleB);

        private Array createAndFillArray(Type t, object val) {
            Array arr = Array.CreateInstance(t, 1); // length 1 in this example only; real-world is of variable length.
            arr.SetValue(val, 0); // this causes the following error:
            // "System.InvalidCastException : Object cannot be stored in an array of this type."
            return arr;
        }

    with class A being as follows:

        public class A {
            public A() {}
            private double val;
            public double Value {
                get { return val; }
                set { this.val = value; }
            }
            public static implicit operator A(double d) {
                A a = new A();
                a.Value = d;
                return a;
            }
        }

    and class B being very similar, but with int:

        public class B {
            public B() {}
            private double val;
            public double Value {
                get { return val; }
                set { this.val = value; }
            }
            public static implicit operator B(double d) {
                B b = new B();
                b.Value = d;
                return b;
            }
        }

    I had hoped that the implicit operator would ensure that the double be converted to class A, or the int to class B, and the error avoided; but this is obviously not so. The above is used in a custom deserialization class, which takes data from a custom data format and fills in the corresponding .NET object properties. I'm doing this via reflection and at runtime, so I think both are unavoidable. I'm targeting the C# 2.0 framework. I have dozens, if not hundreds, of classes similar to A and B, so I would prefer a solution which improves the createAndFillArray method rather than one which alters these classes.

  • jQuery: load refuses to get dynamic content in IE6

    - by user260157
    jQuery refuses to load my dynamic content in IE6. Everything works fine in Firefox & Safari; only IE6 is being a pain. When I try a plain HTML file containing <p>Hello World</p>, it works properly. But when loading a PHP file it doesn't work! As you can see, the script is doing multiple things:

        <script type="text/javascript">
            // When the document is ready set up our sortable with its inherent function(s)
            $(document).ready(function() {
                // Sort list & amend in database
                function sortTableMenuAndReload() {
                    var order = $('#menuList').sortable('serialize');
                    $.post("PLUGINS/SortableMenu/process-sortable.php", order);
                    $("#menuList").load("PLUGINS/SortableMenu/sortableMenu_ajax.php");
                }
                function sortTableOrder() {
                    var order = $('#menuList').sortable('serialize');
                    $.post("PLUGINS/SortableMenu/process-sortable.php", order);
                }
                function sortTableOrderAndRemove(removeID) {
                    $('#listItem_' + removeID).remove();
                    var order = $('#menuList').sortable('serialize');
                    $.post("PLUGINS/SortableMenu/process-sortable.php", order);
                    $("#menuList").load("PLUGINS/SortableMenu/sortableMenu_ajax.php");
                }
                $("#menuList > li > .remove").live('click', function () {
                    var removeID = $(this).attr('id');
                    $.ajax({
                        type: 'post',
                        url: 'PLUGINS/SortableMenu/removeLine.php',
                        data: 'id=' + removeID,
                        success: sortTableOrderAndRemove(removeID)
                    });
                });
                $("#menuList > li > .publish").live('click', function () {
                    var publishID = $(this).attr('id');
                    $.ajax({
                        type: 'post',
                        url: 'PLUGINS/SortableMenu/publishLine.php',
                        data: 'id=' + publishID,
                        success: sortTableOrder
                    });
                });
                $('#new_documents > li').draggable({
                    addClasses: false,
                    helper: 'clone',
                    connectToSortable: '#menuList'
                });
                $("#menuList").droppable({
                    addClasses: false,
                    drop: function() {
                        var clone = $("#menuList > li#newArticleTYPE1");
                        $(clone).attr("id", "listItem_newArticleTYPE1");
                    }
                });
                $("#menuList").sortable({
                    opacity: 0.6,
                    handle: '.handle, .remove',
                    update: sortTableMenuAndReload
                });
            });
        </script>

  • What is a GC hole?

    - by tianyi
    I wrote a long-TCP-connection socket server in C#. A memory spike happens in my server. I used .NET Memory Profiler (a tool) to detect where the memory leaks. Memory Profiler indicates the private heap is huge, and the memory looks something like this (the numbers are not real; what I want to show is that the "holes" in generations 0 and 2 are very, very large, while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1,400,000KB
            Generation #0 - 600,000KB
              Data - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131,072KB
            Large heap - 83KB
            Overhead/unused - 130,989KB
              Overhead - 0KB

    However, what is a GC hole? I read an article about the hole: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html

    The author said: "The code snippet below is the simplest way to introduce a GC hole into the system."

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();
            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);
            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething(aObj, bObj);
        }

    All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably "work." But this code will crash eventually. Why? If the second call to "AllocateObjectMemory" triggers a GC, that GC discards the object instance you just assigned to "aObj". This code, like all C++ code inside the CLR, is compiled by a non-managed compiler, and the GC cannot know that "aObj" holds a root reference to an object you want kept live.

    I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after the GC? Does it mean something like this?

        aObj = (object*)malloc(sizeof(object));
        free(aObj);
        function(aObj);

    I hope somebody can explain it.
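    In plain C++ terms the analogy is not a use-after-free but a pointer into storage that a later allocation is allowed to move. A toy illustration (my own, not from the article), using vector reallocation to play the role of a compacting GC:

        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> heap(1, 41);   // stand-in for the managed heap
            int* aObj = &heap[0];           // raw "reference" into that storage
            std::cout << *aObj << '\n';     // fine: nothing has moved yet
            heap.push_back(1);              // may reallocate: the "GC" just ran
            // Dereferencing aObj here would be undefined behavior if the buffer
            // moved, exactly like using aObj after an unreported GC in the CLR.
            std::cout << heap[0] + heap[1] << '\n'; // safe: re-fetch via the owner
        }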

  • Why is FF on OS X losing jQuery-UI in click event handler?

    - by Jean-François Beauchamp
    In a web page using jQuery 1.7.1 and jQuery UI 1.8.18, if I output $.ui in an alert box when the document is ready, I get [object Object]. However, when using Firefox, if I output $.ui in a click event handler, I get 'undefined' as the result. With other browsers (latest versions of IE, Chrome and Safari), the result is still [object Object] when clicking on the link. Here is my HTML page:

        <!doctype html>
        <html>
        <head>
            <title></title>
            <script src="Scripts/jquery-1.7.1.js" type="text/javascript"></script>
            <script src="Scripts/jquery-ui-1.8.18.js" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    alert($.ui); // ALERT A
                    $(document).on("click", ".dialogLink", function () {
                        alert($.ui); // ALERT B
                        return false;
                    });
                });
            </script>
        </head>
        <body>
            <a href="#" class="dialogLink">Click me!</a>
        </body>
        </html>

    In this post I reduced to its simplest form another problem I was having, described here: $(this).dialog is not a function. I created a new post for the sake of clarity, since the real question is different from the original one now that I've pin-pointed where the problem resides.

    UPDATE: If I replace my alerts with simply alert($); I get this result for alert A:

        function (selector, context) { return new jQuery.fn.init(selector, context, rootjQuery); }

    and this one for alert B:

        function (a, b) { return new d.fn.init(a, b, g); }

    This does not make sense to me, although I may not be understanding well enough what $ is...

    UPDATE 2: I can only reproduce this problem using Firefox on OS X. On Firefox running on Windows 7, everything is fine.

  • Getting the responseText from an XMLHttpRequest object

    - by Sammy46
    I wrote a CGI script in C++ to return the query string back to the requesting AJAX object. I also write the query string to a file in order to see whether the CGI script works correctly. But when I ask, in the HTML document, for the response text to be shown in a message box, I get a blank message. Here is my code:

    JavaScript:

        <script type="text/javascript">
            var XMLHttp;
            if (navigator.appName == "Microsoft Internet Explorer") {
                XMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
            } else {
                XMLHttp = new XMLHttpRequest();
            }

            function getresponse() {
                XMLHttp.open("GET", "http://localhost/cgi-bin/AJAXTest?" +
                    "fname=" + document.getElementById('fname').value +
                    "&sname=" + document.getElementById('sname').value, true);
                XMLHttp.send(null);
            }

            XMLHttp.onreadystatechange = function () {
                if (XMLHttp.readyState == 4) {
                    document.getElementById('response_area').innerHTML += XMLHttp.readyState;
                    var x = XMLHttp.responseText;
                    alert(x);
                }
            }
        </script>
        First Names(s)<input onkeydown="javascript: getresponse()" id="fname" name="name">
        <br>
        Surname<input onkeydown="javascript: getresponse();" id="sname">
        <div id="response_area">
        </div>

    C++:

        int main() {
            QFile log("log.txt");
            if (!log.open(QIODevice::WriteOnly | QIODevice::Text)) {
                return 1;
            }
            QTextStream outLog(&log);
            QString QUERY_STRING = getenv("QUERY_STRING");
            //if(QUERY_STRING!=NULL)
            //{
            cout << "Content-type: text/plain\n\n"
                 << "The Query String is: " << QUERY_STRING.toStdString() << "\n";
            outLog << "Content-type: text/plain\n\n"
                   << "The Query String is: " << QUERY_STRING << endl;
            //}
            return 0;
        }

    I'm happy about any advice on what to do!

    EDIT: The output to my logfile works just fine:

        Content-type: text/plain

        The Query String is: fname=hello&sname=world

    I just noticed that if I open it with IE8 I get the query string, but only on the first "keydown"; after that IE does nothing.

  • Unable to find assembly, C#

    - by PlasmaCube
    So, here's the deal. I've got two ASP.NET applications, both of which use SQL Server session-state management. They also both use the same server. I've got a custom session class in an external DLL, which fully implements serialization, and which both applications have referenced. Each application, in turn, has a class which inherits from the DLL class, and both applications use their own respective classes for their session state.

    Now, what I was trying to accomplish was that if you go to the other application, it could look in the session (they all use the same session key), treat the existing object there as the base (the one from the DLL), extract whatever login info it needs, then overwrite the session object with its own. Unfortunately, when the second application attempts to read the session, it seems that it looks for the DLL of the first application, and when it can't find it, it throws an exception. Is there a flaw in my logic? Here's an example:

        // Global.asax of the 1st app
        protected void Session_Start(object sender, EventArgs e)
        {
            // FirstUserSession inherits from BaseUserSession
            Session.Add("UserSessionKey", new FirstUserSession());
        }

        // Global.asax of 2nd app
        protected void Session_Start(object sender, EventArgs e)
        {
            if (Session["UserSessionKey"] != null)
            {
                BaseUserSession existing = (BaseUserSession)Session["UserSessionKey"];
                // SecondUserSession also inherits from BaseUserSession
                SecondUserSession session = new SecondUserSession();
                session.Authenticated = existing.Authenticated;
                session.Id = existing.Id;
                session.Role = existing.Role;
                Session.Add("UserSessionKey", session);
            }
            else
            {
                Session.Add("UserSessionKey", new SecondUserSession());
            }
        }

    Here's the exception stack trace. In this case, "MyCBC" is the real name of the first app, and "ASPTesting" is the second app.

        [SerializationException: Unable to find assembly 'MyCBC, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.]
            System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly() +1871092
            System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name) +7545734
            System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) +120
            System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) +52
            System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record) +190
            System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum) +61
            System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run() +253
            System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +168
            System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +203
            System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) +788
            System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() +55
            System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) +281
            System.Web.SessionState.SessionStateItemCollection.get_Item(String name) +19
            System.Web.SessionState.HttpSessionStateContainer.get_Item(String name) +13
            System.Web.SessionState.HttpSessionState.get_Item(String name) +13
            ASPTesting._Default.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\sarsstu\My Documents\Projects\Testing\ASPTesting\ASPTesting\Default.aspx.cs:20
            System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
            System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
            System.Web.UI.Control.OnLoad(EventArgs e) +99
            System.Web.UI.Control.LoadRecursive() +50
            System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

    Thanks to everyone in advance.

  • Subclassing and adding data members

    - by Marius
    I have a hierarchy of classes that looks like the following:

        class Critical {
        public:
            Critical(int a, int b) : m_a(a), m_b(b) { }
            virtual ~Critical() { }
            int GetA() { return m_a; }
            int GetB() { return m_b; }
            void SetA(int a) { m_a = a; }
            void SetB(int b) { m_b = b; }
        protected:
            int m_a;
            int m_b;
        };

        class CriticalFlavor : public Critical {
        public:
            CriticalFlavor(int a, int b, int flavor) : Critical(a, b), m_flavor(flavor) { }
            virtual ~CriticalFlavor() { }
            int GetFlavor() { return m_flavor; }
            void SetFlavor(int flavor) { m_flavor = flavor; }
        protected:
            int m_flavor;
        };

        class CriticalTwist : public Critical {
        public:
            CriticalTwist(int a, int b, int twist) : Critical(a, b), m_twist(twist) { }
            virtual ~CriticalTwist() { }
            int GetTwist() { return m_twist; }
            void SetTwist(int twist) { m_twist = twist; }
        protected:
            int m_twist;
        };

    The above does not seem right to me in terms of the design, and what bothers me the most is the fact that the addition of member variables seems to drive the interface of these classes (the real code that does the above is a little more complex but still embraces the same pattern). That will proliferate whenever another "Critical" class is needed that just adds some other property. Does this feel right to you? How could I refactor such code? An idea would be to have just a set of interfaces and use composition when it comes to the base object, like the following:

        class Critical {
        public:
            virtual int GetA() = 0;
            virtual int GetB() = 0;
            virtual void SetA(int a) = 0;
            virtual void SetB(int b) = 0;
        };

        class CriticalImpl {
        public:
            CriticalImpl(int a, int b) : m_a(a), m_b(b) { }
            ~CriticalImpl() { }
            int GetA() { return m_a; }
            int GetB() { return m_b; }
            void SetA(int a) { m_a = a; }
            void SetB(int b) { m_b = b; }
        private:
            int m_a;
            int m_b;
        };

        class CriticalFlavor {
        public:
            virtual int GetFlavor() = 0;
            virtual void SetFlavor(int flavor) = 0;
        };

        class CriticalFlavorImpl : public Critical, public CriticalFlavor {
        public:
            CriticalFlavorImpl(int a, int b, int flavor)
                : m_flavor(flavor), m_critical(new CriticalImpl(a, b)) { }
            ~CriticalFlavorImpl() { delete m_critical; }
            int GetFlavor() { return m_flavor; }
            void SetFlavor(int flavor) { m_flavor = flavor; }
            int GetA() { return m_critical->GetA(); }
            int GetB() { return m_critical->GetB(); }
            void SetA(int a) { m_critical->SetA(a); }
            void SetB(int b) { m_critical->SetB(b); }
        private:
            int m_flavor;
            CriticalImpl* m_critical;
        };
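    If each subclass really only adds another named int property, one way to stop the proliferation, at the cost of compile-time checking, is a single class holding a name-keyed property map. A rough sketch of that alternative (my own, not from the question):

        #include <map>
        #include <string>

        // One class instead of a subclass per attribute: attributes live in a
        // name-keyed map. Trades static type safety for extensibility.
        class Critical {
            std::map<std::string, int> m_props;
        public:
            Critical(int a, int b) { m_props["a"] = a; m_props["b"] = b; }
            int Get(const std::string& name) const {
                auto it = m_props.find(name);
                return it == m_props.end() ? 0 : it->second;
            }
            void Set(const std::string& name, int value) { m_props[name] = value; }
        };

        // Usage: Critical c(1, 2); c.Set("flavor", 7); int f = c.Get("flavor");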

  • How can I pipe input to a Java app with Perl?

    - by user319479
    I need to write a Perl script that pipes input into a Java program. This is related to this, but that didn't help me. My issue is that the Java app doesn't get the print statements until I close the handle. What I found online was that $| needs to be set to something greater than 0, in which case newline characters will flush the buffer. This still doesn't work. This is the script:

        #!/usr/bin/perl -w
        use strict;
        use File::Basename;

        $| = 1;
        open(TP, "| java -jar test.jar") or die "fail";
        sleep(2);
        print TP "this is test 1\n";
        print TP "this is test 2\n";
        print "tests printed, waiting 5s\n";
        sleep(5);
        print "wait over. closing handle...\n";
        close TP;
        print "closed.\n";
        print "sleeping for 5s...\n";
        sleep(5);
        print "script finished!\n";
        exit;

    And here is a sample Java app:

        import java.util.Scanner;

        public class test {
            public static void main(String[] args) {
                Scanner sc = new Scanner(System.in);
                int crashcount = 0;
                while (true) {
                    try {
                        String input = sc.nextLine();
                        System.out.println(":: INPUT: " + input);
                        if ("bananas".equals(input)) {
                            break;
                        }
                    } catch (Exception e) {
                        System.out.println(":: EXCEPTION: " + e.toString());
                        crashcount++;
                        if (crashcount == 5) {
                            System.out.println(":: Looks like stdin is broke");
                            break;
                        }
                    }
                }
                System.out.println(":: IT'S OVER!");
                return;
            }
        }

    The Java app should respond to receiving the test prints immediately, but it doesn't until the close statement in the Perl script. What am I doing wrong? Note: the fix can only be in the Perl script; the Java app can't be changed. Also, File::Basename is there because I'm using it in the real script.

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working; I'm more confused about the certificates and where they need to go. Here are my two programs, which do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)  # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')  # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)  # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')  # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information in ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections marked with "# ??????". I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'),
                            ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!

  • Can't insert a number into a C++ custom streambuf/ostream

    - by 0xbe5077ed
    I have written a custom std::basic_streambuf and std::basic_ostream because I want an output stream that I can get a JNI string from, in a manner similar to how you can call std::ostringstream::str(). These classes are quite simple.

        namespace myns {

        class jni_utf16_streambuf : public std::basic_streambuf<char16_t>
        {
            JNIEnv * d_env;
            std::vector<char16_t> d_buf;
            virtual int_type overflow(int_type);

        public:
            jni_utf16_streambuf(JNIEnv *);
            jstring jstr() const;
        };

        typedef std::basic_ostream<char16_t, std::char_traits<char16_t>> utf16_ostream;

        class jni_utf16_ostream : public utf16_ostream
        {
            jni_utf16_streambuf d_buf;

        public:
            jni_utf16_ostream(JNIEnv *);
            jstring jstr() const;
        };

        // ...

        } // namespace myns

    In addition, I have made four overloads of operator<<, all in the same namespace:

        namespace myns {

        // ...

        utf16_ostream& operator<<(utf16_ostream&, jstring) throw(std::bad_cast);
        utf16_ostream& operator<<(utf16_ostream&, const char *);
        utf16_ostream& operator<<(utf16_ostream&, const jni_utf16_string_region&);
        jni_utf16_ostream& operator<<(jni_utf16_ostream&, jstring);

        // ...

        } // namespace myns

    The implementation of jni_utf16_streambuf::overflow(int_type) is trivial: it just doubles the buffer width, puts the requested character, and sets the base, put, and end pointers correctly. It is tested and I am quite sure it works. The jni_utf16_ostream works fine inserting Unicode characters. For example, this works fine and results in the stream containing "hello, world":

        myns::jni_utf16_ostream o(env);
        o << u"hello, wor" << u'l' << u'd';

    My problem is that as soon as I try to insert an integer value, the stream's badbit gets set, for example:

        myns::jni_utf16_ostream o(env);
        if (o.bad()) throw "bad bit before"; // does not throw

        int32_t x(5);
        o << x;

        if (o.bad()) throw "bad bit after"; // throws :(

    I don't understand why this is happening! Is there some other method on std::basic_streambuf I need to be implementing????
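    One thing worth checking (an assumption on my part, not something stated in the question): numeric operator<< formats through the stream locale's std::num_put facet, and the standard only guarantees the char and wchar_t specializations, so a char16_t stream's default locale may contain no num_put<char16_t>; the insertion then fails and sets badbit even though the streambuf is fine. On implementations that can instantiate the template, imbuing the facet explicitly is the usual fix:

        #include <locale>
        #include <ostream>

        // Give a char16_t stream a locale that actually contains a
        // num_put<char16_t> facet so that `os << 5` can format the number.
        // The locale takes ownership of the new'd facet.
        inline void install_num_put(std::basic_ostream<char16_t>& os)
        {
            os.imbue(std::locale(os.getloc(), new std::num_put<char16_t>()));
        }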

  • Passing custom info to mongrel_rails start

    - by whaka
    One thing I really don't understand is how I can pass custom start-up options to a mongrel instance. I see that a common approach is to use environment variables, but in my environment this is not going to work, because my rails application serves many different clients. Much code is shared between clients, but there are also many differences, which I implement by subclassing controllers and views to overload or extend existing features or introduce new ones. To make this all work, I simply add the paths to client-specific modules to the module load path ($:). In order to start the application for a particular client, I could use an environment variable like, say, TARGET=AMAZONE.

    Unfortunately, on some systems I'm running multiple mongrel clusters, each cluster serving a different client. Some of these systems run under Windows, and to start mongrel I installed mongrel_services. Clearly, this makes my environment variable unsuitable. Passing this extra bit of data to the application is proving to be a real challenge. For a start, mongrel_rails service_install will reject any [custom] command line parameters that aren't documented. I'm not too concerned, as installing the services using the install program is trivial. However, even if I manage to install mongrel_services such that, when run, it passes the custom command line option --target to mongrel_rails start, I get an error because mongrel_rails doesn't recognize the switch. So here are the things I looked at:

    1. Pass an extra parameter: mongrel_rails start --target XYZ ...
    2. Use a config file and add target:XYZ, then do: mongrel_rails start -C x:\myapp\myconfig.yml
    3. Modify the file: Ruby\lib\ruby\gems\1.8\gems\mongrel-1.1.5-x86-mswin32-60\lib\mongrel\command.rb
    4. Perhaps I can use the --script option, but all docs that I found on it were for Unix.

    Options 1 and 2 simply don't work. I played with 4 but never managed to get it to do anything. So I had no choice but to go with 3. While it is relatively simple, I hate changing ruby library code.

    Particularly disappointing is that 2 doesn't work. I mean, what is so unreasonable about adding other [custom] options in the config file? Actually, I think this is a fundamental piece that is missing in rails. Somehow, the application should be able to register and access command line arguments it expects. If anybody has a good idea how to do this more elegantly using the current infrastructure, I have a chocolate fish to give away!!!

  • Between-request Garbage Collection using Passenger

    - by raphaelcm
    We're using Rails 3.0.7 and REE 1.8.7. Long-term we will be upgrading, but at the moment it's not feasible. Following the advice of several blog posts, we've been tuning our GC and have settings that work pretty well. But we would really like to run GC outside of the request-response cycle. I've tried patching Passenger per this post, and using the code supplied in this SO question. In both cases, GC does indeed happen between requests. However, every time the between-request GC happens, I see a bunch of this:

        MONGODB [INFO] Connecting...
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1)
        Starting the New Relic Agent.
        Installed New Relic Browser Monitoring middleware
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        RefinerySetting Load (0.0ms)  SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` = 'pages' AND `refinery_settings`.`name` = 'use_marketable_urls' LIMIT 1
        SQL (0.0ms)  BEGIN
        RefinerySetting Load (0.0ms)  SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 1 LIMIT 1
        AREL (0.0ms)  UPDATE `refinery_settings` SET `value` = '--- \"false\"\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 1
        SQL (0.0ms)  SHOW TABLES
        RefinerySetting Load (0.0ms)  SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms)  COMMIT
        SQL (0.0ms)  SHOW TABLES
        RefinerySetting Load (4.0ms)  SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` IS NULL AND `refinery_settings`.`name` = 'user_image_sizes' LIMIT 1
        SQL (0.0ms)  BEGIN
        RefinerySetting Load (0.0ms)  SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 17 LIMIT 1
        AREL (0.0ms)  UPDATE `refinery_settings` SET `value` = '--- \n:small: 120x120>\n:medium: 280x280>\n:large: 580x580>\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 17
        SQL (0.0ms)  SHOW TABLES
        RefinerySetting Load (0.0ms)  SELECT `refinery_settings`.* FROM `refinery_settings`
        SQL (0.0ms)  COMMIT
        ******** Engine Extend: app/helpers/blog_posts_helper
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        SQL (4.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        ******** Engine Extend: app/models/user
        SQL (0.0ms)  describe `roles_users`
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        SQL (4.0ms)  describe `roles_users`
        SQL (0.0ms)  SHOW TABLES
        SQL (4.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        SQL (0.0ms)  SHOW TABLES
        (etc, etc, etc)

    This is what happens when Rails "loads the world" as the app starts up. Basically, GC.start is re-loading the app for some reason. Because of this, between-request GC is much slower than inline GC. Is there a way around this? I would love to have snappy between-request GC if possible. Thanks.
