Search Results

Search found 6342 results on 254 pages for 'behavior'.


  • Perl - Calling subclass constructor from superclass (OO)

    - by Emmel
    This may turn out to be an embarrassingly stupid question, but better that than potentially creating embarrassingly stupid code. :-) This is an OO design question, really.

    Let's say I have an object class 'Foos' that represents a set of dynamic configuration elements, which are obtained by querying a command on disk: 'mycrazyfoos -getconfig'. There are two categories of behavior I want 'Foos' objects to have:

    - Existing ones: query the ones that exist in the output of /usr/bin/mycrazyfoos -getconfig, and make modifications to them by shelling out commands.
    - New ones: create 'crazyfoos' that don't exist yet, using a complex set of /usr/bin/mycrazyfoos commands and parameters. Here I'm not really just querying, but actually running a bunch of system() commands and effecting changes.

    Here's my class structure:

    Foos.pm: package Foos, which has a new() constructor taking a hashref ({ name => 'myfooname', ... }). It takes a 'crazyfoo NAME' and then queries the existence of that NAME to see if it already exists (by shelling out and running the mycrazyfoos command above). If that crazyfoo already exists, it returns a Foos::Existing object. Any change to this object requires shelling out, running commands, and getting confirmation that everything ran okay. If this is the way to go, then the new() constructor needs a test to decide which subclass constructor to use (if that even makes sense in this context). Here are the subclasses:

    Foos/Existing.pm: as mentioned above, this is for when a Foos object already exists.

    Foos/Pending.pm: an object that will be created if the 'crazyfoo NAME' doesn't actually exist. In this case, the new() constructor above will be checked for additional parameters, and when ->create() is called it will shell out using system() and create a new object... possibly returning an 'Existing' one...

    OR, as I type this out, I realize it is perhaps better to have a single Foos class (an alternative arrangement) that has:

    - new(), which takes just a name
    - create(), which takes additional creation parameters
    - delete(), change() and other methods that affect ones that exist; existence will have to be checked dynamically.

    So here we are, two main directions to go with this. I'm curious which would be the more intelligent way to go.

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly... a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.
        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print an error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if (!file.Open(_T("c:\\sample.txt"), CFile::modeRead, &exp))
                {
                    exp.ReportError();
                    cout << '\n';
                    cout << "Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout << "Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer = (BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout << "length of string : " << strlen(str) << '\n';
                cout << "string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str << '\n';

                file.Close();
                cout << "File was closed\n";
                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }
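
    The junk characters most likely come from printing a buffer that was never null-terminated: CFile::Read returns the number of bytes actually read, and nothing in str beyond that point is initialized. A minimal sketch of the usual fix, using the same MFC calls as above (the cast to UINT assumes the file fits in memory):

        ULONGLONG dwLength = file.GetLength();
        char* buffer = new char[(size_t)dwLength + 1];      // one extra byte for the terminator
        UINT bytesRead = file.Read(buffer, (UINT)dwLength); // Read reports how much it actually got
        buffer[bytesRead] = '\0';                           // terminate before printing
        cout << "Data : " << buffer << '\n';
        delete[] buffer;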

    Read the article

  • Hierarchy inheritance

    - by reito
    I have run into a problem. In my C++ hierarchy tree I have two branches for entities of different nature but the same behavior - the same interface. I created hierarchy trees like the first one in the image below. Now I want to work with the Item or Base classes independently of their nature (first or second), so I created one abstract branch for this purpose. My mind built the second scheme in the image below, but it isn't working. The scheme that does work seems to be the third one. That's bad logic, I think... Does anybody have ideas about this kind of hierarchy inheritance? How can I make it more logical, and simpler to understand? Sorry for my english - russian internet didn't help :)

    Update: You asked me to be more explicit, and I will be. In my project (plugins for Adobe FrameMaker) I need to work with dialogs and GUI controls. In some places I work with WinAPI controls, in other places with FDK (internal FrameMaker) controls, but I want to work through the same interface. I can't use one base class and inherit the others from it, because all the needed controls form a hierarchy tree (not one class). So I have one hierarchy tree for WinAPI controls, one for FDK, and one abstract tree usable with any control.

    For example, there is an Edit control (WinEdit and FdkEdit realizations), a Button control (WinButton and FdkButton realizations) and a base entity, Control (WinControl and FdkControl realizations). For now I can link my classes inside the realization trees (Win and Fdk) with inheritance between each of them (WinControl is the base class for WinButton and WinEdit; FdkControl is the base class for FdkButton and FdkEdit). And I can link to the abstract classes (Control is the base class for WinControl and FdkControl; Edit is the base class for WinEdit and FdkEdit; Button is the base class for WinButton and FdkButton). But I can't link my abstract tree itself - the compiler complains. In fact I have two hierarchy trees that I want to inherit from another one.

    Update: I have done this quest! :) I used virtual inheritance and got the scheme shown here: http://img12.imageshack.us/img12/7782/99614779.png. The abstract tree has only pure abstract methods. All inheritance within the abstract tree is virtual, and the links from a realization tree to the abstract tree are virtual as well. The image shows only one realization tree for simplicity. Thanks for help!
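
    The update describes the classic "diamond" shape resolved with virtual inheritance. A minimal sketch of that shape, borrowing the class names from the question (the method bodies are illustrative placeholders):

        struct Control {                          // abstract root
            virtual ~Control() {}
            virtual void show() = 0;
        };

        struct Edit : virtual Control {           // abstract branch, virtual base
            virtual void setText(const char*) = 0;
        };

        struct WinControl : virtual Control {     // realization branch, virtual base
            virtual void show() {}
        };

        struct WinEdit : public WinControl, public Edit {
            // Joins both branches; thanks to the virtual bases there is
            // exactly one Control subobject, and WinControl::show()
            // dominates as the final overrider.
            virtual void setText(const char*) {}
        };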

    Read the article

  • SQL Invalid Object Name 'AddressType'

    - by salvationishere
    I am getting the above error in my VS 2008 C# method when I try to invoke the SQL getColumnNames stored procedure from VS. This SP accepts one input parameter, the table name, and works successfully from SSMS. Currently I am selecting the AdventureWorks AddressType table for it to pull the column names from. I can see the AdventureWorks database available in VS from my Server Explorer / Data Connection, and I see both the AddressType table and the getColumnNames SP showing in Server Explorer. But I am still getting the error listed above. Here is the C# code snippet I use to execute this:

        public static DataTable DisplayTableColumns(string tt)
        {
            SqlDataReader dr = null;
            string TableName = tt;
            string connString = "Data Source=.;AttachDbFilename=\"C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\AdventureWorks_Data.mdf\";Initial Catalog=AdventureWorks;Integrated Security=True;Connect Timeout=30;User Instance=False";
            string errorMsg;
            SqlConnection conn2 = new SqlConnection(connString);
            SqlCommand cmd = conn2.CreateCommand();
            try
            {
                cmd.CommandText = "dbo.getColumnNames";
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Connection = conn2;
                SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar);
                parm.Value = TableName;
                parm.Direction = ParameterDirection.Input;
                cmd.Parameters.Add(parm);
                conn2.Open();
                dr = cmd.ExecuteReader();
            }
            catch (Exception ex)
            {
                errorMsg = ex.Message;
            }

    And when I examine errorMsg it says the following:

        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
        at System.Data.SqlClient.SqlDataReader.get_MetaData()
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader()
        at ADONET_namespace.ADONET_methods.DisplayTableColumns(String tt) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 35

    where line 35 is dr = cmd.ExecuteReader();

    Read the article

  • Wordpress pages address rewrite

    - by kemp
    UPDATE: I tried using the internal WordPress rewrite. What I have to do is have an address like this:

        http://example.com/galleria/artist-name

    sent to the gallery.php page with a variable containing the artist name. I used these rules as per WordPress' documentation:

        // REWRITE RULES (per gallery) {{{
        add_filter('rewrite_rules_array','wp_insertMyRewriteRules');
        add_filter('query_vars','wp_insertMyRewriteQueryVars');
        add_filter('init','flushRules'); // Remember to flush_rules() when adding rules

        function flushRules() {
            global $wp_rewrite;
            $wp_rewrite->flush_rules();
        }

        // Adding a new rule
        function wp_insertMyRewriteRules($rules) {
            $newrules = array();
            $newrules['(galleria)/(.*)$'] = 'index.php?pagename=gallery&galleryname=$matches[2]';
            return $newrules + $rules;
        }

        // Adding the id var so that WP recognizes it
        function wp_insertMyRewriteQueryVars($vars) {
            array_push($vars, 'galleryname');
            return $vars;
        }

    What's weird now is that on my local WordPress test install this works fine: the gallery page is called and the galleryname variable is passed. On the real site, on the other hand, the initial URL is accepted (as in, it doesn't go to a 404) BUT it changes to http://example.com/gallery (I mean it actually changes in the browser's address bar) and the variable is not defined in gallery.php. Any idea what could possibly cause this different behavior? Alternatively, any other way to achieve the same effect described in the first three lines is perfectly fine.

    Old question: What I need to do is rewrite this address:

        (1) http://localhost/wordpress/fake/text-value

    to

        (2) http://localhost/wordpress/gallery?somevar=text-value

    Notes: the remapping must be transparent - the user always has to see address (1) - and gallery is a permalink to a WordPress page, not a real address. I basically need to rewrite the address first (to modify it) and then feed it back to mod_rewrite again (to let WordPress parse it its own way).

    Problems: if I simply do

        RewriteRule ^fake$ http://localhost/wordpress/gallery [L]

    it works, but the address in the browser changes, which is no good. If I do

        RewriteRule ^fake$ /wordpress/gallery [L]

    I get a 404. I tried different flags instead of [L] but to no avail. How can I get this to work?

    EDIT: full .htaccess

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^fake$ /wordpress/gallery [R]
        RewriteBase /wordpress/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /wordpress/index.php [L]
        </IfModule>
        # END WordPress

    Read the article

  • Why do sockets not die when server dies? Why does a socket die when server is alive?

    - by Roman
    I am playing with sockets a bit. For that I wrote very simple "client" and "server" applications.

    Client:

        import java.net.*;

        public class client {
            public static void main(String[] args) throws Exception {
                InetAddress localhost = InetAddress.getLocalHost();
                System.out.println("before");
                Socket clientSideSocket = null;
                try {
                    clientSideSocket = new Socket(localhost, 12345, localhost, 54321);
                } catch (ConnectException e) {
                    System.out.println("Connection Refused");
                }
                System.out.println("after");
                if (clientSideSocket != null) {
                    clientSideSocket.close();
                }
            }
        }

    Server:

        import java.net.*;

        public class server {
            public static void main(String[] args) throws Exception {
                ServerSocket listener = new ServerSocket(12345);
                while (true) {
                    Socket serverSideSocket = listener.accept();
                    System.out.println("A client-request is accepted.");
                }
            }
        }

    And I found behavior that I cannot explain:

    1. I start the server, then I start the client. The connection is successfully established (the client stops running and the server keeps running). Then I close the server and start it again a second later. After that I start the client and it writes "Connection Refused". It seems to me that the server "remembers" the old connection and does not want to open the same connection twice. But I do not understand how that is possible, because I killed the previous server and started a new one!

    2. If I do not start the server immediately after the previous one was killed (I wait about 20 seconds), the server "forgets" the socket from the previous server and accepts the request from the client.

    3. I start the server and then the client. The connection is established (the server writes "A client-request is accepted"). Then I wait a minute and start the client again. And the server (which was running the whole time) accepts the request again! Why? The server should not accept a request from the same client IP and client port, but it does!
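
    A likely ingredient in symptoms 1 and 2 (offered as a hypothesis, not a verified diagnosis) is TCP's TIME_WAIT state: after a connection is torn down, its (address, port) pair stays reserved for a couple of minutes, and this client deliberately binds a fixed local port (54321), so the old four-tuple can collide with the new connection attempt until TIME_WAIT expires. On the listening side the usual mitigation is SO_REUSEADDR - in Java, serverSocket.setReuseAddress(true) before binding. A minimal POSIX C++ sketch of the same idea (error handling omitted):

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <cstring>

        int make_listener(unsigned short port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int yes = 1;
            // Allow bind() to succeed while old connections linger in TIME_WAIT.
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
            sockaddr_in addr;
            std::memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            bind(fd, (sockaddr*)&addr, sizeof(addr));
            listen(fd, 16);
            return fd;
        }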

    Read the article

  • KnockoutJS radio buttons not changing like checkboxes do

    - by Gaui
    I have the same data structure for checkboxes and radio buttons. When checking the checkboxes, they return the correct boolean value (the 'chosen' variable). However, when I check the radio buttons, 'chosen' always changes to the 'value' (an integer). Also, the radio buttons don't get "checked" initially, even though 'chosen' == true.

    Javascript:

        function attributeValueViewModel(data) {
            var self = this;
            self.id = ko.observable(data.id);
            self.attributeID = ko.observable(data.attributeID);
            self.value = ko.observable(data.value);
            self.chosen = ko.observable(data.chosen);
        }

        function viewModel() {
            var self = this;
            self.attributeValues1 = ko.observableArray([]);
            self.attributeValues2 = ko.observableArray([]);
            self.addToList = function(data) {
                ko.utils.arrayForEach(data, function(item) {
                    self.attributeValues1.push(new attributeValueViewModel(item));
                    self.attributeValues2.push(new attributeValueViewModel(item));
                });
            };
        }

        var arr = [
            { "id": 55, "attributeID": 28, "value": "Yes",   "chosen": false },
            { "id": 56, "attributeID": 28, "value": "No",    "chosen": true  },
            { "id": 62, "attributeID": 28, "value": "Maybe", "chosen": false }
        ];

        var vm = new viewModel();
        ko.applyBindings(vm);
        vm.addToList(arr);

    HTML:

        <b>Checkbox:</b>
        <div id="test1">
            <span data-bind="foreach: attributeValues1()">
                <input type="checkbox" data-bind="value: id(), checked: chosen, attr: { name: 'test1' }" />
                <span data-bind="text: value()"></span>
                <span data-bind="text: chosen()"></span>
            </span>
        </div>
        <br />
        <b>Radio:</b>
        <div id="test2">
            <span data-bind="foreach: attributeValues2()">
                <input type="radio" data-bind="value: id(), checked: chosen, attr: { name: 'test2' }" />
                <span data-bind="text: value()"></span>
                <span data-bind="text: chosen()"></span>
            </span>
        </div>

    Here is my fiddle: http://jsfiddle.net/SN7Vn/1/

    Can you please explain this behavior and why the radio buttons don't update the boolean (like the checkboxes do)?

    Read the article

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, used to retrieve those elements in sorted order. The scores are doubles, even though many users actually sort by integers (for instance unix times). When the database is saved we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1, sizeof(buf)-1, "%.17g", val);

    Additionally, infinity and not-a-number conditions are checked so that they can also be represented in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double can be cast to an integer without loss of data, and then use that function to turn the integer into a string if this is true.

    For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this:

        double x = ... some value ...;
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the cast above converts the double into a long long, and then back into a double. If the range fits and there is no decimal part, the number will survive the conversion and be exactly the same as the initial number.

    As I wanted to make sure this will not break things on some system, I joined #c on freenode and got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets - that is, archs where Linux / Mac OS X / *BSD / Solaris run nowadays?

    What I can add to make the code saner is an explicit check on the range of the double before attempting the cast at all. Thank you for any help.
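
    For reference, the undefined behavior lives in casting a double whose value is out of range for the target type (and NaN); a guard that runs before the cast makes the cast itself well-defined. A hedged C++ sketch of that range check (the bounds are ±2^63, both exactly representable as doubles):

        // Returns true if d is exactly representable as a 64-bit integer.
        bool fits_int64(double d, long long* out) {
            // The negated comparison also rejects NaN, which fails every compare.
            if (!(d >= -9223372036854775808.0 && d < 9223372036854775808.0))
                return false;                  // out of range, infinite, or NaN
            long long i = (long long)d;        // now a well-defined conversion
            if ((double)i != d) return false;  // had a fractional part
            *out = i;
            return true;
        }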

    Read the article

  • cellForRowAtIndexPath called too late

    - by Mihai Fonoage
    Hi, I am trying to reload a table every time some data I get from the web becomes available. This is what I have:

    SearchDataViewController:

        - (void)parseDatatXML {
            parsingDelegate = [[XMLParsingDelegate alloc] init];
            parsingDelegate.searchDataController = self;

            // CONTAINS THE TABLE THAT NEEDS RE-LOADING
            ImplementedSearchViewController *searchController = [[ImplementedSearchViewController alloc] initWithNibName:@"ImplementedSearchView" bundle:nil];
            ProjectAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
            UINavigationController *nav = (UINavigationController *)[delegate.splitViewController.viewControllers objectAtIndex:0];
            NSArray *viewControllers = [[NSArray alloc] initWithObjects:nav, searchController, nil];
            self.splitViewController.viewControllers = viewControllers;
            [viewControllers release];

            // PASS A REFERENCE TO THE PARSING DELEGATE SO THAT IT CAN CALL reloadData ON THE TABLE
            parsingDelegate.searchViewController = searchController;
            [searchController release];

            // Build the url request used to fetch data
            ...
            NSURLRequest *dataURLRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:dataURL]];
            parsingDelegate.feedConnection = [[[NSURLConnection alloc] initWithRequest:dataURLRequest delegate:parsingDelegate] autorelease];
        }

    ImplementedSearchViewController:

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            NSLog(@"count = %d", [keys count]); // keys IS AN NSMutableArray
            return [self.keys count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            ...
            cell.textLabel.text = [keys objectAtIndex:row];
            ...
        }

    XMLParsingDelegate:

        - (void)updateSearchTable:(NSArray *)array {
            ...
            [self.currentParseBatch addObject:(NSString *)[array objectAtIndex:1]];
            // RELOAD TABLE
            [self.searchViewController.table reloadData];
        }

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict {
            if ([elementName isEqualToString:@"..."]) {
                self.currentParseBatch = [NSMutableArray array];
                searchViewController.keys = self.currentParseBatch;
                ...
            }
            ...
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
            if ([elementName isEqualToString:@"..."]) {
                ...
                [self performSelectorOnMainThread:@selector(updateSearchTable:) withObject:array waitUntilDone:NO];
            }
            ...
        }

    My problem is that when I debug, the calls alternate between reloadData and numberOfRowsInSection until the keys array is filled with the last data, at which point cellForRowAtIndexPath gets called. I wanted the table to be updated for each element I send, one by one, instead of only at the end. Any ideas why this behavior occurs? Thank you!

    Read the article

  • EXC_MEMORY_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
    I have an application that downloads an xml file, parses the file, and creates Core Data objects while doing so. In the parse code I have a function called 'emptyDataContext' that removes all items from Core Data before creating replacement items from the xml data. This method looks like this:

        - (void)emptyDataContext {
            NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]];

            NSError *error = nil;
            NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            DebugLog(@"ERROR: %@", error);
            DebugLog(@"RETRIEVED: %@", conditions);
            [allCon release];

            for (NSManagedObject *condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }

            // Update the data model, effectively removing the objects we removed above.
            //NSError *error;
            if (![managedObjectContext save:&error]) {
                DebugLog(@"%@", [error domain]);
            }
        }

    The first time this runs it deletes all objects and functions as it should, creating new objects from the xml file. I created an 'update' button that starts the exact same process of retrieving the file and proceeding with the parse & build. All is well until it's time to delete the Core Data objects: the 'deleteObject' call produces an "EXC_BAD_ACCESS" error each time. This only happens on the second run through. Captured errors return null. If I log the 'conditions' array I get a list of NSManagedObjects on the first run; on the second run, that log request causes a crash exactly as the deleteObject call does.

    I have a feeling it is something very simple I'm missing or not doing correctly that causes this behavior. The data works great in my table views - it's only when updating that I get the crashes. I have spent days on this, trying numerous alternative methods. What's left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. I just need to get past this hurdle. Thanks in advance for the help!

    Read the article

  • jQuery action being called when selector isn't met?

    - by dougoftheabaci
    I've been working on a prototype for a client's web site and I've run into a rather significant snag. You can view the prototype here. As you can see, the way it works is you can scroll a set of slides horizontally and, by clicking one, open a stack containing yet more slides. If you then click again on an image in that stack it opens up a lightbox. Clicking on another stack or the close button will close that stack (and open another, as the case may be). That all works great. However, you get some weird behavior if you do the following:

    1. Click to open any stack.
    2. Click to open an image's lightbox (this works best if you click on the image that's level with the main list).
    3. Close the lightbox and the stack, either by clicking the close button or by clicking on another stack.
    4. Click back to the first stack.

    Instead of reopening the stack, you get the lightbox. This confuses me, as the lightbox should only ever be called if there is a class on the containing UL, and that class is removed when the lightbox is closed. I've checked and double-checked this; it's definitely missing. Here are the respective functions:

        $("ul.hide a.lightbox").live("click", function() {
            $("ul.show").removeClass("show").addClass("hide");
            $(this).parent().parent().removeClass("hide").addClass("show");
            $("ul.hide").animate({opacity: 0.2});
            $("ul.show").animate({opacity: 1});
            $("#next").animate({opacity: 0.2});
            $("#prev").animate({opacity: 0.2});
            return false;
        });

        $("ul.show a.lightbox").live("click", function() {
            $(this).fancybox().trigger("click");
            return false;
        });

    As you can see, for the lightbox to be called the containing UL has to have the class "show". However, if you check it with Firebug, it doesn't. For those who are curious, the added .trigger("click") is there because the lightbox would otherwise require a double-click to launch. Any idea how I can fix this?

    Read the article

  • ZF-Autoloader not working in UnitTests on Ubuntu

    - by Sam
    I've got a problem regarding unit-testing a Zend Framework application under Ubuntu 12.04. The project structure is a default Zend application, with the models defined as follows:

        ./application
            ./models
                ./DbTable
                    ./ProjectStatus.php (Application_Model_DbTable_ProjectStatus)
                ./Mappers
                    ./ProjectStatus.php (Application_Model_Mapper_ProjectStatus)
                ./ProjectStatus.php (Application_Model_ProjectStatus)

    The problem here is the Zend-specific autoloading. The naming convention is that the folder Mappers holds all classes named with _Mapper but not _Mappers. This is some internal Zend behavior which is fine so far. On my Windows machine phpunit runs without any problems, instantiating all those classes. On my Ubuntu machine with Jenkins running on it, however, phpunit fails to find the appropriate classes, giving me the following error:

        Fatal error: Class 'Application_Model_Mapper_ProjectStatus' not found in /var/lib/jenkins/jobs/PAM/workspace/tests/application/models/Mapper/ProjectStatusTest.php on line 39

    The error appears to really be that the Zend autoloader doesn't load on the Ubuntu machine, but I can't figure out how or why. I think I've double-checked every point of contact with the Zend autoloading stuff, but I just can't figure this out. I'll paste the - from my point of view relevant - snippets and hope someone of you has some insight.

    Jenkins snippet for PHPUnit:

        <target name="phpunit" description="Run unit tests with PHPUnit">
            <exec executable="phpunit" failonerror="true">
                <arg line="--configuration '${basedir}/tests/phpunit.xml' --coverage-clover '${basedir}/build/logs/clover.xml' --coverage-html '${basedir}/build/coverage/.' --log-junit '${basedir}/build/logs/junit.xml'" />
            </exec>
        </target>

    ./tests/phpunit.xml:

        <phpunit bootstrap="./bootstrap.php">
            ... this shouldn't be of relevance ...
        </phpunit>

    ./tests/bootstrap.php:

        <?php
        // Define path to application directory
        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

        // Define application environment
        defined('APPLICATION_ENV')
            || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'testing'));

        // Ensure library/ is on include_path
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(APPLICATION_PATH . '/../library'),
            get_include_path(),
        )));

        require_once 'Zend/Loader/Autoloader.php';
        Zend_Loader_Autoloader::getInstance();

    Any help will be appreciated.

    Read the article

  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about, so it is strictly a matter of curiosity, not a real problem. As far as I understand, when you do something like

        #include <cstdlib>

    everything (except macros, of course) is declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following:

        #include <stdlib.h>
        namespace std {
            using ::abort;
            // etc....
        }

    which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every C function itself, placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace) - which is of course a lot of effort with little to no benefit.

    The essence of my question is: is the following program strictly conforming and guaranteed to work?

        #include <cstdio>

        int main() {
            ::printf("hello world\n");
        }

    EDIT: The closest I've found is this (17.4.1.2p4):

        Except as noted in clauses 18 through 27, the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C (Clause 7), or ISO/IEC:1990 Programming Languages - C AMENDMENT 1: C Integrity, (Clause 7), as appropriate, as if by inclusion. In the C++ Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std.

    which, to be honest, I could interpret either way. "The contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C" tells me that they may be required in the global namespace, but "In the C++ Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std." says they are in std (but doesn't specify any other scope they are in).
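
    For contrast, the form that the quoted paragraph unambiguously guarantees is the std-qualified one; a minimal counterpart to the program above:

        #include <cstdio>

        int main() {
            std::printf("hello world\n"); // guaranteed: <cstdio> declares printf in std
        }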

    Read the article

  • Doing some stuff right before the user exits the page

    - by Mike
    I have seen some questions here regarding what I want to achieve and have based what I have so far on those answers. But there is a slight misbehavior that is still irritating me.

    What I have is a sort of recovery feature. Whenever you are typing text, the client sends a sync request to the server every 45 seconds. It does two things. First, it extends the lease the client has on the record (only one person may edit at a time) for another 60 seconds. Second, it sends the text typed so far to the server in case the server crashes, the internet connection fails, etc. In that case, the next time the user enters our application, the user is notified that something has gone wrong and that some text was recovered. Think of Microsoft or OpenOffice recovery whenever they crash! Of course, if the user leaves the page willingly, the user does not need to be notified, and as a result the recovery is deleted. I do that final request via a beforeunload event. Everything went fine until I was asked to make a final adjustment... the same behavior you have here at Stack Overflow when you exit the editor: a confirm dialogue. This works so far, BUT the confirm dialogue is shown twice. Here is the code.

    The event:

        if (local.sync.autosave_textelement) {
            window.onbeforeunload = exitConfirm;
        }

    The function:

        function exitConfirm() {
            var local = Core;
            if (confirm('blub?')) {
                local.sync.autosave_destroy = true;
                sync(false);
                return true;
            } else {
                return false;
            }
        };

    Some problem-irrelevant clarifications: Core is a global object that contains a lot of variables that are used everywhere. sync makes an ajax request; the values are based on the values that the Core.sync object contains. The parameter determines whether the call should be async (default) or sync.

    Edit 1: I did try to separate the two concerns (recovery deletion and user confirmation, that is) into beforeunload and unload. The problem there was that unload is a bit too late: the user gets informed that there is a recovery even though it is scheduled to be deleted. If you refresh the page 1 second later, the dialogue disappears, as the file was deleted by then.

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented: MSSQL and MySQL. Now, when we use MSSQL (with the Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot, L"SELECT a.something, b.sthelse FROM TableA AS a LEFT JOIN TableB AS b ON a.ID=b.Ref");
        r.MoveFirst();
        while (!r.IsEOF())
        {
            // Retrieve
            CString strData;
            crs.GetFieldValue(L"a.something", strData);
            crs.MoveNext();
        }

    Now, with the MySQL driver everything runs as it should: the query is returned, and everything is lightning fast. With the MSSQL Native Client, however, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS, SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    but this didn't help either; there are still long-running execs of sp_cursorfetch() et al in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query itself doesn't take that long - executing the same SQL via SQL Studio over the same connection returns in a reasonable time.

    Question 1: Is it possible to somehow get the Native Client to use local cursors, in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT and then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc. or any of that nonsense. We only want to retrieve data: we take the recordset, get all the data, and delete it.

    Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?
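
    One direction worth noting (a sketch under the assumption that the driver only creates a server-side cursor when asked for a scrollable or updatable one): open the recordset forward-only and read-only, which with SQL Server typically maps to a default streaming result set instead of per-row sp_cursorfetch round trips:

        // Forward-only, read-only: no scrolling, no updates, rows are
        // streamed to the client rather than fetched via a server cursor.
        CRecordset r(&m_db);
        r.Open(CRecordset::forwardOnly,
               L"SELECT a.something, b.sthelse FROM TableA AS a "
               L"LEFT JOIN TableB AS b ON a.ID = b.Ref",
               CRecordset::readOnly);
        while (!r.IsEOF())
        {
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }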

    Read the article

  • current_user.user_type_id = @employer ID

    - by sscirrus
    I am building a system with a User model (authenticated using AuthLogic) and three user types in three models; one of these models is Employer. Each of these three models has_many :users, :as => :authenticable. I start by having a new visitor to the site create their own 'User' record with username, password, which user type they are, etc. Upon creation, the user is sent to the 'new' action for one of the three models. So, if they tell us they are an employer, we redirect_to :controller => "employers", :action => "new".

    Question: When the employer has submitted, I want to set current_user.user_type_id equal to the employer ID. This should be simple... but it's not working.

        # Employers Controller / new
        def new
            @employer = Employer.new
            1.times {@employer.addresses.build}
            render :layout => 'forms'
        end

        # Employers Controller / create
        def create
            @employer = Employer.new(params[:employer])
            if @employer.save
                if current_user.blank?
                    redirect_to :controller => "users", :action => "new"
                else
                    current_user.user_type_id = @employer.id
                    current_user.user_type = "Employer"
                    redirect_to :action => "home", :id => current_user.user_type_id
                end
            else
                render :action => "new"
            end
        end

    ------UPDATE------

    Hi guys. In response: I am using this table structure because each of my three user type models has lots of different fields and each has different relationships to the other models, which is why I've avoided STI. By 1.times {@employer.addresses.build} I'm connecting the employer model to the polymorphic address table in one form, so I'm asking the controller to build a new address to go along with the new employer.

    Averell: you mentioned encapsulating... something in the model using a 'setter' method. I have no idea what you mean by this - could you please explain how this works (or direct me to an example elsewhere)? With tsdbrown's answer I have managed to create the behavior I want; if there's a more elegant way to accomplish the same thing I'd love to learn how. Thanks very much. Thanks to tsdbrown for answering the current_user.save problem!

    Read the article

  • validate uniqueness amongst multiple subclasses with Single Table Inheritance

    - by irkenInvader
    I have a Card model that has many Sets and a Set model that has many Cards through a Membership model:

        class Card < ActiveRecord::Base
            has_many :memberships
            has_many :sets, :through => :memberships
        end

        class Membership < ActiveRecord::Base
            belongs_to :card
            belongs_to :set
            validates_uniqueness_of :card_id, :scope => :set_id
        end

        class Set < ActiveRecord::Base
            has_many :memberships
            has_many :cards, :through => :memberships
            validates_presence_of :cards
        end

    I also have some subclasses of the above using Single Table Inheritance:

        class FooCard < Card
        end

        class BarCard < Card
        end

    and

        class Expansion < Set
        end

        class GameSet < Set
            validates_size_of :cards, :is => 10
        end

    All of the above is working as I intend. What I'm trying to figure out is how to validate that a Card can only belong to a single Expansion. I want the following to be invalid:

        some_cards = FooCard.all(:limit => 25)
        first_expansion = Expansion.new
        second_expansion = Expansion.new
        first_expansion.cards = some_cards
        second_expansion.cards = some_cards
        first_expansion.save  # Valid
        second_expansion.save # **Should be invalid**

    However, GameSets should allow this behavior:

        other_cards = FooCard.all(:limit => 10)
        first_set = GameSet.new
        second_set = GameSet.new
        first_set.cards = other_cards  # Valid
        second_set.cards = other_cards # Also valid

    I'm guessing that a validates_uniqueness_of call is needed somewhere, but I'm not sure where to put it. Any suggestions?

    UPDATE 1: I modified the Expansion class as suggested:

        class Expansion < Set
            validate :validates_uniqueness_of_cards

            def validates_uniqueness_of_cards
                membership = Membership.find(
                    :first,
                    :include => :set,
                    :conditions => ["card_id IN (?) AND sets.type = ?",
                                    self.cards.map(&:id), "Expansion"])
                errors.add_to_base("a Card can only belong to a single Expansion") unless membership.nil?
            end
        end

    This works when creating initial expansions, validating that no current expansion contains the cards. However, it (falsely) invalidates future updates to an expansion with new cards. In other words:

        old_exp = Expansion.find(1)
        old_exp.card_ids # returns [1,2,3,4,5]

        new_exp = Expansion.new
        new_exp.card_ids = [6,7,8,9,10]
        new_exp.save # returns true

        new_exp.card_ids << [11,12] # no other Expansion contains these cards
        new_exp.valid? # returns false ... SHOULD be true

    Read the article

  • Android Notification with AlarmManager, Broadcast and Service

    - by user2435829
    This is my code for managing a single notification:

    myActivity.java:

        public class myActivity extends Activity {
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.mylayout);

                cal = Calendar.getInstance();
                // it is set to 10.30
                cal.set(Calendar.HOUR, 10);
                cal.set(Calendar.MINUTE, 30);
                cal.set(Calendar.SECOND, 0);

                long start = cal.getTimeInMillis();
                if (cal.before(Calendar.getInstance())) {
                    start += AlarmManager.INTERVAL_FIFTEEN_MINUTES;
                }

                Intent mainIntent = new Intent(this, myReceiver.class);
                pIntent = PendingIntent.getBroadcast(this, 0, mainIntent, PendingIntent.FLAG_UPDATE_CURRENT);
                AlarmManager myAlarm = (AlarmManager) getSystemService(ALARM_SERVICE);
                myAlarm.setRepeating(AlarmManager.RTC_WAKEUP, start, AlarmManager.INTERVAL_FIFTEEN_MINUTES, pIntent);
            }
        }

    myReceiver.java:

        public class myReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context c, Intent i) {
                Intent myService1 = new Intent(c, myAlarmService.class);
                c.startService(myService1);
            }
        }

    myAlarmService.java:

        public class myAlarmService extends Service {
            @Override
            public IBinder onBind(Intent arg0) {
                return null;
            }

            @Override
            public void onCreate() {
                super.onCreate();
            }

            @SuppressWarnings("deprecation")
            @Override
            public void onStart(Intent intent, int startId) {
                super.onStart(intent, startId);
                displayNotification();
            }

            @Override
            public void onDestroy() {
                super.onDestroy();
            }

            public void displayNotification() {
                Intent mainIntent = new Intent(this, myActivity.class);
                PendingIntent pIntent = PendingIntent.getActivity(this, 0, mainIntent, PendingIntent.FLAG_UPDATE_CURRENT);
                NotificationManager nm = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
                Notification.Builder builder = new Notification.Builder(this);
                builder.setContentIntent(pIntent)
                       .setAutoCancel(true)
                       .setSmallIcon(R.drawable.ic_noti)
                       .setTicker(getString(R.string.notifmsg))
                       .setContentTitle(getString(R.string.app_name))
                       .setContentText(getString(R.string.notifmsg));
                nm.notify(0, builder.build());
            }
        }

    AndroidManifest.xml:

        <uses-permission android:name="android.permission.WAKE_LOCK" />
        ...
        <service android:name=".myAlarmService" android:enabled="true" />
        <receiver android:name=".myReceiver"/>

    IF the time has NOT passed yet, everything works perfectly: the notification appears when it must appear. BUT if the time HAS passed (let's assume it is 10.31 AM), the notification fires every time... when I close and re-open the app, when I click on the notification... it has really strange behavior. I can't figure out what's wrong with it. Can you help me please (and explain why, if you find a solution)? Thanks in advance :)

    Read the article

  • Setting background-image with javascript

    - by Mattoe3k
    In Chrome, Safari, and Opera, setting the background image to an absolute reference such as "/images/image.png" changes it to "http://sitepath/images/image.png". It does not do this in Firefox. Is there any way to avoid this behavior, or is it written into the browser's JavaScript engine? Using jQuery to set the background-image has the same problem. The problem is that I am posting the HTML to a php script that needs the urls in this specific format. I know that making the image path relative fixes this, but I can't do that. The only other alternative would be to use a regexp to convert the urls. Thanks.

    Test this in Firefox, and in Chrome or another WebKit browser:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>
        <div style="height:400px;width:400px;background-image:url(/images/images/logo.gif);">
        </div>
        <br />
        <br />
        <div id="test" style="height:400px;width:400px;">
        </div>
        <script type="text/javascript" src="/javascripts/jquery.js"></script>
        <script type="text/javascript">
        $(document).ready(function(){
            $("#test").css('background-image',"url(/images/images/logo.gif)");
            alert(document.getElementById('test').style.backgroundImage);
        });
        </script>
        </body>
        </html>

    Read the article

  • jQuery multiple class selection

    - by morpheous
    I am a bit confused with this: I have a page with a set of buttons (i.e. elements with class attribute 'button'). The buttons belong to one of two classes (grp1 and grp2). These are my requirements:

    - For buttons with class 'enabled', when the mouse hovers over them, a 'button-hover' class is added to them (i.e. to the element the mouse is hovering over). Otherwise, the hover event is ignored.
    - When one of the buttons with class grp2 is clicked (it has to be 'enabled' first), I disable (i.e. remove the 'enabled' class from) all elements with class 'enabled'. (I should probably be selecting for elements with class 'button' AND 'enabled' - but I am having enough problems as it is, so I need to keep things simple for now.)

    This is what my page looks like:

        <html>
        <head>
        <title>Demo</title>
        <style type="text/css">
        .button {border: 1px solid gray; color: gray}
        .enabled {border: 1px solid red; color: red}
        .button-hover {background-color: blue; }
        </style>
        <script type="text/javascript" src="jquery.js"></script>
        </head>
        <body>
        <div class="btn-cntnr">
        <span class="grp1 button enabled">button 1</span>
        <span class="grp2 button enabled">button 2</span>
        <span class="grp2 button enabled">button 3</span>
        <span class="grp2 button enabled">button 4</span>
        </div>
        <script type="text/javascript">
        /* <![CDATA[ */
        $(document).ready(function(){
            $(".button.enabled").hover(function(){
                $(this).toggleClass('button-hover');
            }, function() {
                $(this).toggleClass('button-hover');
            });
            $('.grp2.enabled').click(function(){
                $(".grp2").removeClass('enabled');
            });
        });
        /* ]]> */
        </script>
        </body>
        </html>

    Currently, when a button with class 'grp2' is clicked, the other elements with class 'grp2' have the 'enabled' class removed (which works correctly). HOWEVER, I notice that even though the elements no longer have the 'enabled' class, SOMEHOW the hover behavior is still applied to them (which is WRONG). Once an element has been 'disabled', I no longer want it to respond to the hover() event. How may I implement this behavior, and what is wrong with the code above (i.e. why is it not working)? (I am still learning jQuery.)

    Read the article

  • CultureManager issue

    - by Serge
    I have a bug I don't understand. While the following works fine:

        Resources.Classes.AFieldFormula.DirectFieldFormula

    this one throws an exception:

        new ResourceManager(typeof(Resources.Classes.AFieldFormula)).GetString("DirectFieldFormula");

        Could not find any resources appropriate for the specified culture or the neutral culture.
        Make sure "Resources.Classes.AFieldFormula.resources" was correctly embedded or linked into
        assembly "MygLogWeb" at compile time, or that all the satellite assemblies required are
        loadable and fully signed.

    How come? Resource designer.cs file:

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated by a tool.
        //     Runtime Version:4.0.30319.18408
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        namespace Resources.Classes {
            using System;

            /// <summary>
            ///   A strongly-typed resource class, for looking up localized strings, etc.
            /// </summary>
            // This class was auto-generated by the StronglyTypedResourceBuilder
            // class via a tool like ResGen or Visual Studio.
            // To add or remove a member, edit your .ResX file then rerun ResGen
            // with the /str option, or rebuild your VS project.
            [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")]
            [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
            [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()]
            public class AFieldFormula {

                private static global::System.Resources.ResourceManager resourceMan;

                private static global::System.Globalization.CultureInfo resourceCulture;

                [global::System.Diagnostics.CodeAnalysis.SuppressMessageAttribute("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")]
                internal AFieldFormula() {
                }

                /// <summary>
                ///   Returns the cached ResourceManager instance used by this class.
                /// </summary>
                [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)]
                public static global::System.Resources.ResourceManager ResourceManager {
                    get {
                        if (object.ReferenceEquals(resourceMan, null)) {
                            global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MygLogWeb.Classes.AFieldFormula", typeof(AFieldFormula).Assembly);
                            resourceMan = temp;
                        }
                        return resourceMan;
                    }
                }

                /// <summary>
                ///   Overrides the current thread's CurrentUICulture property for all
                ///   resource lookups using this strongly typed resource class.
                /// </summary>
                [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)]
                public static global::System.Globalization.CultureInfo Culture {
                    get {
                        return resourceCulture;
                    }
                    set {
                        resourceCulture = value;
                    }
                }

                /// <summary>
                ///   Looks up a localized string similar to Direct field.
                /// </summary>
                public static string DirectFieldFormula {
                    get {
                        return ResourceManager.GetString("DirectFieldFormula", resourceCulture);
                    }
                }
            }
        }

    Read the article

  • Sort a list of pointers.

    - by YuppieNetworking
    Hello all. Once again I find myself failing at some really simple task in C++. Sometimes I wish I could un-learn all I know about OO in Java, since my problems usually start with thinking like Java. Anyway, I have a std::list<BaseObject*> that I want to sort. Let's say that BaseObject is:

        class BaseObject {
        protected:
            int id;
        public:
            BaseObject(int i) : id(i) {};
            virtual ~BaseObject() {};
        };

    I can sort the list of pointers to BaseObject with a comparator struct:

        struct Comparator {
            bool operator()(const BaseObject* o1, const BaseObject* o2) const {
                return o1->id < o2->id;
            }
        };

    And it would look like this:

        std::list<BaseObject*> mylist;
        mylist.push_back(new BaseObject(1));
        mylist.push_back(new BaseObject(2));
        // ...
        mylist.sort(Comparator());
        // intentionally omitted deletes and exception handling

    Up to here, everything is a-ok. However, I introduced some derived classes:

        class Child : public BaseObject {
        protected:
            int var;
        public:
            Child(int id1, int n) : BaseObject(id1), var(n) {};
            virtual ~Child() {};
        };

        class GrandChild : public Child {
        public:
            GrandChild(int id1, int n) : Child(id1, n) {};
            virtual ~GrandChild() {};
        };

    So now I would like to sort following these rules:

    1. For any Child object c and BaseObject b, b < c.
    2. To compare BaseObject objects, use their ids, as before.
    3. To compare Child objects, compare their vars. If they are equal, fall back to rule 2.
    4. GrandChild objects should fall back to the Child behavior (rule 3).

    I initially thought that I could probably do some casts in Comparator. However, that casts away constness. Then I thought that I could probably compare typeids, but then everything looked messy and it is not even correct. How could I implement this sort, still using list<BaseObject*>::sort? Thank you.
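
    A note on the const worry: a dynamic_cast from const BaseObject* to const Child* keeps the const qualifier, so no constness is cast away. A sketch of a comparator implementing the four rules above (it assumes Comparator can reach id and var, e.g. via friendship or accessors, as in the original snippet):

        struct Comparator {
            bool operator()(const BaseObject* o1, const BaseObject* o2) const {
                const Child* c1 = dynamic_cast<const Child*>(o1);
                const Child* c2 = dynamic_cast<const Child*>(o2);
                if (!c1 && c2) return true;   // rule 1: plain BaseObject sorts before any Child
                if (c1 && !c2) return false;
                if (c1 && c2 && c1->var != c2->var)
                    return c1->var < c2->var; // rule 3: Children compare by var first
                return o1->id < o2->id;       // rule 2: fall back to id
            }
        };

    Rule 4 comes for free: a GrandChild is a Child, so the dynamic_cast succeeds and it takes the Child path.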

    Read the article

  • When can Java produce a NaN (with specific code question)

    - by Brent
    I'm a bit perplexed by some code I'm currently writing. I am trying to perform a specific gradient descent (main loop included below), and depending on the initial conditions I will alternately get good-looking results (perhaps 20% of the time) or everything becomes NaN (the other 80% of the time). However, it seems to me that none of the operations in my code could produce NaNs when given honest numbers! My main loop is:

        // calculate errors
        delta = m1 + m2 - M;
        eta = f1 + f2 - F;
        for (int i = 0; i < numChildren; i++) {
            epsilon[i] = p[i]*m1 + (1-p[i])*m2 + q[i]*f1 + (1-q[i])*f2 - C[i];
        }

        // use errors in gradient descent
        // set aside differences for the p's and q's
        float mDiff = m1 - m2;
        float fDiff = f1 - f2;

        // first update m's and f's
        m1 -= rate*delta;
        m2 -= rate*delta;
        f1 -= rate*eta;
        f2 -= rate*eta;
        for (int i = 0; i < numChildren; i++) {
            m1 -= rate*epsilon[i]*p[i];
            m2 -= rate*epsilon[i]*(1-p[i]);
            f1 -= rate*epsilon[i]*q[i];
            f2 -= rate*epsilon[i]*(1-q[i]);
        }

        // now update the p's and q's
        for (int i = 0; i < numChildren; i++) {
            p[i] -= rate*epsilon[i]*mDiff;
            q[i] -= rate*epsilon[i]*fDiff;
        }

    This behavior can be seen with rate = 0.01, M = 30, F = 30, and C = {15, 25, 35, 45}, with the p[i] and q[i] chosen uniformly at random between 0 and 1, m1 and m2 chosen uniformly at random so as to add up to M, and f1 and f2 chosen uniformly at random so as to add up to F. Does anyone see anything that could create these NaNs?
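
    A general observation (not a verified diagnosis of this particular code): floating-point arithmetic never throws on overflow, so a diverging descent with too large a step first overflows to +/-Infinity, and expressions like inf - inf or 0 * inf - both of which occur naturally once some terms blow up - then yield NaN from "honest" inputs. A minimal C++ illustration:

        #include <cstdio>
        #include <limits>

        int main() {
            float big = std::numeric_limits<float>::max();
            float inf = big * 10.0f;          // overflow -> +inf, silently
            std::printf("%f\n", inf - inf);   // inf - inf -> nan
            std::printf("%f\n", 0.0f * inf);  // 0 * inf   -> nan
        }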

    Read the article

  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer - suggestions of all kinds are welcome.)

    My container is a std::multiset that holds Event structs:

        typedef std::multiset< Event, std::less< Event > > EventPQ;

    with the Event structs sorted by their double time members:

        struct Event {
        public:
            explicit Event(double t) : time(t), eventID(), hostID(), s() {}
            Event(double t, int eid, int hid, int stype)
                : time(t), eventID(eid), hostID(hid), s(stype) {}
            bool operator < (const Event & rhs) const {
                return (time < rhs.time);
            }
            double time;
            ...
        };

    The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time.

    I'd love suggestions on how to work through this. The code for executing and adding events is below for the curious:

        double t = 0.0;
        double nextTimeStep = t + EPID_DELTA_T;
        EventPQ::iterator eventIter = currentEvents.begin();

        while (t < EPID_SIM_LENGTH) {
            // Add some events to currentEvents

            while ((*eventIter).time < nextTimeStep) {
                Event thisEvent = *eventIter;
                t = thisEvent.time;
                executeEvent(thisEvent);
                eventCtr++;
                currentEvents.erase(eventIter);
                eventIter = currentEvents.begin();
            }
            t = nextTimeStep;
            nextTimeStep += EPID_DELTA_T;
        }

        void Simulation::addEvent(double et, int eid, int hid, int s) {
            assert(currentEvents.find(Event(et)) == currentEvents.end());
            Event thisEvent(et, eid, hid, s);
            currentEvents.insert(thisEvent);
        }
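
    One thing worth checking (an observation about the snippet, not a confirmed diagnosis): the inner loop dereferences eventIter without comparing it to currentEvents.end(), so once the container empties - or if executeEvent ever inserts into or erases from currentEvents - the dereference is undefined behavior, which can look exactly like the container "forgetting" its ordering. A defensive sketch of the inner loop under those assumptions:

        // Re-fetch begin() each pass and guard against an empty container,
        // erasing the element before executeEvent can touch the set.
        while (!currentEvents.empty() &&
               currentEvents.begin()->time < nextTimeStep) {
            Event thisEvent = *currentEvents.begin();
            t = thisEvent.time;
            currentEvents.erase(currentEvents.begin());
            executeEvent(thisEvent);  // safe even if it inserts new events
            ++eventCtr;
        }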

    Read the article
