Search Results

Search found 10328 results on 414 pages for 'behavior tree'.


  • Creating same-width hit-zones on MenuItems in ASP.NET 2.0 Menus that include MenuItems added at runtime

    - by Cary Jensen
    In an ASP.NET 2.0 application, I want to permit a user to select a MenuItem, even if the user does not click the actual text of the MenuItem, but instead only clicks the highlight area that ASP.NET places around the currently selected MenuItem (represented by the DynamicHoverStyle.BackColor property). Since the BackColor is displayed the same width for each MenuItem in a submenu, based on the MenuItem with the longest text, I would like to make the hit-zone (clickable area) of each sub-MenuItem the same width (the same as the BackColor area), regardless of how much text is displayed in each individual sub-MenuItem. Here's the setup. I am using a Menu on a MasterPage to display a similar menu on each of my pages. Some of the pages suppress this menu, and some of them add additional MenuItems, sometimes to the top level, sometimes adding sub-MenuItems to an existing top-level MenuItem, and sometimes both (adding a MenuItem to the top level and then additional MenuItems as sub-MenuItems to that newly added top level). This menu has a horizontal orientation, and it is dynamic, in that only the top level is initially exposed, and the submenus are displayed when selected. During usability testing, we noticed that users would select a top-level menu item to expose the submenu, and then select a submenu item, but not necessarily by clicking on the submenu item text, but instead clicking on the BackColor area of the submenu item. Since the text of some MenuItems is longer than others, MenuItems with short Text have a rather large BackColor area. When the user clicks on the BackColor area, but not directly on the MenuItem Text, nothing happens, since the user didn't actually click on the submenu item hit zone. Although there are visual cues as to what part of the displayed MenuItem is clickable (the mouse pointer changes to a link cursor when the mouse is positioned on the MenuItem Text, but not when it is only hovering over the BackColor), this behavior confused the users. They highlighted a MenuItem, and clicked it, but nothing happened. I would like to make clicking a MenuItem successful, even if the user did not click on the actual Text of the MenuItem, but simply clicked on the BackColor area. It seems like there should be a property somewhere to control the width of the active area of the displayed MenuItems, but I do not see it. Any suggestions, given that I am creating some of these MenuItems at runtime?

    Read the article

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service and have something of a dilemma as far as using asynchronous vs synchronous vs threading. Here's the scenario. Say you have three options to drill down into, each one having its own REST-based resource. I can lazily load each one with a synchronous request, but that'll block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen. I can't see any reason to use synchronous HTTP requests vs asynchronous because of that reason alone. The only time it makes sense is to have a worker thread make your synchronous request, and notify the main thread when the request is done. This will prevent the block. The question then is benchmarking your code and seeing which has more overhead, a threaded synchronous request or an asynchronous request. The problem with asynchronous requests is you need to either set up a smart notification or delegate system as you can have multiple requests for multiple resources happening at any given time. The other problem with them is if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't go: - (NSArray *)users { if(users == nil) users = do_async_request // NO GOOD return users; } whereas the following will: - (NSArray *)users { if(users == nil) users = do_sync_request // OK. return users; } You also might have priority. What I mean by priority is if you look at Apple's Mail application on the iPhone, you'll notice they first suck down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message. I suppose my question to you experts is this. When are you using asynchronous, synchronous, threads -- and when are you using either async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all too common problem. It's simple to hack something out. The problem is, I don't want to hack and I want to have something that's simple and easy to maintain.
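
    A minimal sketch of the "synchronous request on a worker thread" option described above, using GCD (this assumes iOS 4+ where Grand Central Dispatch is available; the URL and the callback method are illustrative, not from the original code):

        - (void)loadUsers {
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                // the blocking call runs off the main thread
                NSURL *url = [NSURL URLWithString:@"http://example.com/users"];   // illustrative URL
                NSURLRequest *request = [NSURLRequest requestWithURL:url];
                NSURLResponse *response = nil;
                NSError *error = nil;
                NSData *data = [NSURLConnection sendSynchronousRequest:request
                                                     returningResponse:&response
                                                                 error:&error];
                dispatch_async(dispatch_get_main_queue(), ^{
                    // hand the result back to the main thread (UI, notification, or delegate)
                    [self usersRequestDidFinishWithData:data error:error];        // hypothetical callback
                });
            });
        }

    The getter can then return the cached array immediately (possibly empty) and kick off this load, with the callback notifying interested views once data arrives.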

    Read the article

  • Complex HashMap has different hashCode after serialization

    - by woezelmann
    I am parsing an XML file into a complex HashMap looking like this: Map<String, Map<String, EcmObject>> EcmObject: public class EcmObject implements Comparable, Serializable { private final EcmObjectType type; private final String name; private final List<EcmField> fields; private final boolean pages; // getter, equals, hashCode } EcmObjectType: public enum EcmObjectType implements Serializable { FOLDER, REGISTER, DOCUMENT } EcmField public class EcmField implements Comparable, Serializable { private final EcmFieldDataType dataType; private final EcmFieldControlType controlType; private final String name; private final String dbname; private final String internalname; private final Integer length; // getter, equals, hashCode } EcmFieldDataType public enum EcmFieldDataType implements Serializable { TEXT, DATE, NUMBER, GROUP, DEC; } and EcmFieldControlType public enum EcmFieldControlType implements Serializable{ DEFAULT, CHECKBOX, LIST, DBLIST, TEXTAREA, HIERARCHY, TREE, GRID, RADIO, PAGECONTROL, STATIC; } I have overridden all hashCode and equals methods by using commons-lang's EqualsBuilder and HashCodeBuilder. Now when I copy a HashMap this way: Map<String, Map<String, EcmObject>> m = EcmUtil.convertXmlObjectDefsToEcmEntries(new File("e:\\objdef.xml")); Map<String, Map<String, EcmObject>> m2; System.out.println(m.hashCode()); ByteArrayOutputStream baos = new ByteArrayOutputStream(8 * 4096); ObjectOutputStream oos = new ObjectOutputStream(baos); oos.writeObject(m); ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray()); ObjectInputStream ois = new ObjectInputStream(bais); m2 = (Map<String, Map<String, EcmObject>>) ois.readObject(); System.out.println(m.hashCode()); System.out.println(m2.hashCode()); m.hashCode() is not equal to m2.hashCode(). Here is my output: -1639352210 -2071553208 1679930154 Another strange thing is that, e.g., 10 times m has the same hashCode and suddenly on the 11th time the hashCode is different... Any ideas what this is about?
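
    One likely culprit (an assumption, since the equals/hashCode bodies are not shown): Java enums use the default identity-based Object.hashCode(), which is not stable across deserialization or separate JVM runs, so passing the enum itself to HashCodeBuilder makes the whole map's hash code unstable. A sketch of a hashCode for EcmObject that stays stable by hashing the enum's name instead:

        @Override
        public int hashCode() {
            return new org.apache.commons.lang.builder.HashCodeBuilder(17, 37)
                    .append(type == null ? null : type.name())   // stable, unlike type.hashCode()
                    .append(name)
                    .append(fields)
                    .append(pages)
                    .toHashCode();
        }

    The same consideration applies to the two enum members of EcmField, since the list of fields contributes to EcmObject's hash code.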

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem After: INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]'); INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]'); The updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL [email protected] 2 [email protected] [email protected] There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
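
    One workaround worth sketching (untested here, and the column list is abbreviated just like the original): avoid depending on the trigger-level ON CONFLICT clause at all, and only insert into AddressLookup when the address is genuinely missing, so the outer INSERT OR REPLACE has no conflict to override and Alice's ID is never reassigned:

        CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
        BEGIN
          -- insert only when absent; immune to the outer statement's conflict policy
          INSERT INTO AddressLookup(Address)
            SELECT NEW.ToAddr
            WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.ToAddr);
          INSERT INTO AddressLookup(Address)
            SELECT NEW.FromAddr
            WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.FromAddr);
          INSERT INTO EmailInfo
            SELECT NEW.MessageID, A1.ID, A2.ID
            FROM AddressLookup A1, AddressLookup A2
            WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
        END;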

    Read the article

  • Perl newbie: get a proper minimal debug_mode solution

    - by Michael Mao
    Hi all: I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language: I am trying to have a debug_mode switch from CLI which can be used to control how my script works, by switching certain subroutines "on and off". And below is what I've got so far: #!/usr/bin/perl -s -w # purpose : make subroutine execution optional, # which is depending on a CLI switch flag use strict; use warnings; use constant DEBUG_VERBOSE => "v"; use constant DEBUG_SUPPRESS_ERROR_MSGS => "s"; use constant DEBUG_IGNORE_VALIDATION => "i"; use constant DEBUG_SETPPING_COMPUTATION => "c"; our ($debug_mode); mainMethod(); sub mainMethod # () { if(!$debug_mode) { print "debug_mode is OFF\n"; } elsif($debug_mode) { print "debug_mode is ON\n"; } else { print "OMG!\n"; exit -1; } checkArgv(); printErrorMsg("Error_Code_123", "Parsing Error at..."); verbose(); } sub checkArgv #() { print ("Number of ARGV : ".(1 + $#ARGV)."\n"); } sub printErrorMsg # ($error_code, $error_msg, ..) { if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS)) { print "You can only see me if -debug_mode is NOT set". " to DEBUG_SUPPRESS_ERROR_MSGS\n"; die("terminated prematurely...\n") and exit -1; } } sub verbose # () { if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE)) { print "Blah blah blah...\n"; } } So far as I can tell, at least it works...: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work: ./optional.pl ./optional.pl -debug_mode ./optional.pl -debug_mode=v ./optional.pl -debug_mode=s However, I am puzzled when multiple debug_modes are "mixed", such as: ./optional.pl -debug_mode=sv ./optional.pl -debug_mode=vs I don't understand why the above lines of code "magically work". I see both of the "DEBUG_VERBOSE" and "DEBUG_SUPPRESS_ERROR_MSGS" modes apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the "precedence of debug_modes"? Also, I am not certain if my approach is good enough for Perlists and I hope I am getting my feet in the right direction. My biggest problem is that I now put if statements inside most of my subroutines for controlling their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module from CPAN or elsewhere, but I want a really minimal solution that doesn't depend on any other module than the "default". And I cannot have any control on the environment where this script will be executed... Many thanks in advance for any suggestions.
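
    For what it's worth, a minimal sketch of the same idea using the core Getopt::Long module instead of the -s switch parsing; it stays dependency-free (Getopt::Long ships with Perl) and makes individual flags easy to test (option names here are illustrative):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Getopt::Long;

        my $debug_mode = '';                          # e.g. --debug_mode=vs
        GetOptions('debug_mode:s' => \$debug_mode) or die "bad options\n";

        sub has_mode { my ($flag) = @_; return index($debug_mode, $flag) >= 0; }

        print "verbose on\n"        if has_mode('v');
        print "errors suppressed\n" if has_mode('s');

    Precedence between conflicting flags then becomes an ordinary if/elsif chain of has_mode checks, decided in one place instead of inside every subroutine.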

    Read the article

  • Newline character a huge influence on the XML parser?

    - by jovany
    I have a question about a basic XML file I'm parsing and just putting in simple newlines (Enters). I'll try to explain my problem with this next example. I'm (still) building an XML tree and all it has to do (this is a test tree) is put the summary in an itemlist. I then export it to a plist so I can see if everything is done correctly. A method that does this is in the parser which looks like this if([elementName isEqualToString:@"Book"]) { [appDelegate.books addObject:aBook]; [aBook release]; aBook = nil; } else { [aBook setValue:currentElementValue forKey:elementName]; NSString *directions = [NSString stringWithFormat:currentElementValue]; [directionTree setObject:directions forKey:@"directions"]; } [currentElementValue release]; currentElementValue = nil; } The export to the plist file happens at the end tag of Books. Below is the first XML file <?xml version="1.0" encoding="UTF-8"?> <Books><Book id="1"><summary>Ero adn the ancient quest to measure the globe.</summary></Book><Book id="2"><summary>how the scientific revolution began.</summary></Book></Books> This is my output http://img139.imageshack.us/img139/9175/picture6rtn.png If I make some adjustments like here <?xml version="1.0" encoding="UTF-8"?> <Books><Book id="1"> <summary>Ero adn the ancient quest to measure the globe.</summary> </Book> <Book id="2"> <summary>how the scientific revolution began.</summary> </Book> </Books> My directions key with type string remains empty... http://img248.imageshack.us/img248/5838/picture7y.png I never knew that if I just put in an enter it would have such an influence. Does anyone know a solution to this, since my real XML file looks like the second one? P.S. The funny thing is I can actually see (when debugging) my directions string (NSString *directions) fill up with the currentElementValue in both cases.
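
    A guess at what is happening (the full parser delegate isn't shown): once newlines are added, parser:foundCharacters: is also invoked for the whitespace between tags, and it can fire several times per element, so currentElementValue may end up holding only whitespace by the time the end tag arrives. A sketch of the usual defensive pattern, assuming currentElementValue is an NSMutableString: accumulate, then trim:

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if (currentElementValue == nil) {
                currentElementValue = [[NSMutableString alloc] initWithString:string];
            } else {
                [currentElementValue appendString:string];   // may be called more than once per element
            }
        }

        // at didEndElement, drop surrounding whitespace/newlines before using the value
        NSString *trimmed = [currentElementValue stringByTrimmingCharactersInSet:
                                [NSCharacterSet whitespaceAndNewlineCharacterSet]];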

    Read the article

  • Can't use attached property on combobox inside hierarchical datatemplate WPF

    - by jesse_t_r
    I'm hoping to use an attached property to assign a command to the selection changed event of a combobox that is embedded inside a treeview. I'm attempting to set the attached property inside the hierarchical data template for the tree but the command is not set and does not fire when the item in the combobox is changed. I've found that setting the attached property directly on a combobox outside of a datatemplate works fine; here is how I'm trying to set the property in the template: <HierarchicalDataTemplate x:Key="template1" ItemsSource="{Binding Path=ChildColumns}"> <Border Background="{StaticResource TreeItem_Background}" BorderBrush="Blue" BorderThickness="2" CornerRadius="5" Margin="2,5,5,2" HorizontalAlignment="Left" > <Grid> <Grid.ColumnDefinitions > <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <TextBlock MinWidth="80" HorizontalAlignment="Left" Grid.Column="0" Margin="5,2,2,2" Grid.Row ="0" Text="{Binding Path=ColName}"/> <ComboBox Name="cboColType" Grid.Column="1" HorizontalAlignment="Right" ItemsSource="{Binding Source={StaticResource dataFromEnum}}" SelectedItem="{Binding Path=ColumnType}" Margin="2,2,2,2" local:ItemSelectedBehavior.ItemSelected="{Binding Path=LoadConfigCommand}" /> </Grid> </Border> </HierarchicalDataTemplate> I also tried creating a style <Style x:Key="childItemStyle" TargetType="{x:Type FrameworkElement}"> <Setter Property="local:ItemSelectedBehavior.ItemSelected" Value="{Binding Path=LoadConfigCommand}" /> </Style> and setting the itemcontainerstyle to the style in the hierarchical datatemplate... still no luck... <HierarchicalDataTemplate> ... <ComboBox Name="cboColType" Grid.Column="1" HorizontalAlignment="Right" ItemsSource="{Binding Source={StaticResource dataFromEnum}}" SelectedItem="{Binding Path=ColumnType}" Margin="2,2,2,2" ItemContainerStyle="{StaticResource childItemStyle}" /> ... </HierarchicalDataTemplate> I'm still learning a lot about WPF so I'm assuming there is something particular about the hierarchical datatemplate that is not allowing the attached property to be set. I have found similar posts in the forums and tried to implement their solutions as above, but after a day of searching and experimenting with no luck I'm hoping someone has an idea about this...
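
    One thing worth trying (a sketch, not a verified fix, and it doesn't explain why the direct attached-property setter fails): ItemContainerStyle on a ComboBox styles its ComboBoxItems rather than the ComboBox itself, so the style never reaches the control you care about. An implicit style scoped to the template's resources targets the ComboBox directly:

        <HierarchicalDataTemplate x:Key="template1" ItemsSource="{Binding Path=ChildColumns}">
            <HierarchicalDataTemplate.Resources>
                <Style TargetType="{x:Type ComboBox}">
                    <Setter Property="local:ItemSelectedBehavior.ItemSelected"
                            Value="{Binding Path=LoadConfigCommand}" />
                </Style>
            </HierarchicalDataTemplate.Resources>
            <!-- ... existing Border/Grid/ComboBox content ... -->
        </HierarchicalDataTemplate>

    This at least rules out the container-style indirection as the reason the command never gets attached.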

    Read the article

  • Problems using wxWidgets (wxMSW) within multiple DLL instances

    Preface I'm developing VST-plugins which are DLL-based software modules loaded by VST-supporting host applications. To open a VST-plugin the host application loads the VST-DLL and calls an appropriate function of the plugin while providing a native window handle, which the plugin can use to draw its GUI. I managed to port my original VSTGUI code to the wxWidgets framework and now all my plugins run under wxMSW and wxMac, but I still have problems under wxMSW finding a correct way to open and close the plugins, and I am not sure if this is a wxMSW-only issue. Problem If I use any VST-host application I can open and close multiple instances of one of my VST-plugins without any problems. As soon as I open another of my VST-plugins besides my first VST-plugin and then close all instances of my first VST-plugin, the application crashes after a short amount of time within the wxEvtHandler::ProcessEvent function, telling me that the wxTheApp object isn't valid any longer during execution of wxTheApp->FilterEvent (see below). So it seems that the wxTheApp object was deleted after closing all instances of the first plugin and is no longer available for the second plugin. bool wxEvtHandler::ProcessEvent(wxEvent& event) { // allow the application to hook into event processing if ( wxTheApp ) { int rc = wxTheApp->FilterEvent(event); if ( rc != -1 ) { wxASSERT_MSG( rc == 1 || rc == 0, _T("unexpected wxApp::FilterEvent return value") ); return rc != 0; } //else: proceed normally } .... } Preconditions 1.) All my VST-plugins are dynamically linked against the C-Runtime and wxWidgets libraries. According to the wxWidgets forum, this seemed to be the best way to run multiple instances of the software side by side. 2.) The DllMain of each VST-Plugin is defined as follows: // WXW #include "wx/app.h" #include "wx/defs.h" #include "wx/gdicmn.h" #include "wx/image.h" #ifdef __WXMSW__ #include <windows.h> #include "wx/msw/winundef.h" BOOL APIENTRY DllMain ( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { switch (ul_reason_for_call) { case DLL_PROCESS_ATTACH: { wxInitialize(); ::wxInitAllImageHandlers(); break; } case DLL_THREAD_ATTACH: break; case DLL_THREAD_DETACH: break; case DLL_PROCESS_DETACH: wxUninitialize(); break; } return TRUE; } #endif // __WXMSW__ class Application : public wxApp {}; IMPLEMENT_APP_NO_MAIN(Application) Question How can I prevent this behavior, or rather, how can I properly handle the wxTheApp object if I have multiple instances of different VST-plugins (DLL-modules), which are dynamically linked against the C-Runtime and wxWidgets libraries? Best regards, Steffen

    Read the article

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar: public static class ObservableExtensions { private class ContinuousAverager { private double _mean; private long _count; public ContinuousAverager() { _mean = 0.0; _count = 0L; } // undecided whether this method needs to be made thread-safe or not // seems that ought to be the responsibility of the IObservable (?) public double Add(double value) { double delta = value - _mean; _mean += (delta / (double)(++_count)); return _mean; } } public static IObservable<double> ContinousAverage(this IObservable<double> source) { var averager = new ContinuousAverager(); return source.Select(x => averager.Add(x)); } } I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that? Original Question I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this: var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick"); var bids = ticks .Where(e => e.EventArgs.Quote.HasBid) .Select(e => e.EventArgs.Quote.Bid); var bidsSubscription = bids.Subscribe( b => Console.WriteLine("Bid: {0}", b) ); var avgOfBids = bids.Average(); var avgOfBidsSubscription = avgOfBids.Subscribe( b => Console.WriteLine("Avg Bid: {0}", b) ); I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this: Bid Avg Bid 1 1 2 1.5 1 1.33 2 1.5 It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
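
    For reference, the running average can also be expressed without the helper class by folding with Observable.Scan — a sketch, assuming an Rx build that provides Scan and Select as shown:

        public static IObservable<double> ContinuousAverage(this IObservable<double> source)
        {
            // carry (sum, count) along the sequence and project out the mean at each tick
            return source
                .Scan(new { Sum = 0.0, Count = 0L },
                      (acc, x) => new { Sum = acc.Sum + x, Count = acc.Count + 1 })
                .Select(acc => acc.Sum / acc.Count);
        }

    The other "continuous" variants mentioned above (count, sum, min, max) fall out of the same Scan pattern with a different accumulator.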

    Read the article

  • Symfony2 - PdfBundle not working

    - by ElPiter
    Using Symfony2 and PdfBundle to dynamically generate PDF files, I can't actually get the files generated. Following the documentation instructions, I have set up the whole bundle: autoload.php: 'Ps' => __DIR__.'/../vendor/bundles', 'PHPPdf' => __DIR__.'/../vendor/PHPPdf/lib', 'Imagine' => array(__DIR__.'/../vendor/PHPPdf/lib', __DIR__.'/../vendor/PHPPdf/lib/vendor/Imagine/lib'), 'Zend' => __DIR__.'/../vendor/PHPPdf/lib/vendor/Zend/library', 'ZendPdf' => __DIR__.'/../vendor/PHPPdf/lib/vendor/ZendPdf/library', AppKernel.php: ... new Ps\PdfBundle\PsPdfBundle(), ... I guess the setup is correct, as I am not getting any "library not found" errors or anything like that... So, after all that, I am doing this in the controller: ... use Ps\PdfBundle\Annotation\Pdf; ... /** * @Pdf() * @Route ("/pdf", name="_pdf") * @Template() */ public function generateInvoicePDFAction($name = 'Pedro') { return $this->render('AcmeStoreBundle:Shop:generateInvoice.pdf.twig', array( 'name' => $name, )); } And having this twig file: <pdf> <dynamic-page> Hello {{ name }}! </dynamic-page> </pdf> Well, somehow what I get in my page is just the normal HTML, generated as if it were a normal Response rendering. The Pdf() annotation is supposed to give the "special" behavior of creating the PDF file instead of rendering normal HTML. So, having the above code, when I request the route http://www.mysite.com/*...*/pdf, all I get is the following HTML rendered: <pdf> <dynamic-page> Hello Pedro! </dynamic-page> </pdf> (so a blank HTML page with just the words Hello Pedro! on it). Any clue? Am I doing anything wrong? Is it mandatory to have the alternative *.html.twig apart from the *.pdf.twig version? I don't think so... :(

    Read the article

  • May volatile be in user defined types to help writing thread-safe code

    - by David Rodríguez - dribeas
    I know, it has been made quite clear in a couple of questions/answers before, that volatile is related to the visible state of the c++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile time check to force the compiler into failing to accept code that could be not thread safe. In the article the keyword is used more like a required_thread_safety tag than the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but by lack of a better tool I could accept it. Basic simplification of the article: If you declare a variable volatile, only volatile member methods can be called on it, so the compiler will block calling code to other methods. Declaring an std::vector instance as volatile will block all uses of the class. Adding a wrapper in the shape of a locking pointer that performs a const_cast to release the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article: template <typename T> class LockingPtr { public: // Constructors/destructors LockingPtr(volatile T& obj, Mutex& mtx) : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx) { mtx.Lock(); } ~LockingPtr() { pMtx_->Unlock(); } // Pointer behavior T& operator*() { return *pObj_; } T* operator->() { return pObj_; } private: T* pObj_; Mutex* pMtx_; LockingPtr(const LockingPtr&); LockingPtr& operator=(const LockingPtr&); }; class SyncBuf { public: void Thread1() { LockingPtr<BufT> lpBuf(buffer_, mtx_); BufT::iterator i = lpBuf->begin(); for (; i != lpBuf->end(); ++i) { // ... use *i ... } } void Thread2(); private: typedef vector<char> BufT; volatile BufT buffer_; Mutex mtx_; // controls access to buffer_ };
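
    To make the mechanics of the quoted classes concrete, a short usage sketch: outside a LockingPtr the buffer is only reachable through volatile-qualified member functions (of which std::vector has none), so unsynchronized access fails to compile, while the locked, const_cast view behaves normally:

        void SyncBuf::Thread2()
        {
            // buffer_.push_back('x');             // rejected: push_back is not volatile-qualified
            LockingPtr<BufT> lpBuf(buffer_, mtx_);  // takes the mutex, strips volatile
            lpBuf->push_back('x');                  // fine: we hold the lock here
        }   // mutex released when lpBuf goes out of scope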

    Read the article

  • Gap appears between navigation bar and view after rotating & tab switching

    - by Bogatyr
    My iphone application is showing strange behavior when rotating: a gap appears between the navigation title and content view inside a tab bar view (details on how to reproduce are below). I've created a tiny test case that exhibits the same problem: a custom root UIViewController, which creates and displays a UITabBarController programmatically, which has two tabs: 1) plain UIViewController, and 2) UINavigationController created programmatically with a single plain UIViewController content view. The complete code for the application is in the root controller's viewDidLoad (every "*VC" class is a totally vanilla UIViewController subclass with XIB for user interface from XCode, with only the view background color changed to clearly identify each view, nothing else). Here's the viewDidLoad code, and the shouldAutorotateToInterfaceOrientation code, this code is the entire application basically: - (void)viewDidLoad { [super viewDidLoad]; FirstVC *fvc = [[FirstVC alloc] initWithNibName:@"FirstVC" bundle:nil]; NavContentsVC *ncvc = [[NavContentsVC alloc] initWithNibName:@"NavContentsVC" bundle:nil]; UINavigationController *svc = [[UINavigationController alloc] initWithRootViewController:ncvc]; NSMutableArray *localControllersArray = [[NSMutableArray alloc] initWithCapacity:2]; [localControllersArray addObject:fvc]; [localControllersArray addObject:svc]; fvc.title = @"FirstVC-Title"; ncvc.title = @"NavContents-Title"; UITabBarController *tbc = [[UITabBarController alloc] init]; tbc.view.frame = CGRectMake(0, 0, 320, 460); [tbc setViewControllers:localControllersArray]; [self.view addSubview:tbc.view]; [localControllersArray release]; [ncvc release]; [svc release]; [fvc release]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } Here's how to reproduce the problem: 1) start application 2) rotate device (happens in simulator, too) to landscape (UITabBar properly rotates) 3) click on tab 2 4) rotate device to portrait -- notice gap of root view controller's background color of about 10 pixels high beneath the Navigation title bar and the Navigation content view. 5) click tab 1 6) click tab 2 And the gap is gone! From my real application, I see that the gap remains during all VC push and pops while the NavigationController tab is active. Switching away to a different tab and back to the Nav tab clears up the gap. What am I doing wrong? I'm running on SDK 3.1.3, this happens both on the simulator and on the device. Except for this particular sequence, everything seems to work fine. Help!

    Read the article

  • One EntityManager finds an entity, the other does not.

    - by Pitelk
    Hi all, I have some very strange behavior in my program. I have 2 classes (class LogIn and CreateGame) where I have injected an EntityManager in each using the annotation @PersistenceContext(unitName="myUnitPU") EntityManager entitymanger; At some point I remove an object called "user" from the database using entitymanger.remove(user) from a method in the LogIn class. The business logic is that a user can host and join games (at the same time), so when removing the user, all the entries in the database about the games the user has created are removed, and all the entries showing which games the user has joined are removed as well. After that, I call another function which checks if the user exists, using a method in the LogIn class, entitymanger.find(user), which surprisingly enough finds the user. After that I call a method in the CreateGame class which tries to find the user by using again entitymanger.find(user); the entity manager in that class fails to find the user (which is the expected result, as the user is removed and is not in the database). So the question is: why does the entity manager in one class find the user (which is wrong) while the other doesn't find it? Has anyone ever had the same problem? PS: This "bug" occurs when the user has hosted a game which is joined by another user (let's call him userB) and userB has made a game which is joined by the current user. GAME | HOST | CLIENTS game1 | user | userB game2 | userB | user where in this case, by removing the user, game1 is deleted and the user is removed from game2, so the result is GAME | HOST | CLIENTS game2 | userB | PS2: The beans are EJB 3.0. The methods are called from a delegate class. The beans in the delegate class are instantiated using the InitialContext.lookup() method. Note that for logging in, creating, and joining games the appropriate delegate class calls the corresponding EJB which does the transactions. In the case of logOut, the delegate calls an EJB to log out the user, but because other stuff must be done (as said above) this EJB calls another EJB (again using lookup()) which has methods like removegame(), removeUserFromGame() etc. After those methods are executed the user is then logged out. Maybe it has something to do with the fact that the first entity manager is called by a delegate but the second from inside an EJB, and that's why the one entity manager can see the non-existent user while the other cannot? Also, all the methods have TRANSACTIONTYPE.REQUIRED. Thank you in advance.
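
    A sketch of two things worth checking (the identifiers below are illustrative, not from the original beans): whether the removal is actually flushed inside the same transaction, and whether the check that "finds" the user is being served from a stale persistence context rather than the database:

        // in the LogIn bean, inside the same transaction as the removal
        entitymanger.remove(user);
        entitymanger.flush();                       // push the DELETE to the database now

        // in the bean that re-checks: make sure the answer isn't a cached instance
        entitymanger.clear();                       // or refresh/detach the suspect entity
        User u = entitymanger.find(User.class, userId);   // null once the row is really gone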

    Read the article

  • Insertions into Zipper trees on XML files in Clojure

    - by ivar
    I'm confused as how to idiomatically change a xml tree accessed through clojure.contrib's zip-filter.xml. Should be trying to do this at all, or is there a better way? Say that I have some dummy xml file "itemdb.xml" like this: <itemlist> <item id="1"> <name>John</name> <desc>Works near here.</desc> </item> <item id="2"> <name>Sally</name> <desc>Owner of pet store.</desc> </item> </itemlist> And I have some code: (require '[clojure.zip :as zip] '[clojure.contrib.duck-streams :as ds] '[clojure.contrib.lazy-xml :as lxml] '[clojure.contrib.zip-filter.xml :as zf]) (def db (ref (zip/xml-zip (lxml/parse-trim (java.io.File. "itemdb.xml"))))) ;; Test that we can traverse and parse. (doall (map #(print (format "%10s: %s\n" (apply str (zf/xml-> % :name zf/text)) (apply str (zf/xml-> % :desc zf/text)))) (zf/xml-> @db :item))) ;; I assume something like this is needed to make the xml tags (defn create-item [name desc] {:tag :item :attrs {:id "3"} :contents (list {:tag :name :attrs {} :contents (list name)} {:tag :desc :attrs {} :contents (list desc)})}) (def fred-item (create-item "Fred" "Green-haired astrophysicist.")) ;; This disturbs the structure somehow (defn append-item [xmldb item] (zip/insert-right (-> xmldb zip/down zip/rightmost) item)) ;; I want to do something more like this (defn append-item2 [xmldb item] (zip/insert-right (zip/rightmost (zf/xml-> xmldb :item)) item)) (dosync (alter db append-item2 fred-item)) ;; Save this simple xml file with some added stuff. (ds/spit "appended-itemdb.xml" (with-out-str (lxml/emit (zip/root @db) :pad true))) I am unclear about how to use the clojure.zip functions appropriately in this case, and how that interacts with zip-filter. If you spot anything particularly weird in this small example, please point it out.
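
    Two details that may matter here (hedged, since only part of the tree handling is shown): the element maps produced by clojure.xml/lazy-xml parsing use the key :content rather than :contents, and zip/append-child adds a child to the node at the current loc without first navigating down into it. A sketch:

        (defn create-item2 [id name desc]
          {:tag :item
           :attrs {:id id}
           :content [{:tag :name :attrs {} :content [name]}
                     {:tag :desc :attrs {} :content [desc]}]})

        (defn append-item3 [xmldb item]
          ;; xmldb is the zipper loc of the root <itemlist> element
          (zip/append-child xmldb item))

        (dosync (alter db append-item3 (create-item2 "3" "Fred" "Green-haired astrophysicist.")))

    Because append-child works on the root loc directly, there is no descend-and-insert-right step to disturb the surrounding structure before emitting with lxml/emit.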

    Read the article

  • What is NSString in struct?

    - by 4thSpace
    I've defined a struct and want to assign one of its values to a NSMutableDictionary. When I try, I get a EXC_BAD_ACCESS. Here is the code: //in .h file typedef struct { NSString *valueOne; NSString *valueTwo; } myStruct; myStruct aStruct; //in .m file - (void)viewDidLoad { [super viewDidLoad]; aStruct.valueOne = @"firstValue"; } //at some later time [myDictionary setValue:aStruct.valueOne forKey:@"key1"]; //dies here with EXC_BAD_ACCESS This is the output in debugger console: (gdb) p aStruct.valueOne $1 = (NSString *) 0xf41850 Is there a way to tell what the value of aStruct.valueOne is? Since it is an NSString, why does the dictionary have such a problem with it? ------------- EDIT ------------- This edit is based on some comments below. The problem appears to be in the struct memory allocation. I have no issues assigning the struct value to the dictionary in viewDidLoad, as mentioned in one of the comments. The problem is that later on, I run into an issue with the struct. Just before the error, I do: po aStruct.oneValue Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000000 0x9895cedb in objc_msgSend () The program being debugged was signaled while in a function called from GDB. GDB has restored the context to what it was before the call. To change this behavior use "set unwindonsignal off" Evaluation of the expression containing the function (_NSPrintForDebugger) will be abandoned. This occurs just before the EXC_BAD_ACCESS: NSDateFormatter *formatter = [[NSDateFormatter alloc] init]; [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"]; NSString *date = [formatter stringFromDate:[NSDate date]]; [formatter release]; aStruct.valueOne = date; So the memory issue is most likely in my releasing of formatter. The date var has no retain. Should I instead be doing NSString *date = [[formatter stringFromDate:[NSDate date]] retain]; Which does work but then I'm left with a memory leak.
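
    A guess consistent with the edit above: @"firstValue" is a constant string that lives forever, but the formatter's output is autoreleased, and a plain C struct does not retain what you assign into it, so the pointer dangles once the autorelease pool drains — hence the later EXC_BAD_ACCESS. A sketch under manual reference counting:

        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"];

        NSString *date = [formatter stringFromDate:[NSDate date]];   // autoreleased
        aStruct.valueOne = [date copy];   // take ownership; the struct will not retain it for you
        [formatter release];

        // later, when the struct's value is replaced or no longer needed:
        // [aStruct.valueOne release];    // balances the copy, so there is no leak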

    Read the article

  • How can I position some divs inside an unordered list so they line up with the root element of the list?

    - by Ronedog
    I want to position all the divs to line up to the left on the same x coordinate so it looks nice. Notice the picture below, how based on the number of nested categories the div (and its contents) shows up at slightly different x coordinates. I need to have the divs line up at exactly the same x coordinate no matter how deeply nested. Note: the bottom-most category always has a div for the content, but that div has to be situated inside the last <li>. I am using an unordered list to display the menu and thought the best solution would be to grab the root category (Cat 2, and mCat1) and obtain their left offset using jQuery, then simply use that value to update the positioning of the div... but I couldn't seem to get it to work just right. I would appreciate any advice or help that you are willing to give. Here's the HTML <ul id="nav"> <li>Cat 2 <ul> <li>sub cat2</li> </ul> </li> <li>mCat1 <ul> <li>Subcat A <ul> <li>Subcat A.1 <ul> <li>Annie</li> </ul> </li> </ul> </li> </ul> </li> </ul> Here's some jQuery I tried (I have to insert the div inside this .each() loop in order to retrieve some values, but basically this selector is grabbing the last <li> in the menu tree and placing a div after it, and that is the div that I want to position; the 245 value was something I was playing around with to see how I could get things to line up, and I know it's out of whack, but the problem is still the same no matter what I do): $("#nav li:not(:has(li))").each(function () { var self = $(this); var position = self.offset(); var xLeft = Math.round(position.left)- 245; console.log("xLeft:", xLeft ); self.after( '<div id="' + self.attr('p_node') + '_p_cont_div" class="property_position" style="display:none; left:' + xLeft + 'px;" /> ' ); }); Here's the CSS: .property_position{ float:left; position: relative; top: 0px; padding-top:5px; padding-bottom:10px; }
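
    A sketch of one way to make the offset independent of nesting depth (assuming the markup above): measure each <li> against the #nav list itself rather than against a hard-coded 245, so every inserted div gets the same reference point; the sign or reference element may need adjusting, but the idea is to replace the magic number with a measured one:

        var navLeft = $("#nav").offset().left;            // one fixed reference point

        $("#nav li:not(:has(li))").each(function () {
            var self = $(this);
            // how far this li sits to the right of the list's left edge
            var xLeft = Math.round(navLeft - self.offset().left);

            self.after('<div id="' + self.attr('p_node') + '_p_cont_div" ' +
                       'class="property_position" style="display:none; left:' + xLeft + 'px;"></div>');
        });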

    Read the article

  • Rtti accessing fields and properties in complex data structures

    - by Coco
    As already discussed in Rtti data manipulation and consistency in Delphi 2010 a consistency between the original data and rtti values can be reached by accessing members by using a pair of TRttiField and an instance pointer. This would be very easy in case of a simple class with only basic member types (like e.g. integers or strings). But what if we have structured field types? Here is an example: TIntArray = array [0..1] of Integer; TPointArray = array [0..1] of Point; TExampleClass = class private FPoint : TPoint; FAnotherClass : TAnotherClass; FIntArray : TIntArray; FPointArray : TPointArray; public property Point : TPoint read FPoint write FPoint; //.... and so on end; For an easy access of Members I want to buil a tree of member-nodes, which provides an interface for getting and setting values, getting attributes, serializing/deserializing values and so on. TMemberNode = class private FMember : TRttiMember; FParent : TMemberNode; FInstance : Pointer; public property Value : TValue read GetValue write SetValue; //uses FInstance end; So the most important thing is getting/setting the values, which is done - as stated before - by using the GetValue and SetValue functions of TRttiField. So what is the Instance for FPoint members? Let's say Parent is the Node for TExample class, where the instance is known and the member is a field, then Instance would be: FInstance := Pointer (Integer (Parent.Instance) + TRttiField (FMember).Offset); But what if I want to know the Instance for a record property? There is no offset in this case. So is there a better solution to get a pointer to the data? For the FAnotherClass member, the Instance would be: FInstance := Parent.Value.AsObject; So far the solution works, and data manipulation can be done by using rtti or the original types, without losing information. But things get harder, when working with arrays. Especially the second array of Points. How can I get the instance for the members of points in this case?
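
    For the record-typed property case, a sketch of the usual read-modify-write approach with Delphi 2010 RTTI (ParentInstance and Prop are illustrative locals): a property of record type has no stable address you can keep as FInstance, so the record is copied out into a TValue, changed, and written back:

        var
          V: TValue;
          P: TPoint;
        begin
          V := Prop.GetValue(ParentInstance);           // copies the record out of the property
          P := V.AsType<TPoint>;
          P.X := 42;                                    // edit the local copy
          Prop.SetValue(ParentInstance, TValue.From<TPoint>(P));   // write it back
        end;

    For a record held in a field, by contrast, the pointer arithmetic with TRttiField.Offset shown above remains valid, because the field's storage has a fixed address inside the owning instance.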

    Read the article

  • .NET WinForms INotifyPropertyChanged updates all bindings when one is changed. Better way?

    - by Dave Welling
    In a windows forms application, a property change that triggers INotifyPropertyChanged, will result in the form reading EVERY property from my bound object, not just the property changed. (See example code below) This seems absurdly wasteful since the interface requires the name of the changing property. It is causing a lot of clocking in my app because some of the property getters require calculations to be performed. I'll likely need to implement some sort of logic in my getters to discard the unnecessary reads if there is no better way to do this. Am I missing something? Is there a better way? Don't say to use a different presentation technology please -- I am doing this on Windows Mobile (although the behavior happens on the full framework as well). Here's some toy code to demonstrate the problem. Clicking the button will result in BOTH textboxes being populated even though one property has changed. using System; using System.ComponentModel; using System.Drawing; using System.Windows.Forms; namespace Example { public class ExView : Form { private Presenter _presenter = new Presenter(); public ExView() { this.MinimizeBox = false; TextBox txt1 = new TextBox(); txt1.Parent = this; txt1.Location = new Point(1, 1); txt1.Width = this.ClientSize.Width - 10; txt1.DataBindings.Add("Text", _presenter, "SomeText1"); TextBox txt2 = new TextBox(); txt2.Parent = this; txt2.Location = new Point(1, 40); txt2.Width = this.ClientSize.Width - 10; txt2.DataBindings.Add("Text", _presenter, "SomeText2"); Button but = new Button(); but.Parent = this; but.Location = new Point(1, 80); but.Click +=new EventHandler(but_Click); } void but_Click(object sender, EventArgs e) { _presenter.SomeText1 = "some text 1"; } } public class Presenter : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private string _SomeText1 = string.Empty; public string SomeText1 { get { return _SomeText1; } set { _SomeText1 = value; _SomeText2 = value; // <-- To demonstrate that both properties are read OnPropertyChanged("SomeText1"); } } private string _SomeText2 = string.Empty; public string SomeText2 { get { return _SomeText2; } set { _SomeText2 = value; OnPropertyChanged("SomeText2"); } } private void OnPropertyChanged(string PropertyName) { PropertyChangedEventHandler temp = PropertyChanged; if (temp != null) { temp(this, new PropertyChangedEventArgs(PropertyName)); } } } }
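
    Since Windows Forms re-reads every bound property when any PropertyChanged fires, the mitigation hinted at above — making the expensive getters cheap on repeat calls — can look like this minimal sketch (CalculateExpensiveValue is a stand-in for the real calculation, not code from the original presenter):

        private double? _expensiveCache;

        public double ExpensiveValue
        {
            get
            {
                // recompute only when the cache has been invalidated
                if (_expensiveCache == null)
                    _expensiveCache = CalculateExpensiveValue();
                return _expensiveCache.Value;
            }
        }

        private void InvalidateExpensiveValue()
        {
            _expensiveCache = null;                  // call from setters the value depends on
            OnPropertyChanged("ExpensiveValue");
        }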

    Read the article

  • Calling an Excel Add-In method from C# application or vice versa

    - by Jude
    I have an Excel VBA add-in with a public method in a bas file. This method currently creates a VB6 COM object, which exists in a running VB6 exe/vbp. The VB6 app loads in data and then the Excel add-in method can call methods on the VB6 COM object to load the data into an existing Excel xls. This is all currently working. We have since converted our VB6 app to C#. My question is: What is the best/easiest way to mimic this behavior with the C#/.NET app? I'm thinking I may not be able to pull the data from the .NET app into Excel from the add-in method since the .Net app needs to be running with data loaded (so no using a stand-alone C# class library). Maybe we can, instead, push the data from .NET to Excel by accessing the VBA add-in method from the C# code? The following is the existing VBA method accessing the VB6 app: Public Sub UpdateInDataFromApp() Dim wkbInData As Workbook Dim oFPW As Object Dim nMaxCols As Integer Dim nMaxRows As Integer Dim j As Integer Dim sName As String Dim nCol As Integer Dim nRow As Integer Dim sheetCnt As Integer Dim nDepth As Integer Dim sPath As String Dim vData As Variant Dim SheetRange As Range Set wkbInData = wkbOpen("InData.xls") sPath = g_sPathXLSfiles & "\" 'Note: the following will bring up fpw app if not already running Set oFPW = CreateObject("FPW.CProfilesData") If oFPW Is Nothing Then MsgBox "Unable to reference " & sApp Else . . . sheetCnt = wkbInData.Sheets.Count 'get number of sheets in indata workbook For j = 2 To sheetCnt 'set counter to loop over all sheets except the first one which is not input data fields With wkbInData.Worksheets(j) Set SheetRange = .UsedRange End With With SheetRange nMaxRows = .Rows.Count 'get range of sheet(j) nMaxCols = .Columns.Count 'get range of sheet(j) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearContents 'Clears data from data range (51 Columns) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearComments End With With oFPW 'vb6 object For nRow = 2 To nMaxRows ' loop through rows sName = SheetRange.Cells(nRow, 1) 'Field name vData = .vntGetSymbol(sName, 0) 'Check if vb6 app identifies the name nDepth = .GetInputTableDepth(sName) 'Get number of data items for this field name from vb6 app nMaxCols = nDepth + 2 'nDepth=0, is single data item For nCol = 2 To nMaxCols 'loop over deep screen fields nDepth = nCol - 2 'current depth vData = .vntGetSymbol(sName, nDepth) 'Get Data from vb6 app If LenB(vData) > 0 And IsNumeric(vData) Then 'Check if data returned SheetRange.Cells(nRow, nCol) = vData 'Poke the data in Else SheetRange.Cells(nRow, nCol) = vData 'Poke a zero in End If Next 'nCol Next 'nRow End With Set SheetRange = Nothing Next 'j End If Set wkbInData = Nothing Set oFPW = Nothing Exit Sub . . . End Sub Any help would be appreciated.
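
    If the goal is to keep the Office side talking to the already-running process, the .NET exe has to do explicitly what a VB6 ActiveX EXE did automatically: expose a COM-visible class and register a living instance in the Running Object Table (which the VBA side would then pick up with GetObject rather than CreateObject). A hedged sketch — the ProgId, members, and interface shape are illustrative, and RegisterActiveObject is declared via P/Invoke:

        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [ProgId("FPW.CProfilesData")]                  // the name the VBA already uses
        [ClassInterface(ClassInterfaceType.AutoDual)]  // simplest for late-bound VBA; a real interface is cleaner
        public class CProfilesData
        {
            public object vntGetSymbol(string name, int depth) { /* ... */ return 0; }
            public int GetInputTableDepth(string name) { /* ... */ return 0; }
        }

        static class RotRegistration
        {
            [DllImport("oleaut32.dll")]
            static extern int RegisterActiveObject(
                [MarshalAs(UnmanagedType.IUnknown)] object punk,
                ref Guid rclsid, uint dwFlags, out uint pdwRegister);

            public static uint Register(CProfilesData instance)
            {
                Guid clsid = Marshal.GenerateGuidForType(typeof(CProfilesData));
                uint cookie;
                RegisterActiveObject(instance, ref clsid, 0 /* ACTIVEOBJECT_STRONG */, out cookie);
                return cookie;   // keep it; revoke on shutdown
            }
        }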

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below which is indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements, it is much slower. I was expecting it to be constant time as the key is hashed. E.g., to erase elements from the container: for 10,000 elements, it takes around 2.53927016 seconds; for 15,000 elements, around 7.137100068 seconds; for 20,000 elements, around 21.391720757 seconds. Is it something I am missing or is it expected behavior? class Session { public: Session() { //increment unique id static unsigned long counter = 0; boost::mutex::scoped_lock guard(mx); counter++; m_nId = counter; } unsigned long GetId() { return m_nId; } long GetTransactionHandle(){ return m_nTransactionHandle; } .... private: unsigned long m_nId; long m_nTransactionHandle; boost::mutex mx; .... }; typedef multi_index_container< Session*, indexed_by< hashed_unique< mem_fun<Session,unsigned long,&Session::GetId> >, hashed_non_unique< mem_fun<Session,long,&Session::GetTransactionHandle> > > //end indexed_by > SessionContainer; typedef SessionContainer::nth_index<0>::type SessionById; int main() { ... SessionContainer container; SessionById *pSessionIdView = &get<0>(container); unsigned counter = atoi(argv[1]); vector<Session*> vSes; vSes.reserve(counter); //insert for(unsigned i = 0; i < counter; i++) { Session *pSes = new Session(); container.insert(pSes); vSes.push_back(pSes); } timespec ts; clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); //erase for(unsigned i = 0; i < counter; i++) { pSessionIdView->erase(vSes[i]->GetId()); delete vSes[i]; } clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); std::cout << "Total time taken for erase:" << ts.tv_sec << "." << ts.tv_nsec << "\n"; return (EXIT_SUCCESS); }
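
    One way to narrow down where the time goes (a sketch against the container declared above): split the key lookup from the actual unlinking by erasing through an iterator, and time the two halves separately:

        for (unsigned i = 0; i < counter; i++)
        {
            SessionById::iterator it = pSessionIdView->find(vSes[i]->GetId());   // hash lookup
            if (it != pSessionIdView->end())
                pSessionIdView->erase(it);                                        // unlink from all indices
            delete vSes[i];
        }

    If the iterator-based erase is still the slow half, it is worth checking the Boost version as well: older releases of the hashed indices paid extra to compute the "next" iterator that erase returns when the buckets are sparsely populated.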

    Read the article

  • Why don't my scrollbars work properly when programmatically hiding rows in silverlight Datagrid?

    - by Luke Vilnis
    I have a Silverlight datagrid with custom code that allows for +/- buttons on the lefthand side and can display a table with a tree structure. The +/- buttons are bound to a IsExpanded property on my ViewModelRows, as I call them. The visibility of rows is bound to an IsVisible property on the ViewModelRows which is determined based on whether or not all of the parent rows are expanded. Straightforward enough. This code works fine in that if I scroll up and down the grid with PageUp/PageDown or the arrow keys, all the right rows are hidden and everything has the right structure and I can play with the +/- buttons to my hearts content. However, the vertical scroll bar on the right hand side, although it starts off the correct size and it scrolls through the rows smoothly, when I collapse rows and then re-expand them, doesn't go back to its correct size. The scrollbar can still usually be moved around to scroll through the whole collection, but because it is too big, once the bar moves to the bottom, there are still more rows to go through and it sort of jerkily shoots all the way down to the bottom or sometimes fails to scroll at all. This is pretty hard to describe so I included a screenshot with the black lines drawn on to show the difference in scrollbar length even though the two grids have the same number of rows expanded. I think this might be a bug related to the way the Datagrid does virtualization of rows. It seems to me like it isn't properly keeping track of how tall each row is supposed to be when expansion states change. Is there a way to programmatically "poke" (read hack) it to recalculate its scrollbar size on LoadingRow or something ugly like that? I'd include a code sample but there's 2 c# files and 1 xaml file so I wanted to see if anyone else has heard of this sort of issue before I try to make it reproducible in a self-contained way. Once again, scrolling with the arrow keys works fine so I'm pretty sure the underlying logic and binding is working, there's just some issue with the row height not being calculated properly. Since I'm a new user, it won't let me use image tags so here's the link to a picture of the problem: http://img210.imageshack.us/img210/8760/messedupscrollbars.png
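
    As the "poke" mentioned above, one low-risk thing to try (a sketch, not a verified fix) is to force the grid to re-measure after the expansion state of the rows changes, so the row-height bookkeeping behind the scrollbar is redone; myDataGrid stands in for the real control name:

        private void OnExpansionChanged()
        {
            // ask the grid to redo layout/measure after rows are hidden or re-shown
            myDataGrid.InvalidateMeasure();
            myDataGrid.UpdateLayout();
        }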

    Read the article

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following: <class name="ExpertiseArea"> <id name="id" type="string"> <generator class="assigned" /> </id> <version name="version" column="VERSION" unsaved-value="null" /> <property name="name" type="string" unique="true" not-null="true" length="100" /> ... </class> And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code: public T makePersistent(T entity){ transaction = getSession().beginTransaction(); transaction.begin(); try{ getSession().saveOrUpdate(entity); transaction.commit(); }catch(HibernateException e){ logger.debug(e.getMessage()); transaction.rollback(); } return entity; } Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test: public void testNameLengthMustBe100orLess(){ ExpertiseArea ea = new ExpertiseArea( "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890"); assertTrue("Name should be 100 characters long", ea.getName().length() == 100); ead.makePersistent(ea); List<ExpertiseArea> result = ead.findAll(); assertEquals("Size must be 1", result.size(),1); ea.setName(ea.getName()+"1234567890"); ead.makePersistent(ea); ExpertiseArea retrieved = ead.findById(ea.getId(), false); assertTrue("Both objects should be equal", retrieved.equals(ea)); assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100)); } The object is persisted ok. Then I set a name longer than 100 characters and try to save the changes, which fails: 14:12:14,608 INFO StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation 14:12:14,611 WARN JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001 14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation 14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] //Stack trace 14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] 14:12:14,616 DEBUG JDBCTransaction:186 - rollback 14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection That's expected behavior. However when I retrieve the persisted object to check if its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?
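
    One explanation worth testing (hedged, since the session handling inside the DAO isn't fully shown): after the rollback, the in-memory ea still carries the 110-character name, and findById may hand back that very instance from the session's first-level cache, so the final assertions compare the object with itself. A sketch that forces the re-read to really hit the database:

        // illustrative, not the original DAO code
        session.evict(ea);                       // or session.clear() to drop the whole first-level cache
        ExpertiseArea retrieved =
                (ExpertiseArea) session.get(ExpertiseArea.class, ea.getId());

        assertEquals(100, retrieved.getName().length());   // now checked against the stored row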

    Read the article

  • .htaccess mod_rewrite issue

    - by Orhan Toy
    In almost any project I work on, some issues with .htaccess occur. I usually just find the easiest solution and leave it because I don't have any knowledge or understanding of Apache, servers etc. But this time I thought I would ask you guys. These are the files and folders in my (simplified) setup: /modrewrite-test .htaccess /config /inc /lib /public_html .htaccess /cms /navigation index.php edit.php /pages index.php edit.php login.php page.php The "config", "inc" and "lib" folders are meant to be "hidden" from the root of the website. I try to accomplish this by making a .htaccess-file in the root that redirects the user to "public_html". The .htaccess-file contains this: RewriteEngine On RewriteRule (.*) public_html/$1 This works perfectly. If I type "http://localhost/modrewrite-test/login.php" in my browser, I end up in public_html/login.php which is my intention. So this works fine. The .htaccess-file in "public_html" contains this: RewriteEngine On # Root RewriteRule ^$ page.php [L] # Login RewriteRule ^(admin)|(login)\/?$ login.php [L] # Page (if not a file/directory) RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ page.php?url=$1 [L] The first rewrite just redirects me to public_html/page.php if I try to reach "http://localhost/modrewrite-test/". The next rewrite is just for the convenience of users trying to log in - so if they try to reach "http://localhost/modrewrite-test/admin" or "http://localhost/modrewrite-test/login" they will end up at the login.php-file. The third and last rewrite handles the rest of the requests. If I try to reach "http://localhost/modrewrite-test/bla/bla/bla" it will just redirect me to public_html/page.php (with the 'url' GET-variable set) instead of looking for a folder called "bla" containing a folder called "bla", etc. All of these things work perfectly, but a minor issue occurs when I for instance try to reach "http://localhost/modrewrite-test/cms/navigation" without a slash at the end of the URL. When I try to reach that page the browser is somehow redirected to "http://localhost/modrewrite-test/public_html/cms/navigation/". The correct page is shown but why does it get redirected and add the "public_html" part in the URL? The desired behavior is that the URL stays intact and that the page public_html/cms/navigation/index.php is shown. The files and folders in the (simplified) setup can be found at http://highbars.com/modrewrite-test.zip
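
    A guess at the mechanism (so the sketch below is hedged, and the /modrewrite-test/ path is assumed from the example): once the request has been internally rewritten to public_html/cms/navigation, mod_dir notices a directory without a trailing slash and issues its own external redirect, which is what exposes the rewritten path. Adding the trailing slash in the public URL space before handing off to public_html avoids that:

        # root .htaccess - sketch; adjust the on-disk prefix to the real location
        RewriteEngine On

        # if the target inside public_html is a directory, add the missing slash
        # as an external redirect while the URL still looks public
        RewriteCond %{DOCUMENT_ROOT}/modrewrite-test/public_html/$1 -d
        RewriteRule ^(.+[^/])$ $1/ [R=301,L]

        RewriteRule (.*) public_html/$1 [L]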

    Read the article

  • Adding a valuetype to IDL, compile and it fails with "No factory found"

    - by jim
    I can't figure out why the client keeps complaining about the not finding the factory method. I've tried the IDL with and without the "factory" keyword and that didn't change the behavior. The SDMGeoVT IDL matches other objects used (which run successfully). The SDMGeoVT classes generated match other generated classes in regards to inheritance and methods. The IDL is as follows; The idlj compiler runs w/o error. I implement the function on the server and I see the server code run and serialize the object over the wire (the server code runs fine). The client bombs with the following stack trace (the first couple of lines is from the jacORB library). I've created a small app just to compile and test the code (ArrayClient & ArrayServer). The base app (from the jacORB demo) works fine. I've tried using the base class OFBaseVT and a single object (SDMGeoVT vs the list return) and have the same issue. 2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f) 2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f) org.omg.CORBA.MARSHAL: No factory found for: IDL:pl/SDMGeoVT:1.0 at org.jacorb.orb.CDRInputStream.read_untyped_value(CDRInputStream.java:2906) at org.jacorb.orb.CDRInputStream.read_typed_value(CDRInputStream.java:3082) at org.jacorb.orb.CDRInputStream.read_value(CDRInputStream.java:2679) at com.helloworld.pl.SDMGeoVTHelper.read(SDMGeoVTHelper.java:106) at com.helloworld.pl.SDMGeoVTListHelper.read(SDMGeoVTListHelper.java:51) at com.helloworld.pl._PLManagerIFStub.getSDMGeos(_PLManagerIFStub.java:28) at com.helloworld.ArrayClient.<init>(ArrayClient.java:40) at com.helloworld.ArrayClient.main(ArrayClient.java:125) valuetype SDMGeoVT : common::OFBaseVT{ private string sdmName; private string zip; private string atz; private long long primaryDeptId; private string deptName; factory instance(in string name,in string ZIP,in string ATZ,in long long primaryDeptId,in string deptName,in string name); string getZIP(); void setZIP(in string ZIP); string getATZ(); void setATZ(in string ATZ); long long getPrimaryDeptId(); void setPrimaryDeptId(in long long primaryDeptId); string getDeptName(); void setDeptName(in string deptName); }; typedef sequence<SDMGeoVT> SDMGeoVTList; interface PLManagerIF : PublicManagerIF { pl::SDMGeoVTList getSDMGeos(in framework::ITransactionHandle tHandle, in long long productionLocationId); };

    Read the article

  • Issues in Ada Concurrency

    - by Arkapravo
    Hi I need some help and also some insight. This is a program in Ada-2005 which has 3 tasks. The output is 'z'. If the 3 tasks do not happen in the order of their placement in the program then output can vary from z = 2, z = 1 to z = 0 ( That is easy to see in the program, mutual exclusion is attempted to make sure output is z = 2). WITH Ada.Text_IO; USE Ada.Text_IO; WITH Ada.Integer_Text_IO; USE Ada.Integer_Text_IO; WITH System; USE System; procedure xyz is x : Integer := 0; y : Integer := 0; z : Integer := 0; task task1 is pragma Priority(System.Default_Priority + 3); end task1; task task2 is pragma Priority(System.Default_Priority + 2); end task2; task task3 is pragma Priority(System.Default_Priority + 1); end task3; task body task1 is begin x := x + 1; end task1; task body task2 is begin y := x + y; end task2; task body task3 is begin z := x + y + z; end task3; begin Put(" z = "); Put(z); end xyz; I first tried this program (a) without pragmas, the result : In 100 tries, occurence of 2: 86, occurence of 1: 10, occurence of 0: 4. Then (b) with pragmas, the result : In 100 tries, occurence of 2: 84, occurence of 1 : 14, occurence of 0: 2. Which is unexpected as the 2 results are nearly identical. Which means pragmas or no pragmas the output has same behavior. Those who are Ada concurrency Gurus please shed some light on this topic. Alternative solutions with semaphores (if possible) is also invited. Further in my opinion for a critical process (that is what we do with Ada), with pragmas the result should be z = 2, 100% at all times, hence or otherwise this program should be termed as 85% critical !!!! (That should not be so with Ada)
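
    Priorities only express a preference to the scheduler; they never impose an ordering, and note too that the final Put runs concurrently with the tasks (the main procedure only waits for them at its end), so z can legitimately be printed before any task has finished. A sketch of forcing the order with rendezvous instead of priorities (a rewrite of the three task bodies, not the original program):

        task Task1 is
           entry Finished;
        end Task1;
        task Task2 is
           entry Finished;
        end Task2;
        task Task3;

        task body Task1 is
        begin
           x := x + 1;
           accept Finished;          -- lets Task2 proceed once x is updated
        end Task1;

        task body Task2 is
        begin
           Task1.Finished;           -- blocks until Task1 has updated x
           y := x + y;
           accept Finished;
        end Task2;

        task body Task3 is
        begin
           Task2.Finished;           -- blocks until Task2 has updated y
           z := x + y + z;
        end Task3;

    With this, z is 2 on every run once the tasks have terminated; declaring the tasks inside an inner block and printing z after the block's end (which waits for the tasks) removes the remaining race with the Put as well.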

    Read the article
