Search Results

Search found 18191 results on 728 pages for 'single board'.

  • Make Bluetooth on Android 2.1 discoverable indefinitely

    - by kanov-baekonfat
    Hello all. I'm working on a research project which involves Bluetooth and the Android OS. I need to make Bluetooth discoverable indefinitely in order for the project to continue.

    The Problem: Android limits discoverability to 300 seconds. I cannot ask the user every 300 seconds to turn discoverability back on, as my application is designed to run in the background without disturbing the user. As far as I am aware, there is no way to increase the time through Android's GUI. Some sources have called this a safety feature, others have called this a bug. There may be a bit of truth in both...

    What I'm Trying / Have Tried: I'm trying to edit a stable release of CyanogenMod to turn the discoverability timer off (it's possible; there's a configuration file that needs to have a single number changed). This isn't working because I'm having verification problems with the resulting package. During the past week, I downloaded the CyanogenMod source code, changed a relevant class in the hope that it would make Bluetooth discoverable indefinitely, and tried to recompile. This did not work because (a) the repo is frequently changed, leading to an unstable code base which fails to compile (OR it could be that I'm using it incorrectly; just because it looked like it was the code's fault in many instances doesn't mean I should blame it for all the problems I encountered!), and (b) the repo decides to periodically "ignore" me (but not always, as I have gotten the code base before!), replying to my synchronization/connection attempts with:

        fatal: The remote end hung up unexpectedly

    As you might imagine, the above two issues are problematic and very frustrating to deal with.

    More Info: I'm running Android 2.1 via CyanogenMod (v5 I believe), which means the phone is also rooted. I have a developer phone, which means that the bootloader is unlocked. My phone is an HTC Magic (32B).

    The Big Question: How can I make Bluetooth indefinitely discoverable on Android? Thanks for your time and input. I feel like I'm spinning my tires on this issue and I'd like to move past it.
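
    An editor's note with a hedged workaround: a commonly suggested answer to this class of problem is to call the hidden BluetoothAdapter.setScanMode(int, int) method via reflection with a duration of 0. The sketch below assumes that hidden method exists on the 2.1 build in question (it is not part of the public API and may be absent or behave differently on other builds) and that a duration of 0 means "no timeout"; it also needs the BLUETOOTH_ADMIN permission. If it works, it sidesteps rebuilding CyanogenMod entirely.

        import java.lang.reflect.Method;

        import android.bluetooth.BluetoothAdapter;

        public final class DiscoverableHack {

            // Hedged sketch: setScanMode(int, int) is a hidden (@hide) method, so it is
            // invoked reflectively; a duration of 0 is assumed to request "no timeout".
            public static void makeDiscoverableIndefinitely() {
                try {
                    BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
                    Method setScanMode =
                            BluetoothAdapter.class.getMethod("setScanMode", int.class, int.class);
                    setScanMode.invoke(adapter,
                            BluetoothAdapter.SCAN_MODE_CONNECTABLE_DISCOVERABLE, 0);
                } catch (Exception e) {
                    // The hidden method is unavailable on this build; fall back to the
                    // standard 300-second ACTION_REQUEST_DISCOVERABLE intent.
                }
            }
        }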

  • Running a Command via Java ProcessBuilder Differs from Running the Same Command in the Shell

    - by Tom Duckering
    I'm tearing my hair out trying to work out why the command I'm executing via Java using a ProcessBuilder & Process is not working. I run the "same" command at the Windows command line and it works as expected. It must be that they're not the same, but I can't for the life of me work out why. The command is this:

        ccm start -nogui -m -q -n ccm_admin -r developer -d /path/to/db/databasename -s http://hostname:8400 -pw Passw0rd789$

    The output should be a single-line string that I need to grab and set as an environment variable (hence the very basic use of the BufferedReader). My Java code, which gets an application error when it runs the command, looks like this, with entry point startCCMAndGetCCMAddress():

        private static String ccmAddress = "";

        private static final String DATABASE_PATH = "/path/to/db/databasename";
        private static final String SYNERGY_URL = "http://hostname:8400";
        private static final String USERNAME = "ccm_admin";
        private static final String PASSWORD = "Passw0rd789$";
        private static final String USER_ROLE = "developer";

        public static List<String> getCCMStartCommand() {
            List<String> command = new ArrayList<String>();
            command.add("cmd.exe");
            command.add("/C");
            command.add("ccm");
            command.add("start");
            command.add("-nogui");
            command.add("-m");
            command.add("-q");
            command.add("-n "+USERNAME);
            command.add("-r "+USER_ROLE);
            command.add("-d "+DATABASE_PATH);
            command.add("-s "+SYNERGY_URL);
            command.add("-pw "+PASSWORD);
            return command;
        }

        private static String startCCMAndGetCCMAddress() throws IOException, CCMCommandException {
            int processExitValue = 0;
            List<String> command = getCCMStartCommand();
            System.err.println("Will run: "+command);
            ProcessBuilder procBuilder = new ProcessBuilder(command);
            procBuilder.redirectErrorStream(true);
            Process proc = procBuilder.start();
            BufferedReader outputBr = new BufferedReader(new InputStreamReader(proc.getInputStream()));
            try {
                proc.waitFor();
            } catch (InterruptedException e) {
                processExitValue = proc.exitValue();
            }
            String outputLine = outputBr.readLine();
            outputBr.close();
            if (processExitValue != 0) {
                throw new CCMCommandException("Command failed with output: " + outputLine);
            }
            if (outputLine == null) {
                throw new CCMCommandException("Command returned zero but there was no output");
            }
            return outputLine;
        }

    The output of the System.err.println(...) is:

        Will run: [cmd.exe, /C, ccm, start, -nogui, -m, -q, -n ccm_admin, -r developer, -d /path/to/db/databasename, -s http://hostname:8400, -pw Passw0rd789$]
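
    An editor's note on the likely discrepancy: ProcessBuilder hands each list element to the child process as exactly one argument, so command.add("-n "+USERNAME) delivers the single token "-n ccm_admin" (with an embedded space), whereas cmd.exe splits it into two separate tokens. A minimal sketch of the corrected builder, assuming the constants from the snippet above:

        // Hedged sketch: each list element becomes exactly one argv entry in the child
        // process, so flag and value must be added as separate elements to reproduce
        // the shell's whitespace splitting.
        public static List<String> getCCMStartCommand() {
            List<String> command = new ArrayList<String>();
            command.add("cmd.exe");
            command.add("/C");
            command.add("ccm");
            command.add("start");
            command.add("-nogui");
            command.add("-m");
            command.add("-q");
            command.add("-n");
            command.add(USERNAME);
            command.add("-r");
            command.add(USER_ROLE);
            command.add("-d");
            command.add(DATABASE_PATH);
            command.add("-s");
            command.add(SYNERGY_URL);
            command.add("-pw");
            command.add(PASSWORD);
            return command;
        }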

  • mysqli query: show results if there are any, otherwise do something else

    - by Yousef Altaf
    I'm running a mysqli query. This is my code:

        <?php
        $SM_pro_info = "SELECT * FROM special_marketing_ads ORDER BY id desc LIMIT 4";
        $QSM_pro_info = $db->query($SM_pro_info) or die($db->error);
        if ($QSM_pro_info->num_rows == 1) {
            while ($SM_pro = $QSM_pro_info->fetch_object()) {
        ?>
        <table width="208" border="0">
          <tr>
            <td width="129" height="35" align="right"><span style="color:#361800; font-size:14px; font-weight:bold;"><?php echo $SM_pro->pro_title; ?></span></td>
            <td width="69" rowspan="2" align="center"><a rel="lightbox" href="includes/Cpanel/projectImages//images.jpg"><img src="<?php echo $SM_pro->image_1; ?>" alt="" width="60" height="60" border="0" /></a></td>
          </tr>
          <tr align="right">
            <td><span style="color:#361800; font-size:14px; font-weight:bold;">?????</span> : <span style="color:#da6e19; font-size:15px; font-weight:bold;"><?php echo $SM_pro->pro_purpose; ?></span></td>
          </tr>
        </table>
        <?php
            }
        } else {
            echo "Your Ads here";
        }
        ?>

    All I want to do is: if there are any results coming from mysqli, echo them; if there are no results, echo an image that says "add your ads here". The point is that I have 4 ad slots, so if there is 1 ad in the mysqli rows and the others are empty, I want to echo that single ad and show the "add your ads here" image for the other 3 slots. Any help on this, please?

  • Client-side templating frameworks to streamline using jQuery with REST/JSON

    - by Tauren
    I'm starting to migrate some HTML generation tasks from a server-side framework to the client. I'm using jQuery on the client. My goal is to get JSON data via a REST API and use this data to populate HTML into the page.

    Right now, when a user on my site clicks a link to My Projects, the server generates HTML like this:

        <dl>
          <dt>Clean Toilet</dt>
          <dd>Get off your butt and clean this filth!</dd>
          <dt>Clean Car</dt>
          <dd>I think there's something growing in there...</dd>
          <dt>Replace Puked on Baby Sheets</dt>
        </dl>

    I'm changing this so that clicking My Projects will now do a GET request that returns something like this:

        [
          { "name":"Clean Car", "description":"I think there's something growing in there..." },
          { "name":"Clean Toilets", "description":"Get off your butt and clean this filth!" },
          { "name":"Replace Puked on Baby Sheets" }
        ]

    I can certainly write custom jQuery code to take that JSON and generate the HTML from it. This is not my question, and I don't need advice on how to do that.

    What I'd like to do is completely separate the presentation and layout from the logic (jQuery code). I don't want to be creating DL, DT, and DD elements via jQuery code. I'd rather use some sort of HTML templates that I can fill the data in to. These templates could simply be HTML snippets that are hidden in the page that the application was loaded from. Or they could be dynamically loaded from the server (to support user-specific layouts, i18n, etc.). They could be displayed a single time, as well as allow looping and repeating. Perhaps it should support sub-templates, if/then/else, and so forth.

    I have LOTS of lists and content on my site that are presented in many different ways. I'm looking to create a simple and consistent way to generate and display content without creating custom jQuery code for every different feature on my site. To me, this means I need to find or build a small framework on top of jQuery (probably as a plugin) that meets these requirements.

    The only framework I've found that is anything like this is jTemplates. I don't know how good it is, as I haven't used it yet. At first glance, I'm not thrilled by its template syntax.

    Anyone know of other frameworks or plugins that I should look into? Any blog posts or other resources out there that discuss doing this sort of thing? I just want to make sure that I've surveyed everything out there before building it myself. Thanks!

  • Showing login view controller before main tab bar controller

    - by Padawan
    I'm creating an iPad app with a tab bar controller that requires login. So on launch, I want to show a LoginViewController and, if login is successful, then show the tab bar controller. This is how I implemented an initial test version (I left out some typical header stuff, etc.)...

    AppDelegate.h:

        @interface AppDelegate_Pad : NSObject <UIApplicationDelegate, LoginViewControllerDelegate> {
            UIWindow *window;
            UITabBarController *tabBarController;
        }
        @property (nonatomic, retain) IBOutlet UIWindow *window;
        @property (nonatomic, retain) IBOutlet UITabBarController *tabBarController;
        @end

    AppDelegate.m:

        @implementation AppDelegate_Pad
        @synthesize window;
        @synthesize tabBarController;

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            LoginViewController_Pad *lvc = [[LoginViewController_Pad alloc] initWithNibName:@"LoginViewController_Pad" bundle:nil];
            lvc.delegate = self;
            [window addSubview:lvc.view];
            //[lvc release];
            [window makeKeyAndVisible];
            return YES;
        }

        - (void)loginViewControllerDidFinish:(LoginViewController_Pad *)loginViewController {
            [window addSubview:tabBarController.view];
        }

        - (void)dealloc {...}
        @end

    LoginViewController_Pad.h:

        @protocol LoginViewControllerDelegate;

        @interface LoginViewController_Pad : UIViewController {
            id<LoginViewControllerDelegate> delegate;
        }
        @property (nonatomic, assign) id <LoginViewControllerDelegate> delegate;
        - (IBAction)buttonPressed;
        @end

        @protocol LoginViewControllerDelegate
        - (void)loginViewControllerDidFinish:(LoginViewController_Pad *)loginViewController;
        @end

    LoginViewController_Pad.m:

        @implementation LoginViewController_Pad
        @synthesize delegate;
        ...
        - (IBAction)buttonPressed {
            [self.view removeFromSuperview];
            [self.delegate loginViewControllerDidFinish:self];
        }
        ...
        @end

    So the app delegate adds the login view controller's view on launch and waits for login to call "did finish" using a delegate. The login view controller calls removeFromSuperview before it calls didFinish. The app delegate then calls addSubview on the tab bar controller's view.

    If you made it up to this point, thanks, and I have three questions:

    1. MAIN QUESTION: Is this the right way to show a view controller before the app's main tab bar controller is displayed? Even though it seems to work, is it a proper way to do it?

    2. If I leave the [lvc release] line in (i.e. not commented out), the app crashes with EXC_BAD_ACCESS when the button on the login view controller is pressed. Why?

    3. With the [lvc release] commented out, everything seems to work, but the debugger console writes this message when the app delegate calls addSubview for the tab bar controller:

        Using two-stage rotation animation. To use the smoother single-stage animation, this application must remove two-stage method implementations.

    What does that mean, and do I need to worry about it?

  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real-life performance measurements? "Real life" meaning you are not just looping over the code in question; it is a short snippet within a large application running in a typical user scenario.

    My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers, often with a multiple of the average time. (The behavior is understandable considering the factors contributing to first-execution penalty.)

    My goal is to model aggregate values that reasonably reflect this, and that can be calculated from aggregate values (like for the Gaussian: calculate mu and sigma from N, the sum of values, and the sum of squares). In other terms, the number of repetitions is unlimited, but memory and calculation requirements should be minimized.

    A normal Gaussian distribution can't model the long tail appropriately, and its average will be biased strongly even by a very small percentage of outliers.

    I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distribution models, and I think I could work out something, but my statistics are rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;)

    Other aspects / ideas:

    - Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately.
    - Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly). I haven't found anything, but with some assumptions about the distribution it should be possible in principle.

    Why (since it was asked): for a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a Gaussian deviation plus first-run penalty is OK. That doesn't work anymore for the distributions found above.

    [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
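
    For reference (an editor's addition, not part of the original question): the aggregate-only Gaussian fit mentioned above needs just three running values, N, S_1 = \sum_i x_i, and S_2 = \sum_i x_i^2:

        \mu = \frac{S_1}{N}, \qquad \sigma^2 = \frac{S_2}{N} - \mu^2

    The same memory footprint also buys a long-tail model: keeping N, \sum_i \ln x_i, and \sum_i (\ln x_i)^2 gives the maximum-likelihood parameters of a lognormal,

        \hat\mu = \frac{1}{N} \sum_i \ln x_i, \qquad \hat\sigma^2 = \frac{1}{N} \sum_i (\ln x_i)^2 - \hat\mu^2

    so a Gaussian body plus a lognormal tail can be fit from six aggregate values in total.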

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company - four developers, working on a variety of projects. We've been looking at what we can do as cost-effective methods of process improvement, and an idea came up.

    Given what we do, we often have single developers working on parts of a system, independently of the other developers. This can have a number of negative effects:

    - A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that will meet the current customer's needs, but will break functionality that other customers depend on.
    - A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development.
    - Other developers might not be aware of how the system has changed, in areas that they have not worked on.

    We've talked about doing code reviews as a way of dealing with these issues. But we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, and it takes everybody out of production while the review is being performed. And the benefits of any review we've tried have been minimal.

    We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the Subversion CommitMonitor tool, and wondering whether it might work as a sort of poor man's code review. It lists every commit made on the repository, allowing someone to see the changes that have been made, the log messages made for that change, the files that were included in the change, and the specific lines in each file that were changed.

    Rather than scheduling a meeting, trying to get everybody together to review every change, we could just have every developer review every other developer's commits, at whatever time was convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code that was being checked in, he could discuss it with the developer who did the commit, or more likely, schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture.

    Anyone else doing anything like this, using commit monitors for such a purpose?

  • WPF ListView's GridView item expansion

    - by NT_
    Is it possible for a WPF ListView that uses a GridView view (the ListView.View property) to have one of its items 'expanded', i.e. to create some control underneath the item? I cannot simply add another item, as it will assume the GridView item template, i.e. appear with columns rather than being a single usable area.

    This is what my list view currently looks like; it just has two columns:

        <ListView x:Name="SomeName" Style="{DynamicResource NormalListView}">
          <ListView.View>
            <GridView ColumnHeaderContainerStyle="{DynamicResource NormalListViewHeader}">
              <GridViewColumn Width="140" x:Name="Gvc_Name">
                <GridViewColumn.CellTemplate>
                  <DataTemplate>
                    <Border Style="{DynamicResource ListViewCellSeparatorBorder}">
                      <StackPanel Orientation="Vertical" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                        <TextBlock Text="{Binding Path=Name}" FontWeight="Bold" HorizontalAlignment="Left" />
                        <TextBlock Text="{Binding Path=Type}" HorizontalAlignment="Left" />
                      </StackPanel>
                    </Border>
                  </DataTemplate>
                </GridViewColumn.CellTemplate>
                <Border Style="{DynamicResource ListViewHeaderBorderContainer}">
                  <TextBlock Style="{DynamicResource ListViewHeaderText}" Text="Name"/>
                </Border>
              </GridViewColumn>
              <GridViewColumn Width="120" x:Name="Gvc_Timestamp">
                <GridViewColumn.CellTemplate>
                  <DataTemplate>
                    <Border Style="{DynamicResource ListViewCellSeparatorBorder}">
                      <StackPanel Orientation="Vertical" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                        <TextBlock Text="{Binding Path=TimestampDate}" HorizontalAlignment="Center" />
                        <TextBlock Text="{Binding Path=TimestampTime}" FontWeight="Bold" HorizontalAlignment="Center" />
                      </StackPanel>
                    </Border>
                  </DataTemplate>
                </GridViewColumn.CellTemplate>
                <Border Style="{DynamicResource ListViewHeaderBorderContainer}">
                  <TextBlock Style="{DynamicResource ListViewHeaderText}" Text="Processed"/>
                </Border>
              </GridViewColumn>
            </GridView>
          </ListView.View>
        </ListView>

    Many thanks!

  • NSFetchedResultsController - Delegate methods crashing under iPhone OS 3.0, but NOT UNDER 3.1

    - by Scott Langendyk
    Hey guys, so I've got my NSFetchedResultsController working fine under the 3.1 SDK. However, I start getting some weird errors, specifically in the delegate methods, when I try it under 3.0. I've determined that this is related to the NSFetchedResultsControllerDelegate methods. This is what I have set up (the inEditingMode stuff has to do with the way I've implemented adding another static section to the table):

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView beginUpdates];
        }

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo
                    atIndex:(NSUInteger)sectionIndex
              forChangeType:(NSFetchedResultsChangeType)type {
            NSIndexSet *sectionSet = [NSIndexSet indexSetWithIndex:sectionIndex];
            if (self.inEditingMode) {
                sectionSet = [NSIndexSet indexSetWithIndex:sectionIndex + 1];
            }
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertSections:sectionSet withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteSections:sectionSet withRowAnimation:UITableViewRowAnimationFade];
                    break;
                default:
                    [self.tableView reloadData];
                    break;
            }
        }

        - (void)controller:(NSFetchedResultsController *)controller
           didChangeObject:(id)anObject
               atIndexPath:(NSIndexPath *)indexPath
             forChangeType:(NSFetchedResultsChangeType)type
              newIndexPath:(NSIndexPath *)newIndexPath {
            NSIndexPath *relativeIndexPath = indexPath;
            NSIndexPath *relativeNewIndexPath = newIndexPath;
            if (self.inEditingMode) {
                relativeIndexPath = [NSIndexPath indexPathForRow:indexPath.row inSection:indexPath.section + 1];
                relativeNewIndexPath = [NSIndexPath indexPathForRow:newIndexPath.row inSection:newIndexPath.section + 1];
            }
            switch (type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:relativeNewIndexPath] withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:relativeIndexPath] withRowAnimation:UITableViewRowAnimationFade];
                    break;
                default:
                    [self.tableView reloadData];
                    break;
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates];
        }

    When I add an entity to the managed object context, I get the following error:

        Serious application error. Exception was caught during Core Data change processing: *** -[NSCFArray objectAtIndex:]: index (1) beyond bounds (1) with userInfo (null)

    I put a breakpoint on objc_exception_throw, and the crash seems to be occurring inside of controllerDidChangeContent. If I comment out all of the self.tableView methods and put a single [self.tableView reloadData] inside of controllerDidChangeContent, everything works as expected.

    Anybody have any idea as to why this is happening?

  • MooseX::Types declaration issue, tight test case :)

    - by TJ Thompson
    So after an embarrassing amount of time debugging, I've finally stripped this issue (http://stackoverflow.com/questions/4621589/perl-moose-typedecorator-error-how-do-i-debug) down to a simple test case. I would humbly request some help understanding why it's failing :)

    Here is the error message I'm getting:

        plxc16479 $h2/tmp/tmp18.pl
        This method [new] requires a single argument. at /nfs/pdx/disks/nehalem.pde.077/perl/5.12.2/lib64/site_perl/MooseX/Types/TypeDecorator.pm line 91
            MooseX::Types::TypeDecorator::new('MooseX::Types::TypeDecorator=HASH(0x655b90)') called at /nfs/pdx/disks/nehalem.pde.077/projects/lib/Program-Plist-Pl/lib/Program/Plist/Pl.pm line 10
            Program::Plist::Pl::BUILD('Program::Plist::Pl=HASH(0x63d478)', 'HASH(0x63d220)') called at generated method (unknown origin) line 29
            Program::Plist::Pl::new('Program::Plist::Pl') called at /nfs/pdx/disks/nehalem.pde.077/tmp/tmp18.pl line 10

    Wrapper test script:

        use strict;
        use warnings;

        BEGIN {push(@INC, split(':', $ENV{PERL_TEST_LIBS}))};

        use Program::Plist::Pl;

        my $obj = Program::Plist::Pl->new();

    Program::Plist::Pl file:

        package Program::Plist::Pl;

        use Moose;
        use namespace::autoclean;

        use Program::Types qw(Pattern); # <-- Removing this fixes error

        use Program::Plist::Pl::Pattern;

        sub BUILD {
            my $pattern_obj = Program::Plist::Pl::Pattern->new();
        }

        __PACKAGE__->meta->make_immutable;
        1;

    Program::Types file:

        package Program::Types;

        use MooseX::Types -declare => [qw(Pattern)];

        class_type Pattern, {class => 'Program::Plist::Pl::Pattern'};
        1;

    And the Program::Plist::Pl::Pattern file:

        package Program::Plist::Pl::Pattern;

        use Moose;
        use namespace::autoclean;

        __PACKAGE__->meta->make_immutable;
        1;

    Notes:

    - While I don't need the Pattern type from Program::Types in the above code, I do in other code that is stripped out.
    - The PERL_TEST_LIBS env var I'm pulling @INC paths from only contains paths to the project modules. There are no other modules loaded from these paths.

    It appears the MooseX::Types definition for Pattern is causing problems, but I'm not sure why. Documentation shows the syntax I am using, but it's possible I'm misusing class_type, as there isn't much said about it. The intent is to be able to use Pattern for type checking via MooseX::Params::Validate to verify the argument is a 'Program::Plist::Pl::Pattern' object.

    I've found that removing the intervening class Program::Plist::Pl from the equation by directly calling Pattern->new from the tmp18.pl wrapper results in no error, even when the Program::Types Pattern type is imported.

  • DUnit: How to run tests?

    - by Ian Boyd
    How do I run TestCases from the IDE? I created a new project with a single, simple form:

        unit Unit1;

        interface

        uses
          Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls;

        type
          TForm1 = class(TForm)
          private
          public
          end;

        var
          Form1: TForm1;

        implementation

        {$R *.DFM}

        end.

    Now I'll add a test case to check that pushing Button1 does what it should:

        unit Unit1;

        interface

        uses
          Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls;

        type
          TForm1 = class(TForm)
            Button1: TButton;
            procedure Button1Click(Sender: TObject);
          private
          public
          end;

        var
          Form1: TForm1;

        implementation

        {$R *.DFM}

        uses TestFramework;

        type
          TForm1Tests = class(TTestCase)
          private
            f: TForm1;
          protected
            procedure SetUp; override;
            procedure TearDown; override;
          published
            procedure TestButton1Click;
          end;

        procedure TForm1.Button1Click(Sender: TObject);
        begin
          //todo
        end;

        { TForm1Tests }

        procedure TForm1Tests.SetUp;
        begin
          inherited;
          f := TForm1.Create(nil);
        end;

        procedure TForm1Tests.TearDown;
        begin
          f.Free;
          inherited;
        end;

        procedure TForm1Tests.TestButton1Click;
        begin
          f.Button1Click(nil);
          Self.CheckEqualsString('Hello, world!', f.Caption);
        end;

        end.

    Given what I've done (test code in the GUI project), how do I now trigger a run of the tests? If I push F9 then the form simply appears. Ideally there would be a button, or menu option, in the IDE saying Run DUnit Tests.

    Am I living in a dream-world? A fantasy land, living in a gumdrop house on lollipop lane?

  • Tiling rectangles seamlessly in WPF

    - by Joe White
    I want to seamlessly tile a bunch of different-colored Rectangles in WPF. That is, I want to put a bunch of rectangles edge-to-edge, and not have gaps between them.

    If everything is aligned to pixels, this works fine. But I also want to support arbitrary zoom, and ideally, I don't want to use SnapsToDevicePixels (because it would compromise quality when the image is zoomed way out). But that means my Rectangles sometimes render with gaps. For example:

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              Background="Black">
          <Canvas SnapsToDevicePixels="False">
            <Canvas.RenderTransform>
              <ScaleTransform ScaleX="0.5" ScaleY="0.5"/>
            </Canvas.RenderTransform>
            <Rectangle Canvas.Left="25" Width="100" Height="100" Fill="#CFC"/>
            <Rectangle Canvas.Left="125" Width="100" Height="100" Fill="#CCF"/>
          </Canvas>
        </Page>

    If the ScaleTransform's ScaleX is 1, then the Rectangles fit together seamlessly. When it's 0.5, there's a dark gray streak between them. I understand why -- the combined semi-transparent edge pixels don't combine to be 100% opaque. But I would like a way to fix it.

    I could always just make the Rectangles overlap, but I won't always know in advance what patterns they'll be in (this is for a game that will eventually support a map editor). Besides, this would cause artifacts around the overlap area when things were zoomed way in (unless I did bevel-cut angles on the underlapping portion, which is an awful lot of work, and still causes problems at corners).

    Is there some way I can combine these Rectangles into a single combined "shape" that does render without internal gaps? I've played around with GeometryDrawing, which does exactly that, but then I don't see a way to paint each RectangleGeometry with a different-colored brush.

    Are there any other ways to get shapes to tile seamlessly under an arbitrary transform, without resorting to SnapsToDevicePixels?

  • Global Entity Framework Context in WPF Application

    - by OffApps Cory
    Good day,

    I am in the middle of development of a WPF application that is using Entity Framework (.NET 3.5). It accesses the entities in several places throughout, and I am worried about consistency throughout the application in regard to the entities. Should I be instancing separate contexts in my different views, or should I (and is there a good way to do this) instance a single context that can be accessed globally?

    For instance, my entity model has three sections: Shipments (with child packages and further child contents), Companies/Contacts (with child addresses and telephones), and disk specs. The Shipments and EditShipment views access the DiskSpecs, and the OptionsView manages the DiskSpecs (Create, Edit, Delete). If I edit a DiskSpec, I have to have something in the ShipmentsView to retrieve the latest specs if I have separate contexts, right?

    If it is safe to have one overall context from which the rest of the app retrieves its objects, then I imagine that is the way to go. If so, where would that instance be put? I am using VB.NET, but I can translate from C# pretty well. Any help would be appreciated. I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.

    Update: OK, so I have changed my app as follows:

    - All contexts are created in Using blocks to dispose of them after they are no longer needed.
    - When loaded, all entities are detached from the context before it is disposed.
    - A new property in the MainViewModel (ContextUpdated) raises an event that all of the other ViewModels subscribe to, which runs that ViewModel's RefreshEntities method.

    After implementing this, I started getting errors saying that an entity can only be referenced by one ChangeTracker at a time. Since I could not figure out which context was still referencing the entity (there shouldn't be any context, right?), I cast the object as IEntityWithChangeTracker and set SetChangeTracker to Nothing (null).

    This has led to the current problem: when I null the changeTracker on the entity and then attach it to a context, it loses its changed state and does not get updated to the database. However, if I do not null the change tracker, I can't attach. I have my own change-tracking code, so that is not a problem.

    My new question is: how are you supposed to do this? A good example of entity query and entity save code would go a long way, because I am beating my head in trying to get what I once thought was a simple transaction to work. Any help would elevate you to near god-hood.

  • NSXMLParser Memory Allocation Efficiency for the iPhone

    - by Staros
    Hello, I've recently been playing with code for an iPhone app to parse XML. Sticking to Cocoa, I decided to go with the NSXMLParser class. The app will be responsible for parsing 10,000+ "computers", each of which contains 6 other strings of information. For my test, I've verified that the XML is around 900k-1MB in size.

    My data model is to keep each computer in an NSDictionary hashed by a unique identifier. Each computer is also represented by an NSDictionary with the information. So at the end of the day, I end up with an NSDictionary containing 10k other NSDictionaries.

    The problem I'm running into isn't about leaking memory or efficient data structure storage. When my parser is done, the total amount of allocated objects only goes up by about 1MB. The problem is that while the NSXMLParser is running, my object allocation jumps up as much as 13MB. I could understand 2 (one for the object I'm creating and one for the raw NSData) plus a little room to work, but 13 seems a bit high. I can't imagine that NSXMLParser is that inefficient. Thoughts?

    The code to start parsing:

        NSXMLParser *parser = [[NSXMLParser alloc] initWithData:data];
        [parser setDelegate:dictParser];
        [parser parse];
        output = [[dictParser returnDictionary] retain];
        [parser release];
        [dictParser release];

    And the parser's delegate code:

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
           namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName
             attributes:(NSDictionary *)attributeDict {
            if (mutableString) {
                [mutableString release];
                mutableString = nil;
            }
            mutableString = [[NSMutableString alloc] init];
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if (self.mutableString) {
                [self.mutableString appendString:string];
            }
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
           namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
            if ([elementName isEqualToString:@"size"]) {
                // The initial key, tells me how many computers
                returnDictionary = [[NSMutableDictionary alloc] initWithCapacity:[mutableString intValue]];
            }
            if ([elementName isEqualToString:hashBy]) {
                // The unique identifier
                if (mutableDictionary) {
                    [mutableDictionary release];
                    mutableDictionary = nil;
                }
                mutableDictionary = [[NSMutableDictionary alloc] initWithCapacity:6];
                [returnDictionary setObject:[NSDictionary dictionaryWithDictionary:mutableDictionary]
                                     forKey:[NSMutableString stringWithString:mutableString]];
            }
            if ([fields containsObject:elementName]) {
                // Any of the elements from a single computer that I am looking for
                [mutableDictionary setObject:mutableString forKey:elementName];
            }
        }

    Everything is initialized and released correctly. Again, I'm not getting errors or leaking. Just inefficient. Thanks for any thoughts!

  • Why isn't my query using any indices when I use a subquery?

    - by sfussenegger
    I have the following tables (with columns that aren't used for my examples removed):

        CREATE TABLE `person` (
          `id` int(11) NOT NULL,
          `name` varchar(1024) NOT NULL,
          `sortname` varchar(1024) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `sortname` (`sortname`(255)),
          KEY `name` (`name`(255))
        );

        CREATE TABLE `personalias` (
          `id` int(11) NOT NULL,
          `person` int(11) NOT NULL,
          `name` varchar(1024) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `person` (`person`),
          KEY `name` (`name`(255))
        );

    Currently, I'm using this query, which works just fine:

        select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer';

        mysql> explain select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer';
        +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+
        | id | select_type | table | type        | possible_keys | key           | key_len | ref  | rows | Extra                                        |
        +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+
        |  1 | SIMPLE      | p     | index_merge | name,sortname | name,sortname | 767,767 | NULL |    3 | Using sort_union(name,sortname); Using where |
        +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+
        1 row in set (0.00 sec)

    Now I'd like to extend this query to also consider aliases. First, I tried using a join:

        select p.* from person p join personalias a where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer';

        mysql> explain select p.* from person p join personalias a on p.id = a.person where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer';
        +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+
        | id | select_type | table | type   | possible_keys         | key     | key_len | ref               | rows  | Extra           |
        +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+
        |  1 | SIMPLE      | a     | ALL    | ref,name              | NULL    | NULL    | NULL              | 87401 | Using temporary |
        |  1 | SIMPLE      | p     | eq_ref | PRIMARY,name,sortname | PRIMARY | 4       | musicbrainz.a.ref |     1 | Using where     |
        +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+
        2 rows in set (0.00 sec)

    This looks bad: no index, 87401 rows, using temporary. "Using temporary" only appears when I use distinct, but as an alias might be the same as the name, I can't really get rid of it.

    Next, I tried to replace the join with a subquery:

        select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select person from personalias a where a.name = 'John Mayer');

        mysql> explain select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select id from personalias a where a.name = 'John Mayer');
        +----+--------------------+-------+----------------+---------------+--------+---------+------+--------+-------------+
        | id | select_type        | table | type           | possible_keys | key    | key_len | ref  | rows   | Extra       |
        +----+--------------------+-------+----------------+---------------+--------+---------+------+--------+-------------+
        |  1 | PRIMARY            | p     | ALL            | name,sortname | NULL   | NULL    | NULL | 540309 | Using where |
        |  2 | DEPENDENT SUBQUERY | a     | index_subquery | person,name   | person | 4       | func |      1 | Using where |
        +----+--------------------+-------+----------------+---------------+--------+---------+------+--------+-------------+
        2 rows in set (0.00 sec)

    Again, this looks pretty bad: no index, 540309 rows. Interestingly, both queries (select p.* from person ... or p.id in (4711,12345) and select id from personalias a where a.name = 'John Mayer') work extremely well on their own.

    Why doesn't MySQL use any indices for either of my queries? What else could I do? Currently, it looks best to fetch person.ids for aliases and add them statically as an in(...) to the second query. There certainly has to be a way to do this with a single query. I'm currently out of ideas, though. Could I somehow force MySQL into using another (better) query plan?

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello,

    I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.) instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.)

    Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try and keep each table pretty much even - but that's a different issue.

    One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

        class Books3 < ActiveRecord::Base
          belongs_to :user
        end

    I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste.

    For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this.

    So, a few questions:

    1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this?

    2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this?

    3) Does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author').

    Thank you!

  • Populating ComboBoxDataColumn items and values

    - by MarceloRamires
    I have a "populate combobox", and I'm so happy with it that I've even started using more comboboxes. It takes the combobox object by reference with the ID of the "value set" (or whatever you want to call it) from a table and adds the items and their respective values (which differ) and does the job. I've recently had the brilliant idea of using comboboxes in a gridview, and I was happy to notice that it worked JUST LIKE a single combobox, but populating all the comboboxes in the given column at the same time. ObjComboBox.Items.Add("yadayada"); //works just like ObjComboBoxColumn.Items.Add("blablabla"); But When I started planning how to populate these comboboxes I've noticed: There's no "Values" property in ComboBoxDataColumn. ObjComboBox.Values = whateverArray; //works, but the following doesn't ObjComboBoxColumn.Values = whateverArray; Questions: 0 - How do I populate it's values ? (I suspect it's just as simple, but uses another name) 1 - If it works just like a combobox, what's the explanation for not having this attribute ? -----[EDIT]------ So I've checked out Charles' quote, and I've figured I had to change my way of populating these bad boys. Instead of looping through the strings and inserting them one by one in the combobox, I should grab the fields I want to populate in a table, and set one column of the table as the "value", and other one as the "display". So I've done this: ObjComboBoxColumn.DataSource = DTConfig; //Double checked, guaranteed to be populated ObjComboBoxColumn.ValueMember = "Code"; ObjComboBoxColumn.DisplayMember = "Description"; But nothing happens, if I use the same object as so: ObjComboBoxColumn.Items.Add("StackOverflow"); It is added. There is no DataBind() function. It finds the two columns, and that's guaranteed ("Code" and "Description") and if I change their names to nonexistant ones it gives me an exception, so that's a good sign. -----[EDIT]------ I have a table in SQL Server that is something like code  |  text —————    1    | foo    2    | bar It's simple, and with other comboboxes (outside of gridviews) i've successfully populated looping through the rows and adding the texts: ObjComboBox.Items.Add(MyDataTable.Rows[I]["MyColumnName"].ToString()); And getting every value, adding it into an array, and setting it like: ObjComboBox.Values = MyArray; I'd like to populate my comboboxColumns just as simply as I do with comboboxes.

  • From a 3D modeler to an iPhone app - what are best practices?

    - by bonkey
    I am quite new to 3D programming on the iPhone, and I would like to ask for hints about organizing work between designers and programmers on that platform. Most of all: what kinds of tools, libraries, or plugins cooperate best on both sides? Although I consider the question to be looking for general best-practices advice, I would also like to find a solution for my current situation, which I describe further down.

    I've already done some research and found the following libraries:

    - SIO2
    - Khronos OpenGL ES 1.x SDK for PowerVR MBX
    - Unity3D
    - Oolong Game Engine

    I've checked modellers, or plugins for them, giving output formats readable by those tools:

    - obj2opengl Wavefront OBJ to plain header file converter
    - Blender with SIO2 exporter
    - iphonewavefrontloader
    - Cheetah3D
    - PVRGeoPOD for 3DS / Maya

    Unfortunately, I still have no clear vision of how to combine any of these tools to get a designer's work into an application. I'm looking for a way to get it in as complete a form as possible: models, lights, scenes, textures, maybe some simple animations (but rather no game-like physics). So far I have nothing.

    And here comes my situation: I would like to find the right way to present a few (but quite complicated) models from a single scene. The designers mostly use 3DS Max 9, sometimes 10 (which partly prevents using PVRGeoPOD), and are rather reluctant to switch to something else, but if there's no other choice I suppose it would be possible.

    The basic rule I've already found in some places, "use Wavefront OBJ", doesn't always work. I haven't gotten any acceptable results with production files, actually. The only things that worked fine were some mere examples. Some of my models imported incomplete, sometimes exporters hung or generated enormous files not really useful on an iPhone, and sometimes enabling textures (with GL_TEXTURE_2D) just crashed the app.

    I know it might be a problem with too-complicated models or my own mistakes coming from inexperience, but I am not able to find any guidelines for that process that would allow streamlined cooperation with designers. I am even willing to write some things from scratch in pure OpenGL ES if necessary, but I would like to avoid what can be avoided and get the most from the model files.

    The best would be the effect I saw in some SIO2 tutorials: export, build & go. But at the moment I've got only "import, wrong", "import, where are the textures?", "import, that almost looks fine, export, hang" and so on...

    Is it really so frustrating, or have I just missed something obvious? Can anybody share his/her experience in this field and tell what kind of software he or she uses for "making things happen"?

  • StringTemplate: is it OK to apply templates that use a HashMap to multi-valued attributes?

    - by user1071830
    There are two templates in my .stg file, and both of them are applied to multi-valued HashMap attributes. The HashMap is employed as an injected object, and I need instances of HashMap to be injectable multiple times. My trouble is that when I switch to the second template, ANTLR seems to consider the second HashMap as a List: multiple objects with null values.

    Part of my .stg file:

        tpl_hash(BAR, FOO) ::= <<
        <FOO:foo(); separator="\n">
        <BAR:bar(); separator="\n">
        >>

        foo(nn) ::= <<
        foo: <nn.name; null="NULL"> . <nn.national; null="NULL">
        >>

        bar(mm) ::= <<
        bar: <mm.name> @ <mm.national>
        >>

    Part of my .g file:

        HashMap hm = new HashMap();
        hm.put("name", $name.text);
        hm.put("national", "German");
        tpl_hash.add("FOO", new HashMap(hm));

        HashMap hm2 = new HashMap();
        hm2.put("name", $name.text);
        hm2.put("national", "German");
        tpl_hash.add("BAR", new HashMap(hm2));

    The result I expect is:

        bar: Kant @ German
        foo: Russell @ England

    But I got:

        foo: NULL . NULL
        foo: NULL . NULL
        bar: @
        bar: @

    If we replace BAR with FOO as-is, keeping FOO and BAR with identical templates, the output is right, like the following:

        bar: Russell @ German
        bar: Russell @ German

    In the docs, "synchronized ST add(String name, Object value)" in org.stringtemplate.v4.ST says: "If you send in a List and then inject a single value element, add() copies original list and adds the new value." What about a HashMap? Does ANTLR consider the HashMap (key/value pair access) a single object on purpose, and treat it as multi-valued by mistake?

    Thanks a lot in advance.
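
    An editor's suggestion for narrowing this down: render the same templates outside the grammar. The following minimal harness is a sketch that assumes the templates above are saved in a file named test.stg, that the ST4 jar is on the classpath, and that the class name STMapTest is invented for illustration. Comparing its output with the expected output above shows whether the engine alone, or the grammar actions, produce the odd result.

        import java.util.HashMap;
        import java.util.Map;

        import org.stringtemplate.v4.ST;
        import org.stringtemplate.v4.STGroupFile;

        public class STMapTest {
            public static void main(String[] args) {
                // Load the .stg shown above (path is an assumption for this sketch).
                STGroupFile group = new STGroupFile("test.stg");
                ST st = group.getInstanceOf("tpl_hash");

                // Two independent maps, mirroring the grammar actions.
                Map<String, String> foo = new HashMap<String, String>();
                foo.put("name", "Russell");
                foo.put("national", "England");

                Map<String, String> bar = new HashMap<String, String>();
                bar.put("name", "Kant");
                bar.put("national", "German");

                st.add("FOO", foo);
                st.add("BAR", bar);

                // Render outside the grammar to see how the engine alone treats the maps.
                System.out.println(st.render());
            }
        }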

  • Lucene.NET search index approach

    - by Tim Peel
    Hi, I am trying to put together a test case for using Lucene.NET on one of our websites. I'd like to do the following:

    - Index a single unique id.
    - Index a comma-delimited string of terms or tags.

    For example:

        Item 1:
        Id = 1
        Tags = Something,Separated-Term

    I will then be structuring the search so I can look for documents by tag, i.e. tags:something OR tags:separate-term. I need to maintain the exact term value in order to search against it.

    I have something running, and the search query is being parsed as expected, but I am not seeing any results. Here's some code.

    My parser (_luceneAnalyzer is passed into my indexing service):

        var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_CURRENT, "Tags", _luceneAnalyzer);
        parser.SetDefaultOperator(QueryParser.Operator.AND);
        return parser;

    My Lucene.NET document creation:

        var doc = new Document();

        var id = new Field(
            "Id",
            NumericUtils.IntToPrefixCoded(indexObject.id),
            Field.Store.YES,
            Field.Index.NOT_ANALYZED,
            Field.TermVector.NO);

        var tags = new Field(
            "Tags",
            string.Join(",", indexObject.Tags.ToArray()),
            Field.Store.NO,
            Field.Index.ANALYZED,
            Field.TermVector.YES);

        doc.Add(id);
        doc.Add(tags);
        return doc;

    My search:

        var parser = BuildQueryParser();
        var query = parser.Parse(searchQuery);
        var searcher = Searcher;
        TopDocs hits = searcher.Search(query, null, max);

        IList<SearchResult> result = new List<SearchResult>();
        float scoreNorm = 1.0f / hits.GetMaxScore();
        for (int i = 0; i < hits.scoreDocs.Length; i++)
        {
            float score = hits.scoreDocs[i].score * scoreNorm;
            result.Add(CreateSearchResult(searcher.Doc(hits.scoreDocs[i].doc), score));
        }
        return result;

    I have two documents in my index: one with the tag "Something" and one with the tags "Something" and "Separated-Term". It's important for the - to remain in the terms, as I want an exact match on the full value. When I search with "tags:Something" I do not get any results.

    Questions:

    - What Analyzer should I be using to achieve the search index I am after?
    - Are there any pointers for putting together a search such as this?
    - Why is my current search not returning any results?

    Many thanks
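
    An editor's sketch of one common approach, hedged because the behaviour depends entirely on what _luceneAnalyzer does at index and query time (StandardAnalyzer, for example, lowercases and splits on '-'). The idea is to index each tag as its own untokenized field value and query it with a TermQuery, which bypasses the analyzer, so "Separated-Term" survives byte-for-byte. It is shown in Java Lucene, which Lucene.NET mirrors closely; buildDocument and exactTagQuery are invented names:

        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        public class TagIndexingSketch {

            // Each tag becomes its own untokenized field value instead of one
            // analyzed, comma-joined string, so exact values are preserved.
            public static Document buildDocument(String[] tags) {
                Document doc = new Document();
                for (String tag : tags) {
                    doc.add(new Field("Tags", tag, Field.Store.NO, Field.Index.NOT_ANALYZED));
                }
                return doc;
            }

            // A TermQuery skips the analyzer entirely: the query term must match
            // the indexed value exactly, which is the "exact match" wanted here.
            public static Query exactTagQuery(String tag) {
                return new TermQuery(new Term("Tags", tag));
            }
        }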

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all,

    I'm a complete novice at the "best practices" etc. of writing any code. I tend to just write it and, if it works, why fix it? Well, this way of working is landing me in some hot water.

    I am writing a simple Windows service to serve a single webpage. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is received, the memory usage jumps up by a few K per request and keeps going up on every request.

    Now I've found that by putting GC.Collect into the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I was wondering if I am missing something or not doing something I should to free up memory. Here is the code:

        Public Class SimpleWebService : Inherits ServiceBase

            'Set the values for the different event log types.
            Public Const EVENT_ERROR As Integer = 1
            Public Const EVENT_WARNING As Integer = 2
            Public Const EVENT_INFORMATION As Integer = 4

            Public listenerThread As Thread
            Dim HTTPListner As HttpListener
            Dim blnKeepAlive As Boolean = True

            Shared Sub Main()
                Dim ServicesToRun As ServiceBase()
                ServicesToRun = New ServiceBase() {New SimpleWebService()}
                ServiceBase.Run(ServicesToRun)
            End Sub

            Protected Overrides Sub OnStart(ByVal args As String())
                If Not HttpListener.IsSupported Then
                    CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.")
                    Me.Stop()
                End If
                Try
                    listenerThread = New Thread(AddressOf ListenForConnections)
                    listenerThread.Start()
                Catch ex As Exception
                    CreateEventLogEntry(ex.Message)
                End Try
            End Sub

            Protected Overrides Sub OnStop()
                blnKeepAlive = False
            End Sub

            Private Sub CreateEventLogEntry(ByRef strEventContent As String)
                Dim sSource As String
                Dim sLog As String
                sSource = "Service1"
                sLog = "Application"
                If Not EventLog.SourceExists(sSource) Then
                    EventLog.CreateEventSource(sSource, sLog)
                End If
                Dim ELog As New EventLog(sLog, ".", sSource)
                ELog.WriteEntry(strEventContent)
            End Sub

            Public Sub ListenForConnections()
                HTTPListner = New HttpListener
                HTTPListner.Prefixes.Add("http://*:1986/")
                HTTPListner.Start()
                Do While blnKeepAlive
                    Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                    Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest)
                    HandlerThread.Start(ctx)
                    HandlerThread = Nothing
                Loop
                HTTPListner.Stop()
            End Sub

            Private Sub ProcessRequest(ByVal ctx As HttpListenerContext)
                Dim sb As StringBuilder = New StringBuilder
                sb.Append("<html><body><h1>Test My Service</h1>")
                sb.Append("</body></html>")
                Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString)
                ctx.Response.ContentLength64 = buffer.Length
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length)
                ctx.Response.OutputStream.Close()
                ctx.Response.Close()
                sb = Nothing
                buffer = Nothing
                ctx = Nothing
                'This line seems to keep the mem leak down
                'System.GC.Collect()
            End Sub

        End Class

    Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow best practice when it comes to coding.

  • Asynchronous subprocess on Windows

    - by Stigma
    First of all, the overall problem I am solving is a bit more complicated than I am showing here, so please do not tell me 'use threads with blocking', as it would not solve my actual situation without a fair, FAIR bit of rewriting and refactoring.

    I have several applications which are not mine to modify, which take data from stdin and poop it out on stdout after doing their magic. My task is to chain several of these programs. Problem is, sometimes they choke, and as such I need to track their progress, which is outputted on STDERR.

        pA = subprocess.Popen(CommandA, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # ... some more processes make up the chain, but that is irrelevant to the problem
        pB = subprocess.Popen(CommandB, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=pA.stdout)

    Now, reading directly through pA.stdout.readline() and pB.stdout.readline(), or the plain read() functions, is a blocking matter. Since different applications output at different paces and in different formats, blocking is not an option. (And as I wrote above, threading is not an option unless as a last, last resort.) pA.communicate() is deadlock safe, but since I need the information live, that is not an option either.

    Thus google brought me to this asynchronous subprocess snippet on ActiveState.

    All good at first, until I implement it. Comparing the cmd.exe output of pA.exe | pB.exe, ignoring the fact that both output to the same window making for a mess, I see very instantaneous updates. However, when I implement the same thing using the above snippet and the read_some() function declared there, it takes over 10 seconds to notify updates of a single pipe. But when it does, it has updates leading all the way up to 40% progress, for example.

    Thus I do some more research, and see numerous subjects concerning PeekNamedPipe, anonymous handles, and returning 0 bytes available even though there is information available in the pipe. As the subject has proven quite a bit beyond my expertise to fix or code around, I come to Stack Overflow to look for guidance. :)

    My platform is W7 64-bit with Python 2.6, the applications are 32-bit in case it matters, and compatibility with Unix is not a concern. I can even deal with a full ctypes or pywin32 solution that subverts subprocess entirely if it is the only solution, as long as I can read from every stderr pipe asynchronously with immediate performance and no deadlocks. :)

  • Properly setting up willSelectRowAtIndexPath and didSelectRowAtIndexPath to send cell selections

    - by Gordon Fontenot
    Feel like I'm going a bit nutty here. I have a detail view with a few stand-alone UITextFields, a few UITextFields in UITableViewCells, and one single UITableViewCell that will be used to hold notes, if there are any. I only want this cell selectable when I am in edit mode; when I am not in edit mode, I do not want to be able to select it. Selecting the cell (while in edit mode) will fire a method that will init a new view. I know this is very easy, but I am missing something somewhere. Here are the current selection methods I am using:

        - (NSIndexPath *)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            if (!self.editing) {
                NSLog(@"Returning nil, not in edit mode");
                return nil;
            }
            NSLog(@"Cell will be selected, not in edit mode");
            if (indexPath.section == 0) {
                NSLog(@"Comments cell will be selected");
                return indexPath;
            }
            return nil;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            if (!self.editing) {
                NSLog(@"Not in edit mode. Should not have made it this far.");
                return;
            }
            if (indexPath.section == 0)
                [self pushCommentsView];
            else
                return;
        }

    My problem is really two-fold:

    1) Even when I'm not in edit mode, and I know I am returning nil (due to the NSLog message), I can still select the row (it flashes blue). From my understanding of the willSelectRowAtIndexPath method, this shouldn't be happening. Maybe I am wrong about this?

    2) When I enter edit mode, I can't select anything at all. The willSelectRowAtIndexPath method never fires, and neither does didSelectRowAtIndexPath.

    The only thing I am doing in the setEditing method is hiding the back button while editing, and assigning first responder to the top text field to get the keyboard to pop up. I thought maybe the first responder was getting in the way of the click (which would be dumb), but even with that commented out, I cannot perform the cell selection during editing.

  • What am I missing about WCF?

    - by Bigtoe
    I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene, I thought they hit the nail on the head, and with each iteration and version I thought their technologies were getting stronger and stronger, and I looked forward to each release.

    However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML.

    It just does not work out of the box, and I think it should. We found all of the above issues either while testing ourselves or when our products were out on site.

    I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.

    I suppose what I'm asking is:

    - Am I looking at WCF the wrong way?
    - What strengths does it have over the alternatives?
    - Under what circumstances should I choose to use WCF?

    OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)

    Some clarifications. My main pain points with WCF fall into the following areas:

    - While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden: the size of string that can be passed can't be over 8K; the number of objects that can be passed in a single message is restricted; proxies don't automatically recover from failures.
    - The amount of configuration, while it's there, is a good thing, but understanding it all, and what to use when and under which circumstances, can be difficult, especially when deploying software on site with different security requirements etc. Speaking of configuration, we've had to hide a lot of ours in a back-end database because security and network people on site were trying to change things in configuration files without understanding them.
    - Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.

    I know the world moves on; I've moved on a number of times over the last (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong: I do understand what it's for and where it's heading. I just think there should be simpler configuration/deployment options available, easier set-up, and better management for configuration (a SQL config provider maybe, rather than just the web.config/app.config files).

    OK, back to the daily grind. Thanks for all your replies so far.

    Kind regards,
    Noel

  • Is Ogre's use of Exceptions a good way of using them?

    - by identitycrisisuk
    I've managed to get through my C++ game programming career so far virtually never touching exceptions, but recently I've been working on a project with the Ogre engine and I'm trying to learn properly. I've found a lot of good questions and answers here on the general usage of C++ exceptions, but I'd like to get some outside opinions from here on whether Ogre's usage is good and how best to work with it.

    To start with, quoting from Ogre's documentation of its own Exception class:

        OGRE never uses return values to indicate errors. Instead, if an error occurs, an exception is thrown, and this is the object that encapsulates the detail of the problem. The application using OGRE should always ensure that the exceptions are caught, so all OGRE engine functions should occur within a try{} catch(Ogre::Exception& e) {} block.

    Really? Every single Ogre function could throw an exception and be wrapped in a try/catch block?

    At present this is handled in our usage of it by a try/catch in main that will show a message box with the exception description before exiting. This can be a bit awkward for debugging, though, as you don't get a stack trace, just the function that threw the error; more important is the function from our code that called the Ogre function. If it was an assert in Ogre code, then it would go straight to the code in the debugger and I'd be able to find out what's going on much more easily. I don't know if I'm missing something that would allow me to debug exceptions already?

    I'm starting to add a few more try/catch blocks in our code now, generally thinking about whether it matters if the Ogre function throws an exception. If it's something that will stop everything from working, then let the main try/catch handle it and exit the program. If it's not of great importance, then catch it just after the function call and let the program continue.

    One recent example of this was building up a vector of the vertex/fragment program parameters for materials applied to an entity. If a material didn't have any parameters, then it would throw an exception, which I caught and then ignored, as it didn't need to add to my list of parameters.

    Does this seem like a reasonable way of dealing with things? Any specific advice for working with Ogre is much appreciated.
