Search Results

Search found 5727 results on 230 pages for 'routed commands'.


  • Commands implicitly threaded in Makefiles?

    - by apple92
    Hi, I have a "super" makefile which launches two "sub" makefiles:

        libwebcam:
            @echo -e "\nInvoking libwebcam make."
            $(MAKE) -C $(TOPDIR)/libwebcam

        uvcdynctrl:
            @echo -e "\nInvoking uvcdynctrl make."
            $(MAKE) -C $(TOPDIR)/uvcdynctrl

    uvcdynctrl uses libwebcam... I noticed that those two builds are launched in parallel by make! As a result the library is sometimes not available yet when uvcdynctrl starts being built, and I get errors. By default make should not run jobs in parallel, since that is only enabled through -j (number of jobs) and, according to the make manual, there is no parallelism by default. I run this on Ubuntu. Has anyone faced the same issue? Apple92

    Read the article

  • Using directory traversal attack to execute commands

    - by gAMBOOKa
    Is there a way to execute commands using directory traversal attacks? For instance, I access a server's etc/passwd file like this http://server.com/..%01/..%01/..%01//etc/passwd Is there a way to run a command instead? Like... http://server.com/..%01/..%01/..%01//ls ..... and get an output? EDIT: To be clear here, I've found the vuln in our company's server. I'm looking to raise the risk level (or bonus points for me) by proving that it may give an attacker complete access to the system

    Read the article

  • Command-Line Parsing API from TestAPI library - Type-Safe Commands how to

    - by MicMit
    Library at http://testapi.codeplex.com/ Excerpt of usage from http://blogs.msdn.com/ivo_manolov/archive/2008/12/17/9230331.aspx

    A third common approach is forming strongly-typed commands from the command-line parameters. This is common for cases when the command-line looks as follows:

        some-exe COMMAND parameters-to-the-command

    The parsing in this case is a little bit more involved:

        - Create one class for every supported command, which derives from the Command abstract base class and implements an expected Execute method.
        - Pass an expected command along with the command-line arguments to CommandLineParser.ParseCommand – the method will return a strongly-typed Command instance that can be Execute()-d.

        // EXAMPLE #3:
        // Sample for parsing the following command-line:
        //    Test.exe run /runId=10 /verbose
        // In this particular case we have an actual command on the command-line (“run”),
        // which we want to effectively de-serialize and execute.
        public class RunCommand : Command
        {
            bool? Verbose { get; set; }
            int? RunId { get; set; }

            public override void Execute()
            {
                // Implement your "run" execution logic here.
            }
        }

        Command c = new RunCommand();
        CommandLineParser.ParseArguments(c, args);
        c.Execute();

    I don't get it: if we instantiate the specific class before parsing the arguments, what's the point of the command-line argument "run", which is the very first one? I thought the idea was to instantiate and execute a command class based on a command-line parameter ("run" becomes an instance of the RunCommand class, "walk" becomes WalkCommand, and so on). Can it be done with the latest version?
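    To get the dispatch the question is asking about - choosing the command class from that first "run"/"walk" token instead of new-ing it up by hand - one option is a small name-to-factory lookup in front of the parser. This is only a sketch of the general idea, not the TestAPI API itself; whether the command token must be stripped before calling ParseArguments (done here with Skip(1)) depends on the library.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Sketch: pick the Command subclass from the first token, then let the parser
        // fill in the remaining /switches. The dictionary and Main are illustrative glue;
        // Command, RunCommand, WalkCommand and CommandLineParser.ParseArguments are the
        // names used in the excerpt above.
        class Program
        {
            static readonly Dictionary<string, Func<Command>> Commands =
                new Dictionary<string, Func<Command>>(StringComparer.OrdinalIgnoreCase)
                {
                    { "run",  () => new RunCommand()  },
                    { "walk", () => new WalkCommand() }
                };

            static void Main(string[] args)
            {
                Func<Command> factory;
                if (args.Length == 0 || !Commands.TryGetValue(args[0], out factory))
                {
                    Console.WriteLine("Unknown command.");
                    return;
                }

                Command c = factory();                                        // "run" -> RunCommand, etc.
                CommandLineParser.ParseArguments(c, args.Skip(1).ToArray());  // parse the rest onto it
                c.Execute();
            }
        }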

    Read the article

  • Cygwin Cruisecontrol cannot execute commands

    I'm having what I hope to be a simple problem. However, it's had me stumped all day. I'm working with CruiseControl in Windows, set up through Cygwin. I have some CC experience on the Linux platform and much of what I'm doing is very similar. However, most any command I try to execute in the config.xml file's Schedule section gives an error. Here's the exception:

        ExecBuilder - Could not execute command: /cygdrive/d/Program\ Files/Subversion/bin/svn
        net.sourceforge.cruisecontrol.CruiseControlException: Encountered an IO exception while attempting to execute
        'net.sourceforge.cruisecontrol.builders.ExecScript@b80f1c'. CruiseControl cannot continue.
            at net.sourceforge.cruisecontrol.builders.ScriptRunner.runScript(ScriptRunner.java:133)

    Here are some examples of commands I've tried to run which give this type of error:

        <exec command="${CCLoc}/projects/${project.name}/IOSdllScript"/>

    Runs a script that I tested outside of the cruisecontrol.bat and it runs. Includes #!/bin/sh as the first line.

        <exec command="${CCLoc}/projects/${project.name}/EmptyFile"/>

    Essentially an empty text file, proving that the problem had nothing to do with my script.

        <exec command="/cygdrive/d/Program\ Files/Subversion/bin/svn" args="cleanup" workingdir="${svndir}"/>

    Tries svn cleanup on a directory. I double-checked the pathing and spelling.

    One command that I tested worked and didn't give this error. That command was touch:

        <exec command="touch" args="ABC.txt"/>

    I'm not sure why only touch seems to work and nothing else does. Thank you for your help.

    Read the article

  • Nhibernate fires SQL commands

    - by Chris
    Hi all, when updating an entity A, NHibernate also sends an SQL update command for some other entity B. A and B are not related. Just before saving entity A, the parent of entity B is loaded via a SQLQuery. Then, when accessed, B is lazy-loaded (part of a collection). If I save entity A, an update statement for entity B is generated as well. How can it be that, when saving one entity, another entity that was loaded before but is not related to the saved entity gets updated as well? Can I somehow track where the update comes from? Btw. I am using a save event listener. Could it be that this is always triggered for loaded entities, even though they are not saved explicitly?

        public class EntitySaveEventListener : NHibernate.Event.Default.DefaultSaveEventListener
        {
            protected override object PerformSaveOrUpdate(SaveOrUpdateEvent e)
            {
                //auditing
                return base.PerformSaveOrUpdate(e);
            }
        }

    Update (sorry for providing not enough info): I tracked it down a bit. A select statement on an entity called Address is executed (it is lazy-loaded by a parent). Then I create a new entity called Request. Right before saving this entity a session flush is called, which updates the address, even though I did not call save or update on the address. Address is a collection within Request.

        <class name="Request" table="Request">
          <bag name="addresses" access="field" cascade="all-delete-orphan" where="IsDeleted = 0">
            <key column="RequestId"/>
            <one-to-many class="Address"/>
          </bag>
          ...

        // address is fetched only
        NHibernate.SQL: 2010-02-17 11:47:21,306 [21] DEBUG NHibernate.SQL [(null)] - SELECT addresses0_.RequestId as ServiceP8_3_, ....

        // session flushed here

        // address is updated
        NHibernate.SQL: 2010-02-17 11:47:34,306 [21] DEBUG NHibernate.SQL [(null)] - Batch commands: command 0:UPDATE Address SET Street = @p0, .....

    Would the address be updated automatically when it is manipulated somehow, even though it is not explicitly saved via its parent (cascade)?
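    The behaviour described here is consistent with NHibernate's automatic dirty checking: every entity the session loads is snapshot on load and compared on flush, so an entity whose property values look different at flush time (a getter that normalises data, a custom type that doesn't round-trip exactly, a trimmed string, and so on) gets an UPDATE even though Save was never called on it. A small sketch of how to confirm and avoid that for the loaded Address, assuming session is the open ISession and address is the loaded instance; IsDirty and Evict are standard ISession members.

        // Sketch: NHibernate writes back anything that looks modified when the session
        // flushes, even if Save was never called on it. 'session', 'address' and
        // 'request' stand for the open ISession and the entities from the question.

        // Quick check: does the session think something is dirty before Request is saved?
        bool pendingChanges = session.IsDirty();

        // Keep the loaded Address out of the flush by detaching it from the session.
        // (It will no longer be tracked or lazy-loaded afterwards.)
        session.Evict(address);

        session.Save(request);
        session.Flush();   // the unexpected "UPDATE Address ..." should no longer appear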

    Read the article

  • Need a better way to execute console commands from python and log the results

    - by Wim Coenen
    I have a python script which needs to execute several command line utilities. The stdout output is sometimes used for further processing. In all cases, I want to log the results and raise an exception if an error is detected. I use the following function to achieve this:

        def execute(cmd, logsink):
            logsink.log("executing: %s\n" % cmd)
            popen_obj = subprocess.Popen(\
                cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            (stdout, stderr) = popen_obj.communicate()
            returncode = popen_obj.returncode
            if (returncode <> 0):
                logsink.log(" RETURN CODE: %s\n" % str(returncode))
            if (len(stdout.strip()) > 0):
                logsink.log(" STDOUT:\n%s\n" % stdout)
            if (len(stderr.strip()) > 0):
                logsink.log(" STDERR:\n%s\n" % stderr)
            if (returncode <> 0):
                raise Exception, "execute failed with error output:\n%s" % stderr
            return stdout

    "logsink" can be any python object with a log method. I typically use this to forward the logging data to a specific file, or echo it to the console, or both, or something else... This works pretty well, except for three problems where I need more fine-grained control than the communicate() method provides:

        1. stdout and stderr output can be interleaved on the console, but the above function logs them separately. This can complicate the interpretation of the log. How do I log stdout and stderr lines interleaved, in the same order as they were output?
        2. The above function will only log the command output once the command has completed. This complicates diagnosis of issues when commands get stuck in an infinite loop or take a very long time for some other reason. How do I get the log in real-time, while the command is still executing?
        3. If the logs are large, it can get hard to interpret which command generated which output. Is there a way to prefix each line with something (e.g. the first word of the cmd string followed by ":")?

    Read the article

  • Subversion commands not being run by Ubuntu rc.local

    - by talentedmrjones
    Here is my rc.local for an autoscaling Amazon EC2 instance based on Ubuntu (note that user names, domains, and paths have been changed for security purposes):

        logger "Begin rc.local startup script:"

        logger "svn checkout"
        sudo -u nonRootUser /usr/bin/svn co svn+ssh://[email protected]/path/to/repo /var/www/html | logger

        logger "chown writeable folder"
        chown www-data /var/www/html/writeableFolder

        logger "restart apache"
        /etc/init.d/apache2 restart | logger

        exit 0

    And here is the output of sudo tail -n 40 /var/log/syslog:

        Mar 10 22:05:20 ubuntu logger: Begin rc.local startup script:
        Mar 10 22:05:20 ubuntu logger: svn checkout
        Mar 10 22:05:20 ubuntu logger: chown writeable folder

    Of course it's not getting to the apache2 restart, because it errored on the chown. I did find, however, that if I do a checkout beforehand and change the rc.local svn command to an svn update, it still does not run the svn command, but it does output the apache2 restart successfully. These same svn commands work perfectly when I run them manually, though it's strange that within rc.local they do not produce any output whatsoever to logger, yet apache2 restart does. I've also tried running the svn co and svn update both with sudo -u and without. How do I get the svn command to run? Either a full checkout or an update. At this point either would be better than nothing!

    Read the article

  • WPF MenuItem ViewModel Command

    - by Jon Archway
    Hi, I am fairly new to WPF and am struggling a little with a scenario. I have a menu which has menu items. When one of these menu items gets clicked, a method needs to be called that will do something based upon the text displayed for that menu item. So for example, if the menu item's content is "test", I need to do something with "test". FYI, this "something" directly affects a collection on the ViewModel. This is easy to achieve using the click event and no ViewModel, but I was trying to implement MVVM using an explicit ViewModel. So I started to look into commands, but I cannot see how I would pass anything from the View back into the command in the ViewModel. Any suggestions on what I should be doing here? Thanks
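    The usual MVVM route here is a command that takes a parameter: bind the MenuItem's Command to the view model and its CommandParameter to the menu item's header text (for example CommandParameter="{Binding Header, RelativeSource={RelativeSource Self}}" on the MenuItem). A minimal sketch of the view-model side; RelayCommand stands for any delegate-based ICommand helper (one is sketched under the MVVM commanding question further down this page), and the collection and property names are illustrative.

        using System.Collections.ObjectModel;
        using System.Windows.Input;

        // Sketch of a view model whose command receives the clicked menu item's text.
        public class MenuViewModel
        {
            private readonly ObservableCollection<string> _items = new ObservableCollection<string>();
            private readonly ICommand _itemClickedCommand;

            public MenuViewModel()
            {
                _itemClickedCommand = new RelayCommand(
                    param => _items.Add((string)param),  // "do something" with the header text
                    param => param is string);           // only enabled when a string arrives
            }

            public ObservableCollection<string> Items { get { return _items; } }
            public ICommand ItemClickedCommand { get { return _itemClickedCommand; } }
        }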

    Read the article

  • Strange behavior in DockPanel

    - by plotnick
    I don't understand this: I have a toolbar with buttons bound to custom commands. I also have an expandable control docked to the left of the window - kind of a nav panel (DevComponents' NavigationPane, to be exact). Now, every time it's collapsed or expanded, the buttons in the toolbar become disabled and stay like that until the focus changes. Of course, it's simple to change the focus inside the Collapsed and Expanded events, but unfortunately that works only for the first one and is ignored for the second, and all buttons stay disabled. It seems to have something to do with CommandTarget, which I haven't defined anywhere. Maybe I should? Any ideas?
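    Toolbar buttons bound to commands typically grey out like this because WPF re-queries CanExecute against the new focus scope after the pane grabs focus; giving the buttons an explicit CommandTarget is one fix, and another is to push focus back and force a requery once the pane has toggled. A small sketch of the second idea; navPane and contentArea are placeholder names for the DevComponents pane and the main content element, and the Expanded/Collapsed events are the ones mentioned in the question.

        using System.Windows;
        using System.Windows.Input;

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();

                // Whatever the pane's exact event signatures are, a lambda adapts to them.
                navPane.Expanded  += (s, e) => RestoreCommandFocus();
                navPane.Collapsed += (s, e) => RestoreCommandFocus();
            }

            private void RestoreCommandFocus()
            {
                Keyboard.Focus(contentArea);                 // move focus out of the pane
                CommandManager.InvalidateRequerySuggested(); // re-run CanExecute everywhere
            }
        }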

    Read the article

  • What is the accepted pattern for WPF commanding in MVVM?

    - by Robert S.
    I'm working on a WPF app and I understand the command pattern pretty well, but I've found that there are several different implementations of the command pattern for MVVM. There's Josh Smith's implementation in his WPF sample app, the DelegateCommand from Prism, and the CommandBindings implementation. My question is, what is the generally accepted best practice for using commands with MVVM? My application uses Prism so DelegateCommand is available to us. The devs on my team are arguing about which approach is "best." Some don't like the numerous .cs files generated for each command, others prefer that everything be wired up via CommandBindings. I'm at a loss. Can anyone shed some light?
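    For reference, the DelegateCommand/RelayCommand style the question mentions usually boils down to one small reusable class rather than a .cs file per command; this is a generic sketch of that pattern (details differ slightly between Prism's DelegateCommand, Josh Smith's RelayCommand, and other MVVM libraries).

        using System;
        using System.Windows.Input;

        // A typical delegate-based ICommand, as used by most MVVM toolkits.
        // One instance per command, usually exposed as a property on the view model.
        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                _execute = execute;
                _canExecute = canExecute;
            }

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            // Let WPF's CommandManager drive requerying, like the built-in RoutedCommands do.
            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }

    A view model then exposes commands as properties, e.g. public ICommand SaveCommand { get { return _save ?? (_save = new RelayCommand(o => Save())); } }, which is essentially what both the Prism and Josh Smith variants come down to.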

    Read the article

  • MFC resource.h command/message IDs

    - by ak
    Hi, I'm working on an MFC application that got pretty messy over the years and over different teams of developers. The resource.h file, which contains all command/message mappings, grew pretty big over time and has lots of problems (like duplicate IDs). I am not proficient with MFC, so the question might sound pretty stupid... The MSDN docs mention that command IDs and message IDs should not be less than WM_USER and WM_APP respectively. I saw that most of the command IDs in the resource.h generated by Visual Studio begin around 100. Shouldn't this cause some interference with MFC/Windows commands and messages that overlap with the application-defined IDs? For example, I have a command ID:

        #define ID_MY_ID 101

    and there is a Windows command that has the same ID. When MFC sends this command to the app, it's handled like the application-defined ID_MY_ID, and the app takes unnecessary actions. Is that a possible scenario? Also, is there some third-party tool that helps to profile the project resources?

    Read the article

  • Adb shell commands to change settings or perform tasks on a phone

    - by Noah
    How do I use adb to perform some automated tasks on my Android phone? I need to find commands that I can issue from the command line (ideally, using a .bat file) that will be capable of more than simply opening an application or sending an input keyevent (button press). For instance, I want to toggle Airplane Mode on or off from the command line. Currently, the best I can do is launch the Wireless & network settings menu and then use input keyevents to click Airplane mode:

        adb shell am start -a android.settings.AIRPLANE_MODE_SETTINGS
        adb shell input keyevent 19 & adb shell input keyevent 23

    There are quite a few drawbacks to this method, primarily that the screen has to be on and unlocked. Also, the tasks I want to do are much broader than this simple example. Other things I'd like to do if possible:

        1. Play an mp3 and set it on repeat. Current solution:

            adb shell am start -n com.android.music/.MusicBrowserActivity
            adb shell input keyevent 84
            adb shell input keyevent 56 & adb shell input keyevent 66 & adb shell input keyevent 67 & adb shell input keyevent 19
            adb shell input keyevent 23 & adb shell input keyevent 21
            adb shell input keyevent 19 & adb shell input keyevent 19 & adb shell input keyevent 19 & adb shell input keyevent 22 & adb shell input keyevent 22 & adb shell input keyevent 23 & adb shell input keyevent 23

        2. Play a video. (Current solution: open MediaGallery, send keyevents, similar to above.)
        3. Change the volume. (Current solution: send volume-up button keyevents.)
        4. Change the display timeout. (Current solution: open sound & display settings, send keyevents.)

    As before, these all require the screen to be on and unlocked. The other major drawback to using keyevents is that if the UI of the application is changed, the keyevents will no longer perform the correct function. If there is no easier way to do these sorts of things, is there at least a way to turn the screen on (using adb) when it is off? Or to have keyevents still work when the screen is off? I'm not very familiar with Java. That said, I've seen code like the following (source: http://yenliangl.blogspot.com/2009/12/toggle-airplane-mode.html) to change a setting on the phone:

        Settings.System.putInt(Settings.System.AIRPLANE_MODE_ON, 1 /* 1 or 0 */);

    How do I translate something like the above into an adb shell command? Is this possible, or the wrong way to think about it? I can provide more details if needed. Thanks!

    Read the article

  • How do you mock ViewModel Commands using moq?

    - by devnet247
    Hi, I might be approaching this all wrong, but please help me to understand. I really want to TDD building a WPF application using Moq. I would like to mock the view model.

    Application: show a list of contacts, and when you double-click on a contact it shows the contact.

    Tests: mock GetContactsCommand and test that it has been called; test that you get a list of contacts.

    I'm not sure how to mock the view model and its commands - can you correct me? So I have started to do the following:

        [Test]
        public void Should_be_able_to_mock_getContactsCommand_and_get_a_list_of_contacts()
        {
            //Arrange
            var expectedContacts = new ObservableCollection<ContactViewModel>
            {
                new ContactViewModel(new ContactModel { FirstName = "Jo", LastName = "Bloggs", Email = "[email protected]" }),
                new ContactViewModel(new ContactModel { FirstName = "Mary", LastName = "Bloggs", Email = "[email protected]" })
            };

            var mock = new Mock<IContactListViewModel>();
            mock.SetupGet(x => x.GetContactsCommand).Verifiable();
            mock.SetupGet(x => x.Contacts).Returns(expectedContacts);

            //Act
            //?

            //Assert
            mock.VerifySet(x => x.Contacts, Times.AtLeastOnce());
            mock.Object.Contacts.Count.ShouldEqual(expectedContacts.Count);
        }

        public interface IContactListViewModel
        {
            ObservableCollection<ContactViewModel> Contacts { get; set; }
            ICommand GetContactsCommand { get; }
        }

        public interface IContactModel
        {
            string FirstName { get; set; }
            string LastName { get; set; }
            string Email { get; set; }
        }

        public class ContactModel : IContactModel
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Email { get; set; }
        }

        public class ContactViewModel : ViewModelBase
        {
            private readonly ContactModel _contactModel;

            public ContactViewModel(ContactModel contactModel)
            {
                _contactModel = contactModel;
            }

            public string FirstName
            {
                get { return _contactModel.FirstName; }
                set { _contactModel.FirstName = value; OnPropertyChanged("FirstName"); }
            }

            public string LastName
            {
                get { return _contactModel.LastName; }
                set { _contactModel.LastName = value; OnPropertyChanged("LastName"); }
            }

            public string Email
            {
                get { return _contactModel.Email; }
                set { _contactModel.Email = value; OnPropertyChanged("Email"); }
            }
        }

        public class ContactListViewModel : ViewModelBase, IContactListViewModel
        {
            private ObservableCollection<ContactViewModel> _contacts;

            public ObservableCollection<ContactViewModel> Contacts
            {
                get { return _contacts; }
                set { _contacts = value; OnPropertyChanged("Contacts"); }
            }

            private RelayCommand _getContactsCommand;

            public ICommand GetContactsCommand
            {
                get
                {
                    return _getContactsCommand ?? (_getContactsCommand = new RelayCommand(x => GetContacts(), x => CanGetContacts));
                }
            }

            private static bool CanGetContacts
            {
                get { return true; }
            }

            private void GetContacts()
            {
                //pretend we are going to the service or db whatever
                Contacts = new ObservableCollection<ContactViewModel>
                {
                    new ContactViewModel(new ContactModel { FirstName = "Jo", LastName = "Bloggs", Email = "[email protected]" }),
                    new ContactViewModel(new ContactModel { FirstName = "Mary", LastName = "Bloggs", Email = "[email protected]" })
                };
            }
        }
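    A common answer to this kind of question is that the view model under test is not mocked at all - its dependencies are. The sketch below assumes a hypothetical IContactService injected into ContactListViewModel through a constructor overload (neither exists in the code above); the test then executes the real GetContactsCommand and asserts on the real Contacts collection.

        using System.Collections.Generic;
        using Moq;
        using NUnit.Framework;

        public interface IContactService
        {
            IEnumerable<ContactModel> GetContacts();
        }

        [Test]
        public void GetContactsCommand_populates_Contacts_from_the_service()
        {
            // Arrange: a fake service returning two known contacts.
            var service = new Mock<IContactService>();
            service.Setup(s => s.GetContacts()).Returns(new[]
            {
                new ContactModel { FirstName = "Jo",   LastName = "Bloggs", Email = "[email protected]" },
                new ContactModel { FirstName = "Mary", LastName = "Bloggs", Email = "[email protected]" }
            });

            var viewModel = new ContactListViewModel(service.Object);   // assumed constructor overload

            // Act: drive the real command, exactly as the view would.
            viewModel.GetContactsCommand.Execute(null);

            // Assert: the view model exposed what the service returned, and the service was used.
            Assert.AreEqual(2, viewModel.Contacts.Count);
            service.Verify(s => s.GetContacts(), Times.Once());
        }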

    Read the article

  • Checking if a RoutedEvent has any handlers

    - by AK
    I've got a custom Button class that always performs the same action when it gets clicked (opening a specific window). I'm adding a Click event that can be assigned in the button's XAML, like a regular button. When it gets clicked, I want to execute the Click event handler if one has been assigned; otherwise I want to execute the default action. The problem is that there's apparently no way to check if any handlers have been added to an event. I thought a null check on the event would do it:

        if (Click == null)
        {
            DefaultClickAction();
        }
        else
        {
            RaiseEvent(new RoutedEventArgs(ClickEvent, this));
        }

    ...but that doesn't compile. The compiler tells me that I can't do anything other than += or -= to an event outside of the defining class, even though I'm trying to do this check INSIDE the defining class. I've implemented the correct behavior myself, but it's ugly and verbose and I can't believe there isn't a built-in way to do this. I must be missing something. Here's the relevant code:

        public class MyButtonClass : Control
        {
            //...

            public static readonly RoutedEvent ClickEvent = EventManager.RegisterRoutedEvent(
                "Click", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(MyButtonClass));

            public event RoutedEventHandler Click
            {
                add { ClickHandlerCount++; AddHandler(ClickEvent, value); }
                remove { ClickHandlerCount--; RemoveHandler(ClickEvent, value); }
            }

            private int ClickHandlerCount = 0;

            private Boolean ClickHandlerExists
            {
                get { return ClickHandlerCount > 0; }
            }

            //...
        }
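    Since the event already has custom add/remove accessors, one way to drop the hand-maintained counter is to keep the combined delegate in a field and null-check that. The field only reflects handlers added through the CLR event (not direct AddHandler calls), and the OnMouseLeftButtonUp override below is just an illustrative place to trigger the check.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;

        public class MyButtonClass : Control
        {
            public static readonly RoutedEvent ClickEvent = EventManager.RegisterRoutedEvent(
                "Click", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(MyButtonClass));

            private RoutedEventHandler _clickHandlers;   // mirrors what was added via the CLR event

            public event RoutedEventHandler Click
            {
                add    { _clickHandlers += value; AddHandler(ClickEvent, value); }
                remove { _clickHandlers -= value; RemoveHandler(ClickEvent, value); }
            }

            protected override void OnMouseLeftButtonUp(MouseButtonEventArgs e)
            {
                base.OnMouseLeftButtonUp(e);

                if (_clickHandlers == null)
                    DefaultClickAction();                              // nothing subscribed: default behaviour
                else
                    RaiseEvent(new RoutedEventArgs(ClickEvent, this)); // let the subscribers handle it
            }

            private void DefaultClickAction() { /* open the specific window */ }
        }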

    Read the article

  • WPF Style Override breaks Validation Error event propagation

    - by Ben McMillan
    I have a custom control that overrides Window:

        public class Window : System.Windows.Window
        {
            static Window()
            {
                DefaultStyleKeyProperty.OverrideMetadata(typeof(Window),
                    new System.Windows.FrameworkPropertyMetadata(typeof(Window)));
            }
            ...
        }

    It also has a style:

        <Style TargetType="{x:Type Controls:Window}" BasedOn="{StaticResource {x:Type Window}}">
          <Setter Property="WindowStyle" Value="None" />
          <Setter Property="Padding" Value="5" />
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type Controls:Window}">
                ...

    Unfortunately, this breaks the propagation of the Validation.ErrorEvent for my window's contents. That is, my window can receive the event just fine, but I don't know what to do with it to mimic how a standard Window (or whoever) deals with it. If the validating controls are placed in a standard window, they work. They also work if I just take out the OverrideMetadata call (leaving them inside my custom window). Why is this happening, and how can I get the stock functionality for handling these validation error events working again? Thanks!

    Read the article

  • Disable Return key outside textareas on an ASP.NET web page (containing Ajax code)

    - by Achim
    Hi, I have an ASP.NET web page with the common ASP.NET form. The outer "border" of the page (i.e. main menu, header, ...) is built using normal ASP.NET code via a master page. The content of that page uses jQuery to display dynamic forms and to send data to the server. If I push the return key on that page, I jump to a (more or less) random page - which is not what the user expects. ;-) There are some text areas and the user must be able to enter line breaks there; otherwise it would be fine to disable the return key completely. Is there any bullet-proof way to do that? I found some solutions on the web which capture the keypress event and ignore key code 13, but that does not really work: it works as long as the page has just loaded, but as soon as I have clicked on some elements, the return key behaves as usual. Any hint would be really appreciated! Achim

    Read the article

  • WPF TextBox Intercepting RoutedUICommands

    - by Joseph Sturtevant
    I am trying to get Undo/Redo keyboard shortcuts working in my WPF application (I have my own custom functionality implemented using the Command pattern). It seems, however, that the TextBox control is intercepting my "Undo" RoutedUICommand. What is the simplest way to disable this so that I can catch Ctrl+Z at the root of my UI tree? I would like to avoid putting a ton of code/XAML into each TextBox in my application if possible. The following briefly demonstrates the problem:

        <Window x:Class="InputBindingSample.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:loc="clr-namespace:InputBindingSample"
                Title="Window1" Height="300" Width="300">
          <Window.CommandBindings>
            <CommandBinding Command="loc:Window1.MyUndo" Executed="MyUndo_Executed" />
          </Window.CommandBindings>
          <DockPanel LastChildFill="True">
            <StackPanel>
              <Button Content="Ctrl+Z Works If Focus Is Here" />
              <TextBox Text="Ctrl+Z Doesn't Work If Focus Is Here" />
            </StackPanel>
          </DockPanel>
        </Window>

        using System.Windows;
        using System.Windows.Input;

        namespace InputBindingSample
        {
            public partial class Window1
            {
                public static readonly RoutedUICommand MyUndo = new RoutedUICommand("MyUndo", "MyUndo", typeof(Window1),
                    new InputGestureCollection(new[] { new KeyGesture(Key.Z, ModifierKeys.Control) }));

                public Window1()
                {
                    InitializeComponent();
                }

                private void MyUndo_Executed(object sender, ExecutedRoutedEventArgs e)
                {
                    MessageBox.Show("MyUndo!");
                }
            }
        }
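    One low-ceremony way to stop a TextBox from swallowing the gesture is to bind Ctrl+Z inside the TextBox to ApplicationCommands.NotACommand, which is documented to match the gesture without handling the underlying key event, so the shortcut keeps routing up to the Window. A sketch of applying that from code; the helper class is illustrative, and the same thing can be written as a KeyBinding in the TextBox's InputBindings in XAML.

        using System.Windows.Controls;
        using System.Windows.Input;

        static class UndoGestureHelper
        {
            // Call once per TextBox (e.g. from an attached behaviour or a Loaded handler).
            public static void ReleaseCtrlZ(TextBox textBox)
            {
                // NotACommand matches the gesture but does not mark the key event handled,
                // so Ctrl+Z continues up to the Window's own CommandBinding.
                textBox.InputBindings.Add(
                    new KeyBinding(ApplicationCommands.NotACommand, Key.Z, ModifierKeys.Control));
            }
        }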

    Read the article

  • AvalonDock + UserControl + DataGrid + ContextMenu command routing issue

    - by repka
    I have this kind of layout:

        <Window x:Class="DockAndMenuTest.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
                Title="MainWindow" Height="350" Width="525">
          <ad:DockingManager>
            <ad:DocumentPane>
              <ad:DockableContent Title="Doh!">
                <UserControl>
                  <UserControl.CommandBindings>
                    <CommandBinding Command="Zoom" Executed="ExecuteZoom" CanExecute="CanZoom"/>
                  </UserControl.CommandBindings>
                  <DataGrid Name="_evilGrid">
                    <DataGrid.Resources>
                      <Style TargetType="DataGridRow">
                        <Setter Property="ContextMenu">
                          <Setter.Value>
                            <ContextMenu>
                              <MenuItem Command="Zoom"/>
                            </ContextMenu>
                          </Setter.Value>
                        </Setter>
                      </Style>
                    </DataGrid.Resources>
                  </DataGrid>
                </UserControl>
              </ad:DockableContent>
            </ad:DocumentPane>
          </ad:DockingManager>
        </Window>

    Briefly: a ContextMenu is set for each DataGridRow of a DataGrid inside a UserControl, which in its turn is inside a DockableContent of AvalonDock. The code-behind is trivial as well:

        public partial class MainWindow
        {
            public MainWindow()
            {
                InitializeComponent();
                _evilGrid.ItemsSource = new[]
                {
                    Tuple.Create(1, 2, 3),
                    Tuple.Create(4, 4, 3),
                    Tuple.Create(6, 7, 1),
                };
            }

            private void ExecuteZoom(object sender, ExecutedRoutedEventArgs e)
            {
                MessageBox.Show("zoom !");
            }

            private void CanZoom(object sender, CanExecuteRoutedEventArgs e)
            {
                e.CanExecute = true;
            }
        }

    So here's the problem: right-clicking on the selected row (if it was selected before the right click), my command comes out disabled. The command is "Zoom" in this case, but it can be any other, including a custom one. If I get rid of either the docking or the UserControl around my grid, there are no problems. ListBox doesn't have this issue either. So I don't know what's at fault here. Snoop shows that in cases when this propagation fails, instead of the UserControl, CanExecute is handled by PART_ShowContextMenuButton (a Button), which is part of the docking header. I've had other issues with UI command propagation within UserControls hosted inside AvalonDock, but this one is the easiest to reproduce.
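    A context menu lives in its own popup window, so its items often evaluate CanExecute against whatever the hosting chrome (here the dock pane's PART_ShowContextMenuButton) considers the command target; a common workaround is to point each MenuItem's CommandTarget back at the ContextMenu's PlacementTarget. A sketch of that binding set up in code; the equivalent XAML is shown in the comment, and the helper name is illustrative.

        using System.Windows.Controls;
        using System.Windows.Data;

        static class ContextMenuCommandHelper
        {
            // Route CanExecute/Executed through the element the menu was opened on
            // (the DataGrid row) rather than whatever currently holds focus.
            // Equivalent XAML on the MenuItem:
            //   CommandTarget="{Binding PlacementTarget,
            //                   RelativeSource={RelativeSource AncestorType={x:Type ContextMenu}}}"
            public static void RouteToPlacementTarget(MenuItem item)
            {
                var binding = new Binding("PlacementTarget")
                {
                    RelativeSource = new RelativeSource(RelativeSourceMode.FindAncestor, typeof(ContextMenu), 1)
                };
                item.SetBinding(MenuItem.CommandTargetProperty, binding);
            }
        }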

    Read the article

  • raising events passing parameters in wpf

    - by Thiago
    Hi, I'd like to add tabs to my window when an item in the GridView is double-clicked, but the tab that will be added depends on the clicked item. Which way should I do this in WPF? I thought about routed events, but I don't know how to pass a parameter with them. Any suggestions?
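    One alternative to a custom routed event is to bind the double-click gesture to a parameterised command (for example a MouseBinding with MouseAction="LeftDoubleClick" inside the item container) and pass the clicked item as the CommandParameter. A rough sketch of the view-model side; RelayCommand is the usual delegate-based ICommand helper, and TabViewModel/ItemViewModel are made-up names for this illustration.

        using System.Collections.ObjectModel;
        using System.Windows.Input;

        // Sketch: the gesture is bound to OpenTabCommand with CommandParameter="{Binding}",
        // so the command receives the clicked item and decides which tab to add.
        public class ShellViewModel
        {
            private readonly ObservableCollection<TabViewModel> _tabs = new ObservableCollection<TabViewModel>();
            private readonly ICommand _openTabCommand;

            public ShellViewModel()
            {
                _openTabCommand = new RelayCommand(param =>
                {
                    var item = param as ItemViewModel;
                    if (item != null)
                        _tabs.Add(TabViewModel.For(item));   // pick the tab type from the clicked item
                });
            }

            public ObservableCollection<TabViewModel> Tabs { get { return _tabs; } }
            public ICommand OpenTabCommand { get { return _openTabCommand; } }
        }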

    Read the article

  • MS Surface Tag Visualizer steals contact events

    - by Isak Savo
    I'm struggling with the TagVisualizer control on an MS Surface project. In theory the control seems great, allowing you to respond to input from real-world physical objects. The problem is that the control will cover the entire screen (since I want to capture tags on the entire screen) and, as such, no other controls in my app will receive the touch events (unless they are direct ascendants in the visual tree). In my app, I want to have a "layer" type of approach, where each layer can respond to (contact) input:

        Window
         `- Grid
            `- LayersPanel
               `- TagVisualizer
               `- Layer 1
               `- Layer 2
               `- Layer 3
               `- Layer 4

    Now it doesn't matter where I put the tag visualizer; it's always going to steal contact events from all or some of the other layers (due to the nature of routed events). To me, it seems like the control is completely useless in practice, as it will always interfere with your application's other controls. What am I missing here? So my questions are: any suggestions on how to work around this? Has anyone used TagVisualizers in a similar scenario? If so, how did you solve this? By the way, the layers all work fine, since they will only steal events that are directly on top of their sub-elements (the rest of the layer is invisible to hit testing).

    Read the article

  • How do I swallow the dropdown behavior inside an Expander.Header?

    - by Peter Seale
    Hello, I would like to prevent an Expander from expanding/collapsing when users click inside the header area. This is basically the same question as Q 1396153, but I'd appreciate a more favorable answer :) Is there a non-invasive way to do this? I am not sure exactly how to attach behavior to the Expander.Header content to prevent mouse clicks. I'm willing to float in content outside the expander itself via a fixed grid layout, but I'm not keen on that solution. Ideas? XamlPad sample XAML:

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:sys="clr-namespace:System;assembly=mscorlib"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Expander>
            <Expander.Header>
              <TextBlock>When I click this text, I don't want to trigger expansion/collapse! Only when I click the expander button do I want to trigger an expand/collapse!</TextBlock>
            </Expander.Header>
            <Grid Background="Red" Height="100" Width="100">
            </Grid>
          </Expander>
        </Page>

    Read the article

  • WPF: Command routing for keyboard shortcuts

    - by Sprotty
    Basically I want to create a keyboard shortcut which is valid within the scope of a window, and not just enabled when focus is within the control that binds it. In more detail: I have a window which has three controls:

        - a toolbar
        - a textbox
        - a custom control

    The toolbar has a button bound to the command CustomCommands.CmdA and linked to Ctrl-T. My custom control can process CmdA. When I run the app and click on my custom control, CmdA is enabled and works fine, and Ctrl-T causes the command to fire. However, when I select the text box, my custom command CmdA becomes disabled. I can rectify this by setting the command target for CmdA's button. Now when I select the textbox, CmdA is still enabled, but the keyboard shortcut Ctrl-T does nothing. Is there any easy way to change the scope of keyboard shortcuts? Or do I need to catch the keypress somewhere lower down, work out which command it relates to, and route it myself (if so, is there a framework within which to do this)? Many thanks, Simon
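    Because routed commands are raised from the focused element, a shortcut that should work window-wide usually needs the KeyBinding declared at the Window level with an explicit CommandTarget aimed at the element that actually handles the command. A sketch under that assumption; customControl stands for the x:Name of the custom control in the question.

        using System.Windows;
        using System.Windows.Input;

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();

                // Declare the gesture on the window and aim it at the control that
                // actually handles CmdA, so it keeps working while the TextBox has focus.
                var binding = new KeyBinding(CustomCommands.CmdA, Key.T, ModifierKeys.Control)
                {
                    CommandTarget = customControl
                };
                InputBindings.Add(binding);
            }
        }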

    Read the article

  • How to find history of shell commands since machine was created?

    - by Edward Tanguay
    I created an Ubuntu virtualbox machine a couple weeks ago and have been working on projects off and on in it since then. Now I would like to find the syntax of some commands I typed in the terminal a week ago, but I have opened and closed the terminal window and restarted the machine numerous times. How can I get the history command to go back to the first command I typed after I created the machine, or is there another place that all the commands are stored in Ubuntu?

    Read the article

  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straightforward. The ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the object I want to delete. That way I know what the user clicked. A user should only be able to delete their "own" objects - so I need to do some checks in the "CanExecute" call of the Command to verify that the user has the right permissions. The problem is that the parameter passed to CanExecute is NULL the first time it's called - so I can't run the logic to enable/disable the command. However, if I make it always enabled and then click the button to execute the command, the CommandParameter is passed in correctly. So that means the binding against the CommandParameter is working. The XAML for the ItemsControl and the DataTemplate looks like this:

        <ItemsControl x:Name="commentsList"
                      ItemsSource="{Binding Path=SharedDataItemPM.Comments}"
                      Width="Auto" Height="Auto">
          <ItemsControl.ItemTemplate>
            <DataTemplate>
              <StackPanel Orientation="Horizontal">
                <Button Content="Delete" FontSize="10"
                        Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}"
                        CommandParameter="{Binding}" />
              </StackPanel>
            </DataTemplate>
          </ItemsControl.ItemTemplate>
        </ItemsControl>

    So as you can see, I have a list of Comment objects. I want the CommandParameter of the DeleteCommentCommand to be bound to the Comment object. So I guess my question is: has anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time - why is that?

    Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data bound. It turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (as suggested) - but it still doesn't work. Any tips on how to control it?

    Update 2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also - is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command object?

    Update 3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
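    It is normal for CanExecute to be called before the CommandParameter binding has been applied, so the usual advice is to make CanExecute tolerate a null parameter (return false rather than throw) and then trigger a requery once binding is done - for instance by hooking CanExecuteChanged to CommandManager.RequerySuggested inside the command (as DelegateCommand/RelayCommand implementations typically do), or by invalidating the requery when the view has loaded. A small sketch of the latter; CommentsView is an illustrative name for the view hosting the ItemsControl above.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;

        public partial class CommentsView : UserControl
        {
            public CommentsView()
            {
                InitializeComponent();

                // Once the visual tree is loaded and the CommandParameter bindings have
                // been applied, ask WPF to call CanExecute again so the Delete buttons
                // are re-evaluated with a real Comment instance instead of null.
                Loaded += (s, e) => CommandManager.InvalidateRequerySuggested();
            }
        }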

    Read the article
