Search Results

Search found 57327 results on 2294 pages for 'nested set'.


  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn’t understand them at all. There’s nothing to be frightened of, though, and they are pretty easy to learn how to use. The first thing you need to know is that when you’re drawing in XNA, you aren’t actually drawing to the screen. Instead you’re drawing to this thing called the “back buffer”. Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there’s a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it’s busy drawing the other one on the screen. When the current update-draw cycle ends, it flips, and the section you were just drawing to gets drawn to the screen, while the one that was being drawn to the screen before is now the one you’ll be drawing on. This is what’s meant by “double buffering”. If you drew directly to the screen, the player would see all of those draws taking place as they happened, and that would look odd and not very good at all.

    Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur and create mirrors in 3D games. There are quite a lot of things that render targets let you do.

    To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D. It’s available under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the ‘using’ statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter.

    One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game, and then make a copy for Windows Phone 7, the drop-down that lets you choose between drawing to a WP7 device and the WP7 emulator stays grayed out. To resolve this, you need to right-click on the Windows Phone 7 version in the Solution Explorer and choose “Set as StartUp Project”. The bar will then become active, letting you change the target you wish to deploy to. If you want another version to be the one that starts up when you press F5 to start debugging, just go and right-click on that version and choose “Set as StartUp Project” for it once you’ve set the WP7 target (device or emulator) that you want.
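    The basic pattern is small. The sketch below is not the downloadable sample itself, just a minimal illustration of the API described above; the 800x480 size and the DrawScene helper are placeholders. It creates a render target, draws the scene into it, switches back to the back buffer, and then draws the render target like any other texture.

      // Minimal RenderTarget2D usage sketch (XNA 4.0 style).
      RenderTarget2D sceneTarget;

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          // A render target is just drawable graphics memory; size it however you need.
          sceneTarget = new RenderTarget2D(GraphicsDevice, 800, 480);
      }

      protected override void Draw(GameTime gameTime)
      {
          // 1. Redirect drawing into our own render target.
          GraphicsDevice.SetRenderTarget(sceneTarget);
          GraphicsDevice.Clear(Color.CornflowerBlue);
          DrawScene(gameTime);                      // placeholder for whatever the game normally draws

          // 2. Switch back to the back buffer (null means "draw to the screen" again).
          GraphicsDevice.SetRenderTarget(null);
          GraphicsDevice.Clear(Color.Black);

          // 3. A RenderTarget2D is a Texture2D, so draw it with SpriteBatch;
          //    a post-processing effect could be applied here instead of a plain draw.
          spriteBatch.Begin();
          spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
          spriteBatch.End();

          base.Draw(gameTime);
      }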

    Read the article

  • Running a WebLogic Portal (WLP) 10.3.4 Domain as a Windows Service

    - by user647124
    To start a WLP server as a Windows service it is simplest to make your own script based on the provided standard script located at WL_HOME\server\bin\installSvc.cmd. The standard script works fine for a plain WLS domain, but lacks some classpath entries and options necessary for WLP.

    Start by making a copy of the installSvc.cmd script and naming it something specific to your domain.

    Next, just under SETLOCAL you will find where WL_HOME is defined. Here you will add the definitions you would normally add in a script that later calls installSvc.cmd (as per the standard documentation):

      set DOMAIN_NAME=gnma_test_domain
      set USERDOMAIN_HOME=D:\my_test_domain
      set SERVER_NAME=AdminServer
      set WLS_USER=weblogic
      set WLS_PW=gnmaAdmin01
      set PRODUCTION_MODE=true
      set MEM_ARGS=-Xms512m -Xmx512m
      set MW_HOME=C:\Oracle\Middleware

    Note: I had heard of people using this approach who had issues with the length of the command line. This may be due to their use of the default domain path. In the example above, I use a shorter path.

    At this point, edit DOMAIN_HOME\bin\startWebLogic.cmd and set it to echo both the classpath and the options. Then start the domain and capture the output of those echoes, then shut the domain back down. Now REM out the existing CLASSPATH definition, then use the outputs you captured earlier to set the CLASSPATH and JAVA_OPTIONS like this:

      REM set CLASSPATH=%WEBLOGIC_CLASSPATH%;%CLASSPATH%; C:\Oracle\Middleware\wlportal_10.3\portal\lib\security\wsrp-security-providers.jar
      set CLASSPATH=%MW_HOME%\patch_wls1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_wlp1034\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_oepe1111\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\patch_ocm1033\profiles\default\sys_manifest_classpath\weblogic_patch.jar;%MW_HOME%\JROCKI~1.1-3\lib\tools.jar;%WL_HOME%\server\lib\weblogic_sp.jar;%WL_HOME%\server\lib\weblogic.jar;%MW_HOME%\modules\features\weblogic.server.modules_10.3.4.0.jar;%WL_HOME%\server\lib\webservices.jar;%MW_HOME%\modules\ORGAPA~1.1/lib/ant-all.jar;%MW_HOME%\modules\NETSFA~1.0_1/lib/ant-contrib.jar;%WL_HOME%\common\derby\lib\derbyclient.jar;%WL_HOME%\server\lib\xqrl.jar;%WL_HOME%\server\lib\xquery.jar;%WL_HOME%\server\lib\binxml.jar
      set JAVA_OPTIONS= -Xverify:none -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dplatform.home=%WL_HOME% -Dwls.home=%WL_HOME%\server -Dweblogic.home=%WL_HOME%\server -Dweblogic.wsee.bind.suppressDeployErrorMessage=true -Dweblogic.wsee.skip.async.response=true -Dweblogic.management.discover=true -Dwlw.iterativeDev=true -Dwlw.testConsole=true -Dwlw.logErrorsToConsole=true -Dweblogic.ext.dirs=%MW_HOME%\patch_wls1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_wlp1034\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_oepe1111\profiles\default\sysext_manifest_classpath;%MW_HOME%\patch_ocm1033\profiles\default\sysext_manifest_classpath;%MW_HOME%\wlportal_10.3\p13n\lib\system;%MW_HOME%\wlportal_10.3\light-portal\lib\system;%MW_HOME%\wlportal_10.3\portal\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\lib\system;%MW_HOME%\wlportal_10.3\analytics\lib\system;%MW_HOME%\wlportal_10.3\apps\lib\system;%MW_HOME%\wlportal_10.3\info-mgmt\deprecated\lib\system;%MW_HOME%\wlportal_10.3\content-mgmt\lib\system -Dweblogic.alternateTypesDirectory=%MW_HOME%\wlportal_10.3\portal\lib\security

    And that's it. It looks really simple, but it took me quite some time to gather all the necessary pieces in order to make it work. Hopefully you find this before you go through half as much research.

    The example here uses a domain with only the Admin server and no managed servers. For a variety of reasons I only want the Admin server to be run as a service. The standard documentation along with the example above should allow you to expand this to include managed servers should you feel the need.

    Read the article

  • SD card bigger than 2 GB is not recognized in Ubuntu 12.04

    - by dex1
    When I insert a card up to 2 GB it is immediately seen by the system, but if I try it with a bigger one it is not seen. I presume the issue is not due to the card reader itself, as it reads all cards under Windows 7, but due to the Linux driver. I could see some people having similar issues but no solution. Any help appreciated. GParted doesn't see cards bigger than 2 GB.

    After inserting the small card:
      ubuntu@ubuntu:~$ dmesg
      [10169.384481] mmc0: new SD card at address a95c
      [10169.384870] mmcblk0: mmc0:a95c SD016 14.0 MiB
      [10169.386715] mmcblk0: p1
    Everything worked fine. Then I removed the small one, put in the 8 GB card and waited for 2 minutes:
      [10295.736422] mmc0: card a95c removed
      [10362.448383] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
      [10372.480076] mmc0: Timeout waiting for hardware interrupt.
      [10382.496146] mmc0: Timeout waiting for hardware interrupt.
      [10392.512149] mmc0: Timeout waiting for hardware interrupt.
      [10402.528145] mmc0: Timeout waiting for hardware interrupt.
      [10402.529267] mmc0: error -110 whilst initialising SD card
      [10402.748807] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
      [10412.768063] mmc0: Timeout waiting for hardware interrupt.
      [10422.784051] mmc0: Timeout waiting for hardware interrupt.
      [10432.800076] mmc0: Timeout waiting for hardware interrupt.
      [10442.816067] mmc0: Timeout waiting for hardware interrupt.
      [10442.817165] mmc0: error -110 whilst initialising SD card
      [10443.040805] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
      [10453.056145] mmc0: Timeout waiting for hardware interrupt.
      [10463.072139] mmc0: Timeout waiting for hardware interrupt.
      [10473.088050] mmc0: Timeout waiting for hardware interrupt.
      [10483.104046] mmc0: Timeout waiting for hardware interrupt.
      [10483.104107] mmc0: error -110 whilst initialising SD card
      [10483.328960] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
      [10493.344144] mmc0: Timeout waiting for hardware interrupt.
ubuntu@ubuntu:~$ lspci 00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 03) 00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 03) 00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (secondary) (rev 03) 00:1a.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03) 00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03) 00:1c.5 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 6 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3) 00:1f.0 ISA bridge: Intel Corporation 82801HM (ICH8M) LPC Interface Controller (rev 03) 00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03) 00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03) 07:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8072 PCI-E Gigabit Ethernet Controller (rev 16) 0a:01.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02) 0a:01.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 02) 0a:01.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01) Same cards, same machine (same reader) only different OS(win7) work flawlessly. Some interesting reading I came across but is Chinese for me http://www.mail-archive.com/[email protected]/msg14598.html and another bit http://article.gmane.org/gmane.linux.kernel.mmc/11973/match=sd+card+not+recognized

    Read the article

  • How to use DI and DI containers

    - by Pinetree
    I am building a small PHP mvc framework (yes, yet another one), mostly for learning purposes, and I am trying to do it the right way, so I'd like to use a DI container, but I am not asking which one to use but rather how to use one. Without going into too much detail, the mvc is divided into modules which have controllers which render views for actions. This is how a request is processed: a Main object instantiates a Request object, and a Router, and injects the Request into the Router to figure out which module was called. then it instantiates the Module object and sends the Request to that the Module creates a ModuleRouter and sends the Request to figure out the controller and action it then creates the Controller and the ViewRenderer, and injects the ViewRenderer into the Controller (so that the controller can send data to the view) the ViewRenderer needs to know which module, controller and action were called to figure out the path to the view scripts, so the Module has to figure out this and inject it to the ViewRenderer the Module then calls the action method on the controller and calls the render method on the ViewRenderer For now, I do not have any DI container set up, but what I do have are a bunch of initX() methods that create the required component if it is not already there. For instance, the Module has the initViewRenderer() method. These init methods get called right before that component is needed, not before, and if the component was already set it will not initialize it. This allows for the components to be switched, but it does not require manually setting them if they are not there. Now, I'd like to do this by implementing a DI container, but still keep the manual configuration to a bare minimum, so if the directory structure and naming convention is followed, everything should work, without even touching the config. If I use the DI container, do I then inject it into everything (the container would inject itself when creating a component), so that other components can use it? When do I register components with the DI? Can a component register other components with the DI during run-time? Do I create a 'common' config and use that? How do I then figure out on the fly which components I need and how they need to be set up? If Main uses Router which uses Request, Main then needs to use the container to get Module (or does the module need to be found and set beforehand? How?) Module uses Router but needs to figure out the settings for the ViewRenderer and the Controller on the fly, not in advance, so my DI container can't be setting those on the Module before the module figures out the controller and action... What if the controller needs some other service? Do I inject the container into every controller? If I start doing that, I might just inject it into everything... Basically I am looking for the best practices when dealing with stuff like this. I know what DI is and what DI containers do, but I am looking for guidance to using them in real life, and not some isolated examples on the net. Sorry for the lengthy post and many thanks in advance.

    Read the article

  • Roles / Profiles / Perspectives in NetBeans IDE 7.1

    - by Geertjan
    With a check out of main-silver from yesterday, I'm able to use the brand new "role" attribute in @TopComponent.Registration, as you can see below, in the bit in bold: @ConvertAsProperties(dtd = "-//org.role.demo.ui//Admin//EN", autostore = false) @TopComponent.Description(preferredID = "AdminTopComponent", //iconBase="SET/PATH/TO/ICON/HERE", persistenceType = TopComponent.PERSISTENCE_ALWAYS) @TopComponent.Registration(mode = "editor", openAtStartup = true, role="admin") public final class AdminTopComponent extends TopComponent { And here's a window for general users of the application, with the "role" attribute set to "user": @ConvertAsProperties(dtd = "-//org.role.demo.ui//User//EN", autostore = false) @TopComponent.Description(preferredID = "UserTopComponent", //iconBase="SET/PATH/TO/ICON/HERE", persistenceType = TopComponent.PERSISTENCE_ALWAYS) @TopComponent.Registration(mode = "explorer", openAtStartup = true, role="user") public final class UserTopComponent extends TopComponent { So, I have two windows. One is assigned to the "admin" role, the other to the "user" role. In the "ModuleInstall" class, I add a "WindowSystemListener" and set "user" as the application's role: public class Installer extends ModuleInstall implements WindowSystemListener { @Override public void restored() { WindowManager.getDefault().addWindowSystemListener(this); } @Override public void beforeLoad(WindowSystemEvent event) { WindowManager.getDefault().setRole("user"); WindowManager.getDefault().removeWindowSystemListener(this); } @Override public void afterLoad(WindowSystemEvent event) { } @Override public void beforeSave(WindowSystemEvent event) { } @Override public void afterSave(WindowSystemEvent event) { } } So, when the application starts, the "UserTopComponent" is shown, not the "AdminTopComponent". Next, I have two Actions, for switching between the two roles, as shown below: @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToAdminAction") @ActionRegistration(displayName = "#CTL_SwitchToAdminAction") @ActionReferences({ @ActionReference(path = "Menu/Window", position = 250) }) @Messages("CTL_SwitchToAdminAction=Switch To Admin") public final class SwitchToAdminAction extends AbstractAction { @Override public void actionPerformed(ActionEvent e) { WindowManager.getDefault().setRole("admin"); } @Override public boolean isEnabled() { return !WindowManager.getDefault().getRole().equals("admin"); } } @ActionID(category = "Window", id = "org.role.demo.ui.SwitchToUserAction") @ActionRegistration(displayName = "#CTL_SwitchToUserAction") @ActionReferences({ @ActionReference(path = "Menu/Window", position = 250) }) @Messages("CTL_SwitchToUserAction=Switch To User") public final class SwitchToUserAction extends AbstractAction { @Override public void actionPerformed(ActionEvent e) { WindowManager.getDefault().setRole("user"); } @Override public boolean isEnabled() { return !WindowManager.getDefault().getRole().equals("user"); } } When I select one of the above actions, the role changes, and the other window is shown. I could, of course, add a Login dialog to the "SwitchToAdminAction", so that authentication is required in order to switch to the "admin" role. Now, let's say I am now in the "user" role. So, the "UserTopComponent" shown above is now opened. I decide to also open another window, the Properties window, as below... ...and, when I am in the "admin" role, when the "AdminTopComponent" is open, I decide to also open the Output window, as below... 
Now, when I switch from one role to the other, the additional window/s I opened will also be opened, together with the explicit members of the currently selected role. And, the main window position and size are also persisted across roles. When I look in the "build" folder of my project in development, I see two different Windows2Local folders, one per role, automatically created by the fact that there is something to be persisted for a particular role, e.g., when a switch to a different role is done: And, with that, we now clearly have roles/profiles/perspectives in NetBeans Platform applications from NetBeans Platform 7.1 onwards.

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you’re in charge of your company or team’s Team Foundation Server.  Wish it was easier to manage, administer, extend?  Well, here are a few utilities that I highly recommend looking at. I’ve recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1.  This gave me cause to enumerate some of the utilities I like to have on hand. One of the reasons I love to use TFS on projects is that it’s basically a complete ALM toolkit.  Everything from Task Management, Version Control, Build Management, Test Management, Metrics and Reporting are all there ‘in the box’.  However, no matter how complete a product set it, there are always ways to make it better.  Here are a list of utilities and libraries that are pretty generally useful.  this is not intended to be an exhaustive list of TFS extensions but rather a set that I recommend you look at.  There are many more out there that may be applicable in one scenario or another.  This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex. General TFS Power Tools – This is ‘the’ collection of utilities and extensions delivered by the Product Group.  Highly recommended from here are the Best Practice Analyzer for ensuring your TFS implementation is healthy and the Team Foundation Server Backups to ensure your TFS databases are backed up correctly. TFS Administrators Toolkit – helps make updates to work item types and reports across many team projects.  Also provides visibility of disk usage by finding large files in version control or test attachments to assist in managing storage utilization. Version Control Git-TF - a set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository, and configure it to share changes with a TFS server.  Great for all Git lovers who must integrate into a TFS repository. Testing TFS 2012 Tester Power Tool – A utility for bulk copying test cases which assists in an approach for managing test cases across multiple releases.  A little plug that this utility was written and maintained by Anna Russo of Imaginet where I also work. Test Scribe - A documentation power tool designed to construct documents directly from the TFS for test plan and test run artifacts for the purpose of discussion, reporting etc. Reporting Community TFS Report Extensions - a single repository of SQL Server Reporting Services report for Team Foundation 2010 (and above).  Check out the Test Plan Status report by Imaginet’s Steve St. Jean.  Very valuable for your test managers. Builds TFS Build Manager – A great utility if you are build manager over a complex build environment with many TFS build definitions. Community TFS Build Extensions – contains many custom build activities.  Current release binaries are for TFS 2010 but many of the activities can be recompiled for use with TFS 2012. While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012 because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I’ve missed?  I’d love to hear about them in the comments!

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up and it's hard to put a real example with meaningful data together (concisely). In Domain Driven Design (DDD) when you start looking at Bounded Contexts and Domains/Sub Domains, it says that a Bounded Context is a "phase" in a lifecycle. An example of Context here would be within an ecommerce system. Although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application have their own Ubiquitous Language, their own Model, and a way to talk to other Bounded Contexts to obtain the information they need. The Core, Sub, and Generic Domains are the area of expertise and can be numerous in complex applications. Say there is a long process dealing with an Entity for example a Book in a core domain. Now looking at the Bounded Contexts there can be a number of phases in the books life-cycle. Say outline, creation, correction, publish, sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books. The store can have a number of Bounded Contexts (life-cycle phases) for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains e.g. Users, Catalogs, Inventory, .. (hard to think of relevant examples). For example a DomainModel for the Book Outline phase, the Creation phase, Correction phase, Publish phase, Sale phase. Then for the Store core domain it probably has a number of life-cycle phases. public class BookId : Entity { public long Id { get; set; } } In the creation phase (Bounded Context) the book could be a simple class. public class Book : BookId { public string Title { get; set; } public List<string> Chapters { get; set; } //... } Whereas in the publish phase (Bounded Context) it would have all the text, release date etc. public class Book : BookId { public DateTime ReleaseDate { get; set; } //... } The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic so there aren't mammoth all-encompassing Entities nor Domain Services. A problem I have is figuring out how to concretely define the rules to the physical layout of the Domain Model. A. Does the Domain Model get "modeled" so there are as many bounded contexts (separate projects etc.) as there are life-cycle phases across the core domains in a complex application? Edit: Answer to A. Yes, according to the answer by Alexey Zimarev there should be an entire "Domain" for each bounded context. B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)? Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Service/Entities/VO's/Repositories) C. Does it mean there can easily be 10's of "segregated" Domain Models and multiple projects can use it (the Entities/Value Objects)? Edit: Answer to C. There is a complete "Domain" for each Bounded Context and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events). The part that I am trying to figure out is how the Domain Model is actually implemented once you start to figure out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. 
The goal is to establish the definitions which can help to separate Entities between the Bounded Contexts and Domains.
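    Since the answers above point to Domain Events as the chosen path between Bounded Contexts, here is a small, purely illustrative C# sketch of that idea. All type names are invented for the example and are not from any specific DDD framework: the publishing context raises an event about its own Book, and the Store/Inventory context builds its own model from it, keyed only by the book id.

      using System;
      using System.Collections.Generic;

      // A small, stable message; typically the only thing both contexts share.
      public sealed class BookPublished
      {
          public long BookId { get; private set; }
          public DateTime ReleaseDate { get; private set; }
          public BookPublished(long bookId, DateTime releaseDate)
          {
              BookId = bookId;
              ReleaseDate = releaseDate;
          }
      }

      // A deliberately minimal in-process event dispatcher.
      public static class DomainEvents
      {
          private static readonly List<Action<BookPublished>> Handlers = new List<Action<BookPublished>>();
          public static void Subscribe(Action<BookPublished> handler) { Handlers.Add(handler); }
          public static void Raise(BookPublished e) { foreach (var h in Handlers) h(e); }
      }

      // Store/Inventory context: its own notion of a book, not the publishing context's Book entity.
      public sealed class StockItem
      {
          public long BookId { get; private set; }
          public int QuantityOnHand { get; private set; }
          public StockItem(long bookId) { BookId = bookId; QuantityOnHand = 0; }
      }

      // Wiring at startup: when a book is published, the inventory context creates its own record, e.g.
      // DomainEvents.Subscribe(e => inventory.Add(new StockItem(e.BookId)));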

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter. Within your BizTalk 2009 project, right click the project and select Add Generated Items.  In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click next. You must now set the connection string. To start with choose set, then new DSN (data source name). You now need to define the Data Source you will be connecting to.  On the User DSN tab select Add add then driver you want to use. In this case I am going to use the MySQL ODBC Driver.  A User DSN will only be visible on the current machine with you as a user. * Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a system DSN as the BizTalk host service needs this to be able connect to the database on a receive or send port. You will then be asked to Set up the MySQL ODBC Data Source.  In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this is place the connection would fail as there is no apparent way to specify the pipe name in the ODBC driver configuration. This will then update the User DSN tab with the new Data Source.  Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK.  On the ODBC transport window select next. You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure. *I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters.  (I will be posting on the way to handle input parameters soon) Next you will need to specify the name of the stored procedure.  In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL. Select * from TestTable; The table itself has three columns: Name, Sex and Married. Selecting finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:   An empty schema for the request; after all I have no parameters for the stored procedure.  A response schema comprised of a Table Record with Name, Sex and Married children. Next I will be looking at the use of the ODBC adpater with: Receive ports Send ports

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage so lets update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework setup the ZFS pool or we can do it manually before hand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can setup the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying an rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now lets create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example, realistically you should probably use an Oracle Key Manager or some other better keystorage, but that isn't the purpose of this example.  Note however that it does need to be one of file:// https:// pkcs11: and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually mater because it will get set properly on import anyway. So lets go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do, when the install is done boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool zone is stored on.  Due to how inheritance works in ZFS he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone) or he can create encrypted datasets inside the zone that use keys of his own choosing, the output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone nor does that path need to be (or is) accessible inside the zone). Whereas rpool/export/home/bob has set keysource locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local  

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and to add related posts to it. Since there are a lot of these out there, I link to others that have done much the same before me, kind of a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions in for others to see. As an example: if I hit a stacktrace I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people that find this blog can see multiple solutions for indexed stacktraces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.

    The steps:
    1. Set up the project in VS2012 (msdn blog).
    2. Set up the Azure services (half of the mpspartners.com blog).
    3. Set up connection strings and configuration files (msdn blog + notes).
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error.

    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx

    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/

    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx
    Trying to package our application at this step will generate the following warning:
      3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.
    Right-click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". At "specify storage account credentials" we will copy/paste our account name and key from the Azure management portal. [screenshot 3.1]

    4. Right-click Remote Desktop Configuration, select the certificate and export it to a file. We need to allow it in the portal manager. [screenshot 4.1]

    5. Now right-click the cloud project and select Package. [screenshot 5.1: dialogue box] [screenshot 5.2: package success]
    Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload. [screenshot 5.3]
    Tick the box about the single instance if that's what you want or if you don't know what it means. Otherwise the following will happen (see image 4.6). [screenshot 5.4: dialogue box]
    When you have clicked the accept button you will see the following screen with some green indicators down in the right corner. Click them if you want to see the status. [screenshot 5.5: information screen]
    [screenshot 5.6] "Failed to deploy application. The upload application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix this, go back to step 5.4.
    If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. Side note: the following thread suggests choosing "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package" to prevent it. But in my case it was the not-so-present certificates. I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd [screenshots 5.7, 5.8: success] [screenshot 5.9] Nice URL n' all. (More on that at another blog post.)

    6. If you try to log in and get an error at this point: when this error occurs, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the "ordinary" project in your solution. Go to the management portal and click update.

    Read the article

  • Problem with creating a deterministic finite automaton (DFA) - Mercury

    - by Jabba The hut
    I would like to have a deterministic finite automaton (DFA) simulated in Mercury, but I’m stuck at several places. Formally, a DFA is described by the following characteristics: a set of states S, an input alphabet E (standing in for the usual Σ), a transition function: S × E → S, a start state s ∈ S, and a set of acceptable final states F ⊆ S. A DFA always starts in the start state. Then the DFA reads all the characters on the input, one by one. Based on the current input character and the current state, a transition is made to a new state. These transitions are defined in the transition function. When the DFA is in one of its acceptable final states after reading the last character, the DFA accepts the input; if not, the input is rejected. The figure shows a DFA accepting the strings in which the number of zeros is a multiple of three. State 1 is the initial state, and also the only accepting state. For each input character the corresponding arc is followed to the next state. Link to Figure

    What must be done:
    1. A type "mystate" which represents a state. Each state has a number which is used for identification.
    2. A type "transition" that represents a possible transition between states. Each transition has a source_state, an input_character, and a final_state.
    3. A type "statemachine" that represents the entire DFA. In the solution, the DFA must have the following properties: the set of all states, the input alphabet, a transition function represented as a set of possible transitions, a set of accepting final states, and the current state of the DFA.
    4. A predicate "init_machine(statemachine :: out)" which unifies its argument with the DFA as shown in the figure. The current state of the DFA is set to its initial state, namely 1. The input alphabet of the DFA is composed of the characters '0' and '1'.
    5. A user can enter text, which will be checked by the DFA. The program continues until the user types Ctrl-D, which simulates an EOF. If the user enters characters that are not in the input alphabet of the DFA, an error message is shown and the program closes. (pred require)

    Example:
      Enter a sentence: 0110
      String is not ok!
      Enter a sentence: 011101
      String is not ok!
      Enter a sentence: 110100
      String is ok!
      Enter a sentence: 000110010
      String is ok!
      Enter a sentence: 011102
      Uncaught exception Mercury: Software Error: Character does not belong to the input alphabet!

    The thing that I have so far:

      :- module dfa.
      :- interface.
      :- import_module io.
      :- pred main(io.state::di, io.state::uo) is det.
      :- implementation.
      :- import_module int, string, list, bool.

      % 1
      :- type mystate ---> state(int).
      % 2
      :- type transition ---> trans(source_state::mystate, input_character::bool, final_state::mystate).
      % 3 (error: final_state, current_state and input_character)
      :- type statemachine ---> dfa(list(mystate), list(input_character), list(transition), list(final_state), current_state(mystate))
      % 4 (missing a lot)
      :- pred init_machine(statemachine :: out) is det.
      % init_machine(statemachine(L_Mystate, 0, L_transition, L_final_state, 1)) :-   <- probably wrong
      % 5 (not perfect)
      main(!IO) :-
          io.write_string("\nEnter a sentence: ", !IO),
          io.read_line_as_string(Input, !IO),
          (
              Input = ok(StringVar),
              S1 = string.strip(StringVar),
              ( if S1 = "mustbeabool" then
                  io.write_string("Sentence is Ok! ", !IO)
              else
                  io.write_string("Sentence is not Ok!.", !IO)
              ),
              main(!IO)
          ;
              Input = eof
          ;
              Input = error(ErrorCode),
              io.format("%s\n", [s(io.error_message(ErrorCode))], !IO)
          ).

    Hope you can help me. Kind regards.
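    For readers who just want the mechanics of the automaton itself, independent of Mercury, here is a minimal C# sketch of the DFA from the figure (accepting strings whose number of zeros is a multiple of three). The class and method names are mine, not part of the assignment; the transition table simply counts zeros modulo three.

      using System;
      using System.Collections.Generic;

      class ZeroCountDfa
      {
          // Transition table for the three-state DFA: reading '0' advances the state, '1' keeps it.
          static readonly Dictionary<(int, char), int> Transitions =
              new Dictionary<(int, char), int>
              {
                  { (1, '0'), 2 }, { (1, '1'), 1 },
                  { (2, '0'), 3 }, { (2, '1'), 2 },
                  { (3, '0'), 1 }, { (3, '1'), 3 },
              };

          const int StartState = 1;
          static readonly HashSet<int> AcceptingStates = new HashSet<int> { 1 };

          static bool Accepts(string input)
          {
              int current = StartState;
              foreach (char c in input)
              {
                  int next;
                  if (!Transitions.TryGetValue((current, c), out next))
                      throw new ArgumentException("Character does not belong to the input alphabet!");
                  current = next;
              }
              return AcceptingStates.Contains(current);
          }

          static void Main()
          {
              string line;
              Console.Write("Enter a sentence: ");
              while ((line = Console.ReadLine()) != null)   // Ctrl-D (or Ctrl-Z on Windows) ends the loop
              {
                  Console.WriteLine(Accepts(line) ? "String is ok!" : "String is not ok!");
                  Console.Write("Enter a sentence: ");
              }
          }
      }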

    Read the article

  • Managing common code on Windows 7 (.NET) and Windows 8 (WinRT)

    - by ryanabr
    Recent announcements regarding Windows Phone 8 and the fact that it will have the WinRT behind it might make some of this less painful but I  discovered the "XmlDocument" object is in a new location in WinRT and is almost the same as it's brother in .NET System.Xml.XmlDocument (.NET) Windows.Data.Xml.Dom.XmlDocument (WinRT) The problem I am trying to solve is how to work with both types in the code that performs the same task on both Windows Phone 7 and Windows 8 platforms. The first thing I did was define my own XmlNode and XmlNodeList classes that wrap the actual Microsoft objects so that by using the "#if" compiler directive either work with the WinRT version of the type, or the .NET version from the calling code easily. public class XmlNode     { #if WIN8         public Windows.Data.Xml.Dom.IXmlNode Node { get; set; }         public XmlNode(Windows.Data.Xml.Dom.IXmlNode xmlNode)         {             Node = xmlNode;         } #endif #if !WIN8 public System.Xml.XmlNode Node { get; set ; } public XmlNode(System.Xml.XmlNode xmlNode)         {             Node = xmlNode;         } #endif     } public class XmlNodeList     { #if WIN8         public Windows.Data.Xml.Dom.XmlNodeList List { get; set; }         public int Count {get {return (int)List.Count;}}         public XmlNodeList(Windows.Data.Xml.Dom.XmlNodeList list)         {             List = list;         } #endif #if !WIN8 public System.Xml.XmlNodeList List { get; set ; } public int Count { get { return List.Count;}} public XmlNodeList(System.Xml.XmlNodeList list)         {             List = list;        } #endif     } From there I can then use my XmlNode and XmlNodeList in the calling code with out having to clutter the code with all of the additional #if switches. The challenge after this was the code that worked directly with the XMLDocument object needed to be seperate on both platforms since the method for populating the XmlDocument object is completly different on both platforms. To solve this issue. I made partial classes, one partial class for .NET and one for WinRT. Both projects have Links to the Partial Class that contains the code that is the same for the majority of the class, and the partial class contains the code that is unique to the version of the XmlDocument. The files with the little arrow in the lower left corner denotes 'linked files' and are shared in multiple projects but only exist in one location in source control. You can see that the _Win7 partial class is included directly in the project since it include code that is only for the .NET platform, where as it's cousin the _Win8 (not pictured above) has all of the code specific to the _Win8 platform. In the _Win7 partial class is this code: public partial class WUndergroundViewModel     { public static WUndergroundData GetWeatherData( double lat, double lng)         { WUndergroundData data = new WUndergroundData();             System.Net. WebClient c = new System.Net. 
WebClient(); string req = "http://api.wunderground.com/api/xxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml" ;             req = req.Replace( "[LAT]" , lat.ToString());             req = req.Replace( "[LNG]" , lng.ToString()); XmlDocument doc = new XmlDocument();             doc.Load(c.OpenRead(req)); foreach (XmlNode item in doc.SelectNodes("/response/features/feature" ))             { switch (item.Node.InnerText)                 { case "yesterday" :                         ParseForecast( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/txt_forecast/forecastdays/forecastday" )), new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/simpleforecast/forecastdays/forecastday" )), data); break ; case "conditions" :                         ParseCurrent( new FishingControls.XmlNode (doc.SelectSingleNode("/response/current_observation" )), data); break ; case "forecast" :                         ParseYesterday( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/history/observations/observation" )),data); break ;                 }             } return data;         }     } in _win8 partial class is this code: public partial class WUndergroundViewModel     { public async static Task< WUndergroundData > GetWeatherData(double lat, double lng)         { WUndergroundData data = new WUndergroundData (); HttpClient c = new HttpClient (); string req = "http://api.wunderground.com/api/xxxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml" ;             req = req.Replace( "[LAT]" , lat.ToString());             req = req.Replace( "[LNG]" , lng.ToString()); HttpResponseMessage msg = await c.GetAsync(req); string stream = await msg.Content.ReadAsStringAsync(); XmlDocument doc = new XmlDocument ();             doc.LoadXml(stream, null); foreach ( IXmlNode item in doc.SelectNodes("/response/features/feature" ))             { switch (item.InnerText)                 { case "yesterday" :                         ParseForecast( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/txt_forecast/forecastdays/forecastday" )), new FishingControls.XmlNodeList (doc.SelectNodes( "/response/forecast/simpleforecast/forecastdays/forecastday" )), data); break; case "conditions" :                         ParseCurrent( new FishingControls.XmlNode (doc.SelectSingleNode("/response/current_observation" )), data); break; case "forecast" :                         ParseYesterday( new FishingControls.XmlNodeList (doc.SelectNodes( "/response/history/observations/observation")), data); break;                 }             } return data;         }     } Summary: This method allows me to have common 'business' code for both platforms that is pretty clean, and I manage the technology differences separately. Thank you tostringtheory for your suggestion, I was considering that approach.
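    As a companion to the node wrappers above, the same '#if' trick can cover the document-loading step itself, which is where the two platforms differ the most in the methods shown above (stream-based Load on .NET versus LoadXml on WinRT). The helper below is only a sketch of that idea, not code from the original project; it assumes the same WIN8 conditional compilation symbol, and the class name is made up.

      // Hypothetical loader: one method name for shared code, a different XmlDocument behind it per platform.
      public static class PortableXml
      {
      #if WIN8
          public static Windows.Data.Xml.Dom.XmlDocument Load(string xml)
          {
              var doc = new Windows.Data.Xml.Dom.XmlDocument();
              doc.LoadXml(xml);   // WinRT XmlDocument parses from a string
              return doc;
          }
      #else
          public static System.Xml.XmlDocument Load(string xml)
          {
              var doc = new System.Xml.XmlDocument();
              doc.LoadXml(xml);   // System.Xml XmlDocument has the same-named method
              return doc;
          }
      #endif
      }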

    Read the article

  • T-SQL (SCD) Slowly Changing Dimension Type 2 using a merge statement

    - by AtulThakor
    Working on a stored procedure recently which loads records into a data warehouse, I found that the existing record was being expired using an update statement followed by an insert to add the new active record. Playing around with the merge statement, you can actually expire the current record and insert a new record within one clean statement. This is how the statement works: we do the normal merge statement to insert a record when there is no match; if we match the record, we update the existing record by expiring and deactivating it. At the end of the merge statement we use the output clause to output the staging values for the update, and we wrap the whole merge statement within an insert statement to add new active rows for the records which we just expired. I’ve added the full script at the bottom so you can paste it and play around.

      INSERT INTO ExampleFactUpdate
          (PolicyID,
          Status)
      SELECT -- these columns are returned from the output statement
          PolicyID,
          Status
      FROM
      (
          -- merge statement on unique id in this case Policy_ID
          MERGE dbo.ExampleFactUpdate dp
          USING dbo.ExampleStag s
              ON dp.PolicyID = s.PolicyID
          WHEN NOT MATCHED THEN -- when we can't match the record we insert a new record and this is all that happens
              INSERT (PolicyID, Status)
              VALUES (s.PolicyID, s.Status)
          WHEN MATCHED -- if it already exists
              AND ExpiryDate IS NULL -- and the Expiry Date is null
          THEN
              UPDATE
              SET
                  dp.ExpiryDate = getdate(), -- we set the expiry on the existing record
                  dp.Active = 0 -- and deactivate the existing record
          OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the output statement returns a merge action which can
      ) MergeOutput -- be insert/update/delete; in our example a record has been updated (or expired in our case)
      WHERE -- we'll filter using a where clause
          MergeAction = 'Update'; -- here

    Complete source for the example:

      if OBJECT_ID('ExampleFactUpdate') > 0
          drop table ExampleFactUpdate
      go

      Create Table ExampleFactUpdate(
          ID int identity(1,1),
          PolicyID varchar(100),
          Status varchar(100),
          EffectiveDate datetime default getdate(),
          ExpiryDate datetime,
          Active bit default 1
      )

      insert into ExampleFactUpdate(
          PolicyID,
          Status)
      select
          1,
          'Live'

      /* Create staging table */
      if OBJECT_ID('ExampleStag') > 0
          drop table ExampleStag
      go

      /* Create example staging table */
      Create Table ExampleStag(
          PolicyID varchar(100),
          Status varchar(100))

      -- add some data
      insert into ExampleStag(
          PolicyID,
          Status)
      select
          1,
          'Lapsed'
      union all
      select
          2,
          'Quote'

      select *
      from ExampleFactUpdate

      select *
      from ExampleStag

      INSERT INTO ExampleFactUpdate
          (PolicyID,
          Status)
      SELECT -- these columns are returned from the output statement
          PolicyID,
          Status
      FROM
      (
          -- merge statement on unique id in this case Policy_ID
          MERGE dbo.ExampleFactUpdate dp
          USING dbo.ExampleStag s
              ON dp.PolicyID = s.PolicyID
          WHEN NOT MATCHED THEN -- when we can't match the record we insert a new record and this is all that happens
              INSERT (PolicyID, Status)
              VALUES (s.PolicyID, s.Status)
          WHEN MATCHED -- if it already exists
              AND ExpiryDate IS NULL -- and the Expiry Date is null
          THEN
              UPDATE
              SET
                  dp.ExpiryDate = getdate(), -- we set the expiry on the existing record
                  dp.Active = 0 -- and deactivate the existing record
          OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the output statement returns a merge action which can
      ) MergeOutput -- be insert/update/delete; in our example a record has been updated (or expired in our case)
      WHERE -- we'll filter using a where clause
          MergeAction = 'Update'; -- here

      select *
      from ExampleFactUpdate

    Read the article

  • Workarounds for supporting MVVM in the Silverlight ContextMenu service

    - by cibrax
    As I discussed in my last post, some of the Silverlight controls does not support MVVM quite well out of the box without specific customizations. The Context Menu is another control that requires customizations for enabling data binding on the menu options. There are a few things that you might want to expose as view model for a menu item, such as the Text, the associated icon or the command that needs to be executed. That view model should look like this, public class MenuItemModel { public string Name { get; set; } public ICommand Command { get; set; } public Image Icon { get; set; } public object CommandParameter { get; set; } } This is how you can modify the built-in control to support data binding on the model above, public class CustomContextMenu : ContextMenu { protected override DependencyObject GetContainerForItemOverride() { CustomMenuItem item = new CustomMenuItem(); Binding commandBinding = new Binding("Command"); item.SetBinding(CustomMenuItem.CommandProperty, commandBinding);   Binding commandParameter = new Binding("CommandParameter"); item.SetBinding(CustomMenuItem.CommandParameterProperty, commandParameter);   return item; } }   public class CustomMenuItem : MenuItem { protected override DependencyObject GetContainerForItemOverride() { CustomMenuItem item = new CustomMenuItem();   Binding commandBinding = new Binding("Command"); item.SetBinding(CustomMenuItem.CommandProperty, commandBinding);   return item; } } The change is very similar to the one I made in the TreeView for manually data binding some of the Menu item properties to the model. Once you applied that change in the control, you can define it in your XAML like this. <toolkit:ContextMenuService.ContextMenu> <e:CustomContextMenu ItemsSource="{Binding MenuItems}"> <e:CustomContextMenu.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal" > <ContentPresenter Margin="0 0 4 0" Content="{Binding Icon}" /> <TextBlock Margin="0" Text="{Binding Name, Mode=OneWay}" FontSize="12"/> </StackPanel> </DataTemplate> </e:CustomContextMenu.ItemTemplate> </e:CustomContextMenu> </toolkit:ContextMenuService.ContextMenu> The property MenuItems associated to the “ItemsSource” in the parent model just returns a list of supported options (menu items) in the context menu. this.menuItems = new MenuItemModel[] { new MenuItemModel { Name = "My Command", Command = new RelayCommand(OnCommandClick), Icon = ImageLoader.GetIcon("command.png") } }; The only problem I found so far with this approach is that the context menu service does not support a HierarchicalDataTemplate in case you want to have an hierarchy in the context menu (MenuItem –> Sub menu items), but I guess we can live without that.
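    The MenuItemModel above exposes an ICommand, and the sample wires it up with new RelayCommand(OnCommandClick). RelayCommand is the usual MVVM helper rather than a built-in Silverlight type, so in case your project doesn't already carry one, a minimal version might look like the sketch below; the exact constructor signature used in the original post may differ, and here the command simply receives the CommandParameter as an object.

      using System;
      using System.Windows.Input;

      public class RelayCommand : ICommand
      {
          private readonly Action<object> execute;
          private readonly Func<object, bool> canExecute;

          public RelayCommand(Action<object> execute, Func<object, bool> canExecute = null)
          {
              if (execute == null) throw new ArgumentNullException("execute");
              this.execute = execute;
              this.canExecute = canExecute;
          }

          public event EventHandler CanExecuteChanged;

          public bool CanExecute(object parameter)
          {
              return canExecute == null || canExecute(parameter);
          }

          public void Execute(object parameter)
          {
              execute(parameter);
          }

          // Call this when whatever canExecute depends on has changed.
          public void RaiseCanExecuteChanged()
          {
              var handler = CanExecuteChanged;
              if (handler != null) handler(this, EventArgs.Empty);
          }
      }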

    Read the article

  • Passing data between the VirtualBox Host and the Guest

    - by Fat Bloke
    Here's a good question: "How can you figure out the VM name from within the VM itself?" While this data is not automatically available, the general purpose, and very powerful VirtualBox "GuestProperty" APIs can be used from the host and guest to pass arbitrary data, in key/value pairs format, in and out of the guest. Note that this does require that the VirtualBox Guest Additions have been installed in the guest. To play with this, try using the "VBoxManage" command line on your VirtualBox host machine, and "VBoxControl" in the guest. Host syntax VBoxManage guestproperty get <vmname>|<uuid> <property> [--verbose] VBoxManage guestproperty set <vmname>|<uuid> <property> [<value> [--flags <flags>]] VBoxManage guestproperty enumerate <vmname>|<uuid> [--patterns <patterns>] VBoxManage guestproperty wait <vmname>|<uuid> <patterns> [--timeout <msec>] [--fail-on-timeout]   Guest syntax VBoxControl.exe guestproperty        get <property> [-verbose] VBoxControl.exe guestproperty        set <property> [<value> [-flags <flags>]] VBoxControl.exe guestproperty        enumerate [-patterns <patterns>] VBoxControl.exe guestproperty        wait <patterns>                                      [-timestamp <last timestamp>]                                      [-timeout <timeout in ms>  So to solve our problem above, we set the vm name in the Host system on an arbitrary key like this: $ VBoxManage guestproperty set "Windows 7 (x64)" /MyData/VMname "Windows 7 (x64)" And within the guest we can use: C:\Program Files\Oracle\VirtualBox Guest Additions>VBoxControl.exe guestproperty get /MyData/VMname Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 4.1.14 (C) 2008-2012 Oracle Corporation All rights reserved. Value: Windows 7 (x64) The GuestProperty API is pretty powerful, so for the interested, get more info in the User Manual. - FB 

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M   # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin   # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... # port = ..... # socket = ..... # server_id = .....   # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M   sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start: # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL.   On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file. The only initially active setting here is to change the value of  sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years and it is good to see some development platforms are using it. If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default. We've kept this file as small as possible because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or if you prefer by adding comments to this bug report.

    Read the article

  • Passing a parameter using RelayCommand defined in the ViewModel (from Josh Smith example)

    - by eesh
    I would like to pass a parameter defined in the XAML (View) of my application to the ViewModel class by using the RelayCommand. I followed Josh Smith's excellent article on MVVM and have implemented the following. XAML Code <Button Command="{Binding Path=ACommandWithAParameter}" CommandParameter="Orange" HorizontalAlignment="Left" Style="{DynamicResource SimpleButton}" VerticalAlignment="Top" Content="Button"/> ViewModel Code public RelayCommand _aCommandWithAParameter; /// <summary> /// Returns a command with a parameter /// </summary> public RelayCommand ACommandWithAParameter { get { if (_aCommandWithAParameter == null) { _aCommandWithAParameter = new RelayCommand( param => this.CommandWithAParameter("Apple") ); } return _aCommandWithAParameter; } } public void CommandWithAParameter(String aParameter) { String theParameter = aParameter; } #endregion I set a breakpoint in the CommandWithAParameter method and observed that aParameter was set to "Apple", and not "Orange". This seems obvious, as the method CommandWithAParameter is being called with the literal String "Apple". However, looking up the execution stack, I can see that "Orange", the CommandParameter I set in the XAML, is the parameter value for the RelayCommand implementation of the ICommand Execute interface method. That is, the value of parameter in the method below on the execution stack is "Orange": public void Execute(object parameter) { _execute(parameter); } What I am trying to figure out is how to create the RelayCommand ACommandWithAParameter property such that it can call the CommandWithAParameter method with the CommandParameter "Orange" defined in the XAML. Is there a way to do this? Why do I want to do this? It is part of on-the-fly localization. In my particular implementation I want to create a SetLanguage RelayCommand that can be bound to multiple buttons. I would like to pass the two-character language identifier ("en", "es", "ja", etc) as the CommandParameter and have that be defined for each "set language" button defined in the XAML. I want to avoid having to create a SetLanguageToXXX command for each supported language and hard-coding the two-character language identifier into each RelayCommand in the ViewModel.
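    A hedged sketch of the usual fix, assuming the RelayCommand from Josh Smith's article (whose constructor takes an Action<object>): forward the lambda's parameter, which is the CommandParameter coming from the XAML, instead of a hard-coded literal. The same single command can then serve every "set language" button, each supplying its own language code as CommandParameter.

        public RelayCommand ACommandWithAParameter
        {
            get
            {
                if (_aCommandWithAParameter == null)
                {
                    // "param" is whatever CommandParameter the XAML supplied
                    // ("Orange" here, or "en" / "es" / "ja" on the language buttons).
                    _aCommandWithAParameter = new RelayCommand(
                        param => this.CommandWithAParameter((String)param));
                }
                return _aCommandWithAParameter;
            }
        }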

    Read the article

  • Android ADT Eclipse plugin, parseSDKContent failed

    - by Sebastian Ganslandt
    I've just set up my first Android development environment consisting of Eclipse 3.5, Mac OS X 10.5, the Android SDK for x86 Macs and the ADT Eclipse plugin 0.9.6. I've set $PATH to my SDK/tools directory (which shouldn't matter if I only use Eclipse, right?) and started Eclipse, but when I try to set the path to the SDK in Eclipse, I get the error "parseSdkContent failed". The stack trace from the thrown exception is java.lang.IllegalArgumentException: http://www.w3.org/2001/XMLSchema at javax.xml.validation.SchemaFactory.newInstance(SchemaFactory.java:181) at com.android.ide.eclipse.adt.internal.sdk.LayoutDevicesXsd.getValidator(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.LayoutDeviceManager.parseLayoutDevices(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.LayoutDeviceManager.loadDefaultLayoutDevices(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.LayoutDeviceManager.loadDefaultAndUserDevices(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.Sdk.<init>(Unknown Source) at com.android.ide.eclipse.adt.internal.sdk.Sdk.loadSdk(Unknown Source) at com.android.ide.eclipse.adt.AdtPlugin$13.run(Unknown Source) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) I can't see that I've missed anything in the setup process; according to the instructions it should basically just work out of the box. Any ideas as to why this might fail?

    Read the article

  • XmlSerializer one Class that has multiple classes properties

    - by pee2002
    Hi there! Objective: Serialize all the public properties of ClassMain: public class ClassMain { ClassA classA = new ClassA(); public ClassMain() { classA.Prop1 = "Prop1"; classA.Prop2 = "Prop2"; } public string Prop1 { get; set; } public string Prop2 { get; set; } public ClassA ClassA { get { return classA; } } } ClassA: public class ClassA { public string Prop1 { get; set; } public string Prop2 { get; set; } } This is how I'm serializing: private void button1_Click(object sender, EventArgs e) { ClassMain classMain = new ClassMain(); classMain.Prop1 = "Prop1"; classMain.Prop2 = "Prop2"; XmlSerializer mySerializer = new XmlSerializer(typeof(ClassMain)); StreamWriter myWriter = new StreamWriter("xml1.xml"); mySerializer.Serialize(myWriter, classMain); myWriter.Close(); } In this case, the xml output is: <?xml version="1.0" encoding="utf-8"?> <ClassMain xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Prop1>Prop1</Prop1> <Prop2>Prop2</Prop2> </ClassMain> As you can see, it is lacking the ClassA properties. Can anyone help me out? Regards
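    A hedged sketch of one likely fix: XmlSerializer only serializes public properties that have both a public getter and a public setter (read-only properties are skipped unless they are collections), so giving ClassMain a settable ClassA property should make the nested values appear in the output. This assumes you are free to change ClassMain's shape:

        public class ClassMain
        {
            public ClassMain()
            {
                ClassA = new ClassA { Prop1 = "Prop1", Prop2 = "Prop2" };
            }

            public string Prop1 { get; set; }
            public string Prop2 { get; set; }

            // A public setter (even one only the serializer uses) lets XmlSerializer
            // write and read the nested object instead of silently skipping it.
            public ClassA ClassA { get; set; }
        }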

    Read the article

  • ObservableCollection is not updating Multibinding in C# WPF

    - by Decept
    I have a TreeView that creates all its items from databound ObservableCollections. I have a hierarchy of GameNode objects, and each object has two ObservableCollections: one holds EntityAttrib objects and the other holds GameNode objects. You could say that the GameNode object represents folders and EntityAttrib represents files. To display both attribs and GameNodes in the same TreeView I use MultiBinding. This all works fine at startup, but when I add a new GameNode somewhere in the hierarchy the TreeView is not updated. I set a breakpoint in my converter method but it's not called when adding a new GameNode. It seems that the ObservableCollection is not notifying the MultiBinding of the change. If I comment out the MultiBinding and only bind the GameNode collection it works as expected. XAML: <HierarchicalDataTemplate DataType="{x:Type local:GameNode}"> <HierarchicalDataTemplate.ItemsSource> <MultiBinding Converter="{StaticResource combineConverter}"> <Binding Path="Attributes" /> <Binding Path="ChildNodes" /> </MultiBinding> </HierarchicalDataTemplate.ItemsSource> <TextBlock Text="{Binding Path=Name}" ContextMenu="{StaticResource EntityCtxMenu}"/> </HierarchicalDataTemplate> C#: public class GameNode { string mName; public string Name { get { return mName; } set { mName = value; } } GameNodeList mChildNodes = new GameNodeList(); public GameNodeList ChildNodes { get { return mChildNodes; } set { mChildNodes = value; } } ObservableCollection<EntityAttrib> mAttributes = new ObservableCollection<EntityAttrib>(); public ObservableCollection<EntityAttrib> Attributes { get { return mAttributes; } set { mAttributes = value; } } } GameNodeList is a subclassed ObservableCollection
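    One likely explanation is that the converter returns a one-off snapshot, so nothing listens to the source collections afterwards; the MultiBinding only re-fires when the Attributes or ChildNodes properties themselves are replaced, not when items are added to them. Below is a hedged sketch of a converter that stays live (the class name is assumed to match the combineConverter resource key in the XAML above): it wraps the original collections in a CompositeCollection, which relays their CollectionChanged notifications to the TreeView.

        using System;
        using System.Collections;
        using System.Globalization;
        using System.Windows.Data;

        public class CombineConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
            {
                // Wrap, don't copy: the CompositeCollection keeps listening to each
                // underlying ObservableCollection and forwards adds/removes to the TreeView.
                CompositeCollection composite = new CompositeCollection();
                foreach (object value in values)
                {
                    IEnumerable collection = value as IEnumerable;
                    if (collection != null)
                    {
                        composite.Add(new CollectionContainer { Collection = collection });
                    }
                }
                return composite;
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }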

    Read the article

  • Cannot get Correct month for a call from call log history

    - by Nishant Kumar
    I am trying to extract information from the Android call log. The call date I am getting is one month earlier than the actual time of the call; that is, the date my code extracts is one month behind the actual call date. I have the following in the Emulator: I saved a contact. Then I made a call to the contact. Code: I have three ways of extracting the call date information but get the same wrong result. My code is as follows: /* Make the query to call log content */ Cursor callLogResult = context.getContentResolver().query( CallLog.Calls.CONTENT_URI, null, null, null, null); int columnIndex = callLogResult.getColumnIndex(Calls.DATE); Long timeInResult = callLogResult.getLong(columnIndex); /* Method 1 to change the milliseconds obtained to the readable date format */ Time time = new Time(); time.toMillis(true); time.set(timeInResult); String callDate= time.monthDay+"-"+time.month+"-"+time.year; /* Method 2 for extracting the date from the value read from the column */ Calendar calendar = Calendar.getInstance(); calendar.setTimeInMillis(time); String Month = calendar.get(Calendar.MONTH) ; /* Method 3 for extracting date from the result obtained */ Date date = new Date(timeInResult); String mont = date.getMonth() While using the Calendar method, I also tried to set the daylight saving offset but it did not work: calendar.setTimeZone(TimeZone.getTimeZone("Europe/Paris")); int DST_OFFSET = calendar.get( Calendar.DST_OFFSET ); // DST_OFFSET Boolean isSet = calendar.getTimeZone().useDaylightTime(); if(isSet) calendar.set(Calendar.DST_OFFSET , 0); int reCheck = calendar.get(Calendar.DST_OFFSET ); But the value is not set to 0 in reCheck. I am getting the wrong month value by using this also. Please can someone tell me where I am wrong, or is this an error in the emulator? Thanks, Nishant Kumar Engineering Student

    Read the article

  • CSS window height problem with dynamic loaded css

    - by Michael Mao
    Hi all: Please go here and use username "admin" and password "endlesscomic" (without the wrapping quotes) to see the web app demo. Basically what I am trying to do is to incrementally integrate my work into this web app, say, nightly, so the client can check the progress. Also, he would like to see, at the very beginning, a mockup of the page layout. I am trying to use the 960 grid system to achieve this. So far, so good. Except for one issue: when the "mockup.css" is loaded dynamically by jQuery, it "extends" the window to the bottom, something I do not wanna have... As an inexperienced web developer, I don't know which part is wrong. Below is my js: /* master.js */ $(document).ready(function() { $('#addDebugCss').click(function() { alertMessage('adding debug css...'); addCssToHead('./css/debug.css'); $('.grid-insider').css('opacity','0.5');//reset mockup background transparency }); $('#addMockupCss').click(function() { alertMessage('adding mockup css...'); addCssToHead('./css/mockup.css'); $('.grid-insider').css('opacity','1');//set semi-background transparency for mockup }); $('#resetCss').click(function() { alertMessage('rolling back to normal'); rollbackCss(new Array("./css/mockup.css", "./css/debug.css")); }); }); function alertMessage(msg) //TODO find a better modal prompt { alert(msg); } function addCssToHead(path_to_css) { $('<link rel="stylesheet" type="text/css" href="' + path_to_css + '" />').appendTo("head"); } function rollbackCss(set) { for(var i in set) { $('link[href="'+ set[i]+ '"]').remove(); } } Should something be added to the external mockup.css? Or is there something to change in my master.js? Thanks for any hints/suggestions in advance.

    Read the article

  • Why doesn't TransactionScope work with Entity Framework?

    - by NotDan
    See the code below. If I initialize more than one entity context, then I get the following exception on the second set of code only. If I comment out the second set it works. {"The underlying provider failed on Open."} Inner: {"Communication with the underlying transaction manager has failed."} Inner: {"Error HRESULT E_FAIL has been returned from a call to a COM component."} Note that this is a sample app and I know it doesn't make sense to create 2 contexts in a row. However, the production code does have reason to create multiple contexts in the same TransactionScope, and this cannot be changed. Edit: Here is a previous question of mine about trying to set up MS-DTC. It seems to be enabled on both the server and the client. I'm not sure if it is set up correctly. Also note that one of the reasons I am trying to do this is that existing code within the TransactionScope uses ADO.NET and Linq 2 Sql... I would like those to use the same transaction also. (That probably sounds crazy, but I need to make it work if possible). http://stackoverflow.com/questions/794364/how-do-i-use-transactionscope-in-c Solution: Windows Firewall was blocking the connections to MS-DTC. using(TransactionScope ts = new System.Transactions.TransactionScope()) { using (DatabaseEntityModel o = new DatabaseEntityModel()) { var v = (from s in o.Advertiser select s).First(); v.AcceptableLength = 1; o.SaveChanges(); } //-> By commenting out this section, it works using (DatabaseEntityModel o = new DatabaseEntityModel()) { //Exception on this next line var v = (from s1 in o.Advertiser select s1).First(); v.AcceptableLength = 1; o.SaveChanges(); } //-> ts.Complete(); }
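    Besides allowing MS-DTC through the firewall (the accepted fix above), a hedged sketch of an alternative is to avoid promotion to a distributed transaction altogether by giving both contexts the same open EntityConnection, so the whole scope runs on a single connection. This assumes the generated context has the usual constructor overload that accepts an EntityConnection, and the "name=DatabaseEntityModel" connection string name is a placeholder for whatever your .config actually defines:

        // Requires using System.Data.EntityClient and System.Transactions.
        using (EntityConnection connection = new EntityConnection("name=DatabaseEntityModel"))
        using (TransactionScope ts = new TransactionScope())
        {
            // Opened inside the scope so it enlists in the ambient transaction once.
            connection.Open();

            using (DatabaseEntityModel o = new DatabaseEntityModel(connection))
            {
                var v = (from s in o.Advertiser select s).First();
                v.AcceptableLength = 1;
                o.SaveChanges();
            }

            using (DatabaseEntityModel o = new DatabaseEntityModel(connection))
            {
                var v = (from s1 in o.Advertiser select s1).First();
                v.AcceptableLength = 1;
                o.SaveChanges();
            }

            ts.Complete();
        }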

    Read the article

  • asp.net ajax control toolkit combobox displays incorectly when in fieldset with style of position:re

    - by Jon P
    I currently have an instance of the ASP.NET AJAX Control Toolkit combo box residing in a fieldset with a style of position:relative applied. The control also sits in a very plain table (please no comments about using tables for layout, I know it is evil and try to avoid it). There are two problems with the display of the list: The list does not sit flush with the text box. In IE 7 (which is the majority of my target audience, an intranet where IE7 is the company standard) the list displays about 10px below the fieldset, which is what the bottom margin of the fieldset is set to. In FF 2.0 the list sits significantly lower and offset to the right. Below the fieldset there is more content in a div, also with a style of position:relative applied. The list from the combo box displays behind the content of this div, which is obviously an issue. Removing position:relative from the fieldset resolves the display issue of the combo box, but results in other unwanted display side effects. My interim workaround is to specifically restyle this fieldset without the position:relative style, but I'm hoping for a better solution. Thanks

    Read the article

  • Biztalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

    - by FullOfQuestions
    Hi, We are planning a BizTalk 2009 setup in which we have 2 BizTalk application servers and 2 DB servers (the DB servers being in an Active/Passive cluster). All servers are running Windows Server 2008 R2. As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have a requirement for high-availability and load-balancing. Let's say I create two different BizTalk hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one on each of my two physical servers). After reviewing the BizTalk documentation, this is what I know so far: For FILE (Receive), high-availability and load-balancing will be achieved by BizTalk automatically because I set up a host instance on each of the two servers in the group. MSMQ (Receive) requires BizTalk Host Clustering to ensure high-availability (Host Clustering, however, requires Windows Failover Clustering to be set up as well). No load-balancing option is clear here. SOAP (Receive) requires NLB to achieve load-balancing and high-availability (if one server goes down, NLB will direct traffic to the other). This is where I'm completely puzzled and I desperately need your help: Is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers? If yes, then please tell me how. If no, then please explain to me how anyone is achieving high-availability and load-balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive! Your help is greatly appreciated, M

    Read the article

< Previous Page | 387 388 389 390 391 392 393 394 395 396 397 398  | Next Page >