Search Results

Search found 75233 results on 3010 pages for 'data distribution service'.


  • Application throws NotSerializableException when run on a JBoss cluster

    - by Kalpana
    Environment: JBoss 5.1.0, JBoss Seam 2.2.0. While trying to get my application running in a clustered environment, I am getting the following exception after login. Post-login we try to store the currentUser in the JBoss Seam session context. How do I resolve this?

    java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156)
        at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
        at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
        at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
        at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
        at java.util.ArrayList.writeObject(ArrayList.java:570)
        ... [reflective writeObject frames through java.util.HashMap elided] ...
        at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.serialize(SimpleCachableMarshalledValue.java:271)
        at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.writeExternal(SimpleCachableMarshalledValue.java:252)
        at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)
        at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1390)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
        at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
        at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:460)
        at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
        at org.jboss.cache.marshall.CacheMarshaller200.marshallMap(CacheMarshaller200.java:569)
        ... [further CacheMarshaller200/CacheMarshaller300 marshalling frames elided] ...
        at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(CacheMarshaller200.java:191)
        at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(CacheMarshaller200.java:136)
        at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:182)
        at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:52)
        at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:369)
        at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:341)
        at org.jboss.cache.util.concurrent.WithinThreadExecutor.submit(WithinThreadExecutor.java:82)
        at org.jboss.cache.marshall.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:206)
        at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:748)
        ... [JBoss Cache replication/interceptor chain frames elided] ...
        at org.jboss.cache.interceptors.InterceptorChain.invoke(InterceptorChain.java:287)
        at org.jboss.cache.invocation.CacheInvocationDelegate.invokePut(CacheInvocationDelegate.java:705)
        at org.jboss.cache.invocation.CacheInvocationDelegate.put(CacheInvocationDelegate.java:519)
        at org.jboss.ha.cachemanager.CacheManagerManagedCache.put(CacheManagerManagedCache.java:277)
        at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.JBossCacheWrapper.put(JBossCacheWrapper.java:148)
        at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.AbstractJBossCacheService.storeSessionData(AbstractJBossCacheService.java:405)
        at org.jboss.web.tomcat.service.session.ClusteredSession.processSessionReplication(ClusteredSession.java:1166)
        at org.jboss.web.tomcat.service.session.JBossCacheManager.processSessionRepl(JBossCacheManager.java:1937)
        at org.jboss.web.tomcat.service.session.JBossCacheManager.storeSession(JBossCacheManager.java:309)
        at org.jboss.web.tomcat.service.session.InstantSnapshotManager.snapshot(InstantSnapshotManager.java:51)
        at org.jboss.web.tomcat.service.session.ClusteredSessionValve.handleRequest(ClusteredSessionValve.java:147)
        at org.jboss.web.tomcat.service.session.ClusteredSessionValve.invoke(ClusteredSessionValve.java:94)
        ... [remaining Tomcat valve/connector frames elided] ...
        at java.lang.Thread.run(Thread.java:619)

    16:38:35,789 ERROR [CommandAwareRpcDispatcher] java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty
    16:38:35,789 WARN  [/a12] Failed to replicate session YwBL69cG-zdm0m5CvzNj3Q__
    java.lang.RuntimeException: Failure to marshal argument(s)
        at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:374)
        ... [the remainder of this trace is identical to the replication stack above] ...


  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    ·        With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    ·        The subscription for Windows Azure is charged like any other Azure service: for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    ·        Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    ·        Blob storage is used to hold the input and output files of each job. I can see how parametric sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    ·        This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administer it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!
    Process
    ·        Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    ·        A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    ·        Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    Requirements
    ·        Assumes you have an Azure subscription account and the HPC head node installed and configured.
    ·        Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    ·        Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload a .cer file to the Windows Azure Portal (Account page > Manage my API Certificates link). See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    ·        Import the certificate, with an associated private key, on the HPC cluster head node - into the trusted root store of the local computer account.
    Obtain Windows Azure Connection Information for HPC Server
    ·        Required for each worker node template.
    ·        Copy it from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
    ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.
    Import the Azure Subscription Certificate on the HPC Head Node
    ·        This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    ·        Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password.
    ·        See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    ·        To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    ·        To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.
    Configure a Proxy Client on the HPC Head Node
    ·        The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.
    Create a Windows Azure Worker Node Template
    ·        Edit HPC node templates in HPC Node Template Editor.
    ·        Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) a worker node availability policy - rules for deploying/removing worker role instances in Windows Azure.
    o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor)
    o   Choose Node Template Type page - Windows Azure worker node template
    o   Specify Template Name page - template name & description
    o   Provide Connection Information page - Azure Subscription ID (text) & subscription certificate (browse)
    o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure - possible LRT).
    o   Configure Azure Availability Policy page - how Windows Azure worker nodes start/stop (bring the worker role instances online/offline - add/remove) - manual or automatic
    o   For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start/stop.
    ·        To validate the Windows Azure connection information: on the template's Connection Information tab > Validate connection information.
    ·        You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    Add Azure Worker Nodes to the HPC Cluster
    ·        Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    ·        To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    ·        To add Windows Azure worker nodes:
    o   In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard)
    o   Select Deployment Method page - Add Azure Worker nodes
    o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes
    ·        After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.
    Deploying Windows Azure Worker Nodes
    ·        To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
    ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    ·        Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    ·        To start worker nodes manually and bring them online:
    o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes.
    o   Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
    o   The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details Pane > Provisioning Log tab.
    o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
    o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    ·        Troubleshooting:
    o   Check the node template.
    o   Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 (a scripted equivalent is sketched at the end of this entry).
    o   Check node status - deployment status information appears in the service account information in the Windows Azure Portal; HPC queries this. See the node status information for any failed nodes in HPC Node Management.
    ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    ·        To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
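    If telnet is not available on the head node, the same reachability check from the troubleshooting list can be scripted. A minimal C# sketch; the host name placeholder follows the article's own example:

        using System;
        using System.Net.Sockets;

        class PortCheck
        {
            static void Main()
            {
                // Same check as "telnet <ServiceName>.cloudapp.net 7999":
                // can we open an outbound TCP connection to the HPC proxy port?
                try
                {
                    using (var client = new TcpClient("<ServiceName>.cloudapp.net", 7999))
                    {
                        Console.WriteLine("Port 7999 is reachable.");
                    }
                }
                catch (SocketException ex)
                {
                    Console.WriteLine("Connection failed: " + ex.Message);
                }
            }
        }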


  • SqlBuildTask failed due to ArgumentNullException(searchingPaths)

    At one of my customers, they have set up TFS 2010. They are using the UpgradeTemplate.xaml to build all their solutions, including GDR2 database projects. When building the project, I got the following error message:
    DspBuild: Creating a model to represent the project...
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: The "SqlBuildTask" task failed unexpectedly. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: System.ArgumentNullException: Value cannot be null. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: Parameter name: searchingPaths [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionAssemblyResolver..ctor(List`1 searchingPaths) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionTypeLoader.LoadTypes() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionManager..ctor(String databaseSchemaProviderType) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.LoadImpl(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.Load(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.DBBuildTask.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
    Solution: To solve this error, set the MSBuild Platform in the Build Definition to X86.


  • Should the networking of my game be a component or a service?

    - by aalcutt
    I am working on a Windows game and I am trying to understand the XNA GameComponent class and game services, and how to use them. From what I understand, a component has an Update method that gets called every frame, and a service can be referenced from other components if needed. So the way I think a network component would work is that in its Update method it would receive and send data. It probably makes sense to receive the network data once per frame, but it doesn't for sending it. Shouldn't the game send its own updates to others the moment it has them, to cut down on lag?
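    One way to get both behaviors is to make the networking object a GameComponent (so receives are polled once per frame in Update) that also registers itself as a game service exposing an immediate Send method. A minimal sketch; INetworkService and the byte[] payload are hypothetical, not part of XNA:

        using Microsoft.Xna.Framework;

        // Hypothetical service interface: lets any component push state out immediately.
        public interface INetworkService
        {
            void Send(byte[] payload); // called the moment new state exists - no frame delay
        }

        public class NetworkComponent : GameComponent, INetworkService
        {
            public NetworkComponent(Game game) : base(game)
            {
                // Register as a game service so other components can resolve it via
                // (INetworkService)Game.Services.GetService(typeof(INetworkService)).
                game.Services.AddService(typeof(INetworkService), this);
            }

            public override void Update(GameTime gameTime)
            {
                // Receive once per frame: drain every packet that arrived since the last frame.
                base.Update(gameTime);
            }

            public void Send(byte[] payload)
            {
                // Transmit immediately instead of queueing until the next Update(),
                // trading batching efficiency for lower perceived lag.
            }
        }

    Register it once at startup with game.Components.Add(new NetworkComponent(game)); receiving stays frame-driven while sends go out as soon as the game produces them.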


  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:
    Storage: Import/Export Hard Disk Drives to your Storage Accounts
    HDInsight: General Availability of our Hadoop Service in the cloud
    Virtual Machines: New VM Gallery, ACL support for VIPs
    Web Sites: WebSocket and Remote Debugging Support
    Notification Hubs: Segmented customer push notification support with tag expressions
    TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services
    Developer Analytics: New Relic support for Web Sites + Mobile Services
    Service Bus: Support for partitioned queues and topics
    Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define
    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.
    Storage: Import/Export Hard Disk Drives to Windows Azure
    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).
    Encrypted Transport
    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send them, and not have to worry about the data being compromised even if a disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.
    How to Import/Export your first Hard Drive of Data
    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab, make sure to sign up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required. For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.
    HDInsight: 100% Compatible Hadoop Service in the Cloud
    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and cross-platform command line tools. The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process, and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.
    Virtual Machines: VM Gallery Enhancements
    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:
    Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle-supplied images.
    MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing – you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery.
    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs
    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network, you could alternatively limit which public IPs can access your workloads.
    Here are the default behaviors for ACLs in Windows Azure:
    By default (i.e. no rules specified), all traffic is permitted.
    When using only Permit rules, all other traffic is denied.
    When using only Deny rules, all other traffic is permitted.
    When there is a combination of Permit and Deny rules, all other traffic is denied.
    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or the REST API, be sure to also configure your guest VM firewall appropriately as well.
    Web Sites: Web Sockets Support
    With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.
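    As a quick illustration of the kind of real-time scenario this enables, here is a minimal SignalR hub sketch (the hub and method names are hypothetical, not from the post); with Web Sockets enabled on the site, SignalR uses the WebSocket transport automatically when available:

        using Microsoft.AspNet.SignalR;

        public class ChatHub : Hub
        {
            public void Send(string name, string message)
            {
                // Broadcast to every connected client; SignalR picks the best
                // available transport (WebSockets when the site allows it).
                Clients.All.broadcastMessage(name, message);
            }
        }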
    Web Sites: Remote Debugging Support
    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.
    Remote Debugging of a Windows Azure Web Site using VS 2013
    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it. When you do this, Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I was able to debug the app remotely using Visual Studio. Note that we can debug variables (including autos/watch list/etc), as well as use the Immediate and Command windows. In the debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.
    Tips for Better Debugging
    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production, we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.
    Notification Hubs: Segmented Push Notification support with tag expressions
    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful mobile push notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification Hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call each.
    New support for using tag expressions to enable advanced customer segmentation
    With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:
    Offers based on multiple preferences - e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    Push content to multiple segments in a single message - e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    Avoid presenting subsets of a segment with irrelevant content - e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder
    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game to users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:
    Social: to “all my group except me” - group:id && !user:id
    Events: a touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    Hours: send notifications at specific times, e.g. tag devices with their time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    Versions and platforms: send a reminder to people still using your first version for Android - version:1.0 && platform:Android
    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
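    For context, a fuller server-side sketch using the Notification Hubs managed SDK might look like the following. The hub name, connection string, tags, and payload are hypothetical, and the registration call would normally live in your device-registration endpoint rather than next to the send:

        using System.Threading.Tasks;
        using Microsoft.ServiceBus.Notifications;

        public static class OfferSender
        {
            public static async Task SendGameDayOfferAsync(string channelUri)
            {
                var hub = NotificationHubClient.CreateClientFromConnectionString(
                    "<notification hub connection string>", "myhub");

                // Tag the device with a location and an interest when it registers...
                await hub.CreateWindowsNativeRegistrationAsync(
                    channelUri, new[] { "Loc:Boston", "Follows:RedSox" });

                // ...then target the segment with a Boolean tag expression.
                string toast = "<toast><visual><binding template=\"ToastText01\">" +
                               "<text id=\"1\">Game day special!</text></binding></visual></toast>";
                await hub.SendWindowsNativeNotificationAsync(
                    toast, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");
            }
        }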
    TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services
    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass, the app is automatically deployed on Windows Azure with zero manual intervention or work required. The steps below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.
    Enabling Continuous Delivery to Windows Azure with Team Foundation Services
    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services. I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge, and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.
    Connecting a Team Foundation Services project to Windows Azure
    Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. In the dialog that opens, I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository. We’ll select “Team Foundation Services”. Once we do this we’ll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.
    Developer Analytics: New Relic support for Web Sites + Mobile Services
    With today’s Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great developer analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site
    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier, so you can enable it even without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever. Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.
    Deploying the New Relic Agent as part of a Web Site
    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We simply republish the web site to Windows Azure, and New Relic will automatically start monitoring the application.
    Monitoring a Web Site using New Relic
    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. A new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

• Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
• Information about where in the world your customers are hitting the site from (and how performance varies by region).
• Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc.).
• Error information, including call stack details for exceptions that have occurred at runtime.
• SQL Server profiling information – including which queries executed against your database and what their performance was.
• And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to produce them.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

If you haven’t tried New Relic with Windows Azure yet, I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will get you started and deliver a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics.  Enabling partitioning lets you achieve higher message throughput and better availability from your queues and topics.  Higher throughput is achieved by backing each partitioned queue and topic with multiple message brokers; the multiple messaging stores also provide higher availability.  You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a sketch of creating one programmatically appears after this section):

Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that lets you get proactive email notifications when your Windows Azure bill goes above a monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert, first sign up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have set up, and navigate to the new Alerts tab that is available:

The alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above, I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month:

The Billing Alert Service will evolve to support additional aspects of your bill, as well as multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback.
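As referenced in the Service Bus section above, here is a minimal sketch of creating a partitioned queue from code rather than via the portal checkbox.  It assumes the Microsoft.ServiceBus client library (the WindowsAzure.ServiceBus NuGet package), and the queue name and connection string shown are hypothetical placeholders – treat this as an illustration, not the only way to do it:

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class PartitionedQueueSetup
    {
        static void Main()
        {
            // Hypothetical connection string; copy the real one from the
            // portal's Connection Information dialog.
            string connectionString =
                "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=...";

            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            // EnablePartitioning spreads the queue across multiple message
            // brokers and messaging stores, which is what raises throughput
            // and availability.
            var description = new QueueDescription("orders") { EnablePartitioning = true };

            if (!namespaceManager.QueueExists(description.Path))
                namespaceManager.CreateQueue(description);

            Console.WriteLine("Partitioned queue created: " + description.Path);
        }
    }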
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • jqGrid dynamic select option

    - by Jo
    I'm creating a jqGrid with drop-down columns and I'm using cell editing. I need the options of the drop-down columns to change dynamically, and I've tried implementing this by setting the column to be:

        { name: "AccountLookup", index: "AccountLookup", width: 90, editable: true, resizable: true, edittype: "select", formatter: "select" },

    and then in the beforeEditCell event I have:

        beforeEditCell: function(id, name, val, iRow, iCol) {
            if (name == 'AccountLookup') {
                var listdata = GetLookupValues(id, name);
                if (listdata == null) listdata = "1:1";
                jQuery("#grid").setColProp(name, { editoptions: { value: listdata.toString() } });
            }
        },

    GetLookupValues just returns a string in the format "1:One;2:Two" etc. That works fine; however, the options are populated one click behind – i.e. I click on AccountLookup in row 1 and the dropdown is empty, but when I then click on AccountLookup in row 3, the options I set by the row 1 click are shown there. And so on – always one click behind. Is there another way of achieving what I need? Basically the dropdown options displayed are always changing, and I need to load them as the user enters the cell for editing. Perhaps I can somehow get at the select control in the beforeEditCell event and manually enter its values instead of using the setColProp call? If so, could I get an example of doing that please? Another thing – if the dropdown is empty and a user doesn't cancel the cell edit, the grid script throws an error. I'm using clientArray editing if that makes a difference. Greatly appreciate any help. Regards, Jo

    Read the article

  • Find kth smallest element in a binary search tree in an optimal way

    - by Bragaadeesh
    Hi, I need to find the kth smallest element in a binary search tree without using any static/global variable. How can I achieve it efficiently? The solution that I have in mind is doing the operation in O(n) in the worst case, since I am planning to do an inorder traversal of the entire tree. But deep down I feel that I am not using the BST property here. Is my assumed solution correct, or is there a better one available?
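    For reference, a minimal C# sketch of the usual refinement: an iterative in-order walk that stops after k pops, so it touches O(h + k) nodes (h = tree height) rather than the whole tree, and needs no static/global state. The Node type here is a hypothetical stand-in:

        using System;
        using System.Collections.Generic;

        // Hypothetical minimal BST node, just for illustration.
        class Node
        {
            public int Value;
            public Node Left, Right;
        }

        static class BstSearch
        {
            // In-order traversal visits BST nodes in ascending order, so the
            // kth node popped from the stack is the kth smallest element.
            public static int KthSmallest(Node root, int k)
            {
                var stack = new Stack<Node>();
                Node current = root;
                while (current != null || stack.Count > 0)
                {
                    while (current != null)
                    {
                        stack.Push(current);
                        current = current.Left;
                    }
                    current = stack.Pop();
                    if (--k == 0) return current.Value;
                    current = current.Right;
                }
                throw new ArgumentException("k exceeds the number of nodes in the tree");
            }
        }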

    Read the article

  • NSFetchedResultsController crashing on performFetch: when using a cache

    - by Oliver
    I make use of NSFetchedResultsController to display a bunch of objects, which are sectioned using dates. On a fresh install, it all works perfectly and the objects are displayed in the table view. However, it seems that when the app is relaunched I get a crash. I specify a cache when initialising the NSFetchedResultsController, and when I don't it works perfectly. Here is how I create my NSFetchedResultsController:

        - (NSFetchedResultsController *)results {
            // If we are not nil, stop here
            if (results != nil) return results;
            // Create the fetch request, entity and sort descriptors
            NSFetchRequest *fetch = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:self.managedObjectContext];
            NSSortDescriptor *descriptor = [[NSSortDescriptor alloc] initWithKey:@"utc_start" ascending:YES];
            NSArray *descriptors = [[NSArray alloc] initWithObjects:descriptor, nil];
            // Set properties on the fetch
            [fetch setEntity:entity];
            [fetch setSortDescriptors:descriptors];
            // Create a fresh fetched results controller
            NSFetchedResultsController *fetched = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.managedObjectContext sectionNameKeyPath:@"day" cacheName:@"Events"];
            fetched.delegate = self;
            self.results = fetched;
            // Release objects and return our controller
            [fetched release];
            [fetch release];
            [descriptor release];
            [descriptors release];
            return results;
        }

    These are the messages I get when the app crashes:

        FATAL ERROR: The persistent cache of section information does not match the current configuration. You have illegally mutated the NSFetchedResultsController's fetch request, its predicate, or its sort descriptor without either disabling caching or using +deleteCacheWithName:
        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'FATAL ERROR: The persistent cache of section information does not match the current configuration. You have illegally mutated the NSFetchedResultsController's fetch request, its predicate, or its sort descriptor without either disabling caching or using +deleteCacheWithName:'

    I really have no clue as to why it's saying that, as I don't believe I'm doing anything special that would cause this. The only potential issue is the section header (day), which I construct like this when creating a new object:

        // Set the new format
        [formatter setDateFormat:@"dd MMMM"];
        // Set the day of the event
        [event setValue:[formatter stringFromDate:[event valueForKey:@"utc_start"]] forKey:@"day"];

    Like I mentioned, all of this works fine if there is no cache involved. Any help appreciated!

    Read the article

  • Access to bound data in IMultiValueConverter.ConvertBack() in C#/WPF

    - by absence
    I have a problem with a multibinding:

        <Canvas>
          <local:SPoint x:Name="sp" Width="10" Height="10">
            <Canvas.Left><!-- irrelevant binding here --></Canvas.Left>
            <Canvas.Top>
              <MultiBinding Converter="{StaticResource myConverter}" Mode="TwoWay">
                <Binding ElementName="cp1" Path="(Canvas.Top)"/>
                <Binding ElementName="cp1" Path="Height"/>
                <Binding ElementName="cp2" Path="(Canvas.Top)"/>
                <Binding ElementName="cp2" Path="Height"/>
                <Binding ElementName="sp" Path="Height"/>
                <Binding ElementName="sp" Path="Slope" Mode="TwoWay"/>
              </MultiBinding>
            </Canvas.Top>
          </local:SPoint>
          <local:CPoint x:Name="cp1" Width="10" Height="10" Canvas.Left="20" Canvas.Top="150"/>
          <local:CPoint x:Name="cp2" Width="10" Height="10" Canvas.Left="100" Canvas.Top="20"/>
        </Canvas>

    In order to calculate the Canvas.Top value, myConverter needs all the bound values. This works great in Convert(). Going the other way, myConverter should ideally calculate the Slope value (Binding.DoNothing for the rest), but it needs the other values in addition to the Canvas.Top one passed to ConvertBack(). What is the right way to solve this? I have tried making the binding OneWay and creating an additional multibinding for local:SPoint.Slope, but that causes infinite recursion and a stack overflow. I was thinking the ConverterParameter could be used, but it seems it's not possible to bind to it.
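    For what it's worth, here is a hedged C# sketch of one workaround (an assumption on my part, not the canonical fix): have the converter cache the inputs it saw in Convert() so ConvertBack() can reuse them when only the new Canvas.Top arrives. The slope math is stubbed out, since it depends on the SPoint/CPoint geometry:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class SlopeMultiConverter : IMultiValueConverter
        {
            private object[] lastValues;

            public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
            {
                lastValues = (object[])values.Clone();  // remember the cp1/cp2/sp inputs
                double top = 0.0;                       // ... compute Canvas.Top from values (omitted) ...
                return top;
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
            {
                var results = new object[targetTypes.Length];
                for (int i = 0; i < results.Length; i++)
                    results[i] = Binding.DoNothing;     // leave the other bindings untouched

                // The Slope binding is the last one in the MultiBinding above.
                if (lastValues != null && value is double)
                    results[targetTypes.Length - 1] = ComputeSlope((double)value, lastValues);
                return results;
            }

            private static double ComputeSlope(double newTop, object[] cachedInputs)
            {
                // Placeholder: derive Slope from newTop plus the cached values.
                return 0.0;
            }
        }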

    Read the article

  • RouteTable.Routes.GetVirtualPath with route data asp.net MVC 2

    - by Bill
    Dear all, I'm trying to get a URL from my routes table. Here is the method:

        private static void RedirectToRoute(ActionExecutingContext context, string param)
        {
            var actionName = context.ActionDescriptor.ActionName;
            var controllerName = context.ActionDescriptor.ControllerDescriptor.ControllerName;
            var rc = new RequestContext(context.HttpContext, context.RouteData);
            string url = RouteTable.Routes.GetVirtualPath(rc, new RouteValueDictionary(
                new { actionName = actionName, controller = controllerName, parameter = param })).VirtualPath;
            context.HttpContext.Response.Redirect(url, true);
        }

    However, RouteTable.Routes.GetVirtualPath(rc, new RouteValueDictionary(new { actionName = actionName, controller = controllerName, parameter = param })) keeps giving me null. Any thoughts? This is the route I'm trying to map it to:

        routes.MapRoute(
            "default3", // Route name
            "{parameter}/{controller}/{action}", // URL with parameters
            new { parameter = "parameterValue", controller = "Home", action = "Index" }
        );

    I know I can use RedirectToAction and other methods, but I would like to change the URL in the browser with the new route data.
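    One hedged observation, sketched below: the route pattern declares the placeholders {parameter}, {controller} and {action}, while the dictionary above supplies the key actionName, so no route matches and GetVirtualPath returns null. A corrected version of the method under that assumption:

        using System.Web.Mvc;
        using System.Web.Routing;

        static class RouteRedirects
        {
            // Same method as above, but the dictionary keys now mirror the
            // route's {parameter}/{controller}/{action} placeholders exactly.
            public static void RedirectToRoute(ActionExecutingContext context, string param)
            {
                var rc = new RequestContext(context.HttpContext, context.RouteData);
                var values = new RouteValueDictionary(new
                {
                    parameter = param,
                    controller = context.ActionDescriptor.ControllerDescriptor.ControllerName,
                    action = context.ActionDescriptor.ActionName,
                });

                VirtualPathData path = RouteTable.Routes.GetVirtualPath(rc, values);
                if (path != null)
                    context.HttpContext.Response.Redirect(path.VirtualPath, true);
            }
        }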

    Read the article

  • MS Chart with ASP.NET chart type "column" not showing axis x labels if there are more than 9 bars in the chart

    - by Bayonian
    Hi, I'm having a problem with the MS Chart chart type Column. If there are only 9 bars in the chart, like in the following picture, then the x-axis labels show up properly. However, when there are more than 9 bars in the chart, the x-axis labels won't show up properly – some of them just disappear. Here's my markup for the chart:

        <asp:Chart ID="chtNBAChampionships" runat="server">
          <Series>
            <asp:Series Name="Championships" YValueType="Int32" Palette="Berry" ChartType="Column" ChartArea="MainChartArea" IsValueShownAsLabel="true">
              <Points>
                <asp:DataPoint AxisLabel="Celtics" YValues="17" />
                <asp:DataPoint AxisLabel="Lakers" YValues="15" />
                <asp:DataPoint AxisLabel="Bulls" YValues="6" />
                <asp:DataPoint AxisLabel="Spurs" YValues="4" />
                <asp:DataPoint AxisLabel="76ers" YValues="3" />
                <asp:DataPoint AxisLabel="Pistons" YValues="3" />
                <asp:DataPoint AxisLabel="Warriors" YValues="3" />
                <asp:DataPoint AxisLabel="Mara" YValues="4" />
                <asp:DataPoint AxisLabel="Saza" YValues="9" />
                <asp:DataPoint AxisLabel="Buha" YValues="6" />
              </Points>
            </asp:Series>
          </Series>
          <ChartAreas>
            <asp:ChartArea Name="MainChartArea">
            </asp:ChartArea>
          </ChartAreas>
        </asp:Chart>

    I don't know why it only works properly with 9 bars. Is there any way to make the chart work? Also, if possible, how can I make each bar have a different color? Thank you.
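    A sketch of the usual fixes, assuming the chart, chart area and series names from the markup above (the code-behind class name is hypothetical, and chtNBAChampionships comes from the page's designer file): the Chart control auto-thins axis labels once they stop fitting, so pinning the axis interval to 1 forces a label per bar, and per-bar colors can be set on each DataPoint:

        using System;
        using System.Drawing;
        using System.Web.UI.DataVisualization.Charting;

        public partial class NbaChartPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Pin the axis interval so every bar gets a label; by default
                // the chart skips labels when they would overlap.
                chtNBAChampionships.ChartAreas["MainChartArea"].AxisX.Interval = 1;

                // Per-bar colors: cycle an arbitrary palette over the points.
                Series series = chtNBAChampionships.Series["Championships"];
                Color[] palette = { Color.SteelBlue, Color.IndianRed, Color.SeaGreen, Color.Goldenrod };
                for (int i = 0; i < series.Points.Count; i++)
                    series.Points[i].Color = palette[i % palette.Length];
            }
        }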

    Read the article

  • Edit/Access data from a CheckBox column in an ASPX:GridView - c#

    - by Endo
    Hi, I have a GridView to which I bind a DataTable I manually create. Both the GridView and the DataTable contain 2 columns, Name and isBusy. My GridView looks like this:

        <Columns>
          <asp:BoundField HeaderText="Name" DataField="Name" SortExpression="Name">
          </asp:BoundField>
          <asp:CheckBoxField DataField="isBusy" HeaderText="Busy" SortExpression="isBusy" />
        </Columns>

    That works fine, except that the Busy column is non-editable unless you set a specific row to edit mode. I require the entire column of checkboxes to be checkable. So I converted the column to a template, and so the columns look like this:

        <Columns>
          <asp:BoundField HeaderText="Name" DataField="Name" SortExpression="Name">
          </asp:BoundField>
          <asp:TemplateField HeaderText="Busy" SortExpression="isBusy">
            <ItemTemplate>
              <asp:CheckBox ID="isBusy" runat="server" Checked='<%# Eval("isBusy") %>' oncheckedchanged="CheckBoxBusy_CheckedChanged" />
            </ItemTemplate>
          </asp:TemplateField>
        </Columns>

    Now, this throws an error at runtime, saying:

        System.InvalidCastException was unhandled by user code
          Message="Specified cast is not valid."
          Source="App_Web_zzjsqlrr"
          StackTrace:
            at ASP.proyectos_aspx.__DataBinding__control24(Object sender, EventArgs e) in c:\Proyect\Users.aspx:line 189
            at System.Web.UI.Control.OnDataBinding(EventArgs e)
            at System.Web.UI.Control.DataBind(Boolean raiseOnDataBinding)
            at System.Web.UI.Control.DataBind()
            at System.Web.UI.Control.DataBindChildren()
          InnerException:

    Any idea why this is happening? The next step I would need is to know how to set and get a checkbox's state (I haven't been able to find how to manually check a checkbox). I appreciate very much any help.

    Read the article

  • Getting data from UITableView

    - by Tejaswi Yerukalapudi
    Hi, I have a few custom UITableViewCells – http://img11.imageshack.us/i/customfacilitiescell.png/ – which are added to this UIViewController – http://img189.imageshack.us/i/facilitycontroller.png/ Now, on clicking a button in the controller, I'd like to get the on/off status of all the UISwitches in the controller. Thanks, Teja

    Edit: I've made a few edits, but I still can't figure out how to do this. My program structure currently:

    A CustomCell.xib that looks like this – http://img11.imageshack.us/i/customfacilitiescell.png/
    A CustomCellController that is a subclass of UITableViewCell and has the IBOutlets for the labels and switches from above.

    Now I have a UIViewController<UITableViewDataSource, UITableViewDelegate> (say, Screen1Controller) which looks like this – http://img189.imageshack.us/i/facilitycontroller.png/ The table view cell is being created like this:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *Id = @"CustomFacilitiesCell";
            CustomFacilitiesCellController *cell = (CustomFacilitiesCellController *)[tableView dequeueReusableCellWithIdentifier:Id];
            if (cell == nil) {
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CustomFacilitiesCell" owner:self options:nil];
                for (id oneObject in nib) {
                    if ([oneObject isKindOfClass:[CustomFacilitiesCellController class]])
                        cell = (CustomFacilitiesCellController *)oneObject;
                }
            }
            NSUInteger row = [indexPath row];
            CustomFacilitiesCellController *rowData = (CustomFacilitiesCellController *)[self.facilities objectAtIndex:row];
            cell.facname.text = rowData.facname.text;
            cell.FacID.text = rowData.FacID.text;
            cell.facSwitch = [(CustomFacilitiesCellController *)rowData facSwitch];
            UISwitch *temp = cell.facSwitch;
            [(UISwitch *)[cell facSwitch] addTarget:self action:@selector(facSwitchOptionChanged:) forControlEvents:UIControlEventValueChanged];
            cell.facSwitch.on = NO;
            //cell.facSwitch.enabled = FALSE;
            cell.accessoryType = UITableViewCellAccessoryNone;
            return cell;
        }

        - (IBAction)facSwitchOptionChanged:(id)sender {
            int i = 0;
        }

    In particular, my problem is that facSwitchOptionChanged: isn't getting called. Thanks again for the help, Teja.

    Read the article

  • Storing Data from both POST variables and GET parameters

    - by Ali
    I want my Python script to simultaneously accept POST variables and query string variables from the web address. The script has this code:

        form = cgi.FieldStorage()
        print form

    However, this only captures the POST variables and no query variables from the web address. Is there a way to do this? Thanks, Ali

    Read the article

  • [C#] WebClient - Upload data, get stream

    - by Barguast
    I have a situation where I want to asynchronously write a series of bytes with WebClient (in much the same way as UploadDataAsync) and get a readable response stream (in the same way as OpenReadAsync). You seem to be able to do the two individually, but not both of them together. Is there a way? Thanks!
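    A hedged sketch of one common workaround: drop down to HttpWebRequest (which WebClient wraps), write the request body yourself, and read the response back as a stream. It is shown synchronously for brevity; the same shape works asynchronously via BeginGetRequestStream/EndGetRequestStream and BeginGetResponse/EndGetResponse. The URL and payload below are placeholders:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class UploadThenRead
        {
            static void Main()
            {
                // Hypothetical endpoint; substitute the real upload URL.
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
                request.Method = "POST";

                byte[] payload = Encoding.UTF8.GetBytes("the bytes to send");
                using (Stream body = request.GetRequestStream())
                    body.Write(payload, 0, payload.Length);

                // Unlike WebClient.UploadData (which buffers the reply into a
                // byte[]), the response here is exposed as a readable stream.
                using (WebResponse response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());
            }
        }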

    Read the article

  • Advice on database design / SQL for retrieving data in chronological order

    - by Remnant
    I am creating a database that will help keep track of which employees have been on a certain training course. I would like some guidance on the best way to design the database. Specifically, each employee must attend the training course each year, and my database needs to keep a history of all the dates on which they have attended the course in the past. The end user will use the software as a planning tool to help them book future course dates for employees. When they select a given employee they will see:

    (a) Last attendance date
    (b) Projected future attendance date (i.e. last attendance date + 1 calendar year)

    In terms of my database, any given employee may have multiple past course attendance dates:

        EmpName       AttendanceDate
        Joe Bloggs    1st Jan 2007
        Joe Bloggs    4th Jan 2008
        Joe Bloggs    3rd Jan 2009
        Joe Bloggs    8th Jan 2010

    My question is: what is the best way to set up the database to make it easy to retrieve the most recent course attendance date? In the example above, the most recent would be 8th Jan 2010. Is there a good way to use SQL to sort by date and pick the MAX date? My other idea was to add a column called ‘MostRecent’ and just set this to TRUE:

        EmpName       AttendanceDate    MostRecent
        Joe Bloggs    1st Jan 2007      False
        Joe Bloggs    4th Jan 2008      False
        Joe Bloggs    3rd Jan 2009      False
        Joe Bloggs    8th Jan 2010      True

    I wondered if this would simplify the SQL, i.e. SELECT Joe Bloggs WHERE MostRecent = ‘TRUE’. Also, when the user updates a given employee’s attendance record (i.e. with the latest attendance date) I could use SQL to:

    Search for the employee and set the MostRecent value to FALSE
    Add a new record with MostRecent set to TRUE

    Would anybody recommend either method over the other? Or do you have a completely different way of solving this problem?
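    A hedged illustration of the MAX-per-employee approach in C# (the language most examples on this page use); the equivalent SQL would be along the lines of SELECT EmpName, MAX(AttendanceDate) FROM Attendance GROUP BY EmpName. The point of the sketch is that no redundant "MostRecent" flag column is needed if the date itself is queried:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical in-memory stand-in for the Attendance table.
        class Attendance
        {
            public string EmpName;
            public DateTime AttendanceDate;
        }

        class Program
        {
            static void Main()
            {
                var rows = new List<Attendance>
                {
                    new Attendance { EmpName = "Joe Bloggs", AttendanceDate = new DateTime(2009, 1, 3) },
                    new Attendance { EmpName = "Joe Bloggs", AttendanceDate = new DateTime(2010, 1, 8) },
                };

                // Group by employee, take the latest date, and project the
                // next due date one calendar year out.
                foreach (var g in rows.GroupBy(r => r.EmpName))
                {
                    DateTime last = g.Max(r => r.AttendanceDate);
                    Console.WriteLine($"{g.Key}: last {last:d}, next due {last.AddYears(1):d}");
                }
            }
        }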

    Read the article

  • Problem with cascade delete using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so that when I delete a Person, the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person I get an error:

        System.InvalidOperationException: The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.

    I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship, it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? If I change this value to Cascade everything works properly, but I would rather not rely on this manual change, because what if I refresh my model from the DB and it loses that? Here's the relevant SQL for my tables:

        CREATE TABLE [SomeTable] (
            [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
            [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE
        )

        CREATE TABLE [Person] (
            [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT
        )
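    Until the OnDelete action survives a model refresh, one hedged workaround (a sketch only – the context, entity set and navigation property names below are hypothetical) is to delete the dependent rows through the context yourself, so EF never tries to null out the non-nullable foreign key:

        using System.Linq;

        static class PersonCleanup
        {
            // "MyEntities" stands in for the EF4 ObjectContext generated from
            // the SQLite database; adjust the names to match your model.
            public static void DeletePerson(int personId)
            {
                using (var context = new MyEntities())
                {
                    var person = context.People.First(p => p.PersonID == personId);

                    // Delete dependents explicitly; otherwise EF attempts to
                    // set their non-nullable PersonID to null and throws.
                    foreach (var row in person.SomeTables.ToList())
                        context.DeleteObject(row);

                    context.DeleteObject(person);
                    context.SaveChanges();
                }
            }
        }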

    Read the article

  • jsTree async with preloaded data.

    - by Paul Knopf
    I am trying to make a tree view async. When the page is rendered, there are default tree items displayed, but jsTree tries to reload the root anyway. I want the page to render (with jsTree init'ed) with the default items rendered by the browser, not by an ajax call. Then, when the user tries to go deeper, that's when I want to make the ajax calls. Any help is appreciated. Thanks!

    Read the article

  • Setting a Visual State from a data bound enum in WPF

    - by firoso
    Hey all, I've got a scenario where I want to switch the visibility of 4 different content controls. The visual states I have set opacity and Collapsed visibility based on each given state (see code). What I'd like to do is have the visual state bound to a property of my view model of type enum. I tried using DataStateBehavior, but it requires true/false, which doesn't work for me. So I tried DataStateSwitchBehavior, which seems to be totally broken for WPF4 from what I could tell. Is there a better way to be doing this? I'm really open to different approaches if need be, but I'd really like to keep this enum in the equation.

    Edit: The code shouldn't be too important, I just need to know if there's a well-known solution to this problem.

        <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:Custom="http://schemas.microsoft.com/expression/2010/interactivity"
            xmlns:ei="http://schemas.microsoft.com/expression/2010/interactions"
            xmlns:ee="http://schemas.microsoft.com/expression/2010/effects"
            xmlns:customBehaviors="clr-namespace:SEL.MfgTestDev.ESS.Behaviors"
            x:Class="SEL.MfgTestDev.ESS.View.PresenterControl"
            mc:Ignorable="d" d:DesignHeight="624" d:DesignWidth="1104"
            d:DataContext="{Binding ApplicationViewModel, Mode=OneWay, Source={StaticResource Locator}}">
          <Grid>
            <Grid.Resources>
              <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                  <ResourceDictionary Source="Layout/TerminalViewTemplate.xaml"/>
                  <ResourceDictionary Source="Layout/DebugViewTemplate.xaml"/>
                  <ResourceDictionary Source="Layout/ProgressViewTemplate.xaml"/>
                  <ResourceDictionary Source="Layout/LoadoutViewTemplate.xaml"/>
                </ResourceDictionary.MergedDictionaries>
              </ResourceDictionary>
            </Grid.Resources>
            <Custom:Interaction.Behaviors>
              <customBehaviors:DataStateSwitchBehavior Binding="{Binding ApplicationViewState}">
                <customBehaviors:DataStateSwitchCase State="LoadoutState" Value="Loadout"/>
              </customBehaviors:DataStateSwitchBehavior>
            </Custom:Interaction.Behaviors>
            <VisualStateManager.VisualStateGroups>
              <VisualStateGroup x:Name="ApplicationStates" ei:ExtendedVisualStateManager.UseFluidLayout="True">
                <VisualStateGroup.Transitions>
                  <VisualTransition GeneratedDuration="0:0:1">
                    <VisualTransition.GeneratedEasingFunction>
                      <SineEase EasingMode="EaseInOut"/>
                    </VisualTransition.GeneratedEasingFunction>
                    <ei:ExtendedVisualStateManager.TransitionEffect>
                      <ee:SmoothSwirlGridTransitionEffect/>
                    </ei:ExtendedVisualStateManager.TransitionEffect>
                  </VisualTransition>
                </VisualStateGroup.Transitions>
                <VisualState x:Name="LoadoutState">
                  <Storyboard>
                    <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="LoadoutPage">
                      <EasingDoubleKeyFrame KeyTime="0" Value="1"/>
                    </DoubleAnimationUsingKeyFrames>
                    <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="LoadoutPage">
                      <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/>
                    </ObjectAnimationUsingKeyFrames>
                  </Storyboard>
                </VisualState>
                <VisualState x:Name="ProgressState">
                  <Storyboard>
                    <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="ProgressPage">
                      <EasingDoubleKeyFrame KeyTime="0" Value="1"/>
                    </DoubleAnimationUsingKeyFrames>
                    <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="ProgressPage">
                      <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/>
                    </ObjectAnimationUsingKeyFrames>
                  </Storyboard>
                </VisualState>
                <VisualState x:Name="DebugState">
                  <Storyboard>
                    <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="DebugPage">
                      <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/>
                    </ObjectAnimationUsingKeyFrames>
                    <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="DebugPage">
                      <EasingDoubleKeyFrame KeyTime="0" Value="1"/>
                    </DoubleAnimationUsingKeyFrames>
                  </Storyboard>
                </VisualState>
                <VisualState x:Name="TerminalState">
                  <Storyboard>
                    <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="TerminalPage">
                      <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/>
                    </ObjectAnimationUsingKeyFrames>
                    <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="TerminalPage">
                      <EasingDoubleKeyFrame KeyTime="0" Value="1"/>
                    </DoubleAnimationUsingKeyFrames>
                  </Storyboard>
                </VisualState>
              </VisualStateGroup>
            </VisualStateManager.VisualStateGroups>
            <ContentControl x:Name="LoadoutPage" ContentTemplate="{StaticResource LoadoutViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/>
            <ContentControl x:Name="ProgressPage" ContentTemplate="{StaticResource ProgressViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/>
            <ContentControl x:Name="DebugPage" ContentTemplate="{StaticResource DebugViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/>
            <ContentControl x:Name="TerminalPage" ContentTemplate="{StaticResource TerminalViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/>
            <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" VerticalAlignment="Top" Text="{Binding ApplicationViewState}">
              <TextBlock.Background>
                <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                  <GradientStop Color="Black" Offset="0"/>
                  <GradientStop Color="White" Offset="1"/>
                </LinearGradientBrush>
              </TextBlock.Background>
            </TextBlock>
          </Grid>
        </UserControl>
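    Since DataStateSwitchBehavior is unreliable on WPF4, one hedged alternative (a sketch under the naming convention implied by the XAML above, where enum value "Loadout" maps to state "LoadoutState") is a small attached property that drives VisualStateManager directly:

        using System.Windows;

        // A sketch, not the poster's actual behavior: maps a bound enum value
        // straight to a visual state name by appending "State".
        public static class EnumStateHelper
        {
            public static readonly DependencyProperty StateProperty =
                DependencyProperty.RegisterAttached(
                    "State", typeof(object), typeof(EnumStateHelper),
                    new PropertyMetadata(null, OnStateChanged));

            public static void SetState(DependencyObject element, object value)
            {
                element.SetValue(StateProperty, value);
            }

            public static object GetState(DependencyObject element)
            {
                return element.GetValue(StateProperty);
            }

            private static void OnStateChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var element = d as FrameworkElement;
                if (element == null || e.NewValue == null) return;

                // Assumed convention: enum "Loadout" -> state "LoadoutState".
                VisualStateManager.GoToElementState(element, e.NewValue + "State", true);
            }
        }

    With something like this in the customBehaviors namespace, the Grid could carry customBehaviors:EnumStateHelper.State="{Binding ApplicationViewState}" in place of the switch behavior, keeping the enum as the single source of truth.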

    Read the article

  • asp.net MVC DisplayTemplates and EditorTemplate naming convention

    - by Simon G
    Hi, I've got a couple of questions about the naming conventions for DisplayTemplates and EditorTemplates in MVC 2. If, for example, I have a Customer object with a child list of Account objects:

    How do I create a display template for the list of accounts, and what is the file called?
    When I'm doing a foreach (var c in Model.Accounts), how do I call a display template while inside the foreach loop? When I do Html.DisplayFor(x => x) inside the foreach, x is the model and not, in this case, c.

    Thanks in advance.

    Read the article

  • WPF - DataGrid Column's ToolTip visibility based on the column's data length

    - by S.C.Vidhya
    In my application I have tried to make the visibility of a tooltip depend on the DataGrid column's text length, using a converter. I am facing some problems displaying the tooltip text: in the ToolTip, the TextBlock's Text binding is not working. If it is bound to some hard-coded string, it works fine. Here below is the code I have added for the grid column (note the inner TextBlock is self-closed here; the original snippet was missing its closing tag):

        <Custom:DataGridTemplateColumn.CellTemplate>
          <DataTemplate>
            <TextBlock Text="{Binding Text}">
              <TextBlock.ToolTip>
                <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}" Visibility="{Binding Converter={StaticResource ToolTipVis}}">
                  <TextBlock Text="{Binding Text}"/>
                </ToolTip>
              </TextBlock.ToolTip>
            </TextBlock>
          </DataTemplate>
        </Custom:DataGridTemplateColumn.CellTemplate>
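    The markup above references a ToolTipVis converter without showing it; here is a hedged sketch of how such a converter might look (an illustration, not the poster's actual code). Because the ToolTip's DataContext is its PlacementTarget, the converter receives the cell's TextBlock and can show the tooltip only when that text is actually clipped:

        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        public class TrimmedTextToolTipVisibilityConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                var textBlock = value as TextBlock;
                if (textBlock == null) return Visibility.Collapsed;

                // Compare the width the text wants against the width it has;
                // only surface the tooltip when the text is being clipped.
                textBlock.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
                return textBlock.DesiredSize.Width > textBlock.ActualWidth
                    ? Visibility.Visible
                    : Visibility.Collapsed;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }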

    Read the article
