Search Results

Search found 20289 results on 812 pages for 'service locator'.

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, SQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine SQL Server is running on; when the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. Many of the online solutions have us running SQL commands against the server, but we are unable to connect to the database to run them, even in single-user mode. Specific error messages: Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927) The SQL Server (MSSQLSERVER) service is starting. The SQL Server (MSSQLSERVER) service could not be started. A service specific error occurred: 1814. We need the server up and running again ASAP.

    Read the article

  • How to resolve 'No internet connectivity' issues with a virtualised 2008 R2 Server using Forefront UAG

    - by user684589
    I have spent considerable time reading as many blogs and articles as I can to help me work out why my DirectAccess VM (running on Hyper-V) has suddenly stopped being able to access the internet. The VM shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Until last week DirectAccess was fully functional and had no issues. This is a recent problem, preceded by a number of consistent crashes on the DA machine when access was attempted. Upon reboot all seemed well until recently. I am not certain whether it is relevant, but prior to this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to Active Directory Lightweight Directory Services (AD LDS), which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again. However, as it stands the machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark on the external-facing NIC. I have also tried removing, disabling, and re-enabling the adapters, and the problem persists. The intranet (CorpNet) side of the VM seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced domain administrator, so please be gentle.

    Read the article

  • How to setup users for desktop app with SQL Azure as backend?

    - by Manuel
    I'm considering SQL Azure as the DB for a new application I'm developing. The reason I want to go with Azure is that I don't want to have to maintain yet another database, and I want my users to be able to access the data from anywhere. The problem is that I'm not clear on how users will connect. The application is a basic CRUD type of Windows app. I've read that you need to add your IP to SQL Azure's firewall to connect to it, but I don't know whether that is for administration purposes only. Can anyone clarify whether anyone (anywhere) can access the data with the proper credentials? Which of the following scenarios would work best (if at all)? A) Add each user to SQL Azure and have the app connect directly to Azure as if it were connecting to SQL Server. B) Add an anonymous user to SQL Azure and pass the real user's password/hash with every call so the Azure database can service the requests accordingly. C) Put a WCF service in between so that it handles the authentication; the service would only serve the appropriate information to the user given his/her authentication, and SQL Azure would be open to the service exclusively. D) Other ideas are welcome. This is confusing because all the Azure examples I see are for websites. I have a hard time believing SQL Azure wouldn't handle the case of desktop apps connecting to it. So what's the best practice?
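
    A minimal sketch of what option C could look like in C#: a thin WCF service that owns the SQL Azure credentials, so only the service's outbound IP needs a SQL Azure firewall rule and the desktop clients never hold a connection string. The contract, the Orders table, and the connection string below are illustrative assumptions, not part of the question.

      using System.Data.SqlClient;
      using System.ServiceModel;

      [ServiceContract]
      public interface IOrderService
      {
          [OperationContract]
          int GetOrderCount(string customerId);
      }

      public class OrderService : IOrderService
      {
          // Only this service needs a SQL Azure firewall rule; clients authenticate to the
          // service (Windows or username credentials) rather than to the database itself.
          private const string ConnectionString =
              "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=myuser@myserver;Password=...;Encrypt=True;";

          public int GetOrderCount(string customerId)
          {
              using (var conn = new SqlConnection(ConnectionString))
              using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", conn))
              {
                  cmd.Parameters.AddWithValue("@id", customerId);
                  conn.Open();
                  return (int)cmd.ExecuteScalar();
              }
          }
      }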

    Read the article

  • Send request body data when running siege

    - by qui
    I am trying to use the command-line utility Siege to load test a service. The service receives JSON in the request body via a POST. I have a file called example-data.json with the JSON inside. I will eventually turn this into a tiny service which creates random JSON for testing, but this should do for now. I have another file called hit-qa.siege containing http://www.qa-url.com POST < example-data.json and I try to run siege -c10 -d1 -r1 -f ops/perf/hammer-dev.siege When I check the logs of the service, it is not receiving anything in the request body. My Googling has been fruitless; does anyone know how to accomplish this?

    Read the article

  • Fedora 17 Hangup on Boot (plymouth-quit-wait)

    - by Joe
    I am having an issue after several updates where, when I try to boot, the boot animation "loads", flashes the Fedora logo, and then just sits there. After checking into it more I found that two services were failing to start: the first is tcsd.service and the second is plymouth-quit-wait.service. I was able to disable tcsd.service (in the hope I could boot without it); however, I have been unable to do anything about the second service. I am running FC17 and the akmod nvidia drivers on an ASUS G53SW. Everything is up to date as far as I know. What is the exact problem I am facing, and how can I go about troubleshooting or fixing it?

    Read the article

  • How to schedule a task X minutes after Windows Server 2003 starts?

    - by Joe Schmoe
    How do I schedule a task X minutes after Windows Server 2003 starts? In "Scheduled Tasks" one can specify "When my computer starts", but I see no way to specify a delay. What I am trying to achieve: there is a service (JIRA) that, even though it depends on the SQL Server service, still doesn't wait long enough for SQL Server to become fully operational. So the JIRA service fails to connect to the database and needs to be restarted manually after each server reboot. My plan is to run "SC stop" and "SC start" commands for the JIRA service 3 minutes after the server starts.
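
    As a sketch of that plan, the same stop/wait/start sequence can live in a tiny C# console app scheduled with "When my computer starts", which avoids wiring up separate SC commands. The service name "JIRA" and the 3-minute delay are assumptions; check the real service name in services.msc.

      using System;
      using System.ServiceProcess;   // add a reference to System.ServiceProcess.dll
      using System.Threading;

      class RestartJiraAfterBoot
      {
          static void Main()
          {
              // Give SQL Server a head start before touching JIRA.
              Thread.Sleep(TimeSpan.FromMinutes(3));

              using (var jira = new ServiceController("JIRA"))
              {
                  if (jira.Status == ServiceControllerStatus.Running)
                  {
                      jira.Stop();
                      jira.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(2));
                  }
                  jira.Start();
                  jira.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(2));
              }
          }
      }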

    Read the article

  • Cache that always returns immediate response?

    - by Col Wilson
    I have a web service that takes a while to build a response despite being tuned as best I can. What I'd like is some sort of cache sitting in front of the service which would always return the last known value from the service, but at the same time pass the request back to the service to build an up to date response for the next request. I'm aware of the limitations that this puts on the freshness of the data, but you can assume that I'm happy to live with that. The technologies I'm using at present are python uwsgi via nginx, but that need not be a limit to any solution you might suggest. Col
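
    The pattern being described (always answer from the last known value, rebuild in the background) is language-agnostic; a minimal sketch of the idea follows, written in C# purely for illustration since the asker's stack is Python/uWSGI, and assuming a single cached payload and a caller-supplied rebuild function.

      using System;
      using System.Threading;

      public class StaleWhileRevalidateCache
      {
          private readonly Func<string> _rebuild;
          private string _lastKnown;
          private int _refreshing;   // 0 = idle, 1 = a background rebuild is already running

          public StaleWhileRevalidateCache(Func<string> rebuild)
          {
              _rebuild = rebuild;
              _lastKnown = rebuild();   // prime the cache once so there is always something to serve
          }

          public string Get()
          {
              // Kick off at most one background rebuild, but never make the caller wait for it.
              if (Interlocked.CompareExchange(ref _refreshing, 1, 0) == 0)
              {
                  ThreadPool.QueueUserWorkItem(_ =>
                  {
                      try { _lastKnown = _rebuild(); }
                      finally { Interlocked.Exchange(ref _refreshing, 0); }
                  });
              }
              return _lastKnown;   // stale, but immediate
          }
      }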

    Read the article

  • SharePoint 3.0 Search Stuck Starting

    - by Skaughty
    I have a single server running SharePoint 3.0 on WS2003 SP2, not on a domain. My search service stopped working, so I just went into Central Administration and hit Stop, then hit Start. I soon found out this is not the way to do this. I now cannot get the search service to start again. I have tried everything I can think of, and nothing works; it changes status to Starting but never gets to Started. I have to go in with "stsadm -o spsearch -action stop" from the command prompt to stop it. I am using the Local Service account for the service and the content accounts. Can anyone help me?

    Read the article

  • Tomcat Apache not working on Win7

    - by Javanator
    I have installed Apache Tomcat 6.0 onto my system after downloading it from http://tomcat.apache.org/download-55.cgi, using the Windows installer. But after a successful installation the service doesn't start, stating an error like the following: "Windows could not start the Apache Tomcat on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to the service-specific error code 0." I then tried the same installer on Win XP and it worked. I don't know how to resolve this problem and get it started on Win7. Any help would be appreciated. Thank you.

    Read the article

  • Windows Server 2003 R2 SIS Groveler start error

    - by 2040techman
    Hi, I have Windows Server 2003 R2 SP2 running and I need WDS in mixed mode so I can deploy Windows 7 and Windows XP images. I went to set up the first image, and after the wizard was done copying the correct files I got: Single Instance Storage Groveler: Service did not start (or something like that). I checked Event Viewer and got this: Event Type: Error Event Source: Service Control Manager Event Category: None Event ID: 7023 Date: 3/26/2012 Time: 7:03:21 PM User: N/A Computer: SERVER2 Description: The Single Instance Storage Groveler service terminated with the following error: The service has not been started. This server is needed very soon because we just got a shipment of several Windows XP-capable laptops and need to deploy Windows XP over PXE, and this is keeping us from proceeding. Any help is great!!!

    Read the article

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I'm wanting to get a WCF-over-TCP service working. I was having some problems with modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config and when I run the service the WCF Test Client can work with it fine: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <system.serviceModel> <services> <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior"> <host> <baseAddresses> <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" /> </baseAddresses> </host> <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfTcpTest.Service1Behavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> This works perfectly, no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the httpGetEnabled serviceMetadata element: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <system.serviceModel> <services> <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior"> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1337/Service1/" /> </baseAddresses> </host> <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfTcpTest.Service1Behavior"> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> But when I run this I get this error in the WCF Service Host: System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract. I get the feeling that you can't send metadata using TCP, but that's the case why is there a mexTcpBinding option?
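
    For what it's worth, the exception text itself hints at the usual fix: metadata can be served over TCP, but the host still needs a ServiceMetadataBehavior, which in config normally means keeping an empty <serviceMetadata/> element in the behavior even after removing httpGetEnabled. A hedged self-hosting sketch of the same thing done in code, reusing the template's Service1/IService1 names:

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Description;

      class Program
      {
          static void Main()
          {
              var host = new ServiceHost(typeof(WcfTcpTest.Service1),
                  new Uri("net.tcp://localhost:1337/Service1/"));

              host.AddServiceEndpoint(typeof(WcfTcpTest.IService1), new NetTcpBinding(), "");

              // Without this behavior the IMetadataExchange contract can't be resolved,
              // which is exactly the InvalidOperationException quoted above.
              host.Description.Behaviors.Add(new ServiceMetadataBehavior());
              host.AddServiceEndpoint(typeof(IMetadataExchange),
                  MetadataExchangeBindings.CreateMexTcpBinding(), "mex");

              host.Open();
              Console.WriteLine("Host open; press Enter to exit.");
              Console.ReadLine();
              host.Close();
          }
      }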

    Read the article

  • Call Webservice without adding a WebReference - with Complex Types

    - by ck
    I'm using the code at This Site to call a webservice dynamically. [SecurityPermissionAttribute(SecurityAction.Demand, Unrestricted = true)] public static object CallWebService(string webServiceAsmxUrl, string serviceName, string methodName, object[] args) { System.Net.WebClient client = new System.Net.WebClient(); //-Connect To the web service using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl")) { //--Now read the WSDL file describing a service. ServiceDescription description = ServiceDescription.Read(stream); ///// LOAD THE DOM ///////// //--Initialize a service description importer. ServiceDescriptionImporter importer = new ServiceDescriptionImporter(); importer.ProtocolName = "Soap12"; // Use SOAP 1.2. importer.AddServiceDescription(description, null, null); //--Generate a proxy client. importer.Style = ServiceDescriptionImportStyle.Client; //--Generate properties to represent primitive values. importer.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties; //--Initialize a Code-DOM tree into which we will import the service. CodeNamespace nmspace = new CodeNamespace(); CodeCompileUnit unit1 = new CodeCompileUnit(); unit1.Namespaces.Add(nmspace); //--Import the service into the Code-DOM tree. This creates proxy code //--that uses the service. ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1); if (warning == 0) //--If zero then we are good to go { //--Generate the proxy code CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp"); //--Compile the assembly proxy with the appropriate references string[] assemblyReferences = new string[5] { "System.dll", "System.Web.Services.dll", "System.Web.dll", "System.Xml.dll", "System.Data.dll" }; CompilerParameters parms = new CompilerParameters(assemblyReferences); CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1); //-Check For Errors if (results.Errors.Count > 0) { StringBuilder sb = new StringBuilder(); foreach (CompilerError oops in results.Errors) { sb.AppendLine("========Compiler error============"); sb.AppendLine(oops.ErrorText); } throw new System.ApplicationException("Compile Error Occured calling webservice. " + sb.ToString()); } //--Finally, Invoke the web service method Type foundType = null; Type[] types = results.CompiledAssembly.GetTypes(); foreach (Type type in types) { if (type.BaseType == typeof(System.Web.Services.Protocols.SoapHttpClientProtocol)) { Console.WriteLine(type.ToString()); foundType = type; } } object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString()); MethodInfo mi = wsvcClass.GetType().GetMethod(methodName); return mi.Invoke(wsvcClass, args); } else { return null; } } } This works fine when I use built in types, but for my own classes, I get this: Event Type: Error Event Source: TDX Queue Service Event Category: None Event ID: 0 Date: 12/04/2010 Time: 12:12:38 User: N/A Computer: TDXRMISDEV01 Description: System.ArgumentException: Object of type 'TDXDataTypes.AgencyOutput' cannot be converted to type 'AgencyOutput'. 
Server stack trace: at System.RuntimeType.CheckValue(Object value, Binder binder, CultureInfo culture, BindingFlags invokeAttr) at System.Reflection.MethodBase.CheckArguments(Object[] parameters, Binder binder, BindingFlags invokeAttr, CultureInfo culture, Signature sig) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters) at TDXQueueEngine.GenericWebserviceProxy.CallWebService(String webServiceAsmxUrl, String serviceName, String methodName, Object[] args) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\GenericWebserviceProxy.cs:line 76 at TDXQueueEngine.TDXQueueWebserviceItem.Run() in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueueWebserviceItem.cs:line 99 at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase) at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData) at TDXQueueEngine.TDXQueue.RunProcess.EndInvoke(IAsyncResult result) at TDXQueueEngine.TDXQueue.processComplete(IAsyncResult ar) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueue.cs:line 130 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. The classes reference the same assembly and the same version. Do I need to include my assembly as a reference when building the temporary assembly? If so, how? Thanks.
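
    One hedged workaround for the type-identity problem (not from the linked article): the assembly compiled at runtime declares its own AgencyOutput, so an instance of TDXDataTypes.AgencyOutput can never be passed to the generated method directly, even if TDXDataTypes.dll is added to the compiler references. Re-hydrating the argument into the proxy's own type via XmlSerializer sidesteps that, provided both types serialize to the same XML shape. The helper name below is illustrative.

      using System;
      using System.IO;
      using System.Xml.Serialization;

      static class ProxyTypeBridge
      {
          // Serialize the strongly typed argument and deserialize it as the proxy assembly's
          // equivalent type. Both types must produce the same XML (same root name and shape).
          public static object ConvertForProxy(object localValue, Type proxyType)
          {
              var writer = new XmlSerializer(localValue.GetType());
              var reader = new XmlSerializer(proxyType);

              using (var ms = new MemoryStream())
              {
                  writer.Serialize(ms, localValue);
                  ms.Position = 0;
                  return reader.Deserialize(ms);
              }
          }
      }

    Before calling mi.Invoke, each complex argument would then be mapped with something like ProxyTypeBridge.ConvertForProxy(arg, results.CompiledAssembly.GetType("AgencyOutput")), where the type name is whatever the generated proxy actually declares.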

    Read the article

  • Question about how to use a strongly typed dataset in an N-tier .NET application

    - by sb
    Hello All, I need some expert advice on strong typed data sets in ADO.NET that are generated by the Visual Studio. Here are the details. Thank you in advance. I want to write a N-tier application where Presentation layer is in C#/windows forms, Business Layer is a Web service and Data Access Layer is SQL db. So, I used Visual Studio 2005 for this and created 3 projects in a solution. project 1 is the Data access layer. In this I have used visual studio data set generator to create a strong typed data set and table adapter (to test I created this on the customers table in northwind). The data set is called NorthWindDataSet and the table inside is CustomersTable. project 2 has the web service which exposes only 1 method which is GetCustomersDataSet. This uses the project1 library's table adapter to fill the data set and return it to the caller. To be able to use the NorthWindDataSet and table adapter, I added a reference to the project 1. project 3 is a win forms app and this uses the web service as a reference and calls that service to get the data set. In the process of building this application, in the PL, I added a reference to the DataSet generated above in the project 1 and in form's load I call the web service and assign the received DataSet from the web service to this dataset. But I get the error: "Cannot implicitly convert type 'PL.WebServiceLayerReference.NorthwindDataSet' to 'BL.NorthwindDataSet' e:\My Documents\Visual Studio 2008\Projects\DataSetWebServiceExample\PL\Form1.cs". Both the data sets are same but because I added references from different locations, I am getting the above error I think. So, what I did was I added a reference to project 1 (which defines the data set) to project 3 (the UI) and used the web service to get the DataSet and assing to the right type, now when the project 3 (which has the web form) runs, I get the below runtime exception. "System.InvalidOperationException: There is an error in XML document (1, 5058). --- System.Xml.Schema.XmlSchemaException: Multiple definition of element 'http://tempuri.org/NorthwindDataSet.xsd:Customers' causes the content model to become ambiguous. A content model must be formed such that during validation of an element information item sequence, the particle contained directly, indirectly or implicitly therein with which to attempt to validate each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence." I think this might be because of some cross referenceing errors. My question is, is there a way to use the visual studio generated DataSets in such a way that I can use the same DataSet in all layers (for reuse) but separate the Table Adapter logic to the Data Access Layer so that the front end is abstracted from all this by the web service? If I have hand write the code I loose the goodness the data set generator gives and also if there are columns added later I need to add it by hand etc so I want to use the visual studio wizard as much as possible. thanks for any help on this. sb

    Read the article

  • Issue with Silverlight interop with JBoss WebService

    - by Hadi Eskandari
    I have a simple JAXWS webservice deployed in JBoss. It runs fine with a java client but I'm trying to connect using a Silverlight 3.0 application. I've changed the webservice to use Soap 1.1: @BindingType(value = "http://schemas.xmlsoap.org/wsdl/soap/http") public class UserSessionBean implements UserSessionRemote { ... } I'm using BasicHttpBinding on the Silverlight client. There are two issues: 1- When I connect from VisualStudio (2008 and 2010) to create the webservice proxies, the following exception is thrown, but the proxy is generated successfully. This also happens when I try to update the existing web service reference (but it also updates fine). com.sun.xml.ws.server.UnsupportedMediaException: Unsupported Content-Type: application/soap+xml; charset=utf-8 Supported ones are: [text/xml] at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:291) at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:128) at com.sun.xml.ws.encoding.SOAPBindingCodec.decode(SOAPBindingCodec.java:287) at com.sun.xml.ws.transport.http.HttpAdapter.decodePacket(HttpAdapter.java:276) at com.sun.xml.ws.transport.http.HttpAdapter.access$500(HttpAdapter.java:93) at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:432) at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244) at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135) at org.jboss.wsf.stack.metro.RequestHandlerImpl.doPost(RequestHandlerImpl.java:225) at org.jboss.wsf.stack.metro.RequestHandlerImpl.handleHttpRequest(RequestHandlerImpl.java:82) at org.jboss.wsf.common.servlet.AbstractEndpointServlet.service(AbstractEndpointServlet.java:85) at javax.servlet.http.HttpServlet.service(HttpServlet.java:803) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:182) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446) at java.lang.Thread.run(Thread.java:619) 2- When I use the proxy to fetch some data from the webservice (even methods with primitive types), I get the following error on Silverlight client: "An error occurred while trying to make a request to URI 
'http://localhost:9090/admintool/UserSessionEJB'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details." Setting a breakpoint on my java code, I can see that it is not hit when I run the silverlight client, so it is probably a cross domain issue, but I'm not sure how to handle it (I've already created a crossdomain.xml file and put it beside my HTML page hosting the silverlight client). I appreciate any help!

    Read the article

  • I finished coding my program. What's next? What are the steps I should take that would enable me to

    - by Luay
    Hi, I finished developing a program and would like to sell it online. However, I am not really sure what to do next. Here is my current plan (in order): 1- Add a 'deployment' project (I'm using Visual Studio) to my project so I can create a setup file for my program. 2- Use Visual Studio 'testing' add-ons to test the program. I have no idea how to do it, but will teach myself. 3- When all is done, install the program on my wife's, parents', and in-laws' computers to further test it under different environments. 4- Set up a small Ltd. company (I might start this earlier as it might take a few weeks to complete). 5- Build / finish building a website (actually I started on this step a few weeks ago but haven't devoted enough time to complete it yet). 6- Find a software / service to protect my program from piracy. I know it is impossible to protect the program from piracy; I understand that fully. But I still want to implement some sort of solution that wouldn't harm or deter honest customers. 7- Set up a business PayPal account. 8- Find a software / service to handle payments on my website. 9- Find a software / service to handle issuing license codes. 10- Set up all of the above and go 'live'. However, I have zero experience in this sort of thing and would like your guidance on some points. 1- Is it necessary to set up a company? Can I sell as an individual? Will selling as an individual deter people or companies from buying my software? 2- What is a decent choice of software / service to protect my program from piracy? I have done a quick search and found something called Quick License Manager by Interactive Studios. Is this the sort of thing I am looking for? Is it any good? At which stage do I use or implement their service (or any similar service you might point me to)? I guess I am really asking: how does this work? Would they give me a file I just add to my project, rebuild it, and that's it? 3- What should I implement to handle payments online, and how complicated is it? Could you point me to any such services, please? 4- What should I implement to handle issuing license codes, and how will that software / service coordinate with the software / service that handles the anti-piracy stuff? Could you point me to such services, please? 5- In a different Stack Overflow question (by another user) someone suggested a CMS system called Software Droid (http://www.softwaredroid.com). Is this what I am after? Will it help? 6- If the steps I'm following are incorrect in order or logic, could you please advise me on what I should actually do? I guess these are questions for those of you who have 'been there and done that'. So any guidance will be very much appreciated. Many thanks in advance for your help.

    Read the article

  • Symfony2: validate an object that is not an entity

    - by Marronsuisse
    I am using CraueFormFlowBundle to have a multiple page form, and am trying to do some validation on some of the fields but can't figure out how to do this. The object that needs to be validated isn't an Entity, which is causing me trouble. I tried adding a collectionConstraint in the getDefaultOption function of my form type class, but this doesn't work as I get the "Expected argument of type array or Traversable and ArrayAccess" error. I tried with annotations in my object class, but they don't seem to be taken into account. Are annotations taken into account if the class isn't an entity? (i set enable_annotations to true) Anyway, what is the proper way to do this? Basically, I just want to validate that "age" is an integer... class PoemDataCollectorFormType extends AbstractType { public function buildForm(FormBuilder $builder, array $options) { switch ($options['flowStep']) { case 6: $builder->add('msgCategory', 'hidden', array( )); $builder->add('msgFIB','text', array( 'required' => false, )); $builder->add('age', 'integer', array( 'required' => false, )); break; } } public function getDefaultOptions(array $options) { $options = parent::getDefaultOptions($options); $options['flowStep'] = 1; $options['data_class'] = 'YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector'; $options['intention'] = 'my_secret_key'; return $options; } } EDIT: add code, handle validation with annotations As Cyprian, I was pretty sure that using annotations should work, however it doesn't... Here is how I try: In my Controller: public function collectPoemDataAction() { $collector = $this->get('yop.poem.datacollector'); $flow = $this->get('yop.form.flow.poemDataCollector'); $flow->bind($collector); $form = $flow->createForm($collector); if ($flow->isValid($form)) { .... } } In my PoemDataCollector class, which is my data class (service yop.poem.datacollector): class PoemDataCollector { /** * @Assert\Type(type="integer", message="Age should be a number") */ private $age; } EDIT2: Here is the services implementation: The data class (PoemDataCollector) seems to be linked to the flow class and not to the form.. Is that why there is no validation? <service id="yop.poem.datacollector" class="YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector"> </service> <service id="yop.form.poemDataCollector" class="YOP\YourOwnPoetBundle\Form\Type\PoemDataCollectorFormType"> <tag name="form.type" alias="poemDataCollector" /> </service> <service id="yop.form.flow.poemDataCollector" class="YOP\YourOwnPoetBundle\Form\PoemDataCollectorFlow" parent="craue.form.flow" scope="request"> <call method="setFormType"> <argument type="service" id="yop.form.poemDataCollector" /> </call> </service> How can I do the validation while respecting the craueFormFlowBundle guidelines? The guidelines state: Validation groups To validate the form data class a step-based validation group is passed to the form type. By default, if getName() of the form type returns registerUser, such a group is named flow_registerUser_step1 for the first step. Where should I state my constraint to use those validation groups..? I tried: YOP\YourOwnPoetBundle\PoemBuilder\Form\Type\PoemDataCollectorFormType: properties: name: - MinLength: { limit: 5, message: "Your name must have at least {{ limit }} characters.", groups: [flow_poemDataCollector_step1] } sex: - Type: type: integer message: Please input a number groups: [flow_poemDataCollector_step6] But it is not taken into acount.

    Read the article

  • WCF/REST Get image into picturebox?

    - by Garrith
    So I have wcf rest service which succesfuly runs from a console app, if I navigate to: http://localhost:8000/Service/picture/300/400 my image is displayed note the 300/400 sets the width and height of the image within the body of the html page. The code looks like this: namespace WcfServiceLibrary1 { [ServiceContract] public interface IReceiveData { [OperationContract] [WebInvoke(Method = "GET", BodyStyle = WebMessageBodyStyle.Wrapped, ResponseFormat = WebMessageFormat.Xml, UriTemplate = "picture/{width}/{height}")] Stream GetImage(string width, string height); } public class RawDataService : IReceiveData { public Stream GetImage(string width, string height) { int w, h; if (!Int32.TryParse(width, out w)) { w = 640; } // Handle error if (!Int32.TryParse(height, out h)) { h = 400; } Bitmap bitmap = new Bitmap(w, h); for (int i = 0; i < bitmap.Width; i++) { for (int j = 0; j < bitmap.Height; j++) { bitmap.SetPixel(i, j, (Math.Abs(i - j) < 2) ? Color.Blue : Color.Yellow); } } MemoryStream ms = new MemoryStream(); bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg); ms.Position = 0; WebOperationContext.Current.OutgoingResponse.ContentType = "image/jpeg"; return ms; } } } What I want to do now is use a client application "my windows form app" and add that image into a picturebox. Im abit stuck as to how this can be achieved as I would like the width and height of the image from my wcf rest service to be set by the width and height of the picturebox. I have tryed this but on two of the lines have errors and im not even sure if it will work as the code for my wcf rest service seperates width and height with a "/" if you notice in the url. string uri = "http://localhost:8080/Service/picture"; private void button1_Click(object sender, EventArgs e) { StringBuilder sb = new StringBuilder(); sb.AppendLine("<picture>"); sb.AppendLine("<width>" + pictureBox1.Image.Width + "</width>"); // the url looks like this http://localhost:8080/Service/picture/300/400 when accessing the image so I am trying to set this here sb.AppendLine("<height>" + pictureBox1.Image.Height + "</height>"); sb.AppendLine("</picture>"); string picture = sb.ToString(); byte[] getimage = Encoding.UTF8.GetBytes(picture); // not sure this is right HttpWebRequest req = WebRequest.Create(uri); //cant convert webrequest to httpwebrequest req.Method = "GET"; req.ContentType = "image/jpg"; req.ContentLength = getimage.Length; MemoryStream reqStrm = req.GetRequestStream(); //cant convert IO stream to IO Memory stream reqStrm.Write(getimage, 0, getimage.Length); reqStrm.Close(); HttpWebResponse resp = req.GetResponse(); // cant convert web respone to httpwebresponse MessageBox.Show(resp.StatusDescription); pictureBox1.Image = Image.FromStream(reqStrm); reqStrm.Close(); resp.Close(); } So just wondering if some one could help me out with this futile attempt at adding a variable image size from my rest service to a picture box on button click. This is the host app aswell: namespace ConsoleApplication1 { class Program { static void Main(string[] args) { string baseAddress = "http://" + Environment.MachineName + ":8000/Service"; ServiceHost host = new ServiceHost(typeof(RawDataService), new Uri(baseAddress)); host.AddServiceEndpoint(typeof(IReceiveData), new WebHttpBinding(), "").Behaviors.Add(new WebHttpBehavior()); host.Open(); Console.WriteLine("Host opened"); Console.ReadLine();
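
    For comparison, here is a much smaller client-side sketch (not the original code): because the service takes the width and height in the URL path, a plain GET built from the PictureBox's own dimensions is enough, and no XML payload or request body is needed. The port 8000 matches the console host above and is otherwise an assumption.

      using System;
      using System.Drawing;
      using System.IO;
      using System.Net;
      using System.Windows.Forms;

      // inside the form class:
      private void button1_Click(object sender, EventArgs e)
      {
          string uri = string.Format("http://localhost:8000/Service/picture/{0}/{1}",
              pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height);

          var req = (HttpWebRequest)WebRequest.Create(uri);   // cast: Create returns WebRequest
          req.Method = "GET";

          using (var resp = (HttpWebResponse)req.GetResponse())
          using (var network = resp.GetResponseStream())
          using (var ms = new MemoryStream())
          {
              // Buffer the whole response so GDI+ gets a seekable stream.
              byte[] buffer = new byte[8192];
              int read;
              while ((read = network.Read(buffer, 0, buffer.Length)) > 0)
                  ms.Write(buffer, 0, read);

              ms.Position = 0;
              using (var img = Image.FromStream(ms))
                  pictureBox1.Image = new Bitmap(img);   // copy so the stream can be disposed safely
          }
      }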

    Read the article

  • IIS7 + WCF + Silverlight problems

    - by Eanna
    Hey, I've been building a silverlight application and a WCF service for a while now and recently tried to host them in IIS7. I installed IIS7 on Windows Server 2008 R2 and added these two application to my default website. I am having a number of problems so im hoping one of you can help out... 1) The silverlight and WCF service applications do not work with pass-through authentication. I need to "connect as" the administrator server account when setting up the application. I read online that you should only need to use the "connect as" field when you are connecting to another computer. If i dont supply the admin credentials i get this error. Do i have to set up permissions somewhere else? HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. Detailed Error Information Module IIS Web Core Notification BeginRequest Handler Not yet determined Error Code 0x80070005 Config Error Cannot read configuration file due to insufficient permissions Config File \?\C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\WCFService\Website\web.config Requested URL http:://localhost:80/WCFService/Service.svc Physical Path C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\WCFService\Website\Service.svc Logon Method Not yet determined Logon User Not yet determined Config Source -1: 0: Links and More Information This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error. 2) Visual studio generated 2 webpages to run my silverlight application (.html and .aspx). When I am running the silverlight application (connected as admin) I can navigate to the .html page, no problem. When I try to open the .aspx file i get the following error Server Error in '/Platform' Application. Access is denied. Description: An error occurred while accessing the resources required to serve this request. You might not have permission to view the requested resources. Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the Web server's administrator to give you access to 'C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\Platform\Website\PlatformTestPage.aspx'. Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1 3) The WCF service runs fine (again, connected as admin) until i restart the server. When i try to run the WCF service after a reboot, the mysql assembly seems to be missing from the solution. If i just rebuilt the solution and run the service again... it works (until next restart). Whats causing this error? Solution here - http://tinypic.com/view.php?pic=5yasqx&s=5 Server Error in '/WCFService' Application. Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.FileLoadException: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied. 
Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Assembly Load Trace: The following information can be helpful to determine why the assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' could not be loaded. WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. Stack Trace: [FileLoadException: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.] System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) +0 System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) +567 System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +192 System.Reflection.Assembly.Load(String assemblyString) +35 System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) +243 System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +1423 System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +50 System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +1132 [ServiceActivationException: The service '/WCFService/Service.svc' cannot be activated due to an exception during compilation. The exception message is: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied..] System.Runtime.AsyncResult.End(IAsyncResult result) +889824 System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +179150 System.Web.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) +107 Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1 Thats about it, hope someone reads this message, I wasted most of the weekend trying to fix these problems on my own... thanks

    Read the article

  • WCF with No security

    - by james.ingham
    Hi all, I've got a WCF service setup which I can consume and use as intendid... but only on the same machine. I'm looking to get this working over multiple computers and I'm not fussed about the security. However when I set (client side) the security to = none, I get a InvalidOperationException: The service certificate is not provided for target 'http://xxx.xxx.xxx.xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/'. Specify a service certificate in ClientCredentials. So I'm left with: <security mode="Message"> <message clientCredentialType="None" negotiateServiceCredential="false" algorithmSuite="Default" /> </security> But this gives me another InvalidOperationException: The service certificate is not provided for target 'http://xxx.xxx.xxx.xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/'. Specify a service certificate in ClientCredentials. Why would I have to provide a certificate if security was turned off? Server app config: <system.serviceModel> <services> <service name="Server.WcfServiceLibrary.CheckoutService" behaviorConfiguration="Server.WcfServiceLibrary.CheckoutServiceBehavior"> <host> <baseAddresses> <add baseAddress = "http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/CheckoutService/" /> </baseAddresses> </host> <endpoint address ="" binding="wsDualHttpBinding" contract="Server.WcfServiceLibrary.ICheckoutService"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> <service name="Server.WcfServiceLibrary.ManagementService" behaviorConfiguration="Server.WcfServiceLibrary.ManagementServiceBehavior"> <host> <baseAddresses> <add baseAddress = "http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/" /> </baseAddresses> </host> <endpoint address ="" binding="wsDualHttpBinding" contract="Server.WcfServiceLibrary.IManagementService"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="Server.WcfServiceLibrary.CheckoutServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="False" /> <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="50" maxConcurrentInstances="50" /> </behavior> <behavior name="Server.WcfServiceLibrary.ManagementServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="False" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Client app config: <system.serviceModel> <bindings> <wsDualHttpBinding> <binding name="WSDualHttpBinding_IManagementService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:00:10" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" /> <security mode="Message"> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" /> </security> </binding> </wsDualHttpBinding> </bindings> <client> <endpoint 
address="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/" binding="wsDualHttpBinding" bindingConfiguration="WSDualHttpBinding_IManagementService" contract="ServiceReference.IManagementService" name="WSDualHttpBinding_IManagementService"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> </system.serviceModel> Thanks

    Read the article

  • When I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers

    - by sangfroid
    Hello all, When I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers, but not all of them. Here is the block of code that's giving me trouble : <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core"%> <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%> <c:set var="TEXTVARIABLE">|STRINGOFTEXT|</c:set> <c:set var="OTHERTEXTVARIABLE">${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}</c:set> And here is the exception : javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported. However, everything works fine if I change the URI for the core declaration to http://java.sun.com/jsp/jstl/core So here's the really weird part : for some reason, mixing 1.0 and 1.1 taglib declarations only causes an exception on two of my servers -- my staging server and my production server. It causes no problems at all on my local machine or my development server. Why is this? What could possibly be causing this difference in behavior? The three servers are extremely similar in setup and configuration. The JSP page is being served up by OpenCMS, and I'm using the Caucho's Resin webserver. I understand that you don't know how my servers or CMS are set up, but really, what I'm looking for is ideas. Any ideas at all would help -- this problem has been driving me absolutely batty. Even if you don't know what could be causing the problem, if you have any suggestions at all for how I could approach the problem, that would be extremely helpful. I just don't understand what could cause this difference in behavior between my servers. For reference, here's the full stack trace : javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported. 
at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:369) at org.opencms.jsp.CmsJspTagInclude.includeTagAction(CmsJspTagInclude.java:241) at org.opencms.jsp.CmsJspTagInclude.doEndTag(CmsJspTagInclude.java:472) at _jsp._WEB_22dINF._jsp._online._system._modules.com_MYCOMPANY__marketing._templates._MAIN_0PAGE__jsp._jspService(_MAIN_0PAGE__jsp.java:153) at com.caucho.jsp.JavaPage.service(JavaPage.java:60) at com.caucho.jsp.Page.pageservice(Page.java:579) at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:179) at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115) at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350) at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169) at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193) at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173) at org.opencms.loader.CmsJspLoader.dispatchJsp(CmsJspLoader.java:1227) at org.opencms.loader.CmsJspLoader.load(CmsJspLoader.java:1171) at org.opencms.loader.A_CmsXmlDocumentLoader.load(A_CmsXmlDocumentLoader.java:232) at org.opencms.loader.CmsXmlContentLoader.load(CmsXmlContentLoader.java:52) at org.opencms.loader.CmsResourceManager.loadResource(CmsResourceManager.java:964) at org.opencms.main.OpenCmsCore.showResource(OpenCmsCore.java:1498) at org.opencms.main.OpenCmsServlet.doGet(OpenCmsServlet.java:152) at javax.servlet.http.HttpServlet.service(HttpServlet.java:115) at javax.servlet.http.HttpServlet.service(HttpServlet.java:92) at com.caucho.server.dispatch.ServletFilterChain.doFilter(ServletFilterChain.java:106) at com.caucho.filters.CmsGzipFilter.doFilter(CmsGzipFilter.java:177) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:277) at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:106) at com.caucho.server.dispatch.ForwardFilterChain.doFilter(ForwardFilterChain.java:80) at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:207) at com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:173) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:274) at com.caucho.server.port.TcpConnection.run(TcpConnection.java:514) at com.caucho.util.ThreadPool.runTasks(ThreadPool.java:520) at 
com.caucho.util.ThreadPool.run(ThreadPool.java:442) at java.lang.Thread.run(Thread.java:595) Caused by: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported. at com.caucho.jsp.java.JspNode.error(JspNode.java:1489) at com.caucho.jsp.java.JspNode.error(JspNode.java:1480) at com.caucho.jsp.java.JavaJspGenerator.validate(JavaJspGenerator.java:466) at com.caucho.jsp.JspCompilerInstance.generate(JspCompilerInstance.java:475) at com.caucho.jsp.JspCompilerInstance.compile(JspCompilerInstance.java:373) at com.caucho.jsp.JspManager.compile(JspManager.java:233) at com.caucho.jsp.JspManager.createPage(JspManager.java:177) at com.caucho.jsp.JspManager.createPage(JspManager.java:157) at com.caucho.jsp.PageManager.getPage(PageManager.java:248) at com.caucho.jsp.PageManager.getPage(PageManager.java:166) at com.caucho.jsp.QServlet.getSubPage(QServlet.java:292) at com.caucho.jsp.QServlet.getPage(QServlet.java:210) at com.caucho.server.dispatch.PageFilterChain.compilePage(PageFilterChain.java:206) at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:133) at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57) at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70) at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115) at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175) at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485) at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350) at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169) at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193) at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423) at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173) at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:364) ... 45 more Thanks for the help!

    Read the article

  • How do I fix a "Request format is unrecognized for URL..." error in a web service running in IIS?

    - by Hary
    I am getting the following error while running a web service in IIS: Server Error in '/Inbox Sevice' Application. Request format is unrecognized for URL unexpectedly ending in '/GetMailsInfo'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Request format is unrecognized for URL unexpectedly ending in '/GetMailsInfo'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Request format is unrecognized for URL unexpectedly ending in '/GetMailsInfo'.] System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(Type type, HttpContext context, HttpRequest request, HttpResponse response) +490982 System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(HttpContext context, String verb, String url, String filePath) +104 System.Web.Script.Services.ScriptHandlerFactory.GetHandler(HttpContext context, String requestType, String url, String pathTranslated) +127 System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig) +175 System.Web.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +120 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Version Information: Microsoft .NET Framework Version:2.0.50727.42; ASP.NET Version:2.0.50727.42 Does anyone know why I am seeing this error and if there is any way to fix it?

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. This folder also contains files for uninstalled, disabled Windows components. Even if you don’t have a Windows component installed, it will be present in your WinSXS folder, taking up space.
    Why the WinSXS Folder Gets Too Big
    The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder. The WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component in the WinSXS folder. This means that every Windows Update you install increases the size of your WinSXS folder. This allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it’s a feature that’s rarely used. Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1 — released in 2010. Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn’t be easily removed.
    Clean Up Update Files
    To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that don’t generally add new features. To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type “disk cleanup” into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option and click OK. If you’ve been using your Windows 7 system for a few years, you’ll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don’t see this feature in the Disk Cleanup window, you’re likely behind on your updates — install the latest updates from Windows Update. Windows 8 and 8.1 include built-in features that do this automatically. In fact, there’s a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you’ve installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you’d like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type “disk cleanup” to perform a search, and click the “Free up disk space by removing unnecessary files” shortcut that appears.) Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven’t been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator.
For example, the following command will uninstall all previous versions of components without the scheduled task’s 30-day grace period: DISM.exe /online /Cleanup-Image /StartComponentCleanup The following command will remove files needed for uninstallation of service packs. You won’t be able to uninstall any currently installed service packs after running this command: DISM.exe /online /Cleanup-Image /SPSuperseded The following command will remove all old versions of every component. You won’t be able to uninstall any currently installed service packs or updates after this completes: DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase
Remove Features on Demand
Modern versions of Windows allow you to enable or disable Windows features on demand. You’ll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don’t have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they’ll be made available from your WinSXS folder. This means you won’t have to download anything or provide Windows installation media to install these features. However, these features take up space. While this shouldn’t matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft. To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you: DISM.exe /Online /English /Get-Features /Format:Table You’ll see a table of feature names and their states. To remove a feature from your system, you’d use the following command, replacing NAME with the name of the feature you want to remove. You can get the feature name you need from the table above. DISM.exe /Online /Disable-Feature /featurename:NAME /Remove If you run the /Get-Features command again, you’ll now see that the feature has a status of “Disabled with Payload Removed” instead of just “Disabled.” That’s how you know it’s not taking up space on your computer’s hard drive. If you’re trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.
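As a concrete illustration of the two feature commands above, the pair below first lists the features and then strips the payload for one of them; "TelnetClient" is only an example name, so substitute whatever feature name your own table shows.

    REM Run both commands from an elevated Command Prompt
    DISM.exe /Online /English /Get-Features /Format:Table

    REM Remove the on-disk payload for a feature you never use ("TelnetClient" is illustrative)
    DISM.exe /Online /Disable-Feature /featurename:TelnetClient /Remove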

    Read the article

  • Package Dependencies Error in almost every install

    - by Betaxpression
    New to Ubuntu. In the Other Software sources I have "Debian 4.0 eth" officially supported, "non-us.debian.org/", etc., and "ppa.launchpad.net", and installing applications has stopped working. I think I first came across this problem after installing Blender 2.58. When using Update Manager it prompts for a partial upgrade. Almost every piece of software shows the same error when I try to install it, Package Dependencies Error or GPG PUB KEY missing; I tried fixing them but had no luck. Output of: sudo apt-get update && sudo apt-get upgrade (links disabled http:// -- http:/ as a new user can't post more hyperlinks) Ign http:/non-us.debian.org stable/non-US InRelease Ign http:/non-us.debian.org stable/non-US Release.gpg Ign http:/non-us.debian.org stable/non-US Release Ign http:/non-us.debian.org stable/non-US/contrib TranslationIndex Ign http:/non-us.debian.org stable/non-US/main TranslationIndex Ign http:/non-us.debian.org stable/non-US/non-free TranslationIndex Err http:/non-us.debian.org stable/non-US/main Sources 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/contrib Sources 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/non-free Sources 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/main amd64 Packages 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/contrib amd64 Packages 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/non-free amd64 Packages 503 Service Unavailable Ign http:/non-us.debian.org stable/non-US/contrib Translation-en_IN Ign http:/non-us.debian.org stable/non-US/contrib Translation-en Ign http:/non-us.debian.org stable/non-US/main Translation-en_IN Ign http:/non-us.debian.org stable/non-US/main Translation-en Ign http:/non-us.debian.org stable/non-US/non-free Translation-en_IN Ign http:/non-us.debian.org stable/non-US/non-free Translation-en Ign http:/archive.ubuntu.com natty InRelease Ign http:/archive.canonical.com natty InRelease Ign http:/extras.ubuntu.com natty InRelease Ign http:/http.us.debian.org stable InRelease Ign http:/ftp.us.debian.org etch InRelease Ign http:/archive.ubuntu.com natty-updates InRelease Hit http:/archive.canonical.com natty Release.gpg Get:1 http:/extras.ubuntu.com natty Release.gpg [72 B] Ign http:/ppa.launchpad.net natty InRelease Get:2 http:/http.us.debian.org stable Release.gpg [1,672 B] Ign http:/linux.dropbox.com natty InRelease Ign http:/ftp.us.debian.org etch Release.gpg Ign http:/archive.ubuntu.com natty-security InRelease Hit http:/archive.canonical.com natty Release Hit http:/extras.ubuntu.com natty Release Ign http:/ppa.launchpad.net natty InRelease Get:3 http:/linux.dropbox.com natty Release.gpg [489 B] Ign http:/ftp.us.debian.org etch Release Ign http:/dl.google.com stable InRelease Get:4 http:/archive.ubuntu.com natty Release.gpg [198 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.canonical.com natty/partner Sources Hit http:/extras.ubuntu.com natty/main Sources Get:5 http:/linux.dropbox.com natty Release [2,599 B] Get:6 http:/archive.ubuntu.com natty-updates Release.gpg [198 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.canonical.com natty/partner amd64 Packages Hit http:/extras.ubuntu.com natty/main amd64 Packages Get:7 http:/linux.dropbox.com natty/main amd64 Packages [784 B] Get:8 http:/archive.ubuntu.com natty-security Release.gpg [198 B] Ign http:/ppa.launchpad.net natty InRelease Ign http:/archive.canonical.com natty/partner TranslationIndex Ign http:/extras.ubuntu.com natty/main TranslationIndex
Get:9 http:/http.us.debian.org stable Release [104 kB] Ign http:/linux.dropbox.com natty/main TranslationIndex Hit http:/archive.ubuntu.com natty Release Ign http:/ppa.launchpad.net natty InRelease Ign http:/http.us.debian.org stable Release Hit http:/archive.ubuntu.com natty-updates Release Get:10 http:/ppa.launchpad.net natty InRelease [316 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.ubuntu.com natty-security Release Get:11 http:/ppa.launchpad.net natty InRelease [316 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.ubuntu.com natty/restricted Sources Get:12 http:/ppa.launchpad.net natty Release.gpg [316 B] Ign http:/http.us.debian.org stable/main Sources/DiffIndex Get:13 http:/ppa.launchpad.net natty Release.gpg [316 B] Hit http:/archive.ubuntu.com natty/main Sources Ign http:/ftp.us.debian.org etch/contrib TranslationIndex Ign http:/http.us.debian.org stable/contrib Sources/DiffIndex Get:14 http:/ppa.launchpad.net natty Release.gpg [1,502 B] Ign http:/http.us.debian.org stable/non-free Sources/DiffIndex Ign http:/ftp.us.debian.org etch/main TranslationIndex Get:15 http:/ppa.launchpad.net natty Release.gpg [1,928 B] Ign http:/http.us.debian.org stable/main amd64 Packages/DiffIndex Ign http:/ftp.us.debian.org etch/non-free TranslationIndex Ign http:/ppa.launchpad.net natty Release.gpg Hit http:/http.us.debian.org stable/contrib amd64 Packages/DiffIndex W: GPG error: http:/http.us.debian.org stable Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY AED4B06F473041FA NO_PUBKEY 64481591B98321F9 W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_sunab_kdenlive-release_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_ubuntu-wine_ppa_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message E: Could not open file /var/lib/apt/lists/http.us.debian.org_debian_dists_stable_contrib_binary-amd64_Packages.IndexDiff - open (2: No such file or directory) output to: sudo cat /etc/apt/sources.list # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release amd64 (20110427.1)]/ natty main restricted # See http:/help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http:/archive.ubuntu.com/ubuntu natty main restricted deb-src http:/archive.ubuntu.com/ubuntu natty restricted main multiverse universe #Added by software-properties ## Major bug fix updates produced after the final release of the ## distribution. deb http:/archive.ubuntu.com/ubuntu natty-updates main restricted deb-src http:/archive.ubuntu.com/ubuntu natty-updates restricted main multiverse universe #Added by software-properties ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http:/archive.ubuntu.com/ubuntu natty universe deb http:/archive.ubuntu.com/ubuntu natty-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. 
deb http:/archive.ubuntu.com/ubuntu natty multiverse deb http:/archive.ubuntu.com/ubuntu natty-updates multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse # deb-src http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse deb http:/archive.ubuntu.com/ubuntu natty-security main restricted deb-src http:/archive.ubuntu.com/ubuntu natty-security restricted main multiverse universe #Added by software-properties deb http:/archive.ubuntu.com/ubuntu natty-security universe deb http:/archive.ubuntu.com/ubuntu natty-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http:/archive.canonical.com/ubuntu natty partner deb-src http:/archive.canonical.com/ubuntu natty partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http:/extras.ubuntu.com/ubuntu natty main deb-src http:/extras.ubuntu.com/ubuntu natty main deb http:/ftp.us.debian.org/debian/ etch main contrib non-free deb-src http:/ftp.us.debian.org/debian/ etch main contrib non-free deb http:/http.us.debian.org/debian stable main contrib non-free deb-src http:/http.us.debian.org/debian stable main contrib non-free deb http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free deb-src http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free Thanks But after removing Debian repositories still getting this error: W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9BDB3D89CE49EC21, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 80E7349A06ED541C, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C851674F96FD737, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 94E58C34A8670E8C, E:Unable to parse package file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_i18n_Index (1) I actually tried this before, but i am always getting this error --Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys 8C851674F96FD737 gpg: requesting key F96FD737 from hkp server keyserver.ubuntu.com ?: keyserver.ubuntu.com: Connection refused gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused gpg: no valid OpenPGP data found. gpg: Total number processed: 0
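One way to clear the remaining NO_PUBKEY warnings, assuming the default keyserver port is blocked (the "Connection refused" output above suggests it is), is to fetch the missing PPA keys over port 80 and then rebuild the package lists. A rough sketch using the key IDs quoted in the error messages:

    # Import the missing signing keys via the HTTP port, since port 11371 appears blocked
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys \
        9BDB3D89CE49EC21 80E7349A06ED541C 8C851674F96FD737 94E58C34A8670E8C

    # Discard the half-downloaded index files and fetch fresh ones
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt-get update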

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage a Cloud-based Mobile Backend as a Service for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing rapidly in day-to-day business. An enterprise can’t survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give your business and IT strategy extra mileage. So organizations and mobile developers are looking for different strategies for meeting this demand and adopting different development approaches for their mobile apps. Some developers are adopting hybrid mobile app development platforms to deliver their products on multiple platforms with fast time-to-market. Others are adopting a Mobile Enterprise Application Platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.
    The Challenges of Enterprise Mobility
    The real challenge of enterprise mobile apps is not creating the front-end environment or developing the front-end for multiple platforms. The most important part of enterprise mobile apps is exposing your enterprise data to mobile devices, and the real pain is that your business data might reside in a lot of different systems, including legacy systems, ERP systems, etc., and these systems are deployed with a lot of security restrictions. Exposing your data from on-premises servers is not an easy thing for most business organizations. Many organizations spend too much time on their front-end development strategy, but they are really lacking a strategy on the back-end for exposing business data to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware systems is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services. For some mobile apps, we can’t predict the user base, especially for products whose customer base can grow at any time. And for today’s mobile apps, fast time-to-market is critical, so spending too much time up front on a mobile app’s scalability is not worthwhile. The real power of the Cloud is agility and on-demand scalability, where we can scale our applications up and down very easily. It would be great if we could apply the power of the Cloud to mobile apps. So using the Cloud for mobile apps is a natural fit, where we can use the Cloud as the storage for mobile apps and the hosting mechanism for mobile back-end services, and enjoy the full power of the Cloud with a greater level of on-demand scalability and operational agility. So a Cloud-based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability provided by public cloud vendors such as Microsoft Windows Azure.
    Mobile Backend as a Service (MBaaS)
    We have discussed the key challenges of enterprise mobile apps and how we can leverage the Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of the Cloud.
The information below provides the key features of a typical MBaaS platform:
- Cloud-based storage for your application data.
- Automatic REST API services on the application data, for CRUD operations.
- Native push notification services with massive scalability.
- User management services for authenticating users.
- User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
- Scheduler services for periodically sending data to mobile devices.
- Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily and securely accessing the mobile services from mobile apps.
Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems. We can use the same mobile services for multiple platforms, so we can reuse the application logic across mobile platforms. Public cloud vendors are building mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great MBaaS offering that leverages the Windows Azure Cloud platform’s PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.
The Challenges of MBaaS Solutions
If you are building your mobile apps against new data storage, it will be very easy, since there are no integration challenges to face. But in most use cases, you have to extract application data that is stored on on-premises servers, which might be behind VPNs and firewalls. So exposing this data to your MBaaS solution with proper security would be a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with your legacy systems for many enterprise mobile apps, so you should be very careful about choosing your MBaaS vendor. At the same time, you should have a proper strategy for mobilizing the application data stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools.
Windows Azure Mobile Services
Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices and can be used as a cloud backend for your mobile apps, providing global availability and reach. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services, and scheduler services, and offers native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules, for your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure Cloud platform for solving many challenges regarding enterprise mobility, and the entire Windows Azure platform can play a key role as the backend for your mobile apps.
With Windows Azure, you can easily connect to your on-premises systems, which is key for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises who have been leveraging the Microsoft ecosystem for their application and IT infrastructure. Windows Azure Mobile Services is moving to its next evolution, where you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose among multiple storage options when creating a new mobile service instance. For example, there should be different storage providers, such as a SQL Server storage provider and a Table storage provider, so developers can pick their preferred storage when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.
MBaaS Over MEAP
Recently, many larger enterprises have adopted Mobile Enterprise Application Platforms (MEAP) for their mobile apps. I haven’t worked on any production MEAP solution, but I have heard that developers really struggle with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high, and I am completely against using large proprietary ecosystems for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used to integrate with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems, and these REST APIs can be hosted on the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, then you can easily build mobile front-ends for multiple platforms. You can follow me on Twitter @shijucv
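The Node.js server-side scripting mentioned above is easiest to see with a small example. The sketch below is a hypothetical table script of the kind Windows Azure Mobile Services runs for an insert operation; the table name, fields, and validation logic are illustrative only.

    // Hypothetical insert script for a "TodoItem" table in Windows Azure Mobile Services.
    // The runtime calls insert(item, user, request) for every insert against the table.
    function insert(item, user, request) {
        // Reject anonymous writes; 'user' is populated by the built-in authentication providers.
        if (user.level === 'anonymous') {
            request.respond(statusCodes.UNAUTHORIZED, 'You must be signed in.');
            return;
        }
        // Stamp the record server-side before it reaches storage.
        item.createdAt = new Date();
        item.owner = user.userId;
        // Continue with the default insert into the backing store.
        request.execute();
    }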

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspxDidn’t make it to TechEd this year? Don’t worry!  This month, we’ll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros.  Today’s article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform. NEW! VMs that are shutdown from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Previous to this enhancement being available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future.  And, this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs. Which scenarios does the new enhancement best fit? Being able to easily shutdown VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as: On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.  "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start-up the remaining VMs only when needed to support peak loads. Disaster Recovery - Start-up "cold" VMs when needed to recover from disaster scenarios. BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shutdown VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned.  These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses.  This means that when the VM is later started after being shutdown from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric. In some cases, you may want to shutdown VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state.  Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity.  Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below.  Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each... How many ways can I shutdown a VM? 
In Windows Azure Infrastructure Services, there's three general ways that can be used to safely shutdown VMs: Shutdown VM via Windows Azure Management Portal Shutdown Guest Operating System inside the VM Stop VM via Windows PowerShell using Windows Azure PowerShell Module Although each of these options performs a safe shutdown of the guest operation system and the VM itself, each option handles the VM shutdown end state differently. Shutdown VM via Windows Azure Management Portal When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shutdown and "deallocated" from fabric resources.  Shutdown button on Virtual Machines page in Windows Azure Management Portal  When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status as shown in the figure below. Virtual Machine in a "Stopped (Deallocated)" Status "Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to allocate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.  TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs.  Virtual networks permit you to assign a specific IP Address Space for use with VMs that are assigned to that virtual network.  As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using. What about consistency of External IP Addresses? Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running.  As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved.  If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted. TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service. Shutdown Guest Operating System inside the VM When performing a Guest OS shutdown or restart ( ie., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shutdown from within the Guest OS and is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status that was shown in the previous figure. Note that it may require a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than through an Azure management endpoint. Virtual Machine in a "Stopped" Status VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.  
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future. Stop VM via Windows PowerShell In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell: When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.  When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is the VM will be left in a "Stopped" status just like the end result when performing a Guest OS shutdown operation. So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis. TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module.  So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality. Want to Learn More about Windows Azure Infrastructure Services? To learn more about Windows Azure Infrastructure Services, be sure to check-out these additional FREE resources: Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!  Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!  Looking forward to seeing you in the Cloud! - Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group
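To make the PowerShell option above concrete, here is a minimal sketch using the Stop-AzureVM cmdlet from the Windows Azure PowerShell module; the cloud service and VM names are placeholders, and it assumes a subscription has already been selected.

    # Stop the VM but keep its fabric reservation: status becomes "Stopped", compute charges continue
    Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM" -StayProvisioned

    # Stop and deallocate: status becomes "Stopped (Deallocated)", compute charges stop
    # (-Force suppresses the warning shown when stopping the last VM would release the service's VIP)
    Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM" -Force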

    Read the article

< Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >