Search Results

Search found 22633 results on 906 pages for 'service accounts'.

  • SharePoint 3.0 Search Stuck Starting

    - by Skaughty
    I have a single server running SharePoint 3.0 on Windows Server 2003 SP2, not joined to a domain. My search service stopped working, so I went into Central Administration and hit Stop, then Start. I soon found out this is not the way to do it. Now I cannot get the search service to start again. I have tried everything I can think of and nothing works: the status changes to Starting but never reaches Started. I have to run "stsadm -o spsearch -action stop" from the command prompt to stop it. I am using the Local Service account for both the service account and the content access account. Can anyone help me?
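
    For context, the stop/start cycle from the command line looks roughly like the sketch below. This is a hedged illustration rather than a guaranteed fix: it assumes the default WSS 3.0 bin path, and the account arguments are placeholders (a domain account would also need the matching -farmservicepassword / -farmcontentaccesspassword switches):

        cd /d "%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN"
        stsadm -o spsearch -action stop
        stsadm -o spsearch -action start -farmserviceaccount "NT AUTHORITY\LOCAL SERVICE" -farmcontentaccessaccount "NT AUTHORITY\LOCAL SERVICE"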

  • I cannot log in to my Ubuntu admin account! Help!

    - by Spinz01
    I used LightDM to hide the users at the Ubuntu login prompt, and it hid my account too. The only one visible is "Other", and it doesn't have a known password. So I accessed the terminal and created a guest account and another account without passwords, but they are not visible either. I am new to using the terminal, so I don't know how to write the commands to make all users/accounts visible at the login screen, or at least to make the guest accounts visible, so that once in the GUI I can change the login settings and see my admin account again. I have no idea what else I could do, or even how to do it. I would appreciate any help; I'm in a desperate situation because I really need to access my desktop. Thank you! Any ideas are appreciated!
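
    The setting behind this behavior is most likely LightDM's greeter-hide-users option. A minimal sketch of undoing it, assuming the stock configuration file location on this era of Ubuntu:

        # /etc/lightdm/lightdm.conf -- show the user list on the greeter again
        [SeatDefaults]
        greeter-hide-users=false

    After saving the file, reboot (or restart the lightdm service) and the account list should reappear.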

  • Google Analytics account setup for multiple personal websites?

    - by User
    I have multiple personal websites that I develop, and I plan to develop more over time. The number of websites is currently greater than one but less than 50. Currently I have a single Google account with a single Analytics account that has a web property for each of my sites. My understanding is that you can have up to 25 Analytics accounts attached to a single Google account, and each of those 25 accounts can have up to 50 web properties in it, which would allow me to track up to 1,250 sites. I don't think I'll be hitting that number anytime soon; however, are there other reasons to structure accounts differently, such as using a separate Google account for each site and then adding myself as an administrator?

  • Samba login before requesting list of shares?

    - by user69359
    I'm relatively inexperienced at this and am starting to get a little frustrated. I have an Ubuntu server with several user accounts. The Samba file server hosts a number of public shares plus the home directory of each user as a private share. When connecting to the server from my OS X machine, I am prompted to enter my login details and get an overview of all the public shares plus my account's home directory share. Exactly the way I want it. However, when I connect to the server from an Ubuntu machine, I am never prompted for any login details and can only see the public folders. If a user wants to reach their home directory on the server, they have to go through "Connect to Server" and mount their specific private share. Is there any way to configure Samba or the Ubuntu client so that the user experience is more like it is on my OS X box? Thank you for your time!
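
    One server-side knob worth checking: if smb.conf lets unauthenticated connections fall back to a guest session, a client that connects without credentials will only ever see the public shares and never gets asked to log in. A sketch of the relevant sections, assuming a stock [homes]-based setup rather than the poster's actual file:

        [global]
           security = user
           # Never fall back to an anonymous session; clients must authenticate,
           # which is what triggers the credentials prompt.
           map to guest = never

        [homes]
           comment = Home Directories
           valid users = %S
           browseable = no
           read only = no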

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I want to get a WCF-over-TCP service working. I was having some problems modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config, and when I run the service the WCF Test Client can work with it fine:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" />
                  </baseAddresses>
                </host>
                <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfTcpTest.Service1Behavior">
                  <serviceMetadata httpGetEnabled="True"/>
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    This works perfectly, no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the httpGetEnabled serviceMetadata element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://localhost:1337/Service1/" />
                  </baseAddresses>
                </host>
                <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WcfTcpTest.Service1Behavior">
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    But when I run this I get this error in the WCF Service Host:

        System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract.

    I get the feeling that you can't send metadata using TCP, but if that's the case, why is there a mexTcpBinding option?
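
    The exception text points at the likely answer: a mex endpoint of any binding, including mexTcpBinding, still requires the ServiceMetadataBehavior; only the httpGetEnabled attribute is tied to HTTP. A sketch of the behavior section with the element restored but HTTP GET left off, since this service no longer has an HTTP base address:

        <serviceBehaviors>
          <behavior name="WcfTcpTest.Service1Behavior">
            <!-- Required for any IMetadataExchange endpoint; no httpGetEnabled,
                 because metadata is served only through the mexTcpBinding endpoint. -->
            <serviceMetadata />
            <serviceDebug includeExceptionDetailInFaults="True" />
          </behavior>
        </serviceBehaviors>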

  • Call Webservice without adding a WebReference - with Complex Types

    - by ck
    I'm using the code at This Site to call a web service dynamically.

        [SecurityPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public static object CallWebService(string webServiceAsmxUrl, string serviceName, string methodName, object[] args)
        {
            System.Net.WebClient client = new System.Net.WebClient();
            //-Connect To the web service
            using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl"))
            {
                //--Now read the WSDL file describing a service.
                ServiceDescription description = ServiceDescription.Read(stream);

                ///// LOAD THE DOM /////////
                //--Initialize a service description importer.
                ServiceDescriptionImporter importer = new ServiceDescriptionImporter();
                importer.ProtocolName = "Soap12"; // Use SOAP 1.2.
                importer.AddServiceDescription(description, null, null);

                //--Generate a proxy client.
                importer.Style = ServiceDescriptionImportStyle.Client;
                //--Generate properties to represent primitive values.
                importer.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties;

                //--Initialize a Code-DOM tree into which we will import the service.
                CodeNamespace nmspace = new CodeNamespace();
                CodeCompileUnit unit1 = new CodeCompileUnit();
                unit1.Namespaces.Add(nmspace);

                //--Import the service into the Code-DOM tree. This creates proxy code
                //--that uses the service.
                ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1);
                if (warning == 0) //--If zero then we are good to go
                {
                    //--Generate the proxy code
                    CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp");

                    //--Compile the assembly proxy with the appropriate references
                    string[] assemblyReferences = new string[5] { "System.dll", "System.Web.Services.dll", "System.Web.dll", "System.Xml.dll", "System.Data.dll" };
                    CompilerParameters parms = new CompilerParameters(assemblyReferences);
                    CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1);

                    //-Check For Errors
                    if (results.Errors.Count > 0)
                    {
                        StringBuilder sb = new StringBuilder();
                        foreach (CompilerError oops in results.Errors)
                        {
                            sb.AppendLine("========Compiler error============");
                            sb.AppendLine(oops.ErrorText);
                        }
                        throw new System.ApplicationException("Compile Error Occured calling webservice. " + sb.ToString());
                    }

                    //--Finally, Invoke the web service method
                    Type foundType = null;
                    Type[] types = results.CompiledAssembly.GetTypes();
                    foreach (Type type in types)
                    {
                        if (type.BaseType == typeof(System.Web.Services.Protocols.SoapHttpClientProtocol))
                        {
                            Console.WriteLine(type.ToString());
                            foundType = type;
                        }
                    }
                    object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString());
                    MethodInfo mi = wsvcClass.GetType().GetMethod(methodName);
                    return mi.Invoke(wsvcClass, args);
                }
                else
                {
                    return null;
                }
            }
        }

    This works fine when I use built-in types, but for my own classes, I get this:

        Event Type: Error
        Event Source: TDX Queue Service
        Event Category: None
        Event ID: 0
        Date: 12/04/2010
        Time: 12:12:38
        User: N/A
        Computer: TDXRMISDEV01
        Description: System.ArgumentException: Object of type 'TDXDataTypes.AgencyOutput' cannot be converted to type 'AgencyOutput'.

        Server stack trace:
           at System.RuntimeType.CheckValue(Object value, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
           at System.Reflection.MethodBase.CheckArguments(Object[] parameters, Binder binder, BindingFlags invokeAttr, CultureInfo culture, Signature sig)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
           at TDXQueueEngine.GenericWebserviceProxy.CallWebService(String webServiceAsmxUrl, String serviceName, String methodName, Object[] args) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\GenericWebserviceProxy.cs:line 76
           at TDXQueueEngine.TDXQueueWebserviceItem.Run() in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueueWebserviceItem.cs:line 99
           at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
           at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
           at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)

        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase)
           at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData)
           at TDXQueueEngine.TDXQueue.RunProcess.EndInvoke(IAsyncResult result)
           at TDXQueueEngine.TDXQueue.processComplete(IAsyncResult ar) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueue.cs:line 130

        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    The classes reference the same assembly and the same version. Do I need to include my assembly as a reference when building the temporary assembly? If so, how? Thanks.
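
    On the closing question: the exception text ('TDXDataTypes.AgencyOutput' vs. 'AgencyOutput') suggests the generated proxy assembly defines its own AgencyOutput type rather than reusing the one from TDXDataTypes. A hedged sketch of one direction to try, referencing the shared-types assembly when compiling the proxy (the assembly name is inferred from the error text, and a reference alone may not be enough; getting the importer to reuse existing types can additionally require a SchemaImporterExtension or a wsdl.exe-style /sharetypes approach):

        // Resolve the full path of the assembly that defines AgencyOutput and add
        // it to the reference list used when compiling the generated proxy code.
        string sharedTypesAssembly = typeof(TDXDataTypes.AgencyOutput).Assembly.Location;
        string[] assemblyReferences = new string[]
        {
            "System.dll", "System.Web.Services.dll", "System.Web.dll",
            "System.Xml.dll", "System.Data.dll",
            sharedTypesAssembly
        };
        CompilerParameters parms = new CompilerParameters(assemblyReferences);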

  • Question about how to use strongly typed DataSets in an N-tier .NET application

    - by sb
    Hello all, I need some expert advice on the strongly typed DataSets in ADO.NET that Visual Studio generates. Here are the details; thank you in advance.

    I want to write an N-tier application where the presentation layer is C#/Windows Forms, the business layer is a web service, and the data access layer is a SQL database. I used Visual Studio 2005 for this and created three projects in one solution.

    Project 1 is the data access layer. In this I used the Visual Studio DataSet generator to create a strongly typed DataSet and table adapter (to test, I created this on the Customers table in Northwind). The DataSet is called NorthwindDataSet and the table inside is CustomersTable.

    Project 2 has the web service, which exposes only one method, GetCustomersDataSet. This uses project 1's table adapter to fill the DataSet and return it to the caller. To be able to use the NorthwindDataSet and table adapter, I added a reference to project 1.

    Project 3 is a WinForms app that uses the web service as a reference and calls the service to get the DataSet. In the process of building this application, in the PL I added a reference to the DataSet generated in project 1, and in the form's Load event I call the web service and assign the received DataSet to this DataSet. But I get the error:

        Cannot implicitly convert type 'PL.WebServiceLayerReference.NorthwindDataSet' to 'BL.NorthwindDataSet' e:\My Documents\Visual Studio 2008\Projects\DataSetWebServiceExample\PL\Form1.cs

    Both DataSets are the same, but because I added the references from different locations, I am getting the above error, I think. So what I did was add a reference to project 1 (which defines the DataSet) to project 3 (the UI), use the web service to get the DataSet, and assign it to the right type. Now when project 3 (which has the form) runs, I get the runtime exception below, which I think might be caused by some cross-referencing errors:

        System.InvalidOperationException: There is an error in XML document (1, 5058). ---> System.Xml.Schema.XmlSchemaException: Multiple definition of element 'http://tempuri.org/NorthwindDataSet.xsd:Customers' causes the content model to become ambiguous. A content model must be formed such that during validation of an element information item sequence, the particle contained directly, indirectly or implicitly therein with which to attempt to validate each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence.

    My question is: is there a way to use the Visual Studio-generated DataSets such that I can use the same DataSet in all layers (for reuse) but keep the table adapter logic in the data access layer, so that the front end is abstracted from all this by the web service? If I hand-write the code I lose the benefits the DataSet generator gives me, and if columns are added later I need to add them by hand, so I want to use the Visual Studio wizard as much as possible. Thanks for any help on this. sb
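
    For what it's worth, the server side of that layout can stay entirely wizard-generated. A minimal sketch, using the poster's type names; the NorthwindDataSetTableAdapters namespace and the CustomersTable property follow Visual Studio's usual generation conventions and are assumptions here:

        // Web method returning the shared typed DataSet; the typed DataSet and
        // its table adapters live in the data access layer assembly (project 1).
        [WebMethod]
        public NorthwindDataSet GetCustomersDataSet()
        {
            NorthwindDataSet ds = new NorthwindDataSet();
            using (NorthwindDataSetTableAdapters.CustomersTableAdapter adapter =
                       new NorthwindDataSetTableAdapters.CustomersTableAdapter())
            {
                adapter.Fill(ds.CustomersTable); // adapter logic stays in the DAL
            }
            return ds;
        }

    The remaining wrinkle is on the client: the generated web reference emits its own copy of NorthwindDataSet, which is exactly the compile error quoted above. The usual workarounds are editing the generated Reference.cs to use the shared type, or deserializing the returned data into the shared DataSet, both of which trade away a little of the wizard's convenience.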

  • Passing integer lists in a sql query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, and trying to decide which of them is best in which situation: what are the benefits of each, and what are the pitfalls that should be avoided. :) Right now I know of three ways that we currently use in our application.

    1) Table-valued parameter. Create a new table-valued type in SQL Server:

        CREATE TYPE [dbo].[TVP_INT] AS TABLE(
            [ID] [int] NOT NULL
        )

    Then run the query against it:

        using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString))
        {
            var comm = conn.CreateCommand();
            comm.CommandType = CommandType.Text;
            comm.CommandText = @"
                UPDATE DA
                SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
                FROM [Account] DA
                JOIN @values IDs ON DA.ID = IDs.ID";
            comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable())
            {
                TypeName = "TVP_INT"
            });
            conn.Open();
            comm.ExecuteScalar();
        }

    The major disadvantage of this method is the fact that LINQ doesn't support table-valued parameters (if you create an SP with a TVP parameter, LINQ won't be able to run it). :(

    2) Convert the list to binary and use it in LINQ! This is a bit better: create an SP, and you can run it from within LINQ. :) To do this, the SP takes an IMAGE parameter, and we use a user-defined function (UDF) to convert it to a table. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance. :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable to a byte array. The extension method:

        public static Byte[] ToBinary(this IEnumerable<Int32> intList)
        {
            return ToBinaryEnum(intList).ToArray();
        }

        private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList)
        {
            IEnumerator<Int32> marker = intList.GetEnumerator();
            while (marker.MoveNext())
            {
                Byte[] result = BitConverter.GetBytes(marker.Current);
                Array.Reverse(result);
                foreach (byte b in result)
                    yield return b;
            }
        }

    The SP:

        CREATE PROCEDURE [Accounts-UpdateImportAttempts]
            @values IMAGE
        AS
        BEGIN
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4
        END

    And we can use it by running the SP directly, or in any LINQ query we need:

        using (var db = new DataContext())
        {
            db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary());
            // or
            var accounts = db.Accounts
                .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4)
                    .Select(i => i.Value4)
                    .Contains(a.ID));
        }

    This method has the benefit of working with compiled queries in LINQ (which will have the same SQL definition and query plan, so they will also be cached), and it can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time. :)

    3) The simple LINQ .Contains(). A simpler approach, and perfect in simple scenarios, but of course limited by this:

        using (var db = new DataContext())
        {
            var accounts = db.Accounts
                .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID));
        }

    The biggest drawback of this method is that each integer in the downloadResults variable is passed as a separate parameter, so the query is limited by SQL Server's cap on parameters per query, which is around 2,100.

    So I'd like to ask: which of these do you think is best, and what other methods and approaches have I missed?
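
    One piece not shown in the post is the ToDataTable() extension the TVP version relies on. A minimal sketch of what it presumably looks like, mirroring the single ID column of TVP_INT:

        public static DataTable ToDataTable(this IEnumerable<int> ids)
        {
            // One int column named ID, matching CREATE TYPE [dbo].[TVP_INT]
            DataTable table = new DataTable();
            table.Columns.Add("ID", typeof(int));
            foreach (int id in ids)
                table.Rows.Add(id);
            return table;
        }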

  • Issue with Silverlight interop with JBoss WebService

    - by Hadi Eskandari
    I have a simple JAX-WS web service deployed in JBoss. It runs fine with a Java client, but now I'm trying to connect from a Silverlight 3.0 application. I've changed the web service to use SOAP 1.1:

        @BindingType(value = "http://schemas.xmlsoap.org/wsdl/soap/http")
        public class UserSessionBean implements UserSessionRemote {
            ...
        }

    I'm using BasicHttpBinding on the Silverlight client. There are two issues:

    1- When I connect from Visual Studio (2008 and 2010) to create the web service proxies, the following exception is thrown, but the proxy is generated successfully. This also happens when I try to update the existing web service reference (but it also updates fine).

        com.sun.xml.ws.server.UnsupportedMediaException: Unsupported Content-Type: application/soap+xml; charset=utf-8 Supported ones are: [text/xml]
            at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:291)
            at com.sun.xml.ws.encoding.StreamSOAPCodec.decode(StreamSOAPCodec.java:128)
            at com.sun.xml.ws.encoding.SOAPBindingCodec.decode(SOAPBindingCodec.java:287)
            at com.sun.xml.ws.transport.http.HttpAdapter.decodePacket(HttpAdapter.java:276)
            at com.sun.xml.ws.transport.http.HttpAdapter.access$500(HttpAdapter.java:93)
            at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:432)
            at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:244)
            at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:135)
            at org.jboss.wsf.stack.metro.RequestHandlerImpl.doPost(RequestHandlerImpl.java:225)
            at org.jboss.wsf.stack.metro.RequestHandlerImpl.handleHttpRequest(RequestHandlerImpl.java:82)
            at org.jboss.wsf.common.servlet.AbstractEndpointServlet.service(AbstractEndpointServlet.java:85)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:182)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446)
            at java.lang.Thread.run(Thread.java:619)

    2- When I use the proxy to fetch some data from the web service (even methods with primitive types), I get the following error on the Silverlight client:

        "An error occurred while trying to make a request to URI 'http://localhost:9090/admintool/UserSessionEJB'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details."

    Setting a breakpoint in my Java code, I can see that it is not hit when I run the Silverlight client, so it is probably a cross-domain issue, but I'm not sure how to handle it (I've already created a crossdomain.xml file and put it beside my HTML page hosting the Silverlight client). I appreciate any help!
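
    On the cross-domain point, two details often matter: Silverlight looks for the policy file at the root of the service's domain (here that would be http://localhost:9090/clientaccesspolicy.xml), not next to the hosting page, and for SOAP services it prefers clientaccesspolicy.xml over crossdomain.xml. A permissive sketch of that file, suitable for testing only:

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="SOAPAction">
                <domain uri="*"/>
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true"/>
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>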

  • I finished coding my program. What's next? What steps should I take to be able to sell it online?

    - by Luay
    Hi, I finished developing a program and would like to sell it online. However, I am not really sure what to do next. Here is my current plan, in order:

    1- Add a deployment project (I'm using Visual Studio) to my solution so I can create a setup file for my program.
    2- Use Visual Studio's testing add-ons to test the program. I have no idea how to do it, but I will teach myself.
    3- When all is done, install the program on my wife's, parents' and in-laws' computers to further test it under different environments.
    4- Set up a small Ltd. company. (I might start this earlier, as it might take a few weeks to complete.)
    5- Build / finish building a website (actually I started on this step a few weeks ago but haven't devoted enough time to complete it yet).
    6- Find software or a service to protect my program from piracy. I know it is impossible to protect the program from piracy completely; I understand that fully. But I still want to implement some sort of solution that wouldn't harm or deter honest customers.
    7- Set up a business PayPal account.
    8- Find software or a service to handle payments on my website.
    9- Find software or a service to handle issuing license codes.
    10- Set up all of the above and go live.

    However, I have zero experience in this sort of thing and would like your guidance on some points:

    1- Is it necessary to set up a company? Can I sell as an individual? Will selling as an individual deter people or companies from buying my software?
    2- What is a decent choice of software or service to protect my program from piracy? I have done a quick search and found something called Quick License Manager by Interactive Studios. Is this the sort of thing I am looking for? Is it any good? At which stage do I use or implement their service (or any similar service you might point me to)? I guess I am really asking: how does this work? Would they give me a file I just add to my project and rebuild, and that's it?
    3- What should I implement to handle payments online, and how complicated is it? Could you point me to any such services, please?
    4- What should I implement to handle issuing license codes, and how will that software or service coordinate with the one that handles the anti-piracy stuff? Could you point me to such services, please?
    5- In a different Stack Overflow question (by another user) someone suggested a CMS called Software Droid (http://www.softwaredroid.com). Is this what I am after? Will it help?
    6- If the steps I'm following are incorrect in order or logic, could you please advise me on what I should actually do?

    I guess these are questions for those of you who have been there and done that, so any guidance will be very much appreciated. Many thanks in advance for your help.

  • Gridview delete/edit not working when using select parameter

    - by Brian Carroll
    New to ASP.NET here. I created a SqlDataSource and set up a basic select query (SELECT * FROM Accounts) using the wizard, then had the wizard generate the INSERT, UPDATE and DELETE queries. I connected this data source to a GridView with editing and deleting enabled. Everything works fine: the SELECT query returns all records and I can edit/delete them. Now I need to pass a parameter to the SELECT command to filter the records to those with the current user's ID (pulled from Membership.GetUser). When I add this parameter, the SELECT command works fine, but the edit/delete buttons in the GridView no longer work. No error is generated; the page refreshes, but the records are not updated in the database. I don't understand what is wrong. Code:

        <%
            Dim u As MembershipUser
            Dim userid As String
            u = Membership.GetUser(User.Identity.Name)
            userid = u.ProviderUserKey.ToString
            SqlDataSource1.SelectParameters("UserId").DefaultValue = userid
        %>

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataKeyNames="ID" DataSourceID="SqlDataSource1">
            <Columns>
                <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
                <asp:BoundField DataField="ID" HeaderText="ID" InsertVisible="False"
                    ReadOnly="True" SortExpression="ID" />
                <asp:BoundField DataField="UserId" HeaderText="UserId" SortExpression="UserId" />
                <asp:BoundField DataField="AccountName" HeaderText="AccountName" SortExpression="AccountName" />
                <asp:BoundField DataField="DateAdded" HeaderText="DateAdded" SortExpression="DateAdded" />
                <asp:BoundField DataField="LastModified" HeaderText="LastModified" SortExpression="LastModified" />
            </Columns>
        </asp:GridView>

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:CheckingConnectionString %>"
            DeleteCommand="DELETE FROM [Accounts] WHERE [ID] = @ID"
            InsertCommand="INSERT INTO [Accounts] ([UserId], [AccountName], [DateAdded], [LastModified]) VALUES (@UserId, @AccountName, @DateAdded, @LastModified)"
            SelectCommand="SELECT * FROM [Accounts] WHERE [UserId] = @UserId"
            UpdateCommand="UPDATE [Accounts] SET [UserId] = @UserId, [AccountName] = @AccountName, [DateAdded] = @DateAdded, [LastModified] = @LastModified WHERE [ID] = @ID">
            <DeleteParameters>
                <asp:Parameter Name="ID" Type="Int32" />
            </DeleteParameters>
            <InsertParameters>
                <asp:Parameter Name="UserId" Type="String" />
                <asp:Parameter Name="AccountName" Type="String" />
                <asp:Parameter Name="DateAdded" Type="DateTime" />
                <asp:Parameter Name="LastModified" Type="DateTime" />
            </InsertParameters>
            <UpdateParameters>
                <asp:Parameter Name="UserId" Type="String" />
                <asp:Parameter Name="AccountName" Type="String" />
                <asp:Parameter Name="DateAdded" Type="DateTime" />
                <asp:Parameter Name="LastModified" Type="DateTime" />
                <asp:Parameter Name="ID" Type="Int32" />
            </UpdateParameters>
            <SelectParameters>
                <asp:Parameter Name="UserId" />
            </SelectParameters>
        </asp:SqlDataSource>
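
    One hedged thing to check: render-block code like the <% %> above runs during page rendering, late in the life cycle, so the edit/delete postback's data operations may execute before (or without) the parameter assignment they rely on. A common pattern is to set the parameter in Page_Load instead; a sketch for the page's VB code-behind:

        ' Set the select parameter early in the life cycle, on every request,
        ' so postback edit/delete operations work against the same filtered rows.
        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            Dim u As MembershipUser = Membership.GetUser(User.Identity.Name)
            SqlDataSource1.SelectParameters("UserId").DefaultValue = u.ProviderUserKey.ToString()
        End Sub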

  • Symfony2: validate an object that is not an entity

    - by Marronsuisse
    I am using CraueFormFlowBundle to build a multi-page form, and am trying to do some validation on some of the fields but can't figure out how. The object that needs to be validated isn't an entity, which is causing me trouble. I tried adding a collection constraint in the getDefaultOptions function of my form type class, but this doesn't work: I get the "Expected argument of type array or Traversable and ArrayAccess" error. I tried annotations in my object class, but they don't seem to be taken into account. Are annotations taken into account if the class isn't an entity? (I set enable_annotations to true.) Anyway, what is the proper way to do this? Basically, I just want to validate that "age" is an integer...

        class PoemDataCollectorFormType extends AbstractType
        {
            public function buildForm(FormBuilder $builder, array $options)
            {
                switch ($options['flowStep']) {
                    case 6:
                        $builder->add('msgCategory', 'hidden', array());
                        $builder->add('msgFIB', 'text', array(
                            'required' => false,
                        ));
                        $builder->add('age', 'integer', array(
                            'required' => false,
                        ));
                        break;
                }
            }

            public function getDefaultOptions(array $options)
            {
                $options = parent::getDefaultOptions($options);
                $options['flowStep'] = 1;
                $options['data_class'] = 'YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector';
                $options['intention'] = 'my_secret_key';

                return $options;
            }
        }

    EDIT (add code, handle validation with annotations): Like Cyprian, I was pretty sure that using annotations should work; however, it doesn't. Here is how I try. In my controller:

        public function collectPoemDataAction()
        {
            $collector = $this->get('yop.poem.datacollector');
            $flow = $this->get('yop.form.flow.poemDataCollector');
            $flow->bind($collector);
            $form = $flow->createForm($collector);
            if ($flow->isValid($form)) {
                ....
            }
        }

    In my PoemDataCollector class, which is my data class (service yop.poem.datacollector):

        class PoemDataCollector
        {
            /**
             * @Assert\Type(type="integer", message="Age should be a number")
             */
            private $age;
        }

    EDIT2: Here is the services implementation. The data class (PoemDataCollector) seems to be linked to the flow class and not to the form; is that why there is no validation?

        <service id="yop.poem.datacollector" class="YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector">
        </service>

        <service id="yop.form.poemDataCollector" class="YOP\YourOwnPoetBundle\Form\Type\PoemDataCollectorFormType">
            <tag name="form.type" alias="poemDataCollector" />
        </service>

        <service id="yop.form.flow.poemDataCollector" class="YOP\YourOwnPoetBundle\Form\PoemDataCollectorFlow"
                 parent="craue.form.flow" scope="request">
            <call method="setFormType">
                <argument type="service" id="yop.form.poemDataCollector" />
            </call>
        </service>

    How can I do the validation while respecting the CraueFormFlowBundle guidelines? The guidelines state:

        Validation groups
        To validate the form data class, a step-based validation group is passed to the form type.
        By default, if getName() of the form type returns registerUser, such a group is named
        flow_registerUser_step1 for the first step.

    Where should I state my constraint to use those validation groups? I tried:

        YOP\YourOwnPoetBundle\PoemBuilder\Form\Type\PoemDataCollectorFormType:
            properties:
                name:
                    - MinLength: { limit: 5, message: "Your name must have at least {{ limit }} characters.", groups: [flow_poemDataCollector_step1] }
                sex:
                    - Type:
                        type: integer
                        message: Please input a number
                        groups: [flow_poemDataCollector_step6]

    But it is not taken into account.
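
    One hedged observation: the YAML above targets the form type class, but validation constraints apply to the class actually being validated, which here is the data class. A sketch of moving the same constraint onto PoemDataCollector with the step group attached (the group name follows the bundle's flow_<formName>_stepN convention and assumes getName() returns poemDataCollector):

        YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector:
            properties:
                age:
                    - Type:
                        type: integer
                        message: Age should be a number
                        groups: [flow_poemDataCollector_step6]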

  • WCF/REST Get image into picturebox?

    - by Garrith
    So I have a WCF REST service which successfully runs from a console app. If I navigate to http://localhost:8000/Service/picture/300/400, my image is displayed; note the 300/400 sets the width and height of the image within the body of the HTML page. The code looks like this:

        namespace WcfServiceLibrary1
        {
            [ServiceContract]
            public interface IReceiveData
            {
                [OperationContract]
                [WebInvoke(Method = "GET", BodyStyle = WebMessageBodyStyle.Wrapped,
                    ResponseFormat = WebMessageFormat.Xml, UriTemplate = "picture/{width}/{height}")]
                Stream GetImage(string width, string height);
            }

            public class RawDataService : IReceiveData
            {
                public Stream GetImage(string width, string height)
                {
                    int w, h;
                    if (!Int32.TryParse(width, out w)) { w = 640; } // Handle error
                    if (!Int32.TryParse(height, out h)) { h = 400; }

                    Bitmap bitmap = new Bitmap(w, h);
                    for (int i = 0; i < bitmap.Width; i++)
                    {
                        for (int j = 0; j < bitmap.Height; j++)
                        {
                            bitmap.SetPixel(i, j, (Math.Abs(i - j) < 2) ? Color.Blue : Color.Yellow);
                        }
                    }

                    MemoryStream ms = new MemoryStream();
                    bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                    ms.Position = 0;
                    WebOperationContext.Current.OutgoingResponse.ContentType = "image/jpeg";
                    return ms;
                }
            }
        }

    What I want to do now is use a client application, my Windows Forms app, and load that image into a PictureBox. I'm a bit stuck on how this can be achieved, as I would like the width and height of the image requested from my WCF REST service to be set by the width and height of the PictureBox. I have tried the following, but two of the lines have errors, and I'm not even sure it will work, since the service separates width and height with a "/" in the URL:

        string uri = "http://localhost:8080/Service/picture";

        private void button1_Click(object sender, EventArgs e)
        {
            StringBuilder sb = new StringBuilder();
            sb.AppendLine("<picture>");
            // The URL looks like http://localhost:8080/Service/picture/300/400 when
            // accessing the image, so I am trying to set the width and height here
            sb.AppendLine("<width>" + pictureBox1.Image.Width + "</width>");
            sb.AppendLine("<height>" + pictureBox1.Image.Height + "</height>");
            sb.AppendLine("</picture>");
            string picture = sb.ToString();
            byte[] getimage = Encoding.UTF8.GetBytes(picture); // not sure this is right

            HttpWebRequest req = WebRequest.Create(uri); // error: cannot convert WebRequest to HttpWebRequest
            req.Method = "GET";
            req.ContentType = "image/jpg";
            req.ContentLength = getimage.Length;

            MemoryStream reqStrm = req.GetRequestStream(); // error: cannot convert IO stream to memory stream
            reqStrm.Write(getimage, 0, getimage.Length);
            reqStrm.Close();

            HttpWebResponse resp = req.GetResponse(); // error: cannot convert WebResponse to HttpWebResponse
            MessageBox.Show(resp.StatusDescription);
            pictureBox1.Image = Image.FromStream(reqStrm);
            reqStrm.Close();
            resp.Close();
        }

    So I'm just wondering if someone could help me out with this futile attempt at loading a variable-size image from my REST service into a PictureBox on a button click. This is the host app as well:

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string baseAddress = "http://" + Environment.MachineName + ":8000/Service";
                    ServiceHost host = new ServiceHost(typeof(RawDataService), new Uri(baseAddress));
                    host.AddServiceEndpoint(typeof(IReceiveData), new WebHttpBinding(), "")
                        .Behaviors.Add(new WebHttpBehavior());
                    host.Open();
                    Console.WriteLine("Host opened");
                    Console.ReadLine();
                }
            }
        }
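
    Since the service already takes width and height as URL segments, the client side can be much simpler than the attempt above: no request body at all, just a GET to a URL built from the PictureBox dimensions. A sketch against the host's base address (the host listens on port 8000; the 8080 in the snippet above doesn't match it):

        private void button1_Click(object sender, EventArgs e)
        {
            // Width and height travel in the URL path, matching the service's
            // UriTemplate "picture/{width}/{height}".
            string uri = string.Format("http://localhost:8000/Service/picture/{0}/{1}",
                                       pictureBox1.Width, pictureBox1.Height);

            using (var client = new System.Net.WebClient())
            {
                byte[] data = client.DownloadData(uri);
                // The stream must stay open for the lifetime of the Image, so the
                // MemoryStream is deliberately not disposed here.
                pictureBox1.Image = Image.FromStream(new System.IO.MemoryStream(data));
            }
        }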

  • IIS7 + WCF + Silverlight problems

    - by Eanna
    Hey, I've been building a Silverlight application and a WCF service for a while now and recently tried to host them in IIS7. I installed IIS7 on Windows Server 2008 R2 and added these two applications to my default website. I am having a number of problems, so I'm hoping one of you can help.

    1) The Silverlight and WCF service applications do not work with pass-through authentication. I need to "connect as" the administrator server account when setting up the application. I read online that you should only need to use the "connect as" field when you are connecting to another computer. If I don't supply the admin credentials I get the error below. Do I have to set up permissions somewhere else?

        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.

        Detailed Error Information:
        Module: IIS Web Core
        Notification: BeginRequest
        Handler: Not yet determined
        Error Code: 0x80070005
        Config Error: Cannot read configuration file due to insufficient permissions
        Config File: \\?\C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\WCFService\Website\web.config
        Requested URL: http://localhost:80/WCFService/Service.svc
        Physical Path: C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\WCFService\Website\Service.svc
        Logon Method: Not yet determined
        Logon User: Not yet determined
        Config Source: -1: 0:

        Links and More Information: This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error.

    2) Visual Studio generated two web pages to run my Silverlight application (.html and .aspx). When I am running the Silverlight application (connected as admin) I can navigate to the .html page, no problem. When I try to open the .aspx file I get the following error:

        Server Error in '/Platform' Application. Access is denied.
        Description: An error occurred while accessing the resources required to serve this request. You might not have permission to view the requested resources.
        Error message 401.3: You do not have permission to view this directory or page using the credentials you supplied (access denied due to Access Control Lists). Ask the Web server's administrator to give you access to 'C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project\Platform\Website\PlatformTestPage.aspx'.
        Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1

    3) The WCF service runs fine (again, connected as admin) until I restart the server. When I try to run the WCF service after a reboot, the MySql assembly seems to be missing from the solution. If I just rebuild the solution and run the service again, it works (until the next restart). What's causing this error? Screenshot of the solution here: http://tinypic.com/view.php?pic=5yasqx&s=5

        Server Error in '/WCFService' Application.
        Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.IO.FileLoadException: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Assembly Load Trace: The following information can be helpful to determine why the assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' could not be loaded.
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Stack Trace:
        [FileLoadException: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied.]
           System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks) +0
           System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) +567
           System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +192
           System.Reflection.Assembly.Load(String assemblyString) +35
           System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) +243
           System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +1423
           System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +50
           System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +1132

        [ServiceActivationException: The service '/WCFService/Service.svc' cannot be activated due to an exception during compilation. The exception message is: Could not load file or assembly 'MySql.Data, Version=6.2.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. Access is denied..]
           System.Runtime.AsyncResult.End(IAsyncResult result) +889824
           System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +179150
           System.Web.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) +107
        Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1

    That's about it. I hope someone reads this message; I wasted most of the weekend trying to fix these problems on my own. Thanks.
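
    A hedged reading of all three problems: they look like file ACL issues, since the site content lives under C:\Users\Administrator\Documents\My Dropbox\..., where the application pool identity has no rights (and where Dropbox may be rewriting files, which could explain the after-reboot assembly errors until a rebuild copies fresh copies). One first step worth trying, granting the IIS worker group read/execute on the content folder; the path is taken from the errors above:

        icacls "C:\Users\Administrator\Documents\My Dropbox\Research Masters\Project" /grant "IIS_IUSRS:(OI)(CI)RX"

    Moving the sites out of the user profile (for example under C:\inetpub) is probably the cleaner long-term fix.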

  • WCF with No security

    - by james.ingham
    Hi all, I've got a WCF service set up which I can consume and use as intended... but only on the same machine. I'm looking to get this working across multiple computers, and I'm not fussed about the security. However, when I set the security (client side) to mode="None", I get an InvalidOperationException:

        The service certificate is not provided for target 'http://xxx.xxx.xxx.xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/'. Specify a service certificate in ClientCredentials.

    So I'm left with:

        <security mode="Message">
          <message clientCredentialType="None" negotiateServiceCredential="false" algorithmSuite="Default" />
        </security>

    But this gives me another InvalidOperationException:

        The service certificate is not provided for target 'http://xxx.xxx.xxx.xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/'. Specify a service certificate in ClientCredentials.

    Why would I have to provide a certificate if security was turned off?

    Server app config:

        <system.serviceModel>
          <services>
            <service name="Server.WcfServiceLibrary.CheckoutService" behaviorConfiguration="Server.WcfServiceLibrary.CheckoutServiceBehavior">
              <host>
                <baseAddresses>
                  <add baseAddress="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/CheckoutService/" />
                </baseAddresses>
              </host>
              <endpoint address="" binding="wsDualHttpBinding" contract="Server.WcfServiceLibrary.ICheckoutService">
                <identity>
                  <dns value="localhost"/>
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
            <service name="Server.WcfServiceLibrary.ManagementService" behaviorConfiguration="Server.WcfServiceLibrary.ManagementServiceBehavior">
              <host>
                <baseAddresses>
                  <add baseAddress="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/" />
                </baseAddresses>
              </host>
              <endpoint address="" binding="wsDualHttpBinding" contract="Server.WcfServiceLibrary.IManagementService">
                <identity>
                  <dns value="localhost"/>
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="Server.WcfServiceLibrary.CheckoutServiceBehavior">
                <serviceMetadata httpGetEnabled="True"/>
                <serviceDebug includeExceptionDetailInFaults="False" />
                <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="50" maxConcurrentInstances="50" />
              </behavior>
              <behavior name="Server.WcfServiceLibrary.ManagementServiceBehavior">
                <serviceMetadata httpGetEnabled="True"/>
                <serviceDebug includeExceptionDetailInFaults="False" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Client app config:

        <system.serviceModel>
          <bindings>
            <wsDualHttpBinding>
              <binding name="WSDualHttpBinding_IManagementService" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:00:10" bypassProxyOnLocal="false" transactionFlow="false"
                       hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                       messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" />
                <security mode="Message">
                  <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" />
                </security>
              </binding>
            </wsDualHttpBinding>
          </bindings>
          <client>
            <endpoint address="http://xxx:8731/Design_Time_Addresses/WcfServiceLibrary/ManagementService/"
                      binding="wsDualHttpBinding" bindingConfiguration="WSDualHttpBinding_IManagementService"
                      contract="ServiceReference.IManagementService" name="WSDualHttpBinding_IManagementService">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    Thanks

  • When I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers, but not all of them

    - by sangfroid
    Hello all, when I mix JSTL 1.0 and JSTL 1.1 taglib declarations, it causes a ParseException on some of my servers, but not all of them. Here is the block of code that's giving me trouble:

        <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core"%>
        <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%>
        <c:set var="TEXTVARIABLE">|STRINGOFTEXT|</c:set>
        <c:set var="OTHERTEXTVARIABLE">${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}</c:set>

    And here is the exception:

        javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported.

    However, everything works fine if I change the URI for the core declaration to http://java.sun.com/jsp/jstl/core.

    So here's the really weird part: for some reason, mixing 1.0 and 1.1 taglib declarations only causes an exception on two of my servers, my staging server and my production server. It causes no problems at all on my local machine or my development server. Why is this? What could possibly be causing this difference in behavior? The servers are extremely similar in setup and configuration. The JSP page is being served up by OpenCMS, and I'm using Caucho's Resin web server. I understand that you don't know how my servers or CMS are set up, but really, what I'm looking for is ideas. Any ideas at all would help; this problem has been driving me absolutely batty. Even if you don't know what could be causing the problem, any suggestions for how I could approach it would be extremely helpful. I just don't understand what could cause this difference in behavior between my servers.

    For reference, here's the full stack trace:

        javax.servlet.jsp.JspException: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported.
            at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:369)
            at org.opencms.jsp.CmsJspTagInclude.includeTagAction(CmsJspTagInclude.java:241)
            at org.opencms.jsp.CmsJspTagInclude.doEndTag(CmsJspTagInclude.java:472)
            at _jsp._WEB_22dINF._jsp._online._system._modules.com_MYCOMPANY__marketing._templates._MAIN_0PAGE__jsp._jspService(_MAIN_0PAGE__jsp.java:153)
            at com.caucho.jsp.JavaPage.service(JavaPage.java:60)
            at com.caucho.jsp.Page.pageservice(Page.java:579)
            at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:179)
            at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57)
            at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
            at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115)
            at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175)
            at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
            at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485)
            at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350)
            at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194)
            at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169)
            at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193)
            at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423)
            at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173)
            at org.opencms.loader.CmsJspLoader.dispatchJsp(CmsJspLoader.java:1227)
            at org.opencms.loader.CmsJspLoader.load(CmsJspLoader.java:1171)
            at org.opencms.loader.A_CmsXmlDocumentLoader.load(A_CmsXmlDocumentLoader.java:232)
            at org.opencms.loader.CmsXmlContentLoader.load(CmsXmlContentLoader.java:52)
            at org.opencms.loader.CmsResourceManager.loadResource(CmsResourceManager.java:964)
            at org.opencms.main.OpenCmsCore.showResource(OpenCmsCore.java:1498)
            at org.opencms.main.OpenCmsServlet.doGet(OpenCmsServlet.java:152)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:115)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:92)
            at com.caucho.server.dispatch.ServletFilterChain.doFilter(ServletFilterChain.java:106)
            at com.caucho.filters.CmsGzipFilter.doFilter(CmsGzipFilter.java:177)
            at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
            at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57)
            at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
            at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115)
            at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
            at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:277)
            at com.caucho.server.webapp.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:106)
            at com.caucho.server.dispatch.ForwardFilterChain.doFilter(ForwardFilterChain.java:80)
            at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:207)
            at com.caucho.server.webapp.WebAppFilterChain.doFilter(WebAppFilterChain.java:173)
            at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
            at com.caucho.server.http.HttpRequest.handleRequest(HttpRequest.java:274)
            at com.caucho.server.port.TcpConnection.run(TcpConnection.java:514)
            at com.caucho.util.ThreadPool.runTasks(ThreadPool.java:520)
            at com.caucho.util.ThreadPool.run(ThreadPool.java:442)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: com.caucho.jsp.JspLineParseException: /WEB-INF/jsp/online/system/modules/com.MYCOMPANY.marketing/templates/common/MY_JSP_PAGE.jsp:1: tag = 'out' / attribute = 'value': An error occurred while parsing custom action attribute "value" with value "${fn:contains(TEXTVARIABLE, '|STRINGOFTEXT|')}": org.apache.taglibs.standard.lang.jstl.parser.ParseException: EL functions are not supported.
            at com.caucho.jsp.java.JspNode.error(JspNode.java:1489)
            at com.caucho.jsp.java.JspNode.error(JspNode.java:1480)
            at com.caucho.jsp.java.JavaJspGenerator.validate(JavaJspGenerator.java:466)
            at com.caucho.jsp.JspCompilerInstance.generate(JspCompilerInstance.java:475)
            at com.caucho.jsp.JspCompilerInstance.compile(JspCompilerInstance.java:373)
            at com.caucho.jsp.JspManager.compile(JspManager.java:233)
            at com.caucho.jsp.JspManager.createPage(JspManager.java:177)
            at com.caucho.jsp.JspManager.createPage(JspManager.java:157)
            at com.caucho.jsp.PageManager.getPage(PageManager.java:248)
            at com.caucho.jsp.PageManager.getPage(PageManager.java:166)
            at com.caucho.jsp.QServlet.getSubPage(QServlet.java:292)
            at com.caucho.jsp.QServlet.getPage(QServlet.java:210)
            at com.caucho.server.dispatch.PageFilterChain.compilePage(PageFilterChain.java:206)
            at com.caucho.server.dispatch.PageFilterChain.doFilter(PageFilterChain.java:133)
            at shared.filter.RemoteAddrFilterBase.doFilter(RemoteAddrFilterBase.java:57)
            at com.caucho.server.dispatch.FilterFilterChain.doFilter(FilterFilterChain.java:70)
            at com.caucho.server.webapp.DispatchFilterChain.doFilter(DispatchFilterChain.java:115)
            at com.caucho.server.cache.CacheFilterChain.doFilter(CacheFilterChain.java:175)
            at com.caucho.server.dispatch.ServletInvocation.service(ServletInvocation.java:229)
            at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:485)
            at com.caucho.server.webapp.RequestDispatcherImpl.include(RequestDispatcherImpl.java:350)
            at org.opencms.flex.CmsFlexRequestDispatcher.includeExternal(CmsFlexRequestDispatcher.java:194)
            at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:169)
            at org.opencms.loader.CmsJspLoader.service(CmsJspLoader.java:1193)
            at org.opencms.flex.CmsFlexRequestDispatcher.includeInternalWithCache(CmsFlexRequestDispatcher.java:423)
            at org.opencms.flex.CmsFlexRequestDispatcher.include(CmsFlexRequestDispatcher.java:173)
            at org.opencms.jsp.CmsJspTagInclude.includeActionWithCache(CmsJspTagInclude.java:364)
            ... 45 more

    Thanks for the help!
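
    A hedged explanation of the symptom: the http://java.sun.com/jstl/core URI binds the JSTL 1.0 implementation, whose EL parser (org.apache.taglibs.standard.lang.jstl, the package named in the exception) predates EL functions, so ${fn:...} inside a 1.0-resolved tag cannot be evaluated. Which implementation that URI actually resolves to depends on the standard.jar each server carries, which would explain identical pages behaving differently per machine. Declaring both taglibs from the JSTL 1.1 namespaces avoids the mix entirely:

        <%@ taglib prefix="c"  uri="http://java.sun.com/jsp/jstl/core" %>
        <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>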

  • How do I fix a "Request format is unrecognized for URL..." error in a web service running in IIS?

    - by Hary
    I am getting the following error while running a web service in IIS:

        Server Error in '/Inbox Sevice' Application.
        Request format is unrecognized for URL unexpectedly ending in '/GetMailsInfo'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.InvalidOperationException: Request format is unrecognized for URL unexpectedly ending in '/GetMailsInfo'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [InvalidOperationException: Request format is unrecognized for URL unexpectedly ending in '/GetMailsInfo'.]
           System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(Type type, HttpContext context, HttpRequest request, HttpResponse response) +490982
           System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(HttpContext context, String verb, String url, String filePath) +104
           System.Web.Script.Services.ScriptHandlerFactory.GetHandler(HttpContext context, String requestType, String url, String pathTranslated) +127
           System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig) +175
           System.Web.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +120
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

        Version Information: Microsoft .NET Framework Version:2.0.50727.42; ASP.NET Version:2.0.50727.42

    Does anyone know why I am seeing this error and if there is any way to fix it?
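
    This particular message has a well-known cause: the HttpGet and HttpPost protocols are disabled by default for .asmx web services on non-local requests, so a plain GET or form POST to .../Service.asmx/GetMailsInfo is rejected while SOAP calls still work. If the caller genuinely needs GET/POST, a sketch of the usual web.config addition:

        <system.web>
          <webServices>
            <protocols>
              <add name="HttpGet" />
              <add name="HttpPost" />
            </protocols>
          </webServices>
        </system.web>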

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. This folder also contains files for uninstalled, disabled Windows components. Even if you don’t have a Windows component installed, it will be present in your WinSXS folder, taking up space.
Why the WinSXS Folder Gets So Big
The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder. The WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component in the WinSXS folder. This means that every Windows Update you install increases the size of your WinSXS folder. This allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it’s a feature that’s rarely used. Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1 — released in 2011. Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn’t be easily removed.
Clean Up Update Files
To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that don’t generally add new features. To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type “disk cleanup” into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option and click OK. If you’ve been using your Windows 7 system for a few years, you’ll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don’t see this feature in the Disk Cleanup window, you’re likely behind on your updates — install the latest updates from Windows Update. Windows 8 and 8.1 include built-in features that do this automatically. In fact, there’s a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you’ve installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you’d like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type “disk cleanup” to perform a search, and click the “Free up disk space by removing unnecessary files” shortcut that appears.) Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven’t been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator.
For example, the following command will uninstall all previous versions of components without the scheduled task’s 30-day grace period:
DISM.exe /online /Cleanup-Image /StartComponentCleanup
The following command will remove files needed for uninstallation of service packs. You won’t be able to uninstall any currently installed service packs after running this command:
DISM.exe /online /Cleanup-Image /SPSuperseded
The following command will remove all old versions of every component. You won’t be able to uninstall any currently installed service packs or updates after this completes:
DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase
Remove Features on Demand
Modern versions of Windows allow you to enable or disable Windows features on demand. You’ll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don’t have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they’ll be made available from your WinSXS folder. This means you won’t have to download anything or provide Windows installation media to install these features. However, these features take up space. While this shouldn’t matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft. To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:
DISM.exe /Online /English /Get-Features /Format:Table
You’ll see a table of feature names and their states. To remove a feature from your system, you’d use the following command, replacing NAME with the name of the feature you want to remove. You can get the feature name you need from the table above.
DISM.exe /Online /Disable-Feature /featurename:NAME /Remove
If you run the /Get-Features command again, you’ll now see that the feature has a status of “Disabled with Payload Removed” instead of just “Disabled.” That’s how you know it’s not taking up space on your computer’s hard drive. If you’re trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.
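As a concrete illustration of the feature-removal commands above, the short session below removes the Telnet client. The feature name here is only an example; substitute one taken from your own /Get-Features table:
DISM.exe /Online /English /Get-Features /Format:Table
DISM.exe /Online /Disable-Feature /featurename:TelnetClient /Remove
DISM.exe /Online /Get-FeatureInfo /featurename:TelnetClient
After the removal, the final /Get-FeatureInfo call should report the state as “Disabled with Payload Removed”.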

    Read the article

  • Package Dependencies Error in almost every install

    - by Betaxpression
    New to Ubuntu. In the other software sources I have "Debian 4.0 eth" officially supported, "non-us.debian.org/", etc., "ppa.launchpad.net", and installing applications has stopped working. I think I first came across this problem after installing Blender 2.58. When using update manager it is prompting for a partial upgrade. Almost every software install shows the same error, Package Dependencies Error or GPG PUB KEY missing; I tried fixing them but no luck. Output to: sudo apt-get update && sudo apt-get upgrade (links disabled http:// -- http:/ as new user can't put more no. of hyperlinks) Ign http:/non-us.debian.org stable/non-US InRelease Ign http:/non-us.debian.org stable/non-US Release.gpg Ign http:/non-us.debian.org stable/non-US Release Ign http:/non-us.debian.org stable/non-US/contrib TranslationIndex Ign http:/non-us.debian.org stable/non-US/main TranslationIndex Ign http:/non-us.debian.org stable/non-US/non-free TranslationIndex Err http:/non-us.debian.org stable/non-US/main Sources 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/contrib Sources 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/non-free Sources 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/main amd64 Packages 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/contrib amd64 Packages 503 Service Unavailable Err http:/non-us.debian.org stable/non-US/non-free amd64 Packages 503 Service Unavailable Ign http:/non-us.debian.org stable/non-US/contrib Translation-en_IN Ign http:/non-us.debian.org stable/non-US/contrib Translation-en Ign http:/non-us.debian.org stable/non-US/main Translation-en_IN Ign http:/non-us.debian.org stable/non-US/main Translation-en Ign http:/non-us.debian.org stable/non-US/non-free Translation-en_IN Ign http:/non-us.debian.org stable/non-US/non-free Translation-en Ign http:/archive.ubuntu.com natty InRelease Ign http:/archive.canonical.com natty InRelease Ign http:/extras.ubuntu.com natty InRelease Ign http:/http.us.debian.org stable InRelease Ign http:/ftp.us.debian.org etch InRelease Ign http:/archive.ubuntu.com natty-updates InRelease Hit http:/archive.canonical.com natty Release.gpg Get:1 http:/extras.ubuntu.com natty Release.gpg [72 B] Ign http:/ppa.launchpad.net natty InRelease Get:2 http:/http.us.debian.org stable Release.gpg [1,672 B] Ign http:/linux.dropbox.com natty InRelease Ign http:/ftp.us.debian.org etch Release.gpg Ign http:/archive.ubuntu.com natty-security InRelease Hit http:/archive.canonical.com natty Release Hit http:/extras.ubuntu.com natty Release Ign http:/ppa.launchpad.net natty InRelease Get:3 http:/linux.dropbox.com natty Release.gpg [489 B] Ign http:/ftp.us.debian.org etch Release Ign http:/dl.google.com stable InRelease Get:4 http:/archive.ubuntu.com natty Release.gpg [198 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.canonical.com natty/partner Sources Hit http:/extras.ubuntu.com natty/main Sources Get:5 http:/linux.dropbox.com natty Release [2,599 B] Get:6 http:/archive.ubuntu.com natty-updates Release.gpg [198 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.canonical.com natty/partner amd64 Packages Hit http:/extras.ubuntu.com natty/main amd64 Packages Get:7 http:/linux.dropbox.com natty/main amd64 Packages [784 B] Get:8 http:/archive.ubuntu.com natty-security Release.gpg [198 B] Ign http:/ppa.launchpad.net natty InRelease Ign http:/archive.canonical.com natty/partner TranslationIndex Ign http:/extras.ubuntu.com natty/main TranslationIndex
Get:9 http:/http.us.debian.org stable Release [104 kB] Ign http:/linux.dropbox.com natty/main TranslationIndex Hit http:/archive.ubuntu.com natty Release Ign http:/ppa.launchpad.net natty InRelease Ign http:/http.us.debian.org stable Release Hit http:/archive.ubuntu.com natty-updates Release Get:10 http:/ppa.launchpad.net natty InRelease [316 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.ubuntu.com natty-security Release Get:11 http:/ppa.launchpad.net natty InRelease [316 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.ubuntu.com natty/restricted Sources Get:12 http:/ppa.launchpad.net natty Release.gpg [316 B] Ign http:/http.us.debian.org stable/main Sources/DiffIndex Get:13 http:/ppa.launchpad.net natty Release.gpg [316 B] Hit http:/archive.ubuntu.com natty/main Sources Ign http:/ftp.us.debian.org etch/contrib TranslationIndex Ign http:/http.us.debian.org stable/contrib Sources/DiffIndex Get:14 http:/ppa.launchpad.net natty Release.gpg [1,502 B] Ign http:/http.us.debian.org stable/non-free Sources/DiffIndex Ign http:/ftp.us.debian.org etch/main TranslationIndex Get:15 http:/ppa.launchpad.net natty Release.gpg [1,928 B] Ign http:/http.us.debian.org stable/main amd64 Packages/DiffIndex Ign http:/ftp.us.debian.org etch/non-free TranslationIndex Ign http:/ppa.launchpad.net natty Release.gpg Hit http:/http.us.debian.org stable/contrib amd64 Packages/DiffIndex W: GPG error: http:/http.us.debian.org stable Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY AED4B06F473041FA NO_PUBKEY 64481591B98321F9 W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_sunab_kdenlive-release_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_ubuntu-wine_ppa_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message E: Could not open file /var/lib/apt/lists/http.us.debian.org_debian_dists_stable_contrib_binary-amd64_Packages.IndexDiff - open (2: No such file or directory) output to: sudo cat /etc/apt/sources.list # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release amd64 (20110427.1)]/ natty main restricted # See http:/help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http:/archive.ubuntu.com/ubuntu natty main restricted deb-src http:/archive.ubuntu.com/ubuntu natty restricted main multiverse universe #Added by software-properties ## Major bug fix updates produced after the final release of the ## distribution. deb http:/archive.ubuntu.com/ubuntu natty-updates main restricted deb-src http:/archive.ubuntu.com/ubuntu natty-updates restricted main multiverse universe #Added by software-properties ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http:/archive.ubuntu.com/ubuntu natty universe deb http:/archive.ubuntu.com/ubuntu natty-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. 
deb http:/archive.ubuntu.com/ubuntu natty multiverse deb http:/archive.ubuntu.com/ubuntu natty-updates multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse # deb-src http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse deb http:/archive.ubuntu.com/ubuntu natty-security main restricted deb-src http:/archive.ubuntu.com/ubuntu natty-security restricted main multiverse universe #Added by software-properties deb http:/archive.ubuntu.com/ubuntu natty-security universe deb http:/archive.ubuntu.com/ubuntu natty-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http:/archive.canonical.com/ubuntu natty partner deb-src http:/archive.canonical.com/ubuntu natty partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http:/extras.ubuntu.com/ubuntu natty main deb-src http:/extras.ubuntu.com/ubuntu natty main deb http:/ftp.us.debian.org/debian/ etch main contrib non-free deb-src http:/ftp.us.debian.org/debian/ etch main contrib non-free deb http:/http.us.debian.org/debian stable main contrib non-free deb-src http:/http.us.debian.org/debian stable main contrib non-free deb http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free deb-src http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free Thanks But after removing Debian repositories still getting this error: W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9BDB3D89CE49EC21, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 80E7349A06ED541C, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C851674F96FD737, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 94E58C34A8670E8C, E:Unable to parse package file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_i18n_Index (1) I actually tried this before, but i am always getting this error --Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys 8C851674F96FD737 gpg: requesting key F96FD737 from hkp server keyserver.ubuntu.com ?: keyserver.ubuntu.com: Connection refused gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused gpg: no valid OpenPGP data found. gpg: Total number processed: 0
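For the NO_PUBKEY warnings that remain after the Debian repositories are removed, a commonly suggested workaround when the default keyserver port (11371) is blocked, which is what the “Connection refused” output above suggests, is to fetch each missing key over port 80 and then refresh the package indexes; the key id below is one of the four reported, so repeat for the others:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 8C851674F96FD737
sudo apt-get update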

    Read the article

  • Cloud MBaaS: The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage it for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for them in day-to-day business keeps increasing. An enterprise can’t survive without a proper mobility strategy: a better mobility strategy and faster delivery of your mobile apps give your business and IT strategy extra mileage. So organizations and mobile developers are looking for different strategies for meeting this demand and adopting different development approaches for their mobile apps. Some developers are adopting hybrid mobile app development platforms for delivering their products on multiple platforms with fast time-to-market. Others are adopting a mobile enterprise application platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.
The Challenges of Enterprise Mobility
The real challenge of enterprise mobile apps is not creating the front-end environment or developing front-ends for multiple platforms. The most important part of an enterprise mobile app is exposing your enterprise data to mobile devices, and the real pain is that your business data may reside in many different systems, including legacy and ERP systems, deployed with lots of security restrictions. Exposing data from on-premises servers is not an easy thing for most business organizations. Many organizations spend plenty of time on their front-end development strategy but lack a back-end strategy for exposing business data to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware is the key part of most enterprise mobile apps, so that multiple mobile platforms can easily consume these REST services and other mobile back-end services. For some mobile apps, we can’t predict the user base, especially for products where the customer base can grow at any time. And for today’s mobile apps, fast time-to-market is critical, so spending too much time up front on scalability may not be worth it. The real power of the Cloud is agility and on-demand scalability, where we can scale applications up and down very easily. It would be great if we could apply the power of the Cloud to mobile apps. Using the Cloud for mobile apps is a natural fit: we can use the Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, and enjoy its on-demand scalability and operational agility. So a Cloud-based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability provided by public cloud vendors such as Microsoft Windows Azure.
Mobile Backend as a Service (MBaaS)
We have discussed the key challenges of enterprise mobile apps and how we can leverage the Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and HTML5, which can be used as a backend for your mobile apps with the scalability power of the Cloud.
The information below lists the key features of a typical MBaaS platform:
Cloud-based storage for your application data.
Automatic REST API services on the application data, for CRUD operations.
Native push notification services with massive scalability power.
User management services for authenticating users.
User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
Scheduler services for periodically sending data to mobile devices.
Native SDKs for multiple mobile platforms such as Windows Phone, Windows Store, Android, Apple iOS, and HTML5, for easily and securely accessing the mobile services from mobile apps.
Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems, and the same mobile services can serve multiple platforms so that we can reuse the application logic across them. Public cloud vendors are building mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great platform for an MBaaS offering, leveraging the Windows Azure cloud platform’s PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.
The Challenges of MBaaS Solutions
If you are building your mobile apps against new data storage, it will be very easy, since there are no integration challenges to face. But in most use cases, you have to extract application data that is stored on on-premises servers, which might be behind VPNs and firewalls. Exposing this data to your MBaaS solution with proper security is a big challenge. The capability of your MBaaS vendor is very important, as many enterprise mobile apps have to interact with legacy systems, so you should be very careful when choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing application data stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools.
Windows Azure Mobile Services
Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices and can be used as a cloud backend for your mobile apps, providing global availability and reach. It provides storage services, user management with social network integration, push notification services and scheduler services, and ships native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules in your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure cloud platform to solve many challenges of enterprise mobility: the entire Windows Azure platform can play a key role as the backend for your mobile apps.
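To make the automatic REST API mentioned in the feature list above concrete, here is a minimal sketch of inserting a record into a Windows Azure Mobile Services table over raw HTTP; the service URL, table name, and application key are placeholders, not real endpoints:
curl -X POST https://yourservice.azure-mobile.net/tables/TodoItem \
     -H "X-ZUMO-APPLICATION: your-application-key" \
     -H "Content-Type: application/json" \
     -d '{"text": "hello from the cloud", "complete": false}'
Each native SDK wraps this same REST surface, which is why the same application logic can be reused across platforms.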
With Windows Azure, you can easily connect to your on-premises systems, which is a key requirement for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises that have been leveraging the Microsoft ecosystem for their application and IT infrastructure. Windows Azure Mobile Services keeps evolving, and you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose from multiple storage options when creating a new mobile service instance. Let’s say there were different storage providers, such as a SQL Server storage provider and a Table storage provider: developers could then pick their provider of choice when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where it performed very well.
MBaaS Over MEAP
Recently, many larger enterprises have adopted a mobile enterprise application platform (MEAP) for their mobile apps. I haven’t worked on any production MEAP solution, but I hear that developers are really struggling with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high, and I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used to integrate with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems; these REST APIs can be hosted on the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, you can easily build mobile front-ends for multiple platforms. You can follow me on Twitter @shijucv

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspxDidn’t make it to TechEd this year? Don’t worry!  This month, we’ll be releasing a new article series that highlights the Best of TechEd announcements and technical information for IT Pros.  Today’s article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services to make it more cost-effective for spinning VMs up and down on-demand on the Windows Azure cloud platform. NEW! VMs that are shutdown from the Windows Azure Management Portal will no longer continue to accumulate compute charges while stopped! Previous to this enhancement being available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future.  And, this meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs. Which scenarios does the new enhancement best fit? Being able to easily shutdown VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as: On-demand dev/test/lab environments - Freely start and stop lab VMs so that they are only accumulating compute charges when being actively used.  "Bursting" load-balanced web applications - Provision a number of load-balanced VMs, but keep the minimum number of VMs running to support "normal" loads. Easily start-up the remaining VMs only when needed to support peak loads. Disaster Recovery - Start-up "cold" VMs when needed to recover from disaster scenarios. BUT ... there is a consideration to keep in mind when using the Windows Azure Management Portal to shutdown VMs: although performing a VM shutdown via the Windows Azure Management Portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from fabric resources to which it was previously assigned.  These fabric resources include compute resources such as virtual CPU cores and memory, as well as network resources, such as IP addresses.  This means that when the VM is later started after being shutdown from the portal, the VM could be assigned a different IP address or placed on a different compute node within the fabric. In some cases, you may want to shutdown VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state.  Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity.  Good news - you can still revert back to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below.  Let's walk through each approach for performing a VM Shutdown action on Windows Azure so that we can understand the benefits and considerations of each... How many ways can I shutdown a VM? 
In Windows Azure Infrastructure Services, there's three general ways that can be used to safely shutdown VMs: Shutdown VM via Windows Azure Management Portal Shutdown Guest Operating System inside the VM Stop VM via Windows PowerShell using Windows Azure PowerShell Module Although each of these options performs a safe shutdown of the guest operation system and the VM itself, each option handles the VM shutdown end state differently. Shutdown VM via Windows Azure Management Portal When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shutdown and "deallocated" from fabric resources.  Shutdown button on Virtual Machines page in Windows Azure Management Portal  When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status as shown in the figure below. Virtual Machine in a "Stopped (Deallocated)" Status "Deallocated" means that the VM configuration is no longer being actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to allocate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.  TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs.  Virtual networks permit you to assign a specific IP Address Space for use with VMs that are assigned to that virtual network.  As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using. What about consistency of External IP Addresses? Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running.  As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved.  If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted. TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service. Shutdown Guest Operating System inside the VM When performing a Guest OS shutdown or restart ( ie., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In the figure below, the VM has been shutdown from within the Guest OS and is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status that was shown in the previous figure. Note that it may require a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than through an Azure management endpoint. Virtual Machine in a "Stopped" Status VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.  
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future. Stop VM via Windows PowerShell In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell: When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.  When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is the VM will be left in a "Stopped" status just like the end result when performing a Guest OS shutdown operation. So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis. TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module.  So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality. Want to Learn More about Windows Azure Infrastructure Services? To learn more about Windows Azure Infrastructure Services, be sure to check-out these additional FREE resources: Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!  Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!  Looking forward to seeing you in the Cloud! - Keith Build Your Lab! Download Windows Server 2012 Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group
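To recap the three shutdown outcomes described above in one place, here is a minimal sketch using the Windows Azure PowerShell Module; the cloud service and VM names are hypothetical:
# Safe shutdown that deallocates fabric resources ("Stopped (Deallocated)"; compute charges stop)
Stop-AzureVM -ServiceName "mycloudservice" -Name "myvm"
# Safe shutdown that keeps fabric resource reservations ("Stopped"; compute charges continue)
Stop-AzureVM -ServiceName "mycloudservice" -Name "myvm" -StayProvisioned
# Start the VM again
Start-AzureVM -ServiceName "mycloudservice" -Name "myvm"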

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX has probably managed to capture the eye of developers even better, and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus – the development of the APEX application is also done in the cloud – with no special demands on the location or the enterprise access privileges of the developers. To sum it up, with APEX from the Oracle Cloud Database Service:
the development environment is up and running in minutes
no involvement from the internal IT department is required (not for infrastructure, platform, or development)
superior availability and scalability are offered by Oracle
users from anywhere in the world can be invited to access the application
developers from anywhere in the world can participate in creating and maintaining the application
In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment – and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on-premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is a credit card (a budget of $175 per month), an internet connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running and, if Oracle Database XE does not suffice, software licenses are required as well. And this setup always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along – it also means that small organizations that do not even have IT colleagues can do the same.
Getting tailored applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits – without any investment.
My misunderstanding
For a long time, I was under the impression that the essence of APEX was that the business could create applications themselves – meaning that business ‘people’ would actually go into APEX to create the application. To me APEX was too much of a developers’ tool to see that happen – apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud based development offerings – including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys – I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn it into action right away. The business does not have to build the application – it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in – especially external users such as customers. They will worry later about upgrades and life cycle management and integration. Getting applications up and running quickly and starting to turn ideas into action and results right away – that is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story. For APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with ‘regular’ Java/JEE web development (or even .NET and PHP development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based – nothing special about those languages – and can run just as easily on site as in the cloud. It has been around since 2004 (not counting several predecessors that fed straight into APEX) so it can be considered pretty mature. Oracle as a company seems pretty stable – so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other Cloud DevaaS offerings are targeted at creating applications with enormous lifetimes. They fit into a trend of agile development and rapid life cycle management, with fairly lightweight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced.
APEX and ADF – a match made in heaven?! (or at least in the sky)
Note that using APEX only for a cloud based database with REST-ful services is also a perfectly viable scenario: any UI – mobile or browser based – capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application, for example, that runs against REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider – including the Cloud based, APEX powered REST-ful services running against the Oracle Cloud Database Service!
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in – allowing for a cloud-based mobile app. To learn more about Oracle Database Cloud Service or Oracle Cloud, visit cloud.oracle.com or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This post highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post.) To find the timeslots and locations of each session, click their respective link and check the "Session Schedule" box on the top right.
GEN8431 - General Session: What’s New in Oracle Database Application Development
This general session takes a look at what’s been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle)
BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python)
This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services, and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology.
CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications
This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across relational database management systems, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to and consolidation with Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle)
CON9167 - Current State of PHP and MySQL
Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year, and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - Software Developer, Oracle)
CON8983 - Sharding with PHP and MySQL
In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers.
This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle)
CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python
This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle)
BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite!
Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let’s talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle)
CON3290 - Integrating Oracle Database with a Social Network
Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction)
CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database
Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction)
CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database
These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database.
(Marcelle Kratochvil - CTO, Piction)
UGF5181 - Building Real-World Oracle DBA Tools in Perl
Perl is not normally associated with building mission-critical application or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)
CON3153 - Perl: A DBA’s and Developer’s Best (Forgotten) Friend
This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)
CON10332 - Oracle RightNow CX Cloud Service’s Connect PHP API: Intro, What’s New, and Roadmap
Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what’s new and what’s coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle)
CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview
Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service’s Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
    As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:
View the menu items that the users can access through the Oracle Identity Manager Administration Web interface.
Assign users to roles.
Assign a role to a parent role.
Designate status to the users so that they can specify defined responses for process tasks.
Modify permissions on data objects.
Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoking members from the current role, and so on.
Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role.
Assign or remove membership rules to or from the role. These rules determine which users can be assigned/removed as direct membership to/from the role.
In this post, I will share some examples of role management with the Oracle Identity Manager API. To perform role operations, you can use the Thor.API.Operations.tcGroupOperationsIntf interface:
tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);
Assign a user to a role:
public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
    // Look up the role (group) key by its name, then add the user as a member.
    Map<String, String> filter = new HashMap<String, String>();
    filter.put("Groups.Role Name", roleName);
    tcResultSet role = service.findGroups(filter);
    String groupKey = role.getStringValue("Groups.Key");
    service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
}
Revoke a user from a role:
public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
    Map<String, String> filter = new HashMap<String, String>();
    filter.put("Groups.Role Name", roleName);
    tcResultSet role = service.findGroups(filter);
    String groupKey = role.getStringValue("Groups.Key");
    service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
}
Get all members of a role:
public List<User> getRoleMembers(String roleName) throws Exception {
    List<User> userList = new ArrayList<User>();
    Map<String, String> filter = new HashMap<String, String>();
    filter.put("Groups.Role Name", roleName);
    tcResultSet role = service.findGroups(filter);
    String groupKey = role.getStringValue("Groups.Key");
    // Iterate over the member result set and resolve each user key to a User object.
    tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
    for (int i = 0; i < members.getRowCount(); i++) {
        members.goToRow(i);
        long userKey = members.getLongValue("Users.Key");
        User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
        userList.add(member);
    }
    return userList;
}
About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast, a software engineer, and addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, Connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase agility and efficiency of an organization if process re-engineering is part of the plan. I have also stressed the key enterprise requirements such as “broad and deep solutions”, “running your mission critical applications” and “automated and integrated set of capabilities”. Let me walk you through some key cloud attributes such as “elasticity” and “self-service” and what they mean for an enterprise class cloud. I will also talk about how we at Oracle have taken a very enterprise centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs.
Cloud Elasticity and Enterprise Applications Requirements
Easy and quick scalability for a short period of time is the signature of cloud based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds offer an instant provisioning mechanism for compute power (CPU, RAM, disk); customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of “just-in-time” serving of compute resources is well known for mid-tier “stateless” servers such as web application servers and web servers that just need another machine to start and run on, but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not quite as “stateless”, and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing. At Oracle, we have invested an enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from Cloud elasticity – both vertically and horizontally – without requiring any application changes. A number of technology vendors take the rather simplistic route of starting up additional VMs or removing unneeded ones as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers, where load balancers can handle the addition and removal of instances transparently, following a similar approach for the database tier – often called "database sharding" – requires significant application modification and typically does not work with off-the-shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc., on the other hand, bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layer where the application does not need to deal with data spread over multiple database instances. Rather, it just talks to a single database and the database software takes care of aggregating resources across multiple hardware components. It is technologies like these that truly make a cloud solution relevant for enterprises.
For customers who are looking for a next-generation hardware consolidation platform, our engineered systems (e.g., Exadata and Exalogic) not only provide an incredible amount of performance and capacity, they also reduce data center complexity and simplify operations.

Assemble, Deploy and Manage Enterprise Applications for the Cloud

Products like Oracle Virtual Assembly Builder (OVAB) resolve the complex problem of bringing cloud speed to multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together at the push of a button; other aspects of the application lifecycle, such as real-time application testing, scale-up/scale-down, and performance and availability monitoring, are also automated using Oracle Enterprise Manager.

An essential criterion for an enterprise cloud to succeed is the ability to ensure business service levels, especially when business users have full visibility into usage costs via "showback" or "chargeback". With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry, capable of managing business service levels "applications-to-disk" in an enterprise private cloud, all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform, and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud, including planning and setup, service delivery, operations management, and metering and chargeback. Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private, and hybrid clouds across infrastructure, platform, and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible by our years of investment in creating cloud-enabling technologies.

Summary

But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather, it is to help you ask the right questions before you embark on your cloud journey. So to summarize, here are the key takeaways:

- It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. Those who approach this purely as a technology project are more likely to fail.
- Think about where you want to be in 3-5 years before you get started. Your long-term objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.
- Don't make the mistake of equating cloud with virtualization and Infrastructure-as-a-Service (IaaS). Spinning up a VM on demand will give some short-term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around higher-level services before you begin. And ask your vendors how they are going to be your partners in this journey.
- Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers users with the same kind of control and transparency that they are used to with a public cloud service.
- Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and more so in a cloud environment. There are enormous costs associated with stitching a solution together out of disparate components, and even more in maintaining it.

Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

< Previous Page | 233 234 235 236 237 238 239 240 241 242 243 244  | Next Page >