Search Results

Search found 20289 results on 812 pages for 'service locator'.

Page 159/812 | < Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >

  • Why don't Domain class static methods work from inside a grails "service"?

    - by ?????
    I want a grails service to be able to access Domain static methods, for queries, etc. For example, in a controller, I can call IncomingCall.count() to get the number of records in table "IncomingCall" but if I try to do this from inside a service, I get the error: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'incomingStatusService': Invocation of init method failed; nested exception is groovy.lang.MissingMethodException: No signature of method: static ms.wdw.tropocontrol.IncomingCall.count() is applicable for argument types: () values: [] How do these methods get injected? There's no magic def statement in a controller that appears to do this. Or is the problem that Hibernate isn't available from my Service class?

    Read the article

  • JBoss as a windows service. Where can i set the JAVA_OPTS?

    - by ikky
    Hi. I'm running JBoss as a Windows service, but I can't seem to find where to configure the JAVA_OPTS to make it work properly. I need to set the Xms and Xmx values. When I run JBoss manually (run.bat) and set JAVA_OPTS=-Xms128m -Xmx512m in that file, it works. Here is the install.bat I use to install JBoss as a service:

      set JBOSS_CLASS_PATH=%JAVA_HOME%\lib\tools.jar;%JBOSS_HOME%\bin\run.jar
      rem copy /Y JavaService.exe D:\PROJECT\bin\JBossService.exe
      JBossService.exe -install JBoss %JAVA_HOME%\jre\bin\server\jvm.dll -Djava.class.path=%JBOSS_CLASS_PATH% -start org.jboss.Main -stop org.jboss.Shutdown -method systemExit -out %PROJECT_HOME%\log\JBoss_out.log -err %PROJECT_HOME%\log\JBoss_err.log -current D:\PROJECT\bin
      net start JBoss

    When I look at the JBoss Application Server info (http://localhost:8080/web-console/) I see: JVM Environment - Free Memory: 9 MB, Max Memory: 63 MB, Total Memory: 63 MB. I MUST have more Max Memory. Does anybody know where I can set the JAVA_OPTS when running JBoss as a service?
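
    One possible adjustment (a sketch only; the exact option placement is an assumption, so verify it against the JavaService/JBossService build in use): JavaService-style wrappers generally accept JVM options between the jvm.dll path and the -start argument, so the heap settings can be supplied when the service is installed:

      JBossService.exe -install JBoss %JAVA_HOME%\jre\bin\server\jvm.dll ^
          -Xms128m -Xmx512m ^
          -Djava.class.path=%JBOSS_CLASS_PATH% ^
          -start org.jboss.Main -stop org.jboss.Shutdown -method systemExit ^
          -out %PROJECT_HOME%\log\JBoss_out.log -err %PROJECT_HOME%\log\JBoss_err.log ^
          -current D:\PROJECT\bin

    After changing the install command, remove and re-install the service so the new JVM options take effect; the web console's Max Memory value should then reflect the -Xmx setting.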

    Read the article

  • Can Google App Engine be used for "check for updated" and download binary file web service?

    - by Tofrizer
    Hi, I'm a Google App Engine newbie and would be grateful for any help. I have an iPhone app which sources its data from an SQLite db stored locally on the device. I'd like to set up a Google App Engine web service which my iPhone client will contact to check whether there is a newer version of the SQLite database it needs to download. So the iPhone client hits the web service with some kind of version number/timestamp; if there is a newer file, App Engine notifies the client, and the client then requests the new database, which App Engine serves for download. Is it possible to set up a web service in Google App Engine to do this? Could anyone point me to any sample code / tutorials please? Many thanks

    Read the article

  • How can Excel 2007 / 2010 consume a REST web service?

    - by jallen
    What options exist to consume a REST web service from within Excel 2007 / 2010? I can use XML Maps to consume a basic XML list, but that doesn't let me build a dynamic URL (so that I can include parameters). For example, I can add an XML Map to Excel for http://machine/service/level/5 and display the values in the workbook just fine - no problem there. The real question is, how can I dynamically change the /5 part of the URL to come from another cell in Excel? That way I can have a couple of cells that hold the options (what ID, what name, etc.), and whenever those values change, a new URL would (ideally) be constructed and the XML map refreshed. Is such a thing possible? Does anyone else have a better way to take some parameters, call a web service (REST or SOAP, I'm not picky) and shove the results back into Excel for further manipulation?

    Read the article

  • How does the Software Service Industry get its clients, or where do companies look for their clients?

    - by Rachel
    This question is inclined more towards the business side of the software industry. I am sure we have two types of companies in the software industry, as in any other industry: product-based and service-based. For a product-based company, we look more at product features and at whether the market is mature for a particular product. But for a service-based company (whether a big giant like Infosys, TCS or Sapient, or a midsize or small firm providing services like web design, website design, corporate identity, logo design, Flash design, web applications, enterprise portals, rich internet applications, business applications, technology consulting, ecommerce, online store creation, custom shopping carts, ecommerce hosting, website marketing, organic SEO, pay-per-click, social media optimization, mobile websites and mobile applications), where do they look for clients and how do they manage to get them? So my basic question is: where do service-based companies look for clients, or where do they get their clients from?

    Read the article

  • Workaround for PHP SOAP request failure when wsdl defines service port binding as https and port 80?

    - by scooterhanson
    I am consuming a SOAP web service using PHP 5's SOAP extension. The service's WSDL was generated using Axis java2wsdl, and whatever options were used during generation result in the port binding URL being listed as https://xxx.xxx.xxx.xxx:80. If I download the WSDL to my server, remove the port 80 specification from the port binding location value, and reference the local file in my SoapClient call, it works fine. However, if I try to reference it remotely (or download it and reference it locally as-is), the call fails with a SOAP fault. I have no input into the service side, so I can't make them change their WSDL-generation process. So, unless there's a way to make the SoapClient ignorant of the port, I'm stuck with using a locally modified copy of someone else's WSDL (which I'd rather not do). Any thoughts on how to make my SoapClient ignore port 80?

    Read the article

  • What exactly is the nature of .NET Framework 3.5 Service Pack 1?

    - by Richard77
    Hello, I did some recovery disk operations on my laptop because it became unstable. Now I'm about to re-install SQL Server 2008 Professional, but it keeps telling me that I need to install .NET Framework 3.5 with Service Pack 1. What's strange is that I was asked to do the same when I installed Visual Studio 2008 Professional earlier. I'd like to know: what is .NET Framework Service Pack 1? Is it one piece of software sitting on top of Windows? Or several pieces of software with the same name, so that Visual Studio has it, SQL Server also has it, and so on? And why the name "Service Pack 1" after ".NET Framework"? I'm really lost. Thanks for helping

    Read the article

  • Apache Axis web service clients vs plain SOAP requests.

    - by Andy Pryor
    I'm looking for the best way to consume a Java web service that returns rather large and complex objects. I am currently using Apache Axis clients generated from the WSDL (using the Eclipse "generate web service client" tool). We have concerns about the performance of this approach: the service proxy objects are not thread safe, and they are rather heavy to instantiate, 2-3 MB on the JVM. The alternative is making HTTP connections and building the SOAP requests as strings; I would then have to interpret the response and build objects from the XML myself. Would this be a better alternative to the heavy Axis objects? I searched for good reading on this; if anyone has any links I would greatly appreciate it.

    Read the article

  • Is it possible to configure Apache to host both an ASP.NET Web Service and a PHP Web site?

    - by Eduardo León
    Noob question (because I'm a noob when it comes to Web development); I'm not sure whether I should ask it here or at ServerFault. I am developing an ASP.NET Web Service and a PHP Web site that consumes the Web Service. They are meant to run on different machines, but for development purposes only I need to run both on my machine, and I cannot use virtual machines. I would like to know whether it is possible to configure Apache to host both my Web Service and my Web site, or whether I need to keep the Web Service on IIS and host only the PHP site with Apache. I am using: IIS 7.5; Apache HTTP Server 2.2 (note: I have nothing against Apache; in fact, so far I like it more than IIS, but I would rather not have two Web server applications installed on the same machine); PHP 5.3.4; .NET Framework 2.0, 3.0 or 3.5 (whichever comes with Visual Studio 2008); and mod_aspdotnet for Apache 2.2.

    Read the article

  • Where best to instantiate and close a Silverlight-enabled WCF Service from the Silverlight app?

    - by Yttrium
    When using a Silverlight-enabled WCF service, where is the best place to instantiate the service and to call the CloseAsync() method? Should you, say, create an instance each time you need to make a call to the service, or is it better to keep a single instance as a field of the UserControl that will be making the calls? And where is it better to call the CloseAsync method? Should you call it in each of the "someServiceCall_completed" event handlers? Or, if the client is created as a field of the UserControl class, is there a single place to call it, like a Dispose method or something equivalent for the UserControl class? Thanks, Jeff
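
    One common pattern (a sketch only; MyServiceClient and DoWork are placeholder names for whatever the generated proxy exposes) is to create a client per call and close it from that call's Completed handler:

      // Create a fresh proxy for each call; Silverlight proxies are cheap to
      // construct and all service calls are asynchronous.
      var client = new MyServiceClient();
      client.DoWorkCompleted += (s, e) =>
      {
          if (e.Error == null)
          {
              // use e.Result here
          }
          // Close (or Abort on failure) once this call has finished.
          if (client.State == System.ServiceModel.CommunicationState.Faulted)
              client.Abort();
          else
              client.CloseAsync();
      };
      client.DoWorkAsync();

    Holding one instance as a UserControl field also works, but then the proxy has to be re-created if it ever faults, and CloseAsync needs one well-defined place to be called, such as the control's Unloaded event.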

    Read the article

  • Why Flash can't be rendered in a Windows Service?

    - by Leonid
    I'm trying to solve a problem similar to the one described here: creating a Windows service that takes snapshots of rich web pages (HTML + JS + Flash) and saves them to a PDF file. The Firefox + cmdlnprint bundle did the trick for me: I wrote a simple program, running as a service, that invokes Firefox to produce the PDF. All seems well and the PDF gets created, but Flash content is completely missing. When the same program is started outside of a service, though, Flash renders just fine. Can anyone shed some light on what blocks Flash from rendering when running as a service, and whether there's a workaround? Thanks!

    Read the article

  • Having problem with a crawl service in .net: Server not responding to IP ping. Is it bandwidth or ht

    - by Hamid
    Hi to all. I have developed a web crawling service (a multi-threaded Windows service). It works fine, but sometimes the server's network stops responding: I can't ping the server's IP from the internet, although I can still ping the other network card (the local IP that has no internet access). After I open the server with Remote Desktop and stop the crawling service, I can ping it again. What's my problem? A bandwidth limit, an exceeded maximum connection limit, or something else? How can I prevent this issue? Note: when this problem occurs, I open a browser to browse web sites, but I can't open any website. Could you please help me? Thanks in advance.
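
    One thing worth ruling out on the crawler side is simply opening too many simultaneous outbound connections and exhausting the network stack or the edge device's connection table. A minimal C# sketch (the class name and the limit values are illustrative, not taken from the original post) that caps concurrency:

      using System;
      using System.Net;
      using System.Threading;

      class ThrottledFetcher
      {
          // Cap the number of requests that may be in flight at once.
          private static readonly Semaphore Slots = new Semaphore(10, 10);

          static ThrottledFetcher()
          {
              // .NET allows only 2 connections per host by default; raising this
              // too far is exactly what can swamp the server's network interface.
              ServicePointManager.DefaultConnectionLimit = 10;
          }

          public static string Fetch(string url)
          {
              Slots.WaitOne();               // block until a slot is free
              try
              {
                  using (var client = new WebClient())
                  {
                      return client.DownloadString(url);
                  }
              }
              finally
              {
                  Slots.Release();           // always free the slot
              }
          }
      }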

    Read the article

  • What happens when an OSGi service using JNI is unregistered while in use?

    - by schngrg
    As I understand it, OSGi services can be unregistered at any time, including while they are in use. Consider an OSGi service which internally makes a long-running JNI call, and suppose that while that JNI call is executing, the service is unregistered by OSGi. Will the JNI call be allowed to finish, or will it be terminated midway? What if it were just a normal, non-JNI, long-running Java call? Will that call be allowed to finish execution, or will OSGi terminate everything immediately and unregister? What is the expected behavior in such a case? Does the expected behavior depend on whether the service was loaded using a 'tracker' or not? SG

    Read the article

  • Best way to access a sqlite database file in a web service.

    - by rogernorling
    First question from me on Stack Overflow. I have created a Java web application containing a web service using NetBeans (I hope a web application was the correct choice). I use the web application as is, with no extra frameworks. The web service uses an SQLite JDBC driver to access an SQLite database file. My problem is that the file path ends up incorrect when I try to form the JDBC connection string. Also, the working directory is different when the service is deployed and when JUnit tests are run. I read somewhere about including the file as a resource, but examples of this were nowhere to be seen. In any case, what is the best way to open the SQLite database, both when the web service is deployed and when I test it "locally"? I don't know much about web services; I just need it to work, so please help me with the technicalities.

    Read the article

  • Do I put Ninject in the project that contains my service layer as well?

    - by chobo2
    Hi, I have three projects (a web UI, a framework project that contains my service layers and repositories, and unit tests). Most of my unit testing will be done in the service layer, as it will contain all the business logic, so I will be mocking out the repositories. In some cases I will query a repository directly from my controller (say I just need all users in the database and am not doing any business logic with them). So I am thinking that I need Ninject in my framework project as well. Where is the best place to put the bindings so I can use them against the repositories in my service layers?
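
    A common answer (a sketch assuming Ninject 2.x-style modules; the interface and class names are placeholders) is to keep a binding module in the framework project next to the repositories and services, and have the web UI load it when building the kernel:

      using Ninject;
      using Ninject.Modules;

      // Lives in the framework project, alongside the repositories and services.
      public class FrameworkModule : NinjectModule
      {
          public override void Load()
          {
              Bind<IUserRepository>().To<SqlUserRepository>();
              Bind<IUserService>().To<UserService>();
          }
      }

      // Composition root in the web UI project.
      public static class Bootstrapper
      {
          public static IKernel CreateKernel()
          {
              // Controllers can take IUserService or IUserRepository in their
              // constructors; unit tests pass in mocks instead of the kernel.
              return new StandardKernel(new FrameworkModule());
          }
      }

    The service classes themselves stay free of any Ninject reference; only the module and the composition root know about the container.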

    Read the article

  • nagios check_ping: Invalid hostname/address

    - by lgt
    I'm trying to set up Nagios for a host whose web root is www.example.com/ui/html/, but Nagios won't accept this kind of host path ("check_ping: Invalid hostname/address"). Is there a workaround for this issue?

      # Define a host for the local machine
      define host{
              use                 linux-server    ; Name of host template to use
                                                  ; This host definition will inherit all variables that are defined
                                                  ; in (or inherited by) the linux-server host template definition.
              host_name           example.com/ui/html
              alias               example.com/ui/html
              address             www.example.com/ui/html/
              }

      ###############################################################################
      # SERVICE DEFINITIONS
      ###############################################################################

      # Define a service to check HTTP on the local machine.
      # Disable notifications for this service by default, as not all users may have HTTP enabled.
      define service{
              use                     generic-service
              name                    http-service
              service_description     HTTP
              is_volatile             0
              check_period            24x7
              max_check_attempts      3
              normal_check_interval   5
              retry_check_interval    1
              notifications_enabled   1
              notification_interval   0
              notification_period     24x7
              notification_options    c,r
              check_command           check_http!$HOSTADDRESS$
              register                0
              }

    Thanks
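
    A common resolution (sketched here with illustrative values; check_http_uri is a made-up command name) is to keep only a pingable hostname in the host's address directive and move the /ui/html/ path into the HTTP check, since check_ping cannot ping a URL path. The check_http plugin accepts -H for the host and -u for the URI:

      define host{
              use         linux-server
              host_name   example.com
              alias       example.com UI
              address     www.example.com
              }

      define command{
              command_name    check_http_uri
              command_line    $USER1$/check_http -H $HOSTADDRESS$ -u $ARG1$
              }

      define service{
              use                     generic-service
              host_name               example.com
              service_description     HTTP /ui/html/
              check_command           check_http_uri!/ui/html/
              }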

    Read the article

  • AspNetCompatibility in WCF Services - easy to trip up

    - by Rick Strahl
    This isn't the first time I've hit this particular wall: I'm creating a WCF REST service for AJAX callbacks and using the WebScriptServiceHostFactory host factory in the service:

      <%@ ServiceHost Language="C#" Service="WcfAjax.BasicWcfService"
          CodeBehind="BasicWcfService.cs"
          Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>

    to avoid all configuration. Because the custom factory creates the ASP.NET AJAX compatible format, I can then remove all of the configuration settings that typically get dumped into the web.config file. However, I do want ASP.NET compatibility, so I still leave in:

      <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
      </system.serviceModel>

    in the web.config file. This option gives you access to the HttpContext.Current object and thereby to most of the standard ASP.NET request and response features. This is not recommended as a primary practice, but it can be useful in some scenarios and in backwards-compatibility scenarios with ASP.NET AJAX Web Services.

    Now, here's where things get funky. Assuming you have the setting in web.config, if you now declare a service like this:

      [ServiceContract(Namespace = "DevConnections")]
      #if DEBUG
      [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
      #endif
      public class BasicWcfService

    (or by using an interface that defines the service contract) you'll find that the service will not work when an AJAX call is made against it. You'll get a 500 error and a System.ServiceModel.ServiceActivationException system error. Worse, even with IncludeExceptionDetailInFaults enabled you get absolutely no indication from WCF of what the problem is.

    So what's the problem? The issue is that once you specify aspNetCompatibilityEnabled="true" in the configuration, you *have to* specify the AspNetCompatibilityRequirements attribute with one of the modes that enables, or at least allows, compatibility. You need either Required or Allowed:

      [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]

    Without it the service will simply fail without further warning. It will also fail if you set the attribute value to NotAllowed; the following causes the service to fail as above:

      [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)]

    This is not totally unreasonable, but it's a difficult issue to debug, especially since the configuration setting is global: if you have more than one service and one requires traditional ASP.NET access while another doesn't, then both must have the attribute specified. This is one reason why you'd want to avoid using this functionality unless absolutely necessary; WCF REST provides some basic access to the HTTP features after all, although what's there is severely limited. I also wish that ServiceActivation errors would provide more error information. Getting an activation error without further info on what actually is wrong is pretty worthless, especially when the cause is a technicality like a mismatched configuration/attribute setting like this.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, WCF, AJAX
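
    For reference, the snippets above can be assembled into a single working declaration roughly like this (a convenience sketch, not the post's verbatim source):

      using System.ServiceModel;
      using System.ServiceModel.Activation;

      [ServiceContract(Namespace = "DevConnections")]
      [AspNetCompatibilityRequirements(
          RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
      #if DEBUG
      [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
      #endif
      public class BasicWcfService
      {
          // With Required (or Allowed), HttpContext.Current is available inside
          // operations when the service is hosted under ASP.NET.
      }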

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint. 1) The object Model 2) Web services. SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations, some of the issues and frustrations with working with SharePoint built in web services. Ultimately, for the tasks I was given to , SharePoint built in web services did not suffice. My evaluation of SharePoint built in services was compared against creating my own WCF Services to do what I needed. The current project I'm working on right now involved several "integration points". A remote application, installed on a separate server was to contact SharePoint and perform an task or operation. So I decided to start up Visual Studio and built a DLL and basically have 2 layers of logic. An integration layer and a data layer. A good friend of mine pointed me to SOLID principles and referred me to some videos and tutorials about it. I decided to implement the methodology (although a lot of the principles are common sense and I already incorporated in my coding practices). I was to deliver this dll to the application team and they would simply call the methods exposed by this dll and voila! it would do some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same, it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing SharePoint built in Web Services and another implementing my own WCF Service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my visual studio project I created a class for every web service call. So for example, if I used List.asx. I created a class called "DocumentRetreival" this class would do the grunt work to connect to the correct URL, It would perform the basic operation of contacting the service and so on. If I used a view.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class but it would now interact with all he operations in view.asmx. This gave my data layer the ability to perform multiple calls without really worrying about some of the grunt work each class performs. This again, is a classic SOLID principle. So, in order to compare them side by side we can look at both data layers and with is involved in each. 
Let's take a look at the "Create Project" task or operation. The integration point is described as, "dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the List.asmx Web service in SharePoint. So here we go! Let's take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:

      class DocumentRetrieval
      {
          private ListsSoapClient _service;
          private bool _impersonation;

          public DocumentRetrieval(bool impersonation, string endpt)
          {
              _service = new ListsSoapClient();
              this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
              _impersonation = impersonation;
              if (_impersonation)
              {
                  _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                  _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                  _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                      System.Security.Principal.TokenImpersonationLevel.Impersonation;
              }
          }

          private void SetEndPoint(string p)
          {
              _service.Endpoint.Address = new EndpointAddress(p);
          }

          /// <summary>
          /// Creates a document library with a specific name and template ID
          /// </summary>
          /// <param name="listName">New list name</param>
          /// <param name="templateID">Template ID</param>
          /// <returns></returns>
          public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
          {
              XmlDocument sample = new XmlDocument();
              XmlElement viewCol = sample.CreateElement("Empty");
              try
              {
                  _service.Open();
                  viewCol = _service.AddList(listName, "", templateID);
              }
              catch (Exception ex)
              {
                  exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
              }
              finally
              {
                  _service.Close();
              }
              return viewCol;
          }
      }

There was a lot more in this class (which I am not including) because I was reusing the grunt work and making other operations with List.asmx: for example, updating content types, or changing and configuring lists and document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method, so that the integration or data layer could recognize whether a library already exists. Well, there is no service call or method available to do that check.
So this is what I wrote:   public bool DocLibExists(string listName, ref ExceptionContract exContract)         {             try             {                 var allLists = _service.GetListCollection();                                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value ==listName);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);             }             return false;         } This really just gets an XMLElement with all the lists. It was then up to me to sift through the clutter and noise and see if Document library already existed. This took a little bit of getting used to. Now instead of working with code, you are working with XMLElement response format from web service. I wrote a LINQ query to go through and find if the attribute "Title" existed and had a value of the listname then it would return True, if not False. I didn't particularly like working this way. Dealing with XMLElement responses and then having to manipulate it to get at the exact data I was looking for. Once the check for the DocLibExists, was done, I would either create the document library or send back an error indicating the document library already existed. Now lets examine the code that actually creates the document library. It does what you are really after, it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc… they all he a unique integer associated with it. Well, that's great but the client came back to me and gave me some specifics that each "project" or document library, should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each Project specified unique configurations (auditing, versioning, content types, etc…) So what turned out to be a simple implementation of creating a document library as a repository for a project, turned out to be quite involved.  The first thing I thought of was to create a template for document library. There are other ways you can do this too. Using the web Service call, you could configure views, versioning, even content types, etc… the only catch is, you have to be working quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made with CAML statements. Working with Web Services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML statement syntax error, or even a CAML error. I was not sure if it was a security , performance or code based issue. It was quite tough to work with. At times it was difficult to work with because of the way SharePoint handles metadata. There are "Names", "Display Name", and "StaticName" fields. It was quite tough to understand at times, which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now intellisense for CAML statements in Visual Studio that might help but ultimately I'm not fond of CAML with Web Services.   So I decided on the template. 
So my plan was to create create a document library, configure it accordingly and then use The Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list template etc… It is quite interesting because it does not generate an STP file, it actually generates an xml definition and a feature you can activate and make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications to this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that using the template builder , it did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new Document library definition with a unique ID. The problem was that the service call accepts an int but it only has access to the built in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way.     I should also mention that one of the nice features about SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end user administrators. These templates are quite unique because they are saved as an STP file and not an xml definition. I also went this route and tried to see if there was another service call where I could create a document library based no given template name. Nope! none.      After some thinking I decide to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries base on a template in which an ID was required and also templates saved as STP files. Now I don't want to bother with posting the code to contact WCF service because it's self explanatory, but I will post the code that I used to create a list with custom template. public ServiceResult CreateProject(string name, string templateName, string projectId)         {             string siteurl = SPContext.Current.Site.Url;             Guid webguid = SPContext.Current.Web.ID;                        using (SPSite site = new SPSite(siteurl))             {                 using (SPWeb rootweb = site.RootWeb)                 {                     SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);                     ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));                 }//SpWeb             }//SPSite              return _globalResult;                   }         private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) {                         var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));             if (temp != null)             {                             try                 {                                         Guid listGuid = targetsite.Lists.Add(name, "", temp);                     SPList newList = targetsite.Lists[listGuid];                     _globalResult = new ServiceResult(true, "Success", "Success");                 }                 catch (Exception ex)                 {                     _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? 
"None" : ex.Message + " " + templateName), ex.StackTrace.ToString());                 }                                       }        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action) {                        using (SPSite sitecollection = new SPSite(siteurl)) {                 using (SPWeb web = sitecollection.AllWebs[webguid]) {                     action(web);                 }                     }                  } This code is actually some of the code I implemented for the service. there was a lot more I did on Project Creation which I will cover in my next blog post. I implemented an ACTION method to process the web. This allowed me to properly dispose the SPWEb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me, this allowed me to do a lot more than just create a document library with a template, it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented , the client came back to me and said, "we reference all our projects with ID's in our application. we want SharePoint to do the same". This has been something I have been doing for a little while now but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through property bag. I believe I have blogged about it before. This time it required metadata added to a document library. No problem!!! I also mentioned these web parts that were to go on the "All Documents" View. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the webpart properties. 
The following code enhances and replaces the "Act_CreateProject " method above:  private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) {                         var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));             if (temp != null)             {                 SPLimitedWebPartManager wpmgr = null;                               try                 {                                         Guid listGuid = targetsite.Lists.Add(name, "", temp);                     SPList newList = targetsite.Lists[listGuid];                     SPFolder rootFolder = newList.RootFolder;                     rootFolder.Properties.Add(KEY, projectId);                     rootFolder.Update();                     if (rootFolder.ParentWeb != targetsite)                         rootFolder.ParentWeb.Dispose();                     if (!templateName.Contains("Natural"))                     {                         SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));                         SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);                         wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);                         ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);                                              alldocfile.Update();                     }                                        if (newList.ParentWeb != targetsite)                         newList.ParentWeb.Dispose();                     _globalResult = new ServiceResult(true, "Success", "Success");                 }                 catch (Exception ex)                 {                     _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());                 }                 finally                 {                     if (wpmgr != null)                     {                         wpmgr.Web.Dispose();                         wpmgr.Dispose();                     }                 }             }                         }       private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)         {             var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));             if (wp != null)             {                           (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;                 mgr.SaveChanges(wp);             }         }   This Shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library, Unfortunately, the SPList does not have a Property bag that I can add a key\value pair to. It has to be done on the root folder. Now everything in the integration will reference projects by ID's and will not care about names. My, "DocLibExists" will now need to be changed because a web service is not set up to look at property bags.  I had to write another method on the Service to do the equivalent but with ID's instead of names.  The second thing you will notice about the code is the use of the Webpartmanager. I have seen several examples online, and also read a lot about memory leaks, The above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose it like I did. 
CONCLUSION This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that Web Services will do the trick if you can suffer through CAML and you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake, so hopefully it makes enough sense to someone.

    Read the article

  • Returning null vs Throwing exceptions

    - by Svish
    I'm in a bit of a disagreement with a more experienced developer on this issue, and I was wondering what you guys here think about it. The environment is Java, EJB 3, services, etc. The code I wrote calls a service to get things and to create things. The problem was that I got null pointer exceptions in places that didn't make sense. For example, when I asked the service to create an object, I got null back. And when I tried to look up an object with an id I knew existed, I still got null back. It was like it was ignoring me. I spent some time trying to figure out what was wrong in my code (since I'm less experienced I usually assume I have messed up). Turns out the reason was security: if the user principal using my service didn't have the right permissions to use the service I called from my service, then that service simply returned null. The services that are here already are usually not documented either, so this is just something you have to know... somehow... So here is the thing: I find this rather confusing as a developer interacting with this service. To me it would make much more sense if that service threw an exception which would tell me that hey, you don't have the proper permissions to get info about this thing or to create this new thing. I would then immediately know why my service wasn't working as expected. However, he argued that asking is not wrong. Exceptions should only be thrown when there is an error, and asking for a thing is not an error, even if you don't have permission to "see" the thing you asked for. The things are often looked up in a GUI by users, and for users without the right permissions these things simply "do not exist". So, in short: asking is not wrong, hence no exception. Get methods return null because to those users those things "don't exist". Create methods return null because nothing was created, since the user wasn't allowed to create anything. So, what do you guys think? Is this normal and/or good practice? I prefer throwing and catching exceptions because I find it much easier to know what's going on. So I would, for example, also prefer to throw a NotFoundException if you asked for an id which didn't exist, rather than returning null. Anyway, I'm just curious what others think about this, as I'm not the most experienced developer yet.

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago Devlabs' release of an open source RESTful API for BizTalk Server Business Rules.   This is an excellent addition to the BizTalk ecosystem and I congratulate Tellago on their work.   See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx   The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service which is a TCP/IP service that is hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group.   The engine is therefore distributed on each box, rather than exploited behind a central rules service.   This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only run legally on licensed BizTalk boxes. Increasingly we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not, currently, offer any centralised rule service facilities out of the box, and hence you have to roll your own (and then run your rules services on BTS boxes which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm ).   Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, mobile devices, etc.   We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me - I refuse to engage in the RESTFul vs. SOAP war). For example, on my current project we use claims based authorisation across the entire service bus and use WIF and WS-Federation for this purpose.   We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general purpose decisioning capabilities on the bus.   So, well done Tellago.   I haven't had a chance to play with the API yet, but am looking forward to doing so.

    Read the article

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution Summary Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR and automate many of the paper based processes to web based workflows for their 1.6 million customers. LADWP implemented a Self Service Portal using Oracle WebCenter Portal & Oracle WebCenter Content and Oracle SOA Suite for the integration of their complex back-end systems infrastructure. The new portal has received extremely positive feedback from not only the customers and users of the portal, but also other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution. Company OverviewLos Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States with over 1.6 million customers. LADWP provides water and power for millions of residential & commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.  Business ChallengesThe goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, as well as be the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing the transactions via IVR and walk-up and to automate many of the paper based processes to web based workflows for customers. This includes automation of Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage analysis, Service Request Management) Financial Assistance Programs Customer Rebate Programs Turn Off/Turn On/Transfer of Services Outage Reporting eNotification (SMS, email) Solution DeployedLADWP implemented a Self Service Portal using Oracle WebCenter Portal & Oracle WebCenter Content. Using Oracle SOA Suite they integrated various back-end systems including Oracle Siebel CRM IBM Mainframe based CIS FILENET for document management EBP Eletronic Bill Payment System HP Imprint System for BillXML data Other systems including outage reporting systems, SMS service, etc. The new portal’s features include: Complete Graphical redesign based on best practices in UI Design for high usability Customer Self Service implemented through MyAccount (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management) Financial Assistance Programs (CRM, WebCenter) Customer Rebate Programs (CRM, WebCenter) Turn On/Off/Transfer of services (Commercial & Residential) Outage Reporting eNotification (SMS, email) Multilingual (English & Spanish) – using WebCenter multi-language support Section 508 (ADA) Compliant Search – Using WebCenter SES (Secured Enterprise Search) Distributed Authorship in WebCenter Content Mobile Access (any Mobile Browser) Business ResultsThe new portal has received extremely positive feedback from not only customers and users of the portal, but also other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for their innovative solution. 
Additional Information LADWP OpenWorld presentation Oracle WebCenter Portal Oracle WebCenter Content Oracle SOA Suite

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I’ve written to ensure there aren't any side effects. So, for the K.I.S.S. method: Assuming that you're using a WCF service library (you are using service libraries correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data. The service contract We’ll use a very basic service contract, just for getting and updating an entity. I’ve used the default “CompositeType” that is in the template, handy only for examples like this. I’ve added an Id property and overridden ToString and Equals. [ServiceContract] public interface IMyService { [OperationContract] CompositeType GetCompositeType(int id); [OperationContract] CompositeType SaveCompositeType(CompositeType item); [OperationContract] CompositeTypeCollection GetAllCompositeTypes(); } The implementation When I implement the service, I want to be able to send known data into it so I don’t have to fuss around with database access or the like. To do this, I first have to create an interface for my data access: public interface IMyServiceDataManager { CompositeType GetCompositeType(int id); CompositeType SaveCompositeType(CompositeType item); CompositeTypeCollection GetAllCompositeTypes(); } For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn’t matter. That’s the point. So here’s what our service looks like in its most basic form: public CompositeType GetCompositeType(int id) { //sanity checks if (id == 0) throw new ArgumentException("id cannot be zero."); return _dataManager.GetCompositeType(id); } public CompositeType SaveCompositeType(CompositeType item) { return _dataManager.SaveCompositeType(item); } public CompositeTypeCollection GetAllCompositeTypes() { return _dataManager.GetAllCompositeTypes(); } But what about the datamanager? The constructor takes care of that. I don’t want to expose any testing ability in release (or the ability for someone to swap out my datamanager) so this is what we get: IMyServiceDataManager _dataManager; public MyService() { _dataManager = new MyServiceDataManager(); } #if DEBUG public MyService(IMyServiceDataManager dataManager) { _dataManager = dataManager; } #endif The Stub Now it’s time for the rubber to meet the road… Like most guys that ever talk about unit testing here’s a sample that is painting in *very* broad strokes. The important part however is that within the test project, I’ve created a bunk (unit testing purists would say stub I believe) object that implements my IMyServiceDataManager so that I can deal with known data. 
Here it is: internal class FakeMyServiceDataManager : IMyServiceDataManager { internal FakeMyServiceDataManager() { Collection = new CompositeTypeCollection(); Collection.AddRange(new CompositeTypeCollection { new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", }, new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", }, new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", }, }); } CompositeTypeCollection Collection { get; set; } #region IMyServiceDataManager Members public CompositeType GetCompositeType(int id) { if (id <= 0) return null; return Collection.SingleOrDefault(m => m.Id == id); } public CompositeType SaveCompositeType(CompositeType item) { var existing = Collection.SingleOrDefault(m => m.Id == item.Id); if (null != existing) { Collection.Remove(existing); } if (item.Id == 0) { item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1; } Collection.Add(item); return item; } public CompositeTypeCollection GetAllCompositeTypes() { return Collection; } #endregion } So it’s tough to see in this example why any of this is necessary, but in a real world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that between refactorings etc, that it doesn’t send sparking cogs all about or let the blue smoke out. Here’s a simple test that brings it all home, remember, broad strokes: [TestMethod] public void MyService_GetCompositeType_ExpectedValues() { FakeMyServiceDataManager fake = new FakeMyServiceDataManager(); MyService service = new MyService(fake); CompositeType expected = fake.GetCompositeType(1); CompositeType actual = service.GetCompositeType(2); Assert.AreEqual<CompositeType>(expected, actual, "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual); } Summary That’s really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn’t really feel like it. This speaks volumes to my not yet ninja unit testing prowess.

    Read the article

  • Breadcrumb for multiple categories

    - by Damodar Bashyal
    I post in multiple categories, so is it better to have:

      Consulting Services > Implementation > Service A
      Consulting Services > Optimization > Service A
      Consulting Services > Upgrade > Service A

    or:

      Consulting Services > Implementation, Optimization, Upgrade > Service A

    I was doing it the second way; the problem is Google doesn't show the third set of crumbs, i.e. it only displays "Consulting Services" in the search result. But having multiple breadcrumb trails on the page doesn't look good either. Any suggestions? Update, for @PatomaS's question: by "3 lines of breadcrumbs" (see above) I mean that I have posted the same article (Service A) in 3 categories (Implementation, Optimization, Upgrade), so you can reach the same article through 3 categories. So what's the best breadcrumb to display on article 'Service A'?

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a Web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage on the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure you need to keep in mind that more than one instance of the application can be running at any time and therefore you need to provide some mechanism to keep all instances informed with the latest changes. Because the communication through internal endpoints between Azure role instances is at no cost, a good solution can be maintaining the information on Azure Storage Tables, reading its contents on the Application_Start event and populating its changes to all other instances using the internal HTTP port available on Azure Web Roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles to maintain all instances up to date. 1.   Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint   2.   Add a new WCF service to the Web Role, for example NotificationService.svc 3.   Disable multiple site bindings in web.config: <serviceHostingEnvironment multipleSiteBindingsEnabled="false"> 4.   Add a method on the new service to receive notifications from other role instances. namespace Service { [ServiceContract] public interface INotificationService { [OperationContract(IsOneWay = true)] void Notify(Information info); } } 5.   Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint. public class InternalServiceFactory : ServiceHostFactory { protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses) { var internalEndpointAddress = string.Format( "http://{0}/NotificationService.svc", RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint); ServiceHost host = new ServiceHost( typeof(NotificationService), new Uri(internalEndpointAddress)); BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None); host.AddServiceEndpoint( typeof(INotificationService), binding, internalEndpointAddress); return host; } } Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service. 6.   Edit the markup of the service right clicking the svc file and selecting "View markup" to add the new factory as the factory to be used to create the service <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %> 7.   Now you can notify changes to other instances using this code: var current = RoleEnvironment.CurrentRoleInstance; var endPoints = current.Role.Instances .Where(instance => instance != current) .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]); foreach (var ep in endPoints) { EndpointAddress address = new EndpointAddress( String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint)); BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None); var factory = new ChannelFactory<INotificationService>(binding); INotificationService instance = factory.CreateChannel(address); instance.Notify(changedinfo); }

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples.  In this posting we'll take a look at using a SOAP Web Service but calling it programmatically in code and parsing the return into a bean. Getting the sample code: Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace than WS-Part1 Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation.  Sometimes this service goes down so please ensure you know it's up before reporting this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation. Defing the Application: The application setup is identical to the Weather1 version.  There are some improvements to the data that is displayed as part of this example though.  Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature.  We've also added the temperature Hi/Low values into the UI. Summary of Fundamental Changes In This Application The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls.  This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service.  This way if your application is, say offline, your bean could still populate with data from a local cache and still show you some UI as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity. What's different with this example? We have setup the Web Service DC the same way but now we have managed beans to process the data.  The following classes define the "Model" of our application:  CityInformation-CityForecast-Forecast, WeatherInformation-WeatherDescription.  We use WeatherBean for UI interaction to the model layer.  If you look through this example, we don't really do that much with the java code except use it to grab the image URL from the weather description.  In a more realistic example, you might be using some JDBC classes to persist the data to a local database. To have a good architecture it is always good to keep your model and UI layers separate.  This gets muddied if you start to use bindings on a page invoked from Java code and this java code starts to become your "model" layer.  Since bindings are page specific, your model layer starts to become entwined with your UI.  Not good!  To help with this, we've added some utility functions that let you invoke DC methods without having a binding and thus execute methods from your "model" layer without requiring a binding in your page definition.  We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class.  An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.Java. What's a GenericType? Because Web Service Data Controls (and also URL Data Controls AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects.  
The GenericType class is simply a property map of name/value pairs that can be hierarchical.  There are methods like getAttribute where you supply the index of the attribute or it's string property name.  Why is this important to know?  Because invokeDataControlMethod returns GenericType objects and developers either need to parse these GenericType objects themselves or use one of our helper functions. GenericTypeBeanSerializationHelper This class does exactly what it's name implies.  It's a helper class for developers to aid in serialization of GenericTypes to/from java objects.  This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real java object you can use more easily.  Here you would use the fromGenericType method.  This method takes the class of the Java object you wish to return and the GenericType as parameters.  The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class.  Then the method returns that new object of the class you specified.  This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes.  The reverse method, toGenericType is also available when you want to go the other way.  In this case you supply the string that represents the package location in the DataControl definition (Example: "MyDC.myParams.MyCollection") and then pass in the Java object you have that holds the data and a GenericType is returned to you.  Again, it will use reflection to calculate the attributes that match between the java class and the GenericType and call the getters/setters on those. Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously so your UI will fill dynamically when the service call returns but in the meantime you show the data you have locally in your bean fed from some local cache.  This gives your users instant delivery of some data while you fetch other data in the background.

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >