Search Results


  • How to pass XML to DB using XMLTYPE

    - by James Taylor
    Probably not a common use case, but I have seen it pop up from time to time: how do I take XML from a queue or web service and insert it into a DB table using XMLTYPE? In this example I create a basic table with a PAYLOAD field of type XMLTYPE, then take the full XML payload of the web service and insert it into that table for auditing purposes. I use SOA Suite 11.1.1.2, using a composite and mediator to link the web service with the DB adapter.

    1. Create the database objects

    Create the table XML_EXAMPLE_TBL:

        CREATE TABLE xml_example_tbl (payload XMLTYPE);

    Create the procedure LOAD_TEST_XML:

        CREATE OR REPLACE PROCEDURE load_test_xml (xmlFile IN CLOB) IS
        BEGIN
          INSERT INTO xml_example_tbl (payload) VALUES (XMLTYPE(xmlFile));
        -- Handle the exceptions
        EXCEPTION
          WHEN OTHERS THEN
            raise_application_error(-20101, 'Exception occurred in load_test_xml procedure: ' || SQLERRM || ' **** ' || xmlFile);
        END load_test_xml;
        /

    2. Create a new SOA project, TestXMLType, in JDeveloper

    In JDeveloper, either create a new Application or open an existing Application in which you want to put this work. Under File -> New -> SOA Tier -> SOA Project, provide a name for the project, e.g. TestXMLType, choose Empty Composite and click Finish.

    3. Create the database connection to the stored procedure

    A blank composite will be displayed. From the Component Palette drag a Database Adapter to the External References panel and configure the Database Adapter wizard to connect to the DB procedure created above. Provide a service name, InsertXML, and select the database connection where you installed the table and procedure above; if it doesn't exist, create a new one. Select Call a Stored Procedure or Function, then click Next. Choose the schema you installed your procedure into in step 1 and query for the LOAD_TEST_XML procedure. Click Next for the remaining screens until you get to the end, then click Finish to complete the database adapter wizard.

    4. Create the web service interface

    Download the sample schema singleString.xsd, which will be used as the input for the web service. It does not matter which schema you use; this solution will work with any, so feel free to use your own if required. Drag a Web Service from the Component Palette to the Exposed Services panel of the composite. Provide the name InvokeXMLLoad for the service and click the cog icon. Click the magnifying glass next to the URL field, select the import schema icon and browse to the location where you downloaded singleString.xsd. Click OK for Import Schema File, then select the singleString node of the imported schema. Accept all the defaults until you get back to the Web Service wizard screen, then click OK. This step creates a WSDL based on the schema downloaded earlier, and the exposed service will now appear on your composite.

    5. Create the mediator routing rules

    Drag a Mediator component into the middle section of the composite, called Components. Give it the name Route and accept the defaults. Link the services up to the Mediator by connecting the reference points, so that the exposed service, the Mediator and the database adapter reference are wired together.

    6. Perform the translation between the web service and the database adapter

    From the composite, double click the Route Mediator to show the map plan, then select the transformation icon to create the XSLT translation file. Choose Create New Mapper File and accept the defaults. From the Component Palette drag the get-content-as-string function into the middle of the translation file. Now map the root element of the source, singleString, to the XMLTYPE parameter of the database adapter, applying the get-content-as-string function: drag the singleString element to the left side of get-content-as-string, and drag the right side of get-content-as-string to the XMLFILE element of the database adapter. You have now completed the SOA composite; save your work, deploy and test. When you deploy, I have assumed that you have the correct database configuration in the WebLogic Console for the connection you set up to the stored procedure.

    7. Test the application

    Open Enterprise Manager, navigate to the TestXMLType composite and click the Test button. Load some dummy values into the input arguments and click the Test Web Service button. Once completed, you can run a SQL statement to check the insert; in this instance I have just used JDeveloper and opened a SQL Worksheet:

        SELECT * FROM xml_example_tbl;

    You should see the full payload in the result.
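    If you later want to query inside the stored payload rather than return it whole, XMLTYPE supports XQuery via XMLTABLE. The snippet below is a minimal sketch only: the element name "input" under the singleString root is an assumption, since the contents of singleString.xsd are not reproduced here, so adjust the paths (and add the schema's namespace if it declares one) to match your own XSD.

        -- Return the raw XML as a CLOB for a quick visual check
        SELECT t.payload.getClobVal() AS raw_xml
        FROM   xml_example_tbl t;

        -- Pull an individual element value out of the payload (element name assumed)
        SELECT x.input_value
        FROM   xml_example_tbl t,
               XMLTABLE('/singleString' PASSING t.payload
                        COLUMNS input_value VARCHAR2(200) PATH 'input') x;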

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION

    If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given to do, the SharePoint built-in web services did not suffice. My evaluation of the built-in services was made against creating my own WCF services to do what I needed. The current project I'm working on right now involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team and they would simply call the methods exposed by it and voila! it would do some task or operation in SharePoint.

    SOLUTION

    My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same; it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, and ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer: in referencing the SharePoint or WCF services in my Visual Studio project I created a class for every web service call. So, for example, if I used Lists.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service and so on. If I used Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would now interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about some of the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "the dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries; I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project and this is the class that contacts it:

        class DocumentRetrieval
        {
            private ListsSoapClient _service;
            private bool _impersonation;

            public DocumentRetrieval(bool impersonation, string endpt)
            {
                _service = new ListsSoapClient();
                this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
                _impersonation = impersonation;
                if (_impersonation)
                {
                    _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                    _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                    _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                        System.Security.Principal.TokenImpersonationLevel.Impersonation;
                }
            }

            private void SetEndPoint(string p)
            {
                _service.Endpoint.Address = new EndpointAddress(p);
            }

            /// <summary>
            /// Creates a document library with a specific name and template ID
            /// </summary>
            /// <param name="listName">New list name</param>
            /// <param name="templateID">Template ID</param>
            /// <returns></returns>
            public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
            {
                XmlDocument sample = new XmlDocument();
                XmlElement viewCol = sample.CreateElement("Empty");
                try
                {
                    _service.Open();
                    viewCol = _service.AddList(listName, "", templateID);
                }
                catch (Exception ex)
                {
                    exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
                }
                finally
                {
                    _service.Close();
                }
                return viewCol;
            }
        }

    There was a lot more in this class (that I am not including) because I was reusing the grunt work and making other operations with Lists.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method, so that the integration or data layer could recognize if a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise to see if the document library already existed. This took a little bit of getting used to: instead of working with code, you are now working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find whether the attribute "Title" existed with a value of the list name; if so it returns true, if not, false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it; Document Library, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had three types of projects. Each project would have unique views, about 10 views for each project. Each project specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service calls you could configure views, versioning, even content types, etc… the only catch is you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all; I was not sure if it was a security, performance or code based issue. It was also difficult at times because of the way SharePoint handles metadata: there are "Name", "Display Name" and "StaticName" fields, and it was quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have two web parts on it. Well, it turns out that the template builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int but it only has access to the built-in library int definitions; any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way.

    I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template, and it can all be done by end user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None.

    After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now I don't want to bother with posting the code to contact the WCF service because it's self explanatory, but I will post the code that I used to create a list with a custom template.

        public ServiceResult CreateProject(string name, string templateName, string projectId)
        {
            string siteurl = SPContext.Current.Site.Url;
            Guid webguid = SPContext.Current.Web.ID;
            using (SPSite site = new SPSite(siteurl))
            {
                using (SPWeb rootweb = site.RootWeb)
                {
                    SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                    ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
                } //SPWeb
            } //SPSite
            return _globalResult;
        }

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
            }
        }

        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
        {
            using (SPSite sitecollection = new SPSite(siteurl))
            {
                using (SPWeb web = sitecollection.AllWebs[webguid])
                {
                    action(web);
                }
            }
        }

    This is actually some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "we reference all our projects with IDs in our application; we want SharePoint to do the same". This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts; one of them was a Project ID configured in the web part properties.
    The following code enhances and replaces the Act_CreateProject method above:

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                SPLimitedWebPartManager wpmgr = null;
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    SPFolder rootFolder = newList.RootFolder;
                    rootFolder.Properties.Add(KEY, projectId);
                    rootFolder.Update();
                    if (rootFolder.ParentWeb != targetsite)
                        rootFolder.ParentWeb.Dispose();
                    if (!templateName.Contains("Natural"))
                    {
                        SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                        SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                        wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                        ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                        alldocfile.Update();
                    }
                    if (newList.ParentWeb != targetsite)
                        newList.ParentWeb.Dispose();
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
                finally
                {
                    if (wpmgr != null)
                    {
                        wpmgr.Web.Dispose();
                        wpmgr.Dispose();
                    }
                }
            }
        }

        private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
        {
            var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
            if (wp != null)
            {
                (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
                mgr.SaveChanges(wp);
            }
        }

    This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, the SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My DocLibExists method will now need to be changed, because a web service is not set up to look at property bags; I had to write another method on the service to do the equivalent, but with IDs instead of names. The second thing you will notice about the code is the use of the SPLimitedWebPartManager. I have seen several examples online and also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did.
    CONCLUSION

    This is a long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that the built-in web services will do the trick if you can suffer through CAML and you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12 hour trip to Iowa while I wrote it, half asleep and half awake, but hopefully it makes enough sense to someone.

    Read the article

  • Chock-full of Identity Customers at Oracle OpenWorld

    - by Tanu Sood
    Oracle OpenWorld (OOW) 2012 kicks off this coming Sunday. Oracle OpenWorld is known to bring in Oracle customers, organizations big and small, from all over the world. And Identity Management is no exception. If you are looking to catch up with Oracle Identity Management customers, hear first-hand about their implementation experiences and discuss industry trends, business drivers, solutions and more at OOW, here are some sessions we recommend you attend:

    Monday, October 1, 2012

    CON9405: Trends in Identity Management
    10:45 a.m. – 11:45 a.m., Moscone West 3003
    Subject matter experts from Kaiser Permanente and SuperValu share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management are helping customers address emerging requirements for securely enabling cloud, social and mobile environments.

    CON9492: Simplifying your Identity Management Implementation
    3:15 p.m. – 4:15 p.m., Moscone West 3008
    Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return on investment and maximum efficiency.

    CON9444: Modernized and Complete Access Management
    4:45 p.m. – 5:45 p.m., Moscone West 3008
    We have come a long way from the days when web single sign-on addressed the core business requirements. Today, as technology and business evolve, organizations are seeking new capabilities like federation, token services, fine-grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management and what a complete solution looks like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL and product demonstrations.

    Tuesday, October 2, 2012

    CON9437: Mobile Access Management
    10:15 a.m. – 11:15 a.m., Moscone West 3022
    With more than 5 billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

    CON9491: Enhancing the End-User Experience with Oracle Identity Governance Applications
    11:45 a.m. – 12:45 p.m., Moscone West 3008
    As organizations seek to encourage more and more user self service, business users are now primary end users for identity management installations. Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices.

    CON9447: Enabling Access for Hundreds of Millions of Users
    1:15 p.m. – 2:15 p.m., Moscone West 3008
    Dealing with scale problems? Looking to address identity management requirements with a million or so users in mind? Then take note of Cisco's implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance.
    CON9465: Next Generation Directory – Oracle Unified Directory
    5:00 p.m. – 6:00 p.m., Moscone West 3008
    Get the 360-degree perspective from a solution provider, an implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet delivers on scale and performance.

    Wednesday, October 3, 2012

    CON9494: Sun2Oracle: Identity Management Platform Transformation
    11:45 a.m. – 12:45 p.m., Moscone West 3008
    Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

    CON9631: Entitlement-centric Access to SOA and Cloud Services
    11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7
    How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? How do you externalize such entitlements to allow dynamic changes without having to touch the application code? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter.

    CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
    11:45 a.m. – 12:45 p.m., Moscone West 3003
    In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

    CON9493: Identity Management and the Cloud
    1:15 p.m. – 2:15 p.m., Moscone West 3008
    Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service.

    CON9624: Real-Time External Authorization for Middleware, Applications, and Databases
    3:30 p.m. – 4:30 p.m., Moscone West 3008
    As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

    CON9625: Taking Control of WebCenter Security
    5:00 p.m. – 6:00 p.m., Moscone West 3008
    Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. Leveraging LADWP's use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions.
    Thursday, October 4, 2012

    CON9662: Securing Oracle Applications with the Oracle Enterprise Identity Management Platform
    2:15 p.m. – 3:15 p.m., Moscone West 3008
    Oracle Enterprise Identity Management solutions are designed to secure access and simplify compliance for Oracle Applications. Whether you are an EBS customer looking to upgrade from Oracle Single Sign-On or a Fusion Applications customer seeking to leverage the Identity instance as an enterprise security platform, this session with Qualcomm and Oracle will help you understand how to get the most out of your investment.

    And here's the complete listing of all the Identity Management sessions at Oracle OpenWorld.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Server Security

    - by pinaldave
    This blog post is inspired by Beginning SQL Joes 2 Pros: The SQL Hands-On Guide for Beginners – SQL Exam Prep Series 70-433 – Volume 1. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza]

    This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to SQL Server Security – A Primer. In that article we discussed the various basic terminology of security. This article further covers the following important concepts of security:

    Granting Permissions
    Denying Permissions
    Revoking Permissions

    The above three are the most important concepts related to security and SQL Server. There are many more things one has to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1) If you granted Phil control to the server, but denied his ability to create databases, what would his effective permissions be?
       1. Phil can do everything.
       2. Phil can do nothing.
       3. Phil can do everything except create databases.

    2) If you granted Phil control to the server and revoked his ability to create databases, what would his effective permissions be?
       1. Phil can do everything.
       2. Phil can do nothing.
       3. Phil can do everything except create databases.

    3) You have a login named James who has Control Server permission. You want to eliminate his ability to create databases without affecting any other permissions. What SQL statement would you use?
       1. ALTER LOGIN James DISABLE
       2. DROP LOGIN James
       3. DENY CREATE DATABASE To James
       4. REVOKE CREATE DATABASE To James
       5. GRANT CREATE DATABASE To James

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer you still have a chance.

    Solution

    1) 3
    2) 1
    3) 3

    Now let us check the answers: compare your answers to the solution above. I am very confident you will get them correct.

    Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5

    Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
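    If you want to see the quiz scenarios in action, the following is a minimal T-SQL sketch you can run against a disposable test login (the login name James is just the example from the quiz; do not run this against a real account):

        USE master;
        GRANT CONTROL SERVER TO James;    -- James can now do everything on the instance
        DENY CREATE DATABASE TO James;    -- DENY wins over GRANT: everything except creating databases
        -- REVOKE only withdraws a previously issued GRANT or DENY for that specific permission;
        -- it does not override the CONTROL SERVER grant, so after the next line James can create databases again.
        REVOKE CREATE DATABASE TO James;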

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool: "Snapshot Replication can also be useful in BI environments, if you don't need a near real-time copy of the database, you can choose to use this form of replication. Next to an alternative for Transactional Replication it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that figures as source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically so I think it offers enough features here. I don't know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!"

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would cost me to do the same in SSIS. What I did was the following:

    Configure snapshot replication for some Adventure Works tables; this was quite simple and straightforward.
    Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality. While configuring the snapshot replication two SQL Agent jobs are created, one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that if you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality.

    I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    The first step is to create a publication on the database you want to replicate. Connect to SQL Server Management Studio, right-click Replication and choose New > Publication…
    The New Publication Wizard appears; click Next.
    Choose your "source" database and click Next.
    Choose Snapshot publication and click Next.
    You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    In the next screen you can add filters on the selected tables, which can be very useful. Think about selecting only the last x days of data, for example. It's possible to filter out rows and/or columns. In this example I did not apply any filters.
    Schedule the Snapshot Agent to run at a desired time; by doing this a SQL Agent job is created which we need to execute from an SSIS package later on.
    Next you need to set the security settings for the Snapshot Agent. Click on the Security Settings button. In this example I ran the Agent under the SQL Server Agent service account. This is not recommended as a security best practice.
    Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for replication services. Read it here and make sure you follow the guidelines!

    On the next screen choose to create the publication at the end of the wizard.
    Give the publication a name (SnapshotTest) and complete the wizard.
    The publication is created and the articles (tables in this case) are added.

    Now the publication is created successfully, it's time to create a new subscription for this publication.

    Expand the Replication folder in SSMS, right click Local Subscriptions and choose New Subscriptions.
    The New Subscription Wizard appears.
    Select the publisher on which you just created your publication and select the database and publication (SnapshotTest).
    You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions) it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
    Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database.
    Give the database the desired name, set the desired options and click OK.
    You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    You now need to set the security settings for the Distribution Agent. Click on the … button. Again, in this example I ran the Agent under the SQL Server Agent service account. Read the security best practices here.
    Click Next.
    Make sure you create a synchronization job schedule again. This job is also needed by the SSIS package later on.
    Initialize the subscription at first synchronization. Select the first box to create the subscription when finishing this wizard.
    Complete the wizard by clicking Finish. The subscription will be created.

    In SSMS you will see that a new database, the subscriber, has been created. There are no tables or other objects available in the database yet because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot. Rename this job to "CreateSnapshot". Now search for the job that distributes the snapshot and rename this job to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs. After the DistributeSnapshot job has started, the package needs to wait until it's finished before the package execution finishes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure. This is because these jobs are asynchronous. The SSIS package I've created does the following:

    It runs the CreateSnapshot job.
    It checks every 5 seconds if the job is completed, with a for loop.
    When the CreateSnapshot job is completed it starts the DistributeSnapshot job.
    And again it waits until the snapshot is delivered before the package will finish successfully.

    Quite simple, and the package is ready to use as a standalone extract mechanism.
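    For reference, the polling logic inside that SSIS package boils down to something like the T-SQL below. This is a minimal sketch (the kind of thing you could also run from an Execute SQL Task) rather than the exact package internals, and it assumes the two jobs were renamed to CreateSnapshot and DistributeSnapshot as described above.

        EXEC msdb.dbo.sp_start_job @job_name = N'CreateSnapshot';

        -- Wait until no active execution of the job remains, checking every 5 seconds
        WHILE EXISTS (SELECT 1
                      FROM msdb.dbo.sysjobactivity a
                      JOIN msdb.dbo.sysjobs j ON j.job_id = a.job_id
                      WHERE j.name = N'CreateSnapshot'
                        AND a.start_execution_date IS NOT NULL
                        AND a.stop_execution_date IS NULL)
            WAITFOR DELAY '00:00:05';

        EXEC msdb.dbo.sp_start_job @job_name = N'DistributeSnapshot';
        -- ...then repeat the same wait loop for DistributeSnapshot before the rest of the ETL continues.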
    After executing the package the replicated tables are added to the subscriber database and are filled with data.

    Download the SSIS package here (SSIS 2008)

    Conclusion

    In this example I only replicated 5 tables, and I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all of the 70+ AdventureWorks tables I would save a lot of time and boring work! With Replication Services you also benefit from the fact that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp utility (bulk copy) it's also quite fast, so the performance will be quite good. The main disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it would save you a lot of time, and thanks to the Extract SSIS package I've created you can fit it perfectly into your usual SSIS ETL process.

    Read the article

  • Enhance Your Browsing with Hyperwords in Firefox

    - by Asian Angel
    While browsing it is easy to find information that you would like to know more about, convert, or translate. The Hyperwords extension provides access to these types of resources and more in Firefox.

    Using Hyperwords

    Once the extension has finished installing you will be presented with a demo video that will let you learn more about how Hyperwords works. For our first example we chose to look for more information concerning "WASP-12b" using Wikipedia. Notice the small bluish circle on the lower right of the highlighted term…it is the default access point for the Hyperwords menu (access it by hovering your mouse over it). If you hover over the Wikipedia (or other) link you can access the information in a scrollable popup window, or if you prefer, click on the link to view the information in a new tab. Choose the style that best suits your needs. Hyperwords is extremely useful for quick unit conversions. Suppose you want to share a news story that you have found while browsing: highlight the title, access Hyperwords, and choose your preferred sharing source. You may need to authorize access for Hyperwords to post to your account. Once you have authorized access you can start sharing those links very easily. This is just a small sampling of Hyperwords' many useful features.

    Preferences

    Hyperwords has a nice set of preferences available to help you customize it. Alter the menu popup style, add or remove menu entries, and modify other functions for Hyperwords.

    Conclusion

    Hyperwords makes a nice addition to Firefox for anyone needing quick access to search, reference, translation, and other services while browsing.

    Links
    Download the Hyperwords extension (Mozilla Add-ons)
    Download the Hyperwords extension (Extension Homepage)

    Read the article

  • Getting started with Document Set in SharePoint2010

    - by ybbest
    Folders are widely used in traditional file-based systems, and in the SharePoint world you can create folders in a document library as well. However, there is a new, improved feature in SharePoint called the Document Set, and you can attach metadata to a document set. To get started with Document Sets, you can perform the following steps.

    1. Go to Site Settings >> Site collection features >> Activate the Document Sets feature.
    2. After the Document Sets feature is activated, you will get a new content type called Document Set.
    3. Next, we can create a custom content type called Loan Application Document Set that inherits from the Document Set content type.
    4. Then I create a new column called Application Number.
    5. Add this field to the Loan Application content type.
    6. Create a new content type called Loan Contract Form that inherits from the Document content type.
    7. Add the Application Number to the Loan Contract Form content type.
    8. Create a new content type called Loan Application Form that inherits from the Document content type and add Application Number to it (the same steps as above).
    9. Go to the Loan Application Document Set content type and go to the Document Set Settings.
    10. You can define which content types you would like this document set to contain, and you can also define the default document for each content type. When you create a new document set, those default documents will get created automatically in the document set. You can also define the shared fields that are shared across content types; in my case I defined Application Number and Description as my shared fields. Finally, you can define the fields that you'd like to show on the document set welcome page.
    11. Now create a new document library, attach those content types to the document library and create a new Loan Application document set.
    12. You will see the default documents created in the document set. If you update the Application Number on the document set, the field will get updated in the documents inside the document set as well.

    Read the article

  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently.

    "Hi Pinal, In our system, when we create new databases they are all created with the Simple Recovery Model by default. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them.
    1) Creating a Policy; when it is violated, the recovery model can be fixed
    2) Triggers at the server level
    3) An automated job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, the job will list the databases and change their recovery model
    Also, we have a situation where we need a database in the Simple Recovery Model as well – how do we white-list them? Please suggest the best method."

    Indeed, an interesting email! The answer to their question, i.e. which is the best method to fit their needs (white list, default, etc.), is NONE of the above. Here is the solution in one line, and also the easiest way: just go to your model database. Path in SSMS: Databases >> System Databases >> model >> right click >> Properties >> Options >> Recovery Model - select Full from the dropdown. Every newly created database takes its base template from the model database. If you create a custom SP in the model database, it will automatically exist in every new database you create afterwards. Any database that was already created before making changes in the model database will not be affected at all. Creating a Policy is also a good method, and I will blog about this in a separate blog post, but looking at the reader's current requirements, I think the model database should be modified to have the Full Recovery option. While writing this blog post, I remembered another of my blog posts where the model database log file was growing drastically even though there were no transactions: SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big. NOTE: Please do not touch the model database unnecessarily. It is a strict "No." If you want to create an object that you need in all databases, then instead of creating it in the model database, I suggest that you create a new database called maintenance and create the object there. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
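    If you prefer to script the same change rather than use the SSMS dialog, this minimal T-SQL sketch does the equivalent and verifies that a newly created database picks up the Full recovery model (the test database name is just an example):

        ALTER DATABASE model SET RECOVERY FULL;

        -- Confirm the model database reports FULL, then check a brand new database inherits it
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'model';
        CREATE DATABASE RecoveryModelTest;
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'RecoveryModelTest';
        DROP DATABASE RecoveryModelTest;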

    Read the article

  • Partner Blog: aurionPro SENA - Mobile Application Convenience, Flexibility & Innovation Delivered

    - by Darin Pendergraft
    About the Writer: Des Powley is Director of Product Management for aurionPro SENA Inc., the leading global Oracle Identity and Access Management specialist delivery and product development partner. In October 2012 aurionPro SENA announced the release of the Mobile IDM application, which delivers key Identity Management functions from any mobile device.

    The move towards an always-on, globally interconnected world is shifting businesses and consumers alike away from traditional PC-based enterprise application access and more and more towards an 'any device, same experience' world. It is estimated that within five years, in many developing regions of the world, the PC will be obsolete, replaced entirely by cheaper mobile and tablet devices. This will give a vast number of new entrants to the Internet their first experience of the online world, and it will only be via these newer, mobile access channels. Designed to address this shift in working and social environments and released in October 2012, the aurionPro SENA Mobile IDM application directly addresses this emerging market and requirement by enhancing administrators', consumers' and managers' Identity Management (IDM) experience, delivering a mobile application that provides rapid access to frequently used IDM services from any mobile device. Built on the aurionPro SENA Identity Service platform, the mobile application uses Oracle's Cloud, Mobile and Social capabilities and Oracle's Identity Governance Suite for its core functions. The application has been developed using standards-based APIs to ensure seamless integration with a client's on-premise IDM implementation, or equally seamlessly with the aurionPro SENA Hosted Identity Service. The solution delivers multi-platform support, including iOS, Android and BlackBerry, and provides many key features, including:

    • An easy-to-access view of all of a user's own access privileges
    • The ability for managers to approve and track requests
    • Simple raising of requests for new applications, roles and entitlements through the service catalogue

    This application has been designed and built with convenience and security in mind. We protect access to critical applications by enforcing PIN-based authentication whilst also providing the user with mobile single sign-on capability. This is just one of the many highly innovative products and services that aurionPro SENA is developing for our clients as we continually strive to enhance the value of their investment in Oracle's class-leading 11g R2 Identity and Access Management suite. The Mobile IDM application is a key component of our Identity Services Suite, which also includes Managed, Hosted and Cloud Identity Services. The Identity Services Suite has been designed and built specifically to break the barriers to delivering Enterprise, Mobile and Social Identity Management services from the Cloud. aurionPro SENA - Building next generation Identity Services for modern enterprises. To view the app please visit http://youtu.be/btNgGtKxovc For more information please contact [email protected]

    Read the article

  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you have a need to create multiple custom functoids, you may also choose to create a Custom Functoid Library - a single project containing many custom functoids. As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside. But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library?

    First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file. One of the items this resource file contains is the ID of the custom functoid. Each custom functoid needs a unique ID that is over 6000. When creating a Custom Functoid Library I would first suggest that you delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids. In this way you can easily see which custom functoid IDs are assigned to which custom functoid and which ID is next in the sequence of availability:

        namespace MyCompany.BizTalk.Functoids.TestFunctoids
        {
            class _FunctoidIDs
            {
                public const int TestFunctoid = 6001;
            }
        }

    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file.

    From:

        int functoidID;
        // This has to be a number greater than 6000
        functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
        this.ID = functoidID;

    To:

        this.ID = _FunctoidIDs.TestFunctoid;

    To create a new custom functoid you can copy the existing custom functoid, renaming the resultant class file as appropriate. Once it is renamed you will need to change the class name, ResourceName reference and base function name in the class code to those of your new custom functoid. You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match this. Assuming that you need some different functionality from your new custom functoid, you will need to check or amend the following in your functoid class file:

    Min and Max connections
    Functoid Category
    Input and Output connection types
    The parameters and functionality of the Execute function

    To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file:

    Name
    Description
    Tooltip
    Exception
    Icon

    You can change the string values by double clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image. Once you have saved this you are then ready to import it into the functoid resource file: in Visual Studio change the resource view to images, right click the icon and choose import from file.

    You have now completed your new custom functoid and created a Custom Functoid Library. You can test your new library of functoids by building the project, copying the resultant DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then resetting the toolbox in Visual Studio.

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they should handle sending IRM-secured documents to external parties. Their concern is that when they use IRM to secure sensitive information they need to share outside their business, third parties may be unable to install the software that enables them to gain access to the information. It is a very legitimate question and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments. The operating system does not provide the required level of content security The problem arises from what IRM delivers: persistent security for your sensitive information wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are: Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications. Protection against programmatic access to the document. Office documents and PDF documents can be accessed by other applications and scripts; with Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program. Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc.). This process must be secure so that someone cannot simply get access to the decrypted information. The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want to have very strong security and access control over a document you are going to send to someone who is beyond your network infrastructure, there must be some software to provide that functionality. Simple installation with Oracle IRM The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12 MB in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer. In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or by using Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine. You need to rely on them to download and install it themselves. Again we've made every effort for this manual install process to be as simple as we can. From the small download size of the software itself to the simple installation process, most end users are able to install and access sealed content very quickly. 
You can see for yourself how easily this is done by walking through our free and easy self service demonstration of using sealed content. How to handle objections and ensure there is value However the fact still remains that end users may object to installing, or may simply be unable to install the software themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First and I would say of most importance, is the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response will be, "why on earth are you asking me to download some software just to access your menu!?". A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled and a quick download and install of some software is a small task in comparison to dealing with the loss of this information. Be clear in communicating this value So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try and avoid the issue, you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straight forward. One customer I worked with, 6 months after the initial deployment of Oracle IRM, called me panicking that the partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and had decided to use IRM to control access to engineering documents." and if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their approved list of software in the company. Companies are successful when sending sealed content to third parties We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content and therefore 99% of their users are external to their business, one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter. 
Pretty much every company that deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed due to accessing content from another company. The future In summary I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us rich enough security functionality that no software installation is needed. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • ColdFusion Search with Excel Sheet creation

    - by user71602
    I created a search form with two buttons - one to show the results on the web page and one to create an Excel sheet. Problem: when a user conducts a search and views the results on the web page, the user currently has to select the criteria again and then click the create-Excel button to create the sheet. How can I make it so that, after conducting a search, the user can just click one button to create the Excel sheet? (The form uses "Post".)
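    Not knowing the exact form layout, one common approach (sketched below with hypothetical field, button and datasource names) is to keep a single form with two submit buttons and have the action page branch on which button was posted, running the same query for both paths so the user never has to re-enter the criteria:

        <!--- action page: the same posted criteria serve both outputs --->
        <cfif structKeyExists(form, "btnExcel")>
            <!--- tell the browser this response is an Excel sheet --->
            <cfheader name="Content-Disposition" value="attachment; filename=results.xls">
            <cfcontent type="application/vnd.ms-excel">
        </cfif>

        <cfquery name="searchResults" datasource="myDSN">
            SELECT  name, city, state
            FROM    customers
            WHERE   state = <cfqueryparam value="#form.state#" cfsqltype="cf_sql_varchar">
        </cfquery>

        <!--- render the same table either way; Excel opens a plain HTML table --->
        <table>
        <cfoutput query="searchResults">
            <tr><td>#name#</td><td>#city#</td><td>#state#</td></tr>
        </cfoutput>
        </table>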

    Read the article

  • SQL SERVER – QUOTED_IDENTIFIER ON/OFF Explanation and Example – Question on Real World Usage

    - by Pinal Dave
    This is a follow-up to the blog post SQL SERVER – QUOTED_IDENTIFIER ON/OFF and ANSI_NULL ON/OFF Explanation. I wrote that blog six years ago and had planned to write a follow-up post on the same topic. Today, when I was going over my to-do list, I was surprised to find an item there that was six years old and that I had never gotten to. In the earlier blog post I wrote about the use of Quoted Identifier and ANSI Null. In this blog post we will see a quick example of Quoted Identifier. However, before we continue, let us refresh what QUOTED_IDENTIFIER does. QUOTED IDENTIFIER ON/OFF This option specifies the setting for the use of double quotes. When this is ON, a double quotation mark delimits a SQL Server identifier (object name). This can be useful in situations in which identifiers are also SQL Server reserved words. In simple words, when we have QUOTED_IDENTIFIER ON, anything which is wrapped in double quotes becomes an object. E.g. -- The following will work SET QUOTED_IDENTIFIER ON GO CREATE DATABASE "Test1" GO -- The following will throw an error about Incorrect syntax near 'Test2'. SET QUOTED_IDENTIFIER OFF GO CREATE DATABASE "Test2" GO This feature is particularly helpful when we are working with reserved keywords in SQL Server. For example, if you have to create a database with the name VARCHAR or INT or DATABASE, you may want to put double quotes around your database name and turn on quoted identifiers to create a database with such a name. Personally, I do not think anybody will ever create a database with reserved keywords intentionally, as it will just lead to confusion. Here is another example to give you further clarity about how the QUOTED_IDENTIFIER setting works with a SELECT statement. -- The following will throw an error about Invalid column name 'Column'. SET QUOTED_IDENTIFIER ON GO SELECT "Column" GO -- The following will work SET QUOTED_IDENTIFIER OFF GO SELECT "Column" GO Personally, I always use the following method to create a database, as it works irrespective of the quoted identifier status and always creates objects with the name I want: CREATE DATABASE [Test3] I believe the QUOTED_IDENTIFIER setting is useful in the real world when we have a script generated from another database where this setting was ON and we now have to execute the same script in our environment. Question to you - I personally have never used this feature, as I mentioned earlier. I believe this feature is there to support scripts which are generated in another SQL database or to generate scripts for another database. Do you have a real-world scenario where we need to turn Quoted Identifiers on or off? Click to Download Scripts Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
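    To illustrate the script-generation scenario mentioned above: scripting tools typically pin the setting explicitly at the top of every generated object script, so the script behaves the same way wherever it is replayed. A typical generated header looks something like this (the table and column names are placeholders):

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[OrderHeader]
        (
            [OrderID]   INT         NOT NULL,
            [Reference] VARCHAR(50) NULL
        )
        GO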

    Read the article

  • Building an MVC application using QuickBooks

    - by dataintegration
    RSSBus ADO.NET Providers can be used from many tools and IDEs. In this article we show how to connect to QuickBooks from an MVC3 project using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process can be used with any of our ADO.NET Providers. Creating the Model Step 1: Download and install the QuickBooks Data Provider from RSSBus. Step 2: Create a new MVC3 project in Visual Studio. Add a data model to the Models folder using the ADO.NET Entity Data Model wizard. Step 3: Create a new RSSBus QuickBooks Data Source by clicking "New Connection", specify the connection string options, and click Next. Step 4: Select all the tables and views you need, and click Finish to create the data model. Step 5: Right-click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'. Creating the Controller and the Views Step 6: Add a new controller to the Controllers folder. Give it a meaningful name, such as ReceivePaymentsController. Also, make sure the template selected is 'Controller with empty read/write actions'. Before adding new methods to the Controller, create views for your model. We will add the List, Create, and Delete views. Step 7: Right-click on the Views folder and go to Add -> View. Here, create a new view for each of the List, Create, and Delete templates. Make sure to also associate your Model with the new views. Step 10: Now that the views are ready, go back and edit the ReceivePayments controller. Update your code to handle the Index, Create, and Delete methods, as sketched below. Sample Project We are including a sample project that shows how to use the QuickBooks Data Provider in an MVC3 application. You may download the C# project here or download the VB.NET project here. You will also need to install the QuickBooks ADO.NET Data Provider to run the demo. You can download a free trial here. To use this demo, you will also need to modify the connection string in the 'web.config'.
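    As a rough guide to Step 10, the finished actions tend to look something like the following. The context class (QuickBooksEntities) and entity names (ReceivePayment / ReceivePayments) are assumptions standing in for whatever the DbContext Generator produced from your model:

        using System.Linq;
        using System.Web.Mvc;

        public class ReceivePaymentsController : Controller
        {
            // The generated DbContext class name is an assumption - use the one from your model.
            private readonly QuickBooksEntities db = new QuickBooksEntities();

            // GET: /ReceivePayments/  -> feeds the List (Index) view
            public ActionResult Index()
            {
                return View(db.ReceivePayments.ToList());
            }

            // GET: /ReceivePayments/Create
            public ActionResult Create()
            {
                return View();
            }

            // POST: /ReceivePayments/Create
            [HttpPost]
            public ActionResult Create(ReceivePayment payment)
            {
                if (!ModelState.IsValid) return View(payment);
                db.ReceivePayments.Add(payment);
                db.SaveChanges();
                return RedirectToAction("Index");
            }

            // POST: /ReceivePayments/Delete/5
            [HttpPost, ActionName("Delete")]
            public ActionResult DeleteConfirmed(string id)
            {
                var payment = db.ReceivePayments.Find(id);
                if (payment != null)
                {
                    db.ReceivePayments.Remove(payment);
                    db.SaveChanges();
                }
                return RedirectToAction("Index");
            }
        }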

    Read the article

  • What types of objects are useful in SQL CLR?

    - by Greg Low
    I've had a number of people over the years ask about whether or not a particular type of object is a good candidate for SQL CLR integration. The rules that I normally apply are as follows:
    Scalar UDF - Transact-SQL: generally poor performance. Managed code: good option when there is limited or no data access.
    Table-valued UDF - Transact-SQL: good option if data-related. Managed code: good option when there is limited or no data access.
    Stored Procedure - Transact-SQL: good option. Managed code: good option when external access is required or there is limited data access.
    DML...(read more)
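    To make the "limited or no data access" case concrete, here is a minimal sketch of a compute-only CLR scalar UDF - the kind of workload where managed code usually beats a T-SQL scalar UDF. The class and function names are illustrative:

        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public partial class UserDefinedFunctions
        {
            // No data access, deterministic, pure computation.
            [SqlFunction(DataAccess = DataAccessKind.None, IsDeterministic = true)]
            public static SqlString ReverseString(SqlString input)
            {
                if (input.IsNull) return SqlString.Null;

                char[] chars = input.Value.ToCharArray();
                Array.Reverse(chars);
                return new SqlString(new string(chars));
            }
        }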

    Read the article

  • Using Variables Within Crystal Report Formulas

    This article demonstrates how to create formulas in a Crystal Report and use the Crystal scripting language to create variables, use built-in functions, perform conditional logic, and manipulate dates. After a brief introduction, the article provides the steps required to create the database, the website, and the report, including how to add fields and formulas to the report. Near the end, the article examines the steps required to create formulas with variables.
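    As a taste of what the article covers, a formula in Crystal syntax that combines a variable, built-in date functions and conditional logic might look like the sketch below (the {Orders.OrderDate} field is a placeholder for a field in your own report):

        // Crystal syntax: declare a local variable, use built-in date functions,
        // branch on a condition, and return the variable as the formula result.
        Local StringVar label := "";

        If Year({Orders.OrderDate}) = Year(CurrentDate) Then
            label := "Current year order"
        Else
            label := "Placed " & ToText(Year({Orders.OrderDate}), 0);

        label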

    Read the article

  • nginx 404 logs to all virtualhosting logs

    - by Dr.D
    I am using nginx and I have two sites running: site1 = /1/access.log, site2 = /2/access.log. When a user gets a 404 (image not found) on site2, nginx writes the not-found error to the access.log of both site1 and site2. I have tried everything to get separate logs, without luck. I want everything that happens on site1 logged to /1/access.log and everything that happens on site2 logged to /2/access.log. Any help?
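    A likely cause is that the failing requests are not matching site2's server block (wrong or missing server_name, or the request falls through to the first/default server), so site1's vhost ends up logging them as well. A minimal sketch with strictly per-vhost logs, using placeholder names and paths, looks like this:

        server {
            listen      80;
            server_name site1.example.com;
            root        /var/www/site1;
            access_log  /1/access.log;
            error_log   /1/error.log;
        }

        server {
            listen      80;
            server_name site2.example.com;
            root        /var/www/site2;
            access_log  /2/access.log;
            error_log   /2/error.log;
        }

    If a request arrives with a Host header that matches neither server_name, nginx hands it to the default server (the first block listed, or the one marked default_server), whose access_log then records it - which would explain 404s from site2 URLs showing up in site1's log.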

    Read the article

  • How do I set up an IP address on a Linux VM running in VM Player so I can access it from my Windows 7 host?

    - by BradyKelly
    I have just installed an Openbravo appliance on my Windows 7 VM Player host. I am now staring at a command prompt that tells me to go to http://localhost to access the ERP system, but I cannot find any browser on the appliance. I am guessing I should rather follow their advice to configure an IP address for the Linux VM and just access that from a Windows browser on my host. How do I go about this? More specifically: How do I choose a local IP address to assign? How do I set things up so that this IP address is visible to my Windows host? Their help says to assign a DNS to make the server visible to the internet, but internet visibility per se is not needed. How should I interpret or adapt this help for that? Finally, to make the IP address available to the Internet, assign some DNS servers to it: $ echo "nameserver IP_DNS1" >> /etc/resolv.conf $ echo "nameserver IP_DNS2" >> /etc/resolv.conf
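    Not knowing the exact distribution inside the appliance, a typical sequence on a RHEL/CentOS-style guest with an eth0 interface would be: in VM Player set the VM's network adapter to Bridged (address from your LAN) or NAT/host-only (address on a private VMware subnet), then inside the guest either take a DHCP lease or pin a static address. The interface name, addresses and file path below are assumptions:

        # Take a DHCP lease and note the address shown under "inet addr":
        dhclient eth0
        ifconfig eth0

        # ...or pin a static address on the VMware host-only/NAT subnet:
        cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
        DEVICE=eth0
        BOOTPROTO=static
        IPADDR=192.168.111.10
        NETMASK=255.255.255.0
        ONBOOT=yes
        EOF
        service network restart

        # From the Windows host, browse to http://192.168.111.10

    The resolv.conf/DNS step in their help only matters for name resolution and public visibility; for host-to-guest access by IP address it can be skipped.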

    Read the article

  • Centos 5.5 [Read-only file system] issue after rebooting

    - by canu johann
    I have a virtual server under centos 5.5 (hosted by a japanese company called sakura ) Since yesterday, connection through ssh couldn't be established. I've contacted support center who told me to restart VS from the control panel. After restarting, I got the message below Connected to domain wwwxxxxxx.sakura.ne.jp Escape character is ^] [ OK ] Setting hostname localhost.localdomain: [ OK ] Setting up Logical Volume Management: No volume groups found [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3 / contains a file system with errors, check forced. /: Inodes that were part of a corrupted orphan linked list found. /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. (i.e., without -a or -p options) @@cat: /proc/self/attr/current: Invalid argument Welcome to CentOS Starting udev: @[ OK ] Setting hostname localhost.localdomain: [ OK ] Setting up Logical Volume Management: No volume groups found [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/vda3 / contains a file system with errors, check forced. /: Inodes that were part of a corrupted orphan linked list found. /: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. (i.e., without -a or -p options) [FAILED] *** An error occurred during the file system check. *** Dropping you to a shell; the system will reboot *** when you leave the shell. *** Warning -- SELinux is active *** Disabling security enforcement for system recovery. *** Run 'setenforce 1' to reenable. /etc/rc.d/rc.sysinit: line 53: /selinux/enforce: Read-only file system Give root password for maintenance (or type Control-D to continue): bash: cannot set terminal process group (-1): Inappropriate ioctl for device bash: no job control in this shell bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system bash: cannot create temp file for here-document: Read-only file system (Repair filesystem) 1 # setenforce 1 setenforce: SELinux is disabled (Repair filesystem) 2 # echo 1 (Repair filesystem) 4 # /etc/init.d/sshd status openssh-daemon is stopped (Repair filesystem) 5 # /etc/init.d/sshd start Starting sshd: NET: Registered protocol family 10 lo: Disabled Privacy Extensions touch: cannot touch `/var/lock/subsys/sshd': Read-only file system (Repair filesystem) 6 # sudo /etc/init.d/sshd start sudo: sorry, you must have a tty to run sudo (Repair filesystem) 7 # I have 4 site in production and I need to restart the server quickly (SSH + HTTPD ,...). Thank you for your time.
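    A typical recovery from this state (a sketch only; take a snapshot or backup of the VPS first if the control panel allows it) is to run the file-system check the boot process asked for from the "(Repair filesystem)" prompt and then reboot; remounting root read-write is only needed if you have to edit files before rebooting:

        # At the (Repair filesystem) prompt:
        fsck.ext4 -y /dev/vda3      # answer "yes" to the repairs fsck proposes
        reboot

        # Only if you need to change files before rebooting:
        mount -o remount,rw /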

    Read the article

  • Sharepoint : send alert when field is empty : bug ?

    - by mathieu
    Is it possible to send an email alert when a field of a list is empty ? I've tried the following : Create a custom list, add a field named "TestField" Create a personal view named "TestView", filter : Show when column "TestField" is equal to "" (leave the box empty) Create an alert, immediate email when items appearing in "TestView" are modified Create an item with both fields filled Create an item with only title filled Now you should receive two alert emails, but in the view "TestView" there is only one item. Is it a bug ?

    Read the article

  • Insert Error:CREATE DATABASE permission denied in database 'master'. cannot attach the file

    - by user1300580
    i have a register page on my website I am creating and it saves the data entered by the user into a database however when I click the register button i am coming across the following error: Insert Error:CREATE DATABASE permission denied in database 'master'. Cannot attach the file 'C:\Users\MyName\Documents\MyName\Docs\Project\SJ\App_Data\SJ-Database.mdf' as database 'SJ-Database'. These are my connection strings: <connectionStrings> <add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient"/> <add name="MyConsString" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|SJ-Database.mdf; Initial Catalog=SJ-Database; Integrated Security=SSPI;" providerName="System.Data.SqlClient" /> </connectionStrings> Register page code: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data; using System.Data.SqlClient; public partial class About : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } public string GetConnectionString() { //sets the connection string from your web config file "ConnString" is the name of your Connection String return System.Configuration.ConfigurationManager.ConnectionStrings["MyConsString"].ConnectionString; } private void ExecuteInsert(string name, string gender, string age, string address, string email) { SqlConnection conn = new SqlConnection(GetConnectionString()); string sql = "INSERT INTO Register (Name, Gender, Age, Address, Email) VALUES " + " (@Name,@Gender,@Age,@Address,@Email)"; try { conn.Open(); SqlCommand cmd = new SqlCommand(sql, conn); SqlParameter[] param = new SqlParameter[6]; //param[0] = new SqlParameter("@id", SqlDbType.Int, 20); param[0] = new SqlParameter("@Name", SqlDbType.VarChar, 50); param[1] = new SqlParameter("@Gender", SqlDbType.Char, 10); param[2] = new SqlParameter("@Age", SqlDbType.Int, 100); param[3] = new SqlParameter("@Address", SqlDbType.VarChar, 50); param[4] = new SqlParameter("@Email", SqlDbType.VarChar, 50); param[0].Value = name; param[1].Value = gender; param[2].Value = age; param[3].Value = address; param[4].Value = email; for (int i = 0; i < param.Length; i++) { cmd.Parameters.Add(param[i]); } cmd.CommandType = CommandType.Text; cmd.ExecuteNonQuery(); } catch (System.Data.SqlClient.SqlException ex) { string msg = "Insert Error:"; msg += ex.Message; throw new Exception(msg); } finally { conn.Close(); } } protected void cmdRegister_Click(object sender, EventArgs e) { if (txtRegEmail.Text == txtRegEmailCon.Text) { //call the method to execute insert to the database ExecuteInsert(txtRegName.Text, txtRegAge.Text, ddlRegGender.SelectedItem.Text, txtRegAddress.Text, txtRegEmail.Text); Response.Write("Record was successfully added!"); ClearControls(Page); } else { Response.Write("Email did not match"); txtRegEmail.Focus(); } } public static void ClearControls(Control Parent) { if (Parent is TextBox) { (Parent as TextBox).Text = string.Empty; } else { foreach (Control c in Parent.Controls) ClearControls(c); } } }
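    This error usually means the connection string is asking SQL Server Express to attach the .mdf on the fly without a user instance, under an account that is not allowed to create databases. Two common alternative shapes for the connection string are sketched below; the file and database names are taken from the question, the rest is an assumption to adapt to your setup:

        <!-- Option 1: attach the .mdf as a user instance (note the backslash after |DataDirectory|) -->
        <add name="MyConsString"
             connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\SJ-Database.mdf;Integrated Security=True;User Instance=True"
             providerName="System.Data.SqlClient" />

        <!-- Option 2: attach SJ-Database once in SQL Server (e.g. via Management Studio) and reference it by name only -->
        <add name="MyConsString"
             connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=SJ-Database;Integrated Security=True"
             providerName="System.Data.SqlClient" />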

    Read the article

  • ask help: I need to create a demo application in Java to test my new designed Java Api

    - by Christophe
    A new programmer needs yourhelp. I'm working on a porject of re-developement of Java driver for the company's PIN pad terminals. the Java Api (CPXApplicationUpdate) will allow the applications in pinpads to be updated and to be downloaded at different speed (Baud rate). the Java Api was created. i followed the protocolto build the message. the message is to send to a RS-232 port. i'm trying to use setter and getter to let the code work as an API. import java.io.InputStream; import java.io.OutputStream; import java.util.Properties; public class CPXApplicationUpdate extends CPXCommand { private int speed; // TODO: temporary variable for baud rate test stub. // speed: baud rate of the maintenance application performing DL /** Creates a new instance of CPXApplicationUpdate */ public CPXApplicationUpdate() { speed = 9600; // no baud rate change CPXProcessor.logger.fine("-------CPXApplicationUpdate constructor"); // setParam("timeout", _cmdInfo.getDefaultParValue("timeout")); } public CPXApplicationUpdate(int speedinit) { speed = speedinit; // TODO: where to get the speed? wait for user input. CPXProcessor.logger.fine("-------CPXApplicationUpdate constructor"); // setParam("timeout", _cmdInfo.getDefaultParValue("timeout")); } public void setSpeed(int speed){ this.speed = speed; } protected void buildRequest() throws ElitePortException { String trans = ""; // build the full-qualified message trans = addToRequest(trans, (char) 0); trans = addToRequest(trans, (char) 5); trans = addToRequest(trans, (char) 6); trans = addToRequest(trans, (char) 0); trans = addToRequest(trans, (char) 0); trans = addToRequest(trans, (char) 2); switch (speed) { case 9600: trans = addToRequest(trans, (char) 0x09); break; case 19200: trans = addToRequest(trans, (char) 0x0A); break; case 115200: trans = addToRequest(trans, (char) 0x0E); break; default: // TODO: unexpected baud rate. throw(); break; } trans = EncryptBinary(trans); trans = "F0." + trans; wrapRequest(trans); } protected String addToRequest(String req, char c) { CPXProcessor.logger.fine("adding char to request: I-" + (int) c + " C-" + c + " H-" + Integer.toHexString((int) c)); return req + c; } protected String addToRequest(String req, String s) { CPXProcessor.logger.fine("adding String to request: S-" + s); return req + s; } protected String addToRequest(String req) { return req; } public void analyzeResponse() { String response_transaction, response; int absLen = _response.length(); if (absLen < 4) return; response = _response.substring(3); CPXProcessor.logger.fine("stripped response=[" + response + "]"); for (int i = 0; i < response.length(); i++) { char c = response.charAt(i); CPXProcessor.logger.fine("[" + i + "] = " + c + " <> " + Integer.toHexString((int) c)); } int status = (short) response.charAt(3); CPXProcessor.logger.fine("status = " + status); _outputValues.put("status", "" + status); } please help me to correct the code. Now, i need to create a demo application to test if this java driver (Java Api) works. the value of the speed can be input by users (command line), or creat property files. how can i do that?
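    A minimal command-line harness along these lines might work for the test. The class name, properties file name and package placement are assumptions - buildRequest() is protected, so the harness needs to live in the same package as CPXApplicationUpdate:

        import java.io.FileInputStream;
        import java.util.Properties;

        public class CPXApplicationUpdateDemo {

            public static void main(String[] args) throws Exception {
                int speed = 9600;                      // default: no baud-rate change

                if (args.length > 0) {
                    // e.g.  java CPXApplicationUpdateDemo 115200
                    speed = Integer.parseInt(args[0]);
                } else {
                    // ...or read it from a properties file, e.g. demo.properties containing "speed=19200"
                    Properties props = new Properties();
                    FileInputStream in = new FileInputStream("demo.properties");
                    try {
                        props.load(in);
                    } finally {
                        in.close();
                    }
                    speed = Integer.parseInt(props.getProperty("speed", "9600"));
                }

                CPXApplicationUpdate cmd = new CPXApplicationUpdate(speed);
                cmd.buildRequest();   // throws ElitePortException if the request cannot be built
                System.out.println("Request built for baud rate " + speed);
                // Next step: send the request to the RS-232 port and feed the reply to analyzeResponse().
            }
        }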

    Read the article

  • C# IOException: The process cannot access the file because it is being used by another process.

    - by Michiel Bester
    Hi, I have a slight problem. What my application is supose to do, is to watch a folder for any newly copied file with the extention '.XSD' open the file and assign the lines to an array. After that the data from the array should be inserted into a MySQL database, then move the used file to another folder if it's done. The problem is that the application works fine with the first file, but as soon as the next file is copied to the folder I get this exception for example: 'The process cannot access the file 'C:\inetpub\admission\file2.XPD' because it is being used by another process'. If two files on the onther hand is copied at the same time there's no problem at all. The following code is on the main window: public partial class Form1 : Form { static string folder = specified path; static FileProcessor processor; public Form1() { InitializeComponent(); processor = new FileProcessor(); InitializeWatcher(); } static FileSystemWatcher watcher; static void InitializeWatcher() { watcher = new FileSystemWatcher(); watcher.Path = folder; watcher.Created += new FileSystemEventHandler(watcher_Created); watcher.EnableRaisingEvents = true; watcher.Filter = "*.XPD"; } static void watcher_Created(object sender, FileSystemEventArgs e) { processor.QueueInput(e.FullPath); } } As you can see the file's path is entered into a queue for processing which is on another class called FileProcessor: class FileProcessor { private Queue<string> workQueue; private Thread workerThread; private EventWaitHandle waitHandle; public FileProcessor() { workQueue = new Queue<string>(); waitHandle = new AutoResetEvent(true); } public void QueueInput(string filepath) { workQueue.Enqueue(filepath); if (workerThread == null) { workerThread = new Thread(new ThreadStart(Work)); workerThread.Start(); } else if (workerThread.ThreadState == ThreadState.WaitSleepJoin) { waitHandle.Set(); } } private void Work() { while (true) { string filepath = RetrieveFile(); if (filepath != null) ProcessFile(filepath); else waitHandle.WaitOne(); } } private string RetrieveFile() { if (workQueue.Count > 0) return workQueue.Dequeue(); else return null; } private void ProcessFile(string filepath) { string xName = Path.GetFileName(filepath); string fName = Path.GetFileNameWithoutExtension(filepath); string gfolder = specified path; bool fileInUse = true; string line; string[] itemArray = null; int i = 0; #region Declare Db variables //variables for each field of the database is created here #endregion #region Populate array while (fileInUse == true) { FileStream fs = new FileStream(filepath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite); StreamReader reader = new StreamReader(fs); itemArray = new string[75]; while (!reader.EndOfStream == true) { line = reader.ReadLine(); itemArray[i] = line; i++; } fs.Flush(); reader.Close(); reader.Dispose(); i = 0; fileInUse = false; } #endregion #region Assign Db variables //here all the variables get there values from the array #endregion #region MySql Connection //here the connection to mysql is made and the variables are inserted into the db #endregion #region Test and Move file if (System.IO.File.Exists(gfolder + xName)) { System.IO.File.Delete(gfolder + xName); } Directory.Move(filepath, gfolder + xName); #endregion } } The problem I get occurs in the Populate array region. I read alot of other threads and was lead to believe that by flushing the file stream would help... 
I am also thinking of adding a try..catch so that if processing the file was successful, the file is moved to gfolder, and if it failed, it is moved to bfolder. Any help would be awesome. Tx
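    One commonly suggested fix (a sketch, not drop-in code) is to treat the Created event as a hint only: the copying process often still holds the file when the event fires, so retry opening it until the lock is released before reading:

        using System;
        using System.IO;
        using System.Threading;

        static class FileReady
        {
            // Returns a readable stream once the copying process has released the file.
            public static FileStream WaitForFile(string path, int maxAttempts, int delayMs)
            {
                for (int attempt = 0; attempt < maxAttempts; attempt++)
                {
                    try
                    {
                        // FileShare.None fails while any other process still has the file open,
                        // which is exactly the signal we want to wait out.
                        return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None);
                    }
                    catch (IOException)
                    {
                        Thread.Sleep(delayMs);   // still locked - wait and try again
                    }
                }
                throw new IOException("File '" + path + "' is still locked after " + maxAttempts + " attempts.");
            }
        }

        // Usage inside ProcessFile, replacing the direct FileStream construction:
        //   using (FileStream fs = FileReady.WaitForFile(filepath, 20, 500))
        //   using (StreamReader reader = new StreamReader(fs))
        //   {
        //       // read lines as before
        //   }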

    Read the article

  • Can i create different observables and different corresponding observers in java?

    - by mithun1538
    Hello everyone, Currently, I have one observable and many observers. What i need is different observables, and depending on the observable, different observers. How do I achieve this? ( For understanding, assume I have different apples - say apple1 apple2... I have observer_1 observing apple1, observer_2 observing apple2, observer_3 observing apple 2 and so on..). I tried creating different objects of the Observable class, but since observers are observing the same class of observable, I don't know how to access a particular instance of the Observable. I have included the following servlet code that contains Observer and Observable classes: public class CustomerServlet extends HttpServlet { public String getNextMessage() { // Create a message sink to wait for a new message from the // message source. return new MessageSink().getNextMessage(source); } @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { ObjectOutputStream dout = new ObjectOutputStream(response.getOutputStream()); String recMSG = getNextMessage(); dout.writeObject(recMSG); dout.flush(); } public void broadcastMessage(String message) { // Send the message to all the HTTP-connected clients by giving the // message to the message source source.sendMessage(message); } @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { try { ObjectInputStream din= new ObjectInputStream(request.getInputStream()); String message = (String)din.readObject(); ObjectOutputStream dout = new ObjectOutputStream(response.getOutputStream()); dout.writeObject("1"); dout.flush(); if (message != null) { broadcastMessage(message); } // Set the status code to indicate there will be no response response.setStatus(response.SC_NO_CONTENT); } catch (Exception e) { e.printStackTrace(); } } @Override public String getServletInfo() { return "Short description"; }// </editor-fold> MessageSource source = new MessageSource(); } class MessageSource extends Observable { public void sendMessage(String message) { setChanged(); notifyObservers(message); } } class MessageSource extends Observable { public void sendMessage(String message) { setChanged(); notifyObservers(message); } } class MessageSink implements Observer { String message = null; // set by update() and read by getNextMessage() // Called by the message source when it gets a new message synchronized public void update(Observable o, Object arg) { // Get the new message message = (String)arg; // Wake up our waiting thread notify(); } // Gets the next message sent out from the message source synchronized public String getNextMessage(MessageSource source) { // Tell source we want to be told about new messages source.addObserver(this); // Wait until our update() method receives a message while (message == null) { try { wait(); } catch (Exception e) { System.out.println("Exception has occured! ERR ERR ERR"); } } // Tell source to stop telling us about new messages source.deleteObserver(this); // Now return the message we received // But first set the message instance variable to null // so update() and getNextMessage() can be called again. String messageCopy = message; message = null; return messageCopy; } }
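    On the apples example specifically, a minimal sketch (names are illustrative) is to create one Observable instance per apple and register each observer on the instance it cares about; the update(Observable o, Object arg) callback also receives the firing instance, so a single observer can tell its observables apart:

        import java.util.Observable;
        import java.util.Observer;

        class Apple extends Observable {
            private final String name;
            Apple(String name) { this.name = name; }
            String getName() { return name; }
            void ripen(String note) {
                setChanged();
                notifyObservers(note);   // only observers registered on *this* instance are notified
            }
        }

        class AppleWatcher implements Observer {
            private final String id;
            AppleWatcher(String id) { this.id = id; }
            public void update(Observable o, Object arg) {
                // 'o' identifies which observable fired, so one observer could even watch several apples
                System.out.println(id + " saw " + ((Apple) o).getName() + ": " + arg);
            }
        }

        public class AppleDemo {
            public static void main(String[] args) {
                Apple apple1 = new Apple("apple1");
                Apple apple2 = new Apple("apple2");
                apple1.addObserver(new AppleWatcher("observer_1"));
                apple2.addObserver(new AppleWatcher("observer_2"));
                apple2.addObserver(new AppleWatcher("observer_3"));
                apple1.ripen("turning red");    // only observer_1 fires
                apple2.ripen("ready to pick");  // observer_2 and observer_3 fire
            }
        }

    In the servlet code above, the same idea would mean keeping several MessageSource instances (one per topic) and having each MessageSink call addObserver only on the source it is interested in.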

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]); UPDATE 1: I implemented Matts suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact, that I am running Kibana on port 81 and elasticsearch on 8181? OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.46.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.46.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache This is the response HTTP/1.1 401 Authorization Required Date: Fri, 08 Nov 2013 23:47:02 GMT WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 UPDATE 2: Updated all instances with the modified headers in these Kibana files root@localhost:/var/www/kibana# grep -r 'ejsResource(' . ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); And modified my vhost conf for the reverse proxy like this <VirtualHost *:8181> ProxyRequests Off ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/cake2.2.4/.htpasswd Require valid-user Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT" Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization" Header always set Access-Control-Allow-Credentials "true" Header always set Cache-Control "max-age=0" Header always set Access-Control-Allow-Origin * </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> Apache sends back the new response headers but the request header still seems to be wrong somewhere. Authentication just doesn't work. 
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic but it appears that in order to solve my problem, it would be necessary to to make some very granular configurations on apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution. Just modify the vhost reverse proxy config to move the elastisearch server AND kibana on the same http port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>

    Read the article

< Previous Page | 442 443 444 445 446 447 448 449 450 451 452 453  | Next Page >