Search Results

Search found 15731 results on 630 pages for 'showplan xml'.


  • Introduction to WebCenter Personalization: “The Conductor”

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor". The Conductor: This ain't just an ordinary cloud... One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets. Introducing the Scenario Conductor: Scenarios are declarative, offline-authored documents created using the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one (or more) statements that can: create variables that are scoped to the current execution context; iterate over collections, or loop until a specific condition is met; execute one or more statements when a condition is met; invoke other scenarios that exist within the same namespace; and invoke a data provider that integrates with custom applications. Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested on the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization. Various Client-side Models: The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models: REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax. Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services. Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration. Extensible Provider Framework: The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor: Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to serve as utilities. Some common uses would include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops.
For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL Functions, Function Providers are based on the same concept. Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as access to a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and the WebCenter Activity Graph. Useful References: If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way: Personalizing WebCenter Applications, Authoring Personalized Scenarios in JDeveloper, Using Personalization APIs Externally, Implementing and Calling Function Providers, and Implementing and Calling Data Providers.
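
    To make the Function Provider shape more concrete, here is a minimal sketch of the kind of plain Java utility class the Conductor could call through EL. The class and method names are hypothetical, and the WCP-specific registration annotations are deliberately omitted because their exact names are not covered above; see the Oracle references listed for the real packaging details.

      // Sketch only: a Function Provider-style utility class with static methods.
      // The WCP registration annotations are omitted (their names are an assumption
      // not given in the post); only the static-method shape described above is shown.
      public class MyUtilityClass {

          // Callable from a Scenario via EL, e.g. ${myUtilityClass:doStuff(input, count)}
          public static String doStuff(String input, int count) {
              // Simple data transformation: repeat the input the requested number of times.
              StringBuilder result = new StringBuilder();
              for (int i = 0; i < count; i++) {
                  result.append(input);
              }
              return result.toString();
          }
      }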

    Read the article

  • Web Services Example - Part 1: Declarative

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 1 of our Web Service examples. In this posting we'll take a look at using a declarative SOAP Web Service. Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Defining our Web Service: First off, we should mention that this sample code is using a public web service provided free by CDYNE Corporation that provides weather forecasts by zipcode. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. Let's take a look at the web service. We created this by using the "Web Service Data Control" from the New Gallery and this WSDL: "http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL" This web service has several methods, but we're interested in GetCityForecastByZIP, which takes a single string parameter for the zipcode, and the second method, GetWeatherInformation, which enumerates all possible forecast descriptions and associated image URLs. The latter we'll use in the next edition, but we included it here for completeness. Defining the Application: After adding a feature to the adfmf-feature.xml file, we added a taskflow to host the application flow. This comprises a home screen with a list containing an item for each method in the web service, "Forecast by Zip" and "Weather Info". In this application we've also decided to hide the navigation bar since there is only one feature in the application. Forecast by Zip: The "Forecast By ZIP" option first presents the user with a screen where they can enter a zipcode, and when the "Search" button is tapped, it executes the GetCityForecastByZIP method. This is done by binding an Action binding to that method. The easiest way to accomplish this is to just drag & drop the method from the Data Control palette to the AMX page, drop it as a button, and let the framework hook it up for you. There is an inputText component on the page that is bound to a pageFlowScope variable called "zip". This is used as the parameter to the Action binding when it is executed. Because the actionListener attribute of the commandButton executes the web service each time, we ensure that the method is invoked every time the button is clicked. Weather Info: Unlike the previous method, this time instead of explicitly executing the web service method we are using deferred invocation. What this means is that we will bind to the results of the method and the framework will execute the method when the data is required for rendering. We do this by simply doing a drag & drop of the GetWeatherInformation results onto the AMX page. When the page is rendered and the bindings are resolved, the framework invokes the method. This executes the method only when it is needed and fills the Data Control provider. Because we never re-execute the method, you can click from Home to Weather Info and back many times and the web service is only ever invoked once. Issues and Possible Improvements: One thing you will quickly realize with this example is that the error handling is done by the framework for you. For simple examples this is fine, but for real applications you'll want to customize these error messages. With the declarative invocation of web services, this is difficult.
This is one aspect we'll address in the second installment of the web service examples where we will show you how to do programmatic invocation which allows you better error handling. Another issue you will notice with this example is that we can enumerate the weather information but there isn't an easy way to use that information to show the corresponding description and image as part of the forecast results.  We'll show you how to do this in the next example.

    Read the article

  • Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1

    - by mbcrump
    Introduction: Now that the dust has settled and everyone has downloaded and installed Visual Studio 2010 Service Pack 1, it's time to talk about a new feature included that will help Silverlight developers profile their applications. Let's take a look at what the official documentation says about it: Performance Wizard for Silverlight – taken from the VS2010 SP1 KB. Visual Studio 2010 SP1 enables you to tune Silverlight application performance by profiling the code. A traditional code profiler cannot tune the rendering performance for Silverlight applications. Many higher-level profilers are added to Visual Studio 2010 SP1 so that you can better determine which parts of the application consume time. So, how do you do it? After you finish installing VS2010 SP1, make sure it took by going to Help –> About. You should see SP1Rel under Visual Studio 2010 as shown below. Now that we have verified you are on the most current release, let's load up a Silverlight application. I'm going to take my hobby Silverlight project that I created a month or so ago. The reason that I'm picking this project is that I didn't focus so much on performance, as it was just built for fun and to see what I could do with Silverlight. I believe this makes it the perfect application to profile. After the project is loaded, click on Analyze, then Launch Performance Wizard. Go ahead and click on CPU Sampling (recommended). You will notice that it asks which application to target. By default, it will select the .Web project in a Silverlight application. Go ahead and leave the default Web project checked. We are going to leave the client as Internet Explorer. Now, go ahead and click Finish. Your Silverlight application will launch. While your application is running, you will see the following inside of Visual Studio 2010. Here is where you will need to attach your Silverlight application to the web application that is currently being profiled. Simply click on the Attach/Detach button below and find your application to attach to the profiler. In my case, I am using IE8 and could find it by the title. After you close your browser, you will notice it generated a report. These files end with a .VSP extension. If you click on the .VSP file, you will see the report it generated. We could turn off "Just My Code", but it may pick up things that we didn't want to profile, as shown below. One other feature to note is that you may want to export the data to a CSV or XML file. You can do that by looking at the toolbar and clicking the button highlighted below. Conclusion: The profiler for Silverlight is a great addition to an already great product. So before you ship a Silverlight application, run it through the profiler and see what comes up. Since it's included and free, I can't see a reason not to do this. Thanks again for reading, and I hope you subscribe to my blog or follow me on Twitter for more Silverlight/WP7 fun.

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question, feel free to skip down for the specific question Hello, I've been very interested in the idea of abstract programming the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. I've undertook some attempts at this that have taken upwards of a year, while getting close, never releasing it beyond my compiler. This has been something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course you noob! You can't account for everything!" To which I have to reply, "Why not?" To give you some background into what I'm talking about, this all started with doing maybe a shade of gray hat SEO software. I found myself constantly having to create similar, but slightly different sets of code. I've gone through as many iterations of way to communicate on http as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library, and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CRTL+A keyboard shortcut, mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code. as time went on, and I could subversion stuff out, and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new http method, or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to a quite a few questions regarding its execution. I have to have some parameters to what I am going to do. After writing what the perfect software would do in my mind, I came up with this as a list of requirements: Should be able to use networking A "Macro" or "Expression system" which would allow people to do something like : =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google)) Multithreaded Able to add UI elements through some type of XML -- People can make their own addons etc. Can use third party API through the plugins, such as Microsoft CRM, Exchange, etc. This would allow the software to essentially be used for everything. Really, any task you wish to automate, in a simple way. Making the UI was as also extremely hard. How do you do all of this? Its very difficult. So my question: With so many attempts at this, I'm out of ideas how to successfully complete this. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software. What can I do to generalize this and draw everything out to completion. These are things I struggle with. P.s., I'm using c# as my main language. 
I feel like in this example, I might be hitting the outer limit of the language, although, I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time.

    Read the article

  • Welcome to BlogEngine.NET 2.9 using Microsoft SQL Server

    If you see this post it means that BlogEngine.NET 2.9 is running and the hard part of creating your own blog is done. There are only a few things left to do. Write Permissions: To be able to log in to the blog and write posts, you need to enable write permissions on the App_Data folder. If your blog is hosted at a hosting provider, you can either log into your account's admin page or call support. You need write permissions on the App_Data folder because all posts, comments, and blog attachments are saved as XML files and placed in the App_Data folder. If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or other databases, please see the BlogEngine wiki to get started. Security: When you've got write permissions to the App_Data folder, you need to change the username and password. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change the username and password. Passwords are hashed by default, so if you lose your password, please see the BlogEngine wiki for information on recovery. Configuration and Profile: Now that you have your blog secured, take a look through the settings and give your new blog a title. BlogEngine.NET 2.9 is set up to take full advantage of many semantic formats and technologies such as FOAF, SIOC and APML. It means that the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable. Be sure to fill in your author profile to take better advantage of this. Themes, Widgets & Extensions: One last thing to consider is customizing the look of your blog. We have a few themes available right out of the box, including two fully set up to use our new widget framework. The widget framework allows drag and drop placement on your sidebar as well as editing and configuration right in the widget while you are logged in. Extensions allow you to extend and customize the behavior of your blog. Be sure to check the BlogEngine.NET Gallery at dnbegallery.org as the go-to location for downloading widgets, themes and extensions. On the web: You can find BlogEngine.NET on the official website. Here you'll find tutorials, documentation, tips and tricks and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex, where the daily builds will be published for anyone to download. Again, new themes, widgets and extensions can be downloaded at the BlogEngine.NET gallery. Good luck and happy writing. The BlogEngine.NET team

    Read the article

  • Admin Panel like Custom Framework

    - by bhuvin
    I want to Create a Framework , like Admin panel , which can rule almost all the aspects of what is shown on the frontend. For an (most basic) example: If suppose the links which are to be shown in a navigation area is passed from the server, with the order and the url , etc. The whole aim is to save the time on the tedious tasks. You can just start creating menus and start assigning pages to it. Give a url, actual files which are to be rendered (in case of static files.), in case of dynamic files, giving the file accordingly. And all this is fully server side manageable using different portlets, sort of things. So basic Roadmap is having : Areas like: Header Area - Which can contain logos, links etc. Navigation Area - Which can contains links and submenus. Content Area - Now this is where the tricky part is that that it has zones like: left, center & right. It contains Order in which it has to be displayed. So, when someday we want to change the way the articles appear on the page, we can do so easily, without any deployments. Now these zones can have n number of internal elements, like the word cloud, or the advertisement area. Footer Area: Again similar as Header Area. Currently there is a preexisting custom framework, which uses XSLT files for pulling out data from the server side. And it has the above capabilities. For example: If there's a grid it will be having a <table> tag embedded in the XSLT file. Now whatever might be the source of the data, we serialize this as XML and give it to the XSLT file and the html is derived from this and is appended to the layer in a page. The problem with this approach is: The XSLT conversion is occurring on the server side, so the server is responsible for getting the data, running XSLT transform, and append the html generated to the layer div. So, according to me, firstly this isn't the server's concern to do so. Secondly for larger applications this might be slower. Debugging isn't possible for XSLT transformation. So, whenever we face problems with data its always a bit of a trial & error method. Maintaining it is a bit of an eerie job i.e. styling changes, and other stuff. Adding dynamic values. Like JavaScript can't actually be very easily used in this. Secondly, we can't use JQuery or any other libraries with this since this is all occurring on the server. For now what I have thought about is using Templating - Javascript - JSON combination in place of XSLT, this will be offloaded to the client and the rendering will take place accordingly. This could solve the above problems and also could add mobile support for the same. Only problem which I could think of is that: It is much work and adding new portlets on the go needs to be looked into. What could be the alternatives for this? What kind of problems are there with the JavaScript approach? What are the different ways to implement the same? Are there any existing frameworks for similar usage?

    Read the article

  • What's new in Servlet 3.1 ? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease of use. The idea was to leverage the latest language features, such as annotations and generics, and modernize how Servlets can be written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below: Non-blocking I/O - Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted. This can restrict the scalability of your applications. Non-blocking I/O allows you to build scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1. HTTP protocol upgrade mechanism - Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change are entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the newly chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method, HttpServletRequest.upgrade, and two new interfaces: javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (the Reference Implementation for the Java API for WebSocket). Security enhancements - Applying run-as security roles to #init and #destroy methods; protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener (you can listen for any session id changes using these); a default security semantic for a non-specified HTTP method in <security-constraint>; and clarified semantics when a parameter is specified in both the URI and the payload. Miscellaneous - ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers; in addition, Servlet 3.1 also clears the state of calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding sets the character encoding (MIME charset) of the response being sent to the client, for example, to UTF-8. A relative protocol URL can be specified in HttpServletResponse.sendRedirect; this allows a URL to be specified without a scheme, meaning that instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified, in which case the scheme of the corresponding request will be used. There are also clarifications for HttpServletRequest.getPart and .getParts without a multipart configuration, and a clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application. A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can send your feedback to [email protected].
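
    To make the non-blocking I/O feature above concrete, here is a minimal sketch of a Servlet 3.1 servlet that reads a request body asynchronously with a ReadListener. The servlet class name and URL pattern are invented for the example; the calls themselves (startAsync, setReadListener, isReady, isFinished) are the standard Servlet 3.1 API described above.

      import java.io.IOException;
      import javax.servlet.AsyncContext;
      import javax.servlet.ReadListener;
      import javax.servlet.ServletInputStream;
      import javax.servlet.annotation.WebServlet;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // Hypothetical servlet name and URL pattern, used only for illustration.
      @WebServlet(urlPatterns = "/upload", asyncSupported = true)
      public class NonBlockingReadServlet extends HttpServlet {

          @Override
          protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              final AsyncContext ac = req.startAsync();          // switch the request to async mode
              final ServletInputStream in = req.getInputStream();

              in.setReadListener(new ReadListener() {
                  @Override
                  public void onDataAvailable() throws IOException {
                      byte[] buffer = new byte[1024];
                      // Read only while data can be consumed without blocking.
                      while (in.isReady() && !in.isFinished()) {
                          int len = in.read(buffer);
                          // ... process "len" bytes of the request body here ...
                      }
                  }

                  @Override
                  public void onAllDataRead() throws IOException {
                      ac.getResponse().getWriter().write("done");
                      ac.complete();                              // finish the async cycle
                  }

                  @Override
                  public void onError(Throwable t) {
                      ac.complete();
                  }
              });
          }
      }
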
Here are some more references for you: Servlet 3.1 Public Review Candidate Downloads Servlet 3.1 PR Candidate Spec Servlet 3.1 PR Candidate Javadocs Servlet Specification Project JSR Expert Group Discussion Archive Java EE 7 Specification Status Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them ? Here are some other Java EE 7 primers published so far: Concurrency Utilities for Java EE (JSR 236) Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189) Non-blocking I/O using Servlet 3.1 (TOTD #188) What's New in EJB 3.2 ? JPA 2.1 Schema Generation (TOTD #187) WebSocket Applications using Java (JSR 356) Jersey 2 in GlassFish 4 (TOTD #182) WebSocket and Java EE 7 (TOTD #181) Java API for JSON Processing (JSR 353) JMS 2.0 Early Draft (JSR 343) And of course, more on their way! Do you want to see any particular one first ?

    Read the article

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing about property overwrite behaviour because i found it confusing at first in the hope of preventing some learning pain for the uninitiated with MSBuild :-)The confusion for me came because of the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What i mean is that the following two statements are always identical in behaviour, regardless if the property has been supplied on the command line -  <PropertyGroup>    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>  </PropertyGroup>has the same behaviour regardless of command line override as -  <PropertyGroup>     <PropA>PropA set at project level</PropA>   </PropertyGroup>  i.e. the two above property declarations have the same result whether the property is overridden on the command line or not.To prove this experiment with the following .proj file -<?xml version="1.0" encoding="utf-8"?><Project ToolsVersion="4.0" >  <PropertyGroup>    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>  </PropertyGroup>  <Target Name="Target1">    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target2">    <PropertyGroup>      <PropA>PropA set in Target2</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target3">    <PropertyGroup>      <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target4">    <PropertyGroup>      <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target></Project>Try invoking it using both of the following invocations and observe its output -1)>msbuild blog.proj /t:Target1;Target2;Target3;Target42)>msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"Then try those two invocations with the following three variations of specifying PropA at the project level -1)  <PropertyGroup>     <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>   </PropertyGroup> 2)   <PropertyGroup>     <PropA>PropA set at project level</PropA>   </PropertyGroup>3)  <PropertyGroup>     <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>   </PropertyGroup>

    Read the article

  • Identity in .NET 4.5&ndash;Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions. ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll(). ClaimsPrincipal has those methods as well. But they work across all contained identities. Nice! ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting necessary anymore. SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT. A new session security token handler that uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios. No need for a custom service host factory or the federation behavior anymore. WCF can be switched into “WIF mode” with the useIdentityConfiguration switch (odd name though). Tooling has become better and the new test STS makes it very easy to get started. On the other hand – and that was kind of expected – to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered. Assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well. No IClaimsPrincipal and IClaimsIdentity anymore. Configuration section has been split into <system.identityModel /> and <system.identityModel.services />. WCF configuration story has changed as well. Claim.ClaimType is now Claim.Type. ClaimCollection is now IEnumerable<Claim>. IsSessionMode is now IsReferenceMode. Bootstrap token handling is different now. ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here). Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()). SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>. Some lower level helper classes are gone or internal now (e.g. KeyGenerator). The WCF WS-Trust bindings are gone. I think this is a pity. They were *really* useful when doing work with WSTrustChannelFactory. Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you want to make the move.

    Read the article

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging-related topics, so why break this habit, I ask? Anyway, here's an issue which came up recently that I thought was a good one to mention in a brief post. The scenario really applies to production applications where you are seeing entries in the log files which are harmless; you know why they are there and are happy to ignore them, but at the same time you either can't or don't want to risk changing the deployed code to "fix" it and remove the underlying cause. (I'm not judging here.) The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively "let a message through" or suppress it. This is the technique outlined below. First Create Your Filter: You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface and basically defines one method, isLoggable(), which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record of type java.util.logging.LogRecord, which provides you with access to everything you need to decide if you want to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST: public class MyLoggingFilter implements Filter { public boolean isLoggable(LogRecord record) { if (!record.getLevel().equals(Level.FINEST) && record.getMessage().startsWith("DEBUG")) { return false; } return true; } } Deploying: This code needs to be put into a JAR and added to your WebLogic classpath. It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS, of course. Using: The final piece is to actually assign the filter. The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case. <logger name="some.vendor.adf.ClassICantChange" filter="oracle.demo.MyLoggingFilter"/> You can also apply the filter using WLST if you want a more script-y solution.
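
    As a side note, for quick experiments outside of WebLogic's logging.xml you can also attach a filter to a logger programmatically; Logger.setFilter() is standard java.util.logging API. The logger name below is just the hypothetical one from the example above, and this sketch applies the same rule inline rather than through the deployed MyLoggingFilter class.

      import java.util.logging.Filter;
      import java.util.logging.Level;
      import java.util.logging.LogRecord;
      import java.util.logging.Logger;

      public class FilterDemo {
          public static void main(String[] args) {
              // Same rule as MyLoggingFilter: suppress "DEBUG..." messages unless the level is FINEST.
              Filter debugFilter = new Filter() {
                  public boolean isLoggable(LogRecord record) {
                      return record.getLevel().equals(Level.FINEST)
                              || !record.getMessage().startsWith("DEBUG");
                  }
              };

              // "some.vendor.adf.ClassICantChange" is the hypothetical logger name used above.
              Logger logger = Logger.getLogger("some.vendor.adf.ClassICantChange");
              logger.setFilter(debugFilter);

              logger.info("DEBUG noisy message");   // suppressed by the filter
              logger.info("something useful");      // passes through to the handlers
          }
      }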

    Read the article

  • Disabling the right-click sub menu using JQuery

    - by nikolaosk
    Recently I needed to disable the right-click contextual menu in an HTML page for a very simple HTML application I was creating for a friend.This is going to be a short post where I will demonstrate how to disable the right-click contextual menu.I will use the very popular JQuery Library. Please download the library (minified version) from http://jquery.com/downloadPlease find here all my posts regarding JQuery.In this hands-on example I will be using Expression Web 4.0.This application is not a free application. You can use any HTML editor you like.You can use Visual Studio 2012 Express edition. You can download it here. I am going to create a very simple HTML 5 page with some text and an image. The HTML markup for the page follows. <!DOCTYPE html><html lang="en">  <head>    <title>HTML 5, CSS3 and JQuery</title>        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >    <link rel="stylesheet" type="text/css" href="style.css">     <script type="text/javascript" src="jquery-1.8.2.min.js">        </script><script type="text/javascript"> (function ($) { $(document).bind('contextmenu', function () { return false;}); })(jQuery); </script>       </head>  <body>      <div id="header">      <h1>Learn cutting edge technologies</h1>      <h2>HTML 5, JQuery, CSS3</h2>    </div>      <figure>  <img src="html5.png" alt="HTML 5"></figure>        <div id="main">          <h2>HTML 5</h2>                        <article>          <p>            HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.          </p>          </article>      </div>             </body>  </html> This is the JQuery code, I use (function ($) { $(document).bind('contextmenu', function () { return false;}); })(jQuery); I simply disable/cancel the contextmenu event.When I load the simple page on the browser and I right-click the context menu does not appear.Hope it helps!!!

    Read the article

  • Xenserver 5.6 SR_BACKEND_FAILURE_47 no such volume group, but it is there

    - by Juan Carlos
    I've looked everywhere (Google, here, a bunch of other sites), and while I have found people with similar problems, I couldn't find a single one with a solution to this. Last night our xenserver 5.6 box corrupted the /var/xapi/state.db, and I couldn't fix the xml, no matter what I did. After a good hour fiddling with the file, I figured it would be faster to just reinstall. The server had one 2tb hard drive running Xen and its VMs, and since Xen's install said it would erase the hard drive it was installed on, I plugged a new harddrive and installed Xen on it, without selecting any hard drives for storage. I Figured I could make it happen after install, using the partition on the old harddrive with all my VMs on it. After instalation finished and the system booted I did: #fdisk -l found the old partition at /dev/sda3 #ll /dev/disk/by-id found the partition at /dev/disk/by-id/scsi-3600188b04c02f100181ab3a48417e490-part3 #xe host-list uuid ( RO) : a019d93e-4d84-4a4b-91e3-23572b5bd8a4 name-label ( RW): xenserver-scribfourteen name-description ( RW): Default install of XenServer #pvscan PV /dev/sda3 VG VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d lvm2 [1.81 TB / 204.85 GB free] Total: 1 [1.81 TB] / in use: 1 [1.81 TB] / in no VG: 0 [0 ] #vgscan Reading all physical volumes. This may take a while... Found volume group "VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d" using metadata type lvm2 # pvdisplay --- Physical volume --- PV Name /dev/sda3 VG Name VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d PV Size 1.81 TB / not usable 6.97 MB Allocatable yes PE Size (KByte) 4096 Total PE 474747 Free PE 52441 Allocated PE 422306 PV UUID U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW # xe sr-introduce name-label="VMs" type=lvm uuid=U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW name-description="VMs Local HD Storage" content-type=user shared=false device-config=:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a483f9f0ae-part3 U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW # xe pbd-create host-uuid=a019d93e-4d84-4a4b-91e3-23572b5bd8a4 sr-uuid=U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW device-config:device=/dev/disk/by-id/scsi-3600188b04c02f100181ab3a483f9f0ae-part3 adf92b7f-ad40-828f-0728-caf94d2a0ba1 # xe pbd-plug uuid=adf92b7f-ad40-828f-0728-caf94d2a0ba1 Error code: SR_BACKEND_FAILURE_47 Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW] At this point I did a # vgrename VG_XenStorage-405a2ece-d10e-d6c5-ede2-e1ad2c29c68d VG_XenStorage-U03Gt9-WtHi-8Nnu-QB2Q-c7BV-CO9A-cFpYWW cause the VG name was different, but pdb-plug still gives me the same error. So, now I'm kinda lost about what to do, I'm not used to Xen and most sites I've been finding are really unhelpful. I hope someone can guide me in the right way to fix this. I cant lose those VMs (got backups, but from inside the guests, not the VMs themselves).

    Read the article

  • Task Scheduler permissions error for some jobs

    - by MaseBase
    I have recently moved to a 64-bit Windows Server 2008 R2. I setup my Scheduled Tasks to run under one user (TaskUser) specifically created for the scheduler and most run just fine. However some of them do not run under TaskUser but will for my own credentials. Here is the Event Log entry I found, which from my research points me to believe that it doesn't have permissions, but it does. It also has the option "Run with highest privileges" checked on. I have seen this particular checkbox work wonders on some tasks, but I have a number of them that it's not helping for. The error is ERROR_ELEVATION_REQUIRED but the user is a member of the administrators group and has folder/file permission and is set to "Run with highest privileges" Log Name: Microsoft-Windows-UAC/Operational Source: Microsoft-Windows-UAC Date: 4/27/2010 2:21:44 PM Event ID: 1 Task Category: (1) Level: Error Keywords: User: LIVE\TaskUser Computer: www2 Description: The process failed to handle ERROR_ELEVATION_REQUIRED during the creation of a child process. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-UAC" Guid="{E7558269-3FA5-46ED-9F4D-3C6E282DDE55}" /> <EventID>1</EventID> <Version>0</Version> <Level>2</Level> <Task>1</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000000</Keywords> <TimeCreated SystemTime="2010-04-27T21:21:44.407053800Z" /> <EventRecordID>19</EventRecordID> <Correlation /> <Execution ProcessID="2460" ThreadID="5960" /> <Channel>Microsoft-Windows-UAC/Operational</Channel> <Computer>www2</Computer> <Security UserID="S-1-5-21-4017510424-2083581016-1307463562-1640" /> </System> <EventData></EventData> </Event> The errors shown in the Task Scheduler History tab display these results and states This operation requires an interactive window station. (0x800705B3) EventID 103 Task Scheduler failed to launch action "F:\App\Path\ConsoleApp.exe" in instance "{1a6d3450-b85a-40c0-b3db-72b98c1aa395}" of task "\taskFolder\taskName". Additional Data: Error Value: 2147943859. EventID 203 Task Scheduler failed to start instance "{1a6d3450-b85a-40c0-b3db-72b98c1aa395}" of "\taskFolder\taskName" task for user "LIVE\TaskUser" . Additional Data: Error Value: 2147943859.

    Read the article

  • Update php 5.2.0 to 5.2.4 with aptitude

    - by Kiva
    Hi guy, I would like to update my php 5 in my server. At this moment, I use php 5.2.0 so I want to update it to php 5.2.4 (not php 5.3). I tried to do this: aptitude update aptitude upgrade 63 packets were updated but not php which is always in 5.0 How can I update my php please ? Here is the output of commands asked by David in another post: aptitude search php5 p libapache-mod-php5 - server-side, HTML-embedded scripting langu i A libapache2-mod-php5 - server-side, HTML-embedded scripting langu i php5 - server-side, HTML-embedded scripting langu p php5-apache2-mod-bt - PHP bindings for mod_bt p php5-auth-pam - A PHP5 extension for PAM authentication i php5-cgi - server-side, HTML-embedded scripting langu p php5-clamavlib - PHP ClamAV Lib - ClamAV Interface for PHP5 p php5-cli - command-line interpreter for the php5 scri i A php5-common - Common files for packages built from the p i php5-curl - CURL module for php5 p php5-dev - Files for PHP5 module development i A php5-gd - GD module for php5 p php5-idn - PHP api for the IDNA library p php5-imagick - ImageMagick module for php5 p php5-imap - IMAP module for php5 p php5-interbase - interbase/firebird module for php5 p php5-json - JSON serialiser for PHP5 p php5-ldap - LDAP module for php5 p php5-mapscript - module for php5-cgi to use mapserver p php5-maxdb - PHP extension to access MaxDB databases fo i A php5-mcrypt - MCrypt module for php5 p php5-memcache - memcache extension module for PHP5 p php5-mhash - MHASH module for php5 p php5-ming - Ming module for php5 i A php5-mysql - MySQL module for php5 p php5-odbc - ODBC module for php5 p php5-pgsql - PostgreSQL module for php5 p php5-ps - ps module for PHP 5 p php5-pspell - pspell module for php5 p php5-radius - PECL radius module for PHP 5 p php5-recode - recode module for php5 p php5-snmp - SNMP module for php5 p php5-sqlite - SQLite module for php5 p php5-sqlite3 - SQLite3 module for php5 p php5-sqlrelay - SQL Relay PHP API p php5-suhosin - advanced protection module for php5 p php5-sybase - Sybase / MS SQL Server module for php5 p php5-tidy - tidy module for php5 p php5-uuid - OSSP uuid module for php5 p php5-xapian - Xapian search engine interface for PHP5 p php5-xcache - Fast, stable PHP opcode cacher p php5-xmlrpc - XML-RPC module for php5 p php5-xsl - XSL module for php5 aptitude show php5 | grep Version Version : 5.2.0-8+etch13 aptitude show php5-cgi | grep Version Version : 5.2.0-8+etch13 php5 --version -bash: php5: command not found php-cgi --version PHP 5.2.0-8+etch13 (cgi-fcgi) (built: Oct 2 2008 08:21:17) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies

    Read the article

  • Uwsgi starts from root but not as a service

    - by vittore
    I have nginx + uwsgi setup for flask website. thats my nginx server { listen 80; server_name _; location /static/ { alias /var/www/site/app/static/; } location / { uwsgi_pass 127.0.0.1:5080; include uwsgi_params; } } And here is my uwsgi config.xml <uwsgi> <socket>127.0.0.1:5080</socket> <autoload/> <daemonize>/var/log/uwsgi_webapp.log</daemonize> <pythonpath>/var/www/site/</pythonpath> <module>run:app</module> <plugins>python27</plugins> <virtualenv>/var/www/venv/</virtualenv> <processes>1</processes> <enable-threads/> <master /> <harakiri>60</harakiri> <max-requests>2000</max-requests> <limit-as>512</limit-as> <reload-on-as>256</reload-on-as> <reload-on-rss>192</reload-on-rss> <no-orphans/> <vacuum/> </uwsgi> When I trying to start uwsgi service (service uwsgi start) it says ok but there is no uwsgi process and I see the following in the log: *** Starting uWSGI 1.0.3-debian (64bit) on [Fri Oct 25 00:43:13 2013] *** compiled with version: 4.6.3 on 17 July 2012 02:26:54 current working directory: / writing pidfile to /run/uwsgi/app/gsk/pid detected binary path: /usr/bin/uwsgi-core setgid() to 33 setuid() to 33 limiting address space of processes... your process address space limit is 536870912 bytes (512 MB) your memory page size is 4096 bytes *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers *** uwsgi socket 0 bound to TCP address 127.0.0.1:5080 fd 6 bind(): Permission denied [socket.c line 107] However when I start uwsgi as a root uwsgi --socket 127.0.0.1:5080 --module run --callab app --harakiri 15 --harakiri-verbose --logto2 tmp/uwsgi.log It starts just fine and after restarting nginx I can access website. What can be an issue ?

    Read the article

  • IIS 7.5 Error 1007

    - by darkdog
    I have a Windows 2008 R2 Virtual Server. I removed the "Default Web Site" using IIS Manager and now all my other sites now report an error saying they can't access the site (or something similar). I thought I could just remove the role, re-install IIS again and start with a clean slate. After I re-installed IIS it's now reporting the following error in the event log: The World Wide Web Publishing Service (WWW service) failed to register the URL prefix of "http://www.mash-guild.com:80:192.168.245.132/" for the website of "4". The required network connectivity may already be used. The site has been disabled. The data field contains the error number. This is the full version with the german error note (I'm from germany): Der WWW-Publishingdienst (WWW-Dienst) konnte das URL-Präfix "http://www.mash-guild.com:80:192.168.245.132/" für die Website "4" nicht registrieren. Die erforderliche Netzwerkverbindung wird möglicherweise bereits verwendet. Die Website wurde deaktiviert. Das Datenfeld enthält die Fehlernummer. Ereignis-XML: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-IIS-W3SVC" Guid="{05448E22-93DE-4A7A-BBA5-92E27486A8BE}" EventSourceName="W3SVC" /> <EventID Qualifiers="49152">1007</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2011-03-26T16:14:56.000000000Z" /> <EventRecordID>1435</EventRecordID> <Correlation /> <Execution ProcessID="0" ThreadID="0" /> <Channel>System</Channel> <Computer>WIN-DCJ8SN0QI5J</Computer> <Security /> </System> <EventData> <Data Name="UrlPrefix">http://www.mash-guild.com:80:192.168.245.132/</Data> <Data Name="SiteID">4</Data> <Binary>B7000780</Binary> </EventData> </Event> I would like to know if there is a way to fix this or just get back to the IIS + Server settings I had just after I installed windows 2008 ?

    Read the article

  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle virtual box). Each of them has Cloudera with Hadoop (distribution c3u4) installed and serves as a node of Hadoop cluster. One of those 4 nodes is master node running namenode and jobtracker, others are slave nodes. Normally I use this cluster from local network for testing. However when I try to access it from another network I cannot send any jobs to it. The computer running Hadoop cluster has public IP and can be reached over internet for another services. For example I am able to get HDFS (namenode) administration site and map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from remote network. Also it is possible to use Hue. Ports 8020 and 8021 are both allowed. What is blocking my map/reduce job submits from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely? Here is my mapred-site.xml file: <configuration> <property> <name>mapred.job.tracker</name> <value>master:8021</value> </property> <!-- Enable Hue plugins --> <property> <name>mapred.jobtracker.plugins</name> <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value> <description>Comma-separated list of jobtracker plug-ins to be activated. </description> </property> <property> <name>jobtracker.thrift.address</name> <value>0.0.0.0:9290</value> </property> </configuration> And this is in /etc/hosts file: 192.168.1.15 master 192.168.1.14 slave1 192.168.1.13 slave2 192.168.1.9 slave3
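
    For reference, here is a rough sketch of what a remote client's job submission can look like with the CDH3-era (Hadoop 0.20) API: the client's Configuration has to point at namenode and jobtracker addresses that are reachable from the remote network, not the internal "master" hostname from the cluster's /etc/hosts. The hostname and paths below are placeholders. Also note that submission needs more than ports 8020 and 8021: the client writes the job jar into HDFS, so it must be able to reach the datanode ports as well, which is a common reason remote submission fails even though the web UIs on 50070/50030 are reachable.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.mapreduce.Job;
      import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
      import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

      public class RemoteSubmit {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              // Placeholder public address of the cluster; it must be resolvable and
              // reachable from the remote client, unlike the internal "master" hostname.
              conf.set("fs.default.name", "hdfs://cluster.example.com:8020");
              conf.set("mapred.job.tracker", "cluster.example.com:8021");

              Job job = new Job(conf, "remote-submit-test");
              job.setJarByClass(RemoteSubmit.class);
              // Mapper/reducer left as the identity defaults; paths are placeholders.
              FileInputFormat.addInputPath(job, new Path("/user/drasto/input"));
              FileOutputFormat.setOutputPath(job, new Path("/user/drasto/output"));

              System.exit(job.waitForCompletion(true) ? 0 : 1);
          }
      }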

    Read the article

  • Windows 7 + Deep Freeze - I'm stuck in an endless reboot loop

    - by myermian
    I have the following setup: Windows 7 Ultimate Deep Freeze I "thawed" my machine last night and performed a Windows Update. The update is having issues (it gets stuck at 32%, fails, and restarts my machine). When it reboots it attempts it again, and again, and again, etc. (Endless loop). I looked online and found some solutions, but none of them seem to be working: When I run Safe Mode, Safe Mode w/ Network, or Safe Mode w/ Command Prompt it attempts to revert the Windows Update changes. However, the problem is with Deep Freeze on (and now in "Frozen" mode) the reverted changes don't stay, and I'm back into the loop of death. Oh, and side note: "Safe Mode w/ Command Prompt" does not actually take me to a command prompt window? Perhaps because it is attempting to complete the Windows Update changes first? I have tried to select the option to NOT restart when an windows error occurs, but it still does. I tried the remainder of all the other options in the F8 screen. The only other option left is to find my Windows 7 Media Disc (I can't find it right now) and use it to repair windows (because for some reason the repair option does not show up in the F8 screen). Is there a way to disable Deep Freeze from loading? When I selected "Safe Mode w/ Command Prompt" I noticed that it loads the DpFrz.sys file. I know that when I'm in the Windows Boot Manager if I press F10 instead of F8 (while highlighting Windows 7) it takes me to an "Edit Boot Options" screen: Edit Windows boot options for: Windows 7 Path: \Windows\system32\winload.exe Partition: 2 Hard Disk: 8e90e329 [ /NOEXECUTE=OPTIN (I CAN EDIT THIS LINE) ] Update: I found my Windows 7 Media Disk and it did not help out. The laptop had the "System Restore" as a partition on the HDD. I later received (in the mail) a Windows 7 Upgrade Disc from Sony to upgrade my system from Windows Vista to Windows 7 Ultimate. I placed the disc into the DVD drive and it does not come up as a "bootable" disc. I'm going to try to find an alternative disc to see if I can get into Command Prompt. Update 2: I got a Windows Repair disc and got into a command prompt window. I got into the registry and disabled Deep Freeze. Also: I renamed the Pending.xml file to Pending.old I cleared out the Windows Temp directory I still am stuck in the loop (though, it isn't an issue with DeepFreeze anymore because I can make changes to the hard drive and they persist). Not sure what to do at this point? Update 3: I ran the repair option and it couldn't repair, but it did point me to something. It says the error was due to a driver that was failing. I have a feeling it is my UPEK Fingerprint scanner.

    Read the article

  • Internet Explorer 10 (Metro App) on Windows 8 Pro (RTM) crash

    - by ferpaz
    Internet Explorer 10 (Metro App) on Windows 8 Pro (RTM) does not start and crash with this error: Log Name: Application Source: Application Error Date: 27/08/2012 19:21:29 Event ID: 1000 Task Category: (100) Level: Error Keywords: Classic User: N/A Computer: DELL-OPE3.red.aseinfo.com.sv Description: Faulting application name: iexplore.exe, version: 10.0.9200.16384, time stamp: 0x50107ebe Faulting module name: iertutil.dll, version: 10.0.9200.16384, time stamp: 0x50109c90 Exception code: 0xc0000005 Fault offset: 0x0000000000172f0b Faulting process id: 0xadc Faulting application start time: 0x01cd84bb737cfa16 Faulting application path: C:\Program Files\Internet Explorer\iexplore.exe Faulting module path: C:\WINDOWS\system32\iertutil.dll Report Id: b1597df3-f0ae-11e1-be78-88532e15da73 Faulting package full name: Faulting package-relative application ID: Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Application Error" /> <EventID Qualifiers="0">1000</EventID> <Level>2</Level> <Task>100</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2012-08-28T01:21:29.000000000Z" /> <EventRecordID>7612</EventRecordID> <Channel>Application</Channel> <Computer>DELL-OPE3.red.aseinfo.com.sv</Computer> <Security /> </System> <EventData> <Data>iexplore.exe</Data> <Data>10.0.9200.16384</Data> <Data>50107ebe</Data> <Data>iertutil.dll</Data> <Data>10.0.9200.16384</Data> <Data>50109c90</Data> <Data>c0000005</Data> <Data>0000000000172f0b</Data> <Data>adc</Data> <Data>01cd84bb737cfa16</Data> <Data>C:\Program Files\Internet Explorer\iexplore.exe</Data> <Data>C:\WINDOWS\system32\iertutil.dll</Data> <Data>b1597df3-f0ae-11e1-be78-88532e15da73</Data> <Data> </Data> <Data> </Data> </EventData> </Event> Any suggestions?

    Read the article

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using django, nginx and apache. When I access my site with a URL (e.g., http://www.foo.com/) what appears in my browser address is the IP address with admin appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by django's urls.py (e.g., http://123.45.67.890/ - http://123.45.67.890/accounts/login/?next=/) I would like to have the name URL act the same way as the IP. That is, if the URL goes to a new view, the host in the browser address should remain the same and not change to the IP address. Where should I be looking to fix this? My files: ; cpa.com (apache) NameVirtualHost *:8080 <VirtualHost *:8080> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm DocumentRoot /path/to/root ServerName www.foo.com <IfModule mod_rpaf.c> RPAFenable On RPAFsethostname On RPAFproxy_ips 127.0.0.1 </IfModule> <Directory /public/static> AllowOverride None AddHandler mod_python .py PythonHandler mod_python.publisher </Directory> Alias / /dj <Location /> SetHandler python-program PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE dj.settings PythonDebug On </Location> </VirtualHost> ; ; ports.conf (apache) Listen 127.0.0.1:8080 ; ; cpa.conf (nginx) server { listen 80; server_name www.foo.com; location /static { root /var/public; index index.html; } location /cpa/js { root /var/public/js; } location /cpa/css { root /var/public/css; } location /djmedia { alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/"; } location / { include /etc/nginx/proxy.conf; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080; } } ; ; proxy.conf (nginx) proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 500; proxy_buffers 32 4k;

    Read the article

  • LaunchDaemon causing Lion to hang on boot

    - by Brett
    I've got a Mac Mini 2011, which I intend to use for a few tasks such as Plex and running a few VMs. I've installed virtualbox, along with XAMPP and phpvirtualbox, which all worked fine. However, getting this to run on startup is proving a real PITA! At the moment I'm trying to get vboxwebsrv running on boot. I've created a launchd plist within /Library/LaunchDaemons to run it and it works fine... well, sort of. Lion, when booting, will show the spinning wheel and stop, never showing a GUI - however, if I remote in via screen sharing or SSH, I can log in fine and see that vboxwebsrv has launched successfully. Setting this plist to disabled makes Lion boot up fine again. Initially I thought it was due to it staying open, so I tried to add -b, which causes it to run in the background, but this just caused launchd to constantly spawn new processes and didn't even fix my problem of Lion being stuck at the spinning wheel. Does anyone have any ideas? I'm losing my mind here!

    PLIST:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Disabled</key>
        <false/>
        <key>KeepAlive</key>
        <false/>
        <key>UserName</key>
        <string>vbox</string>
        <key>RunAtLoad</key>
        <true/>
        <key>OnDemand</key>
        <false/>
        <key>Label</key>
        <string>org.virtualbox.vboxwebsvc</string>
        <key>ProgramArguments</key>
        <array>
            <string>/Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv</string>
        </array>
    </dict>
    </plist>
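
    One thing worth ruling out is the plist itself: OnDemand is the legacy launchd key that KeepAlive replaced, and carrying both can make the respawn behaviour hard to reason about. A small sketch follows, assuming Python 3 is available on the machine; the plist filename is a guess based on the Label and may differ from yours.

    # plist_check.py -- a sanity check, not a fix: parse the LaunchDaemon
    # definition and flag the deprecated OnDemand key sitting next to KeepAlive
    # (launchd treats OnDemand as the legacy inverse of KeepAlive).
    import plistlib

    PLIST = "/Library/LaunchDaemons/org.virtualbox.vboxwebsvc.plist"  # assumed filename

    with open(PLIST, "rb") as fh:
        job = plistlib.load(fh)          # raises if the XML is malformed

    print("Label:", job.get("Label"))
    print("RunAtLoad:", job.get("RunAtLoad"))
    print("KeepAlive:", job.get("KeepAlive"))
    if "OnDemand" in job:
        print("Note: OnDemand is deprecated; prefer KeepAlive alone "
              "(OnDemand=%s alongside KeepAlive=%s is ambiguous)"
              % (job["OnDemand"], job.get("KeepAlive")))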

    Read the article

  • IIS can't load Oracle.Web assembly (for ASP.NET membership provider)

    - by Konamiman
    I am trying to configure an IIS web site to use an Oracle database for ASP.NET membership, but I can't get it to work. IIS doesn't seem to be able to load the assembly containing the Oracle membership provider. That's what I have so far:

    - An Oracle 10g database online, with all the tables for ASP.NET membership created.
    - Windows 2008 R2 Standard with the web server role installed, including support for ASP.NET.
    - Oracle 11g Release 2 ODAC 11.2.0.1.2 installed. The installed components are: Oracle data provider for .NET, Oracle providers for ASP.NET, Oracle instant client.
    - The default web site on IIS (I am using that for testing) has the following web.config file:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.web>
        <membership defaultProvider="OracleMembershipProvider">
          <providers>
            <remove name="SqlMembershipProvider" />
            <add name="OracleMembershipProvider"
                 type="Oracle.Web.Security.OracleMembershipProvider, Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342"
                 connectionStringName="OracleServer" />
          </providers>
        </membership>
      </system.web>
    </configuration>

    (Additional attributes on the "add" element omitted for brevity. Also, the connection string is defined for the whole server.)

    The Oracle.Web.dll file is in the GAC (the relevant entry shows up under the C:\Windows\Assembly folder). The web site application pool is configured for .NET 2.0 and has 32-bit applications enabled. I have allowed untrusted providers in IIS's administration.config file (just for the sake of testing; I'll explicitly add the assembly to the trusted providers list later). With all of this setup in place, when I click on the ".NET Users" icon in the IIS Manager, I get a warning about the provider having too many privileges, and when I accept I get the following message:

    There was an error while performing this operation. Details: Could not load file or assembly 'Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. The system cannot find the file specified.

    So, what am I missing? How can I get the Oracle membership provider to work? Thank you!

    UPDATE: It seems that the problem is not with IIS itself, but with the IIS administration tool only. When using the web site configuration tool provided by Visual Studio, everything works fine.
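
    Since the application pool runs 32-bit while the IIS Manager process on a 64-bit Windows 2008 R2 box is 64-bit, a bitness mismatch in the GAC'd assembly is one plausible reason the manager alone fails to load it. The sketch below is an illustration only; the GAC path shown is an assumption about where a 32-bit copy would land, so point it at whichever Oracle.Web.dll actually got installed. It reads the PE header to report whether the file targets x86 or x64.

    # pe_bitness.py -- rough diagnostic sketch (not part of the original post):
    # print the target machine of a PE/.NET file so it can be compared against
    # the 64-bit IIS Manager process and the 32-bit application pool.
    import struct

    PATH = r"C:\Windows\assembly\GAC_32\Oracle.Web\2.112.1.2__89b483f429c47342\Oracle.Web.dll"  # assumed path

    MACHINE = {0x014C: "x86 (32-bit)", 0x8664: "x64", 0x0200: "Itanium"}

    with open(PATH, "rb") as fh:
        data = fh.read(4096)

    assert data[:2] == b"MZ", "not a PE file"
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]         # e_lfanew
    assert data[pe_offset:pe_offset + 4] == b"PE\0\0", "PE signature not found"
    machine = struct.unpack_from("<H", data, pe_offset + 4)[0]  # COFF Machine field
    print("Machine:", MACHINE.get(machine, hex(machine)))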

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    We are running very high-volume traffic on these servers, configured with django, gunicorn, supervisor and nginx, but a lot of the time I see 502 errors. So I checked the nginx logs to see what the error was, and this is what is recorded:

    [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream

    Can anyone help debug what might be causing this to happen? This is our nginx configuration:

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    listen 80 default_server;
    server_name imp.ourapp.com;
    access_log /mnt/ebs/nginx-log/ourapp-access.log;
    error_log /mnt/ebs/nginx-log/ourapp-error.log;
    charset utf-8;
    keepalive_timeout 60;
    client_max_body_size 8m;
    gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json;
    location / {
        proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
        proxy_pass_request_headers on;
        proxy_read_timeout 600s;
        proxy_connect_timeout 600s;
        proxy_redirect http://localhost/ http://imp.ourapp.com/;
        #proxy_set_header Host $host;
        #proxy_set_header X-Real-IP $remote_addr;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-Proto $my_scheme;
        #proxy_set_header X-Forwarded-Ssl $my_ssl;
    }

    We have configured Django to run in Gunicorn as a generic WSGI application, and Supervisord is used to launch the gunicorn workers:

    home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application

    This is what the gunicorn.conf.py looks like:

    import multiprocessing
    bind = 'unix:/tmp/gunicorn-ourapp.socket'
    workers = multiprocessing.cpu_count() * 3 + 1
    timeout = 600
    graceful_timeout = 40

    Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server:

    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 59481
    max locked memory (kbytes, -l) 64
    max memory size (kbytes, -m) unlimited
    open files (-n) 50000
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) 1024
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited
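
    With a unix socket upstream, "(11: Resource temporarily unavailable)" on nginx's connect() usually means the socket's listen backlog is full because every worker is tied up. A sketch of a revised gunicorn.conf.py follows; the numbers are illustrative assumptions, not the original values or a recommendation, and the effective backlog is still capped by the kernel's net.core.somaxconn.

    # gunicorn.conf.py -- a sketch, not the poster's config: raise the accept
    # backlog and avoid 600-second timeouts that let slow requests pin workers.
    import multiprocessing

    bind = 'unix:/tmp/gunicorn-ourapp.socket'
    backlog = 2048                        # gunicorn's listen backlog (assumed value)
    workers = multiprocessing.cpu_count() * 2 + 1
    timeout = 60                          # assumed; long timeouts hold workers busy
    graceful_timeout = 40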

    Read the article

  • Nginx phpmyadmin redirecting to / instead of /phpmyadmin upon login

    - by Frederik Nielsen
    I am having issues with phpmyadmin on my nginx install. When I enter <ServerIP>/phpmyadmin and log in, I get redirected to <ServerIP>/index.php?<tokenstuff> instead of <ServerIP>/phpmyadmin/index.php?<tokenstuff>.

    Nginx config file:

    user nginx;
    worker_processes 5;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        keepalive_timeout 2;
        #gzip on;
        include /etc/nginx/conf.d/*.conf;
    }

    Default.conf:

    server {
        listen 80;
        server_name _;
        #charset koi8-r;
        #access_log /var/log/nginx/log/host.access.log main;
        location / {
            root /usr/share/nginx/html;
            index index.php index.html index.htm;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
            root /usr/share/nginx/html;
            try_files $uri =404;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
            deny all;
        }
        location /phpmyadmin {
            root /usr/share/;
            index index.php index.html index.htm;
            location ~ ^/phpmyadmin/(.+\.php)$ {
                try_files $uri =404;
                root /usr/share/;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $request_filename;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
            }
            location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
                root /usr/share/;
            }
        }
    }

    (Any general tips on tidying up those config files are welcome too.)
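
    It can help to establish which layer issues the bad redirect before changing the config. The sketch below is a diagnostic only (SERVER_IP is a placeholder): it requests a phpMyAdmin URL and prints the Location header without following it, and the same technique can be repeated with the session cookie to capture the post-login redirect.

    # redirect_probe.py -- print the raw Location header instead of following
    # it, to see whether the stripped /phpmyadmin/ prefix comes back from PHP
    # or from an nginx rewrite. Diagnostic sketch, not a fix.
    import urllib.request
    import urllib.error

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None                  # do not follow; surface the 3xx as an error

    opener = urllib.request.build_opener(NoRedirect)
    url = "http://SERVER_IP/phpmyadmin/index.php"   # placeholder host
    try:
        resp = opener.open(url, timeout=10)
        print(resp.status, resp.headers.get("Location"))
    except urllib.error.HTTPError as e:             # 3xx responses land here
        print(e.code, e.headers.get("Location"))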

    Read the article

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page - the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get:

    Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed. The current instance of the SQL Server Analysis Services service cannot be upgraded because the Analysis Services service is disabled or not online. Please start the service and then run the upgrade rules check again.

    Simple enough - just need to start the service. Here's where it gets troublesome. When I open Services and go to start the SQL Server Analysis Services (MSSQLSERVER) service, it gives me the following message:

    The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.

    Trying from the command line as Administrator yields:

    PS C:\Windows\System32 net start MSSQLServerOLAPService
    The SQL Server Analysis Services (MSSQLSERVER) service is starting...
    The SQL Server Analysis Services (MSSQLSERVER) service could not be started.
    The service did not report an error.
    More help is available by typing NET HELPMSG 3534.

    I've tried changing the logon setting of this service to Administrator, a user with admin privileges, and both the Local System and Network Service accounts - nothing works. In addition, when I look at the service through the SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting for the service results in the message:

    The server threw an exception. [0x80010105]

    I have no need for Analysis Services itself - all I need is for this one service to be running long enough to do the R2 upgrade, then it can shut down again. Any thoughts on how to get the Analysis Services service running?

    Update: Checking the event log, I found an error logged to the Application log from the MSSQLServerOLAPService. It has event ID 0, task category (289), and says:

    The service cannot be started: XML parsing failed at line 1, column 4: Unrecognized input signature.
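
    An XML parser complaining about an unrecognized input signature at line 1 generally means a file's byte-order mark or encoding does not match what its XML declaration claims. If the file the service is choking on is its own configuration, a quick look at the first bytes may confirm that. The sketch below is only a starting point: the path assumes a default SSAS 2008 instance and the file in question is an assumption, not something the event log names.

    # check_msmdsrv_ini.py -- hedged diagnostic sketch: report the BOM (if any)
    # at the start of the Analysis Services config so it can be compared with
    # the encoding declared on its first line.
    import codecs

    PATH = r"C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini"  # assumed path

    with open(PATH, "rb") as fh:
        head = fh.read(64)

    if head.startswith(codecs.BOM_UTF16_LE) or head.startswith(codecs.BOM_UTF16_BE):
        print("UTF-16 BOM present")
    elif head.startswith(codecs.BOM_UTF8):
        print("UTF-8 BOM present")
    else:
        print("No BOM found")
    print("First bytes:", head[:16])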

    Read the article
