Search Results

Search found 16396 results on 656 pages for 'browser extensions'.


  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series; see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service, and Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer. As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

    Pattern applicability
    This is a good fit for scenarios where:
    - the runtime environment is secure enough to keep shared secrets
    - the consumer can execute custom code, including building HTTP requests with custom headers
    - the consumer cannot use the Azure SDK assemblies
    - the service may need to know who is consuming it
    - the service does not need to know who the end-user is
    Note there isn't actually a .NET requirement here. By exposing the service as a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS, which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet. In this post, we'll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

    Authenticating and authorizing with ACS
    We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see the walkthrough in Part 1). I've named the identity partialTrustConsumer. We'll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key. We then need to do the same as in Part 2 and add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus:
    - Issuer: Access Control Service
    - Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    - Input claim value: partialTrustConsumer
    - Output claim type: net.windows.servicebus.action
    - Output claim value: Send
    As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener or manage the namespace.

    RESTfully exposing the on-premise service through Azure Service Bus Relay
    The Part 3 sample code is ready to go - just put your Azure details into Solution Items\AzureConnectionDetails.xml and "Run Custom Tool" on the .tt files. But to do it yourself is very simple. We already have a WebGet attribute in the service for making REST calls locally, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
    It's as easy as adding this endpoint to Web.config for the service:

        <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"
                  binding="webHttpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>

    - and adding the webHttp attribute in your endpoint behavior:

        <behavior name="SharedSecret">
          <webHttp/>
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>

    Where's my WSDL?
    The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help you'll see the URI format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123

    Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven't provided an authorization header:

        <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error>

    By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service; Service Bus only relays requests once they've got the necessary approval from ACS.

    Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we get back an SWT:

        var values = new System.Collections.Specialized.NameValueCollection();
        values.Add("wrap_name", "partialTrustConsumer"); // service identity name
        values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); // service identity password
        values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); // this is the realm of the RP in ACS

        var acsClient = new WebClient();
        var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values);
        rawToken = System.Text.Encoding.UTF8.GetString(responseBytes);

    With a little manipulation, we then attach the SWT to subsequent REST calls in the Authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus.
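    Since the post points out that anything that can talk HTTP can be a consumer, here is a minimal sketch of the same flow in Python, purely as an illustration: request an SWT from the ACS WRAP endpoint, then pass it in the Authorization header of the relayed REST call. The namespace, identity name and URI format are the ones used in the article; the password is a placeholder, and the exact shape of the WRAP response and of the WRAP Authorization header are assumptions to verify against your own namespace.

        import urllib.parse
        import urllib.request

        # POST the shared-secret credentials to the ACS WRAP endpoint for the namespace.
        values = {
            "wrap_name": "partialTrustConsumer",                             # service identity name
            "wrap_password": "<service-identity-password>",                  # shared secret (placeholder)
            "wrap_scope": "http://sixeyed-ipasbr.servicebus.windows.net/",   # realm of the relying party in ACS
        }
        acs_response = urllib.request.urlopen(
            "https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/",
            urllib.parse.urlencode(values).encode("utf-8"))
        raw_token = acs_response.read().decode("utf-8")

        # The WRAP response is form-encoded; parse_qsl URL-decodes the Simple Web Token for us.
        swt = dict(urllib.parse.parse_qsl(raw_token))["wrap_access_token"]

        # Attach the SWT to the relayed REST call in the Authorization header.
        request = urllib.request.Request(
            "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123",
            headers={"Authorization": 'WRAP access_token="%s"' % swt})
        print(urllib.request.urlopen(request).read().decode("utf-8"))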
    Running the sample
    Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure. Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product in Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts like configuration, tests or monitoring data in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design time artifacts generated during development, like configuration, application description or topology (a high level view of the components that make up a system), logging information or binaries. It took us several months to give form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of "TeleSharp". In a nutshell, TeleSharp provides the following features:
    1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can tell that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.
    2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them with the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.
    3. Browse all the assemblies and types remotely in your application servers in a web browser, using an interface similar to any of the existing .NET reflection tools. You can easily determine this way whether the server is running the correct version of your applications.
    4. Centralize logging and exception management into the repository. You get different reports and a pivot viewer experience for browsing all the different logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.
    The central repository itself is implemented as a set of OData services that any application can easily consume using regular HTTP. You can read more details in this introductory post. If you think this product can be a good fit for your organization, you can request a trial version on our Tellago Studios website.
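    Because the repository is exposed as OData services, reading from it needs nothing more than an HTTP client. The Python sketch below is only illustrative: the host name, service path and entity set name are hypothetical, and the JSON shape follows the common OData v2 convention rather than anything confirmed by the product documentation.

        import json
        import urllib.request

        # Hypothetical OData query: the ten most recent log entries from the central repository.
        request = urllib.request.Request(
            "http://telesharp.example.local/Repository.svc/LogEntries?$top=10",
            headers={"Accept": "application/json"})   # Atom/XML is the other usual OData format

        payload = json.loads(urllib.request.urlopen(request).read().decode("utf-8"))
        for entry in payload.get("d", {}).get("results", []):
            print(entry)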

    Read the article

  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity... Once finished, everything has to be sent to the client under VS2005 using VB.Net and can target either framework 2.0 or 3.0... A long time ago, the decision to use VS2008 and to target framework 3.0 was taken, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it was VS2005... Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008... I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors...
    - Do not use LinQ keywords (from, in, select, orderby...).
    - Do not use LinQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties). This also means you cannot use XML Literals.
    - Do not use nullable types under the declarative form Dim myInt as Integer? - but using Dim myInt as Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with Is Nothing; use myInt.HasValue instead.
    - Do not use Lambda expressions (there are no Lambda statements in VB9), so you cannot use the keyword "Function".
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use Object Initializers.
    - Do not use the "ternary If operator"... not the IIf method, but this one: If(condition, truepart, falsepart).
    As a side note, I talked about not using LinQ keywords nor the extension methods, but this doesn't mean not to use LinQ in this scenario. LinQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq and use the class "Enumerable" as a helper class... This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on, and then pass in the other parameters needed... That's pretty much all I see, but I could have missed a few... If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I'll modify my list accordingly (and notify our team here...)! Happy coding all!

    Read the article

  • CAM v2.0 ships – all new foundation version

    - by drrwebber
    The latest release of the CAM editor toolset is now available on Sourceforge.net – search NIEM. In this all new version the support from Oracle has enabled a transformation of the editor's underpinning Java framework, resulting in a 3x performance improvement and 50% better memory utilization. The results of nearly six months of improvements are catalogued in the release notes: http://sourceforge.net/projects/camprocessor/files/CAM%20Editor/Releases/2.0/CAM_Editor_2-0_Release_Notes.pdf/download
    However, here I'd like to talk about the strategic vision and highlight specific new go-to features that make a difference for exchange schema designers, with a focus on the NIEM community. So why is this a foundation version? Basically, the new drag and drop designer tool allows you to tailor your own dictionary collection of components and then simply select and position those into your resulting exchange structure. This is true global reuse enabled from a canonical domain dictionary collection. So instead of grappling with XSD Schema syntax or UML model nuances – this is straightforward, direct WYSIWYG visual engineering – using familiar sets of business components. Then the toolkit writes the complex XSD Schema for you, along with test samples, documentation, XMI/UML models, Mindmaps and more. So how do you get a set of business components? The toolkit allows you to harvest these from existing schema collections or enterprise data models, or, as in the case of NIEM, existing domain dictionary collections. I've been using this for the latest IEEE/OASIS/NIST initiative on a Common Data Format (CDF) for elections management systems. So you can download those from OASIS and see how this can transform how you build actual business exchanges – improving the quality, consistency and usability – and dramatically allowing automated generation of artifacts you only dreamed of before, such as a model of your entire major exchange collection components. http://www.oasis-open.org/committees/documents.php?wg_abbrev=election
    So what we have here is a foundation version – setting the scene and the basis for changing how people can generate and manage information exchanges. A foundation built using the OASIS CAM standard combined with aspects of the NIEM Naming and Design Rules, the UN/CEFACT Core Components specifications, and emerging work on OASIS CIQ name and address and ANSI/ISO code list schema. We still have a raft of work to do to integrate this into SOA best practices and extend the dictionary capabilities to assist true community development, answering questions such as:
    - How good is my canonical component collection?
    - How much reuse is really occurring?
    - What inconsistencies and extensions are there in the dictionary components?
    Expect us to begin tackling these areas now that the foundation is in place. The immediate need is to develop training and self-start materials – so we will be focusing there for the next couple of months, and especially leading up to the IJIS industry event in July in New Jersey, and the NIEM NTE event in August in Philadelphia. http://sourceforge.net/projects/camprocessor

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find out that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help with more complex code to control the flow of logic. For example, an if-then-else with many nested levels can be near impossible to debug without tracing through it when using CoffeeScript. Also, with IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading it is not that hard, especially with some extensions that show vertical lines in the code editor to help see what is nested within what part of the code. However, with CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you need to first of all remove ALL non-CoffeeScript coding constructs for it to even compile; however, js2coffee can help with that. Keeping track of nested levels became something that was simply not manageable using CoffeeScript. Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent code, spaces or tabs, is not reliable, as there is no easy way for the programmer to know what parts of the code will get hit when the code spans a page. When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needs to be re-designed! Needless to say that is absurd. When I included a piece of the code he asked me if it was legacy code. It's like saying to a Java programmer, sorry, you cannot use Java because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them, as they use them to denote objects. Makes sense, but I would still love to see some way to replace code flow control by spaces and indentation with something more concrete and human readable.
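    To make the indentation point concrete, here is a tiny sketch (in Python, which shares CoffeeScript's significant-indentation rule; it is my own illustration of the hazard, not code from the post). The two functions differ only in how far one line is indented, and that alone changes which block the call belongs to.

        def ship(order):
            print("shipping", order["id"])

        def notify(order):
            print("notifying", order["id"])

        def process(orders):
            for order in orders:
                if order["valid"]:
                    ship(order)
                notify(order)      # indented under the loop: notifies EVERY order

        def process_stricter(orders):
            for order in orders:
                if order["valid"]:
                    ship(order)
                    notify(order)  # one level deeper: now notifies only valid orders

        process([{"id": 1, "valid": True}, {"id": 2, "valid": False}])
        process_stricter([{"id": 1, "valid": True}, {"id": 2, "valid": False}])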

    Read the article

  • Partner Blog Series: PwC Perspectives - Looking at R2 for Customer Organizations

    - by Tanu Sood
    Welcome to the first of our partner blog series. November Mondays are all about PricewaterhouseCoopers' perspective on Identity and R2. In this series, we have identity management experts from PricewaterhouseCoopers (PwC) share their perspective on (and experiences with) the recent identity management release, Oracle Identity Management R2. The purpose of the series is to discuss real world identity use cases that helped shape the innovations in the recent R2 release, and the implementation strategies that customers are employing today, with expertise from PwC.

    Part 1: Looking at R2 for Customer Organizations
    In this inaugural post, we will discuss some of the new features of the R2 release of Oracle Identity Manager that some of our customer organizations are implementing today, and the business rationale for those. Oracle's R2 Security portfolio represents a solid step forward for a platform that is already market-leading. Prior to R2, Oracle was an industry titan in security with reliable products, expansive compatibility, and a large customer base. Oracle has taken their identity platform to the next level in their latest version, R2. The new features include a customizable UI, a request catalog, flexible security, enhancements for its connectors, and more.

    Oracle customers will be impressed by the new Oracle Identity Manager (OIM) business-friendly UI. Without question, Oracle has invested significant time in responding to customer feedback about making access requests and related activities easier for non-IT users. The flexibility to add information to screens, hide fields that are not important to a particular customer, and adjust web themes to suit a company's preference makes Oracle's Identity Manager stand out among its peers. Customers can also expect to carry UI configurations forward with minimal migration effort to future versions of OIM. Oracle's flexible UI will benefit many organizations looking for a customized feel with out-of-the-box configurations.

    Organizations looking to extend their services to end users will benefit significantly from new usability features like OIM's 'Catalog.' Customers familiar with Oracle Identity Analytics' 'Glossary' feature will be able to relate to the concept. It will enable Roles, Entitlements, Accounts, and Resources to be requested through the out-of-the-box UI. This is an industry-changing feature, as customers can make the process to request access easier than ever. For additional ease of use, Oracle has introduced a shopping cart style request interface that further simplifies the experience for end users. Common requests can be set up as profiles to save time. All of this is combined with the approval workflow engine introduced in R1 that provides the flexibility customers need to meet their compliance requirements.

    Enhanced security was also on the list of features Oracle wanted to deliver to its customers. The new end-user UI provides additional granular access controls. Common Help Desk use cases can be implemented with ease by updating the application profiles. Access can be rolled out so that administrators can only manage a certain department or organization. Further, OIM can be more easily configured to select which fields can be read-only vs. updated. Finally, this security model can be used to limit search results for roles and entitlements intended for a particular department. Every customer has a different need for access, and OIM now matches this need with a flexible security model.
One of the important considerations when selecting an Identity Management platform is compatibility.  The number of supported platform connectors and how well it can integrate with non-supported platforms is a key consideration for selecting an identity suite.  Oracle has a long list of supported connectors.  When a customer has a requirement for a platform not on that list, Oracle has a solution too.  Oracle is introducing a simplified architecture called Identity Connector Framework (ICF), which holds the potential to simplify custom connectors.  Finally, Oracle has introduced a simplified process to profile new disconnected applications from the web browser.  This is a useful feature that enables administrators to profile applications quickly as well as empowering the application owner to fulfill requests from their web browser.  Support will still be available for connectors based on previous versions in R2. Oracle Identity Manager's new R2 version has delivered many new features customers have been asking for.  Oracle has matured their platform with R2, making it a truly distinctive platform among its peers. In our next post, expect a deep dive into use cases for a customer considering R2 as their new Enterprise identity solution. In the meantime, we look forward to hearing from you about the specific challenges you are facing and your experience in solving those. Meet the Writers Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL). Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions out of which she has been implementing Identity Management solutions for the past one and a half years. Praveen Krishna is a Manager in the Advisory  Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving.

    Read the article

  • Why does switching users completely hang my system every time?

    - by Stéphane
    I have a fresh install of 11.04 64bit, with 2 administrator accounts and 4 normal accounts. The 4 normal accounts (the kids' accounts) don't have passwords; they can login simply by clicking on their names. When any of the users -- either admin or normal -- tries to switch to another account by clicking in the top-right corner of the screen and selecting another user, the screen goes black and the entire system locks up. Even CTRL+ALT+F1 through F7 does nothing. This is reproducible 100% of the time on this system. I can ssh into the box when the console locks up, and by running top, I see that Xorg is consuming about 100% of the CPU. Looking at the output of "ps axfu" in bash while the system is in this "locked up" state, here is the lightdm and X process tree:

        USER       PID %CPU %MEM    VSZ    RSS TTY   STAT START  TIME COMMAND
        root      1153  0.0  0.1 183508   4292 ?     Ssl  Dec26  0:00 lightdm
        root      2187  0.4  4.6 265976 164168 tty7  Ss+  00:43  0:21  \_ /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        stephane  2612  0.0  0.3 266400  10736 ?     Ssl  01:52  0:00  \_ /usr/bin/gnome-session --session=ubuntu
        stephane  2650  0.0  0.0  12264    276 ?     Ss   01:52  0:00  |   \_ /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session /usr/bin/gnome-session --session=ubuntu
        stephane  2703  0.8  3.0 562068 106548 ?     Sl   01:52  0:08  |   \_ compiz
        stephane  2801  0.0  0.0   4264    584 ?     Ss   01:52  0:00  |   |   \_ /bin/sh -c /usr/bin/compiz-decorator
        stephane  2802  0.0  0.3 265744  13772 ?     Sl   01:52  0:00  |   |   \_ /usr/bin/unity-window-decorator
        ...cut...
        root      3024 80.6  0.3 107928  13088 tty8  Rs+  01:53 12:34  \_ /usr/bin/X :1 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch

    That last process, pid #3024 in this case, is what has the CPU pegged. In case it matters (I suspect it might), here is what I think may be the relevant information for my video card, taken from /var/log/Xorg.0.log:

        [ 3392.653] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so
        [ 3392.653] (II) Module glx: vendor="FireGL - AMD Technologies Inc."
        [ 3392.653]    compiled for 6.9.0, module version = 1.0.0
        ...
        [ 3392.655] (II) LoadModule: "fglrx"
        [ 3392.655] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so
        [ 3392.672] (II) Module fglrx: vendor="FireGL - ATI Technologies Inc."
        [ 3392.672]    compiled for 1.4.99.906, module version = 8.88.7
        [ 3392.672]    Module class: X.Org Video Driver
        ...
        [ 3392.759] (==) fglrx(0): ATI 2D Acceleration Architecture enabled
        [ 3392.759] (--) fglrx(0): Chipset: "AMD Radeon HD 6410D" (Chipset = 0x9644)

    Lastly: I did see this posting: Change user on 11.10 hangs system ...but I checked, and the libpam-smbpass package isn't installed on this system.

    Read the article

  • Broken package manager due to incorrect Banshee package

    - by user54974
    Sup, so, I'm not familiar with Linux at all, so help is much appreciated. I've been trying to boot my PC up from a live CD unsuccessfully. I get to the stage at which there are the options to test without installing or install or so on, where I select 'Install Ubuntu.' Here it relays through some fast DOS commands until it reaches 'end trace' and then, eventually, 'Killed.' I have already got a functional 11.10 version installed; could this be a problem? The reason I am attempting a reinstall is because the package system is damaged inside 11.10, a problem I can't seem to solve. If I try to install any new software from within the software centre it tells me that two Banshee extensions must be removed. I try to remove these from inside the terminal, using apt-get remove, which results in:

        You might want to run apt-get -f install to correct these:
        The following packages have unmet dependencies.
         banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    The software centre suggests that I disable all third party repositories and run apt-get install -f. I have done so, but the package system remains damaged, and apt-get install -f attempts to install banshee 2.2.1 but returns:

        Errors were encountered while processing:
         /var/cache/apt/archives/banshee_2.2.1-1ubuntu3_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have also tried apt-get update (runs fine) and apt-get upgrade. The apt-get upgrade command results in:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
         banshee-extension-soundmenu : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed
         banshee-extension-ubuntuonemusicstore : Depends: banshee (>= 2.2.1) but 2.2.0-1ubuntu2 is installed
        E: Unmet dependencies. Try using -f.

    I seem to be going round and round in circles here! If only I could reinstall successfully. Only proposed updates (oneiric proposed) is not enabled.

    Read the article

  • To make or not to make...python-nautilus a dependency?

    - by George Edison
    That is the question! Okay, all silliness aside, I really am forced to make a difficult decision here. My application is written in C++ and allows other scripts to invoke methods via XML-RPC. One of these scripts is a Nautilus extension written in Python. The extension is packaged with the rest of the application and copied to the appropriate place when installed (/usr/share/nautilus-python/extensions). Now the problem is that the Nautilus extension requires the python-nautilus package to be installed to be operational. So therefore I have four options:
    1. Make the python-nautilus package a dependency. This option will ensure that anyone who installs my package will be able to use the Nautilus extension. However, this option will not be attractive to XFCE or KDE users - a ton of python-nautilus's dependencies will be installed on their machines and take up a lot of space - even if they never use Nautilus.
    2. Put the python-nautilus package in the suggests: or recommends: field. This option provides the end-user with a way to avoid installing the python-nautilus package (by providing the --no-install-suggests or --no-install-recommends argument to apt-get). However, this won't work when the user installs the package in the Software Center. (I always get mixed up as to which of those two fields are installed by default.)
    3. Prompt the user when the application is installed or first launched. This option is more complicated than the others but offers the best compromise between making it easy for the user to install python-nautilus (without going into a technical explanation) and not installing it when the user doesn't need it (or want it). I guess the best way to implement this is a simple prompt that invokes apt-get if the user would like the package installed (a rough sketch of this follows below).
    4. Don't install the package at all. This option ensures that nobody has python-nautilus installed on their machine unless they want it. However, this also means that my Nautilus extension will simply not run on the end-user's machine unless they manually install the package.
    Which of these options seems the best choice? Have I missed any pros and cons for each of the options?
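    For what it's worth, option 3 could be as small as the following Python sketch (my own illustration, not from the question): check whether python-nautilus is present on first launch and offer to install it. The elevation command (pkexec) and the prompt wording are assumptions.

        import subprocess

        def package_installed(name):
            # dpkg-query exits non-zero when the package is not installed
            return subprocess.call(
                ["dpkg-query", "-W", name],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

        if not package_installed("python-nautilus"):
            answer = input("The Nautilus extension needs the 'python-nautilus' package. "
                           "Install it now? [y/N] ")
            if answer.strip().lower() == "y":
                # pkexec is only one way to elevate; a packaged app might use PolicyKit/aptdaemon instead.
                subprocess.check_call(["pkexec", "apt-get", "install", "-y", "python-nautilus"])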

    Read the article

  • Accessing your web server via IPv6

    Being able to run your systems on IPv6, have automatic address assignment and the ability to resolve host names are the necessary building blocks in your IPv6 network infrastructure. Now that everything is in place, it is about time we enable another service to respond to IPv6 requests. The following article will guide you through the steps on how to enable Apache2 httpd to listen and respond to incoming IPv6 requests. This is the fourth article in a series on IPv6 configuration:
    - Configure IPv6 on your Linux system
    - DHCPv6: Provide IPv6 information in your local network
    - Enabling DNS for IPv6 infrastructure
    - Accessing your web server via IPv6
    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system.

    Surfing the web - IPv6 style
    Enabling IPv6 connections in Apache 2 is fairly simple. But first let's check whether your system has a running instance of Apache2 or not. You can check this like so:

        $ service apache2 status
        Apache2 is running (pid 2680).

    In case you got a 'service unknown' you have to install Apache to proceed with the following steps:

        $ sudo apt-get install apache2

    Out of the box, Apache binds to all your available network interfaces and listens to TCP port 80. To check this, run the following command:

        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 :::80                   :::*                    LISTEN      28306/apache2

    In this case Apache2 is already binding to IPv6 (and implicitly to IPv4). If you only got a tcp output, then your HTTPd is not yet IPv6 enabled. Check your Listen directive; depending on your system this might be in a different location than the default in Ubuntu.

        $ sudo nano /etc/apache2/ports.conf

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Just in case you don't have a ports.conf file, look for it like so:

        $ cd /etc/apache2/
        $ fgrep -r -i 'listen' ./*

    And modify the related file instead of ports.conf, which most probably will be either apache2.conf or httpd.conf anyway. Okay, please bear in mind that Apache can only bind once on the same interface and port. So, eventually, you might be interested in adding another port which explicitly listens to IPv6 only. In that case, you would add the following to your configuration file:

        Listen 80
        Listen [2001:db8:bad:a55::2]:8080

    But this is completely optional... Anyway, just to complete all steps, you save the file and then check the syntax like so:

        $ sudo apache2ctl configtest
        Syntax OK

    Ok, now let's apply the modifications to our running Apache2 instances:

        $ sudo service apache2 reload
         * Reloading web server config apache2   ...done.
        $ sudo netstat -lnptu | grep "apache2\W*$"
        tcp6       0      0 2001:db8:bad:a55:::8080 :::*                    LISTEN      5922/apache2
        tcp6       0      0 :::80                   :::*                    LISTEN      5922/apache2

    There we have two daemons running and listening to different TCP ports. Now that the basics are in place, it's time to prepare any website to respond to incoming requests on the IPv6 address. Open up any configuration file you have below your sites-enabled folder:

        $ ls -al /etc/apache2/sites-enabled/
        ...
        $ sudo nano /etc/apache2/sites-enabled/000-default

        <VirtualHost *:80 [2001:db8:bad:a55::2]:8080>
                ServerAdmin [email protected]
                ServerName server.ios.mu
                ServerAlias server

    Here, we have to check and modify the VirtualHost directive and enable it to respond to the IPv6 address and port our web server is listening to. Save your changes, run the configuration test and reload Apache2 in order to apply your modifications. After these steps succeed, you can launch your favourite browser and navigate to your IPv6 enabled web server. (Screenshot: accessing an IPv6 address in the browser.) That looks like a successful surgery to me... Note: in case you receive a timeout, check whether your client is operating on IPv6, too.
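    As a quick way to follow up on that closing note, here is a small Python sketch (my own addition, not from the article) that checks from the client side whether the server's name resolves to an IPv6 address and whether port 80 answers over IPv6. The host name reuses the article's VirtualHost example and is otherwise a placeholder.

        import socket

        host = "server.ios.mu"   # placeholder: the ServerName used in the VirtualHost example

        # Ask only for AAAA records; this raises socket.gaierror if the host has no IPv6 address.
        addresses = socket.getaddrinfo(host, 80, socket.AF_INET6, socket.SOCK_STREAM)
        print("IPv6 addresses:", [entry[4][0] for entry in addresses])

        # Make a bare HTTP request over the first IPv6 address to prove end-to-end reachability.
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        sock.settimeout(5)
        sock.connect(addresses[0][4])
        sock.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
        print(sock.recv(200).decode(errors="replace"))
        sock.close()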

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a Cloud Management standard. That work has culminated in the release today of the Cloud Infrastructure Management Interface (CIMI) version 1.0 by the DMTF. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by the cloud vendors, no more will you need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse engineer the inner workings of it, CIMI is a de jure standard that is under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real world experience.

    What does CIMI allow?
    CIMI is both a model for the resources (computing, storage, networking) in the cloud as well as a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them.

    Isn't it too early for a standard?
    A key feature of a successful standard is that it allows for compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional features, which may be vendor specific, have been implemented. As multiple vendors implement such features, they become candidates to add to future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest common denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start to ask their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
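    To make the "send a document over HTTP" idea concrete, here is a rough Python sketch of creating a Machine against a CIMI-style endpoint. It is an illustration only: the endpoint URL is a placeholder, and the attribute names follow my reading of the CIMI resource model, so both should be checked against the actual 1.0 specification and the provider's entry point document.

        import json
        import urllib.request

        # A JSON "document" describing the Machine to create; field names are illustrative.
        machine_document = {
            "name": "demo-machine",
            "description": "Machine created through the CIMI interface",
            "machineTemplate": {
                "machineConfig": {"href": "https://cloud.example.com/cimi/machine_configs/small"},
                "machineImage": {"href": "https://cloud.example.com/cimi/machine_images/ubuntu"},
            },
        }

        request = urllib.request.Request(
            "https://cloud.example.com/cimi/machines",   # machine collection URL from the entry point
            data=json.dumps(machine_document).encode("utf-8"),
            headers={"Content-Type": "application/json", "Accept": "application/json"},
            method="POST")

        response = urllib.request.urlopen(request)
        print(response.status, response.read().decode("utf-8"))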

    Read the article

  • Exalytics OBI11g Partner Training 3-day hands-on Workshops

    - by Mike.Hallett(at)Oracle-BI&EPM
    These hands-on workshops, FREE to OPN Partners, highlight both the hardware and software components that are engineered to work together to deliver Oracle Exalytics - an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation (OBI 11g v 11.1.1.6 and Essbase) with enhanced visualization capabilities and performance optimizations. Priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam, and to Partners who have purchased an Exalytics for their own data centres to demonstrate it to their clients.
    Topics covered will include:
    - Exalytics Architectural Overview
    - Upgrade and Lifecycle Management
    - TimesTen for Exalytics
    - Summary Advisor Utility
    - Essbase and EPM System on Exalytics
    - Dashboard and Analysis Interactions
    - OBIEE 11.1.1.6 Features and Advanced Topics
    After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics.
    Prerequisites:
    - Experience and understanding of OBIEE 11g is required.
    - Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended, and priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam.
    - Good understanding of data warehousing and data modelling for reporting and analysis purposes.
    - Strong experience with database technologies preferred.
    Attendees are to provide their own laptops, which must meet the following minimum hardware/software requirements.
    Hardware:
    - Minimum 8GB RAM
    - 60 GB free disk space (includes staging)
    - USB 2.0 port (at least one available)
    - It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily.
    Software:
    - One of the following operating systems: 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 7.x/8.x or Firefox 3.5.x
    - WinRAR or 7zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html and http://www.7zip.com/
    - Oracle VirtualBox 4.0.2 or higher: downloadable from http://www.virtualbox.org/wiki/Downloads
    - CPU virtualization mode needs to be enabled. We will provide guidance on the day of the workshop.
    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.
    Register here for the 3-day workshops:
    - 11-Dec-12 Birmingham, UK
    - 29-Jan-13 Utrecht, NL
    - 12-Feb-13 Frankfurt, Germany
    - 12-Mar-13 Moscow, Russia

    Read the article

  • Can Anything be Done to Make Improv (a 1993 Win 3.1 App) handle larger Files?

    - by user75185
    My very favorite spreadsheet is Improv, a 1993 Windows 3.1 application. It still puts Excel to shame for building spreadsheets and writing formulas. The only problem is, because Improv was written when 1 Meg of RAM was state of the art, it becomes unstable when working with larger spreadsheets and often crashes and/or corrupts the data file. I am working on a project that greatly exceeds Improv's limits. Although it will ultimately require more robust databasing capability, I could save a lot of critical time if I could delay that headache and continue working in Improv for now. To that end, I moved to the only product I could find that comes close, Quantrix, which is nothing more than Improv updated to handle large spreadsheets and utilize today's technologies. The problems with Quantrix are its speed (significantly slower than Improv) and its $1000 price (which I cannot afford). I have already had three 15-day extensions after the initial 30 day trial, so my time to use Quantrix as a bridge is at its end. Searches for Improv over the years have gotten me nowhere and, not surprisingly after reading some posts on this site, I got nothing for the money and time invested to find a programmer to write code to "fix" this problem. Improv is freely available as "abandonware" at http://vetusware.com/download/LotusImprov2.1/?id=5797 , and the best background info can be found on Wikipedia and at "Moose's Greatest Software Products of All Time - Lotus Improv" http://moosevalley.fhost.com.au/mooses_review_page_lotus_improv.html
    It is critically urgent for me to focus on analyzing the data asap. Working in a stable Improv would, without question, be the fastest route. To that end, I am looking for answers to the following questions, and anything else that might be helpful:
    1) Is it lawful to hire someone to fix Improv for my own use? If so,
    2) About how much should it cost?
    3) About how long should it take?
    4) What skills should I be looking for and/or how should a post be worded?
    5) Is there a niche site where it should be posted?
    6) What questions can I ask to quickly screen candidates? Since I am not a programmer, I need questions whose answers leave no room to confuse me, whether intentionally or not. For example, what tools or players should someone with an acceptable competency level have knowledge of?

    Read the article

  • MOSS 2010 Hosting :: Dialog Platform in SharePoint 2010 & How to Open the Edit Form Dialog for List Item

    - by mbridge
    One of the New User Interface Platforms in SharePoint 2010 is 'The Dialog Platform'. A dialog is essentially a <div> which gets made visible on demand and renders the HTML using a background overlay, creating a modal-dialog-like user experience. We can show an existing div from within the page, or show a different page inside the dialog using a URL. When we pass the URL to the dialog it looks for the querystring parameter "IsDlg=1". If this parameter exists then it will dynamically load the "/_layouts/styles/dlgframe.css" file. This file overrides the "s4-notdlg" class items as "display:none", which means that all items with this class do not get displayed in dialog mode. So if we go to the v4.master page we can see that this class is used by the Ribbon control to hide the ribbon when in dialog mode.

    How to open the Edit Form dialog for a list item:
    In SharePoint 2010 the URL for opening the Edit Form of any list item looks something like this: http://intranet.contoso.com/<SiteName>/Lists/<ListName>/EditForm.aspx?ID=1&IsDlg=1
    ID is the list item row identifier and, as discussed above, IsDlg is for the dialog mode. Now to open a dialog we need to use the SP.UI.ModalDialog.showModalDialog method from the ECMAScript Client Object Model and pass in the URL of the page, the width and height of the dialog, and also a callback function in case we want some code to run after the dialog is closed.

        <script type="text/javascript">
            // Handle the dialog callback
            function DialogCallback(dialogResult, returnValue){
            }

            // Open the dialog
            function OpenEditDialog(id){
                var options = {
                    url: "http://intranet.contoso.com/<SiteName>/Lists/<ListName>/EditForm.aspx?ID=" + id + "&IsDlg=1",
                    width: 700,
                    height: 700,
                    dialogReturnValueCallback: DialogCallback
                };
                SP.UI.ModalDialog.showModalDialog(options);
            }
        </script>

    The .js files for the ECMAScript Object Model (SP.js, SP.Core.js, SP.Ribbon.js, and SP.Runtime.js) are installed in the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\LAYOUTS directory. Here is a good MSDN link explaining the Client Object Model distribution and deployment options available in SharePoint 2010, and this is the lowest cost SharePoint 2010 provider.

    Read the article

  • SQL Server Memory Manager Changes in Denali

    - by SQLOS Team
    The next version of SQL Server will contain significant changes to the memory manager component. The memory manager component has been rewritten for Denali. In the previous versions of SQL Server there were two distinct memory managers: one memory manager handled allocation sizes of 8k or less, and another handled allocations greater than 8k. For Denali there will be one memory manager for all allocation sizes. The majority of the changes will be transparent to the end user. However, some changes will be visible to the user. These are listed below:
    - The 'max server memory' configuration option has new lower limits. Specifically, 32-bit versions of SQL Server will have a lower limit of 64 MB. The 64-bit versions will have a lower limit of 128 MB.
    - All memory allocations by SQL Server components will observe the 'max server memory' configuration option. In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option; allocations larger than 8k weren't constrained.
    - DMVs which refer to memory manager internals have been modified. This includes adding or removing columns and changing column names.
    - The memory manager configuration messages in the error log have minor changes.
    - DBCC memorystatus output has been changed.
    - Address Windowing Extensions (AWE) has been deprecated.
    In the next blog post I will discuss the changes to the memory manager DMVs in greater detail. In future blog posts I will discuss the other changes in greater detail.

    Read the article

  • Getting Started with WLST

    - by Masa Sasaki
    The 38th WebLogic Server study group meeting, held on July 23, featured a session on WLST (WebLogic Scripting Tool). This post summarizes that session: what WLST is, how it works, some tips for finding the right MBeans, and where the bundled samples live. (The material was presented by a member of the Fusion Middleware consulting team.)

    What is WLST?
    The WebLogic Scripting Tool is a command-line scripting environment for administering WebLogic Server, available since WebLogic Server 9.x. It is built on Jython, a Java implementation of the Python language, so WLST commands are provided as Jython functions and ordinary Jython syntax can be used alongside them.

    How WLST talks to the server
    WLST administers the server through Java Management Extensions (JMX): WebLogic Server's configuration and runtime state are exposed as managed beans (MBeans), and WLST navigates and manipulates that MBean hierarchy on the administrator's behalf.

    How WLST runs
    - Interactive mode: type commands one at a time at the WLST prompt and see the results immediately.
    - Script mode: save commands in a .py script file (for example filename.py) and have WLST execute it; regular Jython constructs can be used in the script, which makes it a good fit for repeatable administration tasks.
    - Embedded mode: instantiate the WLST interpreter from a Java program and drive it programmatically.

    WLST and MBeans
    WebLogic Server's settings are exposed through JMX MBeans, and WLST presents the MBean hierarchy as a tree that you navigate much like a file system, listing children and reading or changing attribute values.

    Tips for finding MBeans
    When you know which setting you want to change (an SSL-related setting, for example) but not which MBean holds it, there are several ways to track it down: the WLST ls command, the WLST find command, JRockit Mission Control, searching config.xml, and the WebLogic Server MBean Reference in the documentation.

    Samples
    The session's demos used WLST to read runtime information such as the ThreadPoolRuntimeMBean and to work with JMS resources. WLST sample scripts ship with the server under $WL_HOME/samples/server/examples/src/examples/wlst/online, and WLST template scripts are under $WL_HOME/common/templates/scripts/wlst. These are a good starting point for writing your own scripts.

    (The original post closes with links to the WebLogic Server study group and other WebLogic Server resources.)
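    As a taste of the interactive and script modes described above, here is a short WLST (Jython) sketch. The host, port, credentials and MBean paths are placeholders or typical defaults rather than values from the session, so adjust them for your own domain.

        # Connect to a running Admin Server (online mode).
        connect('weblogic', 'welcome1', 't3://localhost:7001')

        # Configuration MBeans: navigate the hierarchy like a file system and list what is there.
        cd('Servers/AdminServer')
        ls()

        # Runtime MBeans: switch to the server runtime tree and read an attribute of the
        # ThreadPoolRuntimeMBean mentioned above.
        serverRuntime()
        cd('ThreadPoolRuntime/ThreadPoolRuntime')
        print(get('ExecuteThreadTotalCount'))

        disconnect()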

    Read the article

  • Windows 2012/IIS 8 + ASP.NET MVC Application 403.14 (Forbidden) - The Web server is configured to not list the contents

    - by WiredPrairie
    I have a very simple MVC 4 application I'm trying to deploy to a Windows 2012 server. Inconsistently, when navigating to the root of the web application (http://localhost/app), it returns a 403.14 - Forbidden:

        Detailed Error Information:
        Module: DirectoryListingModule
        Notification: ExecuteRequestHandler
        Handler: StaticFile
        Error Code: 0x00000000
        Requested URL: http://localhost:80/test1/
        Physical Path: c:\apps\test1\
        Logon Method: Negotiate

    The web application:
    - is a very vanilla VS2012 MVC4 Intranet template, with only a tweak to a label to prove things were working
    - runs in an Integrated v4.0 application pool set up to use Windows authentication
    - has a custom AD identity assigned to the application pool (so it can gain access to a SQL server)
    - the application pool identity has read permissions in the c:\apps\test1 folder in which it is running
    - is an MVC4 application, targeting .NET 4.0 currently (there's no default document in an MVC4 application, like a default.aspx, as there shouldn't need to be one; I don't want to enable directory listings, as that's not the real error)
    - Installed: Roles / Web Server (IIS) / Application Development / (.NET 4.5 Extensibility, Application Initialization, ASP.NET 4.5, ISAPI Extensions, ISAPI Filters, WebSocket Protocol)
    - works locally on my machine in IIS Express on Windows 8
    - has <modules runAllManagedModulesForAllRequests="true" /> configured in web.config
    - is set to precompiled during publish

    When I change the precompiled option to false, the web application does not fail (in my testing at least, it seems to work consistently). The reason I say it's inconsistent is that I've seen it work, then I've published, and the error returns. I can't find a pattern to the issue (and right now, I haven't been able to get it to work again, at all). The 403 is returned from a local or remote web browser. I've had trouble finding a solution that isn't intended for older versions of Windows (like suggestions to reinstall ASP.NET, which won't work on Windows 2012). I really don't know what else to try.

    Read the article

  • ARR troubleshooting 502.3 / WinHttp tracing on Server 2012

    - by nachojammers
    I have the following scenario: three Windows Server 2012 virtual servers, all with IIS 8:
    - 1 server with Application Request Routing 3
    - 2 servers with the web applications that the ARR server routes to
    I am getting intermittent 502.3 errors with error code 12002. Following this guide, http://www.iis.net/learn/extensions/troubleshooting-application-request-routing/troubleshooting-502-errors-in-arr , I have identified that I need to trace the WinHttp/WebIO providers using netsh to get to the real error code that is mapped to the 12002 error code. I run the trace as the article suggests:

        netsh trace start scenario=internetclient capture=yes persistent=no level=verbose tracefile=c:\temp\net.etl

    When analysing the output of the netsh traces, I don't get the level of information that the article suggests I should. Specifically, I only get the following types of entry in the trace viewed using Netmon:

        WINHTTP_MicrosoftWindowsWinHttp:Stopping WorkItem Thread Action...
        WINHTTP_MicrosoftWindowsWinHttp:Starting WorkItem Thread Action...
        WINHTTP_MicrosoftWindowsWinHttp:Queue Overlapped IO Thread Action...

    I certainly don't get anything detailed enough that would help me understand why I am getting any timeouts. Is there any reason why Server 2012 wouldn't trace the WinHttp API to the level I need? Thanks

    Read the article

  • Using unixODBC to connect to Oracle server

    - by Paul
    I am trying to configure our web server (RHEL 5.4 x86) to connect to an Oracle database using unixODBC. I have installed unixODBC-2.2.11-7.1.1, which yum tells me is the latest version. I have also installed the Oracle InstantClient 11.2 and the Oracle InstantClient ODBC library. I have symlinked all the .so files in /usr/lib/oracle/11.2/client/lib to /usr/lib. I have set $LD_LIBRARY_PATH to /usr/lib/, $ORACLE_HOME to /usr/lib/oracle and $TNS_ADMIN to the directory containing my (valid) tnsnames.ora file. Here are the contents of my /etc/odbcinst.ini file:

        [Oracle]
        Description = Oracle ODBC Connection
        Driver      = /usr/lib/libsqora.so.11.1
        Setup       =
        FileUsage   =

    and my /etc/odbc.ini file:

        [Oracle]
        Application Attributes = T
        Attributes = W
        BatchAutocommitMode = IfAllSuccessful
        CloseCursor = F
        DisableDPM = F
        DisableMTS = T
        Driver = Oracle
        EXECSchemaOpt =
        EXECSyntax = T
        Failover = T
        FailoverDelay = 10
        FailoverRetryCount = 10
        FetchBufferSize = 64000
        ForceWCHAR = F
        Lobs = T
        Longs = T
        MetadataIdDefault = F
        QueryTimeout = T
        ResultSets = T
        ServerName = //<host>:<port>/<db>
        SQLGetData extensions = F
        Translation DLL =
        Translation Option = 0
        UserID =

    (ServerName has been edited... host, port, and db are actually there, and correct.) When I run isql I get:

        $ isql -v Oracle
        isql: symbol lookup error: /usr/lib/libsqora.so.11.1: undefined symbol: SQLGetPrivateProfileStringW

    And running dltest gives me:

        $ dltest Oracle SQLConnect
        [dltest] ERROR dlopen: Oracle: cannot open shared object file: No such file or directory

    If anyone has any insights I would be grateful; I've been trying to get this to connect for about 5 hours now... I am going home for the night, but will gladly provide more details, if necessary, tomorrow morning, to anyone willing to help...
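
    A diagnostic sketch, not taken from the question: SQLGetPrivateProfileStringW is supplied by unixODBC's driver-installer library (libodbcinst), so the usual suspects are the Oracle driver not resolving that library at all, or an older unixODBC build that simply does not export the wide-character variant. The paths below assume the layout described above:

        ldd /usr/lib/libsqora.so.11.1 | grep -i odbcinst                      # is libodbcinst being resolved?
        nm -D /usr/lib/libodbcinst.so.1 | grep SQLGetPrivateProfileString     # is the W variant exported?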

    Read the article

  • Moving from ColdFusion 8 to ColdFusion 10 - Migration Fails

    - by XenoFoxx
    After having made several attempts to migrate from a ColdFusion 8 Standard server to a ColdFusion 10 Standard server, it feels like I am "almost" there. I'm using the 64-bit installer from Adobe's website, on a Windows Server 2008 (64-bit) server with IIS 7.0. The installation itself goes smoothly and the services start and are running, but at the end of the installation it says "ColdFusion Installed, but with errors" and it generates a log file. The log file reads:

        Migration Error: : Check that "C:\ColdFusion8" is a valid directory and is an installation of either ColdFusion MX 6 or ColdFusion MX 7

    and further down says:

        Status: WARNING
        Additional Notes: WARNING - Could not migrate settings from previous version of ColdFusion
        Custom Action: com.macromedia.ia.action.MigrateColdFusionAction
        Status: ERROR
        Additional Notes: ERROR - class com.macromedia.ia.action.MigrateColdFusionAction NonfatalInstallException null

    The applicationHost.config file has new XML referencing the ColdFusion 10 directory, but IIS is still using ColdFusion 8. I'm also going to guess that the settings in the CF Administrator have not been migrated, based on the message in the log above. I've followed the instructions on Adobe's site, including ensuring that ASP.NET, CGI, ISAPI Extensions, and ISAPI Filters are all enabled. I've also enabled IIS 6 Metabase Compatibility even though I don't think it's needed. Has anyone else had similar issues with ColdFusion 10 and IIS 7? Currently I have uninstalled CF 10 and reverted back to
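
    Not confirmed by anything in the question, but when applicationHost.config references the new version while IIS keeps serving the old connector, the usual advice is to re-run ColdFusion's web server configuration tool and re-add the IIS connector by hand. A sketch, with the install path assumed to be the CF10 default (adjust to the real location):

        "C:\ColdFusion10\cfusion\runtime\bin\wsconfig.exe"
        rem opens the connector configuration GUI: remove the stale ColdFusion 8 entry, then add the IIS site again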

    Read the article

  • Why aren't the scripts in my autoload folder being executed in Vim?

    - by Codemonkey
    I'm trying to use Pathogen to manage my Vim extensions. My bundle folder looks like this:

        .../bundle/
        +-- vim-pathogen
        |   +-- autoload
        |       +-- pathogen.vim
        +-- vim-smoothscroll
            +-- autoload
                +-- smooth_scroll.vim

    And my vimrc file includes this:

        let s:root = fnamemodify(resolve(expand("<sfile>:p")), ":h")
        " Initiate pathogen.
        exec "source " . s:root . "/vimfiles/bundle/vim-pathogen/autoload/pathogen.vim"
        exec pathogen#infect()

    My vimrc file is a symlink located in ~ but pointing to a folder inside my Dropbox folder. This appears to work when I start Vim. Pathogen has added vim-smoothscroll to my runtimepath:

        :set runtimepath?
        runtimepath=~/Dropbox/Personal/config_sync/vim/vimfiles,~/Dropbox/Personal/config_sync/vim/vimfiles/bundle/vim-pathogen,~/Dropbox/Personal/config_sync/vim/vimfiles/bundle/vim-smoothscroll,~/.vim,~/vim/share/vim/vimfiles,~/vim/share/vim/vim74,~/vim/share/vim/vimfiles/after,~/.vim/after

    The problem is that the script smooth_scroll.vim hasn't been loaded:

        1: ~/.vimrc
        2: ~/Dropbox/Personal/config_sync/vim/vimfiles/bundle/vim-pathogen/autoload/pathogen.vim
        3: ~/vim/share/vim/vim74/syntax/syntax.vim
        4: ~/vim/share/vim/vim74/syntax/synload.vim
        5: ~/vim/share/vim/vim74/syntax/syncolor.vim
        6: ~/vim/share/vim/vim74/filetype.vim
        7: ~/vim/share/vim/vim74/menu.vim
        8: ~/vim/share/vim/vim74/autoload/paste.vim
        9: ~/Dropbox/Personal/config_sync/vim/vimfiles/colors/codeschool.vim
        10: ~/Dropbox/Personal/config_sync/vim/_vimrc_gui
        11: ~/Dropbox/Personal/config_sync/vim/_vimrc_keybinds
        12: ~/vim/share/vim/vim74/plugin/getscriptPlugin.vim
        13: ~/vim/share/vim/vim74/plugin/gzip.vim
        14: ~/vim/share/vim/vim74/plugin/matchparen.vim
        15: ~/vim/share/vim/vim74/plugin/netrwPlugin.vim
        16: ~/vim/share/vim/vim74/plugin/rrhelper.vim
        17: ~/vim/share/vim/vim74/plugin/spellfile.vim
        18: ~/vim/share/vim/vim74/plugin/tarPlugin.vim
        19: ~/vim/share/vim/vim74/plugin/tohtml.vim
        20: ~/vim/share/vim/vim74/plugin/vimballPlugin.vim
        21: ~/vim/share/vim/vim74/plugin/zipPlugin.vim
        22: ~/vim/share/vim/vim74/syntax/ruby.vim
        23: ~/vim/share/vim/vim74/syntax/vim.vim
        24: ~/vim/share/vim/vim74/syntax/python.vim

    Why is that? Loading the script manually works fine.
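
    One point the question does not mention: files under an autoload/ directory are sourced lazily, only when one of their autoload functions (smooth_scroll#... here) is first called, so they never appear in :scriptnames until something invokes them. A sketch of mappings that would trigger the load, with the function names and arguments assumed from the vim-smooth-scroll documentation rather than taken from this vimrc:

        " calling any smooth_scroll#... function is what actually sources smooth_scroll.vim
        noremap <silent> <C-u> :call smooth_scroll#up(&scroll, 0, 2)<CR>
        noremap <silent> <C-d> :call smooth_scroll#down(&scroll, 0, 2)<CR>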

    Read the article

  • Debian Apache2 and SSL

    - by Topher Fangio
    Hello all, I recently took over a server that is using Apache2 with SSL. I have set up a new server to which I am migrating all of the old websites so that we can more easily scale (it's a cloud server) and so that I can set everything up correctly (or at least with some sort of convention). I have read quite a few articles on setting up Apache2 and SSL with virtual hosts, but I'm a bit confused because all of the examples show three files and I only seem to have two. To compound the problem, they are all named differently (do the file extensions actually make a difference?). The examples show something to this effect:

        <VirtualHost X.X.X.X:443>
            ServerAlias something.mydomain.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/project/client/site
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/mydomain-cert.pem
            SSLCertificateKeyFile /etc/ssl/private/mydomain-key.pem
            SSLCertificateChainFile /etc/ssl/certs/mydomain-ca.crt
        </VirtualHost>

    However, the files I have are:

        _.mydomain.com.crt
        gd_bundle.crt

    It is a wildcard certificate that we purchased through GoDaddy, I believe. I believe that the first file is the actual certificate file and that gd_bundle.crt is the chain file, but that leaves me without a key file. There is also a random mydomain.csr file lying around on the old server, but it wasn't one of the files bundled with the download from GoDaddy, so I'm not really sure what it is. Any help in figuring out what I need to do would be greatly appreciated. I am a software developer, so I know my way around computers, but I have only dabbled in server setup/maintenance. Much thanks!
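
    Not part of the question, but for context: the missing third file is the private key, which was generated on the old server at the same time as that mydomain.csr signing request and is never included in the certificate authority's download. A sketch for checking whether a candidate key found on the old server actually matches the issued certificate (the key file name is a placeholder):

        # the two digests must match for the certificate and key to be a pair
        openssl x509 -noout -modulus -in _.mydomain.com.crt | openssl md5
        openssl rsa -noout -modulus -in mydomain.key | openssl md5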

    Read the article

  • FTP timing out after login

    - by Imran
    For some reason I can't access any of my accounts on my dedicated server via FTP. It simply times out when it tries to display the directories. Here's a log from FileZilla:

        Status: Resolving address of testdomain.com
        Status: Connecting to 64.237.58.43:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [TLS] ----------
        Response: 220-You are user number 3 of 50 allowed.
        Response: 220-Local time is now 19:39. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER testaccount
        Response: 331 User testaccount OK. Password required
        Command: PASS ********
        Response: 230-User testaccount has group access to: testaccount
        Response: 230 OK. Current restricted directory is /
        Command: SYST
        Response: 215 UNIX Type: L8
        Command: FEAT
        Response: 211-Extensions supported:
        Response: EPRT
        Response: IDLE
        Response: MDTM
        Response: SIZE
        Response: REST STREAM
        Response: MLST type*;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response: MLSD
        Response: ESTP
        Response: PASV
        Response: EPSV
        Response: SPSV
        Response: ESTA
        Response: AUTH TLS
        Response: PBSZ
        Response: PROT
        Response: 211 End.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is your current location
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (64,237,58,43,145,153)
        Command: MLSD
        Response: 150 Accepted data connection
        Response: 226-ASCII
        Response: 226-Options: -a -l
        Response: 226 18 matches total
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    I have restarted the FTP service several times but it still doesn't load. I only have this problem when my server is reaching its peak usage, which is still only a load of 1.0 (4 cores) and 40% of 4GB RAM. The FTP connections aren't maxed out, because only me and my colleague have access to FTP on the server.
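
    Nothing in the log proves it, but the pattern shown here (login succeeds, MLSD is accepted, then the data transfer times out) commonly points to the passive-mode data ports being blocked by a firewall. A sketch of pinning and opening a fixed passive range for Pure-FTPd, assuming a Debian-style one-file-per-directive layout (the range and paths are placeholders):

        echo "30000 35000" > /etc/pure-ftpd/conf/PassivePortRange
        iptables -A INPUT -p tcp --dport 30000:35000 -j ACCEPT
        service pure-ftpd restart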

    Read the article

  • .htaccess Permission denied. Unable to check htaccess file

    - by Josh
    I have a strange problem when adding a sub-domain to our virtual server. I have done similar sub-domains before and they have worked fine. When I try to access the sub-domain I get a 403 Forbidden error. I checked the error logs and found the following error:

        pcfg_openfile: unable to check htaccess file, ensure it is readable

    I've searched Google and could only find solutions regarding file and folder permissions, which I have checked; that didn't solve it. I also saw problems with FrontPage Extensions, but those are not installed on the server.
    Edit: Forgot to say that there isn't a .htaccess file in the directory of the sub-domain.
    Edit #2: Still not been able to find a solution for this. The only things I have been able to find out are: it doesn't seem to be a problem with any .htaccess files (I've tried creating blank ones, with correct user privileges); it doesn't seem to be a problem with any folder permissions, as they are all set correctly; and there isn't a problem with the way the sub-domain has been set up, as I've tried pointing the DocumentRoot to another folder and it worked fine (I've also done sub-domains fine before with no problem).
    Edit #3: Found out more information. I don't think it can be a file permission problem now, because if I access the site by going to the server IP and then the directory where the site is hosted, it all works fine (minus the stylesheets & images, which is just down to how they are linked).
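
    A diagnostic sketch rather than a confirmed cause: "unable to check htaccess file" is frequently about Apache being unable to traverse one of the parent directories of the DocumentRoot, even when the final directory's own permissions look correct. namei walks every path component (the path below is a placeholder for the sub-domain's DocumentRoot):

        namei -om /var/www/subdomain/public_html    # shows owner and mode of every component
        # if a parent directory is missing the execute (search) bit for the Apache user:
        chmod o+x /var/www/subdomain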

    Read the article

  • Flask, lighttpd with FastCGI: can't get it to work

    - by kurojishi
    I'm trying to deploy a simple Flask script to a lighttpd server with FastCGI. This is the lighttpd configuration file, built using the Flask documentation http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
        )
        server.document-root = "/var/www"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/var/log/lighttpd/error.log"
        server.pid-file = "/var/run/lighttpd.pid"
        server.username = "www-data"
        server.groupname = "www-data"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        var.home_dir = "/var/lib/lighttpd"
        var.socket_dir = home_dir + "sockets/"
        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"
        dir-listing.encoding = "utf-8"
        server.dir-listing = "enable"
        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
        fastcgi.server = ("weibo/callback.fcgi" =>
            ((
                "socket" => "/tmp/weibocrawler-fcgi.sock",
                "bin-path" => "/var/www/weibo/callback.fcgi",
                "check-local" => "disable",
                "max-procs" => 1
            ))
        )
        url.rewrite-once = (
            "^(/weibo($|/.*))$" => "$1",
            "^(/.*)$" => "weibo/callback.fcgi$1

    and this is the script I'm trying to run:

        #!/home/nrl/kuro/weiboenv/bin/python
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run()

    But when I test the configuration file I get this error:

        2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1

    When I remove the url rewrite, I get these errors in the log even though the daemon starts:

        2013-07-02 16:25:53: (log.c.166) server started
        2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start:
        2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py
        2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags.
        2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed.
        2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.
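
    Two observations, offered as a sketch rather than a confirmed fix: the url.rewrite-once block above never closes its parenthesis, which lines up with the parser failing at line 52 right after "weibo/callback.fcgi$1", and the launcher passes an undefined name (application) to WSGIServer even though it imports app. A corrected version of the launcher under those assumptions:

        #!/home/nrl/kuro/weiboenv/bin/python
        # same paths as in the question; pass the imported Flask app object to flup
        from flup.server.fcgi import WSGIServer
        from callback import app

        if __name__ == '__main__':
            WSGIServer(app, bindAddress='/tmp/weibocrawler-fcgi.sock').run()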

    Read the article

< Previous Page | 389 390 391 392 393 394 395 396 397 398 399 400  | Next Page >