Search Results

Search found 16467 results on 659 pages for 'request filtering'.

Page 392 of 659

  • Can I alias all directory requests to a single file in nginx?

    - by user749618
    I'm trying to figure out how to take all requests made to a particular directory and return a JSON string, without a redirect, in nginx.

    Example request:

        curl -i http://example.com/api/call1/

    Expected result:

        HTTP/1.1 200 OK
        Accept-Ranges: bytes
        Content-Type: application/json
        Date: Fri, 13 Apr 2012 23:48:21 GMT
        Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT
        Server: nginx
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 38
        Connection: keep-alive

        {"logout": true}

    Here's what I have so far in my nginx conf:

        location ~ ^/api/(.*)$ {
            index /api_logout.json;
            alias /path/to/file/api_logout.json;
            types { }
            default_type "application/json; charset=utf-8";
            break;
        }

    However, when I try to make the request, the Content-Type doesn't stick:

        $ curl -i http://example.com/api/call1/
        HTTP/1.1 200 OK
        Accept-Ranges: bytes
        Content-Type: application/octet-stream
        Date: Fri, 13 Apr 2012 23:48:21 GMT
        Last-Modified: Fri, 13 Apr 2012 22:58:56 GMT
        Server: nginx
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 38
        Connection: keep-alive

        {"logout": true}

    Is there a better way to do this? How can I get the application/json type to stick?
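
    If the goal is simply a fixed JSON body for every request under /api/, one alternative worth sketching (not from the original question) is to skip the file, the alias and the index processing entirely and emit the body with nginx's return directive, which leaves default_type in full control of the Content-Type header:

        # Hypothetical alternative: no file, no alias, no index handling,
        # so nothing re-resolves the MIME type behind default_type's back.
        location ~ ^/api/ {
            default_type "application/json; charset=utf-8";
            return 200 '{"logout": true}';
        }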

    Read the article

  • Adding an Admin user to an ASP.NET MVC 4 application using a single drop-in file

    - by Jon Galloway
    I'm working on an ASP.NET MVC 4 tutorial and wanted to set it up so that just dropping a file in App_Start would create a user named "Owner" and assign them to the "Administrator" role (more explanation at the end if you're interested). There are reasons why this wouldn't fit into most application scenarios:

    It's not efficient, as it checks for (and creates, if necessary) the user every time the app starts up
    The username, password, and role name are hardcoded in the app (although they could be pulled from config)
    Automatically creating an administrative account in code (without user interaction) could lead to obvious security issues if the user isn't informed

    However, with some modifications it might be more broadly useful - e.g. creating a test user with limited privileges, ensuring a required account isn't accidentally deleted, or - as in my case - setting up an account for demonstration or tutorial purposes.

    Challenge #1: Running on startup without requiring the user to install or configure anything

    I wanted to see if this could be done just by having the user drop a file into the App_Start folder and go. No copying code into Global.asax.cs, no installing additional NuGet packages, etc. That may not be the best approach - perhaps a NuGet package with a dependency on WebActivator would be better - but I wanted to see if this was possible and see if it offered the best experience.

    Fortunately ASP.NET 4 and later provide a PreApplicationStartMethod attribute which allows you to register a method which will run when the application starts up. You drop this attribute in your application and give it two parameters: a method name and the type that contains it. I created a static class named PreApplicationTasks with a static method named Initializer, then dropped this attribute in it:

        [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")]

    That's it. One small gotcha: the namespace can be a problem with assembly attributes. I decided my class didn't need a namespace.

    Challenge #2: Only one PreApplicationStartMethod per assembly

    In .NET 4, PreApplicationStartMethod is marked with AllowMultiple = false, so you can only have one PreApplicationStartMethod per assembly. This was fixed in .NET 4.5, as noted by Jon Skeet, so you can have as many PreApplicationStartMethods as you want (allowing you to keep your users waiting for the application to start indefinitely!). The WebActivator NuGet package solves the multiple instance problem if you're on .NET 4 - it registers as a PreApplicationStartMethod, then calls any methods you've indicated using [assembly: WebActivator.PreApplicationStartMethod(type, method)]. David Ebbo blogged about that here: Light up your NuGets with startup code and WebActivator. In my scenario (bootstrapping a beginner-level tutorial) I decided not to worry about this and stick with PreApplicationStartMethod.

    Challenge #3: PreApplicationStartMethod kicks in before configuration has been read

    This is by design, as Phil explains. It allows you to make changes that need to happen very early in the pipeline, well before Application_Start. That's fine in some cases, but it caused me problems when trying to add users, since the Membership Provider configuration hadn't yet been read - I got an exception stating that "Default Membership Provider could not be found." The solution here is to run code that requires configuration in a PostApplicationStart method. But how to do that?
    Challenge #4: Getting PostApplicationStartMethod without requiring WebActivator

    The WebActivator NuGet package, among other things, provides a PostApplicationStartMethod attribute. That's generally how I'd recommend running code that needs to happen after Application_Start:

        [assembly: WebActivator.PostApplicationStartMethod(typeof(TestLibrary.MyStartupCode), "CallMeAfterAppStart")]

    This works well, but I wanted to see if this would be possible without WebActivator. Hmm. Well, wait a minute - WebActivator works in .NET 4, so clearly it's registering and calling PostApplicationStartup tasks somehow. Off to the source code! Sure enough, there's even a handy comment in ActivationManager.cs which shows where PostApplicationStartup tasks are being registered:

        public static void Run()
        {
            if (!_hasInited)
            {
                RunPreStartMethods();

                // Register our module to handle any Post Start methods. But outside of ASP.NET, just run them now
                if (HostingEnvironment.IsHosted)
                {
                    Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility.RegisterModule(typeof(StartMethodCallingModule));
                }
                else
                {
                    RunPostStartMethods();
                }

                _hasInited = true;
            }
        }

    Excellent. Hey, that DynamicModuleUtility seems familiar... Sure enough, K. Scott Allen mentioned it on his blog last year. This is really slick - a PreApplicationStartMethod can register a new HttpModule in code. Modules are run right after application startup, so that's a perfect time to do any startup stuff that requires configuration to be read. As K. Scott says, it's this easy:

        using System;
        using System.Web;
        using Microsoft.Web.Infrastructure.DynamicModuleHelper;

        [assembly: PreApplicationStartMethod(typeof(MyAppStart), "Start")]

        public class CoolModule : IHttpModule
        {
            // implementation not important
            // imagine something cool here
        }

        public static class MyAppStart
        {
            public static void Start()
            {
                DynamicModuleUtility.RegisterModule(typeof(CoolModule));
            }
        }

    Challenge #5: Cooperating with SimpleMembership

    The ASP.NET MVC Internet template includes SimpleMembership. SimpleMembership is a big improvement over traditional ASP.NET Membership. For one thing, rather than forcing a database schema on you, it can work with your database schema. In the MVC 4 Internet template case, it uses Entity Framework Code First to define the user model. The SimpleMembership bootstrap includes a call to InitializeDatabaseConnection, and I want to play nice with that. There's a new [InitializeSimpleMembership] attribute on the AccountController, which calls \Filters\InitializeSimpleMembershipAttribute.cs::OnActionExecuting(). The comment in that method says "Ensure ASP.NET Simple Membership is initialized only once per app start", which sounds like good advice. I figured the best thing would be to call that directly:

        new Mvc4SampleApplication.Filters.InitializeSimpleMembershipAttribute().OnActionExecuting(null);

    I'm not 100% happy with this - in fact, it's my least favorite part of this solution. There are two problems - first, directly calling a method on a filter, while legal, seems odd. Worse, though, the filter lives in the application's namespace, which means that this code no longer works well as a generic drop-in. The simplest workaround would be to duplicate the relevant SimpleMembership initialization code into my startup code, but I'd rather not. I'm interested in your suggestions here.
    Challenge #6: Module Init methods are called more than once

    When debugging, I noticed (and remembered) that the Init method may be called more than once per page request - it's run once per instance in the app pool, and an individual page request can cause multiple resource requests to the server. While SimpleMembership does have internal checks to prevent duplicate user or role entries, I'd rather not cause or handle those exceptions. So here's the standard single-use lock in the module's Init method:

        void IHttpModule.Init(HttpApplication context)
        {
            lock (lockObject)
            {
                if (!initialized)
                {
                    // Do stuff
                }
                initialized = true;
            }
        }

    Putting it all together

    With all of that out of the way, here's the code I came up with:

        using Mvc4SampleApplication.Filters;
        using System.Web;
        using System.Web.Security;
        using WebMatrix.WebData;

        [assembly: PreApplicationStartMethod(typeof(PreApplicationTasks), "Initializer")]

        public static class PreApplicationTasks
        {
            public static void Initializer()
            {
                Microsoft.Web.Infrastructure.DynamicModuleHelper.DynamicModuleUtility
                    .RegisterModule(typeof(UserInitializationModule));
            }
        }

        public class UserInitializationModule : IHttpModule
        {
            private static bool initialized;
            private static object lockObject = new object();
            private const string _username = "Owner";
            private const string _password = "p@ssword123";
            private const string _role = "Administrator";

            void IHttpModule.Init(HttpApplication context)
            {
                lock (lockObject)
                {
                    if (!initialized)
                    {
                        new InitializeSimpleMembershipAttribute().OnActionExecuting(null);
                        if (!WebSecurity.UserExists(_username))
                            WebSecurity.CreateUserAndAccount(_username, _password);
                        if (!Roles.RoleExists(_role))
                            Roles.CreateRole(_role);
                        if (!Roles.IsUserInRole(_username, _role))
                            Roles.AddUserToRole(_username, _role);
                    }
                    initialized = true;
                }
            }

            void IHttpModule.Dispose() { }
        }

    The Verdict: Is this a good thing?

    Maybe. I think you'll agree that the journey was worthwhile, as it took us through some of the finer points of hooking into application startup, integrating with membership, and understanding why the WebActivator NuGet package is so useful.

    Will I use this in the tutorial? I'm leaning towards no - I think a NuGet package with a dependency on WebActivator might work better:

    It's a little more clear what's going on
    Installing a NuGet package might be a little less error prone than copying a file
    A novice user could uninstall the package when complete
    It's a good introduction to NuGet, which is a good thing for beginners to see
    This code requires either duplicating a little code from that filter or modifying the file to use the application's namespace

    Honestly I'm undecided at this point, but I'm glad that I can weigh the options.

    If you're interested: Why are you doing this?

    I'm updating the MVC Music Store tutorial to ASP.NET MVC 4, taking advantage of a lot of new ASP.NET MVC 4 features and trying to simplify areas that are giving people trouble. One change that addresses both needs is using the new OAuth support for membership as much as possible - it's a great new feature from an application perspective, and we get a fair number of beginners struggling with setting up membership on a variety of database and development setups, which is a distraction from the focus of the tutorial - learning ASP.NET MVC. Side note: Thanks to some great help from Rick Anderson, we had a draft of the tutorial that was looking pretty good earlier this summer, but there were enough changes in ASP.NET MVC 4 all the way up to RTM that there's still some work to be done.
    It's high priority and should be out very soon. The one issue I ran into with OAuth is that we still need an administrative user who can edit the store's inventory. I thought about a number of solutions for that - making the first user to register the admin, or assigning the first user to register with the username "Administrator" to the Administrator role - but they both ended up requiring extra code; also, I worried that people would use that code without understanding it or thinking about whether it was a good fit.

    Read the article

  • Difference between "Redirect permanent" vs. mod_rewrite

    - by Stefan Lasiewski
    This is an Apache httpd 2.2 server. We require that access to this webserver be encrypted by HTTPS. When web clients visit my site at http://www.example.org/$foo (port 80), I want to redirect their request to the HTTPS-encrypted website at https://www.example.org/$foo . There seem to be two common ways to do this.

    The first method uses the 'Redirect' directive from mod_alias:

        <VirtualHost *:80>
            Redirect permanent / https://www.example.org/
        </VirtualHost>

    The second method uses mod_rewrite:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
        </VirtualHost>

    What is the difference between a "Redirect permanent" and the mod_rewrite stanza? Is one better than the other?
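
    One concrete difference worth noting (an observation about the configs as quoted, not part of the original question): "Redirect permanent" sends a 301, while a RewriteRule to an absolute URL with no [R] flag defaults to a 302 temporary redirect. A sketch of the mod_rewrite variant made equivalent:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteCond %{HTTPS} off
            # R=301 makes the redirect permanent, matching "Redirect permanent";
            # L stops rule processing after the match.
            RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>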

    Read the article

  • IIS - Forwarding requests to a folder to another port

    - by user1231958
    Context: I installed Glassfish 3 on a server that currently hosts ASP and PHP inside Internet Information Server 7, so we can start moving to a new system architecture (the information system is being remade). Glassfish uses another port, and without much configuration (all I had to do was install it) it worked: if I browse to www.domain.com:8080, I reach the Glassfish server.

    Issue: Obviously I don't want people to have to type the port! I also believe it might raise some security issues.

    Requirement: I need the server to take an address of the form www.domain.com/gf or new.domain.com or something alike, and when it receives such a request, "redirect" (masking the URL) the user to the Glassfish website (www.domain.com:8080). Thank you beforehand!
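
    For reference, a sketch of the usual IIS approach (an assumption on my part, not from the question): install the URL Rewrite module plus Application Request Routing (ARR), enable ARR's proxy mode, and add a rewrite rule so IIS reverse-proxies /gf/ requests to the Glassfish port while the browser URL stays unchanged:

        <!-- web.config fragment; assumes URL Rewrite + ARR with proxying enabled -->
        <rewrite>
          <rules>
            <rule name="Proxy to Glassfish" stopProcessing="true">
              <match url="^gf/(.*)" />
              <action type="Rewrite" url="http://localhost:8080/{R:1}" />
            </rule>
          </rules>
        </rewrite>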

    Read the article

  • Using DNS in iproute2

    - by Oliver
    In my setup I can redirect the default gateway based on the source address. Let's say a user connected through tun0 (10.2.0.0/16) is redirected to another VPN. That works fine!

        ip rule add from 10.2.0.10 lookup vpn1

    With a second rule I redirect traffic to another gateway if the user accesses a certain IP address:

        ip rule add from 10.2.0.10 to 94.142.154.71 lookup vpn2

    If I access the page on 94.142.154.71 (myip.is), the user is correctly routed and I can see the IP of the second VPN. On any other pages the IP address of vpn1 is shown. But how do I tell iproute2 that all requests to, e.g., google.com should be redirected through vpn2?
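
    iproute2 itself only sees IP addresses, never hostnames, so a name-based rule has to be approximated by resolving the name first (a rough sketch, with the caveat that large sites resolve to many, frequently changing addresses):

        # Add one rule per address google.com currently resolves to.
        # This snapshot goes stale; a cron job or a DNS-integrated approach
        # would be needed to keep it current.
        for ip in $(dig +short google.com); do
            ip rule add from 10.2.0.10 to "$ip" lookup vpn2
        done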

    Read the article

  • How to Setup an Active Directory Domain-Week 26

    - by OWScott
    Today's lesson covers how to create an Active Directory domain and join a member server to it. This week's topic takes a slightly different turn from the normally IIS-related topics, but it's a key video for setting up either a test or production environment that requires Active Directory. Part of being a web administrator is understanding the servers and how they interact with each other. In less than 13 minutes we complete the entire process, end to end. An understanding of Active Directory is useful, whether it's simply to set up a test lab, or to learn more so that you can manage a production domain environment. This week starts a mini-series on web farms. Today's lesson on setting up a domain is a necessary prerequisite for next week, which will cover Distributed File System Replication (DFS-R), a useful technology for web farms. Upcoming lessons will cover shared configuration, Application Request Routing (ARR), and more. Additionally, this video introduces Vaasnet (www.vaasnet.com), a service that gives the web pro immediate access to an entire lab environment for situations such as these. This is week 26 (the middle week!) of a 52-week series for the Web Pro. Past and future videos can be found here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week's video here.

    Read the article

  • Rename url hiding file extension

    - by Anusri Roy Chowdhury
    I want to show the URL http://some.com/designit/portfolio.php?cat=website&subcat=nature as http://some.com/designit/portfolio/website/nature. cat may or may not be present; likewise subcat may or may not be present. I have put a .htaccess file in the designit folder, with the following contents:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ $1.php [L,QSA]
        RewriteRule ^portfolio/?$ portfolio.php[NC,QSA]
        RewriteRule ^portfolio/([a-zA-Z0-9_-]+)/?$ portfolio.php?cat=$1[L,NC,QSA]

    It shows some.com/designit/portfolio.php as some.com/designit/portfolio, but it does not show some.com/designit/portfolio.php?cat=website as some.com/designit/portfolio/website. Instead I get the error "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request." Please help me to complete this code.
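
    Two likely problems stand out (my reading, not a confirmed fix from the original post): the flag brackets are glued to the substitution ("portfolio.php[NC,QSA]" is treated as a literal target rather than a rule plus flags), and the catch-all .php rule runs first, so /portfolio/website gets ".php" appended over and over until Apache gives up with a 500. A sketch with the specific rules first, the flags separated by a space, and a subcat rule added:

        RewriteEngine On
        RewriteRule ^portfolio/?$ portfolio.php [NC,L,QSA]
        RewriteRule ^portfolio/([a-zA-Z0-9_-]+)/?$ portfolio.php?cat=$1 [NC,L,QSA]
        RewriteRule ^portfolio/([a-zA-Z0-9_-]+)/([a-zA-Z0-9_-]+)/?$ portfolio.php?cat=$1&subcat=$2 [NC,L,QSA]
        # Fall through: append .php only for paths that aren't real files/dirs.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ $1.php [L,QSA]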

    Read the article

  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the SharePoint site in order to function. At the same time I was also working on another feature that required access to the SharePoint search.asmx web service. I found out, the hard way, that the SharePoint web services do not operate in an expected way while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory), they never attempt to request those credentials when the web service is contacted. As a result the services return a 401 Unauthorized response.

    The fix for my situation was to restrict anonymous access to the area that needed it (in this case the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that I removed anonymous access from IIS for the site itself and the QueryService requests were working once more. Here's a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4

    Read the article

  • Sun Grid Engine: Automatically Terminating Idle Interactive Jobs

    - by dmcer
    We're considering using Sun Grid Engine on a small compute cluster. Right now, the current setup is pretty crude and just involves having people ssh to an open machine to run their jobs. We'd like to allow interactive jobs, since that should ease the transition from manually starting jobs to starting them using qsub. But there is some concern that, if we do, people might accidentally leave their interactive sessions idle and block other jobs from being run on the machines. The issue isn't just theoretical, since we previously tried using OpenPBS and there was a problem with people opening an interactive job in a screen session and essentially camping on a machine. Is there any way to configure SGE to automatically kill idle interactive jobs? It looks like this was requested as an enhancement (Issue #2447) way back in 2007, but it doesn't seem like the request was ever implemented.
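
    Since the enhancement never shipped, a common fallback (an assumption on my part, not from the question) is a blunt one: give the interactive queue a hard wall-clock limit so abandoned sessions are killed after a fixed time, even if true idleness is never detected. For example, with an interactive queue named interactive.q:

        # Cap every job in the queue at 8 hours of wall-clock time.
        qconf -mattr queue h_rt 08:00:00 interactive.q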

    Read the article

  • Is it possible to load balance requests from a single source?

    - by Shawn
    In our application, Server A establishes a TCP connection with Server B, then sends a large number of requests to Server B over that TCP connection. The request messages are XML-based. Server B needs to respond within a very short period, and it takes time to process the requests. So we hope a load balancer can be introduced so that we can expedite the processing by using multiple Server B's. This is not a web application. I did some research but failed to find a similar application of a load balancer. Can anyone tell me if there's a load balancer that can help in our application?
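
    The catch (my commentary, not from the question): a TCP load balancer distributes connections, not the individual messages inside one connection, so a single long-lived connection from Server A would still land on exactly one Server B. If Server A can open a small pool of connections instead, something like HAProxy in TCP mode can spread them across the backends:

        # Sketch of an HAProxy TCP-mode pool; addresses and ports are made up.
        listen serverb_pool
            bind :9000
            mode tcp
            balance roundrobin
            server b1 10.0.0.11:9000 check
            server b2 10.0.0.12:9000 check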

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as using a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystem when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog.

    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections

    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor

    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you find they were not collected, because the default account has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.

    The counters used (Object \ Counter (Instance): Justification):

    System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.

    System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.

    Process \ % Processor Time (sqlservr): Definitely should be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.

    Process \ % Processor Time (msmdsrv): Definitely should be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the Analysis Services process on the processor.

    Process \ Working Set (sqlservr): If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.

    Process \ Working Set (msmdsrv): As above, but for the Analysis Services process.

    Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. If multi-proc, be mindful that only an average is provided.

    Processor \ % Privileged Time (_Total): To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications.

    Processor \ % User Time (_Total): To see how the applications are behaving from a processor perspective; a high percentage utilisation indicates that the server is dealing with too many applications and may require upgrading the hardware or scaling out.

    Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.

    Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.

    Memory \ Available MBytes (N/A): The amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.

    Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you've got poor response time for your disk.

    Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20% then you've likely got read/write requests queuing up for your disk, which is unable to service these requests in a timely fashion.

    Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.

    Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase/decrease in network utilisation.

    Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).

    MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.

    MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.

    MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.

    MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.

    MSAS 2005: Storage Engine Query \ Queries from Cache Direct / sec (N/A): Displays the rate of queries answered from the cache directly.

    MSAS 2005: Storage Engine Query \ Queries from Cache Filtered / sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.

    MSAS 2005: Storage Engine Query \ Queries from File / sec (N/A): Displays the rate of queries answered from files.

    MSAS 2005: Storage Engine Query \ Average time / query (N/A): Displays the average time of a query.

    MSAS 2005: Connection \ Current connections (N/A): Displays the number of connections against the SSAS instance.

    MSAS 2005: Connection \ Requests / sec (N/A): Displays the rate of query requests per second.

    MSAS 2005: Locks \ Current Lock Waits (N/A): Displays the number of connections waiting on a lock.

    MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.

    MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A): Shows the number of bytes of data processed in a temporary file.

    MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A): Shows the number of rows of data processed in a temporary file.
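
    As a companion to the GUI steps above, roughly the same counter log can be scripted (a hedged sketch; the counter list is abbreviated and the paths are made up):

        rem Create a counter log sampling every 15 seconds; run it under an
        rem account with rights to the SSAS instance so the MSAS counters work.
        logman create counter SSAS_Baseline -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\MSAS 2005:Connection\Current connections" -si 15 -o C:\PerfLogs\SSAS_Baseline
        logman start SSAS_Baseline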

    Read the article

  • Using the BAM Interceptor with Continuation

    - by Charles Young
    Originally posted on: http://geekswithblogs.net/cyoung/archive/2014/06/02/using-the-bam-interceptor-with-continuation.aspx

    I've recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server's BAM event observation library. In doing this, I noticed an issue with continuations. Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly. Careful inspection of my code confirmed that I was initializing and invoking the BAM Interceptor correctly, so I was mystified. However, I eventually found the problem. It is a logical error in the BAM Interceptor code itself.

    The BAM Interceptor provides a useful mechanism for implementing dynamic tracking. It supports configurable 'track points'. These are grouped into named 'locations'. BAM uses the term 'step' as a synonym for 'location'. Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc. Each step defines a collection of track points.

    Understanding Steps

    The BAM Interceptor provides an abstract model for handling configuration of steps. It doesn't, however, define any specific configuration mechanism (e.g., config files, SSO, etc.). It is up to the developer to decide how to store, manage and retrieve configuration data. At run time, this configuration is used to register track points which then drive the BAM Interceptor.

    The full semantics of a step are not immediately clear from Microsoft's documentation. Steps represent points in a business activity where BAM tracking occurs. They are named locations in the code. What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity. The BAM Interceptor enforces this by throwing an error if no 'start new' or 'continue' track point is registered for a named location.

    This constraint implies that each step must be marked with an 'end activity' track point. One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as 'ended' in order to ensure the right housekeeping is done in the database. If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state. A step, therefore, always represents an entire unit of work for a given activity or continuation ID. For activities with continuation, each unit of work is termed a 'fragment'.

    Instance and Fragment State

    Internally, the BAM Interceptor maintains state data at two levels. First, it represents the overall state of the activity using a 'trace instance' token. This token contains the name and ID of the activity together with a couple of state flags. The second level of state represents a 'trace fragment'. As we have seen, a fragment of an activity corresponds directly to the notion of a 'step'. It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions.

    When handling continuations, the BAM Interceptor differentiates between 'root' fragments and other fragments. Very simply, a root fragment represents the start of an activity. Other fragments represent continuations. This is where the logic breaks down.
    The BAM Interceptor loses state integrity for root fragments when continuations are defined.

    Initialization

    Microsoft's BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data. The process starts by populating an Activity Interceptor Configuration object with an array of track points. These can belong to different steps (aka 'locations') and can be registered in any order. Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor. The BAM Interceptor sets up a hash table of array lists. Each step is represented by an array list, and each array list contains an ordered set of track points. The BAM Interceptor represents track points as 'executable' components. When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn. Each track point retrieves any required data using a callback mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.). The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action.

    The Root of the Problem

    The logic breaks down in the Activity Interceptor Configuration. Each Activity Interceptor Configuration is initialised with an instance of a 'trace instance' token. This provides the basic metadata for the activity as a whole. It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed. This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration.

    Each trace instance token is automatically initialised to represent a root fragment. However, if you subsequently register a 'continuation' step with the Activity Interceptor Configuration, the 'root' flag is set to false at the point the 'continue' track point is registered for that step. If you use a 'reflector' tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.

    This makes no sense. The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration. The Activity Interceptor Configuration is designed to hold track points for multiple steps. The 'root' flag is clearly meant to be initialised to 'true' for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed. Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to 'false' before the root fragment is processed. This is clearly an error in logic.

    The problem causes havoc when the BAM Interceptor is used with continuation. Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes! This has nothing to do with the root and the continuation being in the same process. It is due to a fundamental mistake of setting the 'root' flag to false for a continuation before the root fragment is processed.

    The Workaround

    Fortunately, it is easy to work around the bug. The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.
    This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group. In my case, the first approach was required. Here is what the amended code looks like:

        // Because of a logic error in Microsoft's code, a separate
        // ActivityInterceptorConfiguration must be used for each location.
        // The following code extracts only those track points for a given
        // step name (location).
        var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                              where (string)tp.Location == bamStepName
                              select tp;

        var bamActivityInterceptorConfig =
            new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

        foreach (var trackPoint in trackPointGroup)
        {
            switch (trackPoint.Type)
            {
                case TrackPointType.Start:
                    bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
                    break;
                // etc…
            }
        }

    I'm using LINQ to filter a list of track points for those entries that correspond to a given step and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class. As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group). If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way of letting users from the local network administer the print server through the CUPS web interface?

    EDIT: By request, my complete cupsd.conf (spoiler: a minimally edited default config file that comes with the edition of CUPS from the Debian wheezy repos):

        LogLevel warn
        MaxLogSize 0
        SystemGroup lpadmin
        Port 631
        # Listen localhost:631
        Listen /var/run/cups/cups.sock
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        # DefaultAuthType Basic
        DefaultAuthType None
        WebInterface Yes

        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>

        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>

        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow @LOCAL
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default

          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            Order deny,allow
          </Limit>

          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

        # Set the authenticated printer/job policies...
        <Policy authenticated>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default

          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            AuthType Default
            Order deny,allow
          </Limit>

          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
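
    Two mundane culprits worth ruling out first (a hedged guess, not a confirmed diagnosis): group membership changes only take effect for new logins, and cupsd only rereads both the group database and cupsd.conf on restart:

        # Re-apply group membership and restart cupsd on Debian wheezy.
        sudo usermod -aG lpadmin someuser
        sudo service cups restart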

    Read the article

  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding the AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers related to this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with various organizations that are adopting SQL Server 2012, they are always excited by the concept of the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups

    SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like quorum configuration considerations, steps required to build the environment, and a workflow that shows how to handle a disaster recovery.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups

    SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern, covering important concepts like asymmetric storage considerations, quorum model selection, quorum votes, steps required to build the environment, and a workflow.

    Even if you are not going to implement the AlwaysOn feature, these two white papers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the two white papers above for future reference.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • iptables prerouting to redirect source ip address on ethernet

    - by Kevin Campion
    I have 2 IP addresses on the Internet that point to the same machine. On this machine, a Debian VE runs on OpenVZ. I can set iptables rules to redirect all HTTP requests to the Debian VE, roughly:

        iptables PREROUTING -d ip_address_2 -j DNAT --to ip_address_local_1

    [Diagram: requests from the Internet arrive on ip_address_1 and ip_address_2 of the OpenVZ host and are forwarded to the Debian1 VE at ip_address_local_1; Apache's log inside the VE records the client as ip_address_1.]

    The iptables rules:

        iptables -t nat -A PREROUTING -p tcp -i eth0 -d ip_address_2 --dport 80 -j DNAT --to ip_address_local_1:80
        iptables -A FORWARD -p tcp -i eth0 -o venet0 -d ip_address_local_1 --dport 80 -j ACCEPT
        iptables -A FORWARD -p tcp -i venet0 -o eth0 -s ip_address_local_1 --sport 80 -j ACCEPT

    When I go to the webpage at http://ip_address_2, I can see the right content, but the IP address in the access log is ip_address_1; I would like to see my ISP's IP address (the real client address) there instead. Any ideas?
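
    A first thing to check (my suggestion, not from the question): a client address in Apache's log being replaced by one of the host's own addresses is the signature of a SNAT or MASQUERADE rule rewriting the source on the way into the container. Listing the nat table's POSTROUTING chain shows whether such a rule matches the forwarded traffic:

        iptables -t nat -L POSTROUTING -n -v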

    Read the article

  • "File does not exist" in apache error log.

    - by Samuurai
    Hi,

    This is an example of an error in our log file:

        File does not exist: /var/www/website/female, referer: http://www.website.com/female/dresses/A-Dress-Black

    "/female" doesn't exist, because we use friendly URLs via our .htaccess file, which looks like this:

        RewriteEngine On
        # Turn on the rewriting engine
        RewriteBase /
        RewriteCond %{http_host} !^www.website.com$ [nc]
        RewriteRule ^(.*)$ http://www.website.com/$1 [r=301,nc,L]
        RewriteRule ^News/?$ news.php [NC,L]
        RewriteRule ^About/?$ about.php [NC,L]
        RewriteRule ^Contact/?$ contact.php [NC,L]
        RewriteRule ^Sign-In/Create-Account?$ sign_up_in.php [NC,L]
        RewriteRule ^Logout?$ sign_up_in.php?l=1 [NC,L]
        RewriteRule ^Your-Bag?$ your_bag.php [NC,L]
        RewriteRule ^Help?$ help.php [NC,L]
        RewriteRule ^Profile?$ profile.php [NC,L]
        RewriteRule ^Create-Profile?$ profile_create.php [NC,L]
        # ITEM
        RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/([A-Za-z0-9-]+)/?$ store_focus.php?sex=$1catName=$2&permalink=$3 [NC,L]
        # PAGE
        RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/page/([0-9]+)/?$ store.php?sex=$1&catName=$2&page=$3 [NC,L]
        # CATEGORY
        RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/?$ store.php?sex=$1&catName=$2 [NC,L]
        # SEX
        RewriteRule ^([A-Za-z-]+)/?$ store.php?sex=$1 [NC,L]

    Every request for a page results in an error. Has anyone encountered this before?

    Thanks!
    Beren
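
    One defect that jumps out (a guess at a contributing cause, not a confirmed fix for the log entry): the ITEM rule is missing an "&" between its first two query parameters, so sex and catName are fused into a single value. Corrected:

        RewriteRule ^([A-Za-z-]+)/([A-Za-z-]+)/([A-Za-z0-9-]+)/?$ store_focus.php?sex=$1&catName=$2&permalink=$3 [NC,L]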

    Read the article

  • hosting people asking for my account username and password to enable curl and socket function only for me

    - by Jayapal Chandran
    I have hosted my site in a shared environment. My hosting people disabled socket functions altogether, and they said they could enable them just for me if I gave them a written statement. I did, but then they asked for my control panel login details so they could run some kind of script to enable it. Is it right for the hosting company to ask for credentials? They have total control, so why can't they do it themselves?

    Edit: Six months ago many websites on their server got hacked, so they suspected it was because of socket functions and disabled them altogether. They say they can enable them for specific users who need them for programming, and that is by email request.

    Read the article

  • T-Mobile releases an App to unlock mobile devices

    - by Gopinath
    T-Mobile is in no mood to stop innovating and outsmarting its rival wireless network providers in the USA. It's been the talk of the wireless community and rightly deserves the attention for its push to make wireless providers more consumer friendly in the USA. Just a couple of days after the US Government passed a law that made unlocking smartphones legal in the USA, T-Mobile released an Android app that makes unlocking a smartphone as easy as a few taps. The app, aptly named Device Unlock, is available in the Android Play Store and at the moment can unlock only Samsung Galaxy Avant smartphones. The app lets you either temporarily unlock your smartphone for 30 days (very helpful for those who travel and want to use their phone with other carriers) or send a request to T-Mobile to permanently unlock the device forever. When a user tries to unlock the device, the app verifies the user's account to make sure that it complies with T-Mobile rules. If the rule check passes, the app automatically unlocks the phone. The process is very simple, and T-Mobile users are going to love it; users of other carriers would envy having such a simple process to unlock smartphones. Though this app is available only in the Android Play Store and works only with the Samsung Galaxy Avant, it looks like T-Mobile is testing this feature on a small set of users first to learn from it and improve the unlocking process. Hope to see this app able to unlock all T-Mobile devices soon.

    Read the article

  • nginx symlinks permission denied / 403 Forbidden on Mac OSX

    - by Levi Roberts
    So I have an nginx server running on Mac OS X and I am trying to create a symlink in my nginx www directory to a folder somewhere else. In the browser I get the wonderful 403 Forbidden error. I have also tried chmod'ing my life away for the past few hours. There doesn't seem to be anything on the stack about it. One thing that concerns me is that I am not sure symlinks are directly supported by nginx on Mac. Trying to use the disable_symlinks directive results in:

        nginx: [emerg] unknown directive "disable_symlinks" in /usr/local/etc/nginx/nginx.conf:44

    Some info about my setup:

        $ nginx -v
        nginx version: nginx/1.4.2

    To create the symlink I do the following:

        cd /Users/levi/www
        ln -s "/Users/levi/Desktop/.../client" "/Users/levi/www/client"

    The error in the log:

        [error] 11864#0: *7 open() "/Users/levi/www/client" failed (13: Permission denied), client: 127.0.0.1, server: _, request: "GET /client HTTP/1.1", host: "localhost"

    Any help is much appreciated. Let me know if there's any more information I can give you.
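
    A quick check worth trying (my suggestion; the worker user name is an assumption, as nginx typically runs workers as "nobody" unless the user directive says otherwise): the worker needs execute (traverse) permission on every directory along the symlink's target path, not just read permission on the linked file.

        # Impersonate the worker user and try to list the target's parent;
        # a failure here reproduces the 403 outside the browser.
        sudo -u nobody ls /Users/levi/Desktop

        # Home and Desktop directories are 700 by default on OS X; granting
        # traverse rights on each component is one way through:
        chmod o+x /Users/levi /Users/levi/Desktop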

    Read the article

  • SQL SERVER – DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    - by Pinal Dave
    This is the third part of the series on Incremental Statistics. Here is the index of the complete series:

    What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    In the earlier two parts we saw what incremental statistics are, along with a simple example. In this blog post we will discuss a DMV query that lists all the statistics enabled for incremental updates.

        SELECT  OBJECT_NAME(sys.stats.OBJECT_ID) AS TableName,
                sys.columns.name AS ColumnName,
                sys.stats.name AS StatisticsName
        FROM    sys.stats
        INNER JOIN sys.stats_columns
                ON sys.stats.OBJECT_ID = sys.stats_columns.OBJECT_ID
                AND sys.stats.stats_id = sys.stats_columns.stats_id
        INNER JOIN sys.columns
                ON sys.stats.OBJECT_ID = sys.columns.OBJECT_ID
                AND sys.stats_columns.column_id = sys.columns.column_id
        WHERE   sys.stats.is_incremental = 1

    If you run the above script against the example from part 1 and part 2, it will list all the statistics in your database which are enabled for incremental update. The script is very simple and effective. If you have a further improved script, I request you to post it in the comment section and I will post it on the blog with due credit.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
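
    For readers who want something for the DMV to find, here is a minimal companion snippet (table and column names are hypothetical) that creates a statistic with the flag the query filters on:

        -- Requires SQL Server 2014+ and a table partitioned on OrderDate;
        -- INCREMENTAL = ON is only valid on partitioned tables.
        CREATE STATISTICS StatOrderDate
        ON dbo.Orders (OrderDate)
        WITH INCREMENTAL = ON;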

    Read the article

  • Exchange 2010 OWA with Client Certificates

    - by Christian
    I have enabled client certificate authentication for Exchange 2010 through IIS7, and users are prompted to choose their user certificate when they log in, but they are all then presented with the following error message:

        Request Url: https://<domain_name>:443/owa/
        User host address: <server_ip_address>
        OWA version: 14.1.355.2

        Exception
        Exception type: System.NullReferenceException
        Exception message: Object reference not set to an instance of an object.

        Call stack
        Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.GetUserIdentities(OwaContext owaContext, OwaIdentity& logonIdentity, OwaIdentity& mailboxIdentity, Boolean& isExplicitLogon, Boolean& isAlternateMailbox, ExchangePrincipal& logonExchangePrincipal)
        Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.InternalDispatchRequest(OwaContext owaContext)
        Microsoft.Exchange.Clients.Owa.Core.RequestDispatcher.DispatchRequest(OwaContext owaContext)
        Microsoft.Exchange.Clients.Owa.Core.OwaRequestEventInspector.OnPostAuthorizeRequest(Object sender, EventArgs e)
        System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    The method I followed to enable certificate authentication was from this post: http://www.miru.ch/2011/04/how-to-enable-certificate-based-authentication-on-exchange-2010/

    Any ideas? Google isn't being very helpful.

    Read the article

  • Does MS Forefront TMG cache authentication?

    - by SnOrfus
    I'm testing a client machine that makes requests to a BizTalk server using a Forefront machine as a web proxy. In my first test I put an invalid name/password into the receive port and received the correct error message (407). Then I set the correct name/password and everything worked correctly. From there, I kept the correct information in the receive port but put an invalid name/password into the send adapter, yet the process completed successfully (it should have failed with 407). I've ensured that both the receive and send ports are not bypassing the proxy for local addresses. So the only explanation that seems to make sense is that TMG is caching the authentication request coming from the machine I'm working on. Is this thinking correct, and if so, does anyone know how to disable it in TMG?

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm delegating all session handling - and whether or not a user desires to log out - to my config.inc file. As I was writing my Auth class, I started wondering whether or not my Auth class should be taking care of most of the logic in my config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in my config.inc (also, a large chunk of this code is based on a reply I found on SO, except I can't find the source ._.):

        ini_set('session.name', 'SID');

        # session management
        session_set_cookie_params(24*60*60); // set SID cookie lifetime
        session_start();

        if (isset($_SESSION['LOGOUT'])) {
            session_destroy();                       // destroy session data
            $_SESSION = array();                     // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            #header('Location: '.DOCROOT);
        } elseif (isset($_SESSION['SID_AUTH'])) {    // verify user has authenticated
            if (!isset($_SESSION['SID_CREATED'])) {
                $_SESSION['SID_CREATED'] = time();
            } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
                // session started more than 6 hours ago
                session_regenerate_id();             // reset SID value
                $_SESSION['SID_CREATED'] = time();   // update creation time
            }

            if (isset($_SESSION['SID_MODIFIED']) && (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
                // last request was more than 12 hours ago
                session_destroy();                       // destroy session data
                $_SESSION = array();                     // destroy session data sanity check
                setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            }

            $_SESSION['SID_MODIFIED'] = time();      // update last activity time stamp
        }
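
    One way to answer the "prettier OO" question is to push the whole lifecycle into a small class so config.inc only wires things up. A sketch (class and method names are illustrative, not from the question):

        <?php
        // Illustrative sketch: the same rules as above, behind one entry point.
        class SessionGuard
        {
            const REGEN_AFTER = 21600;  // regenerate the SID every 6 hours
            const IDLE_LIMIT  = 43200;  // kill sessions idle for 12 hours

            public static function boot()
            {
                ini_set('session.name', 'SID');
                session_set_cookie_params(24*60*60);
                session_start();

                if (isset($_SESSION['LOGOUT'])) {
                    self::destroy();
                } elseif (isset($_SESSION['SID_AUTH'])) {
                    self::enforceLifetimes();
                }
            }

            private static function enforceLifetimes()
            {
                if (!isset($_SESSION['SID_CREATED'])) {
                    $_SESSION['SID_CREATED'] = time();
                } elseif (time() - $_SESSION['SID_CREATED'] > self::REGEN_AFTER) {
                    session_regenerate_id();
                    $_SESSION['SID_CREATED'] = time();
                }

                if (isset($_SESSION['SID_MODIFIED'])
                        && time() - $_SESSION['SID_MODIFIED'] > self::IDLE_LIMIT) {
                    self::destroy();
                    return;
                }

                $_SESSION['SID_MODIFIED'] = time();
            }

            private static function destroy()
            {
                session_destroy();
                $_SESSION = array();
                setcookie('SID', '', time() - 24*60*60);
            }
        }

        // config.inc then shrinks to:
        SessionGuard::boot();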

    Read the article

  • "The Operation Failed." accepting METHOD: PUBLISH iCalendar files in .pst account

    - by Jamie Kitson
    If I create a new Mail Profile using the Internet E-mail wizard, i.e., creating a new local .pst account, and then try to add a .ics iCalendar file with a METHOD of PUBLISH to the calendar of that account, I get the error "The Operation Failed." If I change any of the above it works OK, e.g., if I use an Exchange account or METHOD:REQUEST in the iCalendar file. I'm using Outlook 2010 on Windows 7, but I think the user who originally reported this was using Outlook 2007. Does anyone have any idea why this might be?

    Thanks,
    Jamie Kitson
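
    For anyone trying to reproduce this, a minimal file of the failing shape (the event content is invented for illustration; per the report, only METHOD:PUBLISH combined with a .pst profile should be significant):

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//Example//Repro//EN
        METHOD:PUBLISH
        BEGIN:VEVENT
        UID:repro-1@example.com
        DTSTAMP:20120101T120000Z
        DTSTART:20120102T090000Z
        DTEND:20120102T100000Z
        SUMMARY:Repro event
        END:VEVENT
        END:VCALENDAR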

    Read the article

  • Google Analytics www 301 causing issues with In-Page Analytics

    - by conrad10781
    The closest question I could find to my problem is this one. The similarity is: I have a profile in Google Analytics (GA) that has been collecting data for a year. The domain setting in GA is "http://example.com". The site, however, will redirect any non-www request to www.example.com, via a typical .htaccess refinement. We do this to keep the traffic on the load balancers. I don't know the method the original user had in place, but we're doing a 301 on any non-www request to the www equivalent. I believe this has to be fairly standard.

    Where I differ from that question is in the error message I receive when trying to load In-Page Analytics. I'm instead receiving:

        Error: The Website in your settings (http://example.com), redirects into a different domain. (http://www.example.com). In-Page Analytics currently works on only one domain. Note that www.example.com and example.com are NOT considered to be on the same domain. Also, make sure you're not redirecting from http:// to https:// or vice versa.

    I understand what's being explained; it just seems as though this can't be the end-all. I tried updating the Analytics settings, which from day one have been set to "One domain with multiple subdomains", but I don't see any options to change the URL (which is currently set to http://example.com and not http://www.example.com). I'd prefer not to have to change the URL if that was at all possible, but I can't seem to find any documentation or anything that provides any possible solutions.

    Read the article
