Search Results

Search found 26165 results on 1047 pages for 'iis7 failed request'.


  • Is it actually possible to programmatically manage the state of an FTP server in IIS7?

    - by nbolton
    I'm able to manage FTP sites via the IIS manager; however, all attempts so far to manage the state of FTP sites using other means have failed, including:
      - using the IIS7 API (Microsoft.Web.Administration)
      - using WMI (with IIS6 compatibility enabled)
      - using the AppCmd tool in System32\inetsrv
    Related questions: Why am I unable to get Site.State for an FTP site when using Microsoft.Web.Administration? Are there any workarounds I haven't tried? My objective is to manage (start/stop/query the state of) the FTP sites with C# code (as you can see from the three attempted workarounds above). When querying the FTP server state using WMI, it returns code 4, which means "Stopped", even though the site is definitely shown as running in IIS manager. AppCmd is useless, as it returns "Unknown" for FTP sites:
      c:\Windows\System32\inetsrv>appcmd list site
      SITE "Default Web Site" (id:1,bindings:http/*:80:,state:Stopped)
      SITE "Default FTP Site" (id:2,bindings:ftp/*:21:,state:Unknown)
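
    A minimal sketch of the Microsoft.Web.Administration attempt described above, assuming the site name "Default FTP Site" (a placeholder); on IIS7 without the newer FTP service this query can throw or report an unexpected state, which matches the behaviour the poster sees.

      // Hypothetical sketch; add a reference to Microsoft.Web.Administration.dll
      // from %windir%\System32\inetsrv.
      using System;
      using Microsoft.Web.Administration;

      class FtpSiteStateDemo
      {
          static void Main()
          {
              using (var manager = new ServerManager())
              {
                  Site site = manager.Sites["Default FTP Site"]; // placeholder name
                  try
                  {
                      Console.WriteLine("State: {0}", site.State);
                      if (site.State == ObjectState.Stopped)
                      {
                          site.Start(); // may fail for FTP sites, as the question reports
                      }
                  }
                  catch (Exception ex)
                  {
                      Console.WriteLine("Could not read or change the site state: " + ex.Message);
                  }
              }
          }
      }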

    Read the article

  • Asp.Net application/web service not working on Vista/IIS7: access right problem?

    - by Achim
    I have a .NET 3.5-based web service running at http://localhost/serivce.svc/. Then I have an ASP.NET application running at http://localhost/myApp. In Application_Load my app reads some XML configuration from the web service. That works fine on my machine, but on Vista with IIS7 the request to the web service fails. The web service can be accessed via the browser without any problem. I configured the app pool of my app to run as admin. I added the admin to the IIS_IUSRS group, but it still cannot access the web service. impersonate=true/false seems not to make a difference.
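
    A small diagnostic sketch (my own illustration, with the URL from the question as a placeholder): fetching the XML from Application_Start with explicit credentials can help show whether the failure is tied to the app-pool identity rather than to the service itself.

      // Hypothetical diagnostic; URL and credential values are placeholders.
      using System;
      using System.IO;
      using System.Net;

      public static class ServiceProbe
      {
          public static string FetchConfigXml()
          {
              var request = (HttpWebRequest)WebRequest.Create("http://localhost/serivce.svc/");

              // First try the identity the worker process runs as...
              request.Credentials = CredentialCache.DefaultCredentials;
              // ...or test with an account known to have access:
              // request.Credentials = new NetworkCredential("user", "password", "DOMAIN");

              using (var response = (HttpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  return reader.ReadToEnd();
              }
          }
      }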

    Read the article

  • Can I get information about the IIS7 virtual directory from Application_Start?

    - by Keith
    I have 3 IIS7 virtual directories which point to the same physical directory. Each one has a unique host header bound to it and each one runs in its own app pool. Ultimately, 3 instances of the same ASP.NET application. In the Application_Start event handler of global.asax I would like to identify which instance of the application is running (to conditionally execute some code). Since the Request object is not available, I cannot interrogate the current URL, so I would like to interrogate the binding information of the current virtual directory instead. Since the host header binding is unique for each site, it would allow me to identify which application instance is starting up. Does anyone know how to do this or have a better suggestion?
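
    One possible sketch (an assumption, not the only approach): from Application_Start you can read the current site name via HostingEnvironment and then look up that site's bindings with Microsoft.Web.Administration, provided the app-pool identity is allowed to read the IIS configuration.

      using System.Web.Hosting;
      using Microsoft.Web.Administration; // reference from %windir%\System32\inetsrv

      public static class InstanceIdentifier
      {
          // Call from Application_Start in global.asax.
          public static string GetFirstHostHeader()
          {
              string siteName = HostingEnvironment.SiteName;

              using (var manager = new ServerManager())
              {
                  Site site = manager.Sites[siteName];
                  foreach (Binding binding in site.Bindings)
                  {
                      if (!string.IsNullOrEmpty(binding.Host))
                      {
                          return binding.Host; // e.g. "app1.example.com" (placeholder)
                      }
                  }
              }
              return siteName; // fall back to the site name itself
          }
      }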

    Read the article

  • MVC Entity Framework: Cannot open user default database. Login failed.

    - by Michael
    This type of stuff drives me nuts. I'm having trouble finding the exact issue that I'm having; maybe I just don't know the terminology. Anyway, I had a working website using MVC and Entity Framework, but then I coded an error in a partial view page (ascx). Then all of a sudden I started to get this message: Cannot open user default database. Login failed. Login failed for user 'NT AUTHORITY\SYSTEM'. I've found plenty of suggestions about opening SQL Server Management Studio, double-clicking Security, double-clicking Logins, double-clicking NT AUTHORITY\SYSTEM and then double-clicking User Mapping. In this view I'm supposed to check the box for my database so that this user is mapped to this login. However, since I created my database in Visual Studio 2008 as part of my solution, it doesn't show up to allow me to click on it. So what do I do now? What drives me nuts is that everything was working fine. I was using my computer name to access the website and everything was working fine until the coding error. I've fixed the error but I'm still getting the message. I should also mention that this error started yesterday too, around the same time, but later cleared itself up. If I use localhost to access the site, it works just fine. IIS7 configuration for my website: Application Pool = DefaultAppPool, Physical Path Credentials Logon = ClearText. Within connection strings I do see the one for my database in this solution. Entry Type is local: metadata=res://*/Models.DataModel.csdl|res://*/Models.DataModel.ssdl|res://*/Models.DataModel.msl; provider=System.Data.SqlClient; provider connection string="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\FFBall.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True" And somewhere I remember changing the identity from Network Service to LocalSystem, because when I first started I was getting this same message, but I changed this value and it started working. I saw that suggested somewhere too but I do not recall where. Wait, I remember now: I believe in IIS7, under Application Pools, the DefaultAppPool identity is set to LocalSystem. Additional things I've tried: restarting the computer, recycling the application pool. Antivirus isn't running. Any help would be appreciated. Thank you in advance.
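
    As a sketch only (the App_Data path below is a placeholder), opening the same SqlClient connection string from a console app can help separate the two possibilities: if it opens under your own account but fails under the app pool, the problem is the login mapping for the worker-process identity rather than the database file itself.

      // Hypothetical console diagnostic.
      using System;
      using System.Data.SqlClient;

      class ConnectionProbe
      {
          static void Main()
          {
              // |DataDirectory| only resolves automatically inside ASP.NET,
              // so point it at the site's App_Data folder explicitly here.
              AppDomain.CurrentDomain.SetData("DataDirectory", @"C:\path\to\site\App_Data");

              const string connectionString =
                  @"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\FFBall.mdf;" +
                  "Integrated Security=True;User Instance=True;MultipleActiveResultSets=True";

              using (var connection = new SqlConnection(connectionString))
              {
                  // Throws "Cannot open user default database" when the login mapping is wrong.
                  connection.Open();
                  Console.WriteLine("Opened database as " + Environment.UserName);
              }
          }
      }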

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario was possible: Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401. Nginx then takes the request and routes it through an authorization application which determines, based upon identity and the HTTP verb (put, delete, get, etc.) and URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.) Finally, Nginx routes the request into the actual application code where the request is inspected and the requested operations are executed according to the URL in question and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative that I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?

    Read the article

  • Change the Default Application Pool in IIS7 using .net?

    - by EdenMachine
    I'm using the following function to create an IIS7 Application and/or Virtual Directory. How would I also set the Application to use a different Application Pool?

      Private Sub CreateVirtualDir(ByVal WebSite As String, ByVal AppName As String, ByVal Path As String, Optional ByVal IsApplication As Boolean = True, Optional ByVal RunScripts As Boolean = True, Optional ByVal IsWrite As Boolean = True)
          Dim IISSchema As New System.DirectoryServices.DirectoryEntry("IIS://" & WebSite & "/Schema/AppIsolated")
          Dim CanCreate As Boolean = Not IISSchema.Properties("Syntax").Value.ToString.ToUpper() = "BOOLEAN"
          IISSchema.Dispose()

          If CanCreate Then
              Dim PathCreated As Boolean
              Try
                  Dim IISAdmin As New System.DirectoryServices.DirectoryEntry("IIS://" & WebSite & "/W3SVC/1/Root")

                  'make sure folder exists
                  If Not System.IO.Directory.Exists(Path) Then
                      System.IO.Directory.CreateDirectory(Path)
                      PathCreated = True
                  End If

                  'If the virtual directory already exists then delete it
                  For Each VD As System.DirectoryServices.DirectoryEntry In IISAdmin.Children
                      If VD.Name = AppName Then
                          IISAdmin.Invoke("Delete", New String() {VD.SchemaClassName, AppName})
                          IISAdmin.CommitChanges()
                          Exit For
                      End If
                  Next VD

                  'Create and setup new virtual directory
                  Dim VDir As System.DirectoryServices.DirectoryEntry = IISAdmin.Children.Add(AppName, "IIsWebVirtualDir")
                  VDir.Properties("Path").Item(0) = Path
                  If IsApplication Then
                      VDir.Properties("AppFriendlyName").Item(0) = AppName
                  End If
                  VDir.Properties("EnableDirBrowsing").Item(0) = False
                  VDir.Properties("AccessRead").Item(0) = True
                  VDir.Properties("AccessExecute").Item(0) = False
                  VDir.Properties("AccessWrite").Item(0) = IsWrite
                  VDir.Properties("AccessScript").Item(0) = RunScripts
                  VDir.Properties("AuthNTLM").Item(0) = True
                  VDir.Properties("EnableDefaultDoc").Item(0) = True
                  VDir.Properties("DefaultDoc").Item(0) = "default.htm,default.aspx,default.asp"
                  VDir.Properties("AspEnableParentPaths").Item(0) = True
                  'VDir.Properties("AppCreate").Item(0) = False
                  VDir.CommitChanges()

                  'the following are acceptable params
                  'INPROC = 0
                  'OUTPROC = 1
                  'POOLED = 2
                  If IsApplication Then
                      VDir.Invoke("AppCreate", 1)
                  Else
                      VDir.Invoke("AppCreate", False)
                  End If
              Catch Ex As Exception
                  If PathCreated Then
                      System.IO.Directory.Delete(Path)
                  End If
                  'MsgBox(Ex.Message)
              End Try
          End If
      End Sub
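
    One possible answer, offered as a sketch rather than a drop-in fix: with the ADSI code above, one common route is to set the virtual directory's AppPoolId property, but on IIS7 the Microsoft.Web.Administration API makes the assignment more direct. Site, path and pool names below are placeholders.

      // Hypothetical C# sketch using Microsoft.Web.Administration.
      using Microsoft.Web.Administration;

      class AppPoolAssignment
      {
          static void Main()
          {
              using (var manager = new ServerManager())
              {
                  Site site = manager.Sites["Default Web Site"];

                  // Find (or create) the application for the virtual path.
                  Application app = site.Applications["/MyApp"];
                  if (app == null)
                  {
                      app = site.Applications.Add("/MyApp", @"C:\inetpub\wwwroot\MyApp");
                  }

                  // Point it at a different application pool.
                  app.ApplicationPoolName = "MyOtherPool";
                  manager.CommitChanges();
              }
          }
      }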

    Read the article

  • Are the old httpHandlers and httpModules elements needed in IIS7?

    - by James Newton-King
    I'd like to clean up the web.config and remove unneeded XML. A default ASP.NET 3.5 web application has the following elements in the web.config:

      <httpHandlers>
        <remove verb="*" path="*.asmx"/>
        <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
      </httpHandlers>
      <httpModules>
        <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      </httpModules>

    When running under IIS7, which has modules and handlers being registered under the system.webServer element, is the configuration above still needed?

    Read the article

  • Post request with body_stream and parameters

    - by Damien MATHIEU
    Hello, I'm building some kind of proxy. When I call some URL in a Rack application, I forward that request to another URL. The request I forward is a POST with a file and some parameters. I want to add more parameters, but the file can be quite big, so I send it with Net::HTTP#body_stream instead of Net::HTTP#body. I get my request as a Rack::Request object and I create my Net::HTTP object with that.

      req = Net::HTTP::Post.new(request.path_info)
      req.body_stream = request.body
      req.content_type = request.content_type
      req.content_length = request.content_length
      http = Net::HTTP.new(@host, @port)
      res = http.request(req)

    I've tried several ways to add the proxy's parameters, but it seems nothing in Net::HTTP allows adding parameters to a body_stream request, only to a body one. Is there a simpler way to proxy a Rack request like that? Or a clean way to add my parameters to my request?

    Read the article

  • Create a new app pool and assign it to a site subfolder on a remote host, using C# and IIS7

    - by Soeren
    I have a web site running on IIS7 on a remote server. I would like to do the following:
      1. Create a new subfolder under the root virtual directory.
      2. Create a new app pool.
      3. Add this new app pool to the new subfolder.
    Normally, I would do this manually in IIS by first creating the app pool, and then right-clicking the subfolder and choosing "add application", but I need to do this programmatically in C#. I've managed to make points 1 and 2 work, but I can't find a way to add the application to the subfolder. This is the code I have used so far for 1 and 2:

      ServerManager mgr = new ServerManager();
      ApplicationPool myAppPool = mgr.ApplicationPools.Add("MyAppPool");
      myAppPool.AutoStart = true;
      myAppPool.Cpu.Action = ProcessorAction.KillW3wp;
      myAppPool.ManagedPipelineMode = ManagedPipelineMode.Integrated;
      myAppPool.ManagedRuntimeVersion = "V4.0";
      myAppPool.ProcessModel.IdentityType = ProcessModelIdentityType.NetworkService;
      mgr.CommitChanges();

      if (!Directory.Exists(@"D:\webroot\TestSite\NytSite"))
      {
          Directory.CreateDirectory(@"D:\webroot\TestSite\NytSite");
      }

    So, I need to add "MyAppPool" to the "NytSite" folder... Is this even the correct way to do this? Any experiences out there? Thnx
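
    A possible continuation for step 3, assuming the parent site is called "TestSite" (the site name is a placeholder; the folder path comes from the question): add an application for the new subfolder and point it at the new pool.

      using Microsoft.Web.Administration;

      class AddApplicationSketch
      {
          static void Main()
          {
              using (var mgr = new ServerManager())
              {
                  Site site = mgr.Sites["TestSite"]; // placeholder site name

                  // Turn the new subfolder into an application running in MyAppPool.
                  Application app = site.Applications.Add("/NytSite", @"D:\webroot\TestSite\NytSite");
                  app.ApplicationPoolName = "MyAppPool";

                  mgr.CommitChanges();
              }
          }
      }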

    Read the article

  • File upload fails when user is authenticated. Using IIS7 Integrated mode.

    - by Nikkelmann
    These are the user identities my website tells me that it uses: Logged on: NT AUTHORITY\NETWORK SERVICE (cannot write any files at all) and Not logged on: WSW32\IUSR_77 (can write files to any folder). I have an ASP.NET 4.0 website on a shared hosting IIS7 web server running in Integrated mode with 32-bit application support enabled and MSSQL 2008. Using Classic mode is not an option since I need to secure some static files and I use Routing. In my web.config file I have set the following: <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> My hosting company says that impersonation is enabled by default at machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html Citing this part: Different Windows identity in Forms authentication: When Forms authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways: * ServerVariables["LOGON_USER"] is filled. * Request.LogonUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account. This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user -- the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation no longer can access the anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode. How do I set up my website so that it can use the proper identity with the proper permissions? I've looked high and low for any answers regarding this specific problem, but found nil so far... I hope you can help!
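
    As a purely diagnostic sketch (my own, not from the question): dumping the process identity, Request.LogonUserIdentity and User.Identity side by side usually shows which account IIS will actually use when your upload code writes the file.

      // Hypothetical ASP.NET handler for Integrated mode.
      using System.Security.Principal;
      using System.Web;

      public class IdentityDump : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              context.Response.ContentType = "text/plain";
              context.Response.Write("Process identity:  " + WindowsIdentity.GetCurrent().Name + "\n");
              context.Response.Write("LogonUserIdentity: " + context.Request.LogonUserIdentity.Name + "\n");
              context.Response.Write("User.Identity:     " +
                  (context.User != null ? context.User.Identity.Name : "(anonymous)") + "\n");
          }

          public bool IsReusable { get { return false; } }
      }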

    Read the article

  • URL Parts available to URL Rewrite Rules

    - by OWScott
    URL Rewrite is a powerful URL rewriting tool available for IIS7 and newer. Your rewriting options are almost unlimited, giving you the ability to optimize URLs for search engine optimization (SEO), support multiple domain names on a single site, hide complex paths and much more. URL Rewrite allows you to use any server variable as a condition, and with URL Rewrite 2.0 you can also update them on the fly. To see all variables available to your site, see this post. An understanding of the parts of a complete URL is essential to working with URL Rewrite, so I'll include the basics here. Ruslan Yakushev's configuration reference was my authoritative source for this. Take this URL for example: The URL is http://www.bing.com/search?q=IIS+url+rewrite The parts of the URL are: http(s)://<host>:<port>/<path>?<querystring>
      Part                               | Example                   | Server Variable
      http(s)                            | http                      | SERVER_PORT_SECURE or HTTPS = on/off
      <host>                             | www.bing.com              | HTTP_HOST
      <port>                             | Default is 80             | SERVER_PORT
      <path>                             | search                    | The rule pattern in URL Rewrite
      <path>                             | /search                   | PATH_INFO
      <querystring>                      | q=IIS+url+rewrite         | QUERY_STRING
      entire URL path with querystring   | /search?q=IIS+url+rewrite | REQUEST_URI
    It's important to note that /, : and ? aren't included in some of the server variables. Understanding which slashes are included is important to creating successful rules.
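
    For context, a small sketch of my own (not from the article) showing how the same server variables can be inspected from ASP.NET code, which is handy for checking what a rewrite condition will actually see.

      // Hypothetical ASP.NET handler that echoes the server variables named above.
      using System.Web;

      public class UrlPartsDump : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              var request = context.Request;
              context.Response.ContentType = "text/plain";
              context.Response.Write("HTTPS:        " + request.ServerVariables["HTTPS"] + "\n");
              context.Response.Write("HTTP_HOST:    " + request.ServerVariables["HTTP_HOST"] + "\n");
              context.Response.Write("SERVER_PORT:  " + request.ServerVariables["SERVER_PORT"] + "\n");
              context.Response.Write("PATH_INFO:    " + request.ServerVariables["PATH_INFO"] + "\n");
              context.Response.Write("QUERY_STRING: " + request.ServerVariables["QUERY_STRING"] + "\n");
              // Roughly what REQUEST_URI contains (path plus query string):
              context.Response.Write("Raw URL:      " + request.RawUrl + "\n");
          }

          public bool IsReusable { get { return true; } }
      }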

    Read the article

  • Creating an HttpHandler to handle request of your own extension

    - by Jalpesh P. Vadgama
    I have already posted about HTTP handlers in detail some time ago here. Now let's create an HTTP handler which will handle my custom extension. For that we need to create an HTTP handler class which implements IHttpHandler. As we are implementing IHttpHandler, we need to implement one method called ProcessRequest and an IsReusable property. The ProcessRequest function will handle all the requests for my custom extension. So here is the code for my HTTP handler class.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.UI;

      namespace Experiement
      {
          public class MyExtensionHandler : IHttpHandler
          {
              public MyExtensionHandler()
              {
                  //Implement initialization here
              }

              bool IHttpHandler.IsReusable
              {
                  get { return true; }
              }

              void IHttpHandler.ProcessRequest(HttpContext context)
              {
                  string excuttablepath = context.Request.AppRelativeCurrentExecutionFilePath;
                  if (excuttablepath.Contains("HelloWorld.dotnetjalps"))
                  {
                      Page page = new HelloWorld();
                      page.AppRelativeVirtualPath = context.Request.AppRelativeCurrentExecutionFilePath;
                      page.ProcessRequest(context);
                  }
              }
          }
      }

    Here in the above code you can see that in the ProcessRequest function I am getting the current executable path and then processing that page. Now let's create a page with the extension .dotnetjalps and then we will process this page with the HTTP handler created above. Now let's write something in the Page_Load event like the following.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      namespace Experiement
      {
          public partial class HelloWorld : System.Web.UI.Page
          {
              protected void Page_Load(object sender, EventArgs e)
              {
                  Response.Write("Hello World");
              }
          }
      }

    Now we have to tell our web server that we want to process requests with this .dotnetjalps extension through our custom HTTP handler. For that we need to add a tag in the httpHandlers section of web.config like the following.

      <configuration>
        <system.web>
          <compilation debug="true" targetFramework="4.0" />
          <httpHandlers>
            <add verb="*" path="*.dotnetjalps" type="Experiement.MyExtensionHandler,Experiement"/>
          </httpHandlers>
        </system.web>
      </configuration>

    That's it. Now run that page in the browser and it will execute. Isn't it cool.. Stay tuned for more.. Happy programming.. Technorati Tags: HttpHandler,ASP.NET,Extension

    Read the article

  • Access .accdb formats

    - by wisecarver
    Were you aware there are two versions of the Access .accdb file format? The 2007 "ACE" version and the 2010 "ACE" version use different drivers. If you've tried to use the Access database .accdb format on DiscountASP.NET servers and it failed, you must have been using a data file created with the 2010 version of Access. OK, you're shouting at me right now for even suggesting Access databases for shared hosting, right? I agree, SQL Server is the way to go and I personally help a lot of developers...(read more)
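
    As an illustration of where the driver comes into play (my own sketch, not from the post; the file path is a placeholder), the provider name in an OLE DB connection string is what ties an .accdb file to the installed ACE driver.

      // Hypothetical example.
      using System;
      using System.Data.OleDb;

      class AccdbProbe
      {
          static void Main()
          {
              // The ACE OLE DB provider must be installed on the server for this to open.
              const string connectionString =
                  @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\MyDatabase.accdb;";

              using (var connection = new OleDbConnection(connectionString))
              {
                  connection.Open();
                  Console.WriteLine("Opened " + connection.DataSource);
              }
          }
      }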

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self contained HTTP output. An ASP.NET Handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline writing out the content into the Response.OutputStream and seperately sending the HttpHeaders in the Response.Headers collection. The headers turned out to be the problem and specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: Basically the HTTP response from the backend app would return a full set of HTTP headers plus the content. The ASP.NET handler would read the headers one at a time and then dump them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object.using System.Web; namespace wwThreads { public class Handler : IHttpHandler { /* NOTE: * * Run as a web.config set handler (see entry below) * * Best way is to look at the HTTP Headers in Fiddler * or Chrome/FireBug/IE tools and look for the * WWHTREADSID cookie in the outgoing Response headers * ( If the cookie is not there you see the problem! ) */ public void ProcessRequest(HttpContext context) { HttpRequest request = context.Request; HttpResponse response = context.Response; // If ClearHeaders is used Set-Cookie header gets removed! // if commented header is sent... response.ClearHeaders(); response.ClearContent(); // Demonstrate that other headers make it response.AppendHeader("RequestId", "asdasdasd"); // This cookie gets removed when ClearHeaders above is called // When ClearHEaders is omitted above the cookie renders response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); // *** This always works, even when explicit // Set-Cookie above fails and ClearHeaders is called //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); response.Write(@"Output was created.<hr/> Check output with Fiddler or HTTP Proxy to see whether cookie was sent."); } public bool IsReusable { get { return false; } } } } In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with: <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" /> Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie. I found that the Set-Cookie headers were not making it into the Response headers output. 
Here's the Chrome Http Inspector trace: Notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders() command, the cookie header shows up just fine: As you might expect it took a while to track this down. At first I thought my backend was not sending the headers but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); which in my live code is more dynamic ( ie. AppendHeader(token[0],token[1[]) )as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes Cookie to get stripped Now, here is where it really gets bizarre: The problem occurs only if: Response.ClearHeaders() was called before headers are added It only occurs in Http Handlers declared in web.config Clearly this is an edge of an edge case but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above if you remove the call to ClearHeaders(), the cookie gets set!  Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page, or MVC controller it all works. Only in the HttpHandler configured through Web.config does it fail! Cue the Twilight Zone Music. Workarounds As is often the case the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the Cookie issue. Don't use AppendHeader for Cookies The easiest and obvious solution to this problem is simply not use Response.AppendHeader() to set Cookies. Duh! Under normal circumstances in application level code there's rarely a reason to write out a cookie like this:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); but rather create the cookie using the Response.Cookies collection:response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); Unfortunately, in my case where I dynamically read headers from the original output and then dynamically  write header key value pairs back  programmatically into the Response.Headers collection, I actually don't look at each header specifically so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values which can be notoriously brittle. Don't use Response.ClearHeaders() The real mystery in all this is why calling Response.ClearHeaders() prevents a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low level interface directly into IIS, and so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. 
In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety, but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear the headers out:

      private void RemoveHeaders(HttpResponse response)
      {
          List<string> headers = new List<string>();
          foreach (string header in response.Headers)
          {
              headers.Add(header);
          }
          foreach (string header in headers)
          {
              response.Headers.Remove(header);
          }
          response.Cookies.Clear();
      }

Now I can replace the call to Response.ClearHeaders() and I don't get the funky side-effects from Response.ClearHeaders(). Summary I realize this is a total edge case as this occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher level ASP.NET frameworks or even in ASHX handlers - only web.config defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem or even this post in a search because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7, I have already set up the following. But when I run Firefox it seems not to cache any of my images even with "remember history" set.

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <staticContent>
            <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
          </staticContent>
        </system.webServer>
      </configuration>

    However when I use Firebug it still points to Firefox not caching images and CSS:

      public,max-age=604800
      Content-Type text/css
      Content-Encoding gzip
      Last-Modified Mon, 27 Jun 2011 03:53:22 GMT
      Accept-Ranges bytes
      Etag "507968c27d34cc1:0"
      Vary Accept-Encoding
      Server Microsoft-IIS/7.5
      X-Powered-By ASP.NET
      Date Mon, 27 Jun 2011 13:06:41 GMT
      Content-Length 5067

      Request Headers (view source)
      Host www.xx.com
      User-Agent Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
      Accept text/css,*/*;q=0.1
      Accept-Language en-us,en;q=0.5
      Accept-Encoding gzip, deflate
      Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive 115
      Connection keep-alive
      Referer http://www.xx.com/
      Cookie __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation about how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over HTTP using the b2b/syncreceiver servlet. I'm attaching the demo to this blog, which includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test XML file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here. The demo works by sending the sample XML request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite. Configuring B2B No additional configuration is necessary in order to use the syncreceiver servlet. It is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner. Document Management The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath Identification Expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request. Trading Partner Profile The trading partner profiles are very simple too. For GlobalChips, a generic identifier is being used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also being used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document.
For the remote trading partner only, there needs to be a dummy channel which gets used in the outbound response agreement. The channel is not actually used. It is just a necessary place holder that needs to be there when creating the agreement. Trading Partner Agreement The agreements are equally simple. There is no validation and translation is not an option for a custom XML document type. For the InboundAgreement (request) the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That’s all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner dummy delivery channel is also added to the agreement. SOA Composite Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive. It is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set as a Synchronous Request Reply as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the properties tab. You’ll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set. Testing the Configuration To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property ‘from’ and give it the value ‘Acme’. This is how B2B will know where the message is coming from and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of ‘200 OK’. That’s all there is to it.
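
    To show what the Firefox Poster step looks like programmatically, here is a small sketch of my own (not part of the demo); the host, port, file path and header value mirror the post, but any HTTP client that can set a custom 'from' header would do.

      // Hypothetical test client for the b2b/syncreceiver servlet.
      using System;
      using System.Net.Http;
      using System.Text;

      class SyncReceiverTest
      {
          static void Main()
          {
              string xml = System.IO.File.ReadAllText(@"C:\temp\req.xml"); // placeholder path

              using (var client = new HttpClient())
              {
                  var request = new HttpRequestMessage(HttpMethod.Post,
                      "http://b2bhost:8001/b2b/syncreceiver"); // placeholder host
                  request.Headers.TryAddWithoutValidation("from", "Acme");
                  request.Content = new StringContent(xml, Encoding.UTF8, "text/xml");

                  HttpResponseMessage response = client.SendAsync(request).Result;
                  Console.WriteLine((int)response.StatusCode + " " + response.ReasonPhrase);
                  Console.WriteLine(response.Content.ReadAsStringAsync().Result);
              }
          }
      }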

    Read the article

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today, when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed that one of the projects had a replication action that had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check if there were others that had failed. Nothing fancy: just loop through all projects, look at each project's replication child, compare the values of the last_sync and last_try properties, and print the result if they're not equal. (There are probably more sensible ways of doing this, but at least it involves me getting the chance to put on my headphones and doing just a little bit of coding.)

      script
      // this script will locate failed project level replication
      // it will look at the sync times for 'last_sync' and 'last_try'
      // and compare these, if they deviate you should investigate.
      // NOTE! this code is offered 'as is'. Run at your own risk,
      // it will probably work as intended, but in no way can I
      // (or Oracle) be held responsible if your server starts behaving
      // like a three year old kid in a candy store.. (not that mine do,
      // they are very well behaved boys...)
      run('configuration');
      run('storage');
      printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
      run('cd /');
      run('shares');
      proj = list();
      printf("total projects: %d\n", proj.length + '\n');
      // just for project level replication
      for (i = 0; i < proj.length; i++) {
          run('select ' + proj[i]);
          run('replication');
          // get all replication actions
          preps = list();
          for (j = 0; j < preps.length; j++) {
              run('select ' + preps[j]);
              last_sync = get('last_sync');
              last_try = get('last_try');
              // printf("target %s\n", get('target')); // why the flip does this not get the proper name?
              if (!(last_sync.valueOf() === last_try.valueOf())) {
                  printf("sync has failed for %s %s\n", proj[i], get('target'));
              } else {
                  // printf("OK %s %s\n", proj[i], get('target'));
              }
              run('done'); // done with the replica action
          }
          run('done');
          run('done');
      }
      printf("finished\n");

    For more on how to run or test the script, please look at my previous post. Sample output:

      Host: elb1sn01, pool: exalogic
      total projects: 45
      sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
      finished

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details: The site is a WordPress blog. The server has plenty of RAM (2 GB) and free disk space. SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants. To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once....) The WordPress database process (mysqld) is not stressed at all by the scripts. I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • Summary of our Recent Pull Request Enhancements on CodePlex

    Over the past several weeks, we've been incrementally rolling out a bunch of enhancements around our pull request workflow for Git and Mercurial projects. Our goal is to make contributing to open source projects a simple and rewarding experience, and we'll continue to invest in this area. Here's a summary of the changes so far, in case you've missed them. As always, if you have any feedback, please let us know, whether on our ideas page or via Twitter. Support for branches You can now pick the source and destination branches for your pull request, whether you're sending one from your fork, or using it within a project to collaborate with your other trusted contributors. A redesigned creation experience Our old pull request creation form was rather lacking. It asked for a title and comment in a small modal dialog, but that was about it. We knew we could do better, so we rethought the experience. Now, when you create a pull request, you're taken to a new page that lets you select the source and destination, and gives you information on the diffs and commits that you're sending, so you can confirm that you're sending the right set of changes. Inline code snippets in discussion If users comment on code in your pull request, we now display a preview of the snippet of relevant code inline with their comment on the discussion. Subsequent replies on that line are combined in a single thread to preserve your context. No more clicking and hunting to find where the comments are. And you can add another inline comment right from the discussion area. Comment notifications You can now elect to receive an e-mail notification if a user comments on your pull request. If it's on a line of code, we'll display the relevant code snippet in the e-mail. Redesigned diff viewer Our old diff viewer hadn't been touched in a while, and was in need of an update. We started with a visual facelift to use standard red/green colors for additions/deletions and remove the noisy "dots" that represented spaces and that littered the diff viewer. Based on feedback that the viewable region for diffs was too small, especially for smaller screen resolutions, we revamped the way the viewport for the code is sized, and now expand it to fill the majority of the browser height when scrolling down. The set of improvements we implemented here also applies anywhere diffs are viewed, not just for pull requests.

    Read the article

  • xen 4.1 host periodically dropping network packets of domU

    - by Dyutiman Chakraborty
    I have xen 4.1 Host running on a ubuntu 12.04 LTS Server with ip 153.x.x.54. I have setup 2 VMs on it, namely, "dev.mydomain.com" and "web.mydomain.com" with ips 195.X.X.2 and 195.x.x.3 respectively. For network the VMs connect through xendbr0 (xen-bridge), and can accces the network properly. I can also login to the VMs with ssh with no issue. However when I ping any of the VMs, there is a high amount of periodic packet drop. If I the ping the xen host (dom0) there is no packet drop. Following is a output of "tcpdump | grep ICMP" on dOM0 while I was pinging one of the domU tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 05:19:55.682493 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 30, length 64 05:19:56.691144 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 31, length 64 05:19:57.698776 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 32, length 64 05:19:58.706784 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 33, length 64 05:19:59.714751 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 34, length 64 05:20:00.723144 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 35, length 64 05:20:01.730349 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 36, length 64 05:20:02.739017 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 37, length 64 05:20:03.746806 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 38, length 64 05:20:06.770326 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 41, length 64 05:20:07.778801 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 42, length 64 05:20:08.786481 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 43, length 64 05:20:09.794720 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 44, length 64 05:20:10.802395 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 45, length 64 05:20:11.810770 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 46, length 64 05:20:12.818511 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 47, length 64 05:20:13.826817 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 48, length 64 05:20:14.835125 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 49, length 64 05:20:15.842138 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 50, length 64 05:20:18.274072 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 1, length 64 05:20:19.282347 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 
2, length 64 05:20:20.290746 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 3, length 64 05:20:21.297910 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 4, length 64 05:20:22.305656 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 5, length 64 05:20:23.314369 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 6, length 64 05:20:24.322055 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 7, length 64 05:20:25.329782 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 8, length 64 05:20:26.338473 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 9, length 64 05:20:27.346411 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 10, length 64 05:20:28.354175 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 11, length 64 05:20:29.361640 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 12, length 64 05:20:30.370026 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 13, length 64 05:20:31.377696 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 14, length 64 05:20:32.386151 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 15, length 64 05:20:33.394118 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 16, length 64 05:20:34.402058 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 17, length 64 05:20:35.409002 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 18, length 64 05:20:36.417692 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 19, length 64 05:20:36.496916 IP6 fe80::3285:a9ff:feec:fc69 > ip6-allnodes: HBH ICMP6, multicast listener querymax resp delay: 1000 addr: ::, length 24 05:20:36.499112 IP6 fe80::21c:c0ff:fe6c:c091 > ff02::1:ff6c:c091: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff6c:c091, length 24 05:20:36.507041 IP6 fe80::227:eff:fe11:fa3f > ff02::1:ff00:2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:2, length 24 05:20:36.523919 IP6 fe80::21c:c0ff:fe77:6257 > ff02::1:ff77:6257: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff77:6257, length 24 05:20:36.544785 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff12:ea9a: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff12:ea9a, length 24 05:20:36.581740 IP6 fe80::5604:a6ff:fef1:6da7 > ff02::1:fff1:6da7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:fff1:6da7, length 24 05:20:36.600103 IP6 fe80::8a8:8aa0:5e18:917a > ff02::1:ff18:917a: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff18:917a, length 24 05:20:36.601989 IP6 fe80::227:eff:fe11:fa3e > ff02::1:ff11:fa3e: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff11:fa3e, length 24 05:20:36.611090 IP6 
fe80::dcad:56ff:fe57:3bbe > ff02::1:ff57:3bbe: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff57:3bbe, length 24 05:20:36.660521 IP6 fe80::54:ff:fe02:1d31 > ff02::1:ff00:6: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:6, length 24 05:20:36.698871 IP6 fe80::21e:8cff:feb4:9f89 > ff02::1:ffb4:9f89: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ffb4:9f89, length 24 05:20:36.776548 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff01:7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff01:7, length 24 05:20:36.781910 IP6 fe80::54:ff:fe8f:6dd > ff02::1:ff00:3: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:3, length 24 05:20:36.865475 IP6 fe80::21c:c0ff:fe4a:ae9f > ff02::1:ff4a:ae9f: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff4a:ae9f, length 24 05:20:36.908333 IP6 fe80::dcad:45ff:fe90:84db > ff02::1:ff90:84db: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff90:84db, length 24 05:20:36.919653 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff00:7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:7, length 24 05:20:36.924276 IP6 fe80::59a2:2a4a:2082:6dee > ff02::1:ff82:6dee: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff82:6dee, length 24 05:20:37.001905 IP6 fe80::54:ff:fe8f:6dd > ff02::1:ff8f:6dd: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff8f:6dd, length 24 05:20:37.042403 IP6 fe80::54:ff:fe95:54f2 > ff02::1:ff95:54f2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff95:54f2, length 24 05:20:37.090992 IP6 fe80::21c:c0ff:fe77:62ac > ff02::1:ff77:62ac: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff77:62ac, length 24 05:20:37.098118 IP6 fe80::d63d:7eff:fe01:b67f > ff02::1:ff01:b67f: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff01:b67f, length 24 05:20:37.118784 IP6 fe80::54:ff:fe12:ea9a > ff02::202: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::202, length 24 05:20:37.168548 IP6 fe80::54:ff:fe02:1d31 > ff02::1:ff02:1d31: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff02:1d31, length 24 05:20:41.743286 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 1, length 64 05:20:41.743542 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 1, length 64 05:20:42.743859 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 2, length 64 05:20:42.743952 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 2, length 64 05:20:43.745689 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 3, length 64 05:20:43.745777 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 3, length 64 05:20:44.746706 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 4, length 64 05:20:44.746796 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 4, length 64 05:20:45.747986 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 5, length 64 05:20:45.748082 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, 
seq 5, length 64 05:20:46.749834 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 6, length 64 05:20:46.749920 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 6, length 64 05:20:47.750838 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 7, length 64 05:20:47.751182 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 7, length 64 05:20:48.751909 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 8, length 64 05:20:48.751991 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 8, length 64 05:20:49.752542 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 9, length 64 05:20:49.752620 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 9, length 64 05:20:50.754246 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 10, length 64 05:20:51.753856 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 11, length 64 05:20:52.752868 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 12, length 64 05:20:53.754174 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 13, length 64 05:20:54.753972 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 14, length 64 05:20:55.753814 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 15, length 64 05:20:56.753391 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 16, length 64 05:20:57.753683 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 17, length 64 05:20:58.753487 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 18, length 64 05:20:59.754013 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 19, length 64 05:21:00.753169 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 20, length 64 05:21:01.753757 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 21, length 64 05:21:02.753307 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 22, length 64 05:21:03.753021 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 23, length 64 05:21:04.753628 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 24, length 64 ^C479 packets captured 718 packets received by filter 238 packets dropped by kernel 3 packets dropped by interface You see the ping request is not responed to initially, then for a moment it is replied back and then again no reply. I have tried everything (to the best of my knowledge) to fix this, but can't find any answer Any help will be greatly appreciated Thanks.

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    The term "client" used here does not refer to the client's browser, but to a client server.
    Before-cache workflow: 1. client makes an HTTP request --> 2. server processes it --> 3. server stores the parsed results into memcache for next use (cached indefinitely) --> 4. server returns the results to the client --> 5. client gets the result and stores it in the client's local memcache with a TTL.
    After-cache workflow: 1. another client makes an HTTP request --> 2. the result is found in memcache and returned to the client --> 3. client gets the result and stores it in the client's local memcache with a TTL. (TTL = time to live)
    It is possible for me to know when the data was updated and to expire the relevant memcache entries accordingly. However, the client-side cache TTL has pitfalls: any data update before the TTL expires is not picked up by the client memcache; conversely, where there is no update, the client memcache still expires after the TTL; and the first request (or concurrent requests) after the TTL expires gets throttled, because it has to repeat the "before cache" workflow. When a client needs several HTTP requests for a single web page, this can be very bad for performance. The ideal solution would be for the client to cache indefinitely until further notice. Here are three proposals for that "further notice".
    Proposal 1: Make use of HTTP headers (current implementation). 1. client sends an HTTP request with a last-modified-time header 2. server checks whether the last data-modified time equals the last cache time and, if so, returns status 304 3. client decides further processing based on the header. GOOD: saves some parsing for the client; less data transfer. BAD: firing an HTTP request is still slow; the server end still needs to process lots of requests.
    Proposal 2: Consistently issue an HTTP request to check the last-modified time of all data groups. 1. client fires an HTTP request 2. server returns the last-modified time for every data group 3. client compares its local last-cache time with the result 4. if a data group's last-cache time < the server's last-modified time, request that data group again (and only that one). GOOD: only fetches what is not up to date; fewer requests for the server. BAD: every web page requires an HTTP request.
    Proposal 3: Tell the client when new data is available (push). 1. the server end notices there is a change in a data group 2. it notifies the clients of the change 3. it helps the clients fetch the data again 4. the client's local memcache is reset after the data is parsed. GOOD: lets the cache act/behave like a true cache. BAD: encourages race conditions.
    My preference is proposal 3, and something like Gearman could be ideal: where there is a change, the Gearman server sends the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy.)
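    To make proposal 2 concrete, here is a minimal Java sketch of the client side. The GroupCache class, its method names, and the plain HashMap standing in for the client's local memcache are all made up for illustration: each data group is cached with no TTL, along with the last-modified value the server reported when it was fetched, and a single check per page refetches only the groups whose server-side last-modified time has moved on.

        import java.util.HashMap;
        import java.util.Map;

        // Minimal sketch of proposal 2: cache each data group indefinitely on the
        // client and refresh it only when the server reports a newer last-modified
        // time. A real deployment would store entries in the client's local memcache
        // rather than in this in-process map.
        public class GroupCache {

            private static class Entry {
                final Object data;
                final long serverLastModified; // last-modified value reported by the server at fetch time

                Entry(Object data, long serverLastModified) {
                    this.data = data;
                    this.serverLastModified = serverLastModified;
                }
            }

            private final Map<String, Entry> local = new HashMap<>();

            // serverTimes is the result of the single "check all data groups"
            // HTTP request made once per web page (step 2 of proposal 2).
            public void refreshStaleGroups(Map<String, Long> serverTimes) {
                for (Map.Entry<String, Long> reported : serverTimes.entrySet()) {
                    Entry cached = local.get(reported.getKey());
                    if (cached == null || cached.serverLastModified < reported.getValue()) {
                        // Only stale groups go through the full "before cache" workflow.
                        Object fresh = fetchFromServer(reported.getKey());
                        local.put(reported.getKey(), new Entry(fresh, reported.getValue()));
                    }
                }
            }

            public Object get(String group) {
                Entry e = local.get(group);
                return e == null ? null : e.data;
            }

            // Placeholder for the HTTP request to the origin server.
            private Object fetchFromServer(String group) {
                return "payload-for-" + group;
            }
        }

    The same comparison also serves proposal 3: instead of polling once per page, a push notification (for example a Gearman job) would call refreshStaleGroups for just the changed group, so the client cache behaves like a true cache until it is told otherwise.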

    Read the article

  • AS11 Oracle B2B Sync Support - Series 2

    - by sinkarbabu.kirubanithi
    In the earlier series, we discussed how to model "Sync Support" in Oracle B2B, but we did not discuss how the response can be consumed synchronously by the back-end application, i.e. the initiator of the sync request. In this sequel, we will see how to extend it to SOA composite applications to model the end-to-end use case, which lets the initiator of the sync request receive the response synchronously. Series 2 is a little lengthy by blog standards, so be prepared before you continue :). Let's start with a high-level scenario where one needs to initiate a synchronous request and get the response synchronously. There are various approaches available; we will look at one of the simplest here.
    Components involved: 1. Oracle B2B 2. Oracle JCA JMS Adapter 3. Oracle BPEL 4. All of the above, wrapped up in a single SOA composite application.
    1. Oracle B2B: We skip the "Sync Support" setup in B2B, as it was already covered in the earlier Series 1. Here we provide "Sync Support" samples that can be imported into B2B directly, so users can start testing within a few minutes.
    Initiator Sample: This requires two JMS queues to be created: one for B2B to receive the initial outbound sync request, and the other for B2B to deliver the incoming sync response to the back-end. Please enable the "Use JMS Id" option on both the internal listening and delivery channels. This lets the JCA JMS Adapter correlate the initial B2B request with its response, which in turn is returned as the synchronous response of BPEL. Internal Listening Channel Image: Internal Delivery Channel Image: To get going without much hassle, just create queues in WebLogic with the JNDI names shown in the two screenshots above. If you want to use different names, you may have to change the queue JNDI names in the sample after importing it into B2B. Here are the queue-related JNDI names used in the sample: 1. Internal Listening Channel Queue details, Name: JNDI Name: jms/b2b/syncreplyqueue 2. Internal Delivery Channel Queue details, Name: JNDI Name: jms/b2b/syncrequestqueue. Here is the Initiator Sample Acme.zip. Note: you may have to adjust the IP address of the GlobalChips endpoint in the Delivery Channel.
    Responder Sample: Contains the B2B metadata and the Callout. Just import the sample and place the callout binary under the "/tmp/callout" directory. If you choose a different location for the callout, you may have to change it in the B2B configuration after importing the sample. Here are the artifacts: 1. Callout Source SampleCallout.java 2. Callout Binary sample-callout.jar 3. Responder Sample GlobalChips.zip.
    Callout Details: The callout simply returns the static response XML that needs to be sent back as the response to the inbound sync request. For the sample we use a static response, but in production you may have to invoke a web service or something similar to obtain the response. IMPORTANT NOTE: For the Sync Support use case, the responder is not expected to deliver the inbound sync request to the back-end; delivering it and getting the response from the back-end are the Callout's job. This default behavior can be overridden by enabling the config property "b2b.SyncAppDelivery=true" in the B2B config mbean (b2b-config.xml). This makes B2B deliver the inbound sync request to the back-end queue, but the response sent to the remote caller still has to come from the Callout.
    2. Oracle JCA JMS Adapter: On the initiator side, we use the JCA JMS Request/Reply pattern to send the synchronous message to B2B and receive the response from it.
    3. Oracle BPEL: Exposes a WS-SOAP endpoint that takes the payload as input, passes it to B2B, and returns B2B's synchronous response as the SOAP response. To the outside world it looks like a synchronous web service endpoint, but under the covers it uses JMS to trigger B2B to send the request and receive the synchronous response.
    4. Composite application: All the components discussed above are wired into a SOA composite application that models the end-to-end synchronous use case. Here's the composite application sca_B2BSyncSample_rev1.0.jar; you can deploy it to your AS11 SOA installation to make use of it. For any editing, just import the project into JDeveloper under any SOA application. Here are the composite application screenshots: Composite Application: BPEL With JCA JMS Adapter (Request/Reply):
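    For readers who want to see what the initiator-side correlation looks like in plain JMS terms, here is a minimal javax.jms sketch. The JNDI names and the 30-second timeout are placeholders, not the ones from the sample; substitute the queues bound to your own internal listening and delivery channels. It assumes that, with "Use JMS Id" enabled, the reply carries the request's JMSMessageID as its JMSCorrelationID, which is what lets the request and response be matched up.

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class SyncRequestClient {

            public static String sendAndReceive(String payload) throws Exception {
                InitialContext ctx = new InitialContext();
                // Placeholder JNDI names -- use the queues bound to your own
                // internal listening (request) and delivery (reply) channels.
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/b2b/ConnectionFactory");
                Queue requestQueue = (Queue) ctx.lookup("jms/b2b/requestQueue");
                Queue replyQueue = (Queue) ctx.lookup("jms/b2b/replyQueue");

                Connection conn = cf.createConnection();
                try {
                    conn.start();
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

                    // Hand the outbound sync request to B2B.
                    TextMessage request = session.createTextMessage(payload);
                    session.createProducer(requestQueue).send(request);

                    // With "Use JMS Id" enabled, the reply's JMSCorrelationID is expected
                    // to match the JMSMessageID of the request we just sent.
                    String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
                    MessageConsumer consumer = session.createConsumer(replyQueue, selector);
                    Message reply = consumer.receive(30000); // wait up to 30 seconds

                    return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
                } finally {
                    conn.close();
                }
            }
        }

    This is essentially the interaction the JCA JMS adapter's Request/Reply pattern performs for the BPEL process before it hands the result back as the SOAP response.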

    Read the article

  • "Failed to create swap space" error during installation

    - by Welsh Heron
    I've been trying to install Ubuntu for the past two days or so, but I've been running into a problem: every time I run the installation program on the LiveCD, I always get the same (or a very similar) error: "Failed to create swap space - The creation of swap space in partition #3 of SCSI5 (0,0,0) (sda) failed." So far, I've run DBAN (Darik's Boot and Nuke) on my HDD once, to make absolutely sure that everything on it had been erased. Then, I simply put in the LiveCD and let it run the automated install. I get the above error directly after I tell it to automatically partition the HDD (it will work for a second or so, then this will pop up), forcing me back to the screen that lets me choose whether I want to automatically or manually partition the HDD. Well, after failing to install the software automatically, I did a little research and learned enough about partitioning Linux to use the 'Manual partitioning' option. I partitioned the HDD as follows (it's a 1TB drive): /home - (the rest) - ext2, / - 20GB - ext2, /boot - 100MB - ext2, /swap - 8GB, /EFIboot - 40MB. The only difference when I tried this method was that I got THIS message: "Failed to create swap space - The creation of swap space in partition #2 of SCSI5 (0,0,0) (sda) failed." Basically, the only difference was that there was now a '2' instead of a '3'. If I may ask, what exactly am I doing wrong? I've tried looking around the internet (that's basically all I've done for the last two days), but no one seems to have the same problem that I have, and I've tried most of the solutions for similar problems (DBAN, formatting partitions in ext2 format, etc.). The only thing I haven't tried is using the terminal to manually partition the HDD...and I actually DID try to do this, but I wasn't able to get past 'su' 's password demand, so I wasn't able to use the terminal. Thank you for your help in advance. ~Welsh

    Read the article

  • Ubuntu 10.04 LTS Server - fresh install - failed apt-get update

    - by user87227
    Good day and greetings to all. I just did a fresh installation of Ubuntu 10.04 LTS server without any issues. However, apt-get update (or aptitude update) is giving the following errors: a. bzip2: (stdin) is not bzip2 file / Ign for all lines, plus the following errors:
    Fetched 3,582B in 0s (74.1kB/s) Reading package lists...
    W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //security.ubuntu.com lucid-security Release: The following signatures were invalid: NODATA 1 NODATA 2
    W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //in.archive.ubuntu.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
    W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: in.archive.ubuntu.com lucid-updates Release: The following signatures were invalid: NODATA 1 NODATA 2
    W: Failed to fetch security.ubuntu.com/ubuntu/dists/lucid-security/Release
    W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid/Release
    W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid-updates/Release
    W: Some index files failed to download, they have been ignored, or old ones used instead.
    Please guide me in resolving this error. TIA. Regards, Venu

    Read the article
