Search Results

Search found 22224 results on 889 pages for 'point of sale'.

Page 662/889

  • Reactive Extensions vs FileSystemWatcher

    - by Joel Mueller
    One of the things that has long bugged me about the FileSystemWatcher is the way it fires multiple events for a single logical change to a file. I know why it happens, but I don't want to have to care - I just want to reparse the file once, not 4-6 times in a row. Ideally, there would be an event that only fires when a given file is done changing, rather than every step along the way. Over the years I've come up with various solutions to this problem, of varying degrees of ugliness. I thought Reactive Extensions would be the ultimate solution, but there's something I'm not doing right, and I'm hoping someone can point out my mistake.

    I have an extension method:

        public static IObservable<IEvent<FileSystemEventArgs>> GetChanged(this FileSystemWatcher that)
        {
            return Observable.FromEvent<FileSystemEventArgs>(that, "Changed");
        }

    Ultimately, I would like to get one event per filename within a given time period - so that four events in a row with a single filename are reduced to one event, but I don't lose anything if multiple files are modified at the same time. BufferWithTime sounds like the ideal solution:

        var bufferedChange = watcher.GetChanged()
            .Select(e => e.EventArgs.FullPath)
            .BufferWithTime(TimeSpan.FromSeconds(1))
            .Where(e => e.Count > 0)
            .Select(e => e.Distinct());

    When I subscribe to this observable, a single change to a monitored file triggers my subscription method four times in a row, which rather defeats the purpose. If I remove the Distinct() call, I see that each of the four calls contains two identical events - so there is some buffering going on. Increasing the TimeSpan passed to BufferWithTime seems to have no effect - I went as high as 20 seconds without any change in behavior. This is my first foray into Rx, so I'm probably missing something obvious. Am I doing it wrong? Is there a better approach? Thanks for any suggestions.
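    One approach that may get closer to "one notification per file once it goes quiet" is to group the change events by path and throttle each group, instead of buffering the whole stream. This is only a sketch, written against the later Rx surface (FromEventPattern, GroupBy, Throttle); the watched folder path is a placeholder and the older IEvent<T>-based API in the question would need the equivalent calls:

        using System;
        using System.IO;
        using System.Reactive.Linq;

        class Program
        {
            static void Main()
            {
                var watcher = new FileSystemWatcher(@"C:\watched") { EnableRaisingEvents = true };

                var changes = Observable
                    .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>(
                        h => watcher.Changed += h,
                        h => watcher.Changed -= h)
                    .Select(ep => ep.EventArgs.FullPath);

                // One stream per file path; Throttle emits only after that path has been
                // quiet for a second, collapsing the 4-6 raw events into a single one.
                var settled = changes
                    .GroupBy(path => path)
                    .SelectMany(group => group.Throttle(TimeSpan.FromSeconds(1)));

                using (settled.Subscribe(path => Console.WriteLine("Reparse: " + path)))
                {
                    Console.ReadLine();
                }
            }
        }

    The difference from BufferWithTime is that the timer restarts per file on every new event, so a burst of writes to the same file produces exactly one notification once the burst stops, while unrelated files are unaffected.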

    Read the article

  • Visual Studio 2008 doesn't create *.refresh files for external DLL references... what am I missing?

    - by Cory Larson
    Hi all-- I've got a question about something that's just been irritating me. A colleague and I are building a support framework for our current client that we want to reference in other projects. The DLL we want as a reference in our project would be an external reference. We're adding it by doing "Add Reference...", then browsing to the location of the .dll. What I want Visual Studio to do is only add the .xml, .pdb, and a .dll.refresh file, but instead it copies the actual .dll (and .xml and .pdb) into the bin. When we rebuild the framework project, the other project that uses its .dll gets all out of whack until we drop and re-add the reference. Everything I've read online says that VS2008 is supposed to create the .dll.refresh files for you, but it never does. Any ideas? Am I missing something or doing something wrong? At this point I'm ready to add a pre-build event to simply copy the framework .dll into my bin, but the .refresh file seems like less of a hassle if it would just work. Thanks, Cory

    Read the article

  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .NET/C# 2.0 Windows service. The entry point is wrapped in a try/catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each SQL command is of type "CommandType.StoredProcedure" and is executed with a SqlDataReader. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try/catch blocks just to be sure and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load). Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem? Here is an example of the event log:

        EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0, P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c, P9 system.data.sqlclient.sql, P10 NIL.

    Additionally:

        The Order Processor service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.
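    One detail worth keeping in mind: a try/catch around the entry point only protects the thread that executes it. In a service, work done on timer or thread-pool threads can throw outside that scope and terminate the process with exactly this kind of clr20r3 bucket. A diagnostic sketch (not the original service code - the event-source name is an assumption and is presumed to be already registered) that at least records the full exception before the process dies:

        using System;
        using System.Diagnostics;

        internal static class CrashLogger
        {
            // Call once at service start (e.g. in OnStart). This does not prevent the
            // crash on .NET 2.0, but it captures the full stack trace of whatever
            // escaped the entry point's catch block.
            public static void Hook()
            {
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                {
                    var ex = e.ExceptionObject as Exception;
                    EventLog.WriteEntry(
                        "OrderProcessorService",   // assumed, pre-registered event source
                        "Unhandled exception: " + (ex != null ? ex.ToString() : e.ExceptionObject.ToString()),
                        EventLogEntryType.Error);
                };
            }
        }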

    Read the article

  • Why LINQ to Entities won't let me initialize just some properties of an Entity?

    - by emzero
    So I've started to add Entity Framework 4 into a legacy web application (ASP.NET WebForms). As a start I have auto-generated some entities from the database. I also want to apply the Repository pattern. There is an entity called Visitor and its repository VisitorRepository. In VisitorRepository I have the following method:

        public IEnumerable<Visitor> GetActiveVisitors()
        {
            var visitors = from v in _context.Visitors
                           where v.IsActive
                           select new Visitor
                           {
                               VisitorID = v.VisitorID,
                               EmailAddress = v.EmailAddress,
                               RegisterDate = v.RegisterDate,
                               Company = v.Company,
                               Position = v.Position,
                               FirstName = v.FirstName,
                               LastName = v.LastName
                           };
            return visitors.ToList();
        }

    That list is then bound to a repeater, and when trying to do <%# Eval("EmailAddress") %> it throws the following exception:

        The entity or complex type 'Model.Visitor' cannot be constructed in a LINQ to Entities query.

    A) Why is this happening? How can I work around this? Do I need to select an anonymous type and then use that to initialize my entities?

    B) Why does every example I've seen make use of 'select new' (anonymous objects) instead of initializing a known type? Anonymous types are useless unless you are retrieving the data and showing it in the same layer. As far as I know, anonymous types cannot be returned from methods, so what's the real point of them?

    Thank you all
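    For what it's worth, the usual workaround is roughly what A) suggests: let LINQ to Entities project into an anonymous type (which it can translate to SQL), step out of the EF provider with AsEnumerable(), and only then construct the Visitor objects in memory. A sketch reusing the names from the question, intended to live inside VisitorRepository:

        // inside VisitorRepository; requires using System.Collections.Generic; and using System.Linq;
        public IEnumerable<Visitor> GetActiveVisitors()
        {
            return _context.Visitors
                .Where(v => v.IsActive)
                .Select(v => new
                {
                    v.VisitorID, v.EmailAddress, v.RegisterDate,
                    v.Company, v.Position, v.FirstName, v.LastName
                })
                .AsEnumerable()                   // switch from LINQ to Entities to LINQ to Objects here
                .Select(x => new Visitor
                {
                    VisitorID = x.VisitorID,
                    EmailAddress = x.EmailAddress,
                    RegisterDate = x.RegisterDate,
                    Company = x.Company,
                    Position = x.Position,
                    FirstName = x.FirstName,
                    LastName = x.LastName
                })
                .ToList();
        }

    The anonymous projection is what goes to the database; the mapped entity is only ever constructed on the client side, which is what the provider insists on.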

    Read the article

  • HintPath vs ReferencePath in Visual Studio

    - by toasteroven
    What exactly is the difference between the HintPath in a .csproj file and the ReferencePath in a .csproj.user file? We're trying to commit to a convention where dependency DLLs are in a "releases" svn repo and all projects point to a particular release. Since different developers have different folder structures, relative references won't work, so we came up with a scheme to use an environment variable pointing to the particular developer's releases folder to create an absolute reference. So after a reference is added, we manually edit the project file to change the reference to an absolute path using the environment variable. I've noticed that this can be done with both the HintPath and the ReferencePath, but the only difference I could find between them is that HintPath is resolved at build-time and ReferencePath when the project is loaded into the IDE. I'm not really sure what the ramifications of that are though. I have noticed that VS sometimes rewrites the .csproj.user and I have to rewrite the ReferencePath, but I'm not sure what triggers that. I've heard that it's best not to check in the .csproj.user file since it's user-specific, so I'd like to aim for that, but I've also heard that the HintPath-specified DLL isn't "guaranteed" to be loaded if the same DLL is e.g. located in the project's output directory. Any thoughts on this?

    Read the article

  • Exception on SslStream.AuthenticateAsClient (The message was badly formatted)

    - by Noms
    I have got a weird problem going on. I am trying to connect to an Apple server via TCP/SSL, using a client certificate provided by Apple for push notifications. I installed the certificate on my server (Win2k3) in both the Local Trusted Root certificates and Local Personal certificates folders. Now I have a class library that deals with that connection. When I call this class library from a console application running on the server it works absolutely fine, but when I call that class library from an ASP.NET page or ASMX web service I get the following exception:

        A call to SSPI failed, see inner exception. The message received was unexpected or badly formatted.

    This is my code:

        X509Certificate cert = new X509Certificate(certificateLocation, certificatePassword);
        X509CertificateCollection certCollection = new X509CertificateCollection(new X509Certificate[1] { cert });

        // OPEN the new SSL Stream
        SslStream ssl = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
        ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false);

    ssl.AuthenticateAsClient is where the error gets thrown. This is driving me nuts. If the console application can connect fine, there must be some problem with ASP.NET network-layer security that is failing the authentication... not sure, perhaps I need to add something or some sort of security policy in the web.config. Also, just to point out, I can connect fine on my local development machine with both the console app and the website. Anyone got any ideas?
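    One thing that may be worth ruling out (a sketch, not a confirmed fix): under the ASP.NET worker process the certificate's private key is often loaded into a user profile that the application-pool identity cannot read, which produces exactly this console-works/web-fails symptom. Loading the certificate with MachineKeySet/PersistKeySet via X509Certificate2 is a common remedy. Parameter names mirror the question, and validateServerCertificate stands in for the existing callback:

        using System.Net.Security;
        using System.Net.Sockets;
        using System.Security.Authentication;
        using System.Security.Cryptography.X509Certificates;

        static SslStream OpenApnsStream(TcpClient client, string host,
                                        string certificateLocation, string certificatePassword,
                                        RemoteCertificateValidationCallback validateServerCertificate)
        {
            // Keep the private key in the machine key store instead of the (possibly
            // unloaded) user profile of the worker-process account.
            var cert = new X509Certificate2(
                certificateLocation,
                certificatePassword,
                X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

            var certCollection = new X509Certificate2Collection(cert);

            var ssl = new SslStream(client.GetStream(), false, validateServerCertificate, null);
            ssl.AuthenticateAsClient(host, certCollection, SslProtocols.Default, false);
            return ssl;
        }

    If this turns out to be the cause, granting the app-pool account read access to the installed certificate's private key in the machine store achieves the same thing without loading the file on every call.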

    Read the article

  • Making TeamCity integrate the Subversion build number into the assembly version.

    - by Lasse V. Karlsen
    I want to adjust the output from my TeamCity build configuration of my class library so that the produced dll files have the following version number: 3.5.0.x, where x is the Subversion revision number that TeamCity has picked up. I've found that I can use the BUILD_NUMBER environment variable to get x, but unfortunately I don't understand what else I need to do. The "tutorials" I find all say "You just add this to the script", but they don't say which script, and "this" is usually referring to the AssemblyInfo task from the MSBuild Community Extensions. Do I need to build a custom MSBuild script somehow to use this? Is the "script" the same as either the solution file or the C# project file? I don't know much about the MSBuild process at all, except that I can pass a solution file directly to MSBuild, but what I need to add to "the script" is XML, and the solution file decidedly does not look like XML. So, can anyone point me to a step-by-step guide on how to make this work?

    This is what I ended up with:

    1. Install the MSBuild Community Tasks.
    2. Edit the .csproj file of my core class library, and change the bottom so that it reads:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
        <Target Name="BeforeBuild">
          <AssemblyInfo Condition=" '$(BUILD_NUMBER)' != '' "
                        CodeLanguage="CS"
                        OutputFile="$(MSBuildProjectDirectory)\..\GlobalInfo.cs"
                        AssemblyVersion="3.5.0.0"
                        AssemblyFileVersion="$(BUILD_NUMBER)" />
        </Target>
        <Target Name="AfterBuild">

    3. Change all my AssemblyInfo.cs files so that they don't specify either AssemblyVersion or AssemblyFileVersion (in retrospect, I'll look into putting AssemblyVersion back).
    4. Add a link to the now-global GlobalInfo.cs that is located just outside all the projects.
    5. Make sure this file is built once, so that I have a default file in source control.

    This will now update GlobalInfo.cs only if the environment variable BUILD_NUMBER is set, which it is when I build through TeamCity. I opted for keeping AssemblyVersion constant, so that references still work, and only update AssemblyFileVersion, so that I can see which build a dll is from.

    Read the article

  • Need help understanding WPF MeasureOverride and ArrangeOverride infinity and 0 sizes

    - by Scott Bilas
    I'm trying to build a simple panel that contains one child and will snap it to nearest grid sizes. I'm having a hard time figuring out how to do this by overriding MeasureOverride and ArrangeOverride. Here's what I'm after: I want my panel to tell its owner that (a) it wants to be as large as possible, then (b) when it finds out what that size is, it will size itself and its child UIElement according to the nearest smaller snap point. So if we're snapping to 10's, and I can be in a region no bigger than 192x184, the panel will tell its parent container "my actual size is going to be 190x180". That way anything bordering my control will be able to align to its edges, as opposed to the potential space. When I put my panel inside of a Grid, I get either 0 or PositiveInfinity (I forget) for the incoming size in the overrides, but what I need to know is "how big can my space actually get?" not infinities.. Part of the problem I think is what WPF considers magic values of PositiveInfinity and 0 for size. I need a way to say, via MeasureOverride "I can be as big as you will allow me" and in ArrangeOverride to actually size to the snapped size. Or am I going about this the completely wrong way? Measuring and arranging looks very complicated, just from wandering around a little in the code for the standard panels in Reflector.
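    For what it's worth, here is a rough sketch of one way such a snapping container is often written: let the child measure against whatever constraint comes in, report a small desired size so a stretching parent hands over the whole remaining slot, and do the actual snapping in ArrangeOverride, where the size is always finite. The class name and the plain CLR GridSize property (not a dependency property) are illustrative, not from the question:

        using System;
        using System.Windows;
        using System.Windows.Controls;

        public class SnapToGridDecorator : Decorator
        {
            public double GridSize { get; set; } = 10.0;

            private static double SnapDown(double value, double grid) =>
                double.IsInfinity(value) ? value : Math.Floor(value / grid) * grid;

            protected override Size MeasureOverride(Size constraint)
            {
                // Let the child express its wishes against whatever we might get...
                Child?.Measure(constraint);
                // ...but report 0 so layout (with Stretch alignment) gives us the full
                // available slot rather than sizing us to the child.
                return new Size(0, 0);
            }

            protected override Size ArrangeOverride(Size arrangeBounds)
            {
                // arrangeBounds is the concrete, finite space we were actually given;
                // snap it down to the nearest grid line and use that for the child.
                var snapped = new Size(SnapDown(arrangeBounds.Width, GridSize),
                                       SnapDown(arrangeBounds.Height, GridSize));
                Child?.Arrange(new Rect(snapped));
                return snapped;
            }
        }

    The key point is that the "how big can my space actually get?" question is only answerable at arrange time; at measure time an infinite or zero constraint is legitimate and cannot be snapped.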

    Read the article

  • T-SQL: Compute Subtotals For A Range Of Rows

    - by John Dibling
    MSSQL 2008. I am trying to construct a SQL statement which returns the total of column B for all rows where column A is between 2 known ranges. The range is a sliding window, and should be recomputed as it might be using a loop. Here is an example of what I'm trying to do, much simplified from my actual problem. Suppose I have this data:

        table: Test

        Year        Sales
        ----------- -----------
        2000        200
        2001        200
        2002        200
        2003        200
        2004        200
        2005        200
        2006        200
        2007        200
        2008        200
        2009        200
        2010        200
        2011        200
        2012        200
        2013        200
        2014        200
        2015        200
        2016        200
        2017        200
        2018        200
        2019        200

    I want to construct a query which returns 1 row for every decade in the above table, like this:

        Desired Results:

        DecadeEnd   TotalSales
        ---------   ----------
        2009        2000
        2019        2000

    Where the first row is all the sales for the years 2000-2009, the second for years 2010-2019. The DecadeEnd is a sliding window that moves forward by a set amount for each row in the result set. To illustrate, here is one way I can accomplish this using a loop:

        declare @startYear int
        set @startYear = (select top(1) [Year] from Test order by [Year] asc)

        declare @endYear int
        set @endYear = (select top(1) [Year] from Test order by [Year] desc)

        select @startYear, @endYear

        create table DecadeSummary (DecadeEnd int, TtlSales int)

        declare @i int
        -- first decade ends 9 years after the first data point
        set @i = (@startYear + 9)

        while @i <= @endYear
        begin
            declare @ttlSalesThisDecade int
            set @ttlSalesThisDecade = (select SUM(Sales) from Test where (Year <= @i and Year >= (@i-9)))
            insert into DecadeSummary values (@i, @ttlSalesThisDecade)
            set @i = (@i + 9)
        end

        select * from DecadeSummary

    This returns the data I want:

        DecadeEnd   TtlSales
        ----------- -----------
        2009        2000
        2018        2000

    But it is very inefficient. How can I construct such a query?
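    As a point of comparison, the loop can usually be replaced by a single set-based GROUP BY on Year / 10 (integer division), which buckets 2000-2009, 2010-2019, and so on. A hedged sketch via ADO.NET - the connection string and table name are placeholders, and if the decades must be anchored to the first year in the data rather than to calendar decades, Year / 10 would become (Year - @startYear) / 10:

        using System;
        using System.Data.SqlClient;

        class DecadeTotals
        {
            static void Main()
            {
                // The GROUP BY expression is the point; integer division puts each
                // year into its decade bucket in one pass over the table.
                const string sql = @"
                    SELECT (Year / 10) * 10 + 9 AS DecadeEnd,
                           SUM(Sales)           AS TotalSales
                    FROM   Test
                    GROUP  BY Year / 10
                    ORDER  BY DecadeEnd;";

                using (var conn = new SqlConnection("Server=.;Database=Sandbox;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}\t{1}", reader["DecadeEnd"], reader["TotalSales"]);
                        }
                    }
                }
            }
        }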

    Read the article

  • Stateful vs. Stateless Webservices

    - by chrsk
    Imagine a more complex CRUD application which has a three-tier architecture and communicates over webservices. The client starts a conversation with the server and does some wizard-like stuff. To process the wizard, the client needs feedback given by the server. We started a discussion about stateful or stateless webservices for this approach. I did some research combined with my own experience, which leads me to the question mentioned later.

    Stateless webservices have the following properties (in our case):

        + high scalability
        + high availability
        + high speed
        + rapid testing
        - bloated contract
        - implementing more logic on server-side

    But we can cross out the first two points; our application doesn't need high scalability and availability. So we come to the stateful webservice. I've read a bunch of blogs and forum posts, and the most frequently cited points about implementing a stateful webservice were:

        + simplifies contract (protocol)
        - bad testing
        - runs counter to the basic architecture of HTTP

    But don't almost all web applications have these bad points? Web applications use cookies, query strings, session IDs, and all that stuff to work around the statelessness of HTTP. So why is it so bad for webservices?

    Read the article

  • ASP.Net: IHttpAsyncHandler and AsyncProcessorDelegate

    - by ctrlShiftBryan
    I have implemented an IHttpAsyncHandler. I am making about 5 different AJAX calls, from a webpage that has widgets, to that handler. One of those widgets takes about 15 seconds to load (because of a large database query); the others should all load in under a second. The handler is responding in a synchronous manner, and I am getting very inconsistent results. The ProcessRequest method is using Session and other class-level variables. Could that be causing different requests to use the same thread instead of each their own?

    I'm getting this:

        Request1 --- response 1 sec
        Request2 --- response 14 sec
        Request3 --- response 14.5 sec
        Request4 --- response 15 sec
        Request5 --- response 15.5 sec

    but I'm looking for something more like this:

        Request1 --- response 1 sec
        Request2 --- response 14 sec
        Request3 --- response 1.5 sec
        Request4 --- response 2 sec
        Request5 --- response 1.5 sec

    Without posting too much code, my implementation of the IHttpAsyncHandler methods is pretty standard:

        private AsyncProcessorDelegate _Delegate;
        protected delegate void AsyncProcessorDelegate(HttpContext context);

        IAsyncResult IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
        {
            _Delegate = new AsyncProcessorDelegate(ProcessRequest);
            return _Delegate.BeginInvoke(context, cb, extraData);
        }

        void IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
        {
            _Delegate.EndInvoke(result);
        }

    Putting a debug breakpoint in my IHttpAsyncHandler.BeginProcessRequest method, I can see that the method isn't being fired until the last process is complete. Also, my machine.config has this entry:

        processModel autoConfig="true"

    with no other properties set. What else do I need to check for?
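    One thing that can produce exactly this staircase timing, independent of the async plumbing, is ASP.NET's session locking: requests from the same session are serialized for as long as a handler holds read/write session state, so the fast calls queue behind the 15-second one. If the handler only reads session data, marking it with IReadOnlySessionState lets those AJAX calls run in parallel. A sketch - class and member names are illustrative, not the original handler:

        using System;
        using System.Web;
        using System.Web.SessionState;

        // IReadOnlySessionState (instead of IRequiresSessionState) takes a shared
        // rather than exclusive session lock, so concurrent requests from the same
        // session are no longer forced to run one after another.
        public class WidgetHandler : IHttpAsyncHandler, IReadOnlySessionState
        {
            private delegate void ProcessRequestDelegate(HttpContext context);
            private ProcessRequestDelegate _delegate;

            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.Write("widget data");   // placeholder work
            }

            public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
            {
                _delegate = ProcessRequest;
                return _delegate.BeginInvoke(context, cb, extraData);
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                _delegate.EndInvoke(result);
            }
        }

    If the handler genuinely needs to write to Session, the lock is unavoidable and the widgets would have to be split across sessions or moved off Session entirely.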

    Read the article

  • apache htaccess rewrite rules make redirection loop

    - by Ali
    Hi guys, I have a very strange problem with Apache .htaccess URL rewriting and redirection. Here's my setup: I have a Zend application with a single point of entry (index.php) directly under my Apache document root (call this the "public" folder). I also have all other public files (images, js, css, etc.) under the public folder. Here, I also have a WordPress blog under the "blog" folder. There's an empty test folder too.

    The problem: when I go to mydomain.com/blog, I get redirected to http://www.theredpin.com/blog (correctly), then to http://www.theredpin.com/blog/ (just with an extra / at the end), and finally to http://theredpin.com/blog/ -- and we're back where we started. The loop continues. What I don't understand is why WordPress would try to remove the www? I'm guessing it's WordPress because my empty test folder acts just fine! PLEASE HELP, I'M REALLY DESPERATE :( Thank you very much.

    Other things that happen:

        - When I go to mydomain.com, I correctly get redirected to www.mydomain.com
        - When I go to www.mydomain.com, I correctly stay where I am
        - When I go to www.mydomain.com/test or mydomain.com/test, the behaviour is correct.

    Setup: my .htaccess file does the following.

    If there's no www., then add it and do a 301 redirect. Here's the code I use:

        RewriteCond %{HTTP_HOST} ^mydomain.com [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [L,R=301]

    If the request is NOT for a resource (image, etc.), or the blog, then load the Zend application by rewriting to index.php:

        RewriteRule !((^blog(/)?.*$)|(.(js|ico|gif|jpg|jpeg|png|css|cur|JPG|html|txt))$) index.php

    Thanks again for all your help!!! Ali

    Read the article

  • SPFileVersionCollection - why versions are sorted in mixed order?

    - by Janis Veinbergs
    SPFileVersionCollection and SPListItemVersionCollection versioning seems inconsistent to me. The inconsistency wouldn't be a problem in itself, but the sort order is.

    SPListItemVersionCollection: I can understand the versioning of list items, as they are stored in descending order:

        SPContext.Current.ListItem.Versions.Count        -> 5
        SPContext.Current.ListItem.Versions[0].VersionId -> 1026 (2.2, latest version)
        SPContext.Current.ListItem.Versions[1].VersionId -> 1025 (2.1)
        SPContext.Current.ListItem.Versions[2].VersionId -> 1024 (2.0)
        ... [4].VersionId                                -> (oldest version)

    SPFileVersionCollection: however, I can't understand how version numbers are saved for a document library item:

        SPContext.Current.ListItem.File.Versions.Count -> 4
        SPContext.Current.ListItem.File.Versions[0].ID -> 512  (1.0, oldest one)
        SPContext.Current.ListItem.File.Versions[1].ID -> 513  (1.1)
        SPContext.Current.ListItem.File.Versions[2].ID -> 1025 (2.1, latest version)
        SPContext.Current.ListItem.File.Versions[3].ID -> 1024 (2.0 (EDIT: IsCurrentVersion = True))

    They are neither in ascending order nor descending, but something mixed. Is there any reason for the SharePoint team to store SPFile versions like that? And do they expect me to write my own method to get the latest version, or is there a built-in one for that?

    A note: let me point out that SPListItem.File is not null for document library items.
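    If the goal is simply "give me the newest stored version regardless of index order", one option is to sort explicitly on the ID, since SharePoint encodes the version number in it (major version N is N * 512 and minor versions add the minor number, so a larger ID is always a later version). A small hedged sketch using the same objects as above; error handling is omitted:

        using System.Linq;
        using Microsoft.SharePoint;

        static SPFileVersion GetLatestStoredVersion(SPFile file)
        {
            // Order by the encoded version ID instead of trusting the collection's order.
            return file.Versions
                       .Cast<SPFileVersion>()
                       .OrderByDescending(v => v.ID)
                       .FirstOrDefault();
        }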

    Read the article

  • Silverlight 4 with WCF RIA architecture applying DDD

    - by doteneter
    Hello, in my ASP.NET MVC applications I use DDD and it works very well. I'm new to Silverlight development and would like to know how I could apply DDD to build a new architecture. I had a look at WCF RIA Services, and what it exposes by default are simple CRUD methods. I would like to use the MVVM pattern. I thought about the general architecture and don't know if what I'm thinking about makes sense in Silverlight development. I thought about creating the domain model on top of the SVC. I would then expose via WCF RIA some operations that deal with aggregates in my domain model instead of simple CRUD. What I would also expose are the view-model entities that could be used by the view. I don't know if this makes sense, if I'm going in a good direction, or if applying DDD in Silverlight 4 development is good practice. I didn't find much information on the Internet. I'd appreciate it if you could point me to some interesting links or give me some hints. Thanks for your help.
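    To make the idea concrete, here is a very rough, hedged sketch of the sort of thing being described: a DomainService that exposes an aggregate-level operation and a flattened view-model type instead of raw CRUD. All type names (including the repository and its throwaway in-memory stand-in) are invented for illustration, and the namespaces are those of the released WCF RIA Services bits - earlier previews used different ones:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.ServiceModel.DomainServices.Hosting;
        using System.ServiceModel.DomainServices.Server;

        // Flattened read model shaped for the Silverlight view (a "view-model entity").
        public class OrderSummary
        {
            [Key]
            public int OrderId { get; set; }
            public string CustomerName { get; set; }
            public decimal Total { get; set; }
        }

        // Domain-facing repository abstraction; a real implementation would wrap the
        // aggregate roots rather than individual tables.
        public interface IOrderRepository
        {
            IEnumerable<OrderSummary> GetOpenOrderSummaries();
            void Approve(int orderId);
        }

        [EnableClientAccess]
        public class OrderingDomainService : DomainService
        {
            private readonly IOrderRepository _orders = new InMemoryOrderRepository();

            // Query exposed to the Silverlight client instead of a generic CRUD GetOrders().
            public IQueryable<OrderSummary> GetOpenOrderSummaries()
            {
                return _orders.GetOpenOrderSummaries().AsQueryable();
            }

            // Aggregate-level behaviour instead of a generic Update.
            [Invoke]
            public void ApproveOrder(int orderId)
            {
                _orders.Approve(orderId);
            }
        }

        // Throwaway stand-in so the sketch compiles; not part of the suggested design.
        public class InMemoryOrderRepository : IOrderRepository
        {
            private readonly List<OrderSummary> _open = new List<OrderSummary>
            {
                new OrderSummary { OrderId = 1, CustomerName = "Sample", Total = 42m }
            };

            public IEnumerable<OrderSummary> GetOpenOrderSummaries() { return _open; }
            public void Approve(int orderId) { _open.RemoveAll(o => o.OrderId == orderId); }
        }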

    Read the article

  • MPNowPlayingInfoCenter defaultCenter will not update or retrieve information

    - by Jayson Lane
    I'm working to update the MPNowPlayingInfoCenter and having a bit of trouble. I've tried quite a bit, to the point where I'm at a loss. The following is my code:

        self.audioPlayer.allowsAirPlay = NO;
        Class playingInfoCenter = NSClassFromString(@"MPNowPlayingInfoCenter");
        if (playingInfoCenter) {
            NSMutableDictionary *songInfo = [[NSMutableDictionary alloc] init];
            MPMediaItemArtwork *albumArt = [[MPMediaItemArtwork alloc] initWithImage:[UIImage imageNamed:@"series_placeholder"]];
            [songInfo setObject:thePodcast.title forKey:MPMediaItemPropertyTitle];
            [songInfo setObject:thePodcast.author forKey:MPMediaItemPropertyArtist];
            [songInfo setObject:@"NCC" forKey:MPMediaItemPropertyAlbumTitle];
            [songInfo setObject:albumArt forKey:MPMediaItemPropertyArtwork];
            [[MPNowPlayingInfoCenter defaultCenter] setNowPlayingInfo:songInfo];
        }

    This isn't working. I've also tried:

        [[MPNowPlayingInfoCenter defaultCenter] setNowPlayingInfo:nil];

    in an attempt to get it to remove the existing information from the iPod app (or whatever may have info there). In addition, just to see if I could find the problem, I've tried retrieving the current information on app launch:

        NSDictionary *info = [[MPNowPlayingInfoCenter defaultCenter] nowPlayingInfo];
        NSString *title = [info valueForKey:MPMediaItemPropertyTitle];
        NSString *author = [info valueForKey:MPMediaItemPropertyArtist];
        NSLog(@"Currently playing: %@ // %@", title, author);

    and I get:

        Currently playing: (null) // (null)

    I've researched this quite a bit, and the following articles explain it pretty thoroughly, however I am still unable to get this working properly. Am I missing something? Would there be anything interfering with this? Is this a service my app needs to register to access (didn't see this in any docs)?

        Apple's Docs
        Change lock screen background audio controls
        Now playing info ignored

    UPDATE: I've created a tutorial outlining how I was able to get this functioning properly: http://jaysonlane.net/tech-blog/2012/04/lock-screen-now-playing-with-mpnowplayinginfocenter/

    Read the article

  • Cannot load rJava because cannot load a shared library

    - by Farrel
    I have been struggling to load the rJava package in R. I get the following messages:

        > library(rJava)
        Error in inDL(x, as.logical(local), as.logical(now), ...) :
          unable to load shared library 'C:/PROGRA~1/R/R-210~1.1/library/rJava/libs/rJava.dll':
          LoadLibrary failure: The specified module could not be found.
        Error : .onLoad failed in 'loadNamespace' for 'rJava'
        Error: package/namespace load failed for 'rJava'

    I have tried so many solutions that they are all bamboozled in my head. At some point I even got:

        R Console: Rgui.exe - System Error
        The program can't start because MSVCR71.dll is missing from your computer.
        Try reinstalling the program to fix this problem.

    I made sure everything I could think of was on the path:

        C:\Program Files\R\Rtools\bin;C:\Program Files\R\Rtools\perl\bin;
        C:\Program Files\R\Rtools\MinGW\bin;%SystemRoot%\system32;
        %SystemRoot%;%SystemRoot%\System32\Wbem;
        %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;
        C:\Program Files\QuickTime\QTSystem\;
        C:\Program Files\R\R-2.10.1\library\rJava\libs\;
        C:\Program Files\R;C:\Program Files\Java\jre6\bin\client

    What should I try next? I am running R version 2.10.1 (2009-12-14) and I have also tried R version 2.10.1 Patched (2010-03-03 r51210). It is on a Windows machine running Windows 7 Enterprise.

    Read the article

  • Programmatically Set a QueryStringFilterWebPart / ExcelWebRenderer Connection

    - by user355972
    Code:

        public static void ChartPageConnector(SPWeb web, string pageURL, string providerWpId, string consumerWpId, string mappedName)
        {
            SPFile file = web.Files[pageURL];
            SPLimitedWebPartManager mgr = file.GetLimitedWebPartManager(PersonalizationScope.Shared);

            TransformableFilterValuesToFilterValuesTransformer transformer = new TransformableFilterValuesToFilterValuesTransformer();
            transformer.MappedConsumerParameterName = mappedName;

            //QueryStringFilterWebPart provider = (QueryStringFilterWebPart)mgr.WebParts[providerWpId];
            //ExcelWebRenderer consumer = (ExcelWebRenderer)mgr.WebParts[consumerWpId];
            JWP.WebPart provider = mgr.WebParts[providerWpId];
            JWP.WebPart consumer = mgr.WebParts[consumerWpId];

            ProviderConnectionPoint pcp = null;
            foreach (ProviderConnectionPoint ppoint in mgr.GetProviderConnectionPoints(provider))
            {
                if (ppoint.InterfaceType == typeof(Microsoft.SharePoint.WebPartPages.ITransformableFilterValues))
                {
                    pcp = ppoint;
                    break;
                }
            }

            ConsumerConnectionPoint ccp = null;
            foreach (ConsumerConnectionPoint cpoint in mgr.GetConsumerConnectionPoints(consumer))
            {
                if (cpoint.InterfaceType == typeof(IFilterValues))
                {
                    ccp = cpoint;
                    break;
                }
            }

            //mgr.SPConnectWebParts(provider, pcp, consumer, ccp);
            mgr.SPConnectWebParts(provider, pcp, consumer, ccp, transformer);

            file.Update();
            web.Update();
        }

    Error:

        The connection point "IFilterValues" on "g_33d68e82_6478_4629_a079_5a7e02ac4695" is disabled.

    Any ideas why?

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4 node cluster for a webapp that does lots of file handling. The cluster have been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does a NFS mount of the data directory of the storage server and the latter also has a webserver running to serve files to browser clients. In the storage servers I've created a GFS2 FS to hold the data which is wired to drbd. I've chose GFS2 mainly because the announced performance and also because the volume size which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines: Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK In this case, the hang lasted for 16 seconds but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was this was happening due to heavy load of the NFS mount and that by increasing RPCNFSDCOUNT to a higher value, this would become stable. 
I've increased it several times and apparently, after a while, the logs started appearing less times. The value is now on 32. After further investigating the issue, I've came across a different hang, despite the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs which causes both the NFS and the storage webserver to serve files. Both stay hang for a while and then they resume normal operations. This hangs leaves no trace on client side (also leaves no NFS ... not responding messages) and, on the storage side, the log system appears to be empty, even though the rsyslogd is running. The nodes connect themselves through a 10Gbps non-dedicated connection but I don't think this is an issue because the GFS2 hang is confirmed but connecting directly to the active storage server. I've been trying to solve this for a while now and I've tried different NFS configuration options, before I've found out the GFS2 FS is also hanging. The NFS mount is exported as such: /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25) And the NFS client mounts with: mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data After some tests, these were the configurations that yielded more performance to the cluster. I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that this hangs won't happen in the future and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads as I have tested the cluster earlier and this problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which do you want me to post. As last resort I can migrate the files to a different FS but I need some solid pointers on whether this will solve this problems as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are: Webservers: Intel Bi Xeon E5606 2x4 2.13GHz 24GB DDR3 Intel SSD 320 2 x 120GB Raid 1 Storage: Intel i5 3550 3.3GHz 16GB DDR3 12 x 2TB SATA Initially there was a VRack setup between the servers but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, were the problem still existed). The firewall was configured and tested thoroughly but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between either the servers and the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps figuring out the problem. 
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS Configuration in storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also ran ping for a long time without any packet loss. MTU size is the normal 1500. As there is no VLan by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stop responding. After the reboot, I noticed the filesystem was extremely slow, and I was not being able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Uploadify & Integrated Windows Authentication

    - by vdh_ant
    Hi guys, I'm running into an issue with Uploadify and I hope someone can help. I have put Uploadify into my app and all works fine in dev (using the VS web server). Everything worked fine and checked out until I deployed the app into my test environment, which uses Integrated Windows Authentication. When I actually go to upload the file, the browser brings up a login prompt. At this point, even if you type in the correct username and password, the request doesn't seem to complete, and even if you tell the browser to remember the password it still brings up the login prompt. When this started to occur, I decided to spin up Fiddler and see what was going on. But guess what: whenever Fiddler is running, the issue doesn't occur. Unfortunately I can't make running Fiddler a requirement for running the app. So, does anyone have any ideas? I know there are some issues with Uploadify/Flash when using forms authentication, but I didn't think they carried across to Integrated Windows Authentication. Cheers, Anthony

    Read the article

  • How to bind a datatable to a wpf editable combobox: selectedItem showing System.Data.DataRowView

    - by black sensei
    Hello Good People!! I bound a DataTable to a ComboBox and defined a DataTemplate in the ItemTemplate. I can see the desired values in the ComboBox drop-down list, but what I see as the selected item is System.Data.DataRowView. Here is my code:

        <ComboBox Margin="128,139,123,0" Name="cmbEmail" Height="23" VerticalAlignment="Top" TabIndex="1"
                  ToolTip="enter the email you signed up with here" IsEditable="True"
                  IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Path=username}"/>
                    </StackPanel>
                </DataTemplate>
            </ComboBox.ItemTemplate>

    The code-behind is:

        if (con != null)
        {
            con.Open();
            // users table has columns id | username | pass
            SQLiteCommand cmd = new SQLiteCommand("select * from users", con);
            SQLiteDataAdapter da = new SQLiteDataAdapter(cmd);
            userdt = new DataTable("users");
            da.Fill(userdt);
            cmbEmail.DataContext = userdt;
        }

    I've been looking for something like SelectedValueTemplate or SelectedItemTemplate to do the same kind of data templating, but I found none. I'd like to ask if I'm doing something wrong or if this is a known issue with ComboBox binding? If something is wrong in my code, please point me in the right direction. Thanks for reading this.
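    One thing that may be worth trying (a sketch, not a verified fix): an editable ComboBox builds the text for the selected item from the TextSearch.TextPath attached property when an ItemTemplate is in play, so pointing it at the same column the template binds to often replaces the System.Data.DataRowView text. Hypothetical code-behind, using the names from the question:

        using System.Windows.Controls;

        // e.g. right after cmbEmail.DataContext = userdt;
        cmbEmail.IsTextSearchEnabled = true;
        TextSearch.SetTextPath(cmbEmail, "username");  // column used for the editable text box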

    Read the article

  • Unable to load UIView with initWithNibName in Apple SDK 3.1.3

    - by James Foster
    I am trying to load my UIViewController and corresponding UIView programmatically in the AppDelegate class. I have the following in the applicationDidFinishLaunching method of the AppDelegate class:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            NSLog(@"--- AppDelegate applicationDidFinishLaunching Start");
            // Override point for customization after application launch
            //MainController *controller = [[MainController alloc] initWithNibName:@"MainView" bundle:nil];
            MainController2 *controller = [[MainController2 alloc] initWithNibName:@"MainView2" bundle:nil];
            if (controller.view == nil) {
                NSLog(@"--- controller view is nil!!!!!!");
            }
            [window addSubview:controller.view];
            [window makeKeyAndVisible];
            NSLog(@"--- AppDelegate applicationDidFinishLaunching End");
        }

    Basically the view in the view controller doesn't load, and when the application launches it just shows the blank window. What is funny is that it worked before and then just stopped working. I am wondering if this is a bug in iPhone SDK 3.1.3? This is a really annoying issue; I was quite a ways along in a new project when I started having this problem and had to start over with a blank project and copy over all of my resources, when it started happening again... I have uninstalled iPhone OS 3.1.3 and reinstalled, and the problem persists... I also created a second UIViewController class and corresponding nib which DOES LOAD just fine... I am not sure why one works and the other doesn't.

    You can download a sample project which demonstrates this issue at the following link: http://www.mediafire.com/?nmhnmhbeyki

    To switch back and forth between the working/nonworking UIViewController and UIView, simply comment/uncomment the initWithNibName lines in the AppDelegate and the corresponding #import "MainController.h" statements in the AppDelegate.h file... Any ideas? The sample project I have linked to isolates the problem in as few files/lines of code as possible... I appreciate any help you might be able to provide. Thanks, James

    Read the article

  • Python, unit test - Pass command line arguments to setUp of unittest.TestCase

    - by sberry2A
    I have a script that acts as a wrapper for some unit tests written using the Python unittest module. In addition to cleaning up some files, creating an output stream and generating some code, it loads test cases into a suite using unittest.TestLoader().loadTestsFromTestCase(). I am already using optparse to pull out several command-line arguments used for determining the output location, whether to regenerate code, and whether to do some clean up. I also want to pass a configuration variable, namely an endpoint URI, for use within the test cases. I realize I can add an OptionParser to the setUp method of the TestCase, but I want to instead pass the option to setUp. Is this possible using loadTestsFromTestCase()? I can iterate over the returned TestSuite's TestCases, but can I manually call setUp on the TestCases?

    EDIT: I wanted to point out that I am able to pass the arguments to setUp if I iterate over the tests and call setUp manually, like:

        (options, args) = op.parse_args()
        suite = unittest.TestLoader().loadTestsFromTestCase(MyTests.TestSOAPFunctions)
        for test in suite:
            test.setUp(options.soap_uri)

    However, I am using xmlrunner for this and its run method takes a TestSuite as an argument. I assume it will run the setUp method itself, so I would need the parameters available within the XMLTestRunner. I hope this makes sense.

    Read the article

  • Spring AOP AfterThrowing vs. Around Advice

    - by whiskerz
    Hey there, when trying to implement an aspect that is responsible for catching and logging a certain type of error, I initially thought this would be possible using the AfterThrowing advice. However, it seems that this advice doesn't catch the exception, but just provides an additional entry point to do something with the exception. The only advice which would also catch the exception in question would then be an Around advice - either that, or I did something wrong. Can anyone confirm that if I want to catch the exception, I indeed have to use an Around advice? The configuration I used follows:

        @Pointcut("execution(* test.simple.OtherService.print*(..))")
        public void printOperation() {}

        @AfterThrowing(pointcut="printOperation()", throwing="exception")
        public void logException(Throwable exception) {
            System.out.println(exception.getMessage());
        }

        @Around("printOperation()")
        public void swallowException(ProceedingJoinPoint pjp) throws Throwable {
            try {
                pjp.proceed();
            } catch (Throwable exception) {
                System.out.println(exception.getMessage());
            }
        }

    Note that in this example I caught all exceptions, because it is just an example. I know it's bad practice to just swallow all exceptions, but for my current use case I want one special type of exception to be just logged, while avoiding duplicate logging logic.

    Read the article

  • What are some good/reputable/widely-used libraries written in VB.NET?

    - by Dan Tao
    Generally speaking, when VB.NET and C# are compared, there is a lot of strong support for C#, accompanied by some bashing of VB.NET, until a respected developer comes along and acts as The Voice Of Reason, pointing out that while VB prior to VB.NET had its fair share of issues, VB.NET is really a very strong, fully OOP language that is, feature-wise, right about on par with C# (with the exception of certain things like a full-bodied lambda syntax [pre-VB10] or the yield keyword, as many C# faithfuls are quick to point out). I myself, having written plenty of code in both VB.NET and C#, fall squarely in the "I prefer C#, but don't consider VB.NET any less of a language" camp.

    However, one thing I have noticed is that when it comes to respected and/or widely-used libraries for .NET, everything is written in C#. Or at least that's been my impression. This strikes me as a little strange because, aside from the abovementioned sprinkling of nice features (in particular the yield keyword), I tend to view the VB.NET/C# divide as primarily a matter of personal taste. Obviously, plenty of developers prefer C#. But I personally know some developers (good ones) who prefer VB.NET, which would lead me to suspect that surely some libraries (good ones) would be written in VB.NET. They must be out there, and I just haven't found them.

    What are some good libraries that've been written in VB.NET? The best would be open source, as that would allow interested developers to take a look at some good VB.NET code and see how effective the language can be when used properly. But I'd be interested to know about any libraries at all, particularly reputable ones.

    Read the article

  • PDF hyperlinks on iPhone/iPad

    - by RoLYroLLs
    Hi, I've been looking around Google and SO and haven't quite found an answer to my question, or at least a recent answer. I have a PDF with hyperlinks/hotspots in it and would like to display the PDF file in my own iPhone/iPad app. When the user taps a hyperlink/hotspot, I would like the user to be taken to the appropriate location of the link (whether another page in the PDF or a webpage outside of the app). I have found many questions like this on here, but most are dated over 6 months ago. While that might not seem so long ago, it kind of is, given newer technologies and the probability of someone coming up with new code or a new way to do it. I looked into the QuartzDemo sample app and edited the PDF to have a hotspot, and it does not work. Maybe the ability is there, but not implemented? I have found one app that DOES work great: the GoodReader app displays my PDF and allows the clicking of hotspots in my PDF. However, I'd like this implemented in my own app. So, has anyone been playing around with this? Has anyone found a solution? Can anyone point others in a direction? Thanks for your time.

    Read the article
