Search Results

Search found 217 results on 9 pages for 'faults'.

  • Triangle numbers problem: show output within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers, so the 7th triangle number is 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms are: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers:

        1:  1
        3:  1, 3
        6:  1, 2, 3, 6
        10: 1, 2, 5, 10
        15: 1, 3, 5, 15
        21: 1, 3, 7, 21
        28: 1, 2, 4, 7, 14, 28

    We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors.

        Sample Input: 5
        Output: 28
        Input Constraints: 1 <= n <= 320

    I was obviously able to do this question, but I used a naive algorithm: get n, then generate triangle numbers and check their number of factors using the mod operator. The challenge, however, was to show the output within 4 seconds of input; on high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their number of factors in a 2D array first, then get the input from the user and search the array. But somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code, or if there are any better ways, please tell me.
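
    For what it is worth, the time limit can be met without any lookup table. Since k and k+1 share no common factor, the divisor count of T(k) = k(k+1)/2 factors into the divisor counts of two much smaller halves: d(T(k)) = d(k/2) * d(k+1) for even k, and d(k) * d((k+1)/2) for odd k. A minimal C sketch of that idea (my code, not the poster's table-based method; input and output follow the problem statement):

        #include <stdio.h>

        /* Count divisors of m by trial division up to sqrt(m). */
        static int divisor_count(long m)
        {
            int count = 0;
            for (long i = 1; i * i <= m; i++) {
                if (m % i == 0)
                    count += (i * i == m) ? 1 : 2;  /* i and m/i */
            }
            return count;
        }

        int main(void)
        {
            int n;
            if (scanf("%d", &n) != 1)
                return 1;

            /* k and k+1 are coprime, so d(T(k)) splits across the halves. */
            for (long k = 1; ; k++) {
                long a = (k % 2 == 0) ? k / 2 : k;
                long b = (k % 2 == 0) ? k + 1 : (k + 1) / 2;
                if (divisor_count(a) * divisor_count(b) >= n) {
                    printf("%ld\n", k * (k + 1) / 2);
                    return 0;
                }
            }
        }

    Trial division now runs up to roughly sqrt(k) instead of sqrt(T(k)), which is comfortably inside the 4-second limit for n <= 320.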

  • What's a good Java API for creating Word documents?

    - by Bill James
    I have a new app I'll be working on where I have to generate a Word document that contains tables, graphs, a table of contents and text. What's a good API to use for this? How sure are you that it supports graphs, ToCs, and tables? What are some hidden gotchas in using them? Some clarifications:

      - I can't output a PDF; they want a Word doc.
      - They're using MS Word 2003 (or 2007), not OpenOffice.
      - The application is running on a *nix app-server.
      - It'd be nice if I could start with a template doc and just fill in some spaces with tables, graphs, etc.

    Thanks for the help. Edit: Several good answers below, each with their own faults as far as my current situation goes; it's hard to pick a "final answer" from them. I think I'll leave it open and hope for better solutions to be created. Edit: The OpenOffice UNO project does seem to be closest to what I asked for. While POI is certainly more mainstream, it's too immature for what I want.

  • Thread-local storage segfaults on NetBSD only?

    - by bortzmeyer
    Trying to run a C++ program, I get segmentation faults which appear to be specific to NetBSD. Bert Hubert wrote the simple test program (at the end of this message) and, indeed, it crashes only on NetBSD.

        % uname -a
        NetBSD golgoth 5.0.1 NetBSD 5.0.1 (GENERIC) #0: Thu Oct 1 15:46:16 CEST 2009 +stephane@golgoth:/usr/obj/sys/arch/i386/compile/GENERIC i386
        % g++ --version
        g++ (GCC) 4.1.3 20080704 prerelease (NetBSD nb2 20081120)
        Copyright (C) 2006 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        % gdb thread-local-storage-powerdns
        GNU gdb 6.5
        Copyright (C) 2006 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "i386--netbsdelf"...
        (gdb) run
        Starting program: /home/stephane/Programmation/C++/essais/thread-local-storage-powerdns
        Program received signal SIGSEGV, Segmentation fault.
        0x0804881b in main () at thread-local-storage-powerdns.cc:20
        20        t_a = new Bogo('a');
        (gdb)

    On other Unix, it works fine. Is there a known issue in NetBSD with C++ thread-local storage?

        #include <stdio.h>

        class Bogo {
        public:
            explicit Bogo(char a) { d_a = a; }
            char d_a;
        };

        __thread Bogo* t_a;

        int main() {
            t_a = new Bogo('a');
            Bogo* b = t_a;
            printf("%c\n", b->d_a);
        }

  • seg fault at the end of program after executing everything?

    - by Fantastic Fourier
    Hello all, I wrote a quick program which executes every statement before giving a seg fault error.

        struct foo {
            int cat;
            int * dog;
        };

        void bar (void * arg) {
            printf("o hello bar\n");
            struct foo * food = (struct foo *) arg;
            printf("cat meows %i\n", food->cat);
            printf("dog barks %i\n", *(food->dog));
        }

        void main() {
            int cat = 4;
            int * dog;
            dog = &cat;
            printf("cat meows %i\n", cat);
            printf("dog barks %i\n", *dog);

            struct foo * food;
            food->cat = cat;
            food->dog = dog;
            printf("cat meows %i\n", food->cat);
            printf("dog barks %i\n", *(food->dog));

            printf("time for foo!\n");
            bar(food);
            printf("begone!\n");

            cat = 5;
            printf("cat meows %i\n", cat);
            printf("dog barks %i\n", *dog);
            // return 0;
        }

    which gives a result of:

        cat meows 4
        dog barks 4
        cat meows 4
        dog barks 4
        time for foo!
        o hello bar
        cat meows 4
        dog barks 4
        begone!
        cat meows 5
        dog barks 5
        Segmentation fault (core dumped)

    I'm not really sure why it seg faults at the end? Any comments/insights are deeply appreciated.
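
    A likely cause, for what it's worth: food in main is an uninitialized pointer, so food->cat = cat writes through a garbage address. That is undefined behavior; it can appear to work and still corrupt something that only bites at program exit. A minimal sketch of one fix is to give the struct real storage before using it:

        #include <stdlib.h>

        /* Allocate storage instead of writing through an uninitialized pointer. */
        struct foo * food = malloc(sizeof *food);
        if (food == NULL)
            return;              /* allocation can fail */
        food->cat = cat;
        food->dog = dog;
        /* ... use food as before ... */
        free(food);

    Declaring a plain struct foo food; on the stack and passing &food to bar would work just as well.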

  • WCF Fails when using impersonation over 2 machine boundaries (3 machines)

    - by MrTortoise
    These scenarios work in their pieces; it's when I put it all together that it breaks. I have a WCF service using netTCP that uses impersonation to get the caller's ID (role-based security will be used at this level). On top of this is a WCF service using basicHttp with TransportCredentialOnly, which also uses impersonation. I then have a client front end that connects to the basicHttp service. The aim of the game is to return the client's username from the netTCP service at the bottom, so that ultimately I can use role-based security there. Each service is on a different machine, and each service works on its own when you remove any calls it makes to other services, whether you run its client locally or remotely. I.e. the problem only manifests when you jump across more than one machine boundary: the setup breaks when I connect each part together, but the parts work fine on their own. I also specify [OperationBehavior(Impersonation = ImpersonationOption.Required)] in the method and have IIS set up to only allow Windows authentication (actually I still have anonymous enabled, but disabling it makes no difference). This impersonation works fine in the scenario where I have a netTCP service on machine A, a client with a basicHttp service on machine B, and a client for the basicHttp service also on machine B. However, if I move that client to any machine C, I get the following error: the exception is 'The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:10:00'', and the inner message is 'An existing connection was forcibly closed by the remote host'. I am beginning to think this is more a network issue than config... but then I'm grasping at straws.
    The config files are as follows (heading from the client down to the netTCP layer).

    The client:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="basicHttpBindingEndpoint" closeTimeout="00:02:00" openTimeout="00:02:00"
                         receiveTimeout="00:10:00" sendTimeout="00:02:00" allowCookies="false"
                         bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                         maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                         messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                         useDefaultWebProxy="true">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                    <message clientCredentialType="UserName" algorithmSuite="Default" />
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="http://panrelease01/WCFTopWindowsTest/Service1.svc"
                        binding="basicHttpBinding" bindingConfiguration="basicHttpBindingEndpoint"
                        contract="ServiceReference1.IService1" name="basicHttpBindingEndpoint"
                        behaviorConfiguration="ImpersonationBehaviour" />
            </client>
            <behaviors>
              <endpointBehaviors>
                <behavior name="ImpersonationBehaviour">
                  <clientCredentials>
                    <windows allowedImpersonationLevel="Impersonation"/>
                  </clientCredentials>
                </behavior>
              </endpointBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    The service for the client (the basicHttp service, which is also the client for the netTCP service):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="netTcpBindingEndpoint" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                         transferMode="Buffered" transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                         maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                         maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                  <security mode="Transport">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </netTcpBinding>
              <basicHttpBinding>
                <binding name="basicHttpWindows">
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Windows"></transport>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="net.tcp://5d2x23j.panint.com/netTCPwindows/Service1.svc"
                        binding="netTcpBinding" bindingConfiguration="netTcpBindingEndpoint"
                        contract="ServiceReference1.IService1" name="netTcpBindingEndpoint"
                        behaviorConfiguration="ImpersonationBehaviour">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
            </client>
            <behaviors>
              <endpointBehaviors>
                <behavior name="ImpersonationBehaviour">
                  <clientCredentials>
                    <windows allowedImpersonationLevel="Impersonation" allowNtlm="true"/>
                  </clientCredentials>
                </behavior>
              </endpointBehaviors>
              <serviceBehaviors>
                <behavior name="WCFTopWindowsTest.basicHttpWindowsBehaviour">
                  <!-- To avoid disclosing metadata information, set the value below to false
                       and remove the metadata endpoint above before deployment -->
                  <serviceMetadata httpGetEnabled="true" />
                  <!-- To receive exception details in faults for debugging purposes, set the
                       value below to true. Set to false before deployment to avoid disclosing
                       exception information -->
                  <serviceDebug includeExceptionDetailInFaults="true" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <services>
              <service name="WCFTopWindowsTest.Service1"
                       behaviorConfiguration="WCFTopWindowsTest.basicHttpWindowsBehaviour">
                <endpoint address="" binding="basicHttpBinding" bindingConfiguration="basicHttpWindows"
                          name="basicHttpBindingEndpoint" contract="WCFTopWindowsTest.IService1">
                </endpoint>
              </service>
            </services>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
          </system.serviceModel>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
            <directoryBrowse enabled="true" />
          </system.webServer>
        </configuration>

    Then finally the service for the netTCP layer:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.web>
            <authentication mode="Windows"></authentication>
            <authorization>
              <allow roles="*"/>
            </authorization>
            <compilation debug="true" targetFramework="4.0" />
            <identity impersonate="true" />
          </system.web>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="netTCPwindows">
                  <security mode="Transport">
                    <transport clientCredentialType="Windows"></transport>
                  </security>
                </binding>
              </netTcpBinding>
            </bindings>
            <services>
              <service behaviorConfiguration="netTCPwindows.netTCPwindowsBehaviour"
                       name="netTCPwindows.Service1">
                <endpoint address="" bindingConfiguration="netTCPwindows" binding="netTcpBinding"
                          name="netTcpBindingEndpoint" contract="netTCPwindows.IService1">
                  <identity>
                    <dns value="localhost" />
                  </identity>
                </endpoint>
                <endpoint address="mextcp" binding="mexTcpBinding" contract="IMetadataExchange"/>
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://localhost:8721/test2" />
                  </baseAddresses>
                </host>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="netTCPwindows.netTCPwindowsBehaviour">
                  <!-- To avoid disclosing metadata information, set the value below to false
                       and remove the metadata endpoint above before deployment -->
                  <serviceMetadata httpGetEnabled="false" />
                  <!-- To receive exception details in faults for debugging purposes, set the
                       value below to true. Set to false before deployment to avoid disclosing
                       exception information -->
                  <serviceDebug includeExceptionDetailInFaults="true" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
          </system.serviceModel>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
            <directoryBrowse enabled="true" />
          </system.webServer>
        </configuration>
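
    One thing worth checking, given the three-machine setup: with Windows credentials, allowedImpersonationLevel="Impersonation" only lets the middle service use the caller's identity on its own machine. Flowing that identity on to a third machine is the classic Kerberos "double hop" restriction and requires delegation, both in the client credentials and in Active Directory (the middle service's account must be trusted for delegation, and NTLM cannot delegate at all). A hedged sketch of the client-side change to experiment with:

        <clientCredentials>
          <windows allowedImpersonationLevel="Delegation" />
        </clientCredentials>

    This is not guaranteed to be the fault here, but it matches the symptom that everything works until the second machine boundary is crossed.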

  • Batch faulting in a to-many relationship for a collection of objects

    - by indragie
    Scenario: Let's say I have an entity called Author that has a to-many relationship called books to the Book entity (inverse relationship author). If I have an existing collection of Author objects, I want to fault in the books relationship for all of them in a single fetch request.

    Code: This is what I've tried so far:

        NSArray *authors = ... // array of `Author` objects
        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Book"];
        fetchRequest.returnsObjectsAsFaults = NO;
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"author IN %@", authors];

    Executing this fetch request does not result in the books relationship of the objects in the authors array being faulted in (inspected via logging). I've also tried doing the fetch request the other way around:

        NSArray *authors = ... // array of `Author` objects
        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Author"];
        fetchRequest.returnsObjectsAsFaults = NO;
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"SELF IN %@", authors];
        fetchRequest.relationshipKeyPathsForPrefetching = @[@"books"];

    This doesn't fire the faults either. What's the appropriate way of going about doing this?

  • C# functional quicksort is failing

    - by Rubys
    I'm trying to implement quicksort in a functional style in C# using LINQ, and this code randomly works/doesn't work, and I can't figure out why. Important to mention: when I call this on an array or list, it works fine. But on an unknown-what-it-really-is IEnumerable, it goes insane (loses values or crashes, usually; sometimes it works). The code:

        public static IEnumerable<T> Quicksort<T>(this IEnumerable<T> source)
            where T : IComparable<T>
        {
            if (!source.Any()) yield break;

            var pivot = source.First();
            var sortedQuery = source.Skip(1).Where(a => a.CompareTo(source.First()) <= 0).Quicksort()
                .Concat(new[] { pivot })
                .Concat(source.Skip(1).Where(a => a.CompareTo(source.First()) > 0).Quicksort());

            foreach (T key in sortedQuery)
                yield return key;
        }

    Can you find any faults here that would cause this to fail?

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to using a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, then I compiled a release build with NDEBUG set, and the code crashed. Not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside boost insert and rotate functions related to the index updates, and they are happening because a node has NULL left and right pointers. My code looks a bit like this:

        struct Implementation {
            typedef std::pair<uint32_t, uint32_t> update_pair_type;

            struct watch {};
            struct update {};

            typedef boost::multi_index_container<
                update_pair_type,
                boost::multi_index::indexed_by<
                    boost::multi_index::ordered_unique<
                        boost::multi_index::tag<watch>,
                        boost::multi_index::member<update_pair_type, uint32_t,
                                                   &update_pair_type::first> >,
                    boost::multi_index::ordered_non_unique<
                        boost::multi_index::tag<update>,
                        boost::multi_index::member<update_pair_type, uint32_t,
                                                   &update_pair_type::second> >
                >
            > update_map_type;

            typedef std::vector< update_pair_type > update_list_type;

            update_map_type update_map;
            update_map_type::iterator update_hint;

            void register_update(uint32_t watch, uint32_t update);
            void do_updates(uint32_t start, uint32_t end);
        };

        void Implementation::register_update(uint32_t watch, uint32_t update)
        {
            update_pair_type new_pair( watch_offset, update_offset );
            update_hint = update_map.insert(update_hint, new_pair);
            if( update_hint->second != update_offset )
            {
                bool replaced _unused_ = update_map.replace(update_hint, new_pair);
                assert(replaced);
            }
        }

  • How to specify allowed exceptions in WCF's configuration file?

    - by tucaz
    Hello! I'm building a set of WCF services for internal use through all our applications. For exception handling I created a default fault class, so I can return a treated message to the caller when that's the case, or a generic one when I have no clue what happened.

    Fault contract:

        [DataContract(Name = "DefaultFault", Namespace = "http://fnac.com.br/api/2010/03")]
        public class DefaultFault
        {
            public DefaultFault(DefaultFaultItem[] items)
            {
                if (items == null || items.Length == 0)
                {
                    throw new ArgumentNullException("items");
                }

                StringBuilder sbItems = new StringBuilder();
                for (int i = 0; i  [rest of the constructor truncated in the original post]

    Specifying that my method can throw this exception, so the consuming client will be aware of it:

        [OperationContract(Name = "PlaceOrder")]
        [FaultContract(typeof(DefaultFault))]
        [WebInvoke(UriTemplate = "/orders", BodyStyle = WebMessageBodyStyle.Bare,
                   ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json,
                   Method = "POST")]
        string PlaceOrder(Order newOrder);

    Most of the time we will use just .NET to .NET communication with the usual bindings, and everything works fine since we are talking the same language. However, as you can see in the service contract declaration, I have a WebInvoke attribute (and a webHttp binding) in order to be able to also talk JSON, since one of our apps will be built for iPhone and this guy will talk JSON. My problem is that whenever I throw a FaultException and have includeExceptionDetails="false" in the config file, the calling client will get a generic HTTP error instead of my custom message. I understand that this is the correct behavior when includeExceptionDetails is turned off, but I think I saw, a long time ago, some configuration to allow certain exceptions/faults to pass through the service boundaries. Is there such a thing? If not, what do you suggest for my case? Thanks a LOT!

  • OpenMP in Fortran

    - by user345293
    I very rarely use Fortran; however, I have been tasked with taking legacy code and rewriting it to run in parallel. I'm using gfortran as my compiler choice. I found some excellent resources at https://computing.llnl.gov/tutorials/openMP/ as well as a few others. My problem is this: before I add any OpenMP directives, if I simply compile the legacy program:

        gfortran Example1.F90 -o Example1

    everything works, but turning on the openmp compiler option even without adding directives:

        gfortran -openmp Example1.F90 -o Example1

    ends up with a segmentation fault when I run the legacy program. Using smaller test programs that I wrote, I've successfully compiled other programs with -openmp that run on multiple threads, but I'm rather at a loss why enabling the option alone, with no directives, results in a seg fault. I apologize if my question is rather simple. I could post code, but it is rather long. It faults as I assign initial values:

        REAL, DIMENSION(da,da)       :: uconsold
        REAL, DIMENSION(da,da,dr,dk) :: uconsolde
        ...
        uconsold  = 0.0
        uconsolde = 0.0

    The first assignment to uconsold works fine; the second seems to be the source of the fault, as when I comment the line out, the next several lines execute merrily until uconsolde is used again. Thank you for any help in this matter.

  • C Programming. How to deep copy a struct?

    - by user69514
    I have the following two structs, where the child struct has a rusage struct as an element. Then I create two structs of type child; let's call them childA and childB. How do I copy just the rusage struct from childA to childB?

        typedef struct {
            int numb;
            char *name;
            pid_t pid;
            long userT;
            long systemT;
            struct rusage usage;
        } child;

        typedef struct {
            struct timeval ru_utime; /* user time used */
            struct timeval ru_stime; /* system time used */
            long ru_maxrss;          /* maximum resident set size */
            long ru_ixrss;           /* integral shared memory size */
            long ru_idrss;           /* integral unshared data size */
            long ru_isrss;           /* integral unshared stack size */
            long ru_minflt;          /* page reclaims */
            long ru_majflt;          /* page faults */
            long ru_nswap;           /* swaps */
            long ru_inblock;         /* block input operations */
            long ru_oublock;         /* block output operations */
            long ru_msgsnd;          /* messages sent */
            long ru_msgrcv;          /* messages received */
            long ru_nsignals;        /* signals received */
            long ru_nvcsw;           /* voluntary context switches */
            long ru_nivcsw;          /* involuntary context switches */
        } rusage;

    I did the following, but I guess it copies the memory location, because if I change the value of usage in childA, it also changes in childB:

        memcpy(&childA, &childB, sizeof(rusage));

    I know that gives childB all the values from childA. I have already taken care of the other fields in childB; I just need to be able to copy the rusage struct called usage that resides in the child struct.
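
    For what it's worth: struct rusage contains no pointers, so a plain member assignment already deep-copies it, and memcpy needs the usage members (not the whole child structs) as its arguments. A minimal sketch, assuming childA and childB are child values rather than pointers:

        /* Assignment copies every member of the embedded struct by value. */
        childB.usage = childA.usage;

        /* Equivalent memcpy form: note the destination comes first. */
        memcpy(&childB.usage, &childA.usage, sizeof childB.usage);

    The memcpy in the question has the destination and source swapped (it copies from childB into childA), and it copies the first sizeof(rusage) bytes of the whole child struct, starting at numb rather than at usage, which also clobbers the name pointer.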

  • C program - Seg fault, cause of

    - by resonant_fractal
    Running this gives me a seg fault (gcc filename.c -lm) when I enter 6 (int) as a value. Please help me get my head around this. The intended functionality has not yet been implemented, but I need to know why I'm headed into seg faults already. Thanks!

        #include <stdio.h>
        #include <math.h>

        int main (void)
        {
            int l = 5;
            int n, i, tmp, index;
            char * s[] = {"Sheldon", "Leonard", "Penny", "Raj", "Howard"};

            scanf("%d", &n);

            //Solve Sigma(Ai*2^(i-1)) = (n - k)/l
            if (n/l <= 1)
                printf("%s\n", s[n-1]);
            else {
                tmp = n;
                for (i = 1;;) {
                    tmp = tmp - (l * pow(2,i-1));
                    if (tmp <= 5) {
                        // printf("Breaking\n");
                        break;
                    }
                    ++i;
                }
                printf("Last index = %d\n", i);
                // ***NOTE***
                //Value lies in next array, therefore
                ++i;
                index = tmp + pow(2, n-1);
                printf("%d\n", index);
            }
            return 0;
        }
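
    A plausible explanation for the crash at n = 6, for what it's worth: n/l is integer division, so 6/5 evaluates to 1, the n/l <= 1 branch is taken, and s[n-1] reads s[5] in a five-element array. A hedged sketch of the guard that matches the apparent intent:

        /* Integer division makes 6/5 == 1; compare n itself instead. */
        if (n <= l)
            printf("%s\n", s[n - 1]);

    With that change, n = 6 falls through to the else branch instead of indexing past the end of s.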

  • Linux apache developing configuration

    - by Jeffrey Vandenborne
    I recently reinstalled my system and came to a point where I need Apache and PHP. I've been searching a long time, but I can't figure out how to configure Apache the best way for a developer computer. The plan is simple: I want to install Apache 2 plus a MySQL server so I can develop some PHP websites. I don't want to install LAMP though, just apache2, php5 and mysql. The problem that I've been looking for an answer to is the permissions on the /var/www/ folder. I've tried making it my folder using the chown command, followed by chmod -R 755 /var/www. Most things work then, but fwrite for example won't work, because I would need to give write permissions to everyone, and unless I change my global umask to 000, I'm not sure what I can do. In short: I want to install apache2, php5 and mysql-server without using LAMP, but configured in a way so I can open up NetBeans, start a project with its root in /var/www/, and run every single function without permission faults. Does anyone have experience or workarounds with this?

    Extra: OS: Ubuntu 10.04; ARCH: x86_64

  • Question on passing a pointer to a structure in C to a function?

    - by worlds-apart89
    Below, I wrote a primitive singly linked list in C. The function addEditNode MUST receive a pointer by value, which, I am guessing, means we can edit the data the pointer refers to but cannot point it to something else. If I allocate memory using malloc in addEditNode, when the function returns, can I see the contents of first->next? The second question is: do I have to free first->next, or is it only first that I should free? I am running into segmentation faults on Linux.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;

        struct list_node {
            int value;
            list_node_t *next;
        };

        void addEditNode(list_node_t *node)
        {
            node->value = 10;
            node->next = (list_node_t*) malloc(sizeof(list_node_t));
            node->next->value = 1;
            node->next->next = NULL;
        }

        int main()
        {
            list_node_t *first = (list_node_t*) malloc(sizeof(list_node_t));
            first->value = 1;
            first->next = NULL;

            addEditNode(first);
            free(first);

            return 0;
        }
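
    For what it's worth, both answers follow from the fact that the pointer is copied but the node it points to is shared: yes, the malloc done inside addEditNode survives the call and first->next is visible to the caller, and yes, every malloc'd node needs its own free. A minimal sketch of the cleanup that matches this program:

        /* Free the child before the node that points to it;
           free(first) alone leaks first->next. */
        free(first->next);
        free(first);

    For a longer list, the usual pattern is a loop that saves node->next before freeing node.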

  • How costly performance-wise are these actions in iPhone objective-C?

    - by Alex Gosselin
    This is really a few questions in one. I'm wondering what the performance cost is for these things, as I haven't really been following a best practice of any sort for them. The answers may also be useful to other readers, if somebody knows these.

    (1) If I need the Core Data managed object context, is it bad to use

        #import "myAppDelegate.h"
        // farther down in the code:
        NSManagedObjectContext *context = [(myAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];

    as opposed to leaving the warning you get if you don't cast the delegate?

    (2) What is the cheapest way to hard-code a string? I have been using return @"myString"; on occasion in some functions where I need to pass it to a variety of places. Is it better to do it this way:

        static NSString *str = @"myString";
        return str;

    (3) How costly is it to subclass an object I wrote vs. making a new one, in general?

    (4) When I am using Core Data and navigating through a hierarchy of some sort, is it necessary to turn things back into faults somehow after I read some info from them, or is this done automatically?

    Thanks for any help.

  • Things to Avoid in C/C++ [closed]

    - by piemesons
    Possible Duplicate: What C++ pitfalls should I avoid?

    While searching for some information, I stumbled upon this series of small articles, Things to Avoid in C/C++, so I thought of sharing it: "C/C++ programmers are allowed to do some things they shouldn't. We are given functions that are supposed to be useful but aren't because of hidden faults, or taught ways to do things that are bad, wrong, not necessary. These posts will discuss many of these as time goes on."

      - gets(): http://www.gidnetwork.com/b-56.html
      - fflush(stdin): http://www.gidnetwork.com/b-57.html
      - feof(): http://www.gidnetwork.com/b-58.html
      - system("PAUSE"): http://www.gidnetwork.com/b-61.html
      - scanf: http://www.gidnetwork.com/b-59.html
      - scanf / character: http://www.gidnetwork.com/b-60.html
      - scanf / string: http://www.gidnetwork.com/b-62.html
      - scanf / number: http://www.gidnetwork.com/b-63.html
      - scanf / epilogue: http://www.gidnetwork.com/b-64.html
      - void main(): http://www.gidnetwork.com/b-66.html

    As this is a very useful topic, I request all the members to keep adding valuable information to this thread and make it a good source of information for all levels of programmers, especially beginners. Thanks...
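
    To make the first item concrete: gets() has no way to bound how much it reads, so any sufficiently long line overflows the destination buffer (the function was deprecated for exactly this reason and removed in C11). A minimal sketch of the standard replacement:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char line[64];

            /* fgets bounds the read to the buffer size, unlike gets(). */
            if (fgets(line, sizeof line, stdin) != NULL) {
                line[strcspn(line, "\n")] = '\0'; /* strip the trailing newline */
                printf("read: %s\n", line);
            }
            return 0;
        }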

  • Hanging page loads every n loads - SOLVED

    - by Christian
    Hi guys, I recently moved my site to a new server (Apache 2, PHP 5, MySQL 5). The site is an Invision-based forum. Every few posts/topics it just hangs. The data has been written, because if you stop and reload, the post/thread is there. I thought it was a write issue initially, but nope. So, the data is written but the page load never completes; it doesn't leave the page where the data has been input. What's the best way to troubleshoot this issue? The only thing I have done recently is reduce my MySQL timeouts, but I can't see that being an issue, as the values are still big enough and there are no mentions of timeouts in the MySQL log. (For the record, there is nothing in PHP's error log either.) Thanks in advance!

    EDIT: I checked my server-status. It all looked OK, but I have a suspicion I was hitting my ServerLimit, so I doubled that. Also enabled my KeepAlives. Will keep an eye on it.

    EDIT 2: It's now been a few days and this is still occurring. I have more info though: Apache is throwing seg faults, but enabling core dumps does not produce them. I have tried disabling the modules in Apache, but that just stops things from working. I fear it may actually be DNS related. If I watch Live Headers in Firefox, absolutely nothing happens during this 'hanging' period; after that, the responses come back fairly promptly.

    UPDATE (05/04): I built the latest versions of Apache and PHP from source; no luck. I then removed those and used the remi repo to update all my packages to the latest stable. Segfaults seem to have stopped, but the hanging is continuing. The configs are at:

        www.skylinesaustralia.com/php.ini
        www.skylinesaustralia.com/my.cnf
        www.skylinesaustralia.com/httpd.conf

    UPDATE - SOLVED! - The issue was having a gigantic query cache size in MySQL. It was 2GB; changing it to 64M sorted it. Thanks for all the help everybody, much appreciated!!
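
    For reference, the final fix as it would appear in my.cnf (value taken from the post; treat the size as a starting point, since the right value is workload-dependent):

        [mysqld]
        # An oversized query cache stalls under its single global mutex during
        # invalidation; 2G was the problem value here, 64M resolved it.
        query_cache_size = 64M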

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes, what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea what kind of answers I would appreciate, here are some sub-questions (with links):

      - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) for the HPL.dat file (without spending too much time trying each possible permutation, especially with large problem sizes N)?
      - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
      - Which MPI product, which version? Does it make a difference?
      - Any special host order in your MPI machine file?
      - Do you use CPU pinning?
      - How do you configure your interconnect? Which interconnect?
      - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
      - How do you prepare for the big run (on all nodes)? Start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK with a big run on all of the nodes (or is extrapolation allowed)?
      - How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA?
      - Is it worth it to recompile the software stack, or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?)
      - How do you get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever.)
      - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
      - How do you deal with hardware faults (ruining a huge run)?
      - Are there any must-read documents or websites about this topic?

    E.g. I would love to hear some background stories of some of the current Top500 systems and how they did their LINPACK benchmark. I deliberately don't want to mention concrete hardware details or discuss hardware recommendations, because I don't want to limit the answers. However, feel free to mention hints, e.g. for specific CPU models.

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this, so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events:

      1. A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in degraded state.
      2. While the pool is in degraded state, the pool is mutated (data is written and/or changed).
      3. The connectivity issue is physically repaired such that device D is reliable again.
      4. Knowing that most data on D is valid, and not wanting to stress the pool with a resilver needlessly, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected.

    I've read that zpool clear only clears the error counter and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device; however, the pool status will not reflect this issue!

    I would at least assume, based on ZFS' robust integrity mechanisms, that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems:

      1. Reads are not guaranteed to hit all mutations unless a scrub is done; and
      2. Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again, because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures.

    Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware when it is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware because it is not performing at its best. I had a nightmare related to this issue in a deal with an infrastructure team, as I suggested that they replace their faulty hardware. This is because they were initially not accepting the fact that it was the fault of their hardware. It is really easy to say "Trust me, I am correct", but it is equally important that you put some logical reasoning along with this statement. PAGEIOLATCH_XX is the kind of wait stat that we would directly like to blame on the underlying subsystem. Of course, most of the time, that is correct: the underlying subsystem is usually the problem.

    From Book On-Line:

      - PAGEIOLATCH_DT: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem.
      - PAGEIOLATCH_EX: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
      - PAGEIOLATCH_KP: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem.
      - PAGEIOLATCH_SH: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
      - PAGEIOLATCH_UP: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem.

    PAGEIOLATCH_XX explanation: Simply put, this particular wait type occurs when a task is waiting for data from the disk to move to the buffer cache.

    Reducing PAGEIOLATCH_XX waits: Just like any other wait type, this is again a very challenging and interesting subject to resolve. Here are a few things you can experiment on:

      - Improve your IO subsystem speed (read the first paragraph of this article, if you have not read it; I repeat that it is easier to say a step like this than to actually implement or do it).
      - This type of wait stat can also happen due to memory pressure or any other memory issue. Putting aside the issue of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters; if, for any reason, the memory is not optimal and unable to receive the IO data, that situation can create this kind of wait type.
      - Proper placement of files is very important. We should check the file system for the proper placement of files: LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables in a separate filegroup (and on a separate disk), etc.
      - Check the file statistics and see if there are higher IO Read and IO Write Stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
      - It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate cover index instead of the clustered index, it can significantly reduce CPU, memory and IO (considering the cover index has far fewer columns than the clustered table, and all the other "it depends" conditions). You can refer to the two article links below, previously written by me, that talk about how to optimize indexes:
          - Create Missing Indexes
          - Drop Unused Indexes
      - Updating statistics can help the Query Optimizer to render an optimal plan, whether directly or indirectly. I have seen that updating statistics with full scan (again, if your database is huge and you cannot do this, never mind!) can provide optimal information to the SQL Server optimizer, leading to an efficient plan.

    Checking memory-related Perfmon counters:

      - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
      - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
      - SQLServer: Buffer Manager\Buffer Hit Cache Ratio (higher is better; greater than 90% for a usually smooth-running system)
      - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
      - Memory: Available Mbytes (information only)
      - Memory: Page Faults/sec (benchmark only)
      - Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:

      - Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good)
      - Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (a consistently higher value than the benchmark is not good)

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All of the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object.

    From Book On-Line:

      - LCK_M_BU: Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
      - LCK_M_IS: Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
      - LCK_M_IU: Occurs when a task is waiting to acquire an Intent Update (IU) lock.
      - LCK_M_IX: Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
      - LCK_M_S: Occurs when a task is waiting to acquire a Shared lock.
      - LCK_M_SCH_M: Occurs when a task is waiting to acquire a Schema Modify lock.
      - LCK_M_SCH_S: Occurs when a task is waiting to acquire a Schema Share lock.
      - LCK_M_SIU: Occurs when a task is waiting to acquire a Shared With Intent Update lock.
      - LCK_M_SIX: Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
      - LCK_M_U: Occurs when a task is waiting to acquire an Update lock.
      - LCK_M_UIX: Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
      - LCK_M_X: Occurs when a task is waiting to acquire an Exclusive lock.

    LCK_M_XXX explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operation may be going on within it. This wait also indicates that resources are not available or are occupied at the moment due to some reason. There is a good chance that the waiting queries start to time out if this wait type is very high; the client application may see degraded performance as well.

    You can use various methods to find blocking queries:

      - EXEC sp_who2
      - SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
      - DMV – sys.dm_tran_locks
      - DMV – sys.dm_os_waiting_tasks

    Reducing LCK_M_XXX waits:

      - Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep transactions small.
      - Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is Read Committed. One of my clients changed their isolation to Read Uncommitted; I strongly discourage the use of this, because it will probably lead to lots of dirty data in the database.
      - Identify blocking queries using the various methods described above, and then optimize them.
      - Partitioning can be one of the options to consider, because it allows transactions to execute concurrently on different partitions.
      - If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first, as a timeout can work against you.)
      - Check that there is no memory or IO-related issue, using the following counters:

    Checking memory-related Perfmon counters:

      - SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2)
      - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
      - SQLServer: Buffer Manager\Buffer Hit Cache Ratio (higher is better; greater than 90% for a usually smooth-running system)
      - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds)
      - Memory: Available Mbytes (information only)
      - Memory: Page Faults/sec (benchmark only)
      - Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:

      - Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good)
      - Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (a consistently higher value than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • The JRockit Performance Counters

    - by Marcus Hirt
    Every now and then I get a question regarding what the attributes in the PerfCounters dynamic MBean represent. Now, all the MBeans under the oracle.jrockit.management (bea.jrockit.management pre R28) domain are part of what we call JMXMAPI (the JRockit JMX-based Management API), which is unsupported; therefore there is no official documentation for the API. I did, however, write a bit about JMXMAPI in my recent JRockit book, Oracle JRockit: The Definitive Guide. The information in the table below is from that book:

      java.cls.loadedClasses: The number of classes loaded since the start of the JVM.
      java.cls.unloadedClasses: The number of classes unloaded since the start of the JVM.
      java.property.java.class.path: The class path of the JVM.
      java.property.java.endorsed.dirs: The endorsed dirs. See the Endorsed Standards Override Mechanism.
      java.property.java.ext.dirs: The ext dirs, which are searched for jars that should be automatically put on the classpath. See the Java documentation for java.ext.dirs.
      java.property.java.home: The root of the JDK or JRE installation.
      java.property.java.library.path: The library path used to find user libraries.
      java.property.java.vm.version: The JRockit version.
      java.rt.vmArgs: The list of VM arguments.
      java.threads.daemon: The number of running daemon threads.
      java.threads.live: The total number of running threads.
      java.threads.livePeak: The peak number of threads that has been running since JRockit was started.
      java.threads.nonDaemon: The number of non-daemon threads running.
      java.threads.started: The total number of threads started since the start of JRockit.
      jrockit.gc.latest.heapSize: The current heap size in bytes.
      jrockit.gc.latest.nurserySize: The current nursery size in bytes.
      jrockit.gc.latest.oc.compaction.time: How long, in ticks, the last compaction lasted. Reset to 0 if compaction is skipped.
      jrockit.gc.latest.oc.heapUsedAfter: Used heap at the end of the last OC, in bytes.
      jrockit.gc.latest.oc.heapUsedBefore: Used heap at the start of the last OC, in bytes.
      jrockit.gc.latest.oc.number: The number of OCs that have occurred so far.
      jrockit.gc.latest.oc.sumOfPauses: The paused time for the last OC, in ticks.
      jrockit.gc.latest.oc.time: The time the last OC took, in ticks.
      jrockit.gc.latest.yc.sumOfPauses: The paused time for the last YC, in ticks.
      jrockit.gc.latest.yc.time: The time the last YC took, in ticks.
      jrockit.gc.max.oc.individualPause: The longest OC pause so far, in ticks.
      jrockit.gc.max.yc.individualPause: The longest YC pause so far, in ticks.
      jrockit.gc.total.oc.compaction.externalAborted: Number of aborted external compactions so far.
      jrockit.gc.total.oc.compaction.internalAborted: Number of aborted internal compactions so far.
      jrockit.gc.total.oc.compaction.internalSkipped: Number of skipped internal compactions so far.
      jrockit.gc.total.oc.compaction.time: The total time spent doing compaction so far, in ticks.
      jrockit.gc.total.oc.compaction.externalSkipped: Number of skipped external compactions so far.
      jrockit.gc.total.oc.pauseTime: The sum of all OC pause times so far, in ticks.
      jrockit.gc.total.oc.time: The total time spent doing OC so far, in ticks.
      jrockit.gc.total.pageFaults: The number of page faults that have occurred during GC so far.
      jrockit.gc.total.yc.pauseTime: The sum of all YC pause times, in ticks.
      jrockit.gc.total.yc.promotedObjects: The number of objects that all YCs have promoted.
      jrockit.gc.total.yc.promotedSize: The total number of bytes that all YCs have promoted, in bytes.
      jrockit.gc.total.yc.time: The total time spent doing YC, in ticks.
      oracle.ci.jit.count: The number of methods JIT compiled.
      oracle.ci.jit.timeTotal: The total time spent JIT compiling, in ticks.
      oracle.ci.opt.count: The number of methods optimized.
      oracle.ci.opt.timeTotal: The total time spent optimizing, in ticks.
      oracle.rt.counterFrequency: Used to convert ticks values to seconds.

    Note that many of these counters are excellent choices for attributes to plot in the Management Console. Also note that many values are in ticks; to convert them to seconds, divide by the value in the oracle.rt.counterFrequency counter.

  • Token based Authentication for WCF HTTP/REST Services: Authentication

    - by Your DisplayName here!
    This post shows some of the implementation techniques for adding token- and claims-based security to HTTP/REST services written with WCF. For the theoretical background, see my previous post.

    Disclaimer: The framework I am using/building here is not the only possible approach to tackle the problem. Based on customer feedback and requirements, the code has gone through several iterations to a point where we think it is ready to handle most of the situations.

    Goals and requirements:

      - The framework should be able to handle typical scenarios like username/password-based authentication, as well as token-based authentication.
      - The framework should allow adding new supported token types.
      - It should work with the WCF web programming model, either self-hosted or IIS-hosted.
      - Service code can rely on an IClaimsPrincipal on Thread.CurrentPrincipal that describes the client using claims-based identity.

    Implementation overview: In WCF the main extensibility point for this kind of security work is the ServiceAuthorizationManager. It gets invoked early enough in the pipeline, has access to the HTTP protocol details of the incoming request, and can set Thread.CurrentPrincipal. The job of the SAM is simple:

      - Check the Authorization header of the incoming HTTP request.
      - Check if a "registered" token (more on that later) is present.
      - If yes, validate the token using a security token handler, create the claims principal (including claims transformation) and set Thread.CurrentPrincipal.
      - If no, set an anonymous principal on Thread.CurrentPrincipal. By default, anonymous principals are denied access, so the request ends here with a 401 (more on that later).

    To wire up the custom authorization manager you need a custom service host, which in turn needs a custom service host factory. The full object model looks like this: (class diagram in the original post).

    Token handling: A nice piece of existing WIF infrastructure are security token handlers. Their job is to serialize a received security token into a CLR representation, validate the token and turn the token into claims. The way this works with WS-Security-based services is that WIF passes the name/namespace of the incoming token to WIF's security token handler collection. This in turn finds out which token handler can deal with the token and returns the right instances. For HTTP-based services we can do something very similar: the scheme on the Authorization header gives the service a hint how to deal with an incoming token. So the only missing link is a way to associate a token handler (or multiple token handlers) with a scheme, and we are (almost) done. WIF already includes token handlers for a variety of tokens like username/password or SAML 1.1/2.0. The accompanying sample has an implementation for a Simple Web Token (SWT) token handler, and as soon as JSON Web Tokens are ready, simply adding a corresponding token handler will add support for that token type, too. All supported schemes/token types are organized in a WebSecurityTokenHandlerCollectionManager and passed into the host factory/host/authorization manager. Adding support for basic authentication against a membership provider would, e.g., look like this (in global.asax):

        var manager = new WebSecurityTokenHandlerCollectionManager();
        manager.AddBasicAuthenticationHandler((username, password) =>
            Membership.ValidateUser(username, password));

    Adding support for Simple Web Tokens with a scheme of Bearer (the current OAuth2 scheme) requires passing in an issuer, audience and signature verification key:

        manager.AddSimpleWebTokenHandler(
            "Bearer",
            "http://identityserver.thinktecture.com/trust/initial",
            "https://roadie/webservicesecurity/rest/",
            "WFD7i8XRHsrUPEdwSisdHoHy08W3lM16Bk6SCT8ht6A=");

    In some situations, SAML tokens may be used as well. The following configures SAML support for a token coming from ADFS2:

        var registry = new ConfigurationBasedIssuerNameRegistry();
        registry.AddTrustedIssuer(
            "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db",
            "ADFS");

        var adfsConfig = new SecurityTokenHandlerConfiguration();
        adfsConfig.AudienceRestriction.AllowedAudienceUris.Add(
            new Uri("https://roadie/webservicesecurity/rest/"));
        adfsConfig.IssuerNameRegistry = registry;
        adfsConfig.CertificateValidator = X509CertificateValidator.None;

        // token decryption (read from config)
        adfsConfig.ServiceTokenResolver =
            IdentityModelConfiguration.ServiceConfiguration.CreateAggregateTokenResolver();

        manager.AddSaml11SecurityTokenHandler("SAML", adfsConfig);

    Transformation: The custom authorization manager will also try to invoke a configured claims authentication manager. This means that the standard WIF claims transformation logic can be used here as well and, even better, can also be shared with e.g. a "surrounding" web application.

    Error handling: A WCF error handler takes care of turning "access denied" faults into 401 status codes, and a message inspector adds the registered authentication schemes to the outgoing WWW-Authenticate header when a 401 occurs. The next post will conclude with authorization, as well as the source code download.

    (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

  • Computer Networks UNISA - Chap 15 &ndash; Network Management

    - by MarkPearl
    After reading this section you should be able to:

      - Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health.
      - Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques.
      - Identify the reasons for and elements of an asset management system.
      - Plan and follow regular hardware and software maintenance routines.

    Fundamentals of Network Management

    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation

    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:

      - Physical topology (types of LAN and WAN topologies: ring, star, hybrid)
      - Access method (does it use Ethernet 802.3, token ring, etc.)
      - Protocols
      - Devices (switches, routers, etc.)
      - Operating systems
      - Applications
      - Configurations (what version of operating system, and config files for server/client software)

    Baseline Measurements

    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate for your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations

    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures and regulations make for sound network management:

      - Media installations and management (includes designing the physical layout of cable, etc.)
      - Network addressing policies (includes choosing and applying an addressing scheme)
      - Resource sharing and naming conventions (includes rules for logon IDs)
      - Security-related policies
      - Troubleshooting procedures
      - Backup and disaster recovery procedures

    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management

    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software

    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. There are various software packages that do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base).

    Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility, sophisticated network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (i.e. polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command-line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

    System and Event Logs

    Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer; similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping

    When a network must handle high volumes of network traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:

      - Protocol
      - IP address
      - User group
      - DiffServ
      - VLAN tag in a Data Link layer frame
      - Service or application

    Caching

    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just a convenience: it prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management

    Another key component in managing networks is identifying and tracking its hardware; this is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management

    Networks are always in a state of flux, with changes including:

      - Software changes and patches
      - Client upgrades
      - Shared application upgrades
      - NOS upgrades
      - Hardware and physical plant changes
      - Cabling upgrades
      - Backbone upgrades

    For a detailed explanation of each of these, read the textbook (pages 750-761).

  • Access Control Service: Handling Errors

    - by Your DisplayName here!
    Another common problem with external authentication is how to deal with sign-in errors. In active federation like WS-Trust there are well-defined SOAP faults to communicate problems to a client. But with web applications, the error information is typically generated and displayed on the external sign-in page; the relying party does not know about the error, nor can it help the user in any way. The Access Control Service allows posting sign-in errors to a specified page. You set this page up in the relying party registration. That means that whenever an error occurs in ACS, the error information gets packaged up as a JSON string and posted to the page specified. This way you get structured error information back into your application, so you can display a friendlier error message or log the error. I added error page support to my ACS2 sample, which can be downloaded here.

    How to turn the JSON error into CLR types: The JSON schema is reasonably simple; the following class turns the JSON into an object:

        [DataContract]
        public class AcsErrorResponse
        {
            [DataMember(Name = "context", Order = 1)]
            public string Context { get; set; }

            [DataMember(Name = "httpReturnCode", Order = 2)]
            public string HttpReturnCode { get; set; }

            [DataMember(Name = "identityProvider", Order = 3)]
            public string IdentityProvider { get; set; }

            [DataMember(Name = "timeStamp", Order = 4)]
            public string TimeStamp { get; set; }

            [DataMember(Name = "traceId", Order = 5)]
            public string TraceId { get; set; }

            [DataMember(Name = "errors", Order = 6)]
            public List<AcsError> Errors { get; set; }

            public static AcsErrorResponse Read(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(AcsErrorResponse));
                var response = serializer.ReadObject(
                    new MemoryStream(Encoding.Default.GetBytes(json))) as AcsErrorResponse;

                if (response != null)
                {
                    return response;
                }
                else
                {
                    throw new ArgumentException("json");
                }
            }
        }

        [DataContract]
        public class AcsError
        {
            [DataMember(Name = "errorCode", Order = 1)]
            public string Code { get; set; }

            [DataMember(Name = "errorMessage", Order = 2)]
            public string Message { get; set; }
        }

    Retrieving the error information: You then need to provide a page that takes the POST and deserializes the information. My sample simply fills a view that shows all the information, but that's for diagnostic/sample purposes only; you shouldn't show the real errors to your end users.

        public class SignInErrorController : Controller
        {
            [HttpPost]
            public ActionResult Index()
            {
                var errorDetails = Request.Form["ErrorDetails"];
                var response = AcsErrorResponse.Read(errorDetails);

                return View("SignInError", response);
            }
        }

    Also keep in mind that the error page is an anonymous page and that you are taking external input, so all the usual input validation applies.