Search Results

Search found 23427 results on 938 pages for 'christopher done'.


  • Optimal communication pattern to update subscribers

    - by hpc
    What is the optimal way to update a subscriber's local model when a change C is applied to a central model M (M + C = M_c)? The update can be done by the following methods:
    1. Publish the updated model M_c to all subscribers. Drawback: if the model is big compared to the change, far more data is communicated than necessary.
    2. Publish the change C to all subscribers, who then update their local model the same way the server does. Drawback: the client needs to know the business logic to update the model exactly as the server does, and it must be ensured that the subscribed model stays equal to the central model.
    3. Calculate the delta (or patch) of the change (M_c - M = D_c) and transfer the delta. Drawback: this requires that calculating and applying the delta (M + D_c = M_c) is a cheap/easy operation.
    If a client newly subscribes it must be initialized, which involves sending the current model M, so method 1 is always required. Think of chess as a concrete example: subscribers send moves and want to see the latest board state. The server checks the validity of each move and applies it to the board. The server can then send the updated board (method 1), just the move (method 2), or the delta (method 3): remove the piece on D4, put a rook on D8.
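
    A minimal sketch of method 3 in Python, using the chess example: the board is assumed to be a dict mapping square names to pieces, and all names here are illustrative rather than taken from any particular library.

        # Method 3 (delta/patch), assuming the model is a dict of square -> piece.

        def compute_delta(old, new):
            """Return the changes needed to turn `old` into `new` (M_c - M = D_c)."""
            delta = {}
            for square in old.keys() | new.keys():
                if old.get(square) != new.get(square):
                    delta[square] = new.get(square)   # None means "square is now empty"
            return delta

        def apply_delta(model, delta):
            """Apply a delta in place (M + D_c = M_c)."""
            for square, piece in delta.items():
                if piece is None:
                    model.pop(square, None)
                else:
                    model[square] = piece
            return model

        # Example: a rook moves from D4 to D8.
        before = {"D4": "white rook", "E1": "white king"}
        after = {"D8": "white rook", "E1": "white king"}
        delta = compute_delta(before, after)   # {'D4': None, 'D8': 'white rook'}
        apply_delta(before, delta)
        assert before == after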

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server? For a user profile image: after receiving the image file from the user, rename the file to <image_id>_<username>, move it to /images/userprofile, and add the filename to a users table containing profile details like first_name, last_name, age, gender and birthday. For an image attached to a review written by a user: after receiving the image file, rename it to <image_id>_<review_id>, move it to /images/reviews, and add the filename to a reviews table containing review details like review_id, review_content, user_id and score.
    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize?
    Question 2: Or have another table review_images with columns review_id, image_id, image_filename just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?
    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?
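
    For Question 2, a separate review_images(review_id, image_id, image_filename) table is the usual normalized answer; a JOIN on an indexed review_id is cheap compared to parsing a serialized column. For Question 3, one common trick is to derive a couple of subdirectory levels from a hash of the filename so no single directory grows without bound. A hedged Python sketch of that layout (paths and names are illustrative):

        import hashlib
        import os

        # Shard uploads across two levels of subdirectories derived from a hash
        # of the filename, e.g. /images/reviews/3f/a1/1042_337.jpg, so no single
        # folder ever holds hundreds of thousands of files.

        def sharded_path(base_dir, filename):
            digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
            return os.path.join(base_dir, digest[:2], digest[2:4], filename)

        path = sharded_path("/images/reviews", "1042_337.jpg")
        os.makedirs(os.path.dirname(path), exist_ok=True)
        # ...then move the uploaded temp file to `path` and store the filename
        # (or the full path) in the review_images row.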

    Read the article

  • How to schedule time-of-day upgrades

    - by Richard
    Hello, I'm responsible for about 30 Ubuntu computers at a private K-8 school. We have only a 3Mbps internet connection serving the entire campus, and I would like to ensure that updates are done in the middle of the night, so that daytime tasks are not slowed down. I'm using Ubuntu 10.04, and have set all computers to download and install security updates via the update manager. I have also installed cron-apt, and modified the config file to stagger the start times of the upgrades from about 10pm to 4am local time. However, this morning I arrived at the school at 7:30am and all the computers were busy downloading a large security update. Needless to say, all internet activity was slowed to a crawl for the next 2 hours, and the computer users were very, very upset. This was exactly the event I'm trying so hard to prevent. It seems that my scheme to ensure middle-of-the-night downloads failed, and I'm not sure why. I've also tried some schemes using unattended-upgrades and crontab, but there always seemed to be something scheduling upgrades in addition to the ones I try to force in the middle of the night. Is there a sure-fire way to absolutely, positively guarantee that updates will occur only at one specific time? It would be nice if the update manager just had a drop-down menu to specify a designated time. Thanks in advance for any help you can give me.

    Read the article

  • How to kill all screens that have been up longer than 3 weeks?

    - by Darkmage
    I'm creating a script, executed every night at 03:00, that will kill all screens that have been running longer than 3 weeks. Has anyone done anything similar that can help? If you have a script or a suggestion for a better method, please post it :) I was thinking of something like this: first dump the process list to a text file with ps -U username -ef | grep SCREEN > dump.txt, then loop through all lines of dump.txt with a regex, putting the PIDs of the processes whose STIME is three weeks ago or older into an array, then run a kill loop over the array.
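
    One way to avoid parsing STIME by hand is to ask ps for each process's elapsed time in seconds. A hedged Python sketch of the whole job; it assumes a ps that supports the etimes output field (procps-ng does, older procps may not), and "username" is the placeholder from the question:

        #!/usr/bin/env python
        # Kill every SCREEN process owned by `username` that has been running
        # longer than three weeks, using ps's elapsed-seconds (etimes) field.

        import os
        import signal
        import subprocess

        CUTOFF = 21 * 24 * 3600  # three weeks, in seconds

        out = subprocess.check_output(
            ["ps", "-U", "username", "-o", "pid=,etimes=,args="],
            universal_newlines=True,
        )

        for line in out.splitlines():
            if not line.strip():
                continue
            pid, etimes, args = line.split(None, 2)
            if "SCREEN" in args and int(etimes) > CUTOFF:
                os.kill(int(pid), signal.SIGTERM)  # escalate to SIGKILL if needed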

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?

    Read the article

  • Why does PsExec hang after successfully running a powershell script?

    - by Matt
    The script is fairly straightforward: it simply tries to start a bunch of Windows services. Execution works fine when run locally on the target machine. The script actually executes fine via PsExec as well; it just never returns until I hit the Enter key at my CMD prompt. This is a problem, because it is being called from TeamCity, and it makes the agent hang waiting for PsExec to return. I've tried the following: adding an exit and exit 0 at the end of the PowerShell script; adding a < NUL to the end of the PsExec call, per the answer in this SF question; adding a > stdout redirect. This is how I am actually calling psexec: psexec \\target -u domain\username -p password powershell c:\path\script.ps1 No matter what I do, it hangs until I hit Enter locally at the cmd prompt. After I hit Enter, I get the message: powershell exited on target with error code 0.

    Read the article

  • Setting up a PXE (i.e. DHCP + TFTP) server on Mac OS X

    - by Albert
    What is the easiest way to set up a PXE server (i.e. a DHCP server + TFTP server) on Mac OS X? Is there maybe some easy-to-use tool that comes with both servers built in? I have done this in the past, but I remember that it took me several hours of editing config files for tftpd (I think) and different versions of dhcpd, and many trials and errors until I got it (mostly) working. Now I have a fresh Mac OS X installation and I want to avoid any complicated setup.

    Read the article

  • With more mobile users, my geo IP database is becoming useless.

    - by Marius
    Hello there, I've been enjoying the benefits of geo IP lookup from a database for some time. It's great. People are increasingly trying to access my site from mobile phones or 3G modems, and their physical location seems to have little relation to where my IP lookup tells me they are. A user who is on the east coast of my country may be looked up as being far inland, or up north, and one user may be reported as being in one location one moment and, seconds later, hundreds of kilometers away. This is becoming a problem, and I need to find a solution. I am already updating my database monthly, but it has little effect. What can be done? Thank you for your time. Kind regards, Marius

    Read the article

  • How do I make a privileged port non-privileged in Redhat 5?

    - by Jason Thompson
    So I have a RedHat 5 box on which I want to run an application I wrote that implements SLP. SLP uses port 427 for answering service queries. My understanding is that ports below 1024 are "privileged" and thus cannot be bound to by anyone who is not root. I cannot run this application as root, as it is launched via Tomcat. One creative solution I really like was simply writing an iptables rule to redirect the privileged port to a non-privileged one. In my proof-of-concept tests, this works wonderfully. Unfortunately, the powers that be would greatly (and understandably) prefer that my application did not require screwing around with iptables upon installation. So I heard a rumor, and cannot find anything to verify it, that there is some sort of command or parameter that could be set to make any port I want non-privileged. Is this true? If so, how is this done? Thanks!

    Read the article

  • Using Clojure instead of Python for scalability (multi-core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, ease of use aside, I shouldn't be coding in Python anymore but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does this make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or another functional language? It seems that all of its benefits come from using immutable data; can't I just do that in Python and get the same benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I think some things are more imperative in nature, which makes it difficult to model them in a functional way, I guess. The thing is: is Python as powerful as Clojure for building applications that take advantage of this multi-core future? Note that I don't consider semaphores, locks, or other similar concurrency mechanisms to be good alternatives to Clojure's 'automatic' parallelization.
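
    On the "all cores from Python" point: CPython threads share the Global Interpreter Lock, so CPU-bound work is usually spread across cores with separate processes rather than threads. A small sketch using the standard multiprocessing module (the workload is just a stand-in):

        # CPU-bound work across cores in CPython: threads contend for the GIL,
        # so the multiprocessing module forks one worker process per core.

        from multiprocessing import Pool

        def count_primes(n):
            """Deliberately CPU-heavy stand-in workload."""
            count = 0
            for candidate in range(2, n):
                if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                    count += 1
            return count

        if __name__ == "__main__":
            with Pool() as pool:                    # defaults to one worker per core
                results = pool.map(count_primes, [50000] * 8)
            print(results)

    This gives parallelism but not Clojure's shared immutable data structures: each worker gets its own copy of the arguments, which is exactly the trade-off the question is circling.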

    Read the article

  • What type of pattern would be used in this case

    - by Admiral Kunkka
    I want to know how to tackle this type of scenario. We are building a person's background from scratch, and I want to know, conceptually, how to proceed with a secure object pattern in both design and execution. I've been reading on factory patterns, Model-View-Controller, dependency injection, singleton approaches... and I can't seem to grasp or 'fit' these types of design decisions into what I'm trying to do. First and foremost, I started with having a big jack-of-all-trades class, then I read some more, and one tip was to make sure your classes only have a single purpose, which makes sense, so I started breaking certain things down into other classes. Okay, cool. Now I'm looking at dependency injection and don't really know what's going on. An example/insight of the kind of hierarchy I need to accomplish: class Person needs to access and build from a multitude of different classes; class Culture needs to access a sub-class for culture benefits; class Social needs to access class Culture and other sub-classes; class Birth needs to access Social, Culture and other sub-classes; classes Childhood/Adolescence/Adulthood need to access everything. Also, depending on different roles, this class hierarchy needs to create multiple people as well, such as a Family and their backgrounds, using some if not all of these same classes. Think of it as a people generator, all random, with backgrounds and things that happen to them: ageing, deaths of loved ones, military careers, etc. Most of the generation is done randomly, making calls to mt_rand to pick from most of the selections inside the classes, guaranteeing the data to be absolutely random. I have most of the bulk data down, and was looking for some insight from fellow programmers. What do you think?
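
    Dependency injection here mostly means constructor injection: each class is handed the objects it depends on instead of building them itself, and the wiring happens in one place at the top. A sketch of the idea, written in Python for brevity (the question implies PHP); the class names come from the question and the bodies are placeholders:

        import random

        # Constructor injection: each class receives its collaborators instead of
        # constructing them itself.

        class Culture:
            def random_benefit(self):
                return random.choice(["bonus language", "craft skill", "trade contacts"])

        class Social:
            def __init__(self, culture):
                self.culture = culture            # injected, not built internally

            def random_standing(self):
                return random.choice(["peasant", "merchant", "noble"])

        class Birth:
            def __init__(self, culture, social):
                self.culture = culture
                self.social = social

            def generate(self):
                return {
                    "standing": self.social.random_standing(),
                    "benefit": self.culture.random_benefit(),
                }

        class Person:
            def __init__(self, birth):
                self.background = birth.generate()

        # The wiring happens in one place, at the top:
        culture = Culture()
        social = Social(culture)
        person = Person(Birth(culture, social))
        print(person.background)

    Because Social, Birth and the rest never construct their own collaborators, the same classes can be reused to generate a whole Family, or swapped for fixed (non-random) versions in tests.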

    Read the article

  • Authenticate native mobile app using a REST API

    - by Supercell
    I'm starting a new project soon, targeting mobile applications on all major mobile platforms (iOS, Android, Windows). It will be a client-server architecture. The app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API, over HTTPS of course. I haven't yet decided whether I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions: 1) Like the Facebook application, you only enter your credentials when you open the application for the first time; after that, you're automatically signed in every time you open the app. How does one accomplish this? Simply by encrypting and storing the credentials on the device and sending them every time the app starts? 2) Do I need to authenticate the user for each (transactional) request made to the REST API, or use a token-based approach? Please feel free to suggest other ways of handling authentication. Thanks!
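
    For question 2, a common pattern is: the client sends credentials once to a login endpoint, the server returns a signed, expiring token, and the client stores that token (not the password) and sends it with every subsequent request. That also answers question 1: the Facebook-style "log in once" behaviour is just a stored token rather than stored credentials. A hedged sketch of HMAC-signed tokens; in practice you would lean on an established scheme (OAuth 2, JWT) rather than rolling your own:

        import hashlib
        import hmac
        import time

        SECRET = b"server-side secret, never shipped inside the app"

        def issue_token(user_id, ttl=30 * 24 * 3600):
            """Return 'user_id:expiry:signature' after a successful login."""
            expiry = int(time.time()) + ttl
            payload = "%s:%d" % (user_id, expiry)
            sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return payload + ":" + sig

        def verify_token(token):
            """Return the user_id if the token is genuine and unexpired, else None."""
            try:
                user_id, expiry, sig = token.rsplit(":", 2)
            except ValueError:
                return None
            payload = "%s:%s" % (user_id, expiry)
            expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                return None
            if int(expiry) < time.time():
                return None
            return user_id

        token = issue_token("alice")
        assert verify_token(token) == "alice"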

    Read the article

  • Can I write an Excel macro to find product info based on a SKU?

    - by GorillaSandwich
    My coworker wants to create an invoice template in Excel 2007. In column 1, he wants to be able to put in a SKU like '000293954'[1], and when he hits tab, have the other columns fill in a matching description and price. There would be a bunch of different SKUs and information. Has anybody done this type of thing with a macro before? Any advice? (I have programming experience with Javascript, PHP, and Ruby, but have never written a macro.) [1] The input wouldn't be typed - he'd use a wedge barcode scanner that inputs just like it was typed. Not that it matters for this question.

    Read the article

  • How to get Bash shell history range

    - by Aniti
    How can I get/filter history entries in a specific range? I have a large history file and frequently use history | grep somecommand. Now, my memory is pretty bad and I also want to see what else I did around the time I entered the command. For now I do this: get a match, say 4992 somecommand, then run history | grep 49[0-9][0-9]. This is usually good enough, but I would much rather do it more precisely, that is, see commands 4972 to 5012: 20 commands before and 20 after the match. I am wondering if there is an easier way? I suspect a custom script is in order, but perhaps someone else has done something similar before.
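
    A small Python sketch of the "show me the neighbourhood of each match" idea, reading ~/.bash_history directly; note that line numbers in that file only approximate the numbers history prints (timestamps and multi-line commands can shift them), so treat it as an illustration:

        import os

        # Print every history line containing PATTERN plus CONTEXT lines of
        # context on each side, with approximate history numbers.

        PATTERN = "somecommand"
        CONTEXT = 20

        with open(os.path.expanduser("~/.bash_history")) as f:
            lines = f.read().splitlines()

        for i, line in enumerate(lines):
            if PATTERN in line:
                start = max(0, i - CONTEXT)
                end = min(len(lines), i + CONTEXT + 1)
                for j in range(start, end):
                    print("%5d  %s" % (j + 1, lines[j]))
                print("-" * 40)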

    Read the article

  • Linux (non-transparent) per-process hugepage accounting

    - by Dan Pritts
    I've recently converted some Java apps to run with Linux manually-configured hugepages. I've got about 10 Tomcats running on a system and I am interested in knowing how much memory each one is using. I can get summary information out of /proc/meminfo as described in Linux Huge Pages Usage Accounting, but I can't find any tools that tell me about the actual per-process hugepage usage. I poked around in /proc/<pid>/numa_maps and found some interesting information that led me to this grossity:

        function pshugepage () {
            HUGEPAGECOUNT=0
            for num in `grep 'anon_hugepage.*dirty=' /proc/$@/numa_maps | awk '{print $6}' | sed 's/dirty=//'` ; do
                HUGEPAGECOUNT=$((HUGEPAGECOUNT+num))
            done
            echo process $@ using $HUGEPAGECOUNT huge pages
        }

    The numbers it gives me are plausible, but I'm far from confident this method is correct. Environment is a quad-CPU Dell, 64GB RAM, RHEL 6.3, Oracle JDK 1.7.x (current as of 2013-07-28).
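
    The same accounting expressed as a Python sketch, which finds the dirty= token by name instead of assuming it is always the sixth field of the numa_maps line; the underlying approach (summing dirty= on anon_hugepage lines) is the question's own and carries the same uncertainty:

        import re
        import sys

        # Sum the dirty= page counts on anon_hugepage lines in /proc/<pid>/numa_maps.

        def hugepages_for(pid):
            total = 0
            with open("/proc/%s/numa_maps" % pid) as f:
                for line in f:
                    if "anon_hugepage" in line and "dirty=" in line:
                        match = re.search(r"dirty=(\d+)", line)
                        if match:
                            total += int(match.group(1))
            return total

        if __name__ == "__main__":
            for pid in sys.argv[1:]:
                print("process %s using %d huge pages" % (pid, hugepages_for(pid)))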

    Read the article

  • Two users using the same user profile while not in a domain.

    - by Scott Chamberlain
    I have a Windows Server 2003 machine acting as a terminal server; this computer is not a member of any domain. We demo our product on the server by creating a user account. The person logs in, uses the demo for a few weeks, and when they are done we delete the user account. However, every time we do this it creates a new folder in C:\Documents and Settings\. I know that with domains you can have many users point at one profile and make it read-only so all changes are dumped afterwards, but is there a way to do that when the machine is not on a domain? I would really like it if I didn't have to remote in and clean up the folders every time.

    Read the article

  • Tunneling over HTTP

    - by Morgan
    Hello, I have a network at work that is locked behind a firewall, and Internet access is available only through a proxy server. At work, I can connect to databases that are distributed across the network. However, at home, I cannot connect to the proxy server or the databases. How can this be done? I can access my workstation via LogMeIn, so I can install anything on it. I thought of installing some kind of tunneling mechanism on my workstation. Then, at home, I could connect to this mechanism, which would in turn make the required connections. So essentially, what I'd like to do can be represented by the following diagram: Home -> Workstation -> Database. For example, whenever I connect to, say, 10.140.0.1:1234 at home, this would be redirected to 10.140.0.1:1234 via my workstation, because 10.140.0.1:1234 is only available from the corporate network. NOTE: I'm using Windows XP.
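
    If any SSH access exists, the standard answer is an SSH tunnel (ssh -L, or port forwarding in PuTTY on Windows). The mechanism the question describes, a listener that relays every connection onward to the database host, is plain TCP forwarding; a bare-bones Python sketch of that relay follows. It has no authentication or encryption, and it does not by itself get through the corporate firewall or proxy, so treat it purely as an illustration of the idea (addresses are the ones from the question):

        import socket
        import threading

        # Minimal TCP relay: listen on LISTEN_PORT and forward every connection
        # to TARGET_HOST:TARGET_PORT.

        LISTEN_PORT = 1234
        TARGET_HOST, TARGET_PORT = "10.140.0.1", 1234

        def pipe(src, dst):
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                dst.close()

        def handle(client):
            upstream = socket.create_connection((TARGET_HOST, TARGET_PORT))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen(5)
        while True:
            client, _ = server.accept()
            handle(client)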

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail, so I brought up a new one. All are running Windows 2003. Problem: there appear to be replication issues between the 4 machines, but I can't figure out what's causing them. All are registered with DNS as identically as I can make them. How do I know there is a problem? Nagios is telling me that the other 3 DCs are having KCCEvent errors and the new machine is reporting "failed connectivity" errors. Running dcdiag on the new machine reports: the host could not be resolved to an IP address. This seems crazy, as I log into it using the DNS name, and I can ping it from the other three machines using this DNS name as well. repadmin /showreps from the new machine says it sees the other 3 machines; doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times. No luck. There are no firewalls running on any of the machines. If I look at the domain info via MMC (on the new machine), it appears that all the information is current: users, computers, DCs, it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions?
    EDIT: dcdiag from the non-working DC:

        C:\Documents and Settings\Administrator.BME>dcdiag
        Domain Controller Diagnosis
        Performing initial setup:
           Done gathering initial info.
        Doing initial required tests
           Testing server: Default-First-Site-Name\YELLOW
              Starting test: Connectivity
                 The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not be resolved to an IP address.
                 Check the DNS server, DHCP, server name, etc.
                 Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu) couldn't be resolved,
                 the server name (yellow.server.edu) resolved to the IP address (10.127.24.79) and was pingable.
                 Check that the IP address is registered correctly with the DNS server.
                 ......................... YELLOW failed test Connectivity
        Doing primary tests
           Testing server: Default-First-Site-Name\YELLOW
              Skipping all tests, because server YELLOW is not responding to directory service requests
           Running partition tests on : Schema
              Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom
           Running partition tests on : Configuration
              Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom
           Running partition tests on : bme
              Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation
              Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom
           Running enterprise tests on : server.edu
              Starting test: Intersite ......................... server.edu passed test Intersite
              Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck

    dcdiag from a working DC:

        P:\>dcdiag
        Domain Controller Diagnosis
        Performing initial setup:
           Done gathering initial info.
        Doing initial required tests
           Testing server: Default-First-Site-Name\AD1
              Starting test: Connectivity ......................... AD1 passed test Connectivity
        Doing primary tests
           Testing server: Default-First-Site-Name\AD1
              Starting test: Replications ......................... AD1 passed test Replications
              Starting test: NCSecDesc ......................... AD1 passed test NCSecDesc
              Starting test: NetLogons ......................... AD1 passed test NetLogons
              Starting test: Advertising ......................... AD1 passed test Advertising
              Starting test: KnowsOfRoleHolders ......................... AD1 passed test KnowsOfRoleHolders
              Starting test: RidManager ......................... AD1 passed test RidManager
              Starting test: MachineAccount ......................... AD1 passed test MachineAccount
              Starting test: Services ......................... AD1 passed test Services
              Starting test: ObjectsReplicated ......................... AD1 passed test ObjectsReplicated
              Starting test: frssysvol ......................... AD1 passed test frssysvol
              Starting test: frsevent ......................... AD1 passed test frsevent
              Starting test: kccevent ......................... AD1 passed test kccevent
              Starting test: systemlog ......................... AD1 passed test systemlog
              Starting test: VerifyReferences ......................... AD1 passed test VerifyReferences
           Running partition tests on : Schema
              Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom
           Running partition tests on : Configuration
              Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom
           Running partition tests on : bme
              Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation
              Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom
           Running enterprise tests on : server.edu
              Starting test: Intersite ......................... server.edu passed test Intersite
              Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck

        P:\>

    Read the article

  • How to deal with this situation

    - by user198725878
    I would like to ask for some guidance here. Once I finished my degree I joined a company to work on Ruby on Rails. They trained me and put me on a Rails project, where I spent a year and did the basic things in the given project. Then my company got a Qt project, which I learned and worked on for nearly 7 months. Then my company put me into iOS development, where I have been working for the past year and a half, up to now. My main worry is that changing technologies like this keeps me from getting in-depth knowledge of anything; I mean, I can't make myself an expert in any one language. What is your opinion? Now my company is going to put me into cross-platform mobile application development. I am worried: will leaving native development affect my growth path? I am ready to learn Android. Having left web development 2 years ago, I feel I am at a disadvantage there. Should I look for an iOS job change now? Please let me know your advice.

    Read the article

  • Facebook App EULA & Restrictions: What can't they do that my web app can?

    - by Adam Tannon
    I have written a nifty little web app (in Java/GWT/JS) and have been experimenting with the idea of making it available through Facebook as a Facebook App as well. After spending some time reading Facebook's developer docs, it seems like I can just create a Facebook App that points at any URL I want and use that as the app/canvas; it accomplishes this via iframes. So my tentative plan is to just point it at my (existing) web app so that I don't have to totally re-write it. But then that got me thinking: Facebook must regulate what sorts of things can be done through a Facebook App versus what an app can't do. For instance, I can't imagine I can point a Facebook App at a URL for a web app that accepts e-commerce payments (that would bypass Facebook altogether and not allow them to take a cut of the transaction!). Also, I can't imagine that Facebook allows developers to point their Facebook Apps at just any old URL without some sort of scan; otherwise that would open Facebook up to the horrors of every security threat known to humanity. I know for a fact that when you write an iOS native app and put it up on the Apple App Store, Apple actually scans your source code for violations of their EULA. So my question: does Facebook do the same? If so, what are their terms and conditions for what a Facebook app can/can't do? Surprisingly, I can't find this anywhere!! Thanks in advance!

    Read the article

  • Will NTP work in an isolated network (in the absence of a reliable time source)

    - by Anand
    Hi, I am investigating a typical NTP problem. The setup is as follows: FreeBSD is being compiled and run on OpenSolaris. The config file on OpenSolaris lists a Linux machine and another OpenSolaris machine as servers, and these server machines are syncing time with themselves (their local clock) only. The server machines in this case have NTP running on them as well. Within a few minutes of starting the NTP daemon, the client starts syncing time with itself only and remains in that situation afterwards; all servers are discarded and no time syncing is done with them. My question is: is there any fundamental problem with this setup? Will NTP work in such an isolated network that has no direct or indirect connection to a reliable internet time source?

    Read the article

  • Filter data in sheets from a master sheet

    - by sam
    I have a 'master sheet' with lots of furniture data in it; in column A are the suppliers' names. I would like to have my master sheet with all the info, and then sub-sheets named by supplier; each sub-sheet should reference the master sheet and pull out all of the items that are from that supplier. For example: I would have a sheet called 'Ikea' which would look in the master sheet, search column A for all entries of 'Ikea' and, where present, copy or reference cells 1:12 of that row into the 'Ikea' sheet. I would like to do it all dynamically using references rather than copying the data. Also, I would like it to update automatically rather than having to run a macro to recalculate it each time. Can this be done with formulas rather than macros?

    Read the article

  • How to convince a client to switch to a framework *now*; also examples of great, large-scale PHP applications.

    - by cbrandolino
    Hi everybody. I'm about to start working on a very ambitious project that, in my opinion, has great potential in its basic concept and implementation ideas (implementation as in how these ideas will be implemented, not as in programming). The state of the code right now is unluckily subpar: it's vanilla PHP, no framework, no separation between application and presentation logic. It's been done mostly by amateur students (I know great amateur/student programmers, don't get me wrong: this was not the case, though). The clients are really great, and they know the system won't scale and needs a redesign. The problem is, they would like to launch a beta ASAP and then think about rebuilding. Since just the basic functionality is present now, I suggested it would be a great idea if we (we're a three-person shop, all very proficient) ported that code to some framework (we like CodeIgniter) before launching. We could reasonably do that in under 10 days. The problem is, they don't think PHP is a valid long-term solution anyway, so they would prefer to just let it be, fix the bugs for now (there are quite a few), and then switch directly to some Ruby- or Python-based system. Porting to CodeIgniter now would make future improvements incredibly easier, make the current code more secure, and make changing the style (still being discussed with the designers) a breeze; reminder: there are database calls in template files right now. The biggest obstacle is the lack of trust in PHP as a valid, scalable technology. So, I need some examples of great PHP applications (apart from Facebook) and some suggestions on how to convince them to port soon. Again, they're great people; it's not like they want Ruby because it's so hot right now. They just don't trust PHP, since us cool programmers like bashing it, I suppose, but I'm sure going on like this for even one more day would be a mistake. Also, we have some weight in the decision process.

    Read the article

  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

        private void DrawModel(Model m)
        {
            Matrix[] transforms = new Matrix[m.Bones.Count];
            float aspectRatio = graphics.GraphicsDevice.Viewport.Width / graphics.GraphicsDevice.Viewport.Height;
            m.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
            Matrix view = Matrix.CreateLookAt(new Vector3(0.0f, 50.0f, Zoom), Vector3.Zero, Vector3.Up);

            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = view;
                    effect.Projection = projection;
                    effect.World = gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position);
                }
                mesh.Draw();
            }
        }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

        foreach (ModelMesh mesh in terrain.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                mesh.Draw();
            }
        }

    Of course, that doesn't really work. What else needs to be done?

    Read the article

  • Setting up Dependencies for Nagios

    - by hfranco
    I'm trying to set up dependencies for a router and several servers. What I want to do is set up the router as the master host, so that if the router fails, none of the other services on the servers will alert. Unfortunately, this is easier said than done. Is there an easy way to set up service dependencies on a master host (my router) for all services on a server? Nagios has some documentation, but it would be extremely time consuming to add a single service dependency definition for every service. http://nagios.sourceforge.net/docs/3_0/objectdefinitions.html#servicedependency

    Read the article
