Search Results

Search found 353 results on 15 pages for 'robin welch'.


  • DNS Round-robin, Load Balancing, Load sharing, and failover in 2012

    - by user1089770
    I have been reading many posts on Server Fault, as well as on other sites, about all of this. What I understand is that multiple A records (round-robin DNS) can be used for both of the following:
    1. Load sharing (round-robin, but NOT load balancing). Many people say "load balancing", but I think there is no real balancing here, because "balance" literally means "compare two (or more) and adjust", which is what real software or hardware load balancers do. Browsers never do this; instead they randomly select an IP and connect to it, with no knowledge of the current load on that server (for all they know, the IP they picked had the highest load!).
    2. Automatic failover (latest browsers only). Yes, I think DNS can be used as a simple failover system (at least in 2012; I don't know when it actually came into effect). Please refer to: http://webmasters.stackexchange.com/questions/10927/using-multiple-a-records-for-my-domain-do-web-browsers-ever-try-more-than-one and Browser-based DNS failover using multiple A records and http://www.nber.org/sys-admin/dns-failover.html
    I would like to make sure my assumptions/findings are right, so please let me know.
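
    A quick way to see the "load sharing, not balancing" point is to resolve a multi-record name and note that the client simply receives a list of addresses and picks one; nothing in the process consults server load. A minimal sketch in Python, using only the standard library (the hostname is a placeholder):

        import random
        import socket

        def pick_address(hostname, port=80):
            # getaddrinfo returns every address record the resolver handed back
            infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
            addresses = [info[4][0] for info in infos]
            # A naive client just picks one at random; it never asks any server
            # how busy it is, which is why this is sharing rather than balancing.
            return random.choice(addresses)

        print(pick_address("example.com"))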

    Read the article

  • Welch's Juices-up Its Inventory Management with Oracle Supply Chain

    - by [email protected]
    Supply & Demand Chain Executive recently published a great success story about Welch's implementation of "Take Supply Chain and G.SI to work with Oracle Process Manufacturing". The company says it has been able to improve operational control, inventory accuracy, visibility, and order fulfillment by automating its processes across three production/warehousing locations nationwide. Improving warehouse and inventory management operations creates efficiencies across a high-velocity nationwide supply chain. Welch's production facilities were collecting more information than ever before on the flow of materials and inventory, but the company needed an effective and accurate method to organize and manage this data. Article found at: http://www.sdcexec.com/publication/article.jsp?pubId=1&id=12256&pageNum=2

    Read the article

  • Using Round Robin DNS on simple VPN setup

    - by dannymcc
    We have two internet connections, load balanced to share traffic between them. We set this up after one of the internet providers proved less than reliable, despite offering great speed and low latency when it is working. We'd rather utilise both connections as much as possible than leave one idle until the other drops out. We have a number of remote workers who occasionally need to connect via VPN from their laptops or iPads, and we also have a small number of permanent LAN-to-LAN tunnels running from smaller branches. Originally we had only one internet connection and used one of our static IP addresses for all VPN users. Now that we have two internet connections running all of the time, I am trying to make sure that the VPN is available to our team regardless of which connection drops. So my solution is to create two A records for our domain name, both named vpn, one for each connection's static IP. Is this a sensible way of achieving this? Should I expect higher latency, due to lost packets, if one peer fails and some packets still get routed to it anyway? A brief mockup of the setup I have:
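
    Whether this works hinges on clients actually trying the second A record when the first peer is down; many modern clients do, but it is not guaranteed. A rough sketch of the retry behaviour a well-behaved client needs, in Python (the hostname and port are placeholders):

        import socket

        def connect_with_fallback(hostname, port, timeout=5):
            last_error = None
            # Try each address the resolver returned until one answers.
            for info in socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP):
                try:
                    return socket.create_connection(info[4][:2], timeout=timeout)
                except OSError as exc:
                    last_error = exc  # dead peer: fall through to the next record
            raise last_error

        sock = connect_with_fallback("vpn.example.com", 443)

    Note that the connect timeout in the sketch is exactly where the extra latency the question anticipates would show up: a client that hits the dead peer first waits out the timeout before trying the surviving one.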

    Read the article

  • activemq round robin between queues or topics

    - by forkit
    I'm trying to achieve load balancing between different types of messages. I would not know in advance what the messages coming in might be until they hit the queue. I know I can try resequencing the messages, but I was thinking that if there were a way to have the various consumers round-robin between either queues or topics, this would solve my problem. The main problem I'm trying to solve is that I have many services sending messages to one queue, with many consumers feeding off that one queue, and I do not want one type of service monopolizing the entire worker cluster. Again, I don't know in advance what messages are going to hit the queue. To repeat my question clearly: is there a way to tell the consumers to round-robin between either existing queues or topics? Thank you in advance.
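
    One workaround hinted at above is to do the round-robin on the publisher side: partition messages across several per-service queues by rotating the destination, so no single producer can monopolize the consumers. A minimal sketch of that selection logic in plain Python (the queue names and the send callable are hypothetical placeholders, not ActiveMQ API):

        from itertools import cycle

        # Hypothetical per-service queues; consumers would subscribe across all of them.
        queue_names = ["work.queue.0", "work.queue.1", "work.queue.2"]
        next_queue = cycle(queue_names)

        def publish(message, send):
            # send(queue, message) stands in for your broker client's publish call.
            send(next(next_queue), message)

        # Example with a dummy transport:
        outbox = []
        publish("job-1", lambda q, m: outbox.append((q, m)))
        publish("job-2", lambda q, m: outbox.append((q, m)))
        print(outbox)  # [('work.queue.0', 'job-1'), ('work.queue.1', 'job-2')]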

    Read the article

  • The inventor of the pi-calculus has died: Robin Milner leaves us at 76

    Robin Milner passed away yesterday in Cambridge, where he was a professor at the University (as well as at the universities of London, Swansea, Edinburgh and Stanford). An English computer scientist, he made three principal discoveries in his career, which contributed greatly to the evolution of modern computing and earned him the Turing Award in 1991:
    - LCF, the first automated proof system, used to prove mathematical assertions automatically
    - The ML language
    - The theory for analysing concurrent systems (the calculus of communicating systems, CCS) and its successor, the pi-calculus
    RIP Robin...

    Read the article

  • Implement Semi-Round-Robin file which can be expanded and saved on demand

    - by ircmaxell
    Ok, that title is going to be a little confusing, so let me try to explain it better. I am building a logging program. The program will have 3 main states:
    1. Write to a round-robin buffer file, keeping only the last 10 minutes of data.
    2. Write to a buffer file, ignoring the time (record all data).
    3. Rename the entire buffer file, and start a new one with the past 10 minutes of data (and change state to 1).
    Now, the use case is this: I have been experiencing some network bottlenecks from time to time in our network, so I want to build a system to record TCP traffic when it detects a bottleneck (detection via Nagios). However, by the time it detects the bottlenecking, most of the useful data has already been transmitted. So what I'd like is to have a daemon that runs something like dumpcap all the time. In normal mode, it'll only keep the past 10 minutes of data (since there's no point in keeping a boatload of data if it's not needed). But when Nagios alerts, I will send a signal to the daemon to store everything. Then, when Nagios recovers, it will send another signal to stop storing and flush the buffer to a save file. Now, the problem is that I can't see how to cleanly store a rotating 10 minutes of data. I could store a new file every 10 minutes and delete the old ones if in mode 1, but that seems a bit dirty to me (especially when it comes to figuring out when the alert happened in the file). Ideally, the file that was saved should be such that the alert is always at the 10:00 mark in the file. While that is possible with new files every 10 minutes, it seems a bit dirty to "repair" the files to that point. Any ideas? Should I just do a rotating file system and combine them into 1 at the end (doing quite a bit of post-processing)? Is there a way to implement the semi-round-robin file cleanly, so that there is no need for any post-processing? Thanks. Oh, and the language doesn't matter much at this stage (I'm leaning towards Python, but have no objection to any other language; it's less of an issue than the overall design)...
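
    Since the question leans towards Python, here is a minimal sketch of the core of that design: a deque of timestamped chunks that evicts anything older than ten minutes, a flag that suspends eviction when the alert fires, and a flush that snapshots the buffer to a file (the file path and the source of the chunks are placeholders):

        import time
        from collections import deque

        WINDOW_SECONDS = 600  # keep the last 10 minutes

        class SemiRoundRobinBuffer:
            def __init__(self):
                self.chunks = deque()      # (timestamp, bytes) pairs
                self.keep_everything = False

            def write(self, data):
                now = time.time()
                self.chunks.append((now, data))
                if not self.keep_everything:
                    # State 1: evict anything older than the window.
                    while self.chunks and self.chunks[0][0] < now - WINDOW_SECONDS:
                        self.chunks.popleft()

            def alert(self):
                # Nagios alert: switch to state 2 and stop evicting.
                self.keep_everything = True

            def recover(self, path):
                # Nagios recovery: flush everything and return to state 1.
                with open(path, "wb") as f:
                    for _, data in self.chunks:
                        f.write(data)
                self.chunks.clear()
                self.keep_everything = False

    Because the buffer held exactly the last ten minutes when alert() fired, the alert always sits at roughly the 10:00 mark of the saved file, with no post-processing.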

    Read the article

  • Round-robin assignment

    - by Robert
    Hi, I have a Customers table and would like to assign a Salesperson to each customer in a round-robin fashion.
    Customers
    --CustomerID
    --FName
    --SalespersonID
    Salesperson
    --SalespersonID
    --FName
    So, if I have 15 customers and 5 salespeople, I would like the end result to look something like this:
    CustomerID -- FName -- SalespersonID
    1 -- A -- 1
    2 -- B -- 2
    3 -- C -- 3
    4 -- D -- 4
    5 -- E -- 5
    6 -- F -- 1
    7 -- G -- 2
    8 -- H -- 3
    9 -- I -- 4
    10 -- J -- 5
    11 -- K -- 1
    12 -- L -- 2
    13 -- M -- 3
    14 -- N -- 4
    15 -- O -- 5
    etc... I've been playing around with this for a bit and am trying to write some SQL to update my Customers table with the appropriate SalespersonID, but am having some trouble getting it to work. Any ideas are greatly appreciated!
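
    A common approach is to number the customers and take that number modulo the number of salespeople. A self-contained sketch using SQLite from Python (the table and column names follow the question; the sample data is made up):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Salesperson (SalespersonID INTEGER PRIMARY KEY, FName TEXT)")
        conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, FName TEXT, SalespersonID INTEGER)")
        conn.executemany("INSERT INTO Salesperson VALUES (?, ?)",
                         [(i, f"Sales{i}") for i in range(1, 6)])
        conn.executemany("INSERT INTO Customers (CustomerID, FName) VALUES (?, ?)",
                         [(i, chr(64 + i)) for i in range(1, 16)])  # FNames A..O

        # Round-robin: order both sides, then match customer N with salesperson N mod count.
        sales_ids = [r[0] for r in conn.execute(
            "SELECT SalespersonID FROM Salesperson ORDER BY SalespersonID")]
        customer_ids = [r[0] for r in conn.execute(
            "SELECT CustomerID FROM Customers ORDER BY CustomerID")]
        conn.executemany("UPDATE Customers SET SalespersonID = ? WHERE CustomerID = ?",
                         [(sales_ids[i % len(sales_ids)], cid)
                          for i, cid in enumerate(customer_ids)])

        for row in conn.execute("SELECT CustomerID, FName, SalespersonID FROM Customers"):
            print(row)

    In SQL Server the same idea is usually written with ROW_NUMBER() and a modulo in a single UPDATE, but the sketch above keeps the example self-contained.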

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language.

    How did you get started with programming?
    It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.

    But then presumably you did get somewhere with it at some point.
    At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. It took me a long time to get anywhere with programming, really.

    When did you feel like you did start to get somewhere?
    I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.

    When did you decide that programming might actually be something that you wanted to do as a career?
    It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard than I had been up to that point.

    How did you find coming to Red Gate and working with other coders?
    I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.

    If you could go back to the very start of that internship, is there something that you would tell yourself?
    Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if it looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.

    And how do you hold yourself to that?
    Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.

    So you’re an advocate of code review?
    Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.

    Do you think there’s more we could do with them?
    I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.

    To get more into the nitty gritty: how do you like to debug code?
    The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.

    Why is the debugger the last resort?
    Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger; the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.

    If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?
    I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value comes from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.

    As a tester, where do you think the split should lie between software engineers and testers?
    I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.

    In your experience, do the software engineers tend to do much testing themselves?
    They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.

    If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?
    I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.

    So you think testers should be able to write production code?
    Yes, although given most testers’ skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.

    If the software engineers are consistently writing tests at all levels, what do you think the role of a tester is?
    I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there. I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There are more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.

    So because you’re working on those features, you lose that holistic view of the whole system?
    Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.

    Are you a believer that it should all be automated if possible?
    Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. Automated tests are very good, but they follow that strict path, and they never check anything off the path. The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.

    When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?
    I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’ll almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, will keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.

    How do you think this fits into the idea of continuously deploying, so long as the tests pass?
    With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, once you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it; no matter how hard you try, there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.

    You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?
    I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python, which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.

    Any particular reason for trying Vala?
    I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.

    What are some of the features that it’s missing?
    Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.

    Where does it win over C#?
    Everything is non-nullable by default, you never have to check that something’s unexpectedly null. Also, Vala has code contracts. This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method; it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it.

    When are those actually checked?
    They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye! One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions. I think Raymond Chen on The Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”

    You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?
    Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picasa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.

    Any particular features that you’ve appreciated?
    I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.

    If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?
    I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project in C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done behind the scenes. It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in. It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.

    What do you think separates great programmers from everyone else?
    Probably design choices. Choosing to write a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s what separates them: the speed at which they can see the best path to write the program in.

    More Red Gater Coder interviews

    Read the article

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and the averages of those metrics) within a CPU task schedule. Here's the basic table of arrival times and bursts: Here's my actual work version after ticking off the finished processes. The time quanta for the two queues are TQ1=4 and TQ2=3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here. I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.
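
    For the round-robin part of the schedule, the bookkeeping can be checked mechanically. Below is a small sketch that simulates plain round-robin for a single queue and derives waiting, response, and turnaround times; the arrival/burst values are made up, since the question's table isn't reproduced here, and a full MQS run would apply this per queue under its queue-selection policy:

        from collections import deque

        # Hypothetical workload: (name, arrival_time, burst); not the question's data.
        procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
        quantum = 4

        remaining = {name: burst for name, _, burst in procs}
        first_run, finish = {}, {}
        ready, clock, i = deque(), 0, 0  # i walks the arrival list

        while len(finish) < len(procs):
            while i < len(procs) and procs[i][1] <= clock:
                ready.append(procs[i][0]); i += 1
            if not ready:
                clock = procs[i][1]; continue  # idle until the next arrival
            name = ready.popleft()
            first_run.setdefault(name, clock)
            run = min(quantum, remaining[name])
            clock += run
            remaining[name] -= run
            while i < len(procs) and procs[i][1] <= clock:  # arrivals during the slice
                ready.append(procs[i][0]); i += 1
            if remaining[name]:
                ready.append(name)  # unfinished: back to the tail of the queue
            else:
                finish[name] = clock

        for name, arrival, burst in procs:
            turnaround = finish[name] - arrival
            print(name, "wait:", turnaround - burst,
                  "response:", first_run[name] - arrival, "turnaround:", turnaround)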

    Read the article

  • Round-Robin DNS in mobile networks

    - by k7k0
    After reading the load distribution alternatives, and given my limited skills in the area, I'm biased toward a round-robin DNS strategy. From what I understood, one key aspect of DNS round-robin is setting a low TTL value, to avoid caching. My main concern is that all my traffic comes from mobile networks, and almost 30% of it comes from the T-Mobile 3G network. Some questions: 1) Is there a chance that almost all clients on the same mobile network will be redirected to the same IP within the TTL window? That would kill the distribution technique. 2) If I choose a really low TTL (zero or one), does that directly impact client performance? Does the client do a DNS lookup every time, or is it a setting that only impacts DNS servers? Any help would be much appreciated. Thanks
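
    One way to investigate question 1 is to query the name repeatedly through the resolver in question and watch which address comes back first and what TTL the resolver reports. A sketch using the dnspython library (assuming dnspython 2.x; the hostname is a placeholder):

        import time
        import dns.resolver  # pip install dnspython

        resolver = dns.resolver.Resolver()
        for _ in range(5):
            answer = resolver.resolve("example.com", "A")
            # The TTL counts down while the record sits in an upstream cache, so
            # a shared carrier cache shows a fixed ordering and a shrinking TTL.
            print(answer.rrset.ttl, [rr.address for rr in answer])
            time.sleep(2)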

    Read the article

  • Ubuntu One Bookmark sync not working.

    - by Rob
    Everything in Ubuntu One sync works great except bookmark sync. I tried the wiki answer that said to run:

        killall beam.smp beam
        rm ~/.config/desktop-couch/desktop-couchdb.ini
        dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort

    This is what my terminal came back with:

        robin@robin-MIDWAY:~$ killall beam.smp beam
        beam: no process found
        robin@robin-MIDWAY:~$ rm ~/.config/desktop-couch/desktop-couchdb.ini
        rm: cannot remove `/home/robin/.config/desktop-couch/desktop-couchdb.ini': No such file or directory
        robin@robin-MIDWAY:~$ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
        Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        robin@robin-MIDWAY:~$

    I'm a computer "newbie" so it's possible I'm doing something wrong. Are there any tutorials out there on how to use CouchDB? I have Bindwood installed.

    Read the article

  • DNS Round-robin failover and load balancing

    - by Tom O'Connor
    Having read all of the questions and answers (1, 2, 3 and so on) on here relating to DNS load balancing and round-robin DNS, there are still a number of unanswered questions. Large companies, and I'm looking at Google, Facebook and Twitter here, do present multiple A records. 1) If DNS load balancing/failover is so dodgy, why do large organisations do it? There seems to be very little mention of "DNS pinning", despite this (PDF) paper about it. 2) Why is DNS pinning so seldom mentioned? 3) Are there any concrete examples of which ISPs and so on actually do rewrite DNS TTLs? That said, I'm not entirely backing the side for using DNS for failover or any form of load balancing. For most networks, BGP diverse routing still seems to be a better fit. DNS rears its ugly head again. :(

    Read the article

  • Using awstats with a round-robin DNS configuration

    - by Shaun
    I have a website with multiple web servers, with traffic distributed across them via round-robin DNS. We currently use Google Analytics for site traffic monitoring, but we're looking to move to awstats due to concerns about the inaccuracy of Google Analytics and about using third-party trackers in general. I have a little experience with awstats and I know it gets its information from parsing server logs. How would this work when you have multiple web servers logging independently to separate locations? Is this supported with awstats? Is there an alternative I could use to track traffic activity directly on my servers?
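
    awstats ships a helper script, logresolvemerge.pl, for exactly this situation: it merges per-server access logs into one chronologically ordered stream before analysis. The same idea sketched in Python, assuming Apache common/combined format logs that are each already in time order (the paths are placeholders):

        import heapq
        from datetime import datetime

        LOG_FILES = ["/var/log/web1/access.log", "/var/log/web2/access.log"]

        def timestamp(line):
            # Apache common/combined format: ... [10/Oct/2012:13:55:36 +0000] ...
            raw = line.split("[", 1)[1].split("]", 1)[0]
            return datetime.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")

        streams = [open(path, encoding="utf-8") for path in LOG_FILES]
        with open("merged_access.log", "w", encoding="utf-8") as out:
            # Each input is already sorted, so a k-way merge stays sorted.
            for line in heapq.merge(*streams, key=timestamp):
                out.write(line)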

    Read the article

  • Round robin DNS for dynamic website

    - by Uwe
    We want to set up multiple servers hosting the same site. Each server (IIS6 or IIS7) is on its own, meaning it does not share any information with the others; they are not even in the same country. The problem we encounter if we set up round-robin DNS (multiple IPs under one domain name) is that the client (browser) switches between the servers, so the ASP.NET session gets lost. The question is how do we set this up so that clients are randomly sent to one of the servers, and if one fails the users go to the next one, but while a user is using one of them they stay there. Thank you!

    Read the article

  • postfix concurrency limit with round robin dns

    - by goose
    Take the following internal round-robin DNS setup:

    mymta.com. IN A 172.31.1.1
    mymta.com. IN A 172.31.1.2
    mymta.com. IN A 172.31.1.3
    mymta.com. IN A 172.31.1.4
    mymta.com. IN A 172.31.1.5
    mymta.com. IN A 172.31.1.6
    mymta.com. IN A 172.31.1.7
    mymta.com. IN A 172.31.1.8
    mymta.com. IN A 172.31.1.9
    mymta.com. IN A 172.31.1.10

    Now assume the following Postfix setup (assume these are the only tweaks from the defaults in the Debian package):

    main.cf:
    smtp_connection_cache_destinations = mymta.com
    smtp_connection_cache_reuse_limit = 750
    smtp_destination_concurrency_limit = 75

    transport:
    * :[mymta.com]

    I would expect 75 concurrent connections spread across the 10 A records I've set in DNS. However, I'm seeing more than a few hundred connections to mymta.com, and I'm wondering if Postfix is "smart" enough to set up 75 concurrent connections for each IP address. Thoughts?

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

    ;; ANSWER SECTION:
    orion.2x.to. 116 IN A 80.237.201.41
    orion.2x.to. 116 IN A 87.230.54.12
    orion.2x.to. 116 IN A 87.230.100.10
    orion.2x.to. 116 IN A 87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry; others try to determine which is best (regionally near) by looking at the IP address. However, if the userbase is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy from the highest to the lowest loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and they don't all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers set the TTL to 120 if a lower one is specified, so I have heard), I think something like this should occur in an ideal scenario:

    first 120 seconds
    50% of requests get server A -> keep it for 240 seconds
    50% of requests get server B -> keep it for 120 seconds

    second 120 seconds
    50% of requests still have server A cached -> keep it for another 120 seconds
    25% of requests get server A -> keep it for 240 seconds
    25% of requests get server B -> keep it for 120 seconds

    third 120 seconds
    25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
    25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
    25% will have server A cached for another 120 seconds
    12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
    12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

    fourth 120 seconds
    25% will have server A cached -> cache for another 120 secs
    12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
    12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
    12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
    12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
    6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
    6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
    12.5% will have server A cached -> cache another 120 secs
    ... I think I lost something at this point, but I think you get the idea...

    As you can see this gets pretty complicated to predict, and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution! I know that weighted round robin exists and is just controlled by the root server. It just cycles through DNS records when responding, and returns DNS records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly it's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs hardware load balancers. My question now is: are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of are anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
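
    The steady state of the scenario above can be estimated with a quick simulation: each resolver re-queries when its cached record expires, picks A or B with equal probability, and caches the answer for that record's TTL. A minimal sketch (the TTLs follow the question; resolver behaviour is idealised, which the question itself concedes real resolvers won't match):

        import random

        TTLS = {"A": 240, "B": 120}   # server -> TTL in seconds
        resolvers = [{"server": None, "expires": 0} for _ in range(10_000)]
        traffic = {"A": 0, "B": 0}

        for now in range(3_600):          # one request per resolver per second
            for r in resolvers:
                if now >= r["expires"]:   # cache expired: re-resolve, 50/50 pick
                    r["server"] = random.choice("AB")
                    r["expires"] = now + TTLS[r["server"]]
                traffic[r["server"]] += 1

        total = sum(traffic.values())
        for server, hits in traffic.items():
            print(server, f"{hits / total:.1%}")

    Under these assumptions the share converges on TTL_A / (TTL_A + TTL_B), i.e. the 240-second record draws roughly twice the traffic of the 120-second one. So the lever works, but it only yields ratios of the TTLs you can get away with, not arbitrary 100:10:1 weights.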

    Read the article

  • How to make a Round Robin? or Is there an easier way other than Round Robin?

    - by candies
    The problem that I face is how to handle an issue like the example below:

    Codes: 1000, 2000, 3000, 4000, 5000
    IDs: 1, 2, 3

    This:
    ID number 1 has codes 1000, 2000, 3000, 4000
    ID number 2 has codes 2000, 4000, 3000
    ID number 3 has codes 3000, 4000, 5000

    When all the fields are connected, the IDs share many of the same codes. From the example above, I want to produce a fair result, adjusted to the codes each ID already had, as below:

    To be:
    ID number 1 has codes 1000, 2000 (1000 must be on number 1 because only it has it)
    ID number 2 has codes 3000, 4000
    ID number 3 has codes 5000 (5000 must be on number 3 because only it has it)

    Some say to use round robin, but I have never heard of round robin before and have no idea how to use it; my mind is a blank. Is there another, easier way, perhaps using PHP? I'm lost. Thanks.
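
    One way to express the "fair result" rule is a greedy two-pass assignment: codes held by exactly one ID stay with that ID, then every shared code goes to whichever of its holders currently has the fewest codes. A sketch in Python rather than PHP for brevity (the data follows the question's example; the same logic ports directly to PHP):

        # Which IDs held each code originally (from the example above).
        holders = {
            1000: [1],
            2000: [1, 2],
            3000: [1, 2, 3],
            4000: [1, 2, 3],
            5000: [3],
        }

        assigned = {1: [], 2: [], 3: []}

        # Pass 1: codes with a single holder must stay with that ID.
        for code, ids in holders.items():
            if len(ids) == 1:
                assigned[ids[0]].append(code)

        # Pass 2: give each remaining code to its least-loaded candidate.
        for code in sorted(c for c, ids in holders.items() if len(ids) > 1):
            target = min(holders[code], key=lambda i: len(assigned[i]))
            assigned[target].append(code)

        for id_, codes in assigned.items():
            print(f"ID {id_}: {sorted(codes)}")

    The exact split of the shared codes may differ from the example (3000 may land on ID 1 instead of 2000), but every uniquely held code stays put and the counts stay balanced.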

    Read the article

  • Does RabbitMq do round-robin from the exchange to the queues

    - by Lancelot
    Hi, I am currently evaluating message queue systems, and RabbitMQ seems like a good candidate, so I'm digging a little more into it. To give a little context, I'm looking to have something like one exchange load balancing the message publishing to multiple queues. I don't want to replicate the messages, so a fanout exchange is not an option. Also, the reason I'm thinking of multiple queues rather than one queue handling the round-robin with the consumers is that I don't want our single point of failure to be at the queue level. It sounds like I could add some logic on the publisher side to simulate that behavior by editing the routing key and having the appropriate bindings in place. But that's kind of a passive approach that wouldn't take the pace of message consumption on each queue into account, potentially filling up one queue if the consumer applications for that queue are dead. I was looking for a more proactive way, from the exchange entity side, that would decide where to send the next message based on each queue's size or something of that nature. I read about Alice and the available RESTful APIs, but that seems like a heavy-duty solution for implementing fast routing decisions. Does anyone know if round-robin between the exchange and the queues is feasible with RabbitMQ? Thanks.
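
    For reference, the publisher-side workaround described above is straightforward with a direct exchange: declare one binding per queue and rotate routing keys when publishing. A sketch using the pika client (the exchange and queue names are made up; this simulates round-robin at the publisher, since a stock exchange won't balance by itself):

        import itertools
        import pika  # pip install pika

        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()

        channel.exchange_declare(exchange="work", exchange_type="direct")
        queue_names = ["work.0", "work.1", "work.2"]
        for name in queue_names:
            channel.queue_declare(queue=name)
            # One binding per queue, keyed by the queue's own name.
            channel.queue_bind(queue=name, exchange="work", routing_key=name)

        rotation = itertools.cycle(queue_names)
        for i in range(6):
            # Rotating the routing key spreads messages evenly across the queues.
            channel.basic_publish(exchange="work", routing_key=next(rotation),
                                  body=f"message {i}".encode())

        connection.close()

    This is still the passive approach the question criticises; a broker-side alternative worth evaluating is RabbitMQ's consistent-hash exchange plugin, which distributes messages across bound queues without publisher logic.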

    Read the article
