Search Results

Search found 22641 results on 906 pages for 'case'.


  • Which areas of Linux should a PHP developer know about? (Just basic commands, or something more advanced?)

    - by droidsites
    I've developed a website using PHP, but I built it on Windows and hosted it on a Windows server. I recently searched the PHP job market to keep my knowledge up to date with current technology requirements, and I see that most postings ask for the LAMP stack. I understand the skills required of a PHP and MySQL developer, but when it comes to Linux and Apache, what kind of skills exactly do companies expect from a developer? What should I focus on with Linux and Apache while developing a website on the LAMP stack? I am going to develop a new website and want to build it on LAMP, but I want to know what difference it makes. Why is the LAMP stack in more demand in the job market than WAMP? Edit: Sorry, I think my question was causing confusion, so let me rephrase it: which areas of Linux should a PHP developer know about? (Just basic commands, or something more advanced?) Note: I am a Linux newbie.

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars: I then have to know the other things linked on that page, and the links on all the linked pages, and so on for a specified number of levels. I want to write a web crawler, like HTTrack or something similar, that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible. Here is an example, pretending we are bending the rules by grabbing links from only the first sentence of each page, and archiving and "processing" two levels deep. The page is Ternary operation.

    The first level:
    In mathematics a ternary operation is an N-ary operation.

    The second level:
    Under Mathematics: Mathematics (from Greek μάθημα máthēma, "knowledge, study, learning") is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition.
    Under N-ary: In logic, mathematics, and computer science, the arity of a function or operation is the number of arguments or operands that the function takes.
    Under Operation: In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values.

    I need some way to determine in what order to approach all these wiki pages to learn the concept (in this case, ternary operations). Following along with this example, one way to show the reading path would be a printed flow showing that the first sentence of the Mathematics page doesn't link to the first sentence of any page linked from the ternary page two levels deep (please tell me how I should explain this). In other words, Mathematics, a child node of the top page's first sentence, does not have any child nodes that reference the children of the top page's other child nodes, N-ary and Operation. Thus it is safe to read it first. Since N-ary has a link to Operation, we should read the Operation page second and finally read the N-ary page last. Again, I wish to use as much prewritten code as possible, and was wondering what language to use and what would be the simplest way to go about doing this, if there isn't already something out there? Thank you!
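
    Since the question asks what language and what prewritten code to use, here is one possible sketch in Java using the jsoup HTML parser (a real library, though the CSS selector for Wikipedia's lead paragraph is an assumption about its current markup, and records of the crawl are kept in memory only). It restricts itself to first-paragraph links, crawls two levels as in the example, and then emits a reading order by repeatedly picking pages whose crawled links have all already been read:

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;
        import java.util.*;

        public class WikiReadingOrder {
            // page title -> titles linked from its first paragraph
            private final Map<String, List<String>> graph = new LinkedHashMap<>();

            public void crawl(String title, int depth) throws Exception {
                if (depth == 0 || graph.containsKey(title)) return;
                Document doc = Jsoup.connect("https://en.wikipedia.org/wiki/" + title).get();
                // assumed selector for the lead paragraph; Wikipedia's markup may change
                Element lead = doc.select("div.mw-parser-output > p").first();
                List<String> children = new ArrayList<>();
                if (lead != null) {
                    for (Element a : lead.select("a[href^=/wiki/]")) {
                        String child = a.attr("href").substring("/wiki/".length());
                        if (!child.equals(title)) children.add(child);
                    }
                }
                graph.put(title, children);
                for (String child : children) crawl(child, depth - 1);
            }

            // pages whose crawled links are all already read come first,
            // i.e. a reverse topological order; assumes no cycles among crawled pages
            public List<String> readingOrder() {
                List<String> order = new ArrayList<>();
                Set<String> read = new HashSet<>();
                boolean progress = true;
                while (read.size() < graph.size() && progress) {
                    progress = false;
                    for (Map.Entry<String, List<String>> e : graph.entrySet()) {
                        if (read.contains(e.getKey())) continue;
                        boolean ready = true;
                        for (String child : e.getValue()) {
                            if (graph.containsKey(child) && !read.contains(child)) {
                                ready = false;
                                break;
                            }
                        }
                        if (ready) {
                            order.add(e.getKey());
                            read.add(e.getKey());
                            progress = true;
                        }
                    }
                }
                return order;
            }

            public static void main(String[] args) throws Exception {
                WikiReadingOrder w = new WikiReadingOrder();
                w.crawl("Ternary_operation", 2);
                System.out.println(w.readingOrder()); // e.g. Mathematics first, N-ary last
            }
        }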

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently we have been getting questions about the percentage of free physical memory the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes, e.g.:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    MOS Doc ID 233753.1, "Analyzing Data Provided by '/proc/meminfo'", explains it in section 4, "Final Remarks": Free Memory and Used Memory. Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand. That said, focusing on the percentage question: apart from this memory that the OS takes, how much free memory, at minimum, must be available on every node so that it operates normally? The answer: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
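
    A rough way to check that 80% rule against /proc/meminfo, counting buffers and page cache as available since the kernel reclaims them on demand (a sketch assuming the classic field names shown in the output above):

        awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} /^Buffers/ {b=$2} /^Cached/ {c=$2} \
             END {printf "utilization: %.1f%%\n", 100 * (t - f - b - c) / t}' /proc/meminfo

    Anything this prints above 80% is, by the rule of thumb above, worth investigating.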

    Read the article

  • Looking for a better Factory pattern (Java)

    - by Sam Goldberg
    After doing a rough sketch of a high-level object model, I am doing iterative TDD and letting the other objects emerge as a refactoring of the code (as it increases in complexity). (That whole approach may be a discussion/argument for another day.) In any case, I am at the point where I am looking to refactor code currently in if-else blocks into separate objects, because another value combination has created a new set of logical sub-branches. To be more specific, this is a trading system feature where buy orders have different behavior than sell orders. Responses to the orders have a numeric indicator field which describes some event that occurred (e.g. fill, cancel). The combination of this numeric indicator field, plus whether the order is a buy or a sell, requires different processing by the code. Creating a family of objects to separate the code for the unique handling of each combination of the two fields seems like a good choice at this point. The way I would normally do this is to create some Factory object which, when called with the two relevant parameters (indicator, buy/sell), returns the correct subclass of the object. Sometimes I implement this pattern with a map, which lets me look up a live instance (or a constructor to use via reflection), and sometimes I just hard-code the cases in the Factory class. For some reason this feels like poor design (e.g. one object which knows all the subclasses of an interface or parent object), and a bit clumsy. Is there a better pattern for solving this kind of problem? And if this factory method approach makes sense, can anyone suggest a nicer design?
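
    One way to keep the factory from knowing every subclass is a registry keyed on the (indicator, side) pair, where each handler is registered once, at startup or by the module that defines it. A minimal Java sketch (records need Java 16+); the Handler interface, OrderResponse type, and the FILL constant in the usage comment are hypothetical stand-ins for whatever the real code uses:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Supplier;

        public final class ResponseHandlerFactory {
            public enum Side { BUY, SELL }
            public static final class OrderResponse { /* fields elided */ }

            public interface Handler {
                void process(OrderResponse response);
            }

            // composite key: (event indicator, buy/sell side)
            private record Key(int indicator, Side side) {}

            private final Map<Key, Supplier<Handler>> registry = new HashMap<>();

            public void register(int indicator, Side side, Supplier<Handler> factory) {
                registry.put(new Key(indicator, side), factory);
            }

            public Handler handlerFor(int indicator, Side side) {
                Supplier<Handler> factory = registry.get(new Key(indicator, side));
                if (factory == null) {
                    throw new IllegalArgumentException("no handler for " + indicator + "/" + side);
                }
                return factory.get(); // fresh instance per response; cache if handlers are stateless
            }
        }

        // e.g., at startup (FILL and BuyFillHandler are hypothetical):
        // factory.register(FILL, Side.BUY, BuyFillHandler::new);

    The factory itself now knows nothing about the concrete handlers; the trade-off is that registration has to happen somewhere, and a missing registration only fails at runtime.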

    Read the article

  • C drive should only contain OS. Myth or fact?

    - by Fasih Khatib
    So, I have a 500 GB HDD @ 7200 RPM, split as C: 97 GB, D: 179 GB, E: 188 GB. My belief has been to keep ONLY the OS in C:\ (plus any stubborn programs that won't install anywhere other than C:\), because this speeds up the PC during startup, and to install programs in D:\ so that if I have to reinstall the OS, the programs are readily available afterwards. But I have begun to think this approach is flawed: if C:\ is formatted, I will lose the registry values and everything in %appdata%, so keeping programs in D:\ is no use because they will be useless after a reinstall anyway. Should I go ahead and install ALL of my programs in C:\, and use D:\ and E:\ for storing my data, like photos, text files, Java files and so on? How will this impact the performance of the HDD? I only have 3 programs in D:\Program Files, so it will be easy to reinstall them :)

    Read the article

  • Who can change the View in MVC?

    - by Luke
    I'm working on a thick-client graph display and manipulation application, and I'm trying to apply the MVC pattern to our 3D visualization component. Here is what I have for the Model, View, and Controller:

    Model: the graph and its metadata. This includes vertices, edges, and the attributes of each. It does not contain position information, icons, colors, or anything display-related.
    View: what would commonly be called a scene graph. It includes the 3D display information, texture information, color information, and anything else related specifically to the visualization of the model.
    Controller: takes the view and displays it in a window using OpenGL (though it could potentially be any 3D graphics package).

    The application has various "layouts" that change the positions of the vertices in the display. For instance, one layout may arrange the vertices in a circle. Is it common for these layouts to access and change the View directly? Should they go through the Controller to access the View? If they go through the Controller, should they just ask for direct access to the View, or should each change go through the Controller? I realize this is a bit different from the standard MVC example, where there are a finite number of Views; in this case the View can change in an infinite number of ways. Perhaps I'm shattering some basic principle of MVC here. Thanks in advance!
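
    One of the options the question raises, layouts going through the Controller rather than touching the View directly, might look like the following in Java. All names here are hypothetical; the layout is modeled as a strategy that computes positions from the Model only, and the Controller is the single write path into the View:

        import java.util.Map;

        // the View: scene-graph positions keyed by vertex id (hypothetical interface)
        interface SceneGraph {
            void setPosition(String vertexId, double x, double y, double z);
        }

        interface GraphModel { /* vertices, edges, attributes elided */ }

        // a layout computes positions from the Model; it never holds the View
        interface Layout {
            Map<String, double[]> positionsFor(GraphModel model);
        }

        final class GraphController {
            private final GraphModel model;
            private final SceneGraph view;

            GraphController(GraphModel model, SceneGraph view) {
                this.model = model;
                this.view = view;
            }

            // the one place where layout results are written into the View
            void applyLayout(Layout layout) {
                layout.positionsFor(model)
                      .forEach((id, p) -> view.setPosition(id, p[0], p[1], p[2]));
            }
        }

    Keeping the View behind this one choke point makes it easy to add undo, animation, or validation in the Controller later; the cost is that every new kind of view change needs a corresponding Controller method.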

    Read the article

  • Single-Click Checkbox in RadGridView

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/06/05/156809.aspx

    The Telerik RadGridView for WPF is a flexible visual control for displaying and manipulating tables of data. It's not without its quirks, though, and I ran into one of those quirks recently. The RadGridView requires that you first activate a cell for editing, which enables the editing controls underneath. Although that provides better editing capabilities, it can also be unintuitive. In my case I don't want my users to have to click a checkbox twice in order to change its value. This can be solved easily by setting the EditTriggers and AutoSelectOnEdit properties to CellClick and True respectively. Unfortunately, the story doesn't end there. You would think you could set those properties in a style with a TargetType of GridViewCheckBoxColumn. You'd be wrong.

        <!-- This works -->
        <telerik:GridViewCheckBoxColumn Header="Flag #1"
            DataMemberBinding="{Binding Flag1}"
            EditTriggers="CellClick" AutoSelectOnEdit="True"/>

        <!-- This doesn't work -->
        <Style TargetType="{x:Type telerik:GridViewCheckBoxColumn}">
            <Setter Property="EditTriggers" Value="CellClick" />
            <Setter Property="AutoSelectOnEdit" Value="True" />
        </Style>

    Telerik told me that this is the correct, expected behavior, because the columns in the RadGridView are not visual elements. Their explanation is that the Style property comes from the base class FrameworkContentElement, but the column objects aren't visual elements. That may be an implementation truth, but it doesn't make the behavior correct or expected. It's unintuitive that GridViewCheckBoxColumn exposes a Style property that cannot be used properly, but there it is. At least there's a way to get the desired effect.

    Read the article

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, and THEN use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home directories (~100) and probably some groupware or another local intranet app. On my own machines I've always separated the root and /home partitions, so that if I break something I can easily reinstall the OS. Should I follow that same theory here? If so I need LVM, because I really can't predict where we'll need more space, /var or /home.

    Read the article

  • Sorting algorithm: output

    - by Aaditya
    I faced this problem on a website and I can't quite understand the output; please help me understand it. Bogosort is a dumb algorithm which shuffles the sequence randomly until it is sorted. But here we have tweaked it a little, so that if after the last shuffle the first several elements end up in the right places, we fix them and don't shuffle those elements any further. We do the same for the last elements if they are in the right places. For example, if the initial sequence is (3, 5, 1, 6, 4, 2) and after one shuffle we get (1, 2, 5, 4, 3, 6), we keep 1, 2, and 6 and proceed to sort (5, 4, 3) using the same algorithm. Calculate the expected number of shuffles for the improved algorithm to sort the sequence of the first n natural numbers, given that no elements are in the right places initially.

    Input:
    2
    6
    10

    Output:
    2
    1826/189
    877318/35343

    For each test case, output the expected number of shuffles needed for the improved algorithm to sort the sequence of the first n natural numbers, in the form of an irreducible fraction. I just can't understand the output.
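
    For what it's worth, the first output line can be checked by hand, which helps decode the format. For n = 2 the only arrangement with no element in place is (2, 1); each shuffle sorts it with probability 1/2, so the number of shuffles is geometric:

        E[\text{shuffles}] = \sum_{k \ge 1} k \left(\tfrac{1}{2}\right)^{k} = \frac{1}{1/2} = 2

    The other lines are the same expectation for n = 6 and n = 10, printed as irreducible fractions (1826/189 is about 9.66 and 877318/35343 about 24.82); an integer like 2 is simply the fraction 2/1.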

    Read the article

  • Where can I download Microsoft VC80.CRT version 8.0.50608.0?

    - by Leonel
    Hi, I've just installed Microsoft Office Home and Student 2007 on Windows XP SP3. When trying to run Word or any other Office app (Excel, PowerPoint), I get the following message:

        this application has failed to start because the application configuration is incorrect

    This also happens with other applications, for instance Acrobat Reader. A bit of Googling around this error message suggests it might be due to a mismatch of library versions. In fact, most of the articles and forum messages I found describe a programmer writing an application in Visual C++ and sending it to someone else, who then runs into the error. This is hardly my case! I looked into Office's files, and the manifest file for Excel suggests that Office is trying to use Microsoft VC80.CRT version 8.0.50608.0. However, in my Windows system folder I can only find the assemblies for 8.0.50727.762. My next step will be trying to find version 8.0.50608.0. How can I find and download it?

    Read the article

  • Forking GPL dual-licensed software with business-owned copyrights

    - by Eric
    After receiving threats from the copyright holder of a dual-licensed piece of software (GPL2 and commercial) demanding that I buy the commercial version for projects in production, I am thinking of making a fork. In the case of software dual-licensed under the GPL2 and a commercial license, with business-owned copyrights, is forking the GPL2 version an option? And is forking a good way to deal with such cases?

    Background information: the software is a web CMS released in two editions, a free open source GPL2 edition and a commercial edition including technical support and extra functionality. The problem is that now, basing their argument on the "distribution" definition in the GPL2, the company holding the copyrights argues that delivering the software and some extensions to a client counts as "distribution", and that such a "distribution" falls under the GPL2 obligation to release the custom-made extension code. The custom-made extensions are mainly designs, templates, and very specific functionality. Basically they give me three choices:

    1. Buy the commercially licensed edition for the GPL-based projects in production,
    2. Delete all the projects in production based on the GPL2 version, or
    3. Release all the extensions as GPL2 code.

    The first two options are not realistic for finished projects. The third option could be fine, but as most of the extensions are very specific, cleaning up the code to make it usable by other users means a lot of work, and I am also not sure the clients would appreciate having their website designs and specific functionality released publicly. The copyright-holding company even contacted some clients directly, giving them the "choice". I know this is a very corporate interpretation of the GPL2, and such a claim is nothing close to legally solid, but as an independent developer I don't want to risk getting dragged into long and tiring legal procedures.

    PS: This question was first asked on Stack Overflow, where it fell out of scope and was closed; after reading the present site's FAQ, discussing software licensing seems fine here.

    Read the article

  • BIOS won't POST unless I cycle the power supply

    - by remy
    I have an annoying problem with a computer: the BIOS won't POST after the machine has been turned off and started again, unless I flip the switch on the power supply off and on. The fans, CD-ROM, HDD, and everything else start running, but the BIOS never beeps and the monitor stays black. I have tried: reinstalling Windows XP, flashing the BIOS to the latest version, clearing the CMOS, a new power supply, another HDD, new memory sticks, and another graphics card. I also removed the motherboard from the computer case, laid it on a cardboard box, and connected only the PSU, RAM, and CPU. Other than this the computer works great, but it's annoying having to cycle the power supply switch before turning it on after it's been off. My guess is that the motherboard is faulty, but I would like to hear what you guys think. Sorry for my English. / Remy

    Read the article

  • Is it possible to use the Bluetooth adapter from my Logitech keyboard to connect a headset?

    - by King Chan
    So, as titled. I have a Bluetooth wireless keyboard that came with a Bluetooth adapter from Logitech two years ago. I recently bought a Bluetooth headset from another company, and I wonder if I can reuse the Logitech adapter for the headset. I can't seem to find an option in my control panel that lets me add the headset device. Or is the adapter that comes with the Logitech keyboard only able to connect to that keyboard? In that case, if I buy a new Bluetooth adapter, will it be possible to share one adapter between two devices (one-to-two, not one-to-one)?

    Read the article

  • Getting around broken DNS

    - by Haedrian
    I have someone who's trying to connect to the internet on a connection where the DNS server seems to be down. They can connect if I give them an IP address, but normal names don't resolve. I tried getting them to change the DNS server used in the internet options (setting it to Google's), but that didn't work; apparently the ISP captures all DNS requests. I also tried a proxy, but that didn't work either. Is there another solution to the problem? This isn't a case of censorship or anything, so there's no need to remain anonymous; I just need a way of 'forcing' the use of another DNS server, or routing around the broken one some other way.

    Read the article

  • Linux software RAID runs checkarray on the first Sunday of the month. Why?

    - by mgjk
    It looks like Debian defaults to running checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2 TB mirror. Doing this "just in case" is bizarre to me: discovering data out of sync between the two disks, with no quorum to break the tie, would be a failure anyway. This massive check could only tell me that I have an unrecoverable drive failure and corrupt data, which is nice to know but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check needed? Should I take it out of my cron?

        /etc/cron.d# tail -1 /etc/cron.d/mdadm
        57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet

    Thanks for any insight,

    Read the article

  • Compiz: How can I pin osd-lyrics to the Desktop?

    - by joebuntu
    Hey ya, I have certain applications pinned to my desktop, such as:

    - Rainlendar (1), which offers this option in its settings window
    - conky (2), with "own_window_type normal" in my ~/.conkyrc
    - Rhythmbox's Desktop Cover Art plugin (3), a Python plugin stuck to the desktop

    They all hide nicely below all other windows, and I set Compiz not to minimize them on the "show desktop" command using window rules. I also use osd-lyrics to display music lyrics. Its default behaviour is to stay always on top, which is irritating sometimes and, in my case, looks rather fugly (4). See the screenshot at http://i.imgur.com/ie16K.jpg. Now, how can I tell Compiz to pin the osd-lyrics bar to the desktop below all other windows and not minimize it? I tried all sorts of window rules and exceptions, e.g. "class=osd-lyrics" and "title=osd-lyrics", but the OSD doesn't seem to match any of them. Any ideas? Running: Ubuntu 10.10 Maverick Meerkat, Compiz 0.8.6, osd-lyrics 0.3 from code.google.com/p/osd-lyrics

    Read the article

  • Why does my webpage look different when I connect through different routers? Do routers cache files?

    - by Ayyash
    Here is the case: I work on a site from the office and from home. I recently updated the stylesheets and logged into the live site from the office (using the same laptop I use all the time), and everything looks okay. I come home, use my home internet connection to connect to the site with the SAME laptop, and the styles are not updated! The thing is, this happens in ALL browsers, after emptying the cache many times, even after a month, and even in browsers that have never opened the site before (as if my router had a cache of its own). Another thing: only one particular styles.css file seems to be affected. Extra info: I use the same IP for my home wireless router as the one defined in the office, the usual 192.168.0.1.

    Read the article

  • BizTalk Schema Validation

    - by Christopher House
    Perhaps this one should be filed under: Obvious. Yesterday I created a new schema that is going to be used for a WCF receive. The schema has a bunch of restrictions in it, with the intention that we'd validate incoming messages against it. I'd never done message validation with BizTalk, but I knew the XmlDisassembler component had an option for validating, so I figured it would be a piece of cake. Sadly, that was not to be the case. I deployed my artifacts and configured my receive location's XmlDisassembler with what I thought to be the correct document spec name. I entered My.Project.Name.SchemaTypeName for the document spec and started running unit tests. All of them failed, with the following error logged in the event log:

        "WcfReceivePort_BizTalkWcfService/PurchaseOrderService" URI: "/BizTalkWcfService/PurchaseOrderService.svc" Reason: No Disassemble stage components can recognize the data.

    I went to the receive port and turned on tracking, submitted another message, then went to the admin console and saved the message. It looked correct, but just to be sure, I manually validated it against the schema in my project. As expected, it validated correctly. After a bit of thinking, I realized that I probably needed to fully qualify my document spec name, meaning include the assembly name as well as the type name. So I went back to the receive location and changed the document spec to:

        My.Project.Name.SchemaTypeName, My.Project.Name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx

    I re-ran my unit tests and everything worked as expected. So, note to self: remember to include the assembly name when setting the document spec. If you need an easy way to determine your schema name and assembly name, find your schema in the admin console and go to its properties. On the property screen, look at the Name and Assembly properties: your document spec will be "SchemaName, AssemblyName".

    Read the article

  • That Tool is cURLy

    When you just use IE, Firefox, or Chrome, it can be easy to forget that HTTP is about more than checking the latest tech news at Engadget. It is a full and rich protocol, and a great way to experience that richness is the powerful command-line utility cURL. cURL has a lot of options, but the syntax starts out simple: you can retrieve the contents of a web page with a simple curl http://blogs.claritycon.com/. The result should be the full text of the web page, tags and all. From there, you can use -X to specify the HTTP verb (POST, PUT, DELETE, PATCH, etc.) and -d to specify the payload of a POST or PUT. I have found cURL to be incredibly useful in two scenarios: first, as a good way to test basic web services; second, while working a bit with CouchDB and another document-based database, cURL has helped me learn more about RESTful APIs, including the different verbs and response codes. cURL is a mainstay in our environments and programming languages precisely because it is simple, powerful, and discoverable.

    -- Relevant Links --
    It's not always the case with manuals, but the manual for cURL is quite useful: http://curl.haxx.se/docs/manual.html
    To make your command line look a little nicer (and more powerful) on Windows, check out Console and add some transparency effects: http://sourceforge.net/projects/console/
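
    For instance, a GET that also shows the response headers, then a POST with a JSON payload against a local CouchDB (the database name and document here are made up):

        # fetch a page, including the response headers
        curl -i http://blogs.claritycon.com/
        # create a document in a (hypothetical) local CouchDB database
        curl -X POST -H "Content-Type: application/json" -d '{"title":"test"}' http://localhost:5984/mydb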

    Read the article

  • What does SVN do better than git?

    - by doug
    No question, the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing the design for particular use cases (by the tool builder). Text editors are probably the most prominent example: a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counterexample: some task, relevant to a programmer's usual workflow, that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data, not that git is better, etc. My guess is that such counterexamples exist, hence this question.

    Read the article

  • What to do when Ctrl-C can't kill a process?

    - by Dustin Boswell
    Ctrl-C doesn't always work to kill the current process (for instance, if that process is busy in certain network operations). In that case you just see "^C" by your cursor and can't do much else. What's the easiest way to force that process to die now, without losing my terminal?

    Summary of the answers below:
    - Usually, you can press Ctrl-Z to suspend the process, then do "kill -9 process-pid", finding the process's pid with 'ps' and other tools.
    - In Bash (and possibly other shells) you can do "kill -9 %1" (or '%N' in general), which is easier.
    - If Ctrl-Z doesn't work, you'll have to open another terminal and kill from there.
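
    A minimal walk-through of the job-control route (the job number, PID, and command are illustrative):

        ^Z            # suspend the stuck foreground process (shell prints "Stopped")
        jobs -l       # list jobs with their PIDs, e.g. "[1]+ 12345 Stopped  wget ..."
        kill -9 %1    # force-kill job 1 (equivalently: kill -9 12345)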

    Read the article

  • Reverse lookup of inode/file from offset in raw device on Linux and ext3/4?

    - by lilinjn
    In Linux, given an offset into a raw disk device, is it possible to map back to a partition + inode? For example, suppose I know that the string "xyz" is contained at byte offset 1000000 on /dev/sda (e.g. xxd -l 100 -s 1000000 /dev/sda shows a dump that begins with "xyz").

    1) How do I figure out which partition (if any) offset 1000000 is located in? (I imagine this is easy, but am including it for completeness.)
    2) Assuming the offset is located in a partition, how do I go about finding which inode it belongs to (or determining that it is part of free space)? Presumably this is filesystem-specific, in which case does anyone know how to do this for ext4 and ext3?
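
    For ext3/ext4 there is a plausible route through e2fsprogs; a sketch under the assumption that the offset lands in /dev/sda1 (the block size and partition start sector are whatever your system actually reports):

        # 1) which partition? compare the byte offset against the partition table
        sudo fdisk -l /dev/sda                           # start/end sectors, in 512-byte units

        # 2) within that partition, convert the remaining bytes to a filesystem block
        sudo tune2fs -l /dev/sda1 | grep 'Block size'    # e.g. 4096
        # block = (1000000 - partition_start_bytes) / block_size, then:
        sudo debugfs -R 'icheck BLOCK' /dev/sda1         # block -> inode, if allocated
        sudo debugfs -R 'ncheck INODE' /dev/sda1         # inode -> pathname(s)

    If icheck reports no owning inode, the block belongs to free space or filesystem metadata.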

    Read the article

  • Should I care about JUnit redundancy when using setUp() with the @Before annotation?

    - by c_maker
    Even though developers have switched from JUnit 3.x to 4.x, I still see the following 99% of the time:

        @Before
        public void setUp() { /* some setup code */ }

        @After
        public void tearDown() { /* some cleanup code */ }

    Just to clarify my point: in JUnit 4.x, when the runners are set up correctly, the framework will pick up the @Before and @After annotations no matter what the method is named. So why do developers keep using the conventional JUnit 3.x names? Is there any harm in keeping the old names while also using the annotations (other than that it makes me feel like devs don't know how this really works and, just in case, use the same name AND annotate as well)? Is there any harm in changing the names to something more meaningful, like eachTestMethod() (which reads nicely with @Before, 'before each test method') or initializeEachTestMethod()? What do you do, and why? I know this is a tiny thing (and may be unimportant to some), but it is always in the back of my mind when I write a test and see this. I want to either follow this pattern or not, but I want to know why I am doing it, and not just because 99% of my fellow developers do it.
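
    For what it's worth, a minimal JUnit 4 sketch with the annotation-driven names the question suggests; the Account class under test is a made-up stand-in:

        import static org.junit.Assert.assertTrue;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        public class AccountTest {
            static class Account { boolean isEmpty() { return true; } } // hypothetical class under test

            private Account account;

            @Before
            public void initializeEachTestMethod() {
                account = new Account(); // runs before every @Test, regardless of the method name
            }

            @After
            public void cleanUpEachTestMethod() {
                account = null;          // runs after every @Test
            }

            @Test
            public void startsEmpty() {
                assertTrue(account.isEmpty());
            }
        }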

    Read the article

  • Should all public methods in an abstract class be marked virtual?

    - by Justin Pihony
    I recently had to update an abstract base class in some OSS I was using to make it more testable, by making its methods virtual (I could not use an interface, as it combined two). This got me thinking: should I mark only the methods I needed as virtual, or should I mark every public method/property virtual? I generally agree with Roy Osherove that every method should be made virtual, but I came across an article that got me thinking about whether this is necessary. I'll limit this to abstract classes for simplicity (whether all concrete public methods should be virtual is even more debatable, I'm sure). I can see why you might want to allow a subclass to use a method but not override its implementation. However, as long as you trust that the Liskov Substitution Principle will be followed, why would you not allow it to be overridden? Marking a method abstract forces an override anyway, so it seems to me that all public methods inside an abstract class should indeed be marked virtual. However, I wanted to ask in case there is something I might not be considering. Should all public methods within an abstract class be made virtual?

    Read the article

  • Use HAProxy or Nginx to Load Balance between VPS

    - by xperator
    I want to load-balance, with failover, across multiple VPS web servers hosted with different providers. I've heard that for HAProxy you need multiple servers on the same subnet, plus a shared (virtual) IP address between the load balancers. That's not possible in my case, because every VPS is on a different node/network. Is there a way to use HAProxy in this kind of setup? (Please explain briefly how; I don't just want to hear "yes".) What about Nginx? Is it possible to achieve the same result with Nginx when the servers are located on different networks? I know about round-robin DNS, but it doesn't provide a real failover solution, nor load balancing between the servers.
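
    On the Nginx side, nothing in the upstream mechanism requires the backends to share a subnet; they are just reachable addresses. A minimal sketch (the hostnames are placeholders), with one VPS kept as a passive backup:

        upstream vps_backends {
            server vps1.example.com:80 max_fails=3 fail_timeout=30s;
            server vps2.example.com:80 max_fails=3 fail_timeout=30s;
            server vps3.example.com:80 backup;    # used only when the others are down
        }

        server {
            listen 80;
            location / {
                proxy_pass http://vps_backends;
            }
        }

    The machine running this config is of course still a single point of failure, which is exactly where the shared-IP requirement (or round-robin DNS across two balancers) comes back in.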

    Read the article
