Search Results

Search found 71953 results on 2879 pages for 'load data infile'.


  • Alternatives for load balancing Tomcat webapp?

    - by Mahriman
    We wish to enable some form of load balancing for a Tomcat webapp we're currently using. Unfortunately, I know almost nothing about Tomcat load balancing, clustering and so on. Can anyone share resources that cover the different alternatives, give some handy pointers (maybe some solutions work better in certain types of environment?) or just some tips on solutions to try out? We're currently running Tomcat 5.5, if that makes any difference in features; however, there are no critical obstacles to upgrading to 6.

    Read the article

  • Servers behind load balancer

    - by Tom
    We have a Cisco hardware load balancer with two web servers behind it. We'd like to force some URLs to only be served by one of the machines. Firstly, is this the job of the load balancer? Or would a better approach be to create a subdomain such as http://assets.example.com which would automatically be routed to one of the servers?

    Read the article

  • Do I need a dedicated server for load balancing?

    - by Ben
    I'm completely new to the concept of load balancing, so I hope this isn't a stupid question; I've been searching around and I'm having a hard time understanding this. To my understanding, in order to load balance, I need a separate machine with an IP address I can direct all traffic to. I initially thought I needed to rent 3 dedicated servers: one for load balancing and the other two as backend servers. Would a dedicated server be too much for a load balancer, or do hosting companies have special types of machines for that purpose? Then I read somewhere else that I can install load-balancing software on both of the two servers and configure it in a way that doesn't require me to rent another machine/dedicated server for load balancing. So I'm a bit confused about how to actually implement a load balancer and whether or not I need a dedicated server for the sole purpose of acting as a load-balancing machine. Also, I was recommended to use HAProxy, so I'll be heading in that direction for load balancing.

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what NoSQL is. In this article we will take a quick look at one of the four most important buzz words around Big Data: Hadoop.

    What is Hadoop?

    Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under the Apache V2 license. It runs applications on large clusters of commodity hardware and processes thousands of terabytes of data on thousands of nodes. Hadoop is inspired by Google's MapReduce and Google File System (GFS) papers. The major advantage of the Hadoop framework is that it provides reliability and high availability.

    What are the core components of Hadoop?

    There are two major components of the Hadoop framework, and each of them handles one of its two most important tasks.

    Hadoop MapReduce is the method used to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes its chunk locally. Once a commodity server has processed its data, it sends the result back to the main server, where everything is collected. This is effectively how we process large data sets effectively and efficiently. (We will understand this in tomorrow's blog post.)

    Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between HDFS and any other file system: when we move a file onto HDFS, it is automatically split into many small pieces. These small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance and high availability. (We will understand this in the day after tomorrow's blog post.)

    Besides the above two core components, the Hadoop project also contains the following modules:

    Hadoop Common: common utilities for the other Hadoop modules
    Hadoop YARN: a framework for job scheduling and cluster resource management

    There are a few other projects related to Hadoop (like Pig and Hive) which we will gradually explore in later blog posts.

    A Multi-node Hadoop Cluster Architecture

    Now let us quickly look at the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker (slave) nodes. As discussed earlier, the entire cluster contains two layers: the MapReduce layer and the HDFS layer. Each of these layers has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and a TaskTracker. It is also possible for a slave or worker node to be only a data node or only a compute node; as a matter of fact, that is a key feature of Hadoop. In this introductory blog post we will stop here while describing the architecture of Hadoop. In a future blog post of this series we will explore the various components of the Hadoop architecture in detail.

    Why Use Hadoop?

    There are many advantages of using Hadoop. Let me quickly list them over here:

    Robust and Scalable – We can add new nodes as needed, as well as modify them.
    Affordable and Cost Effective – We do not need any special hardware for running Hadoop. We can just use commodity servers.
    Adaptive and Flexible – Hadoop is built keeping in mind that it will handle structured and unstructured data.
    Highly Available and Fault Tolerant – When a node fails, the Hadoop framework automatically fails over to another node.

    Why is Hadoop named Hadoop?

    In 2005 Hadoop was created by Doug Cutting and Mike Cafarella while working at Yahoo, and Doug Cutting named Hadoop after his son's toy elephant.

    Tomorrow

    In tomorrow's blog post we will discuss the buzz word MapReduce.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
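
    As a rough illustration of the MapReduce idea described above (not code from the post), here is a minimal word-count mapper and reducer for Hadoop Streaming, written in Python; the file names and input paths are illustrative only:

        #!/usr/bin/env python3
        # mapper.py - emits "word<TAB>1" for every word read from stdin
        import sys

        for line in sys.stdin:
            for word in line.split():
                print(word + "\t1")

        #!/usr/bin/env python3
        # reducer.py - Hadoop sorts mapper output by key, so counts for one word arrive adjacent
        import sys

        current_word, current_count = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t", 1)
            if word != current_word:
                if current_word is not None:
                    print(current_word + "\t" + str(current_count))
                current_word, current_count = word, 0
            current_count += int(count)
        if current_word is not None:
            print(current_word + "\t" + str(current_count))

    Locally you can simulate the whole flow with cat input.txt | ./mapper.py | sort | ./reducer.py; on a cluster, Hadoop Streaming wires the same two scripts into the distributed map and reduce phases, which is exactly the split-process-locally-then-collect pattern described above.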

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site- and location-specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets, project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like an organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated with multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis, organizing daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, for example routes, pipelines and more.

    The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups like well base attributes (well IDs, well type, well structure, key characterizing measures, and more) and well geometry, and multi-row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partners' sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

    Site Hub can store any type of unstructured data associated with a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more.

    For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub.

    Site locations (addresses or geographical coordinates) are also managed, with out-of-the-box address geocoding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area-specific information, for example economical, political, risk, weather, logistic, traffic information and more.

    Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10,000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and how it will be accessed. I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update:

        `(<csite> <csite> <csite>)

    where <csite> would be

        (1362501715 . <site>)

    At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list of questions:

        `("codereview" <cquestion> <cquestion> <cquestion>)

    Each <cquestion> is, you guessed it, a cons of a question with its last update time:

        `(1362501715 . <question>)
        `(1362501720 . <question>)

    <question> is a cons of a question structure and a list of answers (again, consed with their last update time):

        `(<question-structure> <canswer> <canswer> <canswer>)

    and each <canswer> is

        `(1362501715 . <answer-structure>)

    This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love at all). The explicit conses are likely unnecessary, but it helps my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into

        (<epoch-time> <api-param> <cquestion> <cquestion> ...)

    Concerns:

    - Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could and I don't think the dataset is that large in the first place (for normal use), since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each node inherits its last-update time from its children and so on down the tree. To what extent this cull should take place: I'm not sure.)
    - Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list?
    - Do you have any other suggestions as to what a better structure might look like?
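
    Purely for illustration (and not in Emacs Lisp, so none of this is the mode's actual code), the same timestamp-keyed, nested cache shape can be sketched with dictionaries; all names here are made up:

        import time

        # cache shape: {api_param: {"updated": epoch, "questions": {qid: {...}}}}
        stack_api_cache = {}

        def put_question(api_param, qid, question, answers):
            """Store a question and its answers, stamping each level with its update time."""
            site = stack_api_cache.setdefault(api_param, {"updated": 0.0, "questions": {}})
            now = time.time()
            site["questions"][qid] = {"updated": now, "question": question, "answers": answers}
            site["updated"] = now  # the parent inherits its newest child's update time

        def cull(max_age_seconds):
            """Drop whole sites whose newest update is older than max_age_seconds."""
            cutoff = time.time() - max_age_seconds
            stale = [p for p, s in stack_api_cache.items() if s["updated"] < cutoff]
            for param in stale:
                del stack_api_cache[param]

    The point of the sketch is the access pattern: lookups go key by key rather than scanning a list, which is the main trade-off raised in the concerns above.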

    Read the article

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    The white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. "Bulk Load Methods" and "Other Minimally Logged and Metadata Operations" provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. After this background knowledge, the white paper describes how these methods can be [...]
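
    The excerpt above is about server-side bulk load paths (BULK INSERT, bcp, SSIS and friends); purely as a small client-side illustration of the batching idea, here is a hedged pyodbc sketch that sends parameter batches instead of row-by-row inserts. The connection string, table and columns are placeholders:

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
            "DATABASE=StageDB;UID=loader;PWD=secret"
        )
        cur = conn.cursor()
        cur.fast_executemany = True  # batch the parameters on the client side

        rows = [(i, "name-%d" % i) for i in range(100000)]  # stand-in for real data
        cur.executemany("INSERT INTO dbo.StageTable (id, name) VALUES (?, ?)", rows)
        conn.commit()
        conn.close()

    This does not by itself produce the minimally logged behavior the white paper describes; that still depends on the server-side settings and bulk methods it covers.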

    Read the article

  • How do you handle the fetchxml result data?

    - by Luke Baulch
    I have avoided working with fetchxml because I have been unsure of the best way to handle the result data after calling crmService.Fetch(fetchXml). In a couple of situations, I have used an XDocument with LINQ to retrieve the data from this data structure, such as:

        XDocument resultset = XDocument.Parse(_service.Fetch(fetchXml));
        if (resultset.Root == null || !resultset.Root.Elements("result").Any())
        {
            return;
        }
        foreach (var displayItem in resultset.Root.Elements("result")
                     .Select(item => item.Element(displayAttributeName)).Distinct())
        {
            if (displayItem != null && displayItem.Value != null)
            {
                dropDownList.Items.Add(displayItem.Value);
            }
        }

    What is the best way to handle fetchxml result data so that it can be easily used? Applications such as passing these records into an ASP.NET datagrid would be quite useful.

    Read the article

  • Generate Entity Data Model from Data Contract

    - by CSmooth.net
    I would like to find a fast way to convert a Data Contract to an Entity Data Model. Consider the following Data Contract:

        [DataContract]
        class PigeonHouse
        {
            [DataMember]
            public string housename;
            [DataMember]
            public List<Pigeon> pigeons;
        }

        [DataContract]
        class Pigeon
        {
            [DataMember]
            public string name;
            [DataMember]
            public int numberOfWings;
            [DataMember]
            public int age;
        }

    Is there an easy way to create an ADO.NET Entity Data Model from this code?

    Read the article

  • Detecting a Lightweight Core Data Migration

    - by hadronzoo
    I'm using Core Data's automatic lightweight migration successfully. However, when a particular entity gets created during a migration, I'd like to populate it with some data. Of course I could check if the entity is empty every time the application starts, but this seems inefficient when Core Data has a migration framework. Is it possible to detect when a lightweight migration occurs (possibly using KVO or notifications), or does this require implementing standard migrations? I've tried using the NSPersistentStoreCoordinatorStoresDidChangeNotification, but it doesn't fire when migrations occur.

    Read the article

  • Weird problem with load() or live()

    - by silversky
    I load a page with load() and then I dynamically create a tag. Then I use live() to bind a click event that fires a function. At the end I call unload(). The problem is that when I load the same page again (without a refresh), the function fires twice on click. If I exit again (again with unload()) and load the page again, it fires 3 times on click, and so on. A sample of my code is:

        $('#tab').click(function() {
            $('#formWrap').load('newPage.php');
        });

        $('div').after('<p class="ctr" ></p>');

        $('p.ctr').live('click', function(e) {
            if ($(e.target).is('[k=lf]')) {
                console.log('one');
                delete ($this);
            }
            else if ....
        });

        function delete () {
            $.post('update.php', data);
        }

    I have other $.post calls on this page, including in the live function above, and they all work well. The one above also works, but like I said, on the second load it fires twice, then 3 times, and so on. The weird part for me is that if I replace the console.log with console.log('two'), save the page and load the page without a refresh, it fires on different rows: one, two. If I unload the page, replace the console.log with console.log('three') and load again, it fires one, two and three. I tried to use:

        $.ajax({ url: 'updateDB.php', data: data, type: 'POST', cache: false });
        $.ajaxSetup({ cache: false });
        header("Cache-Control: no-cache");

    None of this is working, and I have this problem only with this function. What do you think could be the reason? Does it remember the previous action and fire it again?

    Read the article

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine learning analysis consulting projects on the side. The data I have access to is usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know, and what are some good resources to learn from? Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?) I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with? Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale. Any other suggestions on things to learn would be great!

    Read the article

  • What is the best way to store long-term data on iPhone: Core Data or SQLite?

    - by AmitSri
    Hi all, I am working on an iPhone app targeting the 3.1.3 and later SDK. I want to know the best way to store a user's long-term data on the iPhone without losing performance, consistency and security. I know that I can use Core Data, plists and SQLite for storing user-specific data in custom formats, but I want to know which one is good to use without compromising app performance and scalability in the near future. Thanks

    Read the article

  • How to view existing data in Core Data?

    - by mshsayem
    Well, maybe this question is silly, but I couldn't find a way (except programmatically). I built a project (for iPhone OS 3.0) which uses Core Data. The xcdatamodel file shows the schema description, but I want to see the data in tabular form (like Management Studio for MS SQL Server or phpMyAdmin for MySQL). Is there any way (except coding)? If so, what is it? Also, which file (on disk/device) is that data stored in? [I built the tutorial (from Apple) on Core Data, named Locations. They used this line somewhere in the code:

        NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"Locations.sqlite"]];

    But I did not see any "xxxxx.sqlite" file in the project location (nor on the disk).]

    Read the article

  • Efficient alternatives to merge for larger data.frames R

    - by Etienne Low-Décarie
    I am looking for an efficient (both computer-resource-wise and learning/implementation-wise) method to merge two larger data frames (size: 1 million / 300 KB RData file). "merge" in base R and "join" in plyr appear to use up all my memory, effectively crashing my system. Example: load the test data frame and try

        test.merged <- merge(test, test)

    or

        test.merged <- join(test, test, type = "all")

    The following post provides a list of merge functions and alternatives: How to join data frames in R (inner, outer, left, right)? The following allows object size inspection: https://heuristically.wordpress.com/2010/01/04/r-memory-usage-statistics-variable/ Data produced by anonym

    Read the article

  • How to load file into mysql DB on a shared hosting platform?

    - by Vasu
    A process running on my machine collects data from various websites and stores it in the local MySQL db. The same data is exported using SELECT INTO OUTFILE and FTPed to the shared host every few hours. My hosting provider doesn't allow LOAD DATA INFILE to be executed on the shared host. What are my other options for automated/scheduled loads into the MySQL db on my shared host?
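
    One common workaround is to replay the exported file as batched INSERTs from a small script. A minimal sketch using mysql-connector-python is below; it assumes the export is the tab-separated format SELECT ... INTO OUTFILE produces, and the table and column names are made up:

        import csv
        import mysql.connector  # pip install mysql-connector-python

        conn = mysql.connector.connect(host="shared-host", user="dbuser",
                                       password="secret", database="mydb")
        cur = conn.cursor()
        insert_sql = "INSERT INTO snapshots (site, captured_at, value) VALUES (%s, %s, %s)"

        with open("export.tsv", newline="") as f:
            batch = []
            for row in csv.reader(f, delimiter="\t"):  # NULLs exported as \N need extra handling
                batch.append(row)
                if len(batch) >= 1000:                 # chunked inserts keep round trips down
                    cur.executemany(insert_sql, batch)
                    conn.commit()
                    batch = []
            if batch:
                cur.executemany(insert_sql, batch)
                conn.commit()
        conn.close()

    Scheduled with cron on the shared host (or run from the collecting machine if remote MySQL access is allowed), this stays within what most shared hosts permit, at the cost of being slower than LOAD DATA INFILE.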

    Read the article

  • find the top K most frequent numbers in a data stream

    - by Jin
    This is more of a data structure question than a coding question. If I am fetching a data stream, i.e., I keep receiving float numbers one at a time, how should I keep track of the top K most frequent numbers? Here my memory is 4 GB and I prefer to have less communication with the hard drive unless necessary. I think a heap is good for updating the max and min. How should I design the data structure? Thanks
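
    A minimal sketch of the exact-counting approach in Python (assuming the number of distinct values fits in the 4 GB of memory; if it does not, an approximate scheme such as Space-Saving or a Count-Min sketch is the usual next step):

        import heapq
        from collections import Counter

        def top_k_frequent(stream, k):
            """Consume the stream one value at a time and return the k most frequent."""
            counts = Counter()
            for x in stream:          # floats arrive one at a time
                counts[x] += 1
            # heapq.nlargest keeps only k candidates at a time: O(n log k)
            return heapq.nlargest(k, counts.items(), key=lambda kv: kv[1])

        # example: top_k_frequent(iter([1.0, 2.5, 1.0, 3.0, 1.0, 2.5]), 2)
        # -> [(1.0, 3), (2.5, 2)]

    The heap only enters at the end here; keeping a bounded structure continuously updated is what the approximate streaming algorithms do when the full count table cannot be held in memory.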

    Read the article

  • Azure load-balancing strategy

    - by growse
    I'm currently building out a small web deployment using VM instances on MS Azure. The main problem I'm facing at the moment is trying to figure out how to get the load balancing to detect if a particular VM has failed and not route traffic to that VM. As far as I can tell, there are only two load-balancing options:

    1. Have multiple VMs (web01, web02, web03 etc.) within the same 'cloud service' behind a single VIP, and configure the endpoints to be load balanced.
    2. Create multiple 'cloud services', put a single web VM in each and create a traffic manager service across all these services.

    It appears that (1) is extremely simplistic and doesn't attempt to do any host failure detection. (2) appears to be much more varied, but requires me to put all my web servers in their own individual cloud services. Traffic Manager appears to be aimed much more at a geographic failover scenario, where you have multiple cloud services across different regions. This approach also has the disadvantage that my web servers won't be able to communicate with my databases on internal IP addresses, unlike scenario (1). What's the best approach here?

    Read the article

  • High load without explanation

    - by Sebastian
    I have a very high load on my machine and don't know what is responsible or how to find out. The machine runs a JBoss app server and MySQL. Here is top output from that user at peak time:

        top - 16:23:01 up 101 days, 6:50, 1 user, load average: 23.42, 21.53, 24.73
        Tasks: 9 total, 1 running, 8 sleeping, 0 stopped, 0 zombie
        Cpu(s): 17.2%us, 1.6%sy, 0.0%ni, 80.4%id, 0.1%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem: 16440784k total, 16263720k used, 177064k free, 151916k buffers
        Swap: 16780872k total, 30428k used, 16750444k free, 8963648k cached

          PID USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        27344 b    40  0 16.0g 6.5g  14m S  169 41.7  1184:09 java
         6047 b    40  0 11484 1232 1228 S    0  0.0  0:00.01 mysqld_safe
         6192 b    40  0  604m 182m 4696 S    0  1.1 93:30.40 mysqld
         7948 b    40  0 84036 1968 1176 S    0  0.0  0:00.07 sshd
         7949 b    40  0 14004 2900 1608 S    0  0.0  0:00.03 bash
         7975 b    40  0  8604 1044  840 S    0  0.0  0:00.44 top

    The CPU usage of the java process is normal. The peaks only show up when I deploy a certain web application. Could the resulting network traffic boost the load in such a way that I don't see it in top?

    Read the article

  • Load Testing a Security/Gateway Appliance

    - by Joel Coel
    In a couple of weeks I will be load testing a security/gateway appliance. We're a small residential college, and that "residential" means the traffic moving through the appliance is a bit like the Wild West. We have everything from Facebook to World of Warcraft, BitTorrent to Netflix, or Halo to YouTube... basically anything you might find in the home of a high-school or college-aged person. Somewhere in there some real academic work gets done as well. We rely on our current appliance for traffic shaping, antivirus, malware filtering, intrusion detection on our servers, logging and abuse reporting, and even some content filtering. All this puts a decent load on it when we have students around, and I'm concerned about the ability of the new candidate to keep up. On paper it should handle things, but I'm worried. Prior experience is that vendors greatly over-report what an appliance can handle. The product also includes a licensed session limit, and I'm also worried that just a few misbehaving students could unwittingly bring us to that limit and cause service disruptions. I need to know this will work for our campus in order to commit to it. Going a performance level higher in that product takes the pricing way out of line with what we expect and have done in the past. What I need is a good way to load test this guy. My problem is that our current level of summer traffic is less than one percent of what it will be when students come back just six weeks from now. Any ideas on how to really stress this thing and see what it can do, in a way that will give me some clear idea of how it will scale for our campus? For the curious, I'm looking at a Watchguard 515, but it could be anything. If I were evaluating a competitor, I'd ask the same question.

    Read the article

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load

    I am hosting several (~30) different sites on one server with apache2 + mod_fastcgi + suexec + php5. The sites have different loads and different execution times for their scripts (some of them process a request for 5-7 seconds, some in <1 sec). Sometimes when a single site receives a very high load (all PHP instances of this site are created and in use), the whole Apache server hangs. Apache (worker MPM) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one that has the high load and has quickly reached its process limits... A restart of Apache solves the problem. Config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth = 100, but it didn't change anything...) Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole of Apache, or a separate queue for every defined PHP application (suexec site)? I would like to achieve something like this: when one site receives a high load and its queue is full, the server bounces the next request, but only for this one site. Other sites should work properly...

    Read the article

  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 16 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One out of the several machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below). Mem has about 1.5 GB free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1557 mysql  20  0 1829m 374m 6448 S 14.3  5.4 11:15.09 mysqld
         6655 apache 20  0  416m  49m 3744 S  9.3  0.7  0:04.85 httpd
        27683 apache 20  0  421m  54m 3708 S  9.0  0.8  0:00.99 httpd
         6682 apache 20  0  424m  57m 3788 S  8.3  0.8  0:03.81 httpd
        16816 apache 20  0  419m  51m 3760 S  4.3  0.7  0:04.09 httpd
        22182 apache 20  0  417m  50m 3756 S  1.7  0.7  0:06.34 httpd
          219 root   20  0     0    0    0 S  0.3  0.0  0:00.34 kworker/7:1
          699 root   20  0     0    0    0 S  0.3  0.0  0:00.40 kworker/3:1
            1 root   20  0 19376 1508 1212 S  0.0  0.0  0:00.29 init
            2 root   20  0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            3 root   20  0     0    0    0 S  0.0  0.0  0:00.71 ksoftirqd/0

    Read the article

  • Cisco Catalyst 3550 + Alteon 184 Load-Balancing Issues

    - by upkels
    I have just deployed a couple Cisco Catalyst 3550 switches, and a couple Alteon 184 Web Switches for load-balancing. I can ping all RIPs and VIPs to/from the Alteon.

    Topology Before: (server) <- (Alteon) <- (Internet)
    Topology Now: (server) <- (3550) <- Alteon <- (Internet)

    Cisco Port Configuration (Alteon Uplink Port):

        description LB_1_PORT_9_PRIMARY
        switchport access vlan 10
        switchport mode access
        switchport nonegotiate
        speed 100
        duplex full

    Alteon Port 9 Configuration (VLAN 10 WAN):

        >> Main# /c/port 9/cur
        Current Port 9 configuration: enabled pref fast, backup gig, PVID 10, BW Contract 1024 name UPLINK
        >> Main# /c/port 9/fast/cur
        Current Port 9 Fast link configuration: speed 100, mode full duplex, fctl none, auto off

    Cisco Configuration (Load-Balanced Servers Port):

        description LB_1_PORT_1_PRIMARY
        switchport access vlan 30
        switchport mode access
        switchport nonegotiate
        speed 100
        duplex full

    Alteon Port 1 Configuration (VLAN 30 LOAD-BALANCED LAN):

        >> Main# /c/port 1/cur
        Current Port 1 configuration: enabled pref fast, backup gig, PVID 30, BW Contract 1024 name LB_PORT_1
        >> Main# /c/port 1/fast/cur
        Current Port 1 Fast link configuration: speed 100, mode full duplex, fctl both, auto on

    Each of my servers is on vlan 10 and 30, properly communicating. I have tried to turn on VLAN tagging on the Alteon, however it seems to cause all communications to stop working. When I tcpdump -i vlan30 on any of the webservers, I see normal ARP communications, and some STP communications, which may or may not be part of the problem:

        ...
        15:00:51.035882 STP 802.1d, Config, Flags [none], bridge-id 801e.00:11:5c:62:fe:80.8041, length 42
        15:00:51.493154 IP 10.1.1.254.33923 > 10.1.1.1.http: Flags [S], seq 707324510, win 8760, options [mss 1460], length 0
        15:00:51.493336 IP 10.1.1.1.http > 10.1.1.254.33923: Flags [S.], seq 3981707623, ack 707324511, win 65535, options [mss 1460], length 0
        15:00:51.493778 ARP, Request who-has 10.1.3.1 tell 10.1.3.254, length 46
        etc...

    I'm not sure if I've provided enough information, so please let me know if any more is necessary. Thank you!

    Read the article

  • ubuntu's average load never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. Those of the virtual machines that are totally idle with no services (except syslog and the other "small" standard stuff of a fresh installation) show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15 averages. When there are "real" applications running, the load averages behave perfectly normally, but they never fall below the values mentioned. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way. (Notice how load15 behaves nicely, below 0.05, for only a short time in the right half of the pic.) Unfortunately I don't know what diagnostic outputs might be helpful for you, so here's some default stuff:

        # top
        top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
        Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 1019464k total, 73452k used, 946012k free, 6140k buffers
        Swap: 0k total, 0k used, 0k free, 22504k cached

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           995         72        923          0          6         21
        -/+ buffers/cache:          43        951
        Swap:            0          0          0

        # iostat -x /dev/vda
        Linux 3.2.0-32-virtual (vm3)   11/15/2012   _x86_64_   (2 CPU)

        avg-cpu:  %user %nice %system %iowait %steal %idle
                   0.25  0.00    0.65    0.20   0.24 98.66

        Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
        vda       0.14   0.12 0.51 0.22  6.74  1.46    22.50     0.02 23.26   20.64   29.30  7.63  0.56

    Need something else? Has anyone ever seen this behavior? Might this be a bug in kvm/ubuntu/kernel 3.x in the end? Thanks a lot!

    Read the article
