Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 502/588

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I'm not interested in. So I usually make my own average row a few rows above the table to use for my charts. That's not too much work, but there's another problem: I often add a few more people's worth of data to the PivotTables' source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don't change location and encompass the newly added (or removed) data? Thanks

    Read the article

  • One Windows Domain workstation can ping gateway but gets no internet access

    - by dindeman
    One of the (Windows XP SP3) workstations of our Windows domain could no longer access the internet; the problem appeared suddenly, overnight. The domain controllers (there are three of them) are all running Windows Server 2008. First I compared the output of ipconfig /all on the faulty workstation with the output of a working workstation, and it was just fine, as it had always been. In particular, the default gateway was correct and always remained pingable from the faulty workstation. I guessed that something was wrong with the DHCP service, so I restarted the DHCP server service on all three of our DCs as well as the DHCP client service on the faulty workstation. This didn't solve the issue. I then thought of renewing the DHCP lease with ipconfig /release and ipconfig /renew, and here is my first question: why did this never work? The same IP address (192.168.0.45) kept being assigned despite all my attempts to renew it (note that all our workstations get their TCP/IP settings automatically). Even after leaving the domain and changing the computer name, the same address was obtained yet again... Anyway, I then proceeded to switch the TCP/IP configuration for that machine manually to another free, valid IP address (192.168.0.41)... and the internet access came back! I then cleared any traces of the previous IP from the DHCP leases list and the DNS tables of our DCs and, after setting the TCP/IP configuration back to 'automatic', the new lease (192.168.0.41) was finally granted, along with internet access. My second question: what went suddenly wrong with the original IP address?

    Read the article

  • Using Truecrypt to secure mySQL database, any pitfalls?

    - by Saul
    The objective is to secure my database data against server theft: the server is at a business office location with normal premises locks and a burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen the data would be unavailable, as it is encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data in the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised; but as a sanity check, is there any good reason not to do this?

    @James: I'm thinking of a theft scenario, where it's not going to be powered down nicely and so is likely to crash any DB transactions running. But then if someone steals the server I'm going to need to rely on my off-site backup anyway.

    @tomjedrz: it's kind of all sensitive - individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, but it means that almost everything in the database would need encryption... so I figured it is better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there's got to be a key somewhere on the server, I presume, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full, then hourly deltas using rdiff) into a directory that is also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect over FTPS and sync down the backup, again into a TrueCrypt-mounted partition.
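    For illustration, a minimal sketch of the layout described above, assuming the TrueCrypt command-line interface; the volume path, mount point and exact flags are hypothetical and vary a little between TrueCrypt versions:

        # Mount the encrypted volume first (prompts for the passphrase);
        # exact flags vary by TrueCrypt version -- check "truecrypt --help".
        truecrypt -t /srv/secure.tc /mnt/secure

        # my.cnf then points MySQL at the encrypted mount:
        #   [mysqld]
        #   datadir = /mnt/secure/mysql

        # Start MySQL only after the volume is mounted:
        /etc/init.d/mysql start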

    Read the article

  • Network problems that might be related to NAT

    - by nenne
    Hello, I have an odd setup where there is a router (Router 2) routing between network 1 and network 2, and another router (Router 1) with NAT for internet access that routes between the internet and network 1. There are people on both of these networks. All the clients on network 1 can access the internet; the clients on network 2 can access the clients on network 1 and can also access Router 1. Router 1 can also access clients on network 2. However, the clients on network 2 cannot reach the internet. I cannot think of anything in the routing tables that would hinder this, since Router 1 can reach the clients on network 2 and vice versa. Can it be that NAT establishes the session between Router 2 and the internet host instead of between the client and the internet host? Does anyone have any ideas? I have very little control over Router 2 (it's basically an ISP VPN service) but full access to Router 1. It's an Ubuntu 10.04 box with iptables for the NAT/firewall setup.
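    One hedged guess worth checking on Router 1: if the MASQUERADE rule is restricted to network 1's source range, traffic originating from network 2 is forwarded but never translated, which produces exactly this symptom. A sketch (the 192.168.2.0/24 subnet is hypothetical; substitute network 2's real range):

        # Confirm forwarding is on, then make sure NAT also matches
        # sources arriving from network 2, not just network 1.
        sysctl net.ipv4.ip_forward
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE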

    Read the article

  • Which default Database Systems come installed in Microsoft VS2010 Express?

    - by Tonygts
    I'd appreciate any advice on the following questions:

    1. Which database systems (MS SQL 2008, MS SQL Compact, or others) come installed with the VS2010 Express edition?
    2. SQL Server 2008 R2 Express is free; can we install it and integrate it with VS2010 Express?
    3. How do I uninstall the databases that already come installed?

    I have installed VS2010 Express on Windows 7; just the VS2010 components (VB, C#, C++ and Web Developer), without installing anything else such as SQL Express. In the Control Panel's 'Programs and Features' window, the installed list is shown below:

        Microsoft SQL Server 2008 Setup Support Files
        Microsoft SQL Server 2008 Browser
        Microsoft SQL Server VSS Writer
        Microsoft SQL Server Database Publishing Wizard 1.4
        Microsoft ASP.NET MVC2 - VWD Express 2010 Tools
        Microsoft SQL Server 2008 Management Objects
        Microsoft SQL Server Compact 3.5 SP2 ENU
        Microsoft SQL Server System CLR Types
        Microsoft Silverlight 3 SDK
        Microsoft ASP.NET MVC 2
        Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools
        Visual Studio 2010 Tools for SQL Server Compact 3.5 SP2 ENU
        Web Deployment Tool
        Microsoft Visual Web Developer 2010 Express - ENU
        Microsoft Visual C++ 2010 Express - ENU
        Microsoft Visual C# 2010 Express - ENU
        Microsoft Visual Basic 2010 Express - ENU
        Microsoft SQL Server 2008

    As you can see, Microsoft SQL Server 2008 (last line), Microsoft SQL Server Compact 3.5 SP2 ENU near the top, and many related SQL components such as Microsoft SQL Server 2008 Management Objects are installed. These were actually installed by installing VS2010 Express, but I have no idea how to use them or verify their valid existence from VS2010. Also, do I have to uninstall them before I install SQL Server 2008 R2, which I believe is the latest version? And what tool is needed to manage and create data sources and tables?

    Read the article

  • Use both OpenVPN & eth0 together

    - by shadyabhi
    I connect to a VPN using OpenVPN. Once the connection is established, all my traffic goes through tun0. My LAN gateway is 10.100.98.4, so to make apps use my direct internet connection I did:

        sudo route add default gw 10.100.98.4

    But now I can't use tun0. I know this because curl --interface tun0 google.com doesn't give me anything. How do I go about using both connections simultaneously?

    Routing tables:

    Without the VPN running:

        Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
        10.100.98.0     *               255.255.255.0    U     1      0        0 eth0
        default         10.100.98.4     0.0.0.0          UG    0      0        0 eth0

    With the VPN:

        Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
        10.10.0.1       10.10.54.230    255.255.255.255  UGH   0      0        0 tun0
        10.10.54.230    *               255.255.255.255  UH    0      0        0 tun0
        free-vpn.torvpn 10.100.98.4     255.255.255.255  UGH   0      0        0 eth0
        10.100.98.0     *               255.255.255.0    U     1      0        0 eth0
        default         10.10.54.230    0.0.0.0          UG    0      0        0 tun0

    After the route command:

        Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
        10.10.0.1       10.10.54.230    255.255.255.255  UGH   0      0        0 tun0
        10.10.54.230    *               255.255.255.255  UH    0      0        0 tun0
        free-vpn.torvpn 10.100.98.4     255.255.255.255  UGH   0      0        0 eth0
        10.100.98.0     *               255.255.255.0    U     1      0        0 eth0
        default         10.100.98.4     0.0.0.0          UG    0      0        0 eth0
        default         10.10.54.230    0.0.0.0          UG    0      0        0 tun0
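    One way to keep both paths usable side by side is source-based policy routing with iproute2; a minimal sketch, where TUN_IP is tun0's local address (check "ip addr show tun0") and table number 100 is arbitrary:

        # Hypothetical address -- substitute tun0's real local IP.
        TUN_IP=10.10.54.229
        # Give the VPN its own routing table and route tun0-sourced traffic there.
        ip route add default dev tun0 table 100
        ip rule add from "$TUN_IP" table 100
        # The main table's default stays on eth0 (10.100.98.4), so anything
        # bound to tun0's address (e.g. curl --interface tun0) uses the VPN.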

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, and so we cannot block with the actual client IP address (it would be the IP address of the CDN, not the actual client). So, we have $http_x_forwarded_for which contains the IP which we need to block for a given request. Similarly, we cannot use IP tables, as blocking the IP address of the proxied client will have no effect. We need to use nginx to block the requested based on the value of $http_x_forwarded_for. Initially, we tried multiple, simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably. We went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up, and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also if there are any better ways to achieve our goal.
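    For what it's worth, one alternative that avoids if blocks entirely is nginx's geo module, which can key directly off $http_x_forwarded_for (supported since 0.7.27). A sketch, with placeholder addresses and file path, written as it might be installed from a shell:

        # Install a geo-based blocklist instead of many "if" blocks.
        cat > /etc/nginx/conf.d/blocklist.conf <<'EOF'
        geo $http_x_forwarded_for $blocked {
            default          0;
            203.0.113.7      1;
            198.51.100.0/24  1;
        }
        EOF
        # Then, inside the relevant server{} blocks:
        #     if ($blocked) { return 403; }
        nginx -t && nginx -s reload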

    Read the article

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:

    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?)
    - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. PS: Notes about monitoring for DDoS would also be welcomed - perhaps with Nagios? ;)
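    One concrete form of the rate-limit idea above, as a sketch; the thresholds are arbitrary, and note that xt_recent caps --hitcount at 20 by default (the ip_pkt_list_tot module parameter raises it):

        # Track new connections to port 80, and drop sources that open more
        # than 20 new connections within 60 seconds.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --update --seconds 60 --hitcount 20 -j DROP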

    Read the article

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3GB with an index file of 2.5GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory). The table is fairly active, with new records being added all day, but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory? So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:

    - Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index file size, so it might not solve the problem completely, but it would be better than now.
    - Upgrade to Server 200x 64-bit and MySQL 64-bit.
    - Switch to a *nix 64-bit server.

    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks
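    One point that may help in the meantime: if the big table is MyISAM (the separate index file suggests it), its index cache is key_buffer_size, which is a single buffer shared by all connections rather than a per-connection allocation, so raising it does not multiply by 600 connections. A sketch; the 1GB figure is an assumption sized to fit a 32-bit process:

        mysql -u root -p -e "SET GLOBAL key_buffer_size = 1073741824"
        # Watch the cache hit rate afterwards:
        mysql -u root -p -e "SHOW STATUS LIKE 'Key_read%'"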

    Read the article

  • Configuring SQL Server Express 2005

    - by MrTognio
    What's the proper way to configure SQL Server Express 2005 so that a number of clients can connect to the server? I have my application running both on the server machine and the client machines. Given the nature of my application, the clients are branches geographically distant from each other and from the server itself. Every operation the clients record must be reported to the server, because the server needs total control over usage and production. But what should I consider when configuring the connection on both sides, the server and the client? I'm not used to SQL Server (I'm a beginner); through SQL Server Configuration Manager I have set the main options, without success. The problem seems to be related to trusted connections, even though I have set the server to support both Windows and SQL Server authentication. When the client tries to connect to the server using Windows authentication, it displays no tables; when it tries to communicate using a password (SQL Server authentication), tables are successfully displayed but no access is allowed... Thanx in advance!
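    For reference, a hedged checklist sketch of the usual remote-access steps for an Express instance, run from a command prompt on the server; the names assume a default SQLEXPRESS install, and enabling the TCP/IP protocol itself is done in SQL Server Configuration Manager (Network Configuration > Protocols):

        :: Start the browser service so clients can resolve the named instance.
        net start SQLBrowser
        :: Open the firewall for the engine and the browser (XP/2003-era syntax).
        netsh firewall add portopening TCP 1433 "SQL Server"
        netsh firewall add portopening UDP 1434 "SQL Browser"
        :: Quick local check that the instance answers:
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"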

    Read the article

  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema. Before I start, here is an extremely simplified picture of my database. Schema here: http://i43.tinypic.com/2wp5lxz.png In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

    - customer: ~5000, shouldn't grow fast
    - data: 5 million per customer, could double or triple for big customers
    - tag: ~1000, fairly fixed in size
    - data_tag: easily hundreds of millions per customer. Each piece of data can be tagged a lot.

    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES and tagged with a specific TAG on a specific CUSTOMER (very rarely will they involve several customers). Here is the situation: you can imagine that with this kind of data volume I'm facing a challenge in terms of data organization and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better:

    - to stick with this model and manage crazy index optimization? (which involves potentially having billions of rows in the data_tag table)
    - to change the schema and use one data table and one data_tag table per customer? (which involves having 5000 tables in my database)

    I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8GB of RAM), replicated. I only use InnoDB; I also have another server that runs Sphinx. So, knowing all of this, I can't wait to hear your opinion about it. Thanks.
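    Whichever schema wins, that SELECT COUNT shape usually needs a composite index to run as a range scan rather than a table scan. A sketch; the column names are hypothetical, and putting customer, tag and date together on data_tag assumes a touch of denormalization:

        mysql mydb -e "ALTER TABLE data_tag
            ADD INDEX ix_cust_tag_date (customer_id, tag_id, harvested_at)"
        # Verify the query now uses the index (look for a "range" access type):
        mysql mydb -e "EXPLAIN SELECT COUNT(*) FROM data_tag
            WHERE customer_id = 42 AND tag_id = 7
              AND harvested_at BETWEEN '2010-01-01' AND '2010-02-01'"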

    Read the article

  • Access 2010 datasheet view only/relationships unavailable

    - by Luis
    I'm relatively new to MS Access in general and just started working with Access 2010. I've created a new web database with a few tables that I need to relate. First problem: For the life of me, I can't view anything in any view other than datasheet view; everywhere I would expect to be able to change the view, only datasheet view is available. Second problem: I can't change the primary key(s). Presumably I would be able to do this if I could get out of datasheet view and into design view. Third problem: The 'Relationships' button is greyed out. I know these appear to be really simple things but I've been looking for much more time than I'd like to admit trying to figure out how to get unstuck. Update: It would appear that this is happening because it is a 'web database' as I've been able to do all of the above in a new regular database. With this in mind let me ask a different question: Am I able to add relationships and change primary keys in a web database? If so how? More generally, what is the point of a web database?

    Read the article

  • retain last used path to location for saving files in Windows 7

    - by Mark Miller
    I am using Microsoft Office 2010 and Windows 7 on a Dell PC. I am opening a bunch of MSWord files one at a time, copying data tables therein, pasting the data into Excel and saving the Excel files as comma delimited text files. I am creating a separate Excel file for each MSWord file. The path to the folder containing the saved comma-delimited files is quite long, something like this: c:\users\me\aa\bb\cc\dd\ee\ Every time I open Excel and save a new comma-delimited file I have to re-navigate the entire path (c:\users\me\aa\bb\cc\dd\ee). In the past Windows seemed to remember the last used path, saving a lot of tedious key-strokes. In fact, I think Windows did this for me as recently as last week, albeit on a different computer. Can I apply a setting in Windows somewhere asking it to offer the last used path as a default when saving files so I do not have to re-navigate the entire directory structure to save each new comma-delimited file? If I can, how so? Where is the option for specifying that setting? Thank you for any help.

    Read the article

  • Secure NAT setup with iptables

    - by TheBigB
    I have a Debian-running device that needs to act as an internet gateway. On top of that, I want to provide a firewall that not only blocks inbound traffic but also outbound traffic, and I figured iptables should be able to do the job. The problem: I've configured NAT properly (I think?), but once I set the default policy to DROP and add rules to, for instance, allow HTTP traffic from inside the LAN, HTTP does not go through. So basically my rules don't seem to work. Below is the initialization script that I use for iptables. The device has two NICs, respectively eth0 (the WAN interface) and eth1 (the LAN interface).

        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Flush tables
        iptables -F
        iptables -t nat -F

        # Set policies
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP

        # NAT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Allow outbound HTTP from LAN?
        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT

    Can anyone shed some light on this?
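    A hedged guess at one missing piece: fetching HTTP by hostname also needs DNS, and the FORWARD policy deserves to be set explicitly so the intent is clear. Without DNS rules, the clients' lookups die silently and "HTTP is blocked" is the visible symptom:

        iptables -P FORWARD DROP
        # Let LAN clients resolve names (DNS uses UDP and, occasionally, TCP 53).
        iptables -A FORWARD -i eth1 -o eth0 -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 53 -j ACCEPT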

    Read the article

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database running on a large Amazon RDS instance. One of our tables is 2GB in size. If I run any query on that table that takes more than a couple of seconds, any SQL actions performed by our game will fail with the error:

        HTTP/1.1 503 Service Unavailable: Back-end server is at capacity

    This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not run them while our game is up? I'm using MySQL Workbench to run the queries. Here's an example:

        SELECT * FROM BlueBoxEngineDB.Transfer
        WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';

    As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated - Thanks!
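    A sketch of the usual first check: see whether the WHERE clause has index support, and add a composite index if not. The column names come from the query above; run the ALTER off-peak, since it rebuilds the table:

        mysql BlueBoxEngineDB -e "EXPLAIN SELECT * FROM Transfer
            WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete'"
        # If EXPLAIN shows a full scan (type ALL), an index like this should
        # turn the multi-second scan into a quick lookup:
        mysql BlueBoxEngineDB -e "ALTER TABLE Transfer
            ADD INDEX ix_from_status_amount (FromUserId, Status, Amount)"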

    Read the article

  • Server taking too long to respond error

    - by DCJones
    Hi, this is my first post on Server Fault and my first foray into web server configuration.

    The hardware and software:

        CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
        OS: Linux 2.6.18-128.el5
        Memory: 2GB

    Background: I am running a small database (MySQL) of around 1000 records, with each record containing 44 fields. At the start of each day ("00:01") the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox internet browser. All remote PCs are connected to the internet using a minimum 4Gb broadband connection. Each remote PC loads a URL which displays a dynamic page of data that is refreshed every 20 seconds. This is a continual process, 24 hours a day. The problem I am having is that on odd occasions throughout the day the PCs' browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful. Best regards, Dereck

    Server config file: httpd.conf

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 20
        ServerLimit 256
        MaxClients 254
        MaxRequestsPerChild 4000
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 150
        ThreadsPerChild 25
        MaxRequestsPerChild 0
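    A sketch of how to watch Apache's worker usage when the errors occur, assuming mod_status is enabled (it usually is on a stock httpd install):

        curl -s "http://localhost/server-status?auto" | egrep "BusyWorkers|IdleWorkers"
        # BusyWorkers pinned at MaxClients while pages stall would point at the
        # KeepAliveTimeout / MaxClients balance rather than at MySQL.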

    Read the article

  • Getting Run time 1004 error in code

    - by krishna123
    I tried the code provided by VBA Express for combining sheets; while executing, it displays Run Time error 1004: Application Defined or Object Defined Error. My scenario: I have an Excel workbook whose first sheet is "Connection", followed by Sheet1, Sheet2 and so on. I am combining all sheets except the "Connection" sheet by starting with sheet 2. I tried the following line of code to exclude the "Connection" sheet, but it did not work:

        If Not Sheet.Name = "Connection" Then

    Some of the sheets have large data in some cells. Here is the code which I am using; I have highlighted the failing line:

        Sub CopyFromWorksheets()
            Dim wrk As Workbook   'Workbook object - Always good to work with object variables
            Dim sht As Worksheet  'Object for handling worksheets in loop
            Dim trg As Worksheet  'Master Worksheet
            Dim rng As Range      'Range object
            Dim colCount As Integer 'Column count in tables in the worksheets

            Set wrk = ActiveWorkbook 'Working in active workbook

            For Each sht In wrk.Worksheets
                If sht.Name = "Master" Then
                    sht.Delete
                    Exit Sub
                End If
            Next sht

            'We don't want screen updating
            Application.ScreenUpdating = False

            'trg.SaveAs "C:\temp\CPReport1.xls"
            'Add new worksheet as the last worksheet
            Set trg = wrk.Worksheets.Add(After:=wrk.Worksheets(wrk.Worksheets.Count))
            'Rename the new worksheet
            trg.Name = "Master"

            'Get column headers from the first worksheet
            'Column count first
            Set sht = wrk.Worksheets(2)
            colCount = sht.Cells(1, 255).End(xlToLeft).Column
            'Now retrieve headers, no copy&paste needed
            With trg.Cells(1, 1).Resize(1, colCount)
                .Value = sht.Cells(1, 1).Resize(1, colCount).Value
                'Set font as bold
                .Font.Bold = True
            End With

            trg.SaveAs "C:\temp\CPReport1.xls"

            'We can start loop
            'Skip Sheet - Connection
            If Not sht.Name = "Connection" Then
                For Each sht In wrk.Worksheets
                    'If worksheet in loop is the last one, stop execution (it is Master worksheet)
                    If sht.Index = wrk.Worksheets.Count Then
                        Exit For
                    End If
                    'Data range in worksheet - starts from second row as first rows are the header rows in all worksheets
                    Set rng = sht.Range(sht.Cells(2, 1), sht.Cells(65536, 1).End(xlUp).Resize(, colCount))
                    'Put data into the Master worksheet
                    '----------------- Error in below line --------------------------------------------------
                    trg.Cells(65536, 1).End(xlUp).Offset(1).Resize(rng.Rows.Count, rng.Columns.Count).Value = rng.Value
                    '----------------------------------------------------------------------------------------
                Next sht
            End If

            'Fit the columns in Master worksheet
            trg.Columns.AutoFit

            'Dim dest, destyfile
            'dest = "E:\Test_Merge\"
            'destyfile = dest & "_" & trg.Name
            'trg.SaveAs (destyfile)

            'Screen updating should be activated
            Application.ScreenUpdating = True
        End Sub

    Read the article

  • Locate devices within a building

    - by ams0
    The situation: our company is spread across two floors of a building. Every employee has a laptop (MacBook Air or MacBook Pro) and an iPhone. We have static DHCP mappings and DNS resolution, so every phone gets a name like employeeiphone.example.com, every MacBook Air gets employeelaptop.example.com, and every MacBook Pro gets employeelaptop.example.com on the Ethernet interface (the wifi gets a dynamic IP from a small range dedicated for the purpose). We know each and every MAC address of the phones and laptops, since we do static DHCP mapping (the ISC DHCP server runs on Linux). On each floor we have a Netgear stack of two switches, connected to each other via 10GB fiber. No VLANs so far. On every floor there are 4 AirPort Extremes forming a single-SSID network with WPA2 authentication.

    The request: our CTO wants to know who is present on which floor.

    My solution (so far): every switch keeps a table listing MAC addresses and originating ports. On each switch stack, all the MAC addresses coming from the other floor are listed as coming in on port 48 (the fiber link). So I came up with:

    1) Get the table from each switch via SNMP
    2) Filter out the entries associated with port 48
    3) Grep dhcpd.conf, removing all entries not *laptop and not *iphone
    4) Match the two lists for each switch, output in JSON or XML
    5) Present the results on a dashboard for all to see

    I wrote it in bash with a lot of awk and sed. It kinda works, but for some reason I always have stale entries in the switch lookup tables, making it unreliable; some people may have put their laptop to sleep, their iPhones drop connections after a while if not woken up, and so on. I have searched left and right, and we are prepared to spend a little on the project too (RFIDs?). Does anybody do something similar? I can provide the script if needed (although it's really specific to our switches and naming scheme). Thanks!

    P.S. Perhaps this is a question for Stack Overflow? Please move it if so.
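    For reference, step 1 can be a one-liner against the standard BRIDGE-MIB; this walks dot1dTpFdbPort (.1.3.6.1.2.1.17.4.3.1.2), which maps learned MAC addresses to bridge ports. The hostname and community string are placeholders:

        snmpwalk -v 2c -c public switch-floor1 .1.3.6.1.2.1.17.4.3.1.2
        # Note: the values are bridge port numbers, which may need translating
        # to interface names via dot1dBasePortIfIndex on some switches.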

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user, do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks (a sketch of one appears below), and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means you need a list of all existing customers/instances.
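    A minimal provisioning sketch along the lines described above; everything here (paths, DB naming, repo location, cron schedule) is hypothetical and would need adapting:

        #!/bin/bash
        set -e
        CUSTOMER="$1"
        DBPASS="$(openssl rand -hex 12)"

        # Shell account and per-customer database.
        useradd -m -s /bin/bash "$CUSTOMER"
        mysql -e "CREATE DATABASE \`$CUSTOMER\`;
                  GRANT ALL ON \`$CUSTOMER\`.* TO '$CUSTOMER'@'localhost'
                  IDENTIFIED BY '$DBPASS';"

        # Checkout, schema and init data, cron job.
        sudo -u "$CUSTOMER" git clone /srv/app.git "/home/$CUSTOMER/app"
        mysql "$CUSTOMER" < "/home/$CUSTOMER/app/schema.sql"
        echo "0 3 * * * /home/$CUSTOMER/app/cron.sh" | crontab -u "$CUSTOMER" -

        echo "Provisioned $CUSTOMER (db password: $DBPASS)"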

    Read the article

  • Very high Magento/Apache memory usage even without visitors (are we fooled by our hosting company?)

    - by MrDobalina
    I am no server guy, and we have issues with our speed, so I come here asking for advice. We have a VPS with 2 cores and 2GB of RAM at a Magento-specialized hosting company. Over the course of the last weeks our site speed has gotten worse, even though our store is new, has less than 1000 SKUs, and gets not even 100 visitors a day. At magespeedtest.com we only get 1.87 trans/sec @ 2.11 secs each with a mere 5 concurrent users. Our Magento log files are clean, and we have no huge database tables or anything like that. When we look at our server's real-time stats, we see that memory usage jumped from about 34% to 71%, and now 82%, in just a few days, in idle, with no visitors on the site. Our hosting company said that we do not need to worry about that, as it's maybe related to MySQL, which creates buffers (which are maybe not even actually being used); what is important is CPU and swap, and those stats are OK. They also said that the low benchmark scores are caused by bad extensions or template modifications on our side. We are not sure we can trust that statement, as we only have 4 plugins installed (all from aheadWorks and Amasty, which are known to be among the best Magento extension developers). Our template modifications are purely HTML and CSS, with no modifications to the PHP code. Our PageSpeed score is 93/100 in Firebug, and Magento is properly configured, so the problem really only becomes obvious when there are a handful of users on the site at the same time. Can anyone confirm our hosting company's statement about memory usage, and suggest where I can start looking for a solution?
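    A quick way to see who actually holds the memory, run over SSH on the VPS itself (rss here is resident size in KB):

        free -m
        ps -eo rss,pcpu,comm --sort=-rss | head -n 15
        # Note that Linux counts filesystem cache as "used" in naive graphs;
        # the "-/+ buffers/cache" line of free is the number that matters.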

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of c. 1GB+ or running UPDATEs on large MySQL tables, has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec; that took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860MB; now it is only writing about 1MB every 15 seconds. Does anyone have any advice on debugging the issue? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
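    A hedged starting point for diagnosis: watch the disks while one of the slow jobs runs (iostat is in the sysstat package):

        iostat -xm 5
        vmstat 5
        # High %util/await on the MySQL drive with mostly idle CPU points at
        # I/O (or the encryption layer); an idle disk with one core pegged at
        # 100% points at CPU instead.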

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up, and also reloading HAProxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balance in case request-learn didn't take. Unfortunately, this produced scenarios where the prefix would list svr2 but the request was balanced to a different server. I am assuming that's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using cookie in insert mode (rather than as a prefix on the existing cookie), but I suspect it would yield similar results. My question is: which one is checked first, appsession or cookie, and is it an immediate catch after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, uses less memory, is agnostic to reloads, and has much better persistence reliability. Appsession, I assume, uses less CPU, since it's reading rather than writing. (Bonus question: is there a way to inspect the appsession/cookie table map? "show table" on the socket doesn't show anything except stick-tables.) Many thanks in advance, -Nick
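    For the bonus question, a sketch of querying the admin socket, assuming a line like "stats socket /var/run/haproxy.sock level admin" in the global section of haproxy.cfg:

        echo "show table" | socat stdio /var/run/haproxy.sock
        # Per-backend detail (backend name is a placeholder):
        echo "show table mybackend" | socat stdio /var/run/haproxy.sock
        # As noted above, this exposes stick-tables only; appsession entries
        # are not visible this way, as far as I can tell.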

    Read the article

  • yui compressor maven plugin doesn't compress the js files

    - by hanumant
    I am using YUI Compressor to compress the JS files in my web app. I have configured the plugin as indicated on the yui compressor maven plugin site. This is the pom plugin configuration:

        <plugin>
            <groupId>net.sf.alchim</groupId>
            <artifactId>yuicompressor-maven-plugin</artifactId>
            <version>0.7.1</version>
            <executions>
                <execution>
                    <phase>compile</phase>
                    <goals>
                        <goal>jslint</goal>
                        <goal>compress</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <failOnWarning>true</failOnWarning>
                <nosuffix>true</nosuffix>
                <force>true</force>
                <aggregations>
                    <aggregation>
                        <!-- remove files after aggregation (default: false) -->
                        <removeIncluded>false</removeIncluded>
                        <!-- insert new line after each concatenation (default: false) -->
                        <insertNewLine>false</insertNewLine>
                        <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output>
                        <!-- files to include, path relative to output's directory or absolute path -->
                        <!--inputDir>base directory for non absolute includes, default to parent dir of output</inputDir-->
                        <includes>
                            <include>**/autocomplete.js</include>
                            <include>**/calendar.js</include>
                            <include>**/dialogs.js</include>
                            <include>**/download.js</include>
                            <include>**/folding.js</include>
                            <include>**/jquery-1.4.2.min.js</include>
                            <include>**/jquery.bgiframe.min.js</include>
                            <include>**/jquery.loadmask.js</include>
                            <include>**/jquery.printelement-1.1.js</include>
                            <include>**/jquery.tablesorter.mod.js</include>
                            <include>**/jquery.tablesorter.pager.js</include>
                            <include>**/jquery.dialogs.plugin.js</include>
                            <include>**/jquery.ui.autocomplete.js</include>
                            <include>**/jquery.validate.js</include>
                            <include>**/jquery-ui-1.8.custom.min.js</include>
                            <include>**/languageDropdown.js</include>
                            <include>**/messages.js</include>
                            <include>**/print.js</include>
                            <include>**/tables.js</include>
                            <include>**/tabs.js</include>
                            <include>**/uwTooltip.js</include>
                        </includes>
                        <!-- files to exclude, path relative to output's directory -->
                    </aggregation>
                </aggregations>
            </configuration>
            <dependencies>
                <dependency>
                    <groupId>rhino</groupId>
                    <artifactId>js</artifactId>
                    <scope>compile</scope>
                    <version>1.6R5</version>
                </dependency>
                <dependency>
                    <groupId>org.apache.maven</groupId>
                    <artifactId>maven-plugin-api</artifactId>
                    <version>2.0.7</version>
                    <scope>provided</scope>
                </dependency>
                <dependency>
                    <groupId>org.apache.maven</groupId>
                    <artifactId>maven-project</artifactId>
                    <version>2.0.7</version>
                    <scope>provided</scope>
                </dependency>
                <dependency>
                    <groupId>net.sf.retrotranslator</groupId>
                    <artifactId>retrotranslator-runtime</artifactId>
                    <version>1.2.9</version>
                    <scope>runtime</scope>
                </dependency>
            </dependencies>
        </plugin>

    And here is the log at compress time:

        These will use the artifact files already in the core ClassRealm instead, to allow them to be included in PluginDescriptor.getArtifacts().
        [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:jslint'
        [DEBUG] (f) failOnWarning = true
        [DEBUG] (f) jswarn = true
        [DEBUG] (f) outputDirectory = C:\test\target\classes
        [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
        [DEBUG] (f) sourceDirectory = C:\test\src\..\js
        [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
        [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
        [DEBUG] -- end configuration --
        [INFO] [yuicompressor:jslint {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:compress' --
        [DEBUG] (f) removeIncluded = false
        [DEBUG] (f) insertNewLine = false
        [DEBUG] (f) output = C:\test\WebContent\js\compressedAll.js
        [DEBUG] (f) includes = [**/autocomplete.js, **/calendar.js, **/dialogs.js, **/download.js, **/folding.js, **/jquery-1.4.2.min.js, **/jquery.bgiframe.min.js, **/jquery.loadmask.js, **/jquery.printelement-1.1.js, **/jquery.tablesorter.mod.js, **/jquery.tablesorter.pager.js, **/jquery.dialogs.plugin.js, **/jquery.ui.autocomplete.js, **/jquery.validate.js, **/jquery-ui-1.8.custom.min.js, **/languageDropdown.js, **/messages.js, **/print.js, **/tables.js, **/tabs.js, **/uwTooltip.js]
        [DEBUG] (f) aggregations = [net.sf.alchim.mojo.yuicompressor.Aggregation@65646564]
        [DEBUG] (f) disableOptimizations = false
        [DEBUG] (f) encoding = Cp1252
        [DEBUG] (f) failOnWarning = true
        [DEBUG] (f) force = true
        [DEBUG] (f) gzip = false
        [DEBUG] (f) jswarn = true
        [DEBUG] (f) linebreakpos = 0
        [DEBUG] (f) nomunge = false
        [DEBUG] (f) nosuffix = true
        [DEBUG] (f) outputDirectory = C:\test\target\classes
        [DEBUG] (f) preserveAllSemiColons = false
        [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
        [DEBUG] (f) sourceDirectory = C:\test\src\..\js
        [DEBUG] (f) statistics = true
        [DEBUG] (f) suffix = -min
        [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
        [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
        [DEBUG] -- end configuration --
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] generate aggregation : C:\test\WebContent\js\compressedAll.js
        [INFO] compressedAll.js (407505b)
        [INFO] nb warnings: 0, nb errors: 0
        [DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources' --
        [DEBUG] (f) filters = []
        [DEBUG] (f) outputDirectory = C:\test\target\test-classes
        [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\test , PatternSet [includes: {}, excludes: {**/*.class, **/*.java}]}}]
        [DEBUG] -- end configuration --

    The problem is that the files are getting aggregated into one file, but without compression. The link above uses version 1.1 and the plugin version I am using is 0.7.1; is this going to make any difference? Can someone tell me what is wrong here? PS: I have obfuscated some text in the log to follow the compliance rules at my firm, so you may find it mismatching in places.

    Read the article

  • WebLogic job scheduling

    - by XpiritO
    Hello, overflowers :) I'm trying to implement a WebLogic job scheduling example, to test my cluster's fail-over capabilities on scheduled tasks (to ensure that these tasks are executed in a fail-over scenario). With this in mind, I've been following this example and trying to configure everything accordingly. Here are the steps I've done so far:

    - Configured a cluster with 1 admin server (AdminServer) and 2 managed instances (Noddy and Snoopy);
    - Set up the database tables (using Oracle XE): ACTIVE and WEBLOGIC_TIMERS;
    - Set up a data source to access the DB and associated it to the scheduling tasks under "Settings for cluster" > "Scheduling";
    - Implemented a job (TimerListener) and a servlet to initialize the job scheduling, as follows:

        package timedexecution;

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.io.Serializable;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import commonj.timers.Timer;
        import commonj.timers.TimerListener;
        import commonj.timers.TimerManager;

        public class TimerServlet extends HttpServlet {

            private static final long serialVersionUID = 1L;

            protected static void logMessage(String message, PrintWriter out) {
                out.write("<p>" + message + "</p>");
                System.out.println(message);
            }

            @Override
            public void service(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                PrintWriter out = response.getWriter();
                out.println("<html>");
                out.println("<head><title>TimerServlet</title></head>");
                try {
                    logMessage("service() entering try block to intialize the timer from JNDI", out);
                    InitialContext ic = new InitialContext();
                    TimerManager jobScheduler = (TimerManager) ic.lookup("weblogic.JobScheduler");
                    logMessage("jobScheduler reference " + jobScheduler, out);
                    //execute this job every 30 seconds
                    jobScheduler.schedule(new ExampleTimerListener(), 0, 30 * 1000);
                    logMessage("Timer scheduled!", out);
                    logMessage("service() started the timer", out);
                    logMessage("Started the timer - status:", out);
                } catch (NamingException ne) {
                    String msg = ne.getMessage();
                    logMessage("Timer schedule failed!", out);
                    logMessage(msg, out);
                } catch (Throwable t) {
                    logMessage("service() error initializing timer manager with JNDI name weblogic.JobScheduler " + t, out);
                }
                out.println("</body></html>");
                out.close();
            }

            private static class ExampleTimerListener implements Serializable, TimerListener {
                private static final long serialVersionUID = 8313912206357147939L;

                public void timerExpired(Timer timer) {
                    SimpleDateFormat sdf = new SimpleDateFormat();
                    System.out.println("timerExpired() called at " + sdf.format(new Date()));
                }
            }
        }

    Then I executed the servlet to start the scheduling on the first managed instance (Noddy server), which returned as expected (servlet execution output):

        service() entering try block to intialize the timer from JNDI
        jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
        Timer scheduled!
        service() started the timer
        Started the timer - status:

    Which resulted in the creation of 2 rows in my DB tables.

    WEBLOGIC_TIMERS table state after servlet execution:

        "EDIT"; "TIMER_ID"; "LISTENER"; "START_TIME"; "INTERVAL"; "TIMER_MANAGER_NAME"; "DOMAIN_NAME"; "CLUSTER_NAME";
        ""; "Noddy_1268653040156"; "[datatype]"; "1268653040156"; "30000"; "weblogic.JobScheduler"; "myCluster"; "Cluster"

    ACTIVE table state after servlet execution:

        "EDIT"; "SERVER"; "INSTANCE"; "DOMAINNAME"; "CLUSTERNAME"; "TIMEOUT";
        ""; "service.SINGLETON_MASTER"; "6382071947583985002/Noddy"; "QRENcluster"; "Cluster"; "10.03.15"

    However, the job is not executed as scheduled. It should print a message with a timestamp on the server's log output (the Noddy.out file), saying that the timer has expired. It doesn't. My log files state as follows:

    Admin server log (myCluster.log file):

        ####<15/Mar/2010 10H45m GMT> <Warning> <Cluster> <test-ad> <Noddy> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268649925727> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

    Noddy server log (Noddy.out file):

        service() entering try block to intialize the timer from JNDI
        jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
        Timer scheduled!
        service() started the timer
        Started the timer - status:
        <15/Mar/2010 10H45m GMT> <Warning> <Cluster> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

    (Noddy.log file):

        ####<15/Mar/2010 11H24m GMT> <Info> <Common> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268652270128> <BEA-000628> <Created "1" resources for pool "TxDataSourceOracle", out of which "1" are available and "0" are unavailable.>
        ####<15/Mar/2010 11H37m GMT> <Info> <Cluster> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1268653040226> <BEA-000182> <Job Scheduler created a job with ID Noddy_1268653040156 for TimerListener with description timedexecution.TimerServlet$ExampleTimerListener@2ce79a>
        ####<15/Mar/2010 11H39m GMT> <Info> <JDBC> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268653166307> <BEA-001128> <Connection for pool "TxDataSourceOracle" closed.>

    Can anyone help me discover what's wrong with my configuration? Thanks in advance for your help!
    Read the article

  • group dynamic data from a List

    - by prince23
    Here is the static sample data I am working from:

        public class SampleProjectData
        {
            public static ObservableCollection<Product> GetSampleData()
            {
                DateTime dtS = DateTime.Now;
                ObservableCollection<Product> teams = new ObservableCollection<Product>();

                teams.Add(new Product() { PDName = "Product1", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
                Project emp = new Project() { PName = "Project1", OverallStartTime = dtS + TimeSpan.FromDays(1), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" });
                teams[0].Projects.Add(emp);

                emp = new Project() { PName = "Project2", OverallStartTime = dtS + TimeSpan.FromDays(1.5), OverallEndTime = dtS + TimeSpan.FromDays(5.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Victor's Task" });
                teams[0].Projects.Add(emp);

                emp = new Project() { PName = "Project3", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Jason's Task 1" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(7), EndTime = dtS + TimeSpan.FromDays(9), TaskName = "Jason's Task 2" });
                teams[0].Projects.Add(emp);

                teams.Add(new Product() { PDName = "Product2", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
                emp = new Project() { PName = "Project4", OverallStartTime = dtS + TimeSpan.FromDays(0.5), OverallEndTime = dtS + TimeSpan.FromDays(3.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Vicky's Task" });
                teams[1].Projects.Add(emp);

                emp = new Project() { PName = "Project5", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2.2), EndTime = dtS + TimeSpan.FromDays(3.8), TaskName = "Oleg's Task 1" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(6), TaskName = "Oleg's Task 2" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(8), EndTime = dtS + TimeSpan.FromDays(9.6), TaskName = "Oleg's Task 3" });
                teams[1].Projects.Add(emp);

                emp = new Project() { PName = "Project6", OverallStartTime = dtS + TimeSpan.FromDays(2.5), OverallEndTime = dtS + TimeSpan.FromDays(4.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(0.8), EndTime = dtS + TimeSpan.FromDays(2), TaskName = "Kim's Task" });
                teams[1].Projects.Add(emp);

                teams.Add(new Product() { PDName = "Product3", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
                emp = new Project() { PName = "Project7", OverallStartTime = dtS + TimeSpan.FromDays(5), OverallEndTime = dtS + TimeSpan.FromDays(7.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Balaji's Task 1" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(8), TaskName = "Balaji's Task 2" });
                teams[2].Projects.Add(emp);

                emp = new Project() { PName = "Project8", OverallStartTime = dtS + TimeSpan.FromDays(3), OverallEndTime = dtS + TimeSpan.FromDays(6.3) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.75), EndTime = dtS + TimeSpan.FromDays(2.25), TaskName = "Li's Task" });
                teams[2].Projects.Add(emp);

                emp = new Project() { PName = "Project9", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2), EndTime = dtS + TimeSpan.FromDays(3), TaskName = "Stacy's Task" });
                teams[2].Projects.Add(emp);

                return teams;
            }
        }

    Above is sample data where I am grouping static data. I need to do the same grouping for data coming from the DB, which I store in lists; all three sets of data come from different services. I have three tables: Product, Project and Task, all coming from web services, and I have created three lists to store the data:

        List<Project> objpro = new List<Project>();
        List<Product> objproduct = new List<Product>();
        List<Task> objTask = new List<Task>();

    Now I need to do the mapping between these tables. As you can see above, under a Product object I have added Project objects, and under a Project object I have added Task objects. From the data stored in the lists I need to do the same mapping between the classes and group the data. These are the classes:

        public class Product : INotifyPropertyChanged
        {
            public Product()
            {
                this.Projects = new ObservableCollection<Project>();
            }

            public string PDName { get; set; }
            public ObservableCollection<Project> Projects { get; set; }

            private DateTime _st;
            public DateTime OverallStartTime
            {
                get { return _st; }
                set
                {
                    if (this._st != value)
                    {
                        TimeSpan dur = this._et - this._st;
                        this._st = value;
                        this.OnPropertyChanged("OverallStartTime");
                        this.OverallEndTime = value + dur;
                    }
                }
            }

            private DateTime _et;
            public DateTime OverallEndTime
            {
                get { return _et; }
                set
                {
                    if (this._et != value)
                    {
                        this._et = value;
                        this.OnPropertyChanged("OverallEndTime");
                    }
                }
            }

            #region INotifyPropertyChanged Members

            protected void OnPropertyChanged(string name)
            {
                if (this.PropertyChanged != null)
                    this.PropertyChanged(this, new PropertyChangedEventArgs(name));
            }

            public event PropertyChangedEventHandler PropertyChanged;

            #endregion
        }

        public class Project : INotifyPropertyChanged
        {
            public Project()
            {
                this.Tasks = new ObservableCollection<Task>();
            }

            public string PName { get; set; }
            public ObservableCollection<Task> Tasks { get; set; }

            private DateTime _st;
            public DateTime OverallStartTime
            {
                get { return _st; }
                set
                {
                    if (this._st != value)
                    {
                        TimeSpan dur = this._et - this._st;
                        this._st = value;
                        this.OnPropertyChanged("OverallStartTime");
                        this.OverallEndTime = value + dur;
                    }
                }
            }

            private DateTime _et;
            public DateTime OverallEndTime
            {
                get { return _et; }
                set
                {
                    if (this._et != value)
                    {
                        this._et = value;
                        this.OnPropertyChanged("OverallEndTime");
                    }
                }
            }

            #region INotifyPropertyChanged Members

            protected void OnPropertyChanged(string name)
            {
                if (this.PropertyChanged != null)
                    this.PropertyChanged(this, new PropertyChangedEventArgs(name));
            }

            public event PropertyChangedEventHandler PropertyChanged;

            #endregion
        }

        public class Task : INotifyPropertyChanged
        {
            public string TaskName { get; set; }

            private DateTime _st;
            public DateTime StartTime
            {
                get { return _st; }
                set
                {
                    if (this._st != value)
                    {
                        TimeSpan dur = this._et - this._st;
                        this._st = value;
                        this.OnPropertyChanged("StartTime");
                        this.EndTime = value + dur;
                    }
                }
            }

            private DateTime _et;
            public DateTime EndTime
            {
                get { return _et; }
                set
                {
                    if (this._et != value)
                    {
                        this._et = value;
                        this.OnPropertyChanged("EndTime");
                    }
                }
            }

            #region INotifyPropertyChanged Members

            protected void OnPropertyChanged(string name)
            {
                if (this.PropertyChanged != null)
                    this.PropertyChanged(this, new PropertyChangedEventArgs(name));
            }

            public event PropertyChangedEventHandler PropertyChanged;

            #endregion
        }

    Read the article
