Search Results

Search found 1319 results on 53 pages for 'epm bi'.

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved the site to a VPS with 4 GB of RAM. But for some reason the website has become very slow. This is the vmstat output:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
        r b swpd free buff cache si so bi bo in cs us sy id wa st
        1 0 0 3050500 0 0 0 0 0 1 0 0 0 0 100 0 0

    Here's the Apache Benchmark output for a STATIC html page, run on the server itself:

        Benchmarking www.ask-oracle.com (be patient)...
        apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed

    Update - server config:

        CentOS 5.6
        4-core CPU, 4 GB RAM
        LAMP stack with APC
        WordPress, only one website

    Pages now take almost twice as long to load; the same website was much faster on shared hosting. I know I need to tweak some settings, but I have no clue where to start. I have already tried to optimize Apache, MySQL, etc.

    Update 2: CPU usage is low; see the uptime output:

        11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09

    Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds the extra connections in a waiting state. How do I check this?

    Update 4: Following is the output of netperf:

        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.00    9615.40
        [root@ip-118-139-177-244 j3ngn5ri6r01t3]#

    Here are the Apache MPM settings from httpd.conf; do they look okay?

        <IfModule worker.c>
            StartServers          5
            MaxClients          100
            MinSpareThreads      50
            MaxSpareThreads     250
            ThreadsPerChild     125
            MaxRequestsPerChild 10000
            ServerLimit         100
        </IfModule>
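
    A hedged way to test the "limited connections / waiting state" theory from Update 3 (a diagnostic sketch; the server-status URL assumes mod_status is enabled, which may not be the case here):

        # Count TCP connections to port 80 by state; a pile-up in SYN_RECV or
        # ESTABLISHED against a small worker pool suggests a connection backlog.
        netstat -ant | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c | sort -rn

        # Listen-queue overflows show up in the protocol counters.
        netstat -s | grep -iE 'listen|overflow'

        # If mod_status is enabled (assumption), compare busy vs. idle workers
        # against the MaxClients/ServerLimit of 100 shown above.
        curl -s http://localhost/server-status?auto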

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMWare-based setup consisting of two VMWare servers (2 CPUs, 256 GB memory) and a DAS (DELL MD3220 with 24x900GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI); the other half will run file services and IIS. To enhance the storage capacity, we'll be adding an MD1220 enclosure with another 24x900GB disks to the MD3220. Both DAS will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper layout of the disks. The intention is to set up one of the DAS with RAID 10 only and the other with RAID 5; that will let us place each application on whichever DAS best suits its performance needs. The question is how best to partition the two DASs to get the best possible IOPS/MBps; each DAS will have 2 hot spares. For the RAID 5 setup: generally speaking, is it better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to that one group, or to have 2 disk groups of 11 disks each, one per controller? Same question for the RAID 10 setup. The plan there is 2 disks for logs (RAID 1), 2 hot spares, and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three to the other. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group. I assume there is no right or wrong answer and that it all depends on the specific application behaviour, so I am looking for general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
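
    For sizing discussions like this, a back-of-envelope IOPS estimate often helps. The sketch below assumes roughly 140 IOPS per 10k-rpm SAS spindle and the standard write penalties (2 for RAID 10, 4 for RAID 5); the numbers are illustrative, not vendor figures:

        # Functional IOPS for a 20-disk RAID 10 group at a 70/30 read/write mix.
        disks=20; per_disk=140; read_pct=70; penalty=2   # penalty=4 for RAID 5
        raw=$((disks * per_disk))
        echo "functional IOPS ~ $(( raw * 100 / (read_pct + (100 - read_pct) * penalty) ))"

    Under these assumptions the 20-disk RAID 10 group yields roughly 2150 functional IOPS and a 22-disk RAID 5 group roughly 1600, which frames the workload above: the 1000-IOPS average fits comfortably on either, while the 7000-IOPS peaks would have to be absorbed by controller cache, if at all.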

  • MySQL is killing the server's IO

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the gigenet cloud); the database is ~10 GB (~9 million posts, ~60 queries per second). Lately, MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs:

        Debian Lenny 64bit
        ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk
        Kernel: 2.6.32-4-amd64
        mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software:

        vBulletin 3.8.4
        memcached 1.2.2
        PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
        lighttpd/1.4.28 (ssl) - a light and fast webserver

    PHP and vBulletin are configured to use memcached.

    MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Other:

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
        r b swpd free buff cache si so bi bo in cs us sy id wa
        9 0 73140 36336 8968 1859160 0 0 42 15 3 2 6 1 89 5

        > /etc/init.d/mysql status
        Threads: 49 Questions: 252139 Slow queries: 164 Opens: 53573 Flush tables: 1 Open tables: 337 Queries per second avg: 61.302

    Edit: Additional info.
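
    Before reaching for replication, it may be worth confirming what the disk grinding actually is. A diagnostic sketch (the schema name 'forum' is a placeholder; the status counters are standard MySQL ones):

        # vBulletin 3.x tables default to MyISAM, so big tables scanned without
        # index help hit the disk directly. Which tables are biggest, and on
        # which engine?
        mysql -e "SELECT engine, table_name,
                         (data_length+index_length)/1024/1024 AS mb
                  FROM information_schema.tables
                  WHERE table_schema='forum' ORDER BY mb DESC LIMIT 10"

        # Counters that typically explain MyISAM disk grinding: key reads that
        # miss the (only 128M) key buffer, and temp tables spilling to disk.
        mysqladmin extended-status | grep -E 'Key_reads |Created_tmp_disk_tables|Handler_read_rnd_next'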

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files among approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. So I am contemplating a home-grown replacement for DropBox. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3; the docs don't say, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me). Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; as we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
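
    For requirements 1 and 2, a minimal sketch of the push/pull legs using s3cmd and symmetric gpg (the bucket name, key file, and paths are placeholders; HTTPS comes from use_https = True in ~/.s3cfg):

        # Push: encrypt with a shared passphrase so every Mac can decrypt
        # without per-user key management, then upload over HTTPS.
        for f in ~/Shared/*; do
            gpg --batch --yes --symmetric --passphrase-file ~/.teamkey \
                -o "/tmp/$(basename "$f").gpg" "$f"
            s3cmd put "/tmp/$(basename "$f").gpg" s3://mybucket/share/
        done

        # Pull: fetch and decrypt.
        s3cmd get s3://mybucket/share/somefile.gpg /tmp/ && \
            gpg --batch --passphrase-file ~/.teamkey -o ~/Shared/somefile -d /tmp/somefile.gpg

    Note this only covers transport and encryption; requirement 3, true bi-directional sync with conflict handling, is exactly the part no simple script solves.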

  • Virtualizing an Inline network appliance with VirtualBox (or VMWare)

    - by Tzury Bar Yochay
    My device, a Linux-based inline IP appliance, is transparent to the network: no IP address is assigned to any of its interfaces. For the sake of the conversation, let's use an ADSL connection as an example. While the device is inspecting the bi-directional traffic, the network behaves exactly as if the device were not attached to the wire (see the physical setup in the attached diagram). I wonder if I can enclose that "device" within a Windows machine and run it virtually, so that it still sits inline between the ADSL router and the Windows networking interface by using virtual NICs (or whatever they are called in Windows) and inspects the traffic just as it would on a separate physical device; the drawing under "Virtual Setup" in the attached diagram shows what I am trying to achieve. Reading a bit of the VirtualBox docs, it seems binding the right side is relatively simple: perhaps I should set one network adapter to Bridged Networking, and VirtualBox will connect it to the physical NIC on the host machine so that network packets are exchanged directly, circumventing the host operating system's network stack (WinXP in my case). However, I have no idea how to achieve the left side of my diagram, which requires adding virtual NICs to Windows and configuring them correctly to make that pipeline possible. I would appreciate any help. By the way, if this is not possible with VirtualBox but is with another virtualization solution (e.g. VMWare), I would accept that as well.
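
    A sketch of how the two sides might be wired up with VBoxManage (the VM name "inliner" and the adapter names are placeholders; the options themselves are standard VirtualBox ones):

        # Right side: bridge guest NIC 1 to the physical adapter facing the
        # ADSL router, bypassing the host's network stack.
        VBoxManage modifyvm "inliner" --nic1 bridged --bridgeadapter1 "Local Area Connection"

        # Left side: put guest NIC 2 on a host-only network; the VirtualBox
        # Host-Only adapter then appears in Windows as the virtual NIC.
        VBoxManage hostonlyif create
        VBoxManage modifyvm "inliner" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

    Windows would then need its default route pointed at the host-only adapter so that all traffic actually traverses the guest.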

  • How can I redirect/forward all the UDP/TCP traffic on one interface to another interface in OpenWrt

    - by Sina Sou
    I am new to networking. I have a measurement device (D) that periodically sends all its readings over a few UDP multicast sockets (with different multicast IP addresses and different port numbers). The device simultaneously listens on a TCP socket, port 7234, through which its configuration can be modified. Since the device has just an Ethernet interface for communication and I want to make it work wirelessly, I decided to use a very small OpenWrt-based wireless router that attaches to the device (D) and redirects/forwards all the network traffic (both UDP and TCP) to the router's wireless interface. To simplify the problem, assume that the device (D) establishes the following sockets (at the same time):

        UM_SOCK1: UDP mcast socket on 239.1.2.3, port 50620
        UM_SOCK2: UDP mcast socket on 239.1.2.4, port 50640
        TC_SOCK3: TCP, DHCP/static IP address 192.168.1.200, port 7234

    (D) is connected to the OpenWrt router (R) via the Ethernet interface eth0, and the router has its own wireless interface, wlan0. I want all traffic to pass between the two interfaces in both directions:

        eth0 <----> wlan0

    What would be the minimum iptables (or other) commands I need to make this possible? I am also wondering whether the redirection can be made independent of IP addresses (IP-based rules are undesirable if the device is connected via DHCP); I would rather the redirection be interface-based (eth0) or MAC-address-based (the best solution, since my device has a unique MAC address). Thanks
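
    Because the device is transparent and uses multicast, bridging at layer 2 is likely simpler than any iptables rule set. A minimal sketch, assuming the wired port is eth0 and that the wireless driver allows bridging a client-mode interface (this usually requires WDS/4-address mode):

        # Bridge the device-facing port and the wireless interface so UDP
        # multicast and the TCP config port (7234) cross without NAT rules.
        iw dev wlan0 set 4addr on   # client-mode bridging needs this on most drivers
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 wlan0
        ifconfig br0 up

    This makes the forwarding interface-based rather than IP-based, which matches the DHCP concern at the end of the question.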

  • Licensing SQL Server 2012 Reporting Services w/ SharePoint 2010

    - by Evan M.
    Here's my situation: I have 1 VM running SharePoint 2010 SP1. A different physical server runs SQL Server 2008 R2 and hosts all the configuration and content databases for SharePoint. Now we want to start providing BI capabilities to our users with SharePoint and SQL Server, and with its new features, 2012 is the obvious way to go. To support this, I'm looking to build a new VM with SQL Server 2012, Analysis Services, and SSIS; this will be the platform that pulls data from our Oracle databases, puts it in a warehouse hosted by the SQL 2012 instance, and builds it into cubes. What's getting me about the platform is licensing for Reporting Services and PowerPivot. My plan was to install SSRS and PowerPivot on the current SharePoint server, but my understanding of the licensing is that instead of only the new SQL server being licensed, I'd have to license both the new server and the SharePoint server. Conversely, I could install SharePoint onto the SQL server and only have to get a second SharePoint license, but then I'd give up a separate application server and combine my data and application servers. Is my licensing understanding correct, or can I have SSRS and PowerPivot installed separately without incurring additional licensing costs?

  • Massive number of context switches on ksoftirqd

    - by Pace
    We have two servers that are grinding to a halt. One is a VM and the other is bare metal. Neither of them runs similar code, but they are on the same network. It appears that an incredible number of context switches are arising from ksoftirqd (which is taking up a lot of CPU).

    vmstat output:

        procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
        r b swpd free buff cache si so bi bo in cs us sy id wa st
        1 0 0 605092 182496 2637556 0 0 0 0 4177 519187 8 19 73 0 0
        2 0 0 605092 182496 2637556 0 0 0 0 4792 520980 8 19 74 0 0
        3 0 0 605092 182496 2637552 0 0 0 0 2137 659640 18 26 56 0 0
        ...

    pidstat output:

        TCK4-BM-06A:~ # pidstat -w -I 5
        Linux 2.6.32.12-0.7-default (TCK4-BM-06A) 07/02/2012 _x86_64_

        03:03:01 PM       PID   cswch/s nvcswch/s  Command
        03:03:06 PM         1      0.20      0.00  init
        03:03:06 PM         4 386666.27      0.00  ksoftirqd/0
        03:03:06 PM         6      0.60      0.00  ksoftirqd/1
        03:03:06 PM         8 378213.17      0.00  ksoftirqd/2
        03:03:06 PM        10      0.20      0.00  ksoftirqd/3
        03:03:06 PM        12      0.20      0.00  ksoftirqd/4
        03:03:06 PM        26 377115.37      0.00  ksoftirqd/11
        03:03:06 PM        27      1.80      0.00  events/0
        03:03:06 PM        28      1.00      0.00  events/1
        03:03:06 PM        29      1.00      0.00  events/2
        03:03:06 PM        30      1.00      0.00  events/3
        03:03:06 PM        31      0.80      0.00  events/4
        03:03:06 PM        32      0.80      0.00  events/5
        ...

    My initial thought is that, since both machines are on the same network, something is flooding it. Is that consistent with this data?
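
    A hedged next step for pinning down what ksoftirqd is servicing (standard /proc interfaces on a 2.6.32 kernel; sar requires the sysstat package):

        # Which softirq class is hot? Rapidly climbing NET_RX counters on
        # CPUs 0, 2 and 11 would support the network-flood theory.
        watch -d -n1 cat /proc/softirqs

        # Which device is raising the interrupts on those CPUs?
        watch -d -n1 cat /proc/interrupts

        # Per-interface packet rates, sampled once a second.
        sar -n DEV 1 3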

  • What are ways to prevent files with the Right-to-Left Override Unicode character in their name (a malware spoofing method) from being written or read?

    - by galacticninja
    What are ways to avoid or prevent files with the RLO (Right-to-Left Override) Unicode character in their name (a malware method for spoofing filenames) from being written or read on a Windows PC?

    More info on the RLO Unicode character:
    http://www.fileformat.info/info/unicode/char/202e/index.htm
    http://en.wikipedia.org/wiki/Bi-directional_text

    Info on the RLO Unicode character as used by malware:
    http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html
    Mirror link: http://webcache.googleusercontent.com/search?q=cache:KasmfOvbVJ8J:www.ipa.jp/security/english/virus/press/201110/E_PR201110.html+&cd=1&hl=en&ct=clnk

    You can try this RLO character test page: http://www.fileformat.info/info/unicode/char/202e/browsertest.htm. The RLO character is already pasted into the 'Input Test' field on that page. Try typing there and notice that the characters you type come out in reverse order (right-to-left instead of left-to-right). In filenames, the RLO character can be positioned so that a file appears to have a different name or extension than it actually has. (It stays hidden even if 'Hide extensions for known file types' is unchecked.) The only guidance I can find on preventing files with the RLO character from being run is from the Information Technology Promotion Agency, Japan (same link and mirror as above): they advise using the Local Security Policy settings manager to block files with the RLO character in their name from being run. Can anyone recommend other good solutions to prevent files with the RLO character in their names from being written or read on the computer, or a way to alert the user when a file with the RLO character is detected? My OS is Windows 7, but I'm looking for solutions for Windows XP, Vista and 7, or one that works for all of them, to help people using those OSes too.
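
    For detection (as opposed to blocking), a sketch that scans for U+202E in filenames from any POSIX shell that can see the files, e.g. a Samba file server or Cygwin on the Windows box (the /shared path is a placeholder):

        # U+202E encodes to the UTF-8 bytes E2 80 AE.
        RLO=$(printf '\xe2\x80\xae')
        find /shared -name "*${RLO}*" -print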

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs may edit files, and the other Macs should then be synced automatically. Basically my own local version of Dropbox, without using cloud storage. I have looked into solutions using rsync; as I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process manually. I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS; it should then just wait for the connection to come back, without bugging me every few minutes. I have looked into Synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks. PS: Just for clarification, I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to files.
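
    One candidate worth naming: unison does true bi-directional sync with stored state and can be left running. A minimal sketch, assuming the NAS is reachable over ssh and the paths are adjusted (a launchd job could restart it if it exits while the NAS is unreachable):

        # Two-way sync every 60 seconds; only changes are transferred.
        unison /Users/me/Shared ssh://nas//volume1/Shared \
               -batch -auto -repeat 60 -times -logfile ~/unison.log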

  • How to find out which process is hogging the Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to pings from another server, and logging in over ssh took about 10 seconds. I was able to resolve the problem by guesswork: I killed the process I suspected was the culprit, which fixed it. However, I would like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way to resolve such slowness issues. These were the conditions while the server was slow:

        # vmstat 3 3
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
        r b swpd free buff cache si so bi bo in cs us sy id wa st
        1 1 176 6730868 285052 4899676 0 0 3 4 0 0 1 1 97 1 0
        0 0 176 6751576 285064 4899704 0 0 0 115 15307 37171 1 1 96 3 0
        0 0 176 6751948 285068 4899700 0 0 0 23 14813 39559 1 1 98 1 0

        # top
        top - 16:38:18 up 150 days, 21:26, 64 users, load average: 1.68, 1.46, 1.44
        Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
        Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem: 16620824k total, 9867124k used, 6753700k free, 287424k buffers
        Swap: 8193140k total, 176k used, 8192964k free, 4898996k cached

        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        26258 khk 34 19 130m 47m 7088 S 11.2 0.3 385:32.42 edm
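
    A hedged triage sequence for exactly this situation, given that the CPU was mostly idle and wa was low (pidstat comes with sysstat; iotop is a separate package):

        # Processes in uninterruptible sleep (state D) are the usual suspects
        # when a box feels slow while the CPU sits idle.
        ps -eo state,pid,ppid,wchan:20,cmd | awk '$1=="D"'

        # Per-process disk I/O, sampled for five seconds.
        pidstat -d 1 5

        # Or, only the processes actually doing I/O right now.
        iotop -bo -n 3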

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days, and for many reasons. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion: it can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.

    Here's the National Institute of Standards and Technology definition:

        Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.

    Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet; Oracle CRM On Demand is an example, and today there are hundreds of SaaS providers covering a wide variety of applications, including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform; we call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service, often built on a grid computing architecture and including database and middleware; Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications, including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service, typically including the associated software as well: operating systems, virtualization, clustering, etc.

    Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use, which is especially critical for organizations with variable or seasonal usage: companies don't have to invest to support peak computing periods, and costs are more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up-front investments.

    While Cloud Computing is certainly very alluring, with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:

    Security - 74% identified security as an issue, involving data privacy and resource access control.
    Integration - 61% found that it is hard to integrate Cloud apps with in-house applications.
    Operational Costs - 50% are worried that On Demand will actually cost more, given the impact of poor data quality on the rest of the enterprise.
    Compliance - 49% felt that compliance with regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, a provider's ability to directly manage how and where data is deployed, used and destroyed is negatively impacted.

    There are others, but I singled out these four because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of them. Cloud Computing can literally rain raw data across the enterprise.

    According to fellow blogger Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized." David Linthicum, CTO of Blue Mountain Labs, blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems." Left unmanaged, non-standard, inconsistent, ungoverned data of questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and on-premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the Cloud issues listed in the IDC survey above:

    Security - MDM manages all master data attribute privacy and resource access control issues.
    Integration - MDM pre-integrates Cloud apps with each other and with on-premise applications at the data level.
    Operational Costs - MDM significantly reduces operational costs by increasing data quality, thereby improving the efficiency of enterprise business processes.
    Compliance - MDM, with its built-in Data Governance capabilities, ensures that data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes.

    Oracle MDM creates governed, high-quality master data: a unified, cleansed and standardized data view. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high-quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of "supplier exceptions". Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas, from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and on-premise applications to MDM hubs and brings high-quality master data to your enterprise business processes.

    An independent analyst once said, "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point you either have to stop and clear the windshield or risk everything." Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to, or in conjunction with, a move to the Cloud can ensure that the data flowing into the enterprise from the Cloud is clean and governed. This in turn ensures that the expected returns on the investment in Cloud Computing will be realized.

    Oracle MDM has proven its mettle in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to ensure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you to join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager, as they review the business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.
Hope to see you there.

  • MySQL is running VERY slow

    - by user1032531
    I have two servers: a VPS and a laptop. I recently re-built both of them, and MySQL is running about 20 times slower on the laptop. Both servers used to run CentOS 5.8 and, I think, MySQL 5.1, and the laptop used to do great, so I do not think it is the hardware. For the VPS, my provider installed CentOS 6.4, and then I installed MySQL 5.1.69 using yum with the CentOS repo. For the laptop, I installed CentOS 6.4 basic server and then installed MySQL 5.1.69 the same way. my.cnf is identical on both servers, as shown below. For both servers, I've also included the output from SHOW VARIABLES, as well as output from sysbench, file system information, and CPU information. I have tried adding skip-name-resolve, but it didn't help. The matrix below shows where the SHOW VARIABLES output differs between the two servers. Again, MySQL was installed the same way, so I do not know why it differs, but it does, and I think this might be why the laptop is executing MySQL so slowly. Why is the laptop running MySQL slowly, and how do I fix it?

    Differences between SHOW VARIABLES on both servers:

        +---------------------------+-----------------------+-------------------------+
        | Variable                  | Value-VPS             | Value-Laptop            |
        +---------------------------+-----------------------+-------------------------+
        | hostname                  | vps.site1.com         | laptop.site2.com        |
        | max_binlog_cache_size     | 4294963200            | 18446744073709500000    |
        | max_seeks_for_key         | 4294967295            | 18446744073709500000    |
        | max_write_lock_count      | 4294967295            | 18446744073709500000    |
        | myisam_max_sort_file_size | 2146435072            | 9223372036853720000     |
        | myisam_mmap_size          | 4294967295            | 18446744073709500000    |
        | plugin_dir                | /usr/lib/mysql/plugin | /usr/lib64/mysql/plugin |
        | pseudo_thread_id          | 7568                  | 2                       |
        | system_time_zone          | EST                   | PDT                     |
        | thread_stack              | 196608                | 262144                  |
        | timestamp                 | 1372252112            | 1372252046              |
        | version_compile_machine   | i386                  | x86_64                  |
        +---------------------------+-----------------------+-------------------------+

    my.cnf for both servers:

        [root@server1 ~]# cat /etc/my.cnf
        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        innodb_strict_mode=on
        sql_mode=TRADITIONAL
        # sql_mode=STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE
        character-set-server=utf8
        collation-server=utf8_general_ci
        log=/var/log/mysqld_all.log
        [root@server1 ~]#

    VPS SHOW VARIABLES Info: same as the laptop output below except for the differences in the matrix above (removed to stay under the 30000-character limit required by ServerFault).

    Laptop SHOW VARIABLES Info: auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 50 basedir /usr/ big_tables OFF binlog_cache_size 32768 binlog_direct_non_transactional_updates OFF binlog_format STATEMENT bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database latin1 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ collation_connection utf8_general_ci collation_database latin1_swedish_ci collation_server latin1_swedish_ci completion_type 0 concurrent_insert 1 connect_timeout 10 datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300
delayed_queue_size 1000 div_precision_increment 4 engine_condition_pushdown ON error_count 0 event_scheduler OFF expire_logs_days 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file /var/run/mysqld/mysqld.log group_concat_max_len 1024 have_community_features YES have_compress YES have_crypt YES have_csv YES have_dynamic_loading YES have_geometry YES have_innodb YES have_ndbcluster NO have_openssl DISABLED have_partitioning YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink DISABLED hostname server1.site2.com identity 0 ignore_builtin_innodb OFF init_connect init_file init_slave innodb_adaptive_hash_index ON innodb_additional_mem_pool_size 1048576 innodb_autoextend_increment 8 innodb_autoinc_lock_mode 1 innodb_buffer_pool_size 8388608 innodb_checksums ON innodb_commit_concurrency 0 innodb_concurrency_tickets 500 innodb_data_file_path ibdata1:10M:autoextend innodb_data_home_dir innodb_doublewrite ON innodb_fast_shutdown 1 innodb_file_io_threads 4 innodb_file_per_table OFF innodb_flush_log_at_trx_commit 1 innodb_flush_method innodb_force_recovery 0 innodb_lock_wait_timeout 50 innodb_locks_unsafe_for_binlog OFF innodb_log_buffer_size 1048576 innodb_log_file_size 5242880 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_max_dirty_pages_pct 90 innodb_max_purge_lag 0 innodb_mirrored_log_groups 1 innodb_open_files 300 innodb_rollback_on_timeout OFF innodb_stats_method nulls_equal innodb_stats_on_metadata ON innodb_support_xa ON innodb_sync_spin_loops 20 innodb_table_locks ON innodb_thread_concurrency 8 innodb_thread_sleep_delay 10000 innodb_use_legacy_cardinality_algorithm ON insert_id 0 interactive_timeout 28800 join_buffer_size 131072 keep_files_on_create OFF key_buffer_size 8384512 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 language /usr/share/mysql/english/ large_files_support ON large_page_size 0 large_pages OFF last_insert_id 0 lc_time_names en_US license GPL local_infile ON locked_in_memory OFF log OFF log_bin OFF log_bin_trust_function_creators OFF log_bin_trust_routine_creators OFF log_error /var/log/mysqld.log log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_queries OFF log_warnings 1 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 max_allowed_packet 1048576 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_connect_errors 10 max_connections 151 max_delayed_threads 20 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_long_data_size 1048576 max_prepared_stmt_count 16382 max_relay_log_size 0 max_seeks_for_key 18446744073709551615 max_sort_length 1024 max_sp_recursion_depth 0 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 18446744073709551615 min_examined_row_limit 0 multi_range_count 256 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options OFF myisam_repair_threads 1 myisam_sort_buffer_size 8388608 myisam_stats_method nulls_unequal myisam_use_mmap OFF net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 new OFF old OFF old_alter_table OFF old_passwords OFF open_files_limit 1024 optimizer_prune_level 1 optimizer_search_depth 62 
optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on pid_file /var/run/mysqld/mysqld.pid plugin_dir /usr/lib64/mysql/plugin port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 protocol_version 10 pseudo_thread_id 3 query_alloc_block_size 8192 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 0 query_cache_type ON query_cache_wlock_invalidate OFF query_prealloc_size 8192 rand_seed1 rand_seed2 range_alloc_block_size 4096 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_space_limit 0 report_host report_password report_port 3306 report_user rpl_recovery_rank 0 secure_auth OFF secure_file_priv server_id 0 skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 3600 slave_skip_errors OFF slave_transaction_retries 10 slow_launch_time 2 slow_query_log OFF slow_query_log_file /var/run/mysqld/mysqld-slow.log socket /var/lib/mysql/mysql.sock sort_buffer_size 2097144 sql_auto_is_null ON sql_big_selects ON sql_big_tables OFF sql_buffer_result OFF sql_log_bin ON sql_log_off OFF sql_log_update ON sql_low_priority_updates OFF sql_max_join_size 18446744073709551615 sql_mode sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_key storage_engine MyISAM sync_binlog 0 sync_frm ON system_time_zone PDT table_definition_cache 256 table_lock_wait_timeout 50 table_open_cache 64 table_type MyISAM thread_cache_size 0 thread_handling one-thread-per-connection thread_stack 262144 time_format %H:%i:%s time_zone SYSTEM timed_mutexes OFF timestamp 1372254399 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ unique_checks ON updatable_views_with_limit YES version 5.1.69 version_comment Source distribution version_compile_machine x86_64 version_compile_os redhat-linux-gnu wait_timeout 28800 warning_count 0 VPS Sysbench Info [root@vps ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 1449966 write: 0 other: 207138 total: 1657104 transactions: 103569 (1726.01 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 1449966 (24164.08 per sec.) other operations: 207138 (3452.01 per sec.) Test execution summary: total time: 60.0050s total number of events: 103569 total time taken by event execution: 479.1544 per-request statistics: min: 1.98ms avg: 4.63ms max: 330.73ms approx. 95 percentile: 8.26ms Threads fairness: events (avg/stddev): 12946.1250/381.09 execution time (avg/stddev): 59.8943/0.00 [root@vps ~]# Laptop Sysbench Info [root@server1 ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. 
Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 634718 write: 0 other: 90674 total: 725392 transactions: 45337 (755.56 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 634718 (10577.78 per sec.) other operations: 90674 (1511.11 per sec.) Test execution summary: total time: 60.0048s total number of events: 45337 total time taken by event execution: 479.4912 per-request statistics: min: 2.04ms avg: 10.58ms max: 85.56ms approx. 95 percentile: 19.70ms Threads fairness: events (avg/stddev): 5667.1250/42.18 execution time (avg/stddev): 59.9364/0.00 [root@server1 ~]# VPS File Info [root@vps ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/simfs simfs 20971520 16187440 4784080 78% / none tmpfs 6224432 4 6224428 1% /dev none tmpfs 6224432 0 6224432 0% /dev/shm [root@vps ~]# Laptop File Info [root@server1 ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_server1-lv_root ext4 72383800 4243964 64462860 7% / tmpfs tmpfs 956352 0 956352 0% /dev/shm /dev/sdb1 ext4 495844 60948 409296 13% /boot [root@server1 ~]# VPS CPU Info Removed to stay under the 30000 character limit required by ServerFault Laptop CPU Info [root@server1 ~]# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: [root@server1 ~]# EDIT New Info requested by shakalandy [root@localhost ~]# cat /proc/meminfo MemTotal: 2044804 kB MemFree: 761464 kB Buffers: 68868 kB Cached: 369708 kB SwapCached: 0 kB Active: 881080 kB Inactive: 246016 kB Active(anon): 688312 kB Inactive(anon): 4416 kB Active(file): 192768 kB Inactive(file): 241600 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 4095992 kB SwapFree: 4095992 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 688428 kB Mapped: 65156 kB Shmem: 4216 kB Slab: 92428 kB SReclaimable: 31260 kB SUnreclaim: 61168 kB KernelStack: 2392 kB 
PageTables: 28356 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5118392 kB Committed_AS: 1530212 kB VmallocTotal: 34359738367 kB VmallocUsed: 343604 kB VmallocChunk: 34359372920 kB HardwareCorrupted: 0 kB AnonHugePages: 520192 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 8556 kB DirectMap2M: 2078720 kB [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501360 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3036 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14449 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501356 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3048 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14470 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 742172 76376 371064 0 0 6 6 78 202 2 1 97 1 0 0 0 0 742164 76380 371060 0 0 0 16 191 467 2 1 93 5 0 0 0 0 742164 76380 371064 0 0 0 0 148 388 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 418 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 145 380 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 166 429 2 1 97 0 0 1 0 0 742164 76380 371064 0 0 0 0 148 373 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 149 382 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 168 408 2 0 97 0 0 0 0 0 742164 76380 371064 0 0 0 0 165 394 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 354 2 1 98 0 0 0 0 0 742164 76388 371060 0 0 0 16 180 447 2 0 91 6 0 0 0 0 742164 76388 371064 0 0 0 0 143 344 2 1 98 0 0 0 1 0 742784 76416 370044 0 0 28 580 360 678 3 1 74 23 0 1 0 0 744768 76496 367772 0 0 40 1036 437 865 3 1 53 43 0 0 1 0 747248 76596 365412 0 0 48 1224 561 923 3 2 53 43 0 0 1 0 749232 76696 363092 0 0 32 1132 512 883 3 2 52 44 0 0 1 0 751340 76772 361020 0 0 32 1008 472 872 2 1 52 45 0 0 1 0 753448 76840 358540 0 0 36 1088 512 860 2 1 51 46 0 0 1 0 755060 76936 357636 0 0 28 1012 481 922 2 2 52 45 0 0 1 0 755060 77064 357988 0 0 12 896 444 902 2 1 53 45 0 0 1 0 754688 77148 358448 0 0 16 1096 506 1007 1 1 56 42 0 0 2 0 754192 77268 358932 0 0 12 1060 481 957 1 2 53 44 0 0 1 0 753696 77380 359392 0 0 12 1052 512 1025 2 1 55 42 0 0 1 0 751028 77480 359828 0 0 8 984 423 909 2 2 52 45 0 0 1 0 750524 77620 360200 0 0 8 788 367 869 1 2 54 44 0 0 1 0 749904 77700 360664 0 0 8 928 439 924 2 2 55 43 0 0 1 0 749408 77796 361084 0 0 12 976 468 967 1 1 56 43 0 0 1 0 748788 77896 361464 0 0 12 992 453 944 1 2 54 43 0 1 1 0 748416 77992 361996 0 0 12 784 392 868 2 1 52 46 0 0 1 0 747920 
78092 362336 0 0 4 896 382 874 1 1 52 46 0 0 1 0 745252 78172 362780 0 0 12 1040 444 923 1 1 56 42 0 0 1 0 744764 78288 363220 0 0 8 1024 448 934 2 1 55 43 0 0 1 0 744144 78408 363668 0 0 8 1000 461 982 2 1 53 44 0 0 1 0 743648 78488 364148 0 0 8 872 443 888 2 1 54 43 0 0 1 0 743152 78548 364468 0 0 16 1020 511 995 2 1 55 43 0 0 1 0 742656 78632 365024 0 0 12 928 431 913 1 2 53 44 0 0 1 0 742160 78728 365468 0 0 12 996 470 955 2 2 54 44 0 1 1 0 739492 78840 365896 0 0 8 988 447 939 1 2 52 46 0 0 1 0 738872 78996 366352 0 0 12 972 442 928 1 1 55 44 0 1 1 0 738244 79148 366812 0 0 8 948 549 1126 2 2 54 43 0 0 1 0 737624 79312 367188 0 0 12 996 456 953 2 2 54 43 0 0 1 0 736880 79456 367660 0 0 12 960 444 918 1 1 53 46 0 0 1 0 736260 79584 368124 0 0 8 884 414 921 1 1 54 44 0 0 1 0 735648 79716 368488 0 0 12 976 450 955 2 1 56 41 0 0 1 0 733104 79840 368988 0 0 12 932 453 918 1 2 55 43 0 0 1 0 732608 79996 369356 0 0 16 916 444 889 1 2 54 43 0 1 1 0 731476 80128 369800 0 0 16 852 514 978 2 2 54 43 0 0 1 0 731244 80252 370200 0 0 8 904 398 870 2 1 55 43 0 1 1 0 730624 80384 370612 0 0 12 1032 447 977 1 2 57 41 0 0 1 0 730004 80524 371096 0 0 12 984 469 941 2 2 52 45 0 0 1 0 729508 80636 371544 0 0 12 928 438 922 2 1 52 46 0 0 1 0 728888 80756 371948 0 0 16 972 439 943 2 1 55 43 0 0 1 0 726468 80900 372272 0 0 8 960 545 1024 2 1 54 43 0 1 1 0 726344 81024 372272 0 0 8 464 490 1057 1 2 53 44 0 0 1 0 726096 81148 372276 0 0 4 328 441 1063 2 1 53 45 0 1 1 0 726096 81256 372292 0 0 0 296 387 975 1 1 53 45 0 0 1 0 725848 81380 372284 0 0 4 332 425 1034 2 1 54 44 0 1 1 0 725848 81496 372300 0 0 4 308 386 992 2 1 54 43 0 0 1 0 725600 81616 372296 0 0 4 328 404 1060 1 1 54 44 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 725600 81732 372296 0 0 4 328 439 1011 1 1 53 44 0 0 1 0 725476 81848 372308 0 0 0 316 441 1023 2 2 52 46 0 1 1 0 725352 81972 372300 0 0 4 344 451 1021 1 1 55 43 0 2 1 0 725228 82088 372320 0 0 0 328 427 1058 1 1 54 44 0 1 1 0 724980 82220 372300 0 0 4 336 419 999 2 1 54 44 0 1 1 0 724980 82328 372320 0 0 4 320 430 1019 1 1 54 44 0 1 1 0 724732 82436 372328 0 0 0 388 363 942 2 1 54 44 0 1 1 0 724608 82560 372312 0 0 4 308 419 993 1 2 54 44 0 1 0 0 724360 82684 372320 0 0 0 304 421 1028 2 1 55 42 0 1 0 0 724360 82684 372388 0 0 0 0 158 416 2 1 98 0 0 1 1 0 724236 82720 372360 0 0 0 6464 243 855 3 2 84 12 0 1 0 0 724112 82748 372360 0 0 0 5356 266 895 3 1 84 12 0 2 1 0 724112 82764 372380 0 0 0 3052 221 511 2 2 93 4 0 1 0 0 724112 82796 372372 0 0 0 4548 325 1067 2 2 81 16 0 1 0 0 724112 82816 372368 0 0 0 3240 259 829 3 1 90 6 0 1 0 0 724112 82836 372380 0 0 0 3260 309 822 3 2 88 8 0 1 1 0 724112 82876 372364 0 0 0 4680 326 978 3 1 77 19 0 1 0 0 724112 82884 372380 0 0 0 512 207 508 2 1 95 2 0 1 0 0 724112 82884 372388 0 0 0 0 138 361 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 158 397 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 146 395 2 1 98 0 0 2 0 0 724112 82884 372388 0 0 0 0 160 395 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 163 382 1 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 176 422 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 134 351 2 1 98 0 0 0 0 0 724112 82884 372388 0 0 0 0 190 429 2 1 97 0 0 0 0 0 724104 82884 372392 0 0 0 0 139 358 2 1 98 0 0 0 0 0 724848 82884 372392 0 0 0 4 211 432 2 1 97 0 0 1 0 0 724980 82884 372392 0 0 0 0 166 370 2 1 98 0 0 0 0 0 724980 82884 372392 0 0 0 0 164 397 2 1 98 0 0 ^C [root@localhost ~]#
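
    One detail in the output above stands out: both laptop cores report "cpu MHz : 800.000" on a 1.80 GHz T7100, which suggests CPU frequency scaling rather than MySQL itself. A hedged check (the sysfs paths are standard; the governor tooling on CentOS 6 is in cpupowerutils or cpufreq-utils):

        # What governor is each core using?
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

        # Pin both cores to full clock and re-run sysbench to compare.
        for c in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
            echo performance > "$c"
        done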

0 737624 79312 367188 0 0 12 996 456 953 2 2 54 43 0 0 1 0 736880 79456 367660 0 0 12 960 444 918 1 1 53 46 0 0 1 0 736260 79584 368124 0 0 8 884 414 921 1 1 54 44 0 0 1 0 735648 79716 368488 0 0 12 976 450 955 2 1 56 41 0 0 1 0 733104 79840 368988 0 0 12 932 453 918 1 2 55 43 0 0 1 0 732608 79996 369356 0 0 16 916 444 889 1 2 54 43 0 1 1 0 731476 80128 369800 0 0 16 852 514 978 2 2 54 43 0 0 1 0 731244 80252 370200 0 0 8 904 398 870 2 1 55 43 0 1 1 0 730624 80384 370612 0 0 12 1032 447 977 1 2 57 41 0 0 1 0 730004 80524 371096 0 0 12 984 469 941 2 2 52 45 0 0 1 0 729508 80636 371544 0 0 12 928 438 922 2 1 52 46 0 0 1 0 728888 80756 371948 0 0 16 972 439 943 2 1 55 43 0 0 1 0 726468 80900 372272 0 0 8 960 545 1024 2 1 54 43 0 1 1 0 726344 81024 372272 0 0 8 464 490 1057 1 2 53 44 0 0 1 0 726096 81148 372276 0 0 4 328 441 1063 2 1 53 45 0 1 1 0 726096 81256 372292 0 0 0 296 387 975 1 1 53 45 0 0 1 0 725848 81380 372284 0 0 4 332 425 1034 2 1 54 44 0 1 1 0 725848 81496 372300 0 0 4 308 386 992 2 1 54 43 0 0 1 0 725600 81616 372296 0 0 4 328 404 1060 1 1 54 44 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 725600 81732 372296 0 0 4 328 439 1011 1 1 53 44 0 0 1 0 725476 81848 372308 0 0 0 316 441 1023 2 2 52 46 0 1 1 0 725352 81972 372300 0 0 4 344 451 1021 1 1 55 43 0 2 1 0 725228 82088 372320 0 0 0 328 427 1058 1 1 54 44 0 1 1 0 724980 82220 372300 0 0 4 336 419 999 2 1 54 44 0 1 1 0 724980 82328 372320 0 0 4 320 430 1019 1 1 54 44 0 1 1 0 724732 82436 372328 0 0 0 388 363 942 2 1 54 44 0 1 1 0 724608 82560 372312 0 0 4 308 419 993 1 2 54 44 0 1 0 0 724360 82684 372320 0 0 0 304 421 1028 2 1 55 42 0 1 0 0 724360 82684 372388 0 0 0 0 158 416 2 1 98 0 0 1 1 0 724236 82720 372360 0 0 0 6464 243 855 3 2 84 12 0 1 0 0 724112 82748 372360 0 0 0 5356 266 895 3 1 84 12 0 2 1 0 724112 82764 372380 0 0 0 3052 221 511 2 2 93 4 0 1 0 0 724112 82796 372372 0 0 0 4548 325 1067 2 2 81 16 0 1 0 0 724112 82816 372368 0 0 0 3240 259 829 3 1 90 6 0 1 0 0 724112 82836 372380 0 0 0 3260 309 822 3 2 88 8 0 1 1 0 724112 82876 372364 0 0 0 4680 326 978 3 1 77 19 0 1 0 0 724112 82884 372380 0 0 0 512 207 508 2 1 95 2 0 1 0 0 724112 82884 372388 0 0 0 0 138 361 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 158 397 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 146 395 2 1 98 0 0 2 0 0 724112 82884 372388 0 0 0 0 160 395 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 163 382 1 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 176 422 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 134 351 2 1 98 0 0 0 0 0 724112 82884 372388 0 0 0 0 190 429 2 1 97 0 0 0 0 0 724104 82884 372392 0 0 0 0 139 358 2 1 98 0 0 0 0 0 724848 82884 372392 0 0 0 4 211 432 2 1 97 0 0 1 0 0 724980 82884 372392 0 0 0 0 166 370 2 1 98 0 0 0 0 0 724980 82884 372392 0 0 0 0 164 397 2 1 98 0 0 ^C [root@localhost ~]# Database size mysql> SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB", sum( data_free )/ 1024 / 1024 "Free Space in MB" FROM information_schema.TABLES GROUP BY table_schema; +--------------------+----------------------+------------------+ | Data Base Name | Data Base Size in MB | Free Space in MB | +--------------------+----------------------+------------------+ | bidjunction | 4.68750000 | 0.00000000 | | information_schema | 0.00976563 | 0.00000000 | | mysql | 0.63899899 | 0.00105286 | +--------------------+----------------------+------------------+ 3 rows in set (0.01 sec) mysql> Before Query 
    mysql> SHOW SESSION STATUS LIKE '%Tmp%';
    +-------------------------+-------+
    | Variable_name           | Value |
    +-------------------------+-------+
    | Created_tmp_disk_tables | 0     |
    | Created_tmp_files       | 6     |
    | Created_tmp_tables      | 0     |
    +-------------------------+-------+
    3 rows in set (0.00 sec)

    After Query

    mysql> SHOW SESSION STATUS LIKE '%Tmp%';
    +-------------------------+-------+
    | Variable_name           | Value |
    +-------------------------+-------+
    | Created_tmp_disk_tables | 0     |
    | Created_tmp_files       | 6     |
    | Created_tmp_tables      | 2     |
    +-------------------------+-------+
    3 rows in set (0.00 sec)

    mysql>
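    For a server-wide view of the same counters, a rough follow-up check could look like the sketch below. This is only a sketch, assuming a stock MySQL 5.1 server (such as the 5.1.69 shown above), where status values are also exposed through information_schema:

    -- Compare disk-based vs. in-memory implicit temporary tables server-wide.
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';

    -- Rough ratio of on-disk to total implicit temporary tables; a persistently
    -- high value (well above ~25%) would suggest raising tmp_table_size and
    -- max_heap_table_size, or rewriting queries that force on-disk temporaries
    -- (e.g. those selecting BLOB/TEXT columns).
    SELECT (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
            WHERE VARIABLE_NAME = 'Created_tmp_disk_tables')
         / (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
            WHERE VARIABLE_NAME = 'Created_tmp_tables') AS disk_tmp_ratio;

    With Created_tmp_disk_tables holding at 0 before and after the query above, the temporary tables in this session appear to be staying in memory.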

    Read the article

  • Text Expansion Awareness for UX Designers: Points to Consider

    - by ultan o'broin
    Awareness of translated text expansion dynamics is important for enterprise applications UX designers (I am assuming all source text for translation is in English, though apps development can take place in other natural languages too). This consideration goes beyond the standard 'character multiplication' rule and must take into account the avoidance of other layout tricks that a designer might be tempted to try. Follow these guidelines.
    For general text expansion, remember the simple rule: the shorter the word is in English, the longer it will need to be in translation. See the examples provided by Richard Ishida of the W3C and you'll get the idea. So, forget the 30 percent or one inch minimum expansion rule of the old Forms days. Unfortunately, remembering convoluted text expansion rules, based on a percentage of the US English character count, can be tough going. Try these:
    Up to 10 characters: 100 to 200%
    11 to 20 characters: 80 to 100%
    21 to 30 characters: 60 to 80%
    31 to 50 characters: 40 to 60%
    51 to 70 characters: 31 to 40%
    Over 70 characters: 30%
    (Source: IBM)
    So it might be easier to remember this rule of thumb: if your English text is less than 20 characters, allow it to double in length (200 percent); after that, assume an increase of half the length of the text (50 percent). (Bear in mind that ADF can apply truncation rules on some components in English too.)
    (If your text is stored in a database, developers must make sure the table column widths can accommodate the expansion of your text when translated, based on the byte size of the translated characters and not on the number of characters. Use Unicode. One character does not equal one byte in the multilingual enterprise apps world; see the short sketch at the end of this item.)
    Rely on a graceful transformation of translated text. Let all pages resize dynamically so the text wraps and flows naturally. ADF pages support this already. Think websites.
    Don't hard-code alignments. Use Start and End properties on components, not Left or Right. Don't force alignments of components on the page by using texts of a certain length as spacers. Use proper label positioning and anchoring in ADF components or other technologies. Remember that an increase in text length means an increase in vertical space too when pages are resized, so don't hard-code vertical heights for any text areas.
    Don't be tempted to manually create text or printed reports this way either. They cannot be translated successfully, and they are very difficult to maintain in English. Use XML, HTML, RTF and so on. Check out what Oracle BI Publisher offers.
    Don't force wrapping by using tricks such as \n or \t characters or HTML BR tags or forced page breaks. Once the text is translated the alignment will be destroyed; the position of the breaking character or tag would need to be moved anyway, or even removed.
    When creating tables, use table components. Don't use manually created tables that rely on word length to maintain column and row alignment. For example, don't use codeblock elements in HTML; use the proper table elements instead. Once translated, the alignment of manually formatted tabular data is destroyed.
    Finally, if there is a space restriction, don't use made-up acronyms, abbreviations or some form of daft text speak to save space. Besides being incomprehensible in English, they may need full translations of the shortened words, even if they can be figured out. Use approved or industry-standard acronyms according to the UX style rules, not as a space-saving device.
    Restricted Real Estate on Mobile Devices
    On mobile devices real estate is limited. Using shortened text is fine as long as it is comprehensible. Users in the mobile space prefer brevity too, as they are on the go, performing three-minute tasks, with no time to read lengthy texts. Using fragments, lightening up on unnecessary articles, and getting straight to the point with imperative forms of verbs makes sense on both real estate and user experience grounds.
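    To make the column-sizing point above concrete, here is a rough SQL sketch (MySQL syntax; the ui_labels table and label_text column are hypothetical names, not from the post) contrasting character count with byte count in a Unicode schema and applying the simplified budget rule above:

    -- LENGTH() counts bytes, CHAR_LENGTH() counts characters: in utf8/utf8mb4
    -- one character can occupy up to 3-4 bytes, so size columns by bytes.
    SELECT label_text,
           CHAR_LENGTH(label_text) AS chars,
           LENGTH(label_text)      AS bytes,
           CASE
             WHEN CHAR_LENGTH(label_text) < 20
               THEN CHAR_LENGTH(label_text) * 2          -- allow short text to double
             ELSE CEIL(CHAR_LENGTH(label_text) * 1.5)    -- allow ~50% growth
           END AS suggested_char_budget
    FROM ui_labels;

    Size the column itself on the byte figure, not the character figure.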

    Read the article

  • Principles of Big Data By Jules J Berman, O'Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx
    A fantastic book! It must be part, if it is not yet, of the fundamentals of Big Data as a field of science. I highly recommend it to those who are in the Big Data practice. I confess this book is one of my best reads this year, and for a number of reasons: the book is full of wisdom, intimate insight, historical facts, and real-life examples of how Big Data projects get conceived, operate, and, sadly, sometimes die. Not only that: the book, most importantly, is filled with valuable advice and an accurate, even overwhelming, amount of references (in the positive sense), and the author does not even stop there. There are numerous technical excerpts, links, and examples that let you quickly accomplish many daunting tasks, or make you aware of what you need to perform as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute when trying to reference all who face Big Data). Be aware that Jules Berman's background is in medicine, so naturally this book discusses that subject a lot; it is very dear to the author's heart, I believe. This does not make the book any less significant, however; quite the opposite. I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let's make cancer history! On a personal note, for me as a database and BI professional, it has helped me to better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered in it. This made me very curious and tempted to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go into more depth on the most popular of them. My favorite parts of the book? Well, all of them actually, but especially chapter 9, Analysis; it is just very close to my heart. But the real reason is that it let me see what I do with data from a different angle. And then the next, "Special Considerations"; the two are just logical companions. The writing language of this book is accessible to all levels; I had no technical problem reading it in ebook format on my 8" tablet or on a large-screen monitor. If I were asked to say at least something negative, I would have to state that I initially felt the book's first part reads like academic material, though the book relaxes the reader as it progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools, bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to be becoming a de facto standard item of knowledge for any modern human being. So grab a copy of this book, read it end to end, and make yourself shielded from making mistakes at any stage of your Big Data initiative; by the way, this book also helps you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Fusion CRM ISV program is gaining weight: Examples of certified add-ons

    - by Richard Lefebvre
    The Fusion CRM ISV program is gaining traction. Please find below a few examples of partners that have certified their add-ons to work seamlessly on top of Oracle Fusion CRM. For more information, please contact [email protected]
    · Opportunity-to-Quote. Big Machines now integrates seamlessly with Oracle Fusion CRM, enabling customers with complex products and services and multiple sales channels to streamline the entire opportunity-to-quote process, including product selection, configuration, pricing, quoting, and approval workflows. Create a custom hyperlink in the Opportunity to invoke the Big Machines CPQ application to create a quote, and sync up with the Fusion CRM custom quote object using the CRUD operations. The quote can be updated using the custom button in the custom tab in the opportunity details. See: http://www.bigmachines.com/oracle.php
    · SaaS Billing and Subscription Management. Is your prospect/customer asking whether top billing partners support Fusion CRM? Positioning an integrated CRM solution for billing usage and subscription-based services? Need to implement a billable solution on the Oracle Java Cloud Service? Aria Systems and Zuora have recently engaged with Oracle to deepen their integrations with Fusion CRM and team with Oracle for joint opportunities.
    · Google Apps, SharePoint, Email-CRM Integrations
      o Do your prospects use Google Apps in their business operations? A "Best of AppExchange" award winner recently completed its integration for Fusion CRM. CirrusInsight plugs Fusion CRM web services directly into Gmail, allowing you to search an existing opportunity or contact, provide account information, and create an interaction such as a phone call, appointment, or email against a customer or contact in Fusion CRM directly from Gmail.
      o An EMEA/France-based partner, Aryvart provides bi-directional synchronization of appointments and tasks between Google calendar and Oracle Fusion CRM. For customers, it means adopting Oracle Fusion CRM while continuing to use Google calendar for appointments.
      o Looking to lower the barrier and expand in SharePoint accounts? InFact Group (EMEA/France & Germany) provides a Microsoft SharePoint Connector for Oracle Fusion CRM. With this solution, you can store documents attached to an opportunity in a Microsoft SharePoint repository. For customers, it means adopting Oracle Fusion CRM while continuing to collaborate across their existing content management infrastructure.
      o Need to connect to MacMail, GroupWise, or Outlook/Exchange? Omni Technology is a partner whose Riva CRM Integration recently engaged to support Fusion CRM as a key platform.
    · Migration Tools from competitive CRMs to Oracle Fusion CRM. Data Migration Tools from legacy CRMs to Oracle Fusion CRM. A partner with the tools and techniques to speed adoption, Conemis provides data integration tools to export data from legacy CRMs and import it into Oracle Fusion CRM via Web Services APIs. For customers, it means reducing the cost of data migration from a legacy CRM system into Oracle Fusion CRM.

    Read the article

  • Collaborate10 – THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.
    THEanalytics
    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held: two solid days of BI, warehousing, and analytics organized by the BIWA SIG at IOUG. I didn't get to see all the sessions, but what struck me was the high interest in analytics. Marty Gubar's OLAP session was full, and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, so if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel.
    THEdatabasemachine
    I got some very good questions in my "what makes Exadata fast" session, and overall the interest in Exadata is overwhelming. One of the things I did try to do in my session is get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the frequent over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2: 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point out a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.
    THEcoolkids
    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at Rittman Mead spent so much time preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)
    THEend
    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope), and we hope they all come again next year!

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: How hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? Fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that they need better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse -- no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Else they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contains pre-built ETL routines and user content. Much has been written before about the advantages of that. So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel -- with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold:
    - There could be things (which is the technical term for data elements) that customers figure out they need when implementing OUBI, which are often easier to add during the source product's implementation project than afterwards;
    - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project; it may even be impossible to fix them afterwards. The Conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data.
    If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data. Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation. I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that is: bah humbug. No, seriously, given the arguments above, planning has to include the OUBI implementation, and it has to be managed properly -- just like any other implementation. If so, it should not add any risk, and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implementations are done like that. They just have to consider the points above.

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
    MIX day 1's keynote was all about Windows Phone 7 (WP7). MIX day 2's was a reminder that Microsoft has much more going on than a new mobile platform. Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn't overlook. These included the forthcoming IE9, its new JavaScript compiling engine, and support for HTML 5 that takes full advantage of the local PC's resources, including the Graphics Processing Unit. The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology), including almost-obsessive jQuery support. That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so. Then Scott Guthrie told us that Microsoft would be contributing code to the open source jQuery project. This is not your father's Microsoft, it would seem.
    But to me, the crown jewel in today's keynote was the set of announcements around the Open Data Protocol (OData). OData is nothing more than the protocol side of "Astoria" (now known as WCF Data Services, and until recently called ADO.NET Data Services) separated out and opened up as a platform-neutral standard. The 2009 Professional Developers Conference (PDC) was Microsoft's vehicle for first announcing OData, as well as project "Dallas," an Azure-based cloud platform for publishing commercial OData feeds. And we had already known about "bridges" for Astoria (and thus OData) for PHP and Java. We also knew that PowerPivot, Microsoft's forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data. And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well.
    So what was left to announce? How about OData clients for Palm webOS and Apple iPhone/Objective-C? How about the release to open source of .NET's OData client? Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment? Maybe even a Silverlight tool (code-named "Houston") to create SQL Azure databases (and then publish them as OData) right in the browser? And what if you could get at NetFlix's entire catalog in OData format? You can: just go to http://odata.netflix.com/Catalog/ and see for yourself. Douglas Purdy, who made these announcements, said "we want OData to work on as many devices and platforms as possible." After all the cross-platform OData announcements made in about a half year's time, it's hard to dispute this.
    When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company's heritage. I'll be discussing OData at length in my April Redmond Review column. I wrote that column two weeks ago, and was convinced then that OData was a big deal. Today upped the ante even more. And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing. The phone, if it succeeds, will succeed because it's a good developer platform play. And developer platforms (as well as their creators) are most successful when they have a good data strategy. OData is very Silverlight-friendly, and that means it's WP7-friendly too. Phone plus service-oriented data is a one-two punch. A phone platform without data would have been a phone with no signal.

    Read the article

  • Teeing Off With Chris Leone at OpenWorld 2012

    - by Kathryn Perry
    A guest post by Chris Leone, Senior Vice President, Oracle Applications Development
    Monday morning in downtown San Francisco: lots of sunshine, plenty of traffic, and sidewalks chock-full of people with fresh faces and blister-free feet. Let the week of Oracle OpenWorld begin! For a great Applications start, Chris Leone packed the house with his Fusion Applications overview session; he covered strategy, scope, roadmaps, and customer successes. Fusion Apps, the world's best SaaS suite, is built on 100 percent standards. Chris talked about its information-driven user experience, its innovative design, and the choice of deployment. People can run Fusion in the cloud, in a managed/hosted environment, or on premises -- or they can use a combination of these three models. About seventy percent of our customers go with SaaS. Release 5 of Fusion Apps will become available soon. The cadence of releases will be three times a year. The key drivers are to accelerate business success (no rip and replace) and to simplify business processes. Chris told the audience that organic Fusion is the centerpiece of our cloud solutions, rounded out with acquired offerings such as Taleo Recruiting and RightNow Customer Service. From the cloud solutions, customers can expect real-time and predictive BI, social capabilities, choice of deployment, and more productivity because of a next-generation UX called FUSE. Chris's demo showed a super-easy new UI that touts self-service navigation. We'll blog about FUSE in the very near future. Chris said the next 365 days of Fusion Apps would include more localization, more industries, more power, more mobile, and more configurability. The audience was challenged to think hard about how Fusion could be part of their three-to-five year plans. Chris set up a great opportunity for you to follow up with your customers as they explore the possibilities.

    Read the article

  • Silverlight Cream for February 10, 2011 -- #1045

    - by Dave Campbell
    In this Issue: Mark Monster, Jaime Rodriguez, Mark Hopkins, WindowsPhoneGeek, David Anson, Jesse Liberty, Jeremy Likness, Martin Krüger(-2-), Beth Massi, Joost van Schaik, Laurent Bugnion, and Arik Poznanski.
    Above the Fold:
    Silverlight: "Parsing the Visual Tree with LINQ" Jeremy Likness
    WP7: "Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively" David Anson
    Lightswitch: "How to Send Automated Appointments from a LightSwitch Application" Beth Massi
    Shoutouts: Be sure to visit SilverlightShow... check out their top hits last week: SilverlightShow for Jan 31 - Feb 06, 2011. Jaime Rodriguez has a post up that all the WP7 folks will be interested in: FAQ about copy paste functionality in upcoming release.
    From SilverlightCream.com:
    Make use of WCF FaultContracts in Silverlight clients
    Mark Monster takes a shot at answering the "The remote server returned an error: NotFound" problem we all see when connecting to a WCF service.
    Communication between HTML in WebBrowser and Silverlight app
    Jaime Rodriguez responds to questions he received about communication between HTML and Silverlight with this post about the bi-directional communication between the control and HTML.
    WP7 - Real Apps, Real Code
    Mark Hopkins has a post up about some WP7 starter kits that you can get all the source for, and you can actually download the app from the Marketplace first to see if it interests you!
    WP7 AboutPrompt in depth
    WindowsPhoneGeek has this cool post up about the AboutPrompt from the Coding4Fun toolkit in detail... great diagrams showing where all the elements are, and code examples with images.
    Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively
    David Anson describes why he took it upon himself to write his own PNG encoder for Silverlight... and we all thank him for doing so and providing us with the code!
    Navigation 101–Cancelling Navigation
    Jesse Liberty's latest WP7 From Scratch episode is up (number 32), and he's talking about navigation and how to cancel it if you need to.
    Parsing the Visual Tree with LINQ
    Jeremy Likness demonstrates using LINQ to rat out information in the visual tree of your XAML. To quote Jeremy: "you can easily check for intersections between elements and find any type of element no matter how deep within the tree it is".
    SpriteAnimationBehavior
    Martin Krüger has a couple more fun things in the Expression Gallery that I haven't discussed. First up is a behavior that animates up to 999 images and lets you control the FramesPerSecond... a great demo on the Expression Gallery to play with.
    Second alternative: Storyboard should not start before the Silverlight application is loaded
    Martin Krüger's latest is a way to programmatically wait for the Loaded event so that you know you can let your animations fly.
    How to Send Automated Appointments from a LightSwitch Application
    Beth Massi's latest LightSwitch post follows up her Outlook automation one with sending appointments using the standard iCalendar format... all the code included, of course.
    The case for the Bindable Application Bar for Windows Phone 7
    Joost van Schaik posts about a bindable Application Bar for your WP7 apps... grab the code and don't leave home without it :)
    MVVM Light V4 preview (BL0014) release notes
    Laurent Bugnion posted an update to MVVM Light to CodePlex a couple of days ago. This is an early preview of what he plans on having in version 4, so check out the post for what's new and fun.
    Search Digg on Windows Phone 7
    Arik Poznanski followed up his RSS post from last week with this one on searching Digg on WP7... he's discussing it and providing a utility class for doing it.

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today's data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall, and to address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering by announcing the general availability of the 12c release of its key data integration products, Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations' critical and evolving data integration requirements under three key themes:
    Future-Ready Solutions: supporting current and emerging initiatives
    Extreme Performance: even higher performance than ever before
    Fast Time-to-Value: higher IT productivity and simplified solutions
    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
    - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
    - Increased performance when executing data integration processes, due to improved parallelism.
    - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
    - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator's strategic data integration offering.
    - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications' latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
    - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.
    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
    - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
    - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
    - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
    - Enhanced security for credentials and encryption keys using Oracle Wallet.
    - Real-time replication for databases hosted on public cloud environments, including those of third-party cloud providers.
    Tight integration between Oracle Data Integrator 12c, Oracle GoldenGate 12c, and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
    - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low-overhead, real-time change data capture completely within Oracle Data Integrator Studio, without additional training.
    - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
    - It delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
    - Oracle's data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle's fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.
    Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new data integration offering with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.
    Resource Kits: Meet Oracle Data Integration 12c | Discover what's new with Oracle GoldenGate 12c
    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • Podcast Show Notes: Collaborate 10 Wrap-Up - Part 1

    - by Bob Rhubart
    OK, I know last week I promised you a program featuring Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (TUSC) and The Definitive Guide to SOA: Oracle Service Bus author Jeff Davies. But things happen. In this case, what happened was Collaborate 10 in Las Vegas. Prior to the event I asked Oracle ACE Director and OAUG board member Floyd Teter to see if he could round up a couple of people at the event for an impromptu interview over Skype (I was here in Cleveland) to get their impressions of the event.
    Listen to Part 1
    Floyd, armed with his brand new iPad, went above and beyond the call of duty. At the appointed hour, which turned out to be about an hour after the close of Collaborate 10, Floyd had gathered nine other people to join him in a meeting room somewhere in the Mandalay Bay Convention Center. Here's the entire roster:
    Floyd Teter - Project Manager at Jet Propulsion Lab, OAUG Board | Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile
    Mark Rittman - EMEA Technical Director and Co-Founder, Rittman Mead, ODTUG Board | Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile
    Chet Justice - OBI Consultant at BI Wizards | Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile
    Elke Phelps - Oracle Applications DBA at Humana, OAUG SIG Chair | Blog | LinkedIn | Oracle Mix | Book | Oracle ACE Profile
    Paul Jackson - Oracle Applications DBA at Humana | Blog | LinkedIn | Oracle Mix | Book
    Srini Chavali - Enterprise Database & Tools Leader at Cummins, Inc | Blog | LinkedIn | Oracle Mix
    Dave Ferguson - President, Oracle Applications Users Group | LinkedIn | OAUG Profile
    John King - Owner, King Training Resources | Website | LinkedIn | Oracle Mix
    Gavyn Whyte - Project Portfolio Manager at iFactory Consulting | Blog | Twitter | LinkedIn | Oracle Mix
    John Nicholson - Channels & Alliances at Greenlight Technologies | Website | LinkedIn
    Big thanks to Floyd for assembling the panelists and handling the on-scene MC/hosting duties.
    Listen to Part 1
    On a technical note, this discussion was conducted over Skype, using Floyd's iPad placed in the middle of the table. During the call the audio was fantastic; the iPad did a remarkable job. Sadly, the Technology Gods were not smiling on me that day. The audio setup that I tested successfully before the call failed to deliver when we first connected: I could hear the folks in Vegas, but they couldn't hear me. A frantic, last-minute adjustment appeared to have fixed that problem, and the audio in my headphones from both sides of the conversation was loud and clear. It wasn't until I listened to the playback that I realized something was wrong. So the audio for the Vegas side of the discussion has about the same fidelity as a cell phone. It's listenable, but disappointing when compared to what it sounded like during the discussion. Still, this was a one-shot deal, and the roster of panelists and the resulting conversation was too good and too much fun to scrap just because of an unfortunate technical glitch.
    Part 2 of this Collaborate 10 Wrap-Up will run next week. After that, it's back on track with the previously scheduled program. So stay tuned.

    Read the article

  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This has been a very interesting month for SQLAuthority.com: we had multiple and various puzzles in which everybody participated, and lots of interesting conversations that we have shared. Let us start with the latest puzzles and continue going down. A few answers were also posted on Facebook.
    SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator
    This puzzle involves NULL and throws an error; the challenge is to resolve the error. There are multiple ways to resolve it, and readers have contributed various methods. A few of them even explained why the error shows up. NULLs are a very important part of the database, and if one of the columns holds a NULL, the result can be totally different from the one expected.
    SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers
    I modified a script provided by a friend to find the greater of two numbers. My script had a small bug in it; however, lots of readers suggested better scripts. Madhivanan has written a blog post on the subject over here.
    SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints
    This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. One can take part here and win exciting prizes.
    SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL
    Madhivanan asked a very interesting question on his blog: how to generate zero without using any numbers in T-SQL. He demonstrated various methods of generating zero. I asked the same question on this blog and got many interesting answers, which I have shared.
    SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once
    I have to accept that this was the most difficult puzzle. In it I asked why, even though the settings are correct, the statistics of the tables are not getting updated. This puzzle tests various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving it. It was interesting: many took interest, but only a few got it right.
    SQL SERVER – Question to You – When to use Function and When to use Stored Procedure
    This is a rather straightforward question, not a typical puzzle. The answers from readers are great; however, there is still room for more detailed answers.
    SQL SERVER – Selecting Domain from Email Address
    I wrote on selecting domains from email addresses, and Madhivanan made a puzzle out of this simple question. He wrote a follow-up post over here; in it he shows various ways to find email addresses from a list of domains.
    Well, this last one is not a puzzle but an amazing guest post by Feodor Georgiev, who has written on the subject of Job Interviewing the Right Way (and for the Right Reasons): an article which everyone should read.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
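    For flavor, here are minimal T-SQL sketches of two of the puzzle themes above. These are illustrative only, not the solutions from the original posts, and the inline DECLARE initialization assumes SQL Server 2008 or later:

    -- Greater of two numbers (no built-in GREATEST() in this era of SQL Server):
    DECLARE @a INT = 9, @b INT = 17;
    SELECT CASE WHEN @a >= @b THEN @a ELSE @b END AS greater_value;

    -- Selecting the domain from an email address:
    DECLARE @email VARCHAR(100) = 'someone@example.com';
    SELECT SUBSTRING(@email, CHARINDEX('@', @email) + 1, LEN(@email)) AS domain;

    Both rely only on CASE, CHARINDEX, SUBSTRING, and LEN, so they port easily across SQL Server versions.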

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >