Search Results

Search found 7678 results on 308 pages for 'high caliber'.

Page 13 of 308

  • Easily Create High Converting Landing Pages For SEO

    The term landing page is nothing new; those of us working in PPC, or pay-per-click, have been carefully crafting highly specific, targeted landing pages to achieve a single desired action for some time. However, many of the principles and techniques that apply to PPC landing pages can be applied with equal effect to SEO landing pages.

    Read the article

  • Setting up MySQL Replication for High Availability

    PACKT Publishing: "MySQL Replication has been supported in MySQL for a very long time and is an extremely flexible and powerful technology. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database." (A minimal replication-filter sketch follows this entry.)

    Read the article
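
    For illustration only, here is a minimal sketch of the replication filtering the excerpt describes, assuming a simple source/replica pair. The database and table names (shop, shop.order%) are hypothetical, not taken from the article.

      # source (master) my.cnf -- give the server an id and enable the binary log
      [mysqld]
      server-id = 1
      log_bin   = mysql-bin

      # replica my.cnf -- replicate only selected databases/tables
      [mysqld]
      server-id               = 2
      replicate-do-db         = shop
      replicate-wild-do-table = shop.order%

    Dropping the replicate-* lines replicates everything; the filters above restrict the replica to one database and a wildcard set of tables.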

  • Ubuntu 14.04 Slow, High CPU Usage

    - by user273012
    I noticed strange CPU behavior after installing Ubuntu 14.04: after 5 or 10 minutes, CPU usage suddenly increases by about 50%, which really slows my laptop down (Ubuntu 13.10 worked fine, by the way). Here are my laptop specs: ASUS A43SA, Intel Core i3-2330M 2.20 GHz, 4 GB of RAM. I have seen this since the first installation, even after I disabled online and file search, even after I switched to GNOME 3, and even after I used the GNOME Fallback session. How can I solve this?

    Read the article

  • ubuntu precise high hard drive I/O

    - by pavolzetor
    On Ubuntu Precise, all applications start slowly and my hard drive is in use all the time. What is the cause? It never used to happen; even Nautilus takes a long time to load, and booting is slower too.

      top - 18:37:05 up 1:07, 1 user, load average: 2.03, 2.34, 2.25
      Tasks: 182 total, 1 running, 180 sleeping, 0 stopped, 1 zombie
      Cpu(s): 30.2%us, 7.2%sy, 1.4%ni, 53.9%id, 7.1%wa, 0.0%hi, 0.2%si, 0.0%st
      Mem:  3941576k total, 3522048k used, 419528k free, 50156k buffers
      Swap:       0k total,       0k used,      0k free, 1827640k cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
       2508 pk    20   0 1018m 189m  28m S    4  4.9  22:28.04 plugin-containe
       1290 root  20   0 82336  16m 1492 S    2  0.4   0:04.41 landscape-clien
       1305 root  20   0 97280  22m 5584 S    2  0.6   0:01.57 landscape-manag
       4201 pk    20   0 17328 1312  924 R    2  0.0   0:00.02 top
          1 root  20   0 24436 2364 1268 S    0  0.1   0:00.68 init
          2 root  20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
          3 root  20   0     0    0    0 S    0  0.0   0:00.45 ksoftirqd/0
          6 root  RT   0     0    0    0 S    0  0.0   0:00.00 migration/0
          7 root  RT   0     0    0    0 S    0  0.0   0:00.02 watchdog/0
          8 root  RT   0     0    0    0 S    0  0.0   0:00.00 migration/1
         10 root  20   0     0    0    0 S    0  0.0   0:00.43 ksoftirqd/1
         12 root  RT   0     0    0    0 S    0  0.0   0:00.01 watchdog/1
         13 root   0 -20     0    0    0 S    0  0.0   0:00.00 cpuset

    Read the article

  • Achieve a High Ranking With Professional SEO Services

    Business organizations strive hard to make themselves visible in different search engines like Google, Yahoo!, MSN and many others. The popularity of a website is measured by the ranking it has in such popular search engines. If you want people who are interested in buying the products and services that you sell, you have to make sure that you are visible to them in the first place!

    Read the article

  • Ranking High in the Search Engines Using Back Links

    If I were to say that there is one thing that outranks everything else in importance when it comes to search engine optimization, I'd bet that most people would not believe me. There is so much information and advice available online from sources such as forums and blogs that, in my view, focuses on the less important aspects of SEO; but in my five years of experience as an internet marketer, I can say with certainty that the most significant factor is: links.

    Read the article

  • A Good Final High School AP Computer Science Programming Project?

    - by user297663
    Hey guys, this question might seem very specific, but I need some ideas for a project to do in my last month or so of my AP Computer Science class. I've been looking at some college final-project ideas and a lot of them just seem plain boring. At first I thought about writing an IRC client in Java, but I wouldn't really be learning anything "new" that would help me in the future. Then I thought about doing iPhone/iPod touch apps (I don't have an Android phone, and I can easily get my hands on an iPod touch), but I would need ideas for apps to build. I want to do something that doesn't feel trivial and needs some explanation, but that will also help me in the long run by teaching me new concepts in computer science. If you could help out, I would greatly appreciate it. I really only have a month to do this project, so try to keep it within that range. Also, I don't mind learning new languages. Thanks :)

    Read the article

  • Which version management design methodology to be used in a Dependent System nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague or hard to understand. My question is about high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to design a recovery framework so that any software entity, whether the Linux kernel, the ramdisk or an application, can be rolled back to a previous good version if something goes wrong. I am therefore thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) records good kernel, ramdisk and application versions for each SN. One SN's versions may also depend on another SN's versions. So I am in a dilemma whether to use the package-management methodology of Debian-based distributions (like Ubuntu) or a Git-repository methodology in order to roll back to previous good versions on either a single SN or on all the dependent SNs. The method should also make it easy to upgrade SNs along with the MN. Some of the features I am trying to achieve (a rough sketch of the state matrix and dependency check follows this entry):

    1) Upgrading even a single software entity is possible without disturbing the others.
    2) Dependency checks must be done before applying a rollback or upgrade on each SN.
    3) A prompt should be shown if a dependency check fails; if the user still goes ahead with the rollback, all dependent SNs should be notified to roll back their own releases (if required).
    4) Binaries should be distributed to the SNs beforehand so that the recovery process is faster, rather than fetching them from the MN every time.
    5) Release patches from developers (bug fixes, feature enhancements) can be applied to a running system.
    6) Each version can be easily tracked and distinguished.

    Thanks

    Read the article
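
    As an illustration of the state-version-matrix idea above, here is a rough Python sketch, not the poster's actual design; the node names, version strings and the dependency map are invented for the example:

      # Hypothetical state version matrix kept on the MN, plus a dependency
      # check that runs before a rollback is applied to one SN.
      GOOD_STATES = {
          # state id -> {node: {"kernel": ..., "ramdisk": ..., "app": ...}}
          1: {"SN1": {"kernel": "3.2.0", "ramdisk": "r10", "app": "1.4"},
              "SN2": {"kernel": "3.2.0", "ramdisk": "r10", "app": "2.1"}},
          2: {"SN1": {"kernel": "3.4.0", "ramdisk": "r11", "app": "1.5"},
              "SN2": {"kernel": "3.4.0", "ramdisk": "r11", "app": "2.2"}},
      }

      # Example cross-SN dependency: SN1's app 1.4 only works if SN2 runs app 2.1.
      DEPENDS_ON = {("SN1", "1.4"): [("SN2", "2.1")]}

      def rollback(node, target_state, current, prompt=input):
          """Roll one SN back to target_state, checking dependencies first."""
          wanted = GOOD_STATES[target_state][node]
          broken = [dep for dep in DEPENDS_ON.get((node, wanted["app"]), [])
                    if current[dep[0]]["app"] != dep[1]]
          if broken:
              # Feature 3: warn the user; on confirmation, notify dependent SNs.
              answer = prompt(f"Rolling {node} back breaks {broken}; continue? [y/N] ")
              if answer.strip().lower() != "y":
                  return False
              for dep_node, dep_app in broken:
                  print(f"notify {dep_node}: roll back to app {dep_app}")
          current[node] = dict(wanted)
          print(f"{node} is now at {current[node]}")
          return True

      if __name__ == "__main__":
          live = {"SN1": {"kernel": "3.4.0", "ramdisk": "r11", "app": "1.5"},
                  "SN2": {"kernel": "3.4.0", "ramdisk": "r11", "app": "2.2"}}
          rollback("SN1", 1, live, prompt=lambda _: "y")  # exercises the dependency prompt

    Whether the versions themselves are delivered as packages or as Git-tracked artifacts, a small dependency check of this kind can sit in front of either mechanism.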

  • Using Hidden Markov Model for designing AI mp3 player

    - by Casper Slynge
    Hey guys, I'm working on an assignment where I want to design an AI for an mp3 player. The AI must be trained and designed using an HMM method. The mp3 player should adapt to its user by analyzing incoming biological sensor data, and from this data it will choose a genre for the next song. Given in the assignment are 14 samples of data; one sample consists of Heart Rate, Respiration, Skin Conductivity, Activity and, finally, the output genre. The 14 samples are listed below, just to give an impression of what I'm talking about:

      Sample  HR      RSP     SC      Activity  Genre
      S1      Medium  Low     High    Low       Rock
      S2      High    Low     Medium  High      Rock
      S3      High    High    Medium  Low       Classic
      S4      High    Medium  Low     Medium    Classic
      S5      Medium  Medium  Low     Low       Classic
      S6      Medium  Low     High    High      Rock
      S7      Medium  High    Medium  Low       Classic
      S8      High    Medium  High    Low       Rock
      S9      High    High    Low     Low       Classic
      S10     Medium  Medium  Medium  Low       Classic
      S11     Medium  Medium  High    High      Rock
      S12     Low     Medium  Medium  High      Classic
      S13     Medium  High    Low     Low       Classic
      S14     High    Low     Medium  High      Rock

    My experience with HMMs is quite limited, so my question is whether I have the right angle on the assignment. I have three different states for each sensor (Low, Medium, High) and two observations/output symbols (Rock, Classic). In my view, the start probabilities are the weighted factors for a Low, Medium or High state of the Heart Rate. So the ideal solution is that the AI learns these 14 samples, and when a user's sensor input is received, it compares the combination of states across all four sensors with the memorized samples. If a matching combination exists, the AI chooses that genre; if not, it chooses a genre according to the weighted transition probabilities, while simultaneously updating the transition probabilities with the new data. Is this the right approach, or am I missing something? Is there another way to determine the output probabilities (I have read about maximum-likelihood estimation via EM, but I don't understand the concept)? A count-based estimation sketch follows this entry. Best regards, Casper

    Read the article
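
    As a starting point for the output-probability question, here is a rough sketch (not part of the original question) of plain count-based maximum-likelihood estimation over the training table above: P(sensor state | genre) is estimated by counting, and a new reading is classified by comparing the two genres' scores. It is a naive per-sensor model rather than a full HMM with transitions, and only the first six samples are typed out here; the rest follow the table.

      # Count-based (maximum-likelihood) estimation over the samples above.
      from collections import Counter, defaultdict

      SENSORS = ["HR", "RSP", "SC", "Activity"]
      SAMPLES = [  # (HR, RSP, SC, Activity, Genre) -- subset of the 14 samples
          ("Medium", "Low",    "High",   "Low",    "Rock"),
          ("High",   "Low",    "Medium", "High",   "Rock"),
          ("High",   "High",   "Medium", "Low",    "Classic"),
          ("High",   "Medium", "Low",    "Medium", "Classic"),
          ("Medium", "Medium", "Low",    "Low",    "Classic"),
          ("Medium", "Low",    "High",   "High",   "Rock"),
      ]

      def train(samples):
          """Estimate P(genre) and P(sensor = state | genre) by counting."""
          genre_counts = Counter(s[-1] for s in samples)
          cond = defaultdict(Counter)   # (sensor, genre) -> Counter of states
          for *states, genre in samples:
              for sensor, state in zip(SENSORS, states):
                  cond[(sensor, genre)][state] += 1
          return genre_counts, cond

      def classify(reading, genre_counts, cond, smoothing=1.0):
          """Pick the genre with the highest smoothed likelihood for a reading."""
          best, best_score = None, float("-inf")
          total = sum(genre_counts.values())
          for genre, g_count in genre_counts.items():
              score = g_count / total
              for sensor, state in zip(SENSORS, reading):
                  counts = cond[(sensor, genre)]
                  score *= (counts[state] + smoothing) / (sum(counts.values()) + 3 * smoothing)
              if score > best_score:
                  best, best_score = genre, score
          return best

      genre_counts, cond = train(SAMPLES)
      print(classify(("High", "Low", "High", "High"), genre_counts, cond))  # -> Rock

    The same counting idea can supply an HMM's emission probabilities; EM (Baum-Welch) is only needed when the hidden states are not observed in the training data, which does not appear to be the case here since every sample is fully labelled.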

  • Nginx php-fpm high cpu usage

    - by Piotr Kaluza
    I have a problem with a high-traffic WordPress site: extremely high CPU load under nginx and php-fpm. I am caching with APC and memcached, and I have spent 2-3 days tweaking configs and looking for answers. It seems that php-fpm takes up all the available CPU no matter how many max_children I set: if I set 5, the load is about 20% for each worker; if I set 20, the loads add up to about 90%. I have tried both static and dynamic process management (a pool-configuration sketch follows this entry). The server is 2 x 3.0 GHz with 6 GB of RAM and SSDs in RAID 10, on Ubuntu 12.04 x64. uptime: 17:27:51 up 2:19, 1 user, load average: 29.79, 28.08, 26.29. What can the issue be?

    Read the article
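
    For reference, the max_children knob mentioned above lives in the php-fpm pool configuration. This is only a generic sketch; the path and the numbers are placeholders, not a recommendation for this particular server:

      ; /etc/php5/fpm/pool.d/www.conf (path varies by distribution)
      pm = dynamic
      ; hard cap on concurrent PHP workers
      pm.max_children = 10
      pm.start_servers = 4
      pm.min_spare_servers = 2
      pm.max_spare_servers = 6
      ; recycle workers periodically to contain leaks
      pm.max_requests = 500

    Whatever the cap, CPU-bound PHP requests will saturate the cores once enough workers run concurrently, which matches the behaviour described above.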

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them with this query:

      SELECT TextData, ApplicationName, Reads
      FROM [TraceWednesday]
      WHERE TextData IS NOT NULL AND EventClass = 12
      GROUP BY TextData, ApplicationName, Reads
      ORDER BY Reads DESC

    As I expected, some values are very high. Top Reads, in pages: 2504188, 1965910, 1445636, 1252433, 1239108, 1210153, 1088580, 1072725. Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly 20,000 MB, or about 20 GB? (A quick page-to-GB calculation follows this entry.) These queries are executed often and can take quite some time to run. Eventually RAM is used up as the cache grows, timeouts occur once SQL Server can no longer keep enough pages in the buffer pool, and costs go up. Is my understanding correct? I've read that I should tune the associated T-SQL and create appropriate indexes. Obviously, cutting down the I/O would make SQL Server use less RAM; or maybe it would just slow down the rate at which the RAM fills up. If far fewer pages are read, maybe it will all run much better even when usage is high (less time swapping, etc.). Currently, our only option is to restart SQL Server once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indexes here and there, is there something else I can do? Any advice beyond what I already know (not much yet) is much appreciated. Leo.

    Read the article
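
    To sanity-check the arithmetic in the question: Profiler's Reads column counts logical reads of 8 KB pages, so the conversion is a single multiplication. A quick sketch:

      # Convert the top Reads value from 8 KB pages to KB and GB.
      PAGE_KB = 8
      reads = 2504188                 # top value from the trace above

      kb = reads * PAGE_KB            # 20,033,504 KB
      gb = kb / 1024.0 / 1024.0       # about 19.1 GB
      print("%d pages = %d KB = %.1f GB" % (reads, kb, gb))

    So the poster's estimate of roughly 20 GB is about right (closer to 19.1 GB).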

  • High CPU usage by 'svchost.exe' and 'coreServiceShell.exe'

    - by kush.impetus
    I have a laptop running Windows 7 Ultimate 32-bit. For the past few days it has had a serious problem: whenever I connect to the Internet, either svchost.exe or coreServiceShell.exe, or both, hog the CPU, and coreServiceShell.exe also consumes a lot of RAM. Digging into the details, I found that the high CPU usage of svchost.exe is caused by the Network Location Awareness service, and the high CPU usage of coreServiceShell.exe is caused by Trend Micro Titanium Internet Security 2012. That makes me think Trend Micro may be the root of the problem. After further testing, I found that if I use IE or Firefox to browse the Internet, things are normal immediately after connecting. But if I use Google Chrome, coreServiceShell.exe hogs both CPU and RAM. At that point, if I disconnect the Internet, the CPU and RAM usage of coreServiceShell.exe stays high until I close Chrome. Also, when I close Chrome while the Internet is connected, svchost.exe continues to hog the CPU but coreServiceShell.exe leaves the race. That makes me think Chrome is the root of the problem, but again, tracing coreServiceShell.exe takes me back to Trend Micro Internet Security. Stopping protection in Trend Micro Internet Security doesn't help either (I am not able to stop its services, though). I have updated Chrome, but no luck. I just can't figure out which one is the culprit. I can't do without Google Chrome (that is, by simply not using it) because of its immensely useful and indispensable features for both browsing and development. Secondly, I can't uninstall the Trend Micro Internet Security suite, since it still has a few months before it expires and is providing reliable protection. What could be the cause of the problem, and what can I do to resolve it? Thanks in advance.

    Read the article

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running in our on-premises data center in Houston, and we have developed a new .NET 4-based web application to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). In order to integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data. We are seeing unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given the physical proximity to the actual server). The weird part is that we have developers in California who also get millisecond response times. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client and Postman REST Client (a small timing-loop sketch follows this entry). As if this weren't weird enough, we have noticed the same low latency from certain other EC2 instances in the same region and availability zone. If we experienced the high latency consistently from all the EC2 instances I could understand it, but something else is going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (a constant 40 ms), tracert, winmtr, etc. We also have instances in the VPC, so I tried both the public and private IP addresses of the web service host server, and that didn't make a difference to the above results either. We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.

    Read the article
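
    One way to compare latency from different hosts with a neutral client (alongside the SoapUI and browser-extension tests mentioned above) is a small timing loop; the URL below is a placeholder, not the actual endpoint:

      # Minimal latency probe: time the same web-service call repeatedly.
      import time
      import urllib.request

      URL = "https://legacy.example.com/Service.asmx?op=GetData"  # placeholder endpoint

      timings = []
      for _ in range(10):
          start = time.perf_counter()
          with urllib.request.urlopen(URL, timeout=30) as resp:
              resp.read()
          timings.append(time.perf_counter() - start)

      print("min %.3fs  avg %.3fs  max %.3fs"
            % (min(timings), sum(timings) / len(timings), max(timings)))

    Running the same probe from a slow instance and from a fast one rules out the client tooling and narrows the problem down to the network path or the instances themselves.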

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load

    I am hosting several (~30) different sites on one server with apache2 + mod_fastcgi + suexec + php5. The sites have different loads and different script execution times (some process a request for 5-7 seconds, some in under 1 second). Sometimes, when a single site receives a very high load (all PHP instances of that site are created and in use), the whole Apache server hangs: Apache (worker MPM) creates new processes up to the upper limit, and it looks like it starts to queue ALL new requests for EVERY site, not only for the one that has the high load and quickly hits its process limit. A restart of Apache solves the problem. Config:

      FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth of 100, but it didn't change anything.) Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole Apache server, or a separate queue for each defined PHP application (suexec site)? I would like to achieve something like this: when one site receives a high load and its queue is full, the server rejects further requests, but only for that one site; the other sites should keep working properly.

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours: there seems to be high disk I/O activity causing a general slowdown. I've installed atop, and this is what I see at the bottom (the server has just been restarted, which is why the values are so low):

      *** system and process activity since boot ***
       PID   RDDSK    WRDSK   WCANCL  DSK  CMD          1/18
      2176    1.7G     7.3G   854.4M   39  mysqld
       671   1248K     3.0G       0K   13  flush-8:0
       566      0K     1.1G       0K    5  jbd2/sda2-8
      2401  124.2M   529.1M   22408K    3  crond
      2032    2.2G   502.0M       0K   12  nginx
      2360  425.8M   115.3M    4188K    2  httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see in iotop at 99% in the I/O column, and they are the processes that write the most to the disk (after mysqld). From what I found on Google, this could be caused by an ext4-related bug; the current kernel is:

      Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not being very helpful. Does anyone have an idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?

    Read the article

  • Server high CPU load issue! ( Cpanel + CentOS 5)

    - by kenby
    Our server's CPU load has been high these days, sometimes reaching 560! We have the latest cPanel/WHM and the kernel is up to date. The load average is: 39.05 75.01 45.33. The Apache status shows:

      Current Time: Sunday, 30-Jan-2011 01:50:13 EST
      Restart Time: Saturday, 29-Jan-2011 21:51:20 EST
      Parent Server Generation: 2
      Server uptime: 3 hours 58 minutes 53 seconds
      Total accesses: 149493 - Total Traffic: 2.4 GB
      CPU Usage: u9.17 s10.66 cu42.82 cs0 - .437% CPU load
      10.4 requests/sec - 174.6 kB/second - 16.7 kB/request
      121 requests currently being processed, 42 idle workers

      W_WWW.W_..W.W_W_WCWW..W...W.WWW.WWWW.WW.C_W_.W.WW.WC..W.WW.WW
      .W.W.W...WWWW...WW.CC.C.._W.WC.WW_WW._W....W.WWW.W.WWW.W..W
      WW.....WW.W_WWWWW..WCRW..WWCW.WWW__.WWWWCW_W._._WW_W...W...W
      _W..W..WW.W...._W..._WW.W.WWW.._W.WWW.WWW....WW_.C...W._

      Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request,
      "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection,
      "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker,
      "." Open slot with no current process

    What causes this high CPU load while Apache's own CPU load is fine? The MySQL process is also fine. The CPU load stays high even if I stop the mail, HTTP and MySQL services!

    Read the article

  • HIGH CPU usage by PHP on a VPS Magento Server

    - by Anil
    My server running Magento has 4 GB of RAM and a 4-core CPU, but I am still struggling with high CPU usage, even though I only have about 10 visitors at any given point in time. I am not sure whether PHP should be taking such a high percentage of CPU. Attached is the top output:

      top - 09:18:32 up 2 days, 15:44, 1 user, load average: 1.16, 2.02, 1.99
      Tasks: 179 total, 2 running, 177 sleeping, 0 stopped, 0 zombie
      Cpu(s): 46.7%us, 3.9%sy, 0.1%ni, 46.9%id, 1.0%wa, 0.0%hi, 0.0%si, 1.4%st
      Mem:  3919972k total, 3164968k used, 755004k free, 530820k buffers
      Swap: 1048568k total, 379352k used, 669216k free, 1536388k cached

        PID USER      PR  NI  VIRT   RES   SHR  S %CPU %MEM    TIME+   COMMAND
      15897 vpsadmin  20   0  431m  168m   54m  R 91.7  4.4    2:16.16 php-cgi
      12308 vpsadmin  20   0  404m  163m   73m  S 29.3  4.3   15:15.90 php-cgi
       3644 mysql     20   0 1528m   80m  4944  S  9.8  2.1  1899:58   mysqld
       4969 apache    20   0  471m  6228  2824  S  2.0  0.2    0:18.53 httpd
      16148 root      20   0 15024  1220   864  R  2.0  0.0    0:00.01 top
          1 root      20   0 19364  1064   844  S  0.0  0.0    0:02.50 init

    Read the article

  • High Memory Utilization on weblogic

    - by Anup
    My WebLogic application server shows high memory utilization. The application seems to perform well, without any memory issues. Now that my traffic is going to increase, I am worried about memory and have a feeling things could go bad, so I need to take action now, and I am confused about whether I should increase the JVM memory on the WebLogic instances (which means adding more physical memory) or increase the number of managed instances in the cluster. I would like to understand what high memory utilization really means, and the advantages and disadvantages of adding JVM memory versus adding managed instances. Thanks.

    Read the article
