Search Results

Search found 20852 results on 835 pages for 'local seo'.

Page 34/835 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Too many Apache processes killing the CPU

    - by RULE101
    I have noticed that too many Apache processes are killing the CPU on my dedicated server:

        14193 (Trace) (Kill) nobody 0 66.1 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        14128 (Trace) (Kill) nobody 0 65.9 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        14136 (Trace) (Kill) nobody 0 65.9 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        14129 (Trace) (Kill) nobody 0 65.8 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        13419 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        13421 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        13426 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        13428 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        13429 (Trace) (Kill) nobody 0 65.7 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        12173 (Trace) (Kill) nobody 0 65.5 0.0 /usr/local/apache/bin/httpd -k start -DSSL
        14073 (Trace) (Kill) nobody 0 65.5 0.0 /usr/local/apache/bin/httpd -k start -DSSL

    I am getting high-load email notifications from cPanel during the day. From httpd.conf:

        Include "/usr/local/apache/conf/includes/pre_main_global.conf"
        Include "/usr/local/apache/conf/includes/pre_main_2.conf"
        LoadModule bwlimited_module modules/mod_bwlimited.so
        LoadModule h264_streaming_module /usr/local/apache/modules/mod_h264_streaming.so
        AddHandler h264-streaming.extensions .mp4
        Include "/usr/local/apache/conf/php.conf"
        Include "/usr/local/apache/conf/includes/errordocument.conf"
        ErrorLog "logs/error_log"
        ScriptAliasMatch ^/?controlpanel/?$ /usr/local/cpanel/cgi-sys/redirect.cgi
        ScriptAliasMatch ^/?cpanel/?$ /usr/local/cpanel/cgi-sys/redirect.cgi
        ScriptAliasMatch ^/?kpanel/?$ /usr/local/cpanel/cgi-sys/redirect.cgi
        ScriptAliasMatch ^/?securecontrolpanel/?$ /usr/local/cpanel/cgi-sys/sredirect.cgi
        ScriptAliasMatch ^/?securecpanel/?$ /usr/local/cpanel/cgi-sys/sredirect.cgi
        ScriptAliasMatch ^/?securewhm/?$ /usr/local/cpanel/cgi-sys/swhmredirect.cgi
        ScriptAliasMatch ^/?webmail/?$ /usr/local/cpanel/cgi-sys/wredirect.cgi
        ScriptAliasMatch ^/?whm/?$ /usr/local/cpanel/cgi-sys/whmredirect.cgi
        RewriteEngine on
        AddType text/html .shtml
        Alias /akopia /usr/local/cpanel/3rdparty/interchange/share/akopia/
        Alias /bandwidth /usr/local/bandmin/htdocs/
        Alias /img-sys /usr/local/cpanel/img-sys/
        Alias /interchange /usr/local/cpanel/3rdparty/interchange/share/interchange/
        Alias /interchange-5 /usr/local/cpanel/3rdparty/interchange/share/interchange-5/
        Alias /java-sys /usr/local/cpanel/java-sys/
        Alias /mailman/archives /usr/local/cpanel/3rdparty/mailman/archives/public/
        Alias /pipermail /usr/local/cpanel/3rdparty/mailman/archives/public/
        Alias /sys_cpanel /usr/local/cpanel/sys_cpanel/
        ScriptAlias /cgi-sys /usr/local/cpanel/cgi-sys/
        ScriptAlias /mailman /usr/local/cpanel/3rdparty/mailman/cgi-bin/
        <Directory "/">
            AllowOverride All
            Options All
        </Directory>
        <Directory "/usr/local/apache/htdocs">
            Options All
            AllowOverride None
            Require all granted
        </Directory>
        <Files ~ "^error_log$">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>
        <Files ".ht*">
            Require all denied
        </Files>
        <IfModule log_config_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "logs/access_log" common
            <IfModule logio_module>
                LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
            </IfModule>
        </IfModule>
        <IfModule alias_module>
            ScriptAlias /cgi-bin/ "/usr/local/apache/cgi-bin/"
        </IfModule>
        <Directory "/usr/local/apache/cgi-bin">
            AllowOverride None
            Options All
            Require all granted
        </Directory>
        <IfModule mime_module>
            TypesConfig conf/mime.types
            AddType application/x-compress .Z
            AddType application/x-gzip .gz .tgz
        </IfModule>
        <IfModule prefork.c>
            Mutex default mpm-accept
        </IfModule>
        <IfModule mod_log_config.c>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            LogFormat "%{Referer}i -> %U" referer
            LogFormat "%{User-agent}i" agent
            CustomLog logs/access_log common
        </IfModule>
        <IfModule worker.c>
            Mutex default mpm-accept
        </IfModule>
        # Direct modifications to the Apache configuration file may be lost upon subsequent regeneration of the
        # configuration file. To have modifications retained, all modifications must be checked into the
        # configuration system by running:
        #     /usr/local/cpanel/bin/apache_conf_distiller --update
        # To see if your changes will be conserved, regenerate the Apache configuration file by running:
        #     /usr/local/cpanel/bin/build_apache_conf
        # and check the configuration file for your alterations. If your changes have been ignored, then they will
        # need to be added directly to their respective template files.
        #
        # It is also possible to add custom directives to the various "Include" files loaded by this httpd.conf.
        # For detailed instructions on using Include files and the apache_conf_distiller with the new configuration
        # system refer to the documentation at: http://www.cpanel.net/support/docs/ea/ea3/customdirectives.html
        #
        # This configuration file was built from the following templates:
        #     /var/cpanel/templates/apache2/main.default
        #     /var/cpanel/templates/apache2/main.local
        #     /var/cpanel/templates/apache2/vhost.default
        #     /var/cpanel/templates/apache2/vhost.local
        #     /var/cpanel/templates/apache2/ssl_vhost.default
        #     /var/cpanel/templates/apache2/ssl_vhost.local
        #
        # Templates with the '.local' extension will be preferred over templates with the '.default' extension.
        # The only template updated by the apache_conf_distiller is main.default.
        PidFile logs/httpd.pid
        # Defined in /var/cpanel/cpanel.config: apache_port
        Listen 0.0.0.0:80
        User nobody
        Group nobody
        ExtendedStatus On
        ServerAdmin [email protected]
        ServerName server.powerlabel.net
        LogLevel warn
        # These can be set in WHM under 'Apache Global Configuration'
        Timeout 300
        ServerSignature On
        <IfModule prefork.c>
        </IfModule>
        RewriteEngine on
        RewriteMap LeechProtect prg:/usr/local/cpanel/bin/leechprotect
        Mutex file:/usr/local/apache/logs rewrite-map
        <IfModule !mod_ruid2.c>
            UserDir public_html
        </IfModule>
        <IfModule mod_ruid2.c>
            UserDir disabled
        </IfModule>
        # DirectoryIndex is set via the WHM -> Service Configuration -> Apache Setup -> DirectoryIndex Priority
        DirectoryIndex index.html.var index.htm index.html index.shtml index.xhtml index.wml index.perl index.pl index.plx index.ppl index.cgi index.jsp index.js index.jp index.php4 index.php3 index.php index.phtml default.htm default.html home.htm index.php5 Default.html Default.htm home.html
        # SSLCipherSuite can be set in WHM under 'Apache Global Configuration'
        SSLPassPhraseDialog builtin
        SSLUseStapling on
        SSLStaplingCache shmcb:/usr/local/apache/logs/stapling_cache_shmcb(256000)
        SSLSessionCache shmcb:/usr/local/apache/logs/ssl_gcache_data_shmcb(1024000)
        SSLSessionCacheTimeout 300
        Mutex file:/usr/local/apache/logs ssl-cache
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin
        # Defined in /var/cpanel/cpanel.config: apache_ssl_port
        Listen 0.0.0.0:443
        AddType application/x-x509-ca-cert .crt
        AddType application/x-pkcs7-crl .crl
        AddHandler cgi-script .cgi .pl .plx .ppl .perl
        AddHandler server-parsed .shtml
        AddType text/html .shtml
        AddType application/x-tar .tgz
        AddType text/vnd.wap.wml .wml
        AddType image/vnd.wap.wbmp .wbmp
        AddType text/vnd.wap.wmlscript .wmls
        AddType application/vnd.wap.wmlc .wmlc
        AddType application/vnd.wap.wmlscriptc .wmlsc
        <Location /whm-server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>
        # SUEXEC is supported
        Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf"
        Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf"

    What can cause this and how can I fix it?
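
    A note on narrowing this down: with ExtendedStatus On and /whm-server-status already allowed from 127.0.0.1 in the configuration above, you can see exactly which requests those "nobody" children are serving before killing anything. A minimal sketch, assuming curl is available on the box (on a cPanel server the usual culprits are a runaway PHP script or bot traffic hammering an expensive URL):

        # Which URLs are the busy children actually serving right now?
        curl -s http://127.0.0.1/whm-server-status | grep -B1 -A2 -i 'requests currently being processed'

        # Which httpd processes are burning CPU, and for how long?
        ps -eo pid,pcpu,etime,args --sort=-pcpu | grep '[h]ttpd' | head

    Once the offending vhost or script is identified, capping the prefork MPM (ServerLimit, MaxRequestWorkers, MaxConnectionsPerChild) through WHM's Apache Global Configuration, or through one of the cPanel include files shown above, keeps a burst of heavy requests from taking every core while you fix the underlying script.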

    Read the article

  • How to fetch the website code to my local machine

    - by vipin8169
    I have a local Git repository on my system named 'git_repo', which contains the whole codebase for a website (pre-configured by someone else), including all the JSPs, JS, CSS, etc. I used the following commands to create the local repository out of the main repository:

        git branch                                                  // to show the current branch
        git checkout -b branch_local_name origin/Main_branch_name   // to create local repository in current branch
        git fetch                                                   // to fetch the current build

    Accidentally, I deleted all the contents of the local folder and I don't know how to fetch the contents of that website again. Please help!
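
    If the deletion only removed files from the working tree (i.e. the hidden .git directory inside git_repo is still there), the content can usually be restored straight from the local repository. A minimal sketch, using the branch names from the commands above:

        cd git_repo
        git status                          # the files should show as deleted, not "not a git repository"
        git checkout -- .                   # restore all deleted/modified tracked files from the current branch
        # or, to match the remote branch exactly:
        git fetch origin
        git reset --hard origin/Main_branch_name

    If the .git directory itself was deleted, there is nothing local left to restore from, and a fresh git clone of the main repository is the way back.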

    Read the article

  • Localising Your SEO - How to Do This Successfully

    Getting to the top of the search engines can be easier for some businesses and niches than for others, but sometimes it is better to concentrate your Search Engine Optimisation efforts on local searches, especially if your products are more suited to a local customer base. Not only is this easier than going for national searches, your conversion rate will usually be higher too, because local people generally prefer to purchase from, or use the services of, local businesses.

    Read the article

  • When using Bundler and Rails 2.3.5 I get uninitialized constant SubdomainFu when migrating

    - by user347480
    Hi, I'm using Bundler with Rails 2.3.5 and I'm trying to make sure everything is working correctly, but when I do a "rake db:migrate --trace" I get this:

        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        rake aborted!
        uninitialized constant SubdomainFu
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in `const_missing'
        /Users/node/Projects/Race-RX/config/initializers/subdomain_config.rb:1
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:622:in `load_application_initializers'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:621:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:621:in `load_application_initializers'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:176:in `process'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
        /Users/node/Projects/Race-RX/config/environment.rb:9
        /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/misc.rake:4
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
        /opt/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:607:in `invoke_prerequisites'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:604:in `invoke_prerequisites'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:596:in `invoke_with_call_chain'
        /opt/local/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        /opt/local/bin/rake:19:in `load'
        /opt/local/bin/rake:19

    I don't know what could be causing this. I did, however, put require "rubygems", require "bundler", and Bundler.setup in my environment.rb file, but that doesn't seem to be the problem.
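
    One thing worth checking first: rake was invoked directly, so it may not be running inside the bundle at all, in which case gems such as subdomain-fu never make it onto the load path and the SubdomainFu constant is undefined by the time config/initializers/subdomain_config.rb runs. A quick sketch, assuming the gem is declared in your Gemfile:

        bundle show subdomain-fu              # confirm Bundler actually installed the gem
        bundle exec rake db:migrate --trace   # run the task inside the bundle's environment

    If that fixes it, also note that Bundler.setup only puts gems on the load path; the usual Rails 2.3 + Bundler arrangement also calls Bundler.require (or requires the gem explicitly) so that initializers can see its constants.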

    Read the article

  • Create new subdomain or buy new domain? SEO costs [migrated]

    - by greek_no_money
    If I am targeting the same audience, but the new sub-site has a different concept from the existing site, should I create a new sub-domain or a new domain? What are the SEO advantages and disadvantages of creating a new sub-domain? For example, Stack Exchange, with an Alexa rank of 1,474 right now, has sub-domains such as programmers.stackexchange.com and other domains such as serverfault.com with rank 3,515 and stackoverflow.com with rank 109. So why didn't Stack Exchange put, for example, Stack Overflow on a sub-domain to create a better ranking?

    Read the article

  • Does Wicket hamper SEO or search engines' ability to crawl?

    - by Nick
    We're coming from GWT projects, and because of SEO problems with GWT we're going to move away from it for our next project (mainly because SEO is a high priority for this next project). In choosing a new framework, I'm looking at Wicket and liking what I've seen so far. I've only done a few tutorials, but in looking at the WAR layout (from these tutorials) it looks like most of the HTML pages are in the WEB-INF folder. Is this going to cause problems for SEO and for search engines crawling through the site's files? Ideally, I'd like to use Wicket with some AJAX and deploy to Google App Engine.

    Read the article

  • Used SQL Svr 2008 Config Manager to Set Service Account to Local System: What Did It Change?

    - by Frank Ramage
    Direct shot to foot moment... While setting up individual non-admin accounts for the MSSQLSERVER services, I temporarily set the SQL Server service login to the Local System account. I remembered later that: "SQL Server Configuration Manager performs additional configuration such as setting permissions in the Windows Registry so that the new account can read the SQL Server settings." I want my Local System back (actually, just restored to its original security profile). Any advice? Thanks!

    Read the article

  • rc.local is not always executed upon boot

    - by starcorn
    Hey, I have a weird problem with the rc.local file located at /etc/rc.local: it does not always run when I boot up the laptop. Maybe every second time; I haven't counted. Anyway, when that happens I have to manually go to a terminal and type sudo /etc/init.d/rc.local start, which kinda kills the purpose of having this script. Anyone know what the problem could be?

    EDIT: Since this wasn't obvious: this is an issue when I do a fresh boot, meaning I have shut down the computer. The next time I boot up the computer, the rc.local file randomly decides whether it will run automatically or not. Here's a copy of what my rc.local file contains:

        echo -n 255 > /sys/devices/platform/i8042/serio1/serio2/sensitivity
        echo level 2 > /proc/acpi/ibm/fan
        touch /home/starcorn/Desktop/foo
        rfkill block bluetooth
        exit 0
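
    One way to see whether the script runs at all on a given boot (and where it stops) is to have it log itself. A sketch of the same rc.local with logging added, assuming the stock Ubuntu header of that era (#!/bin/sh -e):

        #!/bin/sh -e
        # /etc/rc.local -- log each run so silent failures at boot become visible
        exec >> /var/tmp/rc.local.log 2>&1
        set -x
        date

        echo -n 255 > /sys/devices/platform/i8042/serio1/serio2/sensitivity
        echo level 2 > /proc/acpi/ibm/fan
        touch /home/starcorn/Desktop/foo
        rfkill block bluetooth

        exit 0

    Because the stock header uses sh -e, any command that fails (for example if the serio2 sensitivity node or the ThinkPad fan interface isn't ready yet at boot) aborts the rest of the file, which looks exactly like "rc.local sometimes doesn't run"; the log shows which line that was, and that command can then be retried or moved elsewhere.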

    Read the article

  • Syncing objects to a remote server, and caching on local storage

    - by Harry
    What's the best method of syncing objects (as JSON) to a remote server, with local caching? I have some objects that will pretty much just be plain text with some extra metadata. I was thinking of perhaps including a "last modified date" for both local storage and remote storage. This could then be used to determine which object is the most recent. For example, even though objects are saved to both local and remote storage when they are saved, sometimes the user may not have internet access, or the server may be down, or any number of other things. In this case, the last modified date for remote storage would be left at its previous date; local storage would remain as it is. At this point, the user could exit the application, and when they reload it the application would then look at the last modified dates of the local and remote storage and decide. Is there anything I'm missing with this? Is there a better method that I could use?
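
    As a concrete version of the "last modified date" comparison, here is a rough sketch in shell (the object name, URL, and the Last-Modified header are assumptions, not part of the question) that decides which side is newer; the same three-way logic applies in whatever language the app is written in:

        #!/bin/sh
        # Hypothetical object name and endpoint
        OBJECT=notes-42.json
        REMOTE_URL="https://example.com/api/objects/notes-42"

        local_ts=$(stat -c %Y "$OBJECT" 2>/dev/null || echo 0)    # local mtime, epoch seconds (GNU stat)
        remote_date=$(curl -fsI "$REMOTE_URL" | awk -F': ' 'tolower($1)=="last-modified" {print $2}' | tr -d '\r')
        remote_ts=$(date -d "$remote_date" +%s 2>/dev/null || echo 0)    # GNU date parses HTTP dates

        if [ "$local_ts" -gt "$remote_ts" ]; then
            echo "local copy is newer -> push it"
        elif [ "$remote_ts" -gt "$local_ts" ]; then
            echo "remote copy is newer -> pull it"
        else
            echo "copies are in sync (or neither side is reachable)"
        fi

    The one thing timestamps alone don't give you is conflict detection (both sides changed since the last sync); for that you also need to remember the time of the last successful sync, or a revision counter, and treat "both newer than last sync" as a conflict to surface to the user.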

    Read the article

  • How to make XAMPP virtual hosts accessible to VM's and other computers on LAN?

    - by martin's
    XAMPP running on a Vista 64 Ultimate dev machine (don't think it matters).

    Machine / browser configuration:
        Safari, Firefox, Chrome and IE9 on the dev machine
        IE7 and IE8 on separate XP Pro VMs (VMware on the dev machine)
        IE10 and Chrome on a Windows 8 VM (VMware on the dev machine)
        Safari, Firefox and Chrome running on an iMac (same network as dev)
        Safari, Firefox and Chrome running on a couple of Mac Pros (same network as dev)
        IE7, IE8, IE9 running on other PCs on the same network as the dev machine

    Development configuration:
        Multiple virtual hosts for different projects
        .local fake TLD for development
        No firewall restrictions on the dev machine for Apache
        Some sites have .htaccess mapping www to non-www
        Port 80 is open in the dev machine's firewall

    Problem:
        The XAMPP local home page (http://192.168.1.98/xampp/) can be accessed from everywhere, real or virtual, by IP
        All .local sites can be accessed from the browsers on the dev machine
        All .local sites can be accessed from the browsers in the XP VMs
        Some .local sites cannot be accessed from IE10 or Chrome on the W8 VM
        Sites that cannot be accessed from the W8 VM have a minimal .htaccess file
        No .local sites can be accessed from ANY machine (PC or Mac) on the LAN

    hosts on dev machine (relevant excerpt):

        127.0.0.1 site1.local
        127.0.0.1 site2.local
        127.0.0.1 site3.local
        127.0.0.1 site4.local
        127.0.0.1 site5.local
        127.0.0.1 site6.local
        127.0.0.1 site7.local
        127.0.0.1 site8.local
        127.0.0.1 site9.local
        192.168.1.98 site1.local
        192.168.1.98 site2.local
        192.168.1.98 site3.local
        192.168.1.98 site4.local
        192.168.1.98 site5.local
        192.168.1.98 site6.local
        192.168.1.98 site7.local
        192.168.1.98 site8.local
        192.168.1.98 site9.local

    httpd-vhosts.conf on dev machine (relevant excerpt):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost
            ServerAlias localhost *.localhost.*
            DocumentRoot D:/xampp/htdocs
        </VirtualHost>
        # ======================================== site1.local
        <VirtualHost *:80>
            ServerName site1.local
            ServerAlias site1.local *.site1.local
            DocumentRoot D:/xampp-sites/site1/public_html
            ErrorLog D:/xampp-sites/site1/logs/access.log
            CustomLog D:/xampp-sites/site1/logs/error.log combined
            <Directory D:/xampp-sites/site1>
                Options Indexes FollowSymLinks
                AllowOverride All
                Require all granted
            </Directory>
        </VirtualHost>

    NOTE: The above <VirtualHost *:80> block is repeated for each of the nine virtual hosts in the file, no sense in posting it here.

    hosts on all VMs and physical machines on the network (relevant excerpt):

        127.0.0.1 localhost
        ::1 localhost
        192.168.1.98 site1.local
        192.168.1.98 site2.local
        192.168.1.98 site3.local
        192.168.1.98 site4.local
        192.168.1.98 site5.local
        192.168.1.98 site6.local
        192.168.1.98 site7.local
        192.168.1.98 site8.local
        192.168.1.98 site9.local

    None of the VMs have any firewall blocks on HTTP traffic. They can reach any site on the real Internet. The same is true of the real machines on the network. The biggest puzzle perhaps is that the W8 VM actually DOES reach some of the virtual hosts. It does NOT reach site2, site6 and site9, all of which have this minimal .htaccess file:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        </IfModule>

    Adding this file to any of the virtual hosts that do work on the W8 VM will break the site (only for the W8 VM, not the XP VMs) and require a cache flush on the W8 VM before it will see the site again after deleting the file.
    Regardless of whether a .htaccess file exists or not, no machine on the LAN can access anything other than the XAMPP home page via IP, even with hosts files on all machines. I can ping any virtual host from any machine on the network and get a response from the correct IP address. I can't see anything in our Netgear router that might prevent one machine from reaching another. Besides, once the local hosts file resolves to an IP address, that's all that goes out onto the local network. I've gone through an extensive number of posts, both on SO and from Google searches, and I can't say that I have found anything definitive anywhere.
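
    One test that separates "the name doesn't resolve on that machine" from "Apache isn't answering for that name" is to send the Host header by hand from one of the failing machines, bypassing DNS and the hosts file entirely. A sketch from one of the Macs, using the dev machine's IP from above (site1.local is just one of the nine names):

        # Is Apache reachable at all from this machine?
        curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.98/xampp/

        # Ask for a specific virtual host explicitly, no name resolution involved
        curl -s -H 'Host: site1.local' http://192.168.1.98/ | head

        # Now the same site through the name the hosts file should resolve
        curl -sv http://site1.local/ 2>&1 | grep -E 'Connected to|HTTP/'

    If the Host-header request returns the right site but the named request fails, the problem is client-side name resolution rather than the vhost config; if both fail while the /xampp/ page works, the dev machine's Apache access log will show whether the requests arrive at all, which distinguishes a ServerName/vhost problem from something in between (firewall profile, browser proxy settings).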

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets, matching the local routing policy:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following:
    1. I don't know if it's possible, but probably decrease the priority of the local table so the packets match table 10003.
    2. Use iptables to mangle these packets and update the local table route to include the mark information.
    But I'm not sure if these are the right approaches.
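
    For what it's worth, the first idea above (letting the per-ENI rule win over the local table) is a known technique: the local table does not have to sit at priority 0, it just has to be consulted before main. A sketch of the reordering, hedged heavily because a mistake here can cut the instance off the network, so keep an out-of-band console handy and test on a throwaway instance first:

        # Make sure a copy of the per-ENI rule comes before wherever "local" ends up
        sudo ip rule add pref 50 from 10.1.1.190 lookup 10003
        # Re-add the local table at a lower priority, then drop the pref-0 entry
        sudo ip rule add pref 100 table local
        sudo ip rule del pref 0 table local
        sudo ip rule show

    With that ordering, replies sourced from 10.1.1.190 consult table 10003 (default via 10.1.1.1 dev eth3) before the local table can claim the 10.1.3.12 destination, so they leave on eth3 as intended; everything else still hits the local table at pref 100 ahead of main. The iptables/fwmark approach in the second idea also works, but it needs both a mangle rule and a matching ip rule, so it is usually the fallback rather than the first attempt.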

    Read the article

  • Setup Guide for updating local system and the repository with the incremental Solaris 11.1 SRU

    - by Gurubalan
    This guide covers the steps to implement the following setup:

    I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5
    II. Setting up the local system as an IPS Repository Server (HTTP interface)
    III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

    I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5

    We assume that the local system is currently installed with Solaris 11.1 GA and that the system doesn't have internet connectivity.

    What I have:

    1. Two parts of the full repo iso files downloaded from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html. Both files are concatenated into a single file using the following command:

        $ cat sol-11_1-repo-full.iso-a sol-11_1-repo-full.iso-b > sol-11_1-repo-full.iso

    I suggest verifying the downloaded file against its md5 checksum value [http://download.oracle.com/otn/solaris/11_1/md5sum.txt] using the following command; the output should match the original checksum value for that file:

        digest -a md5 <file-name>

    2. The incremental repo sol-11_1_16_5_0-incr-repo.iso downloaded from MOS [Patch 18269379: ORACLE SOLARIS 11.1.16.5.0 REPO ISO IMAGE (SPARC/X86 (64-BIT)]. You can get the checksum value of the incremental repo iso by ticking the "show digest details" check box when you download the file.

    3. The local system IP is 192.168.10.10 and port 81 is reserved for the repo server.

    Please note that this repo file (either full or incremental) is common to both SPARC and X86 (64-bit).

    Steps to update the local system:

    1. Mount the S11.1 full repo iso at /mnt:
        $ mount -F hsfs /soft/sol-11_1-repo-full.iso /mnt
    2. Set the pkg publisher to the full repo source:
        $ pkg set-publisher -g file:///mnt/repo solaris
    3. Perform the update of the packages:
        $ pkg update

    II. Setting up the local system (Oracle Solaris 11.1) as an IPS Repository Server (HTTP interface)

    Please note that we have already mounted the full repo iso at /mnt.

    1. Copy /mnt permanently to the disk location at /s11.1:
        # zfs create -o atime=off -o mountpoint=/s11.1 rpool/s11.1
        # rsync -aP /mnt/* /s11.1
    2. Unmount /mnt:
        # umount /mnt
    3. To allow clients to access the local repository via HTTP, enable the application/pkg/server Service Management Facility (SMF) service:
        svccfg -s application/pkg/server setprop pkg/inst_root=<data_source>/repo
        e.g.: $ svccfg -s application/pkg/server setprop pkg/inst_root=/s11.1/repo
    4. Set the port to 81:
        svccfg -s application/pkg/server setprop pkg/port=<port_number>
        e.g.: $ svccfg -s application/pkg/server setprop pkg/port="81"
    5a. Enable the pkg/server service (if the service is disabled):
        $ svcs pkg/server
        STATE          STIME    FMRI
        disabled        19:55:03 svc:/application/pkg/server:default
        $ svcadm enable pkg/server
    5b. Refresh/restart the service if it is already online:
        $ svcadm refresh application/pkg/server
        $ svcadm restart application/pkg/server
    6. Set the pkg publisher on the repo server and repo clients:
        pkg set-publisher -G '*' -g http://<ip>:<port> solaris
        e.g.: $ pkg set-publisher -G '*' -g 'http://192.168.10.10:81' solaris
    7. Verify the Solaris 11.1 version from the repository:
        $ pkgrepo list -s http://192.168.10.10:81 | grep entire
        solaris   entire     0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z
    You will have multiple row entries if the repository is set up with incremental SRUs.

    III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5

    1. Mount the S11.1 incremental SRU repo iso at /mnt:
        $ mount -F hsfs <full_path_to>/sol-11_1_sruN_bldnum_respinnum-incr-repo.iso /mnt
        $ mount -F hsfs /soft/sol-11_1_16_5_0-incr-repo.iso /mnt
    2. Update the local repository:
        $ pkgrecv -s /mnt/repo -d /s11.1/repo '*'
    3. Build a search index:
        $ pkgrepo -s /s11.1/repo refresh
        Initiating repository refresh.
    4. Refresh/restart the service:
        $ svcadm refresh svc:/application/pkg/server
        $ svcadm restart svc:/application/pkg/server
    5. Verify the repo has the incremental SRU as well:
        # pkgrepo list -s http://192.168.10.10:81 | grep entire
        solaris   entire      0.5.11,5.11-0.175.1.16.0.5.0:20140218T165248Z
        solaris   entire      0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z

    Read the article

  • Google I/O 2010 - SEO site advice from the experts

    Google I/O 2010 - SEO site advice from the experts
    Tech Talks: Matt Cutts, Greg Grothaus, Evan Roseman
    A perfect opportunity to get your website reviewed by the experts in the Google Search Quality team. Attendees can get concrete search engine optimization (SEO) feedback on their own sites. We'll also answer real-life questions that affect developers when it comes to optimizing their websites for search. For all I/O 2010 sessions, please go to code.google.com
    From: GoogleDevelopers | Views: 308 | 12 ratings | Time: 01:00:38 | More in Science & Technology

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm -- in my case Levi.com, Banana Republic, and ShopStyle show up. The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so they were erroneously higher in page rankings. No doubt this helped drive additional sales during this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading.

    My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their on-line sales volume) who does not play by their rules ... is this a good thing or a bad thing? Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and it makes me feel uncomfortable.

    Now Google is incorporating more social aspects into their search results. For example, when Google knows it's me (i.e. I'm logged in when using Google), search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's, perhaps they would now be higher in the result set.

    Soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation on Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • URL good practice for category sub category?

    - by Seting
    I have developed an application and I need to make its URLs SEO-friendly. I have the following URL structure:

        http://localhost:3000/posts/product/testing-with-slug-url-2
        http://localhost:3000/posts/product/testing-with-slug-url-2-4-23

    Is this good practice? If not, how can I rewrite it? OK, I'll explain my application. It is based on shopping. If a customer searches for mobiles, it will redirect to a URL like this:

        http://mydomain.com/cat/mobile-3

    The 3 in the URL is my database id, which is used for further searching. After the user reaches the mobile page he may want to filter by some brand, e.g. Nokia, so my URL looks like:

        http://mydomain.com/subcat/nokia-3-2

    The integers at the end refer to the category id (3) and the brand id (2). My doubt is whether the integers at the end of the URL will affect SEO ranking.

    Read the article

  • Using subdomains or directories for main categories?

    - by Matthieu
    I have a website which references places to travel to around the world. Those places are (of course) grouped by country. Here is an example of an actual URL on my website: http://awespot.org/country/105/iceland. I am wondering if it would be better, in terms of SEO, to separate the countries into subdomains: http://iceland.awespot.org/. I know subdomains are considered by Google as different websites, so I am weighing the two options: separating would mean also separating the PageRank and the benefit of links to the website, but separating would enable me to create a "web" of related websites (all related to travel) that link to and benefit each other. I am only asking about SEO here; I know there are other questions raised by this possibility (user experience, administration... even password completion by web browsers).

    Read the article

  • Can third party content on sub-domains harm the main site's search rankings?

    - by dror
    I have a site that is a "portal" or "directory" for service providers. We opened a page on our site for every service provider, but now we get a lot of applications from those providers who want sites of their own. We want to make a full site for every service provider, but would rather put them on subdomain URLs. (They don't mind, it's OK for them.) So, my site is www.example.com and their sites will be: provider.example.com. Now I have two questions: Can the content on the provider sites harm my site in SEO? If one of those subdomains is punished by Google because the owner does "black hat SEO", how will it affect the root domain? Can it make the root domain get punished?

    Read the article

  • Short brandable domain vs. long keyword domain

    - by bajki
    I need some advice about my websites' domains. Let's say I have just bought two domains: xyz.com and xyzphotography.com. As we can see, the first one is shorter but far less meaningful than the second one. The second one contains a photography keyword, but it's longer. I have only one website, which is my photography portfolio. Can you advise me on how to set up those domains to make it all as SEO-friendly as possible? Which one should be the main one, and should the rest work on their own or have some kind of SEO-friendly redirect to the main domain?

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more.

    Before you criticize... Yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data.

    Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

    1. Is the rate of indexing directly correlated to PR, and by that I mean is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking.

    Noteworthy qualities we have in place:
    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest in Webmaster Tools (only about a page every two seconds, max); I recently turned it back to "let Google decide", which is what is advised.

    Read the article

  • How to link a C++ object to a local variable in Lua

    - by MahanGM
    I'm completing my scripting interface with Lua, but recently I've gotten stuck at one point. I have several functions for my entity events, like Update(). I have a function called create_entity() which instantiates a new entity from a given entity index:

        function Update()
            local bullet = create_entity(0, 0, "obj_bullet")
        end

    create_entity returns a table holding the properties of the created entity. Now how can I make a connection between the bullet variable and my newly created object? Right now, for objects previously added to the scene, I simply set a global table for each of them, and then after every call to Update() I go through the registered names to find the object tables and apply the new changes. Like the one below:

        function Update()
            if keyboard_key_press(vk_right) then obj_player.x += 3 end
        end

    I can get the obj_player table because I know its name from C++; plus I can get it as a global table and simply reach for the first instance named obj_player. Is there any solution for making the bullet variable act like this? I was thinking of getting all the local variables in the Update() function and checking each one to see if it is a table and has a unique field attached to it, like an id; this way I can determine that it is an object table and do the rest of the process. By the way, would this interface be easier to implement with LuaBind?

    Bottom line: how can I make a local variable in Lua that receives a table from the create_entity function, and track that local variable so I can capture it from C++? e.g.

        function Update()
            local bullet = create_entity(0, 0, "obj_bullet")
            bullet.x = 10   -- <== Commit a change in the table
        end

    Now I want to get the variable bullet from C++. And it's not just this variable; there might be a ton of these local variables with different names.

    Read the article

  • Sub domain on root domain

    - by dror
    I have a site, actually a "portal"/"directory" for service providers. For a start, we opened a page on our site for every service provider, but now we get a lot of applications from those providers saying they want sites of their own. We want to give every service provider his own site, but on a subdomain URL (they don't mind, it's OK for them). So, my site is www.example.com and their sites will be: provider.example.com. Now I have two questions: Can it harm my site in SEO? If one of those subdomains is punished by Google because its owner does "black hat SEO", how will it affect the root domain? Can it make the root domain get punished?

    Read the article

  • LTS 12.04.1 will not resolve domain.local websites

    - by user108502
    I have done a brand new installation of Ubuntu Server (v12.10) with BIND configured to serve a DNS zone for gdos.local, and Apache configured for that domain. From a brand new installation of Ubuntu Desktop LTS, I try to connect to www.gdos.local and all I get is:

        Server not found. Firefox can't find the server at www.gdos.local. Check the address for typing errors such as ww.example.com instead of www.example.com

    However, if I change the domain to gdos.tmp and type in www.gdos.tmp, I get the internal website. If I change it to mybusiness.local, I get the same error message. If I use a Microsoft OS, this works fine; all three domains resolve to a webpage. I have searched the internet flat for the past week on DNS issues but have not come up with a solution. I have followed instructions ranging from removing dnsmasq to editing files like resolv.conf (in some very strange places), and I still have no joy in getting the .local domain extension to work. I can safely say the issue is not with the server but with the desktops, because if the issue were server-related, the Microsoft OSs would not resolve it either. I have done several installs of the desktop in an effort to make sure that I did not break anything while trying to fix this. Please can anyone point me to a workable solution for fixing the .local domain extension. Thank you, Mark Hollander
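
    One thing worth checking on the Ubuntu desktops specifically: by default the .local TLD is handled by multicast DNS (Avahi) before unicast DNS is ever asked, which matches the pattern described here (gdos.tmp resolves, gdos.local does not, and Windows clients are fine). A commonly suggested change, assuming the desktop still has the stock hosts line in /etc/nsswitch.conf:

        # Stock line on an Ubuntu desktop of this era:
        #   hosts: files mdns4_minimal [NOTFOUND=return] dns
        # Ask real DNS before (or instead of) mDNS for this network:
        sudo sed -i 's/^hosts:.*/hosts: files dns mdns4_minimal/' /etc/nsswitch.conf
        getent hosts www.gdos.local    # should now return the answer from the BIND server

    The longer-term fix most documentation suggests is simply not to use .local for a unicast DNS zone (as the working gdos.tmp test hints), since .local is reserved for mDNS; renaming the zone to something like .lan or a domain you own avoids fighting Avahi on every client.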

    Read the article

  • Convenient practice for where to place images?

    - by Baumr
    A lot of developers place all image files inside a central directory, for example:

        /i/img/
        /images/
        /img/

    Isn't it better (e.g. content architecture, on-page SEO, code maintainability, filename maintainability, etc.) to place them inside the relevant directories in which they are used? For example:

        example.com/logo.jpg
        example.com/about/photo-of-me.jpg
        example.com/contact/map.png
        example.com/products/category1-square.png
        example.com/products/category2-square.png
        example.com/products/category1/product1-thumb.jpg
        example.com/products/category1/product2-thumb.jpg
        example.com/products/category1/product1/product1-large.jpg
        example.com/products/category1/product1/product2-large.jpg
        example.com/products/category1/product1/product3-large.jpg

    What is the best practice here regarding all possible considerations (for static non-CMS websites)? N.B. The names product1-large and product1-thumb are just examples in this context to illustrate what kind of images they are. It is advised to use descriptive filenames for SEO benefit.

    Read the article

  • How can I exclude content in my notifications bar from being indexed?

    - by Liam E-p
    Of course I want my content to be indexed quickly by search engines, but not my notifications bar. My notifications bar contains the last 30 changes to content on the site, and I don't want this to show in my SEO meta. Since all the notifications are generic, they often don't provide any relevant information. As I said, the notifications are generic: if an article named "123" was created, it would create a notification that says "Article "123" was created by xxx at 12:00AM". I'm now wondering if this is a content design problem, as only 1/3 of this information is actually relevant to users (the title, and what happened). By SEO meta and irrelevant notification data being shown, I mean this - Basically, what I was wondering is how I could optimise this so that search engines wouldn't show this generic nonsense.

    Read the article

  • Moving one site in Webmaster Tools to more than one site [closed]

    - by Towhid
    Possible Duplicate: How should I structure my urls for both SEO and localization?

    I have a Question and Answer site about immigration. Now I have divided it into two sites:

        mysite.co.uk, about immigration to the UK
        mysite.com, with subdomains for every country, like: australia.mysite.com, sweden.mysite.com, ...

    I have now moved all the content from my first site onto the .co.uk and .com sites and their subdomains to fill them. I know that Google will detect my two new sites as duplicates of the first one, which is very bad for SEO, and I don't think Google Webmaster Tools has a tool for this. So please guide me on how to fix this problem.

    Read the article
