Search Results

Search found 7625 results on 305 pages for 'scraper sites'.

  • Why is this Name-based Virtual Host configuration not working?

    - by Kave
    I have two domains (siteA.com and siteB.com) that point to the same web server, and I would like to show different web pages for each. The steps I have taken so far are:

    1) Copy the default site (siteA) to siteB:

        sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/siteB

    2) Edit the new file (sudo vim /etc/apache2/sites-available/siteB):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/siteB
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/siteB>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride FileInfo Indexes
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    Then I created /var/www/siteB and put a sample index.html in it. However, when I load siteB.com I still get directed to /var/www/siteA. Why is that? Do I have to rename /etc/apache2/sites-available/default to /etc/apache2/sites-available/siteA as well?

    UPDATE: Thanks to the answer below, it turns out that besides enabling the site I had also forgotten a ServerName/ServerAlias entry:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias www.siteB.com
        </VirtualHost>

    To include all subdomains as well, do:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName siteB.com
            ServerAlias *.siteB.com
        </VirtualHost>

    The same goes for siteA.
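
    For reference, a minimal working vhost for siteB (a sketch assuming the Apache 2.2-era syntax used above; the ServerName entry belongs in the same block as the DocumentRoot rather than in a separate one):

        # /etc/apache2/sites-available/siteB - sketch
        # Apache 2.2 also needs "NameVirtualHost *:80" (usually in ports.conf)
        <VirtualHost *:80>
            ServerName siteB.com
            ServerAlias www.siteB.com
            DocumentRoot /var/www/siteB
        </VirtualHost>

        # enable the site and reload Apache
        sudo a2ensite siteB
        sudo /etc/init.d/apache2 reload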

  • Amazon EC2 Ubuntu with GitLab and nginx - can't load?

    - by thebluefox
    OK, so I've spun up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab: http://doc.gitlab.com/ce/install/installation.html

    The only step I've not been able to complete is running the following status check:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted! Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 is just a free-tier setup, so it isn't that well spec'd. Regardless, I've been trying to access this through my browser. I've set up the Elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason, though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now, for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration). This disappeared after I removed the default file from /etc/nginx/sites-enabled/ (after reading on a troubleshooting page that it could be the issue). Since then I've had nothing, even after symlinking the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sitting inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, not the symlinked one in /sites-enabled. The content of this file is as follows:

        upstream gitlab {
            server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen *:80 default_server;    # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
            server_name git.mydom.co.uk;   # e.g., server_name source.example.com;
            server_tokens off;             # don't show the version number, a security best practice
            root /home/git/gitlab/public;

            # Increase this if you want to upload large attachments
            # or if you want to accept large git objects over http
            client_max_body_size 20m;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

    All the paths mentioned in here look OK... I'm about at the end of my knowledge now!
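
    One common workaround for the ENOMEM (a sketch that assumes the real cause is the micro instance's small RAM, which is typical for the free tier) is to add a swap file so fork() stops failing:

        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1 GB swap file
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # persist across reboots
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    With swap in place, rake gitlab:check usually gets far enough to point at the actual web server misconfiguration.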

  • [Nameservers] Can I set up 5 IPs on 1 server and have only 2 nameservers?

    - by Tyler
    So I have 1 computer with 5 IPs and around 15 sites being hosted. Four of the IPs are dedicated to 4 sites, and the rest share the 5th IP. When I'm setting up my nameservers, do I set them up at GoDaddy (my registrar), in my server's DNS, or both? Can I just set up:

        NS1 - add all the IPs
        NS2 - add all the IPs

    and have all the sites use those two nameservers?
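
    For what it's worth, a zone-file sketch of one common layout (names and documentation-range IPs here are hypothetical): each NS name gets a single A record, registered as a glue record at the registrar, and the sites' own A records can then use any of the five IPs.

        example.com.      IN NS   ns1.example.com.
        example.com.      IN NS   ns2.example.com.
        ns1.example.com.  IN A    203.0.113.10   ; glue record at the registrar
        ns2.example.com.  IN A    203.0.113.11   ; glue record at the registrar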

  • Understanding Authorized Access to your Google Account

    - by firebush
    I'm having trouble understanding what I am granting to sites when they have "Authorized Access to my Google Account." This is how I see what has authorized access:

        1. Log into Gmail.
        2. Click on the link that is my name in the upper-right corner, and from the drop-down select Account.
        3. From the list of links to the left, select Security.
        4. Click on Edit next to "Authorized applications and sites."
        5. Authenticate again.

    At the top of the page, I see a set of sites that have authorized access to my account in various ways. I'm having trouble finding out what is being told to me here: there's no "help" link anywhere on the page, and my Google searches are coming up unproductive. From the looks of it, Google has access to my Google Calendar. I feel comfortable about that, I think. But other sites have authorization to "Sign in using your Google account". My question is: what exactly does that authorization mean? What do the sites that have authorization to "Sign in using my Google account" have the power to do? I hope it simply means that they authenticate me against the same credentials that Gmail uses, and I assume it doesn't grant them the ability to access my email. Can someone please calm my paranoia by describing (or simply pointing me to a site that describes) what these terms mean exactly? Also, if you have any thoughts about the safety of this feature, please share. Thanks!

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to the format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. The default root directory for the Apache web server is pointed at an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts include:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php in mysite has different code). I have tried setting up other virtual hosts, but they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that suggested as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
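
    A quick way to see why requests fall through to the bare DocumentRoot (standard Apache tooling, nothing OS X specific) is to ask Apache how it parsed the vhosts; the first vhost listed for *:80 is the default that catches any request whose Host header doesn't match:

        apachectl -S              # dump the parsed virtual host table
        sudo apachectl graceful   # re-read the config after edits

    If mysite.dev doesn't appear in that output at all, the vhosts include isn't being read; if it appears but isn't matched, the ServerName/NameVirtualHost pairing is the place to look.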

  • Is there a way to make IETab always open a site in Internet Explorer?

    - by reg
    Hello, IETab offers a setting that lets you define sites that should always be opened using IE's rendering engine. This is great and I use it all the time. There are a few sites, though, with which I encounter problems when I access them this way (needless to say that these have some IE-only features, otherwise I would simply use Firefox without IETab). So my question is: is there a way to make IETab (or any other extension that does the same thing) automatically open a specific list of sites in IE? To be clear: I want a way to specify a list of sites that will open an external IE process from Firefox, when clicking on those links or bookmarks from within Firefox.

  • VirtualHost: one HTTPS site, the rest HTTP

    - by RJP1
    I have a Linode server with Apache2 running a handful of sites with virtual hosting. All sites work fine on port 80. One site has an SSL certificate and also runs OK. My problem is as follows: for the non-HTTPS sites, visiting https://domain.com shows the contents of the only secure site... Is there a way of disabling the *:443 match for these non-secure sites? Thanks!

    EDIT (more information): Here's a typical config in sites-available for a normal insecure HTTP site:

        <VirtualHost *:80>
            ServerName www.insecure.com
            ServerAlias insecure.com
            ...
        </VirtualHost>

    The secure HTTPS site is as follows:

        <VirtualHost *:80>
            ServerName www.secure.com
            Redirect permanent / https://secure.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secure.com
            RedirectMatch permanent ^/(.*) https://secure.com/$1
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            SSLProtocol all
            SSLCertificateChainFile ...
            SSLCertificateFile ...
            SSLCertificateKeyFile ...
            SSLCACertificateFile ...
            ServerName secure.com
            ServerAlias secure.com
            ...
        </VirtualHost>

    So, visiting:

        http://insecure.com - works
        http://www.insecure.com - works
        http://secure.com - redirects to https://secure.com - works
        http://www.secure.com - redirects to https://secure.com - works
        https://insecure.com - shows https://secure.com - WRONG!
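
    One way to stop the fall-through (a sketch; it assumes you can generate a self-signed certificate, since Apache requires some certificate on every *:443 vhost) is to declare a deny-all default vhost that is loaded ahead of the real one for port 443:

        # Loaded first for *:443, this catches every HTTPS request whose
        # hostname is not secure.com; browsers will show a certificate
        # warning for those, but they can no longer read secure.com's pages.
        <VirtualHost *:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/selfsigned.pem    # hypothetical paths
            SSLCertificateKeyFile /etc/ssl/private/selfsigned.key
            <Location />
                Order deny,allow
                Deny from all
            </Location>
        </VirtualHost>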

  • Nginx + Aegir + Drupal permissions: 403 Forbidden

    - by nlam
    New to nginx. I installed it on Mac OS X for use with Aegir and Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here:

        /var/aegir/hostmaster-6.x-1.7/

    The hostmaster settings file is here:

        /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php

    Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other" the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private), so I would have to change the permissions there too. Moreover, I would have to change permissions for every site installed by hostmaster. Anyway. In my nginx.conf I have the following:

        user "myuser" _www;

    The owner and group for settings.php, sites/example.ldev/files, and sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem but really confuses me: why do "other" have permission and not the owner or group? I've checked the processes running in Activity Monitor. Nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
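
    One check worth making (a sketch; it assumes PHP runs in a separate FastCGI/php-fpm process, which is the process that actually opens settings.php under nginx) is to confirm which user each process really runs as, and then give the file a matching owner and group so 440 is sufficient:

        # which users do the nginx and PHP processes actually run as?
        ps aux | egrep 'nginx|php' | grep -v grep

        # give settings.php the PHP process's user/group (names assumed)
        sudo chown myuser:_www /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php
        sudo chmod 440 /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php

    If the PHP worker turns out to run as a user that is neither "myuser" nor in "_www", only the "other" permission bits apply to it, which would explain why 004 works while 440 does not.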

  • UNC access to TFS SharePoint doesn't work

    - by RobSiklos
    We are using TFS 2010 with the SharePoint document portal. We are trying to access the files in SharePoint using UNC paths (e.g. \\tfs2010\sites\DefaultCollection\MyProject) and it just plain doesn't work. From my workstation, I actually get different behaviours depending on the path.

        Case 1: Path = \\tfs2010\sites\DefaultCollection\MyProject\
        Result: Windows Explorer reports a network error: "Windows cannot access \\tfs2010\sites\DefaultCollection\MyProject\"

        Case 2: Path = \\tfs2010.mycompany.com\sites\DefaultCollection\MyProject\
        Result: a Windows Security dialog pops up asking for my username and password. I tried entering all combinations of my Windows username and password (with and without the domain before the username), but no matter what, my credentials are not accepted.

    I have no problems accessing the SharePoint site through the web portal - it's just UNC that doesn't work. There doesn't seem to be anything relevant in the Event Viewer on the server. Anyone have any idea what could be the problem?
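
    If it helps to know: UNC access to a SharePoint site goes through WebDAV rather than SMB, so the WebClient service has to be running on the workstation. A sketch of testing that explicitly (the domain and username are hypothetical):

        rem confirm the WebDAV redirector is running
        sc query WebClient

        rem try mapping the library with explicit credentials
        net use Z: \\tfs2010.mycompany.com\sites\DefaultCollection\MyProject /user:MYDOMAIN\myuser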

  • How much RAM should I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed RAM and 2048 MB burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI. On the server itself I have a Joomla community site with a forum system that generally has about 20 people on it at most at any point; during the evening, no one. It's a pretty small site but has a number of modules running on it. It gets about 6000 visits a month. Also on the server is a WordPress site that gets about 80-150 visits a day, 2 other WordPress sites that aren't developed yet so they get no traffic at all, and 2 static HTML websites that also only get about 500 hits a month. All in all, no huge sites.

    The issue is that I get "out of memory" errors fairly frequently; they kill my server and I need to reboot it to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket, they just tell me to upgrade the RAM. Now, I'm still pretty new to all this, so I'm not a good judge of how much I really need for my sites to run. I don't know if my sites really do need this much, or if VPSVille has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much RAM should I be using with my current setup? Thanks!
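
    Before paying for more RAM it's worth measuring what is actually using it (standard Linux tooling; nothing here is VPSVille-specific):

        free -m                        # real memory use vs. the burst ceiling
        ps aux --sort=-rss | head -15  # the fifteen biggest processes

    If Apache's worker processes dominate that list, capping MaxClients/ServerLimit in httpd.conf (to something like 10-15 on 768 MB, though the right number depends on your per-process size) is a common way to stop the out-of-memory kills on small VPSes.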

  • Monitoring Active Directory (AD) Replication in Windows Server 2008 R2

    - by Kyle Brandt
    With Active Directory, what is a good way to monitor replication? I have multiple sites and multiple locations, so ideally both replication between sites and within sites would be monitored. I'm not really sure whether each DC needs to be monitored, each NTDS connection, or each DC x NTDS connection pair. For the purposes of fitting into a standard alerting methodology, perfmon counters that would allow me to alert if replication is behind by X minutes seem like they might be ideal.
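
    For ad-hoc checks alongside perfmon, the built-in repadmin tool summarizes replication health per DC and per connection (standard Windows Server tooling):

        rem one-line health summary per DC: largest delta, failure counts
        repadmin /replsummary

        rem per-NTDS-connection detail, machine-readable for alerting
        repadmin /showrepl * /csv > replstatus.csv

    Parsing that CSV for non-zero failure counts or a too-old last-success time is one hedged way to get a "behind by X minutes" alert without defining a counter per connection.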

  • Set up a layer 2 VLAN between 2 data centres

    - by user41679
    Hello. Our data centre provider operates 2 sites; we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the 2 sites over a 20 Gbit connection, and that they'd just give me an Ethernet cable at each end to connect the locations. At the current site we have Cisco 2960-48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls with which we connect to our internet provider. My question is: what would I need to do to connect the 2 sites? Could I just plug the Ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect both through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the 2 sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information - I've a fairly good knowledge of hardware, but we're down a netops guy at the moment and I'd like to get both sites online ASAP! Thanks in advance!
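
    On the 2960 side the configuration can be as small as one port. A sketch in IOS syntax - the VLAN number and whether the provider hands your traffic off tagged or untagged are assumptions to confirm with them first:

        ! if the provider delivers your frames untagged, an access port works
        interface GigabitEthernet0/48
         description L2 link to second data centre
         switchport mode access
         switchport access vlan 10
        !
        ! if they deliver tagged frames, trunk the port instead:
        !  switchport mode trunk
        !  switchport trunk allowed vlan 10

    With the same VLAN and subnet at both ends, the two sites behave like one LAN, so the existing pair of firewalls can keep serving both; separate per-site firewalls only become necessary if you later split the broadcast domain.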

  • Subversion issue on Mac OS X

    - by user32942
    This exists in my httpd.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /Users/iirp/Sites/svn
            Allow from all
            #AuthType Basic
            #AuthName "Subversion repository"
            #AuthUserFile /Users/iirp/Sites/svn-auth-file
            #Require valid-user
        </Location>

    This works fine. When I change it to:

        <Location /svn>
            DAV svn
            SVNParentPath /Users/iirp/Sites/svn
            #Allow from all
            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /Users/iirp/Sites/svn-auth-file
            Require valid-user
        </Location>

    and access my repository through its URL, it gives me the authentication screen, but after that screen my SVN repository is not shown correctly. The message it gives me is:

        Internal Server Error

        The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected], and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
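
    A 500 right after the password prompt usually means Apache could not read (or find) the user file, rather than a bad password. A sketch of creating the file and checking the real cause (paths match the config above; the username is hypothetical):

        # create the password file (-c creates it; drop -c to add more users)
        sudo htpasswd -c /Users/iirp/Sites/svn-auth-file iirp

        # make sure the Apache user can read it
        sudo chmod 644 /Users/iirp/Sites/svn-auth-file

        # the actual error is spelled out here on OS X
        tail -20 /var/log/apache2/error_log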

  • Meet the new Windows Azure

    - by Leniel Macaferi
    Today we are releasing a major set of improvements to Windows Azure. What follows is a brief summary of just a few of them:

    New Administration Portal and Command-Line Tools

    Today's release comes with a new portal for Windows Azure that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works in all browsers, and offers a lot of great new features - including native support for VMs (virtual machines), Web Sites, Storage, and monitoring of services hosted in the cloud.

    The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically against this Web API. Today we are also releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering for download a set of tools for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines (VMs)

    Windows Azure now supports the ability to deploy and run durable/persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots) and you can use any operating system on them. Our native image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including the Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily use Terminal Server or SSH to access it, configure and customize the virtual machine however you want, and optionally capture a snapshot of the current image to use when creating new VM instances. This gives you the flexibility to run practically any workload on the Windows Azure platform.

    The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and control resource utilization inside them. Our new Virtual Machine support also makes it easy to attach multiple disks to VMs (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which makes Windows Azure continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers away from your primary data center.

    We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), allowing you to easily migrate existing workloads that you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which also offers the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js, and PHP web sites to a highly scalable cloud environment that lets you start small (and free of charge) and then scale your application as your traffic grows. You can create a new web site on Azure and have it ready for deployment in less than 10 seconds.

    The new Windows Azure Portal offers integrated support for administering web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS, and Web Deploy. We are also releasing updates to the Visual Studio and WebMatrix tools that let developers easily deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of a site deployment - as well as the ability to incrementally update the database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a complete TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you deploy using TFS or Git, the deployments tab will track the deployments you make and let you select an older (or newer) deployment so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

    Windows Azure now lets you deploy up to 10 web sites into a free, shared multi-tenant hosting environment (where a site you deploy will be one of several sites running on a shared set of server resources). This gives you an easy way to start developing projects at no cost. You can optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use - letting you, for example, increase your reserved instance capacity as your traffic grows.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS, and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis - which lets you scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. We previously referred to this capability as "hosted services" - with this week's release we are now rebranding it as "cloud services". We are also enabling a lot of new features with them.

    Distributed Cache

    One of the really cool new features enabled with cloud services is a new distributed caching capability that lets you set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications and has no throttling limits. It can grow and shrink dynamically and elastically (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache, and more). In addition to the AppFabric Cache Server API, this new caching capability also supports the Memcached protocol - letting you point code written for Memcached at the distributed cache (with no code changes required).

    The new distributed cache can be configured to run in one of two ways:

    1) Using a co-located caching approach. In this option you allocate a percentage of memory from your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data placed in the cache by one role instance can be accessed by the other role instances in your application - regardless of whether the cached data is stored in that role or another one. The big benefit of the "co-located" cache option is that it is free (you pay nothing to enable it) and it lets you use what might otherwise be unused memory within your application's VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.

    New SDKs and Supporting Tools

    We have updated all of the Windows Azure SDKs (software development kits) with today's release to include new features and capabilities. Our SDKs are now available in several languages, and all of their source code is published under an Apache 2 license and maintained in repositories on GitHub. The .NET SDK for Azure in particular has a lot of great improvements in today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac, and Linux in the languages offered on all of these systems - so that developers can build Windows Azure applications using any operating system during development.

    Much, Much More

    The summary above is just a small list of some of the improvements being delivered in preview or final form today - there is much more included in today's release. Among these improvements are new Virtual Private Networking capabilities, the new Service Bus runtime and its supporting tools, the public preview of the new Azure Media Services, new data centers, a significant upgrade to the storage and networking hardware, SQL Reporting Services, new Identity features, support for more than 40 new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be giving at 1pm PDT (5pm Brasília time) today, June 7th, where I will walk through all of the new features. We will open the new features I referred to above for public use a few hours after the presentation ends. We are really excited to see the great applications you will build with them.

    Hope this helps,

    - Scott

    Translated from the original post by Leniel Macaferi.

  • Windows Azure Evolution - Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on June 7th, many new features and updates arrived in the Windows Azure platform. In the coming several posts I will try to cover some of them, and in this first post I would like to take a quick walkthrough of the new preview developer portal.

    History of the Developer Portal

    If you have been working with Windows Azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features, and I still remember the impression it left on me: the layout was not very attractive, and it offered very little. In November 2010, along with the SDK 1.3 release, the developer portal got a big overhaul. To provide more usability and features it was rebuilt on Silverlight, so it ran like a desktop application with many windows, lists, commands, and context menus. From 2010 until now many features were added to the portal, such as remote desktop, co-admin, virtual network connect, the VM role, etc., and the portal itself became more and more complicated.

    But using Silverlight brought some problems. The first is browser capability: as you know, most mobile and tablet browsers do not allow rich-content plugins such as Flash and Silverlight. This means people cannot open and configure their Azure services from an iPad, iPhone, Windows Phone, and so on, even when all they need is to restart a hosted service or view the status of their databases. Another problem is performance: Silverlight provides a rich experience, but it also needs more bandwidth. So with this upgrade the preview developer portal goes back to HTML with JavaScript, as a mobile-friendly, cross-browser, interactive web site.

    Preview Portal vs. Silverlight Portal

    Before I start talking about the new preview portal, I should highlight that it is a PREVIEW version. Although you can do almost everything that was available in the old portal, plus some cool new features I will mention in the coming posts, some things are still being developed and migrated, so sometimes you will need to switch back to the old portal. For example, the preview portal has no co-admin management or remote desktop functions yet, and the SQL database management function takes you back to the old SQL Azure management portal. As Microsoft has said, these missing features will be moved into the preview portal over the next couple of months. Since the public URL of the developer portal, https://windows.azure.com/, now points to the preview one, to get back you need to click the preview button at the top of the page and click the "Take me to the previous portal" link.

    Overview

    There are four parts to the preview portal. At the top is the header, which shows the account you are currently logged in with. If you click the header it shows the top menu of Windows Azure, from which you can navigate to the Windows Azure home page, the pricing information page, community and account pages, etc. The navigation bar is on the left-hand side, with the following categories:

        ALL ITEMS - all items in your Windows Azure account, including web sites, services, databases, etc.
        WEB SITES - the web sites in your account; linked resources are shown when you drill down into a web site.
        VIRTUAL MACHINES - the virtual machines you have deployed to Azure.
        CLOUD SERVICES - all Windows Azure hosted services in your account.
        SQL DATABASES - all SQL databases (SQL Azure) in your account.
        STORAGE - all Windows Azure storage services in your account.
        NETWORKS - the virtual networks (Windows Azure Connect) you have created.

    The available items are listed in the main part of the page based on the currently selected category. If there is no item, a quick-create link is shown instead. At the bottom of the page is the command and information bar: based on what is selected and what the user is doing, it shows related information and commands. For example, while I was creating a new web site, the information bar told me that my web site was being provisioned, and there were two commands in the command bar. Once the site was ready, the command bar showed the commands I could apply to my new web site.

    "Web Sites" is a new feature introduced along with this upgrade. It gives us an easier and quicker way to establish a web site from scratch or from an existing library; I will cover it in more detail in the next post. From the command bar you can also create a service by clicking the NEW button, which slides the creation panel up.

    Where Are My Hosted Services?

    Windows Azure Hosted Services have been renamed to Cloud Services. Creating a new service is very easy: just click the NEW button at the bottom of the page and select CLOUD SERVICE and QUICK CREATE. This creates a blank hosted service without a deployment or certificate; you only need to specify the service URL and the affinity group/region. The service then appears in the list, and clicking the item shows all of its information in the main part of the page. Since no package has been deployed to the service yet, we cannot see any information about it, but we can upload a package using the command at the bottom. As you can see, we can manage the configuration, instances, and certificates, and we can scale up and down (change the VM size) and in and out (increase and decrease the instance count) for our service.

    Assuming I have created an ASP.NET MVC 3 web role project in Visual Studio and built the package, I can click the UPLOAD button on this page to deploy it. In the pop-up window I just specify my deployment name, package file, and configuration file. I can also check the box below so that it will NOT warn me if this deployment has only one instance. Once we click OK, our package is uploaded and provisioned by the platform.

    After a while the information bar shows that the service is ready. The dashboard page shows basic information about the service and deployment: the usage overview diagram, status, URL, public IP address, etc. On the configure page we can view and change the CSCFG content, such as the monitoring settings, connection strings, and OS family. On the scale page we can increase and decrease the number of instances, and on the instances page we can view the status of all instances. If your service uses SQL databases or storage accounts, they are shown under the linked resources page, and you can manage the service's certificates under the certificates page.

    What About My Storage Services?

    Storage services can be managed by clicking the STORAGE link in the navigation bar, and a new storage service can be created from the NEW button. After you specify the storage account name and region, it is provisioned by the platform. If you want to copy or manage the storage keys, just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that you can monitor your storage service by enabling the monitoring configuration: click the storage item in the list and navigate to the configure page. There you can enable monitoring for blob, table, and queue storage, and you can also enable logging of any requests that come to the storage account. But as the tooltip on the page says, enabling monitoring and logging increases the usage of the storage account, which means a bigger bill, so make sure you enable them appropriately.

    And My SQL Databases (SQL Azure)?

    The last thing I want to introduce quickly is SQL databases, formerly named SQL Azure. You can create a new SQL database server and a new database by clicking the ADD button under the SQL Databases navigation item. In the pop-up window, just specify the database name, edition, size, collation, and server. You can select an existing SQL database server if you have one, or create a new one. If you choose to create a new server, there is one more step: specify the server login, password, and region. Once it is ready, you can manage your databases as well as the servers in the portal. For a particular server you can update the firewall settings on its configure page.

    So, What Else?

    There are some other areas of the preview portal I did not cover, such as virtual machines, virtual networks, and web sites. I will talk about virtual machines and web sites in separate future posts. The virtual network is the Windows Azure Connect we are already familiar with. But as I mentioned at the beginning of this post, the preview portal is still under development and some features are not available in it yet. For example, you cannot manage the co-admins of your subscriptions, you cannot open remote desktop to your hosted services, and you cannot navigate directly to Windows Azure Service Bus, Access Control, and Caching, which were formerly named Windows Azure AppFabric. In these cases you need to go back to the old portal, so for the coming several months we may need to use both sites.

    Summary

    In this post I quickly introduced the new Windows Azure developer portal. Since things have been rearranged and renamed, I demonstrated some features that existed in the old portal, such as how to create and deploy a hosted service and how to provision a storage service and a SQL database. All features of the old portal have been, are being, or will be migrated into the new portal, but some of them live in a different category or page that we need to find.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Multiple stores for the same niche

    - by pandronic
    I started developing a new niche of products in my country about 3 years ago. That's when I opened my first store. Everything went fine until a year ago, when someone I thought was a friend secretly stole my idea and made his own competing store. I was pretty upset when I caught him and decided to make things as difficult as possible for him, so I made another 4 stores, trying to push him as low as possible in the search results. The new sites have similar products (although not 100% identical), slightly different titles, images and prices. They look different and are built on different e-commerce platforms. They are all hosted on the same server, have roughly the same backlinks, use the same Google account for Analytics, have the same support phone numbers, etc. I wasn't thinking I was doing something fishy, so I didn't try to hide anything.

    The trouble is that those sites, after doing fine for a few months, dropped like bricks in the search results, almost to the point that they can't be found at all. At the moment, the only things that rank relatively well are the original site and a couple of unimportant secondary pages from one of the other sites. How did this happen? Does Google have something against this practice? Did they take action on their own when they realized I was trying to monopolize this niche, or did my competitor report me for some kind of webspam? And more importantly, what do I do now? Do I shut down all but my original site and 301-redirect users to it from the others? Can I report my competitor for engaging in the same practice? (He fought back and now has 3-4 sites, some of which still rank OK-ish. He has no idea about web development, SEO or marketing; he just crudely copies what I do, and is slowly but surely starting to do better than me.)

  • How do I properly set up and secure a production LAMP Server?

    - by Niklas
    It's very hard to find comprehensive information on this subject. Either I find short tutorials that cover just the installation, as simple as "apt-get install apache2", or outdated tutorials. So I was hoping I could get some professional information from my fellow members of the Ubuntu community :D

    I have performed a normal Ubuntu Server 11.04 installation with LAMP, Samba and SSH selected during system installation, but I'm having trouble setting up virtual hosts and making the system secure enough to expose the server to the web. I've loosely followed this tutorial so far. I have 3 sites in /etc/apache2/sites-available which all look like this, except for different site names:

        <VirtualHost example.com>
            ServerAdmin webmaster@localhost
            ServerAlias www.edunder.se
            DocumentRoot /var/www/sites/example
            CustomLog /var/log/apache2/www.example.com-access.log combined
        </VirtualHost>

    I have enabled them with the a2ensite command, so I have symbolic links in /etc/apache2/sites-enabled. My /etc/hosts file has these lines:

        127.0.0.1 localhost
        127.0.1.1 Ubuntu.lan Ubuntu
        127.0.0.1 localhost.localdomain localhost example.com www.example.com
        127.0.0.1 localhost.localdomain localhost example2.com www.example2.com
        127.0.0.1 localhost.localdomain localhost example3.com www.example3.com

    I can only access one of them from the browser (I have lynx installed on the server for testing purposes), so I guess I haven't set them up properly :) How should I proceed to get a secure and proper setup? I also use MySQL, and I think this tutorial will be enough to set up SSH securely. Please help me understand Apache configuration better, since I'm new to running my own server (I've only used XAMPP before), and please advise on how I should set up a firewall as well :D
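
    The usual culprit when only one vhost resolves is that Apache is not matching by name. A sketch for the Apache 2.2 that ships with Ubuntu 11.04 (the domains are the placeholders from above): match on *:80, give every vhost its own ServerName, and declare NameVirtualHost once.

        # /etc/apache2/ports.conf - needed once for name-based matching on 2.2
        NameVirtualHost *:80

        # /etc/apache2/sites-available/example
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/sites/example
            CustomLog /var/log/apache2/www.example.com-access.log combined
        </VirtualHost>

    Then enable and reload:

        sudo a2ensite example
        sudo service apache2 reload

    Writing <VirtualHost example.com> as in the snippet above makes Apache resolve the name to an IP at startup and match on that address instead of the Host header, which is typically why only the first site answers.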

  • Do any CDN services offer multiple URLs (or aliases) for your files?

    - by Jakobud
    Let's say a company has multiple commercial web properties that happen to use a lot of the same images on each site. For SEO reasons, the sites must not appear to be related to each other in any way. This means the sites can't all link to the same image, even though they all use the same one. Therefore, an image is uploaded to each site and served from each site separately. In order to improve maintainability and latency, let's say the company wants to use a CDN service. What I'm wondering is: if you upload a file, like an image, to a CDN, is there basically one single URL that you access that image at? Or do some (or all) CDN services offer alias URLs, so that you can access the same resource from multiple URLs?

    Example of the undesirable situation - both sites link to the same file URL:

        Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/>
        Site XYZ links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/>

    Example of the DESIRABLE situation - both sites link to the same file via different URLs:

        Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/>
        Site XYZ links to <img src="http://123.cdnservice.com/some-alias-path/myimage.jpg"/>

    So in the end there is only one single file, myimage.jpg, on the CDN server, but it is accessible from multiple URLs. Is this possible with CDN services? I know this would make browsers cache the same image twice, but at least it would be better for maintainability: only one file would ever have to be uploaded.
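
    Many CDNs approach this with custom-domain aliases rather than multiple paths: each site points a hostname of its own at the same CDN endpoint, so one stored object is reachable under per-site URLs. A DNS sketch (the hostnames are hypothetical):

        ; each site's "own" CDN hostname aliases the same endpoint
        cdn.siteabc.com.  IN CNAME  123.cdnservice.com.
        cdn.sitexyz.com.  IN CNAME  123.cdnservice.com.

        ; the same object is then served as
        ;   http://cdn.siteabc.com/some-path/myimage.jpg
        ;   http://cdn.sitexyz.com/some-path/myimage.jpg

    Note that for the stated SEO goal the alias hostnames still resolve to the same CDN, so the relationship is hidden from casual readers but not from anyone comparing DNS records.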

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works in all browsers, and offers a lot of great new features - including built-in VM, web site, storage, and cloud service monitoring support.

    The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and a Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure.

    The new Windows Azure Portal provides a rich set of management features for Virtual Machines - including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these - which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup.

    We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally; no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds.

    The new Windows Azure Portal provides built-in administration support for web sites - including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment - as well as the ability to incrementally update the database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow - including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine, and you can elastically scale the amount of resources your sites use - allowing you to increase your reserved instance capacity as your traffic scales.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis - which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are now referring to it as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. The cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol - allowing you to point code written against Memcached at it (no code changes required).

    The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored on it or another role. The big benefit of the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might otherwise be unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in memory very effectively - and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure in particular has a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems - allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements shipping in either preview or final form today - there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm PDT on June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

  • Google Analytics - drop in traffic

    - by Andy
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem we are getting - and sorry for being so general here - is a sharp decline in traffic as reported by Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong:

    a) We are using the same Analytics accounts going from old to new site. Is this a bad idea?

    b) The actual Analytics code is integrated into the pages using a server-side include. Is this a bad idea?

    c) We structure our sites differently from the old ones. The old sites would have pretty much all the web pages in the root directory, and hyperlinks would link to the page files, e.g.:

        <a href="somepage.aspx">Link</a>

    Our new sites have a directory structure that pretty much reflects the navigation structure, and hyperlinks link to the page's directory instead of the actual page, e.g.:

        <a href="/new-items/shoes/">New shoes</a>

    Is this a bad idea? I'm really searching for a needle in a haystack here. I would appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry that this is such a general question. Thanks in advance.

  • What is your most preferred method of site pagination?

    - by John Smith
    There seem to be quite a few implementations of this feature. Some sites like Stack Exchange have it laid out like this:

        [1][2][3][4][5] ... [954][Next]

    Other sites, like game forums, may have something like this:

        [1][2][3] ... [10] ... [50] ... [500] ... [954][Next]

    Some sites, like webcomics (XKCD comes to mind), have it laid out like this:

        [Last][Prev][Random][Next][First]

    Reddit has a very simple pagination with only:

        [Prev][Next]

    Sites like Stack Exchange and Google also allow you to change how many results you want per page. Personally, I have never used this feature. Is it even worth including, or does it just further confuse the design with needless options? Personally, I have only ever seen the need for the webcomic style (without the random). If I need to go to a specific page (which is very, very rare) then I can just edit the address bar. Is it good design to make something more complex for rare occasions where it might save the user some time? Is having to edit the address bar to navigate the site effectively in some circumstances bad design?

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs are coming mostly from non-existent sites, or are being generated directly by our website. Here are some examples:

        oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3

    Our site is a popular Spanish adult site, yet we don't use the keywords mentioned in this URL. Apparently this link comes from our site. Some more examples:

        oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4

    Once again: not our queries, not our URLs.

        oursite/tag/tetonas

    We think it might be another site that has a policy of extremely bad SEO based on other sites' branding and keyword usage:

        thirdsite/buscador/tetonas-oursite

    The question is: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link was added to the other site? What should we do with these errors - 301? 410 Gone? I have read all the similar Q&As here, but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content which Google suddenly decided to recrawl? Maybe a third party's bad SEO policy? Maybe all of them?
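
    For the junk paths that resolve on your own site, one option (an Apache sketch using mod_alias; the patterns are assumptions based on the examples above, so test them before deploying) is to answer with 410 Gone so crawlers drop the URLs instead of retrying them:

        # query-stuffed paths that begin with /&q=
        RedirectMatch gone ^/&q=

        # runs of three or more repeated /page/N segments
        RedirectMatch gone (/page/\d+){3,}

    A 410 tells Googlebot the URL is intentionally gone, which tends to clear Webmaster Tools errors faster than a plain 404.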
