Search Results

Search found 47996 results on 1920 pages for 'google apps script'.


  • Why are new pages not being indexed while old pages stay in the index?

    - by ZakGottlieb
    I currently have a site that was recently restructured, causing much of its content to be reposted and creating new URLs for each page. To avoid duplicates, all of the existing pages were added to the robots file. That said, it has now been over a week - I know Google has recrawled the site - and when I search for term X, it is still the old page that is ranking, with the new one nowhere to be seen. I'm assuming it's a cached version, but why are so many of the old pages still appearing in the index? Furthermore, all "tags" pages (it's a Q&A site, like this one) were also added to the robots file a few months ago, yet I think they are all still appearing in the index. Does anyone have any ideas about why this is happening, and how I can get my new pages indexed?
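
    A likely culprit worth noting: disallowing a URL in robots.txt stops Google from recrawling it, but it does not remove an already-indexed page - Google simply keeps the last copy it saw. The usual fix for a restructure is to 301-redirect each old URL to its new counterpart so the ranking transfers. A minimal .htaccess sketch, assuming Apache and illustrative paths:

        # redirect one restructured page to its new URL (mod_alias)
        Redirect 301 /old-page /new-section/new-page

        # or map a whole old section onto a new one (mod_rewrite)
        RewriteEngine On
        RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]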

    Read the article

  • A drop in SERP after following webmaster guidelines [on hold]

    - by digiwig
    So here's a puzzle for all you SEO gurus out there. I recently launched my own site. I had target keywords which were ranking very well for about a month, within the top five and even appearing in first place. In an attempt to maintain good positioning, I followed the guidelines:
    - I added robots.txt and an XML sitemap
    - I redirected non-www to www
    - I redirected index.php to the root domain
    - I added htaccess 301 redirects for old pages
    - I added rich snippets
    - I created a Google+ account and verified my picture so it appears
    - I went through each of the webmaster issues with duplicate titles and meta descriptions and improved header-tag document outlines
    - I even created a few more blog posts to keep the content fresh and moving
    So now my website appears on page 2 for my target keywords - and all after I followed the guidelines. What is happening? I see competitors with stagnant content superglued to position 1.
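
    For reference, the non-www to www step mentioned above usually comes down to a two-line mod_rewrite rule. A minimal .htaccess sketch, assuming Apache and a hypothetical domain example.com:

        RewriteEngine On
        # send requests for the bare domain to www, preserving the path and returning a 301
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]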

    Read the article

  • Google enables partial response and partial update in its Data Protocol, to make updating easier

    Google enables partial response and partial update in its Data Protocol, to make updating its APIs easier. To boost the speed of its APIs, the Google Data Protocol (which lets developers write applications tied to the data held in Google products) gained two new experimental features this week: partial response and partial update. Together, these two features "can significantly reduce the network, memory, and CPU resources" needed to work with Google's APIs. To explain the role of partial response, the Google Data Protocol team gives ...
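
    As a rough illustration of what partial response looks like in practice: the client names the fields it wants in a query parameter and the server strips everything else from the feed. A hedged sketch, assuming the experimental "fields" parameter described above and an illustrative feed URL:

        # ask the feed for only each entry's title and link instead of the full payload
        curl "https://www.google.com/calendar/feeds/default/private/full?fields=entry(title,link)"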

    Read the article

  • How can I track visits to ALL of the subpages of my website COMBINED TOGETHER?

    - by realcheesypizza
    Right now I'm using StatCounter and Google Analytics. They are great, but my counts are currently separated. E.g.: website.com = 1000 visits a day, website.com/about = 50 visits a day, website.com/privacy = 10 visits a day, etc. How can I get a combined count of all of my sub-pages (main page + about page + about 100 other sub-pages)? I can of course manually add them all together, but that's time consuming because there are many pages. I tried placing a separate tracking code in a PHP include that sits in each of the sub-pages, but it doesn't seem to be working. It seems to require a single URL to create it, and it then only counts the visits from that one URL (e.g., website.com) rather than ALL of them. Any help would be appreciated. Hopefully I'm just missing something very obvious. Thank you!
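
    One thing worth ruling out first: if the same Google Analytics property ID is embedded on every page, Analytics already aggregates all pageviews for the whole site, so separate per-page counts suggest the pages may not all carry the same tracker. A quick bash sketch to check (paths are illustrative):

        # print the first UA- tracking ID found on each page; they should all match
        for path in / /about /privacy; do
          echo -n "website.com$path: "
          curl -s "http://website.com$path" | grep -oE "UA-[0-9]+-[0-9]+" | head -1
        done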

    Read the article

  • How to Automate Checking for Stolen Content?

    - by Hisoka
    So I know about tools like Copyscape and Google Alerts - great tools, but it's quite tedious for me to copy and paste a URL or phrase for every one of the pages in my sites. Is there any tool out there that monitors your website and emails or alerts you whenever someone has stolen content from your site? The only service I know of is CopySentry, and honestly it's too expensive for me, since I have thousands of pages I want to monitor... Does anyone else have this problem, or is it just me? Thanks for any help.
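
    Short of a paid monitoring service, one low-tech option is to script the tedious part: pull a distinctive sentence from each page and turn it into an exact-phrase search URL to check (or to seed a Google Alert). A rough bash sketch, assuming a pages.txt file listing your URLs - a starting point, not a product:

        # for each page, grab one longish sentence and build a quoted-phrase search URL
        while read url; do
          phrase=$(curl -s "$url" | sed 's/<[^>]*>//g' | grep -oE '[A-Z][^.<>]{40,80}\.' | head -1)
          echo "$url -> https://www.google.com/search?q=%22$(echo "$phrase" | sed 's/ /+/g')%22"
        done < pages.txt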

    Read the article

  • How to improve a single-page site's search results [closed]

    - by Trisism
    Possible Duplicate: How to SEO a Single-Page website. I created an online CV of mine a couple of weeks ago and it has had quite a few visits. Now I want to improve the chance that it will appear in Google search results; however, my web CV is a one-page site and it contains only internal links (those with hash #), so I can't really submit a sitemap. I could change the internal links to normal links to be processed server-side, but there's no point in doing so. I'm very new to web SEO, so I would really appreciate it if somebody could show me what I should do for a single-page site with internal links to be effectively indexed by crawlers.
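
    For what it's worth, a sitemap can be submitted for a single URL - it is just an XML list of indexable pages, and a list of one is valid. A minimal sitemap.xml sketch, with a hypothetical domain:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/</loc>
            <changefreq>monthly</changefreq>
          </url>
        </urlset>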

    Read the article

  • Analytics - Total events divided by number of unique pages?

    - by GeekyAndUnique
    I am using Google Analytics events to track keywords on my articles - not necessarily the best system, I know, but there are too many variables for me to easily change it right now - and I would like to be able to see how popular each keyword is by dividing the number of page views with a keyword by the number of unique pages. Is there a way - and what is the best way - of doing this? EDIT FOR CLARITY: I currently have a system set up where every time somebody loads an article, an event is fired for each of the tags/keywords used, with the keyword as the label. I can currently view the count for each of the keywords by looking at the total events for each label; however, I would like to be able to see which keywords are the most popular by dividing the number of times the event has been fired by the number of different pages it has been fired from.
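
    Analytics itself won't divide one report's metric by another's, but the arithmetic is easy once both numbers are exported. A small bash sketch, assuming two hypothetical CSV exports - events.csv (label,total_events) and pages.csv (label,unique_pages):

        # join the two exports on the label column and print events-per-page
        join -t, <(sort events.csv) <(sort pages.csv) \
          | awk -F, '{ printf "%s %.2f\n", $1, $2 / $3 }'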

    Read the article

  • Which MIME types to compress? And what if I omit the `type` attribute from the HTML?

    - by rockyraw
    Per my request, my web host has turned mod_deflate on. In my cPanel I now have an "Optimize Website" button. Inside that menu I can choose either "Compress all content" or "Compress the specified MIME types", with the following default MIME types: "text/html text/plain text/xml". Which option should I choose, and why? If I choose option 2, which types should I add (is there a recommended list with the exact way they should be written)? Following Google's recommendations, I have omitted the type="text/css" attributes from all CSS references, as well as the type="text/javascript" attributes from all script references. Would this hinder the "gzipping" process?
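
    For reference, cPanel's MIME-type option ultimately turns into an Apache directive along these lines; the types most commonly added beyond the defaults are CSS, JavaScript, and JSON (already-compressed formats such as images gain nothing). A hedged sketch of the equivalent Apache config:

        # compress the usual text-based responses; these match served MIME types
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
        AddOutputFilterByType DEFLATE application/javascript application/x-javascript application/json

    Note that mod_deflate matches the Content-Type header the server sends, not the type attribute in your markup, so omitting type="text/css" from the HTML should not affect gzipping.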

    Read the article

  • Need to add 30K new pages to a 10K page website - troubles ahead? (SEO)

    - by Jurga
    We have a situation with a website where we plan to add a huge number of new pages. The domain is over 10 years old, with approximately 10 thousand indexed pages, and the planned addition is approx. 30K new pages. Any idea how we should go about it? Should we schedule a gradual release of the data? Have you heard of any industry standards as to how many new pages per day / week / month should be added in order to appear natural and not get in trouble with Google? E.g., should we plan to add 5K every two weeks?
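
    If the release is staged, one practical way to signal the batches to crawlers is a sitemap index with one sitemap per batch, each carrying the lastmod of its release date. A minimal sketch, with hypothetical filenames and dates:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/sitemap-batch1.xml</loc>
            <lastmod>2013-01-01</lastmod>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemap-batch2.xml</loc>
            <lastmod>2013-01-15</lastmod>
          </sitemap>
        </sitemapindex>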

    Read the article

  • Google may leave China as early as April 10 and announce it on Monday, according to a source close to the matter

    Update of 2010-03-19 by Katleen. Google may leave China as early as April 10 and announce it on Monday, according to a source close to the matter. A new anonymous source has just raised Google's possible departure, this time with a precise date for the end of operations. According to an employee of the Mountain View firm, "Google would leave China on April 10, but Google has not confirmed this information for the moment". The account was gathered by CBN (China Business News), and points to an official statement from Google as early as Monday explaining how the departure will be organized. To be continued. Update ...

    Read the article

  • Google Docs: end of document export to the "old" Office formats .doc, .xls, and .ppt, but they will still be supported

    Google Docs for Business: end of document export to the "old" Office formats .doc, .xls, and .ppt, but their support will still be ensured going forward. No new features this week for Google Apps for Business, but an announcement that will interest users of older versions of Microsoft Office (prior to 2007). Google's hosted suite will no longer allow documents to be exported to the "old" (sic) formats .doc, .xls, and .ppt, a change that takes effect on October 1. Google does make clear, however, that Google Apps for Business will continue to support these formats and that it will still be possible to upload this type of doc...

    Read the article

  • Cannot use await in Portable Class Library for Win 8 and Win Phone 8

    - by Harry Len
    I'm attempting to create a Portable Class Library in Visual Studio 2012 to be used by a Windows 8 Store app and a Windows Phone 8 app. I'm getting the following error: 'await' requires that the type 'Windows.Foundation.IAsyncOperation<StorageFolder>' have a suitable GetAwaiter method. Are you missing a using directive for 'System'? At this line of code: StorageFolder guidesInstallFolder = await Package.Current.InstalledLocation.GetFolderAsync(guidesFolder); My Portable Class Library is targeted at .NET Framework 4.5, Windows Phone 8, and .NET for Windows Store apps. I don't get this error for this line of code in a pure Windows Phone 8 project, and I don't get it in a Windows Store app either, so I don't understand why it won't work in my PCL. GetAwaiter is an extension method in the class WindowsRuntimeSystemExtensions, which is in System.Runtime.WindowsRuntime.dll. Using the Object Browser I can see this DLL is available in the .NET for Windows Store apps component set and in the Windows Phone 8 component set, but not in the .NET Portable Subset. I just don't understand why it wouldn't be in the Portable Subset if it's available in both of my targeted platforms.

    Read the article

  • How many custom tabs can I add to a Facebook page?

    - by Maxi Ferreira
    I'm building a web application to create custom tabs and add them to the user's Facebook fan pages. I know how the process of "installing" FB apps into FB pages so they show up as Page Tabs works, but the problem is the client wants to allow the user to create unlimited Page Tabs for a single FB Page. So I have basically two questions. 1 - Can I reuse a single FB App so it is included in the same page several times? If so, is there a way to know the "id" of that Page Tab? If I have my FB App look for the tab content at http://www.mywebapp.com/tab/, I know I get a signed_request with the App ID and the Page ID, but if that same App is installed several times on the same Page, I don't know which Tab the user has clicked on. I know it's a little bit messy, and I don't think there's a way to do this. So my next question is probably more relevant. 2 - Is there a limit on how many Tabs I can add to a single Facebook page? That way, if there's a limit of, say, 12 Tabs, I can create 12 FB Tab Apps, store the IDs, and then I know which Tab of which Page the user is currently viewing. Thanks in advance! Maxi

    Read the article

  • OpenVPN GUI does not run: error opening registry for reading HKLM\SOFTWARE\OpenVPN

    - by Coder
    I'm trying to run OpenVPN as a portable application. To that effect I have installed it on a Windows 7 machine, copied the files to another Windows 7 machine, and manually restored the registry settings using a .reg file. Whenever I try to run the OpenVPN GUI I get the following error: "error opening registry for reading HKLM\SOFTWARE\OpenVPN". I have verified that the key mentioned is indeed in the registry at the correct location with the correct values, yet the GUI still complains. I have tried running the GUI as an administrator (I'm logged in as an administrator) and also the compatibility modes, but none helped. I have also tried OpenVPN Portable ("OpenVPNPortable_1.6.6.paf.exe") and it has the same problem. Can anybody help me with this issue?
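
    One thing to check when a key "exists" but a 32-bit program cannot read it: on 64-bit Windows 7, registry redirection sends 32-bit reads of HKLM\SOFTWARE to HKLM\SOFTWARE\Wow6432Node, so a .reg file imported by hand can easily land in the wrong view. A hedged .reg sketch (value names and paths are illustrative - export the key from the working machine rather than trusting these):

        Windows Registry Editor Version 5.00

        ; a 32-bit OpenVPN GUI on 64-bit Windows reads from the Wow6432Node view
        [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\OpenVPN]
        @="C:\\Program Files (x86)\\OpenVPN"
        "config_dir"="C:\\Program Files (x86)\\OpenVPN\\config"
        "exe_path"="C:\\Program Files (x86)\\OpenVPN\\bin\\openvpn.exe"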

    Read the article

  • `# probe: true` in /etc/rc.d/init.d/* files on a RedHat system

    - by Chen Levy
    Some files (e.g. nfs, nfslock, bind) in my /etc/rc.d/init.d/ directory have in their comment header a line such as: # probe: true. I found that those particular scripts support the probe verb, i.e.: service nfs probe. But this is because the scripts in question contain code that handles the probe verb. I find no mention of the # probe: true notation in the chkconfig man page, nor in any related man pages. Googling for it also didn't help. Is there a real significance to that line, or is it pure documentation?
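
    For context, Red Hat init scripts already carry machine-read comment headers, so a probe: line would fit the same convention; whether any tool actually parses it is exactly the open question. A sketch of a typical header, combining the well-documented chkconfig lines with the one being asked about:

        #!/bin/bash
        #
        # chkconfig: 345 20 80    (runlevels to start in, start priority, stop priority)
        # description: example service
        # probe: true             (the line in question; not documented in chkconfig(8))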

    Read the article

  • NSClient++: external script with optional arguments

    - by syneticon-dj
    I am trying to define an external script which would take optional arguments in NSClient++ 0.4.1 on Windows. Following the nsclient-full.ini example code I have defined: mycheck=cmd /C echo C:\mydir\myscript.ps1 %ARGS% | powershell.exe -command - which simply yields the string %ARGS% passed as the only argument to myscript.ps1, no matter what I specify in my call through NRPE (using Nagios' check_nrpe, if that matters). I then tried to rewrite the definition as: mycheck=cmd /C echo C:\mydir\myscript.ps1 $ARG1$ $ARG2$ | powershell.exe -command - (myscript.ps1 would take up to two arguments), which does help a bit. At least, if two arguments are provided, I can fetch them via the args[] array. The trouble starts when the call has fewer than two arguments - in this case the literal strings $ARG1$ and $ARG2$ are passed through as arguments. Handling this case in the code of myscript.ps1 makes the whole argument-processing routine ugly at best. Is there a sane way of defining optional parameters for an external script which would not pass NSClient's variable names when no parameter has been specified?
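
    One blunt workaround, assuming the definition syntax above, is to stop overloading a single command and expose one definition per argument count, so no placeholder is ever left unsubstituted. A hedged sketch of the ini entries (names are illustrative):

        ; one command per arity - the caller picks the matching one
        mycheck_0=cmd /C echo C:\mydir\myscript.ps1 | powershell.exe -command -
        mycheck_1=cmd /C echo C:\mydir\myscript.ps1 $ARG1$ | powershell.exe -command -
        mycheck_2=cmd /C echo C:\mydir\myscript.ps1 $ARG1$ $ARG2$ | powershell.exe -command -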

    Read the article

  • GhettoVCB.sh log is wrong

    - by Michael
        2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
        2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2010-02-25 16:03:02 -- info: ============================== ghettoVCB LOG START ==============================
        2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nfs_storage_backup/vm1
        2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
        2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
        2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
        2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
        2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2010-02-25 16:03:13 -- info: Initiate backup for VM1
        2010-02-25 16:03:13 -- info: Initiate backup for VM1
        2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
        2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
        Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...
        2010-02-25 16:04:16 -- info: Removing snapshot from VM1 ...
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...

    How can I fix this issue? The backup itself works, but the log repeats its entries as if two backups ran at exactly the same time.

    Read the article

  • Adding my podcast to my Facebook fan page

    - by Donald Burr
    I've set up a Facebook fan page for my podcast, Otaku no Podcast. I'd love to add a Flash-based player to the fan page that can play the latest episode of my podcast - or, at the very least, a link to the latest episode on my website (which has its own Flash-based audio player). My podcast's website of course exports a valid RSS feed. I've tried several different podcast player / RSS feed display applications, including Podcast Pickle (which has a Facebook app), but none of them appear to work and/or be maintained any more. Podcast Pickle used to work for me a long time ago, but is no longer working for me. Any ideas?

    Read the article

  • Running nph-script.cgi keeps outputting server details at the end

    - by wgewweg
    I am running an nph-script.cgi on my server. The server keeps adding

        HTTP/1.1 200 OK
        Date: Thu, 05 Nov 2009 02:28:53 GMT
        Server: Apache/2.2.8 (Ubuntu) PHP/5.2.8-1hardy~ppa1 with Suhosin-Patch mod_perl/2.0.3 Perl/v5.8.8
        Content-Length: 0
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
        X-Pad: avoid browser bug

    at the bottom of each page loaded via the .cgi script. Why is this the case? How do I remove this annoying block that is appended to all pages?
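
    A possible angle: a script whose name starts with nph- ("non-parsed headers") bypasses Apache's header processing and must emit the entire HTTP response itself, beginning with the status line; if the script's output does not form a complete response, a server-generated block can end up appended. A minimal well-formed nph- script sketch in bash, for comparison:

        #!/bin/bash
        # an nph- script writes the full response itself: status line, headers, blank line, body
        printf "HTTP/1.1 200 OK\r\n"
        printf "Content-Type: text/plain\r\n"
        printf "\r\n"
        echo "Hello from nph-test.cgi"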

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with the general public, or securely with one or more specific people (notification via email, optionally with a token that gives them x period of time to download).
    - Allows anyone in the general public to share files with my users, perhaps by invitation.
    - Is user-friendly enough that my users can work with it without having to bug me as the admin.
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server).
    - Is a web-based solution; using some kind of secure comms channel would be good too, e.g. ssh.
    Files to share could be over 1 GB. I found the question below, but WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a "product", so given that I specialise in this field I thought my expertise in this area might be of use to him. I don't see how writing his own service is going to be easy for him to maintain going forward (a large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • CentOS Backup BASH Script

    - by user1062058
    I just wrote this script for backing up everything into a tar.gz file. Does it look okay? How can I get the tar file to transfer itself over to another server after executing - FTP from itself? I'm going to put this script into a weekly cron.

        #!/bin/bash
        rm -f ~/backup.tar.gz        # remove the old backup (-f: no error if it doesn't exist)
        BACKUP_DIRS=$HOME            # $HOME expands to /home/<user>; tar recurses into child dirs
        # write the archive to a fixed path, excluding itself so tar doesn't back up its own output
        tar --exclude="$HOME/backup.tar.gz" -czvf ~/backup.tar.gz "$BACKUP_DIRS"
        # extract later with: tar -xzvf backup.tar.gz

    Thanks.
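
    On the transfer question: rather than FTP (which sends credentials in the clear), scp over SSH is the usual fit for a cron job, with key-based authentication so no password prompt blocks the run. A hedged line to append to the script, using a hypothetical remote host and path:

        # copy the finished archive to the backup server (assumes an SSH key is set up)
        scp ~/backup.tar.gz backupuser@backup.example.com:/srv/backups/$(hostname)-$(date +%F).tar.gz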

    Read the article

  • Convert bat to sh

    - by Cris
    I am totally new to scripting on Linux, so I want to port some simple Windows bat files to Ubuntu. The first file is easy - setenv.bat:

        set ANT_HOME=c:\ant\apache-ant-1.7.1
        set JAVA_HOME=c:\java

    On Linux I did this and it seems OK - setenv.sh:

        #!/bin/bash
        # exported so that programs launched from this shell (such as ant) see the variables
        export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.24/
        export ANT_HOME=/usr/share/ant
        echo $JAVA_HOME
        echo $ANT_HOME

    But now I want to port this bat file - startserver.bat:

        call ../config/setenv
        call %ANT_HOME%/bin/ant -f ../config/common.xml start_db
        call %ANT_HOME%/bin/ant -f ../config/common.xml start_server
        pause

    I have no clue how to do the equivalent of "call ../config/setenv" on Linux. Thank you for any help or direction given.
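
    A sketch of the port, assuming the setenv.sh above: the bat file's "call" maps to sourcing the script into the current shell, %VAR% becomes $VAR, and "pause" becomes a read prompt - startserver.sh:

        #!/bin/bash
        source ../config/setenv.sh                                # "call ../config/setenv"
        "$ANT_HOME"/bin/ant -f ../config/common.xml start_db
        "$ANT_HOME"/bin/ant -f ../config/common.xml start_server
        read -p "Press Enter to continue..."                      # "pause"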

    Read the article

  • How to get my external IP address (over NAT) from the Windows command-line?

    - by Diogo Rocha
    The Windows "ipconfig" command can only show me the parameters from the Ethernet interfaces from my machine (even with the "ipconfig /all" argument). It can show detailed information about the interface, but it will never show me my external IP address over a NAT network. However, there are several websites, such as "What is my IP address" that can get and show my external IP address. So I'm wondering, is possible to get this value externally? Should I expect that there is some way to get this information from a command line at my local machine... I need to get this value to log on an application that I'm doing with VBScript. There is some way to do this, from a "cmd" on Windows?

    Read the article
