Search Results

Search found 28303 results on 1133 pages for 'multi site'.


  • Activating active PuTTY window in MTPuTTY with AutoHotkey script doesn't work

    - by Piotr Dobrogost
    I'm using Multi-Tabbed PuTTY, and I wrote an AutoHotkey script to rerun the last command. However, the active PuTTY window (inside MTPuTTY) does not get activated, so sending the keys has no effect. Ctrl+` is the MTPuTTY hotkey to switch between the application and the active PuTTY window. How can I fix this?

        WinWait, MTPuTTY (Multi-Tabbed PuTTY),
        IfWinNotActive, MTPuTTY (Multi-Tabbed PuTTY), , WinActivate, MTPuTTY (Multi-Tabbed PuTTY),
        WinWaitActive, MTPuTTY (Multi-Tabbed PuTTY),
        Send, {CTRLDOWN}`{CTRLUP}
        Send, {UP}{ENTER}
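    A possible workaround, sketched below: MTPuTTY hosts each PuTTY session as an embedded child window, so activating the outer MTPuTTY window does not necessarily move keyboard focus to the terminal. AutoHotkey's ControlSend can target the child control directly and sidestep focus entirely. The control name PuTTY1 is an assumption (the first child carrying PuTTY's native window class); verify the real ClassNN with AutoHotkey's Window Spy.

        ; Hedged sketch: send the keys straight to the embedded PuTTY control.
        ; "PuTTY1" is an assumed ClassNN; check it with Window Spy first.
        WinWait, MTPuTTY (Multi-Tabbed PuTTY),
        ControlSend, PuTTY1, {UP}{ENTER}, MTPuTTY (Multi-Tabbed PuTTY)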

    Read the article

  • "Rien n'est sécurisé sur le site Web de l'Hadopi" déclare Eric Walter, lors d'un débat sur les cyberguerriers

    "Rien n'est sécurisé sur le site Web de l'Hadopi" déclare Eric Walter, le secrétaire général était l'invité d'un débat sur les cyberguerriers Ce 8 février 2011 avait lieu dans un endroit huppé de la capitale un débat organisé par le Cercle, un réseau de 500 professionnels de la sécurité de l'information. Autour de la table, étaient réunis Olivier Laurelli, aka Bluetouff, blogueur spécialisé dans les problématiques liées à la sécurité et aux libertés individuelles ; Pierre Zanger, psychiatre et psychanalyste ; et Eric Walter, secrétaire général de l'Hadopi. Ce dernier, habituellement très contesté dans ce genre d'exercices, fut accueilli par ces mots de la part du "Monsieur Loyal" de la soirée :"Je n'ai pas ...

    Read the article

  • SharePoint.DesignFactory.ContentFiles: building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions often have a single instance, the publishing site (possibly with subsites), which in most cases needs to go through DTAP. If you dissect a publishing site, in most cases you have the following findings:

    - The publishing site spans a site collection.
    - The branding of the site is specified in the root site, because:
      - Master pages live in the root site (/_catalogs/masterpage)
      - Page layouts live in the root site (/_catalogs/masterpage)
      - The style library lives in the root site (/Style Library) and contains images, css, javascript and xslt transformations for your CQWPs, ...
      - Preconfigured web parts live in the root site (/_catalogs/wp)
    - The root site and subsites contain a document library called Pages (or your language-specific version of it) containing publishing pages that use the page layouts and master pages.
    - The site collection contains content types, fields and lists.

    When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the list above, we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts in the list, namely content types, fields and lists, can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can't be uploaded to the SharePoint content database. The good thing is that these are the artifacts that don't change that much in the development of a SharePoint Publishing solution. There are, however, multiple ways to create these artifacts:

    1. Use paper script: create them manually in each of the environments based on documentation.
    2. Automate the creation of the artifacts using (PowerShell) script.
    3. Develop a WSP package to create these artifacts.

    I'm not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kinds of XML files, and it is not allowed to modify these artifacts when they are in use. I know... SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get IDs, and these IDs must be used by the metadata on, for example, page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these IDs using PowerShell. For example, consider the following metadata definition for the page layout contactpage-wcm.aspx.properties.ps1:

        # This script must return a hashtable @{ name=value; ... } of field name-value pairs
        # for the content file that this script applies to.
        # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any).
        # Note that fields must exist; they can be updated but not created or deleted.
        # This script is called right after the file is deployed to SharePoint.
        # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint,
        # e.g. to calculate properties and values at deployment time.

        param([string]$SourcePath, [string]$RelativeUrl, $Context)

        @{
            "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
            "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
            "PublishingHidden" = "1";
            "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
        }

    The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can resolve the required information at deploy time from the server we are deploying to. I personally prefer the second option, automating creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site is shown in the original post (the screenshot does not reproduce here); note that this project uses DualLayout. So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!
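    As a rough illustration of that second option, here is a minimal sketch using the SharePoint 2010 Management Shell to export one content type's schema so it can be re-created in another environment. The site URL and content type name are placeholders, and error handling is omitted; this is a starting point, not the tooling's own mechanism.

        # Hedged sketch (SharePoint 2010 Management Shell assumed):
        # dump the schema XML of a single content type to a file.
        $web = Get-SPWeb "http://devserver/sites/publishing"   # placeholder URL
        $ct  = $web.ContentTypes["GeneralPage"]                # placeholder name
        $ct.SchemaXml | Out-File -FilePath "GeneralPage.xml"
        $web.Dispose()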

    Read the article

  • How can I get my dynamic site search results content indexed by Google?

    - by Kris
    I have a site that is simply a search box for querying a cloud-hosted database of .tiff images, so all of my content can only be reached by entering a search term. For example, you're on the home page www.example.com, you type "search" into the box and hit submit, and it takes you to www.example.com/?q=search, a page of all my .tiff images with "search" in the description. How can I get a page like www.example.com/?q=search indexed WITHOUT making a humongous list of search terms that people might type in? I know about mod_rewrite, but it seems that for that you need to know ahead of time which URLs you'll need to convert, which I don't. All of these pages will be dynamically generated by users typing into the search field. Please help!

    Read the article

  • How can we relieve IT professionals of beginners' questions? Google launches a site for computer novices

    Computing for dummies, according to Google: what other solutions are there to relieve developers of beginners' questions? Google has just launched a site aimed at adults who are new to computing: those who don't know Facebook, don't know how to tweet, and haven't yet figured out that it is possible to attach files (photos, videos) to their emails. On the domain Teachparentstech.org ("teach parents tech"), multimedia tutorials show how to perform the simplest tasks (changing your desktop wallpaper, creating a new folder, etc.) as well as more "complicated" ones: finding a postal address on the Web, looking up a map, choosing a secure password, ...

    Read the article

  • Ajax site not being crawled - have escaped fragment, what's wrong? [closed]

    - by Harry
    My site is anonkun.com. You can see that it's "Ajax" and doesn't load much HTML. Here are some example pages:

        http://anonkun.com
        http://anonkun.com/?_escaped_fragment_=
        http://anonkun.com/stories/Dev-kun---FAQ/6ef881f8-cf48-4f87-a688-c585f23809c5
        http://anonkun.com/stories/Dev-kun---FAQ/6ef881f8-cf48-4f87-a688-c585f23809c5?_escaped_fragment_=

    As you can see, the original pages have the meta fragment tag, and the escaped-fragment versions load static HTML. So why am I not getting crawled? (Screenshot: http://cl.ly/image/2n30212q0K2W) Webmaster Tools shows the pages being treated as duplicates, and Fetch as Google shows me the Ajax version of the source, not the static escaped-fragment version. What's wrong, and how do I make this work? Thanks.

    Read the article

  • Why are the tags on my WordPress site being indexed instead of the posts?

    - by Bernard
    I can't figure out why my tags are being indexed by Google instead of my actual posts. In Google, my posts show up as mysite.com/tags/post, and of course I want them to look like mysite.com/category/actualpost. Any ideas what could be wrong? My domain is 3 years old, and I just started a new focus for an existing site. There is no duplicate content, I have a sitemap submitted to Webmaster Tools, and I have a robots.txt; everything I need. This is the first time something like this has happened to me. Let me know if anyone has any ideas.

    Read the article

  • Cannot install GNOME extensions from the GNOME site; no switch appearing in Firefox or Chrome

    - by Andrew James Adams
    I have installed Ubuntu 12.04 with GNOME 3 on my system. I am attempting to install the User Themes extension from extensions.gnome.org, but I can't see this "switch" everyone is talking about. I've tried both the Chromium and Firefox browsers on the site. I found a similar question here at Ask Ubuntu and followed the directions, but I got a warning about gnome-common dependencies. I installed gnome-extensions-common without an error, but I still cannot install user-themes, and I can't find the mysterious "switch". Any ideas? Thanks in advance.

    Read the article

  • What advice, methods, or practices help you get the most out of a day on-site at a customer?

    - by Stephane
    We just deployed a large piece of software that changes many aspects of the users' work day, including a lot about how they interact with each other. The developers on the team are taking turns spending one day at the customer's site to better understand what a typical day looks like for our users and the learning process they go through. In this context, how would you approach that day? What kind of questions would you ask, and how would you observe the users? Are there established practices for this? Also, what wouldn't you do? Have you had any bad experiences doing this?

    Read the article

  • What good social networking site solutions are there? [closed]

    - by ZetsubouWebmaster
    What good, free social networking site solutions are there? I've tried many options, but most of them are either too complicated, too simple, or just do not work. I tried Dolphin, DZOIC Handshakes, Elgg, Oxwall, SocialEngine, and some plugins for WordPress and other CMSes. I don't need much: groups, chats, forums, profiles, private messages, photos, pages, comments, search, and statistics, most of which are included in pretty much every CMS out there, but not all. So, what good solutions are there? I don't mind paying some money (I guess no more than $200), but I'd prefer a free open-source engine. Of course, it should be PHP+MySQL based.

    Read the article

  • How do I select categories for a user-generated content site?

    - by Frederik Creemers
    On the site I'm building, users can create tutorials. I want users to be able to create tutorials on as many subjects as possible, but still have some preset categories. What's the best way to select these categories? The reason I don't just let users add keywords and use those for categorization is that users gain experience points in a subject when their tutorial is liked, and, much as the Stack Exchange network does, I want to build communities around these subjects. I will give visitors the ability to suggest new categories. Here are the categories I'm thinking of at the moment: health, gardening, cooking, technology, science & math, music, visual art.

    Read the article

  • Which tool to create a sitemap to plan a future site?

    - by peterpurzelbaum
    I'd like to create a sitemap to plan a future site, and I'm looking for a tool to do it. I'd like to create a list of all articles first, then a hierarchy, and then place the articles at several points in the hierarchy; I should be able to put one article in different places. I'd also like the ability to mark the articles in different colors according to whether they belong in the first version of the website or come later.

    Read the article

  • Facebook overtakes Google and becomes the most popular site in the United States, according to the latest statistics

    Facebook.com overtakes Google.com in number of visits: social networks take hold of the Web. A recent analysis covering the period since March 2009 shows that the community site Facebook has surpassed the search giant by 0.04%. [IMG]http://www.livesphere.fr/images/dvp/Facebook-Google.png[/IMG] The constant evolution of social networks shows the interest users have in them and gives a concrete view of their importance on the Web. It also appears clearly that visit counts peak during school holidays and events such as Christmas... When the community web...

    Read the article

  • How to search a Drupal site from the new Unity lens?

    - by Ognjen
    I'm creating a simple Unity lens for my college site, which is based on Drupal, but I don't know how to adapt this command for the Drupal API. Please help; it's Python.

        # We now create our query url, using the Wikipedia opensearch API
        url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s"
               % (self.wiki, search))

    I'm writing the lens from a template, following the Wikipedia example at http://developer.ubuntu.com/2012/04/how-to-create-a-wikipedia-unity-lens-for-ubuntu/. I don't know Python, but I'm familiar with C. This Drupal API call is the only problem left before the lens works. Please help!
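    A possible direction, sketched under stated assumptions: core Drupal does not expose an OpenSearch endpoint, so unless the site runs a module that returns JSON (such as Services), the lens has to request the ordinary search page and parse the HTML it gets back. The attribute self.drupal_site below is a hypothetical stand-in for self.wiki.

        # Hedged sketch: query a stock Drupal site's built-in search page.
        # self.drupal_site is a placeholder for the site's base URL.
        import urllib
        url = "%s/search/node/%s" % (self.drupal_site, urllib.quote(search))
        # Core Drupal returns rendered HTML here, not JSON, so the results
        # must be scraped from the page (e.g. with BeautifulSoup) rather
        # than parsed with json.loads() as in the Wikipedia example.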

    Read the article

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools, I'm trying to update the Content Keywords of my site, and I'm confused about the relationship between Submit to Index and Content Keywords. Does Fetch as Google -> Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Googlebot? Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls for new content on its own? Does Submit to Index have anything to do with Content Keywords or with crawling new content at all, whether on a previously indexed page or a never-indexed page?

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer, and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website and can showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
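    For reference, a catch-all robots.txt for the showcase subdomain is just the two lines below. Note that this only stops well-behaved crawlers; the canonical tag covers engines that fetch the pages anyway.

        User-agent: *
        Disallow: /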

    Read the article

  • How can I stop a bot attack on my site?

    - by tnorthcutt
    I have a site (built with WordPress) that is currently under a bot attack, as best I can tell. A file is being requested over and over, and the referrer is almost every time turkyoutube.org/player/player.swf. The file being requested is deep within my theme files, and the request is always followed by "?v=" and a long string (i.e. r.php?v=Wby02FlVyms&title=izlesen.tk_Wby02FlVyms&toke). I've tried setting an .htaccess rule for that referrer, which seems to work, except that now my 404 page is being loaded over and over, which still uses lots of bandwidth. Is there a way to create an .htaccess rule that requires no bandwidth usage on my part? I also tried creating a robots.txt file, but the attack seems to be ignoring that.

        # This is the relevant part of the .htaccess file:
        RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC]
        RewriteRule .* - [F]
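    One possible refinement, sketched below under the assumption that the remaining bandwidth comes from Apache serving a themed error page for the blocked requests: ErrorDocument accepts a quoted literal string, which Apache serves as the entire response body, so blocked hits can be answered with a few bytes. Untested against this particular attack.

        # Hedged sketch: answer blocked requests with a tiny inline body
        # instead of a themed error page.
        RewriteCond %{HTTP_REFERER} turkyoutube\.org [NC]
        RewriteRule .* - [F,L]
        ErrorDocument 403 "Denied"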

    Read the article

  • How do you break down a new project with an existing mega PHP site?

    - by Caveatrob
    I have to dive into a very large PHP site, and I have my first client meeting today. All they have given me so far is the URL. How do you go about gathering, structuring, and documenting information and otherwise preparing for a new project in a PHP environment? What do you ask for up front? PS: I know there are other general questions about this, but I want a PHP-flavored one, including tools (even if universal) and approaches. Thanks! I'm excited but also scared.

    Read the article

  • How can I create a dynamic site that is still search-bot friendly?

    - by zuko
    I want a slide effect between pages: you click a link, the new page loads off to the side, and then it slides in, pushing the old page off the other side. I can imagine using PHP and jQuery to build the effects, but how do I do something like this so that it gracefully degrades for users without JavaScript, including bots? Possibly more problematic: what if I wanted a mural background across the whole site, perhaps with a parallax scrolling effect, where sliding to other pages reveals more of a possibly giant image? Again, I can imagine how to do this with lots of fancy jQuery and PHP, but it would rely heavily on them. How can I degrade gracefully in a situation like that? Any pointers, articles, or books would be greatly appreciated. I keep searching for answers, but I just get a lot of theory-heavy, unhelpful blogs.
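    One common answer, sketched below, is the "Hijax" pattern: every link is an ordinary URL that the server (PHP here) renders as a full page, so bots and script-less visitors get plain navigation, while jQuery intercepts clicks to fetch the same URL and slide it in. The selectors and ids are illustrative assumptions, not a prescribed structure.

        // Hedged sketch of progressive enhancement with jQuery:
        // links work without JavaScript; with it, we fetch and slide.
        $(document).on('click', 'a.internal', function (e) {
            e.preventDefault();
            // Load only the #content fragment of the server-rendered page
            $('#next').load($(this).attr('href') + ' #content', function () {
                $('#next').animate({ left: 0 }, 400);  // slide the new page in
            });
        });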

    Read the article

  • New: Oracle CRM On Demand Release 19 Partner Readiness web site!

    - by Richard Lefebvre
    We are pleased to introduce the Oracle CRM On Demand Release 19 Partner Readiness page, a dedicated web site designed as part of the Release Readiness Program for Partners to provide the training and resources necessary for YOU to successfully position and implement the new Oracle CRM On Demand Release 19. Organized around three areas (Immersion Training, Transfer of Information, and Collaterals & Other Assets), it consists of 19 short trainings and 4 documents to help you deliver your CRM On Demand Release 19 projects successfully. Visit the CRM On Demand Release 19 Partner Readiness page here!

    Read the article

  • Is there a White / Blank Canvas E-Commerce Platform to Integrate into Existing Site? [closed]

    - by beta208
    Possible Duplicate: Which Ecommerce Script Should I Use? Our website is built, and we're interested in adding a store to it. Essentially, there is a global header, a global footer, and in between a white expandable div. We'd like the store to fit between the header and footer (and preferably be 960px wide). Do you know of any store platform built to live between a header and footer like this? We really want a full store, not just PayPal buy buttons, with a shop backend (CMS-like), full tracking, and so on. It can be paid or free, and we'd prefer to host it ourselves, though either might be applicable (if an iframe or similar alternative works securely). It would need to handle over 100 items. Authorize.net support is a plus.

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like /about/history, /about/team, /contact/email-us, and /contact. I want to figure out how much time people spend in the entire /about section, and how much in the /contact section. If I run a query against the Google Analytics API with the dimension ga:pagePathLevel1 and the metric ga:avgTimeOnPage, I get results like this:

        { pagePathLevel1: /about,   avgTimeOnPage: 28 },
        { pagePathLevel1: /contact, avgTimeOnPage: 10 }

    This looks roughly like what I want, but I'm not sure how to interpret it: is the value of avgTimeOnPage the average time spent by a user across all pages matching that path, or the average time spent on any single page matching that path? I'm looking for the average time spent across all pages matching the path, but the time estimates look shorter than I'd expect.
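    For reference, a minimal sketch of that query with the v3-era Google Analytics API Python client. The profile ID and date range are placeholders, and the OAuth authorization setup is omitted.

        # Hedged sketch: average time on page per top-level path section.
        # "service" must be an authorized Analytics v3 client built with
        # apiclient.discovery.build('analytics', 'v3', http=authorized_http);
        # the ids and dates below are placeholders.
        def section_times(service):
            return service.data().ga().get(
                ids='ga:12345678',          # placeholder profile ID
                start_date='2012-01-01',
                end_date='2012-01-31',
                metrics='ga:avgTimeOnPage',
                dimensions='ga:pagePathLevel1').execute()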

    Read the article

  • A major American daily blocks access to its site to sell its iPad application: censorship or marketing?

    A major American daily blocks access to its site to promote its iPad application. Censorship or marketing? While new uses of modern browsers emerge every day, made possible by their unprecedented capabilities and performance, the oldest newspaper in the United States has just barred the iPad's Safari browser from its electronic edition. The New York Post forces owners of Apple's popular tablet to download its paid application if they want to read content that is nevertheless freely available on the Web, displaying a dead-end landing page in place of all its pages. [IMG]http://idelways.dev...

    Read the article
