Search Results

Search found 24378 results on 976 pages for 'pinned site'.


  • Any good site that teaches C++?

    - by Shinmaru
    I am searching for a good site that teaches C++, one that explains most or all of the language in general terms and has a decently active community. About me: I am completely new to programming (I know nothing of it, so please bear with me). The only thing I have learned, and only at a very basic level, is the Lua scripting language; scripting in Lua is what got me interested in programming, so you could call it my small stepping stone. Normally one would take a course in college, but not everyone is well funded for that, and I can't buy books for the same reason (yes, I'm somewhat poor, with money only for essentials and bills; I'm not trying to get sympathy, just stating my conditions). Please suggest something along the lines of 'Programming for Dummies' (I'm not the brightest crayon in the box). I know this will not be easy, but your help will be most appreciated. I would learn for either Windows and/or Linux/Unix; I use both. Site(s) I know: Cplusplus.org (quite inactive, and the tutorials lean a little towards the programming-savvy).

    Read the article

  • How to improve a single-page site's search results [closed]

    - by Trisism
    Possible Duplicate: How to SEO a Single-Page website. I created an online CV of mine a couple of weeks ago and it has had quite a few visits. Now I want to improve its chances of appearing in Google search results; however, my web CV is a one-page site and it contains only internal fragment links (those with a hash, #), so there is very little for a sitemap to list (at most the single-URL sketch below). I could change the internal links to normal links processed on the server side, but there seems to be no point in doing so. I'm very new to web SEO, so I would really appreciate it if somebody could show me what to do with a single-page site with fragment links so it gets indexed effectively by crawlers.
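
    For reference, the most I could submit is a one-entry sitemap like the sketch below (my real domain replaced with a placeholder); I'm not sure it would help much on its own:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- single-URL sitemap; http://example.com/ is a placeholder domain -->
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://example.com/</loc>
          </url>
        </urlset>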

    Read the article

  • How to restore jump list for Windows Explorer pinned taskbar icon?

    - by Dario Solera
    I pinned Windows Explorer's icon to my taskbar months ago, and everything worked fine, including the jump list. Note: I changed Explorer's default folder using the usual trick of changing the GUID in the shortcut. Now, all of a sudden, the jump list is empty: dropping folders on the pinned icon has no effect, and recently opened folders no longer show up in the list (although in the taskbar options I made sure 'remember recently opened items' is enabled). How can I restore the original functionality, given that the jump lists of all other pinned applications work correctly? I'm running Windows 7 Ultimate x64, updated with the latest patches. The OS is in US English and my locale is Italian (Italy).
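
    One reset I've seen suggested, though I haven't tried it yet, is clearing the jump-list cache Windows keeps per application, on the theory that mine is corrupted (note that this wipes the jump lists of all pinned programs, not just Explorer's):

        rem from a Command Prompt; deletes the cached jump lists for every app
        del "%AppData%\Microsoft\Windows\Recent\AutomaticDestinations\*.automaticDestinations-ms"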

    Read the article

  • How do I enable LastPass (or any other browser add-on) for a pinned site in Internet Explorer 9, 10, or 11?

    - by rob
    I use LastPass for several websites, and in all my browsers I can enable the LastPass toolbar to simplify logging into a website. I also have some pinned sites in IE11. When I open a pinned site, all third-party toolbars, including LastPass, seem to be disabled and hidden from the options, even though LastPass is installed and enabled for both 32-bit and 64-bit IE. I've observed the same problem in IE9 and IE10 on Windows 7 and in IE11 on Windows 8.1. In all cases I'm running the default IE on 64-bit Windows. How do I enable the LastPass toolbar for a pinned site?

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why one of our clients' sites isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, it resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following when crawling), and when we ping the address, the ping times out (note: the site keeps resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was an errant redirect rule or some other server-side issue. We also wondered about incorrect domain pointing, but pinging the IP directly doesn't work either, which more or less rules that out. We're really not sure what is causing these errors, or even whether they have a single source. Does anyone have any ideas what could be going on? Do you need any more information? To boil it down, TL;DR:

    - The site resolves in the browser, by both IP and domain name; no problems here.
    - The site is not being crawled by Google (it gets a 302 that it doesn't seem to follow); this is not due to robots.txt or meta tags.
    - Ping does not work for the IP address, which is very odd because, again, the IP works fine in the browser.
    - Our best guesses are a redirect-rule issue, a domain-pointing issue, or possibly some errant code, or a combination of the three.
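
    For anyone trying to reproduce this, here is how we've been checking the redirect from the command line (domain replaced with a placeholder); we know ping uses ICMP, which a firewall can drop even while HTTP works fine, so the timeouts may be a red herring, but the crawl problem remains:

        # HEAD request: shows the status line and Location header the crawler sees
        curl -I http://www.example.com/

        # same, but following the redirect chain to its final destination
        curl -I -L http://www.example.com/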

    Read the article

  • Creating a Wiki site in SharePoint 2010

    - by Ben
    I have a SharePoint 2010 site set up and would like to add a wiki site to it. Normally, to add a subsite to a parent site, you click "Site Actions", then "New Site", choose the site type (e.g. Team Site), and it works fine. But when I choose "Enterprise Wiki", I get "Error: An unexpected error has occurred. Correlation Id: ...". So I took the correlation ID and checked the ULS logs to see what the error is, and the first bad sign seems to be from SharePoint Foundation Feature Infrastructure: "The element of type 'ContentTypeBinding' for feature 'EnterpriseWiki' (id: 76d688ad-c16e-4cec-9b71-7b7f0d79b9cd) threw an exception during activation: Specified argument was out of the range of valid values. Parameter name: Content type not found (Id: '0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39004C1F8B46085B4D22B1CDC3DE08CFFB9C')." There are more errors in the log, but this seems to be the first one. Any ideas on what's going on? Thanks.

    Read the article

  • Need full news article on my site

    - by Kamlesh Ghate
    I'm working on a news site (developed in PHP) where I display news from different feeds. I parse the XML and store the content in my database. But when a reader comes to the site and clicks a title to read the full article, the link redirects to the site the feed came from. I want the full article, so the user can read it on my site without leaving it. How can I get the full article text to display on my site?

    Read the article

  • Extend legacy site with another server-side programming platform best practice

    - by Andrew Florko
    The company I work for has a site developed six to eight years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database onto this site within one week: two or three pages. I contacted the company's site administrator and she showed me the administrative part; the CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company, fully accessible and upgradeable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the original developers, because I'm not sure they are still around or capable of extending this old site, and it would take too much time anyway. So I am about to deploy a helper ASP.NET site on IIS with the two or three required pages and reference the helper site via an iframe from the present site. The new pages will also let users download some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

    Read the article

  • What is a proper way to store site-level global variables in a SharePoint site?

    - by ccomet
    One thing that has driven me nuts about SharePoint 2007 is the apparent inability to define settings that apply to a site or site collection itself, rather than to its content. There are some pre-defined settings, like the site logo and the site name, but there doesn't appear to be any way to add new kinds of settings. The application I am working on needs to create multiple kinds of "project site collections" that all follow a basic template but have certain additional settings that apply to that site collection and that one alone. In addition to the standard site name, we also need to define the project number, the project name, and the client name. And given the requests of some of our clients, we also need configurable settings that alter how some of the workflows behave, such as whether files are marked with letters or numbers. Our current solution, which I'm hesitant about, has been to store an XML file on the SharePoint server. This file contains one node per site collection, identified by the URL of its root site; inside the node are all the elements defined for that site collection. Every time we need them, we have to load the XML file (which always requires SPSecurity.RunWithElevatedPrivileges, since it lives right on the server) and retrieve the data. A lot of automated processes will have to do this, and I'm hesitant about the stability of this method once we reach hundreds of sites with thousands of files running tens of thousands of workflows, all wanting to access this one file. Maybe these worries are unfounded, but I'd rather worry now than risk everything breaking in a couple of years. I was looking into the SPWeb object and found the AllProperties hashtable. It looks like just the kind of thing that might work, but I don't know how safe it is to modify it. I read through both MSDN and the WSS SDK but found nothing that clarifies adding completely new properties to AllProperties. Is it safe to use AllProperties for this kind of thing? Or is there yet another feature I am missing that could handle the concept of global variables at the site-collection or site scope?
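
    For reference, this is roughly what I was planning to try with AllProperties, a minimal sketch assuming custom keys in the property bag are safe at all (the key names and URL are mine, not anything prescribed by SharePoint):

        using System;
        using Microsoft.SharePoint;

        class SiteSettingsDemo
        {
            static void Main()
            {
                // hypothetical site collection URL
                using (SPSite site = new SPSite("http://server/sites/project1"))
                using (SPWeb web = site.RootWeb)
                {
                    // write custom site-level settings into the property bag
                    web.AllProperties["ProjectNumber"] = "P-1234";
                    web.AllProperties["ClientName"] = "Acme Corp";
                    web.Update();   // persist the changes

                    // read them back later
                    string projectNumber = web.AllProperties["ProjectNumber"] as string;
                    Console.WriteLine("Project: {0}", projectNumber);
                }
            }
        }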

    Read the article

  • Does sitewide HTML refactoring affect Google traffic?

    - by Name
    Good morning. I recently made a big structural change to my site, and the very next day the number of Google impressions dropped from 75,000 to 3,000, with a proportional drop in search traffic. No URLs were changed, and neither were the page titles or descriptions. Everything is exactly the same, just different-looking, except that the site now barely appears on Google at all. Does anybody have a clue as to why?

    Read the article

  • Abnormal Alexa ranking score

    - by SteenhouwerD
    I have two e-commerce websites in France, and I see strange behaviour in their Alexa results. Here are some statistics for the two sites:

        Unique visits (January 2012):  Website A: 158,828   Website B: 58,867
        Google search results:         Website A: 5,100     Website B: 56,000
        Links to my site:              Website A: 3,120     Website B: 2,180
        Alexa score:                   Website A: 405,804   Website B: 278,944

    How can website B, with a third of website A's visitors, have a much better Alexa score (roughly half of A's, where lower is better) than website A?

    Read the article

  • Only one SharePoint site is not reachable

    - by Kabir Rao
    We have been facing a strange issue since this morning: only one SharePoint site is not accessible. On our SharePoint 2003 server, several sites are configured for separate projects. We have done several IIS resets, and every other site is accessible except this one. Strangely, when I map the site as a network drive, I can access its contents. Please help ASAP. Thanks in advance.

    Read the article

  • How can I set up a dual-site Storage Daemon in Bacula (mirroring the backup)?

    - by Andy
    On site A, I have successfully set up a Bacula Director on one host, several File Daemons on the hosts I want to back up, and one Storage Daemon where the backups are actually stored. In case disaster strikes the building at site A, I want a second Storage Daemon at another site, site B. The filesets, director and so on would stay the same, except that jobs would be stored on the second Storage Daemon as well. Are there any best practices for this?
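
    What I have sketched so far in bacula-dir.conf is a second Storage resource plus a Copy job that duplicates everything not yet copied from site A's pool to site B; all names, addresses and passwords below are made up, and I'm not sure this is the recommended pattern:

        # second Storage Daemon at site B (hypothetical address/password)
        Storage {
          Name = siteb-sd
          Address = sd.siteb.example.com
          SDPort = 9103
          Password = "secret"
          Device = FileStorageB
          Media Type = FileB
        }

        # site A's pool hands uncopied jobs over to the site B pool
        Pool {
          Name = SiteAPool
          Pool Type = Backup
          Storage = sitea-sd
          Next Pool = SiteBPool
        }

        Pool {
          Name = SiteBPool
          Pool Type = Backup
          Storage = siteb-sd
        }

        # copy job: duplicates every job in SiteAPool that has no copy yet
        Job {
          Name = "CopyToSiteB"
          Type = Copy
          Selection Type = PoolUncopiedJobs
          Pool = SiteAPool
          # Client/FileSet are required by the parser but ignored for Copy jobs
          Client = sitea-fd
          FileSet = "Full Set"
          Messages = Standard
        }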

    Read the article

  • Copy CakePHP site under IIS webroot?

    - by dan giz
    I've got a CakePHP application that uses an MS SQL server, running on Windows Server 2008 R2 Enterprise with IIS 7.5. My application is the only website running; it is installed in wwwroot, with CakePHP itself installed in c:\inetpub (one level up from the site). I want to copy this site so I can have a development version to work on without conflicting with the live site. How do I go about doing this? I'm confused, since the setups I've seen before put each site in its own folder under wwwroot.

    Read the article

  • Why is my site not on Google? [closed]

    - by RD
    I wanted to post a link here, but some people might see that as advertising, so instead I'm going to phrase my question like this: what can I do to make sure my site appears on Google? I have already done the following:

    - Submitted my sitemap
    - Added my site at www.google.com/addurl
    - Added Analytics to my site
    - Checked Webmaster Tools for crawler errors

    But still, after about three or four days, the crawler hasn't crawled my site. What am I missing?

    Read the article

  • Using local DNS and public DNS during site development

    - by ChrisFM
    I'm a web designer, and I often develop new sites for existing businesses. Sometimes I find it useful to point my personal computer's DNS at the development server's local DNS (instead of Google's 8.8.8.8 or the ISP's default), because it lets me check that the new site's internal links work before switching the authoritative DNS over from the old site. However, outgoing links (like a Google map) don't resolve through my local DNS. My first thought was that I just needed to fill the local DNS directory with Google and every other domain I might link to; wait, that sounds insane. Could anyone give me insight into a better or more practical way to use local DNS addresses when they're available and public DNS when they're not, a kind of DNS 'roll-over'? Or maybe a completely different approach to development altogether. Thanks in advance for your insight. All the best!
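
    For what it's worth, the closest thing I've found to the 'roll-over' I described is my machine's hosts file: names listed there resolve locally, and every other lookup falls through to whatever public DNS I have configured. A sketch, with a made-up dev-server IP and client domain:

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
        # hypothetical dev server and client domain; everything not listed
        # here still resolves through normal public DNS
        192.168.1.50    newclientsite.com www.newclientsite.com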

    Read the article

  • Google Webmaster Tools: strange 404 errors referred from the same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1,400 errors so far). All the errors are referred from my own site to non-existent pages, and the 404 URLs all have the same format: http://www.helloneighbour.com/save/1347208508000. The number on the end appears to be a Unix timestamp followed by three zeros, i.e. a millisecond timestamp. The referring page in this case is: http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby. When I look at that page's source code, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find anything that might be generating these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I marked all these pages as fixed and waited a couple of weeks, only to see the errors come back over the last few days. I'm wondering if anyone else has seen anything this strange, or has a way for me to debug and replicate this error myself.

    Read the article

  • MS Bing web crawler out of control causing our site to go down

    - by akaDanPaul
    Here is a weird one that I am not sure what to do about. Today our company's e-commerce site went down. I tailed the production log and saw that we were receiving a ton of requests from the range 157.55.98.0-157.55.100.0. I googled around and found out that it is the MSN/Bing web crawler. So essentially the crawler overloaded our site to the point where it stopped responding, even though our robots.txt file contains: Crawl-delay: 10. What I did was ban the IP range in iptables, but I am not sure how to follow up from here. I can't find anywhere to contact Bing about this issue, I don't want to keep those IPs blocked because I am sure we will eventually get de-indexed from Bing, and it doesn't really seem like this has happened to anyone else before. Any suggestions? Update, my server / web stats: our web server runs Nginx, Rails 3, and 5 Unicorn workers, with 4 GB of memory and 2 virtual cores. We have been running this setup for over 9 months now and never had an issue; 95% of the time our system is under very little load. On average we receive 800,000 page views a month, and this never comes close to bringing down or slowing our web server. Looking at the logs, we were receiving anywhere from 5 up to 40 requests/second from this IP range. In all my years of web development I have never seen a crawler hit a website so many times. Is this new with Bing?
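
    As a stopgap that is softer than the iptables ban, I am considering rate-limiting at the nginx layer instead, something like the sketch below (the zone name, size and rates are numbers I picked, not recommendations):

        # nginx.conf, http {} block: track request rate per client IP
        limit_req_zone $binary_remote_addr zone=crawlers:10m rate=5r/s;

        server {
            listen 80;
            location / {
                # allow short bursts, reject anything faster with an error
                limit_req zone=crawlers burst=10;
                # ... proxy_pass to the Unicorn upstream as before ...
            }
        }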

    Read the article

  • Site in subdomain (MaraDNS + Nginx)

    - by Grzegorz
    Hello. I'm experimenting on my Ubuntu VPS, where I've installed MaraDNS and Nginx. So far I've successfully launched a static site that is reachable from the Internet (maindomain.com). As the next step, I want to add a site that will be available on a subdomain, for example dev.maindomain.com. I've tried the following in the db.maindomain.com zone file (used by MaraDNS):

        maindomain.com. xxx.xxx.xxx.xxx
        www.maindomain.com. CNAME maindomain.com.
        dev.maindomain.com. xxx.xxx.xxx.xxx

    where xxx.xxx.xxx.xxx is the VPS IP address. In nginx.conf I have:

        server {
            listen 80;
            server_name maindomain.com;
            access_log /var/log/nginx/maindomain.com.log;
            location / {
                root /var/www/maindomain.com;
                index index.html;
            }
        }
        server {
            listen 80;
            server_name dev.maindomain.com;
            access_log /var/log/nginx/dev.maindomain.com.log;
            location / {
                root /var/www/dev.maindomain.com;
                index index.html;
            }
        }

    With this configuration maindomain.com works properly, but dev.maindomain.com isn't available, even though ping dev.maindomain.com returns my xxx.xxx.xxx.xxx IP. Do you have any suggestions for resolving this?

    Read the article

  • AdSense click bot is click-bombing my site

    - by Graham
    I have a site that currently gets roughly 7,000-10,000 page views per day. Starting around 1 AM on 7/1/12, I noticed the CTR rising dramatically. The clicks would be credited and then de-credited soon after, so they were obviously fraudulent. The next day I had about 200 clicks in the account, with about 100 of them fraudulent: about 3-8 per hour, evenly dispersed across each of the three ads, 24 hours a day. This leads me to believe it's some sort of AdSense click bot. Also, I removed the ads last evening, put them back up around 3 AM, and the invalid clicks started again within 10 minutes. I signed up for statcounter.com to analyze the exit links on the AdSense units, and then conditionally blocked ads for the IP address of the person or bot I suspected, but I think the bot has several proxies to choose from and can refresh its IP address. I've notified Google through the invalid-click form/email four times over the past two days to let them know I'm aware of the situation and am working on a solution, and I've also temporarily removed all ads on that site. How can I block a bot like this? Thank you.

    Read the article

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles on a multilingual site: I want to be able to write an article in two languages and have the versions available at separate URLs, such as thesite.foo/english-breakfast and thesite.com/engelsk-frukost. If the user's web browser is set to English, I'd like to show a small notice at the top of the Swedish version with a link to the English one; the link should have the appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions plus English-only articles, and English versions plus Swedish-only articles. I'd like to publish these as four RSS feeds. And I would like two versions of the main site: one in Swedish (showing Swedish versions plus English-only articles) and one in English (showing English versions). I want to write the articles in Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of protection for me against comment spam. I am leaning towards learning Drupal, though I suspect I'll have to code this behaviour myself as a module; to be frank, I'd rather work with Java. Is Drupal the way to go, or is there something more suitable for this project?
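
    To make the translation relationship concrete, this is the kind of markup I have in mind for the head of each version, using my example URLs from above (the hreflang mechanism itself is described at the diveintohtml5 link):

        <!-- on the Swedish page -->
        <link rel="alternate" hreflang="en" href="http://thesite.foo/english-breakfast" />

        <!-- on the English page -->
        <link rel="alternate" hreflang="sv" href="http://thesite.com/engelsk-frukost" />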

    Read the article
