Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • How to Boost Web Site Traffic With SEO

    SEO is important if you'd like your website listed at the top of the search engine results. You'll have to design and write your pages not only for your visitors, but also for the search engine spiders and crawlers.

    Read the article

  • How to Get Your Site Noticed

    If you have a business website then you probably wonder how you can increase the viewership of the site. This article explains how you can make the site more interesting and fun.

    Read the article

  • Keeping up with Technology

    - by kennedysteve
    If you're like me, you have a hard time keeping up with all the technologies out there. The reality is there's too many new technologies (languages, methodologies, tools, etc). One of the ways I try to keep up with everything is by using good ol' RSS feeds in conjunction with Google Reader. Google Reader is both an online aggregator of RSS feeds, and it also has a good companion app on Google Android. The nicest part of Google Reader for me is the "All Listings" view which gives me a reverse chronological view of ALL articles (mixed together) regardless of the actual RSS feed. This way, I get to see the newest articles first. I can then choose to hide the articles I've viewed, etc. Here is a list of my RSS feeds. Admittedly, some of these are all over the spectrum. But you might find one or two interesting.

    .NET Rocks! | RSS = http://feeds.feedburner.com/netRocksFullMp3Downloads | Main Web Site = http://www.dotnetrocks.com
    Channel 9 | RSS = http://channel9.msdn.com/Feeds/RSS | Main Web Site = http://channel9.msdn.com/
    CodePlex | RSS = http://www.codeplex.com/site/feeds/rss | Main Web Site = http://www.codeplex.com/site/feeds/rss
    Connected Show Developer Podcast! | RSS = http://feeds.connectedshow.com/ConnectedShow | Main Web Site = http://www.ConnectedShow.com/
    dnrTV | RSS = http://feeds.feedburner.com/DnrtvWmv?format=xml | Main Web Site = http://dnrtv.com
    ebookshare | RSS = http://www.ebookshare.net/feed/ | Main Web Site = http://www.ebookshare.net
    Geekswithblogs.net | RSS = http://feeds.feedburner.com/geekswithblogs | Main Web Site = http://geekswithblogs.net/mainfeed.aspx
    Gmail Blog | RSS = http://feeds.feedburner.com/OfficialGmailBlog?format=xml | Main Web Site = http://gmailblog.blogspot.com/
    Google Mobile Blog | RSS = http://feeds.feedburner.com/OfficialGoogleMobileBlog | Main Web Site = http://googlemobile.blogspot.com/
    Herding Code | RSS = http://feeds.feedburner.com/herdingcode | Main Web Site = http://herdingcode.com
    LearnVisualStudio.NET Videos | RSS = http://www.learnvisualstudio.net/videos.rss | Main Web Site = http://www.learnvisualstudio.net/
    Microsoft Learning Upcoming Titles | RSS = http://learning.microsoft.com/rss/en-US/upcomingtitles?brand=Learning | Main Web Site = http://learning.microsoft.com:80/rss/en-US/upcomingtitles?brand=Learning
    MS On-demand Webcasts | RSS = http://www.microsoft.com/communities/rss.aspx?&Title=On-Demand+Webcasts&RssTitle=Microsoft+Webcasts%3A+On-Demand+Webcasts&CMTYSvcSource=MSCOMMedia&WebNewsURL=http%3A%2F%2Fwww.microsoft.com%2Fevents%2FEventDetails.aspx&CMTYRawShape=list&Params=+%0D%0A%09~CMTYDataSvcParams%5E%0D%0A%09~arg+Name%3D'EventType'+Value%3D'OnDemandWebcast'%2F%5E%0D%0A%09~arg+Name%3D'ProviderID'+Value%3D'A6B43178-497C-4225-BA42-DF595171F04C'%2F%5E%0D%0A%09~arg+Name%3D'StartDate'+Value%3D'06%2F30%2F2006'%2F%5E%0D%0A%09~arg+Name%3D'EndDate'+Value%3D'Now%2B0'%2F%5E%0D%0A%09~%2FCMTYDataSvcParams%5E+&NumberOfItems=100 | Main Web Site = http://www.microsoft.com/events/default.mspx
    MS Podcasts for Devs | RSS = http://www.microsoft.com/events/podcasts/default.aspx?podcast=rss&audience=Audience-e5381407-359f-4922-97d0-0237af790eee&pageId=x40 | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?audience=Audience-e5381407-359f-4922-97d0-0237af790eee&pageId=x40&WT.rss_ev=f
    MSDN Blogs | RSS = http://blogs.msdn.com/b/mainfeed.aspx?Type=BlogsOnly | Main Web Site = http://blogs.msdn.com/b/
    MSDN Radio | RSS = http://www.microsoft.com/events/podcasts/default.aspx?topic=&audience=&view=&pageId=x73&seriesID=Series-b9139976-8d48-4249-9b89-ccd17891de1e.xml&podcast=rss&type=wma | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?seriesID=Series-b9139976-8d48-4249-9b89-ccd17891de1e.xml&pageId=x73&WT.rss_ev=f
    O'Reilly Deal of the Day | RSS = http://feeds.feedburner.com/oreilly/ebookdealoftheday | Main Web Site = http://oreilly.com
    O'Reilly New | RSS = http://feeds.feedburner.com/oreilly/newbooks | Main Web Site = http://oreilly.com/
    Safari Books Online | RSS = http://my.safaribooksonline.com/rss | Main Web Site = http://my.safaribooksonline.com/
    ScottGu's Blog | RSS = http://weblogs.asp.net/scottgu/rss.aspx | Main Web Site = http://weblogs.asp.net/scottgu/default.aspx
    SourceForge Community Blog | RSS = http://sourceforge.net/blog/feed/ | Main Web Site = http://sourceforge.net/blog
    Stack Overflow | RSS = http://blog.stackoverflow.com/feed/ | Main Web Site = http://blog.stackoverflow.com
    Stepcase Lifehack | RSS = http://www.lifehack.org/feed/ | Main Web Site = http://www.lifehack.org
    TechNet Radio | RSS = http://www.microsoft.com/events/podcasts/default.aspx?topic=&audience=&view=&pageId=x73&seriesID=Series-cc4e3db2-9212-43c5-a57b-d43fa31e6452.xml&podcast=rss&type=wma | Main Web Site = http://www.microsoft.com/events/podcasts/default.aspx?seriesID=Series-cc4e3db2-9212-43c5-a57b-d43fa31e6452.xml&pageId=x73&WT.rss_ev=f
    Wrox All New Titles | RSS = http://www.wrox.com/WileyCDA/feed/RSS_WROX_ALLNEW.xml | Main Web Site = http://www.wrox.com

    Read the article

  • Lookup site column not saving/storing metadata for Office 2007 documents?

    - by Greg Hurlman
    I'm having this issue on several server environments. We have a list at the site collection root. There is a site column created as a multi-value lookup on that list's Title field. This site column is used in document libraries in subsites as a required field. When we upload anything but an Office 2007 document, the user is presented with the document metadata fill-in screen (EditForm.aspx?Mode=Upload), the user fills in the appropriate data (including picking a value(s) for this lookup), and clicks "check in" - the document is checked in as expected, with the lookup field's value filled in. With an Office 2007 document, this fails. The user selected values for the lookup field do not ever make it to the server - no errors are thrown, but the field is not saved with the document. We have an event listener on these document libraries, and if we inspect the incoming SPListItem on the event listener method before a single line of our code has run, we see that the value for the lookup field is null. It smells like a SharePoint bug to me - but before I go calling Microsoft, has anyone seen this & worked around it? Edit: the only entry I see in the SP trace logs relating to the problem: CMS/Publishing/8ztg/Medium/Got List Item Version, but item was null

    Read the article

  • Couldn't upload files to Sharepoint site while passing through Squid Proxy

    - by Ecio
    Hi all, we have this issue: one of our employees is collaborating with a supplier and needs to upload documents to a SharePoint site hosted on the supplier's main site. In our environment we use a Squid proxy to let people browse the net (we have NTLM authentication and users authenticate transparently when using IE and FF). It seems that this specific SharePoint site uses Integrated Windows Authentication only, and according to some research on the net this can cause trouble with proxies. More specifically, we have tried two Squid versions: with Squid 3.0 we are unable to log in to the site (the browser loads an empty page); with Squid 2.7 (which supports "connection pinning") we are able to log in to the site and move around the different sections, BUT when we try to upload a file bigger than a couple of kilobytes (e.g. 10KB) the browser loads an error page (I think it's a 401 Unauthorized, but I must verify it). We've tried changing a couple of Squid options (in 2.7); what we got is that when you try to upload the file you get an authentication box (just like the initial login) and it refuses to go on even if you enter the same credentials. What's really strange is that when you try to upload a small file (e.g. a 1KB text or binary file) the upload succeeds. I initially thought there was something misconfigured on their SharePoint site, but I've also tried this site: www.xsolive.com (a SharePoint 2007 demo site) and experienced the same problem. Has any of you seen this behaviour? Thanks! Of course we've suggested the supplier also enable Basic+SSL and we're waiting for their reply..

    Read the article

  • Running "Rebuild Index" maintenance plan with "Online indexing"

    - by Bharanidharan
    Hi, I am using Windows Server 2003 SP2 and SQL Server 2005 Enterprise Edition. I am creating a "Rebuild Index" job for a particular database and am able to run the job successfully. But when I try to enable the "Keep index online while rebuilding" option, the job does not execute successfully and throws errors. I have attached the screenshots: http://img535.imageshack.us/gal.php?g=error1r.png Any help would be appreciated. PS: I am not able to attach the images here since I do not have 10 points yet! Thanks.
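
    A hedged illustration of what that maintenance-plan task boils down to (the table name is a placeholder). One common reason the job fails only when the online option is ticked: in SQL Server 2005 an index cannot be rebuilt online if the underlying table or index contains LOB columns (text, ntext, image, varchar(max) and friends).

        -- Minimal T-SQL sketch, equivalent to "Rebuild Index" with
        -- "Keep index online while rebuilding" enabled.
        -- dbo.YourTable is a placeholder; run per table or generate from sys.tables.
        ALTER INDEX ALL ON dbo.YourTable
        REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);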

    Read the article

  • How To Find Reasons of Why Site Goes Online/Offline

    - by HollerTrain
    Seems today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know: I use a program that pings the server every minute and emails me when the server is not responding, so I know exactly when the site is online and offline. The site went up and down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone; all times below are in the same timezone). At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.

    Read the article

  • Site Goes Offline Every Day At Midnight - No One Knows Why

    - by HollerTrain
    Seems today a website I manage has been going online and offline between 12a and 12:25a. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know: I have a Pingdom account which alerts me when the site goes offline, so we can see that every day, like clockwork, the site goes on/off. At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.
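
    Since the outage window is predictable, one way to pinpoint the culprit is to snapshot memory and the top processes around midnight and read the log afterwards. A minimal crontab sketch (the log path is a placeholder, and it assumes a standard Linux toolset with free and ps available):

        # Hedged sketch: every minute from 11:55pm through 12:30am, append a
        # timestamp, a memory summary and the 15 most memory-hungry processes.
        # Note the % sign must be escaped inside crontab entries.
        55-59 23 * * * (date; free -m; ps aux --sort=-\%mem | head -15) >> /var/log/midnight-watch.log
        0-30 0 * * *   (date; free -m; ps aux --sort=-\%mem | head -15) >> /var/log/midnight-watch.log

    If a backup job, log rotation, or a scheduled WordPress task keeps showing up at the top of that list every night, that is the place to dig.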

    Read the article

  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500GB) and grow from there (I know 500 Gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images / documents. We need to accomplish the following: Be able to quickly backup and restore only the data tables (sans images) to our test server for debugging and testing purposes. In the event of a catastrophic database failure, restore the data tables only to get most of the application up and running ASAP. Then, restore the images table when possible. Backup the database within the allotted nightly time window (a few hours). My questions are: Is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, filestream, or something else? How do other shops backup their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and backup from there?
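
    For the first two goals, a separate filegroup for the image/document table is the usual building block: it keeps the BLOB data out of the data filegroups so they can be backed up and restored independently. A hedged T-SQL sketch with placeholder names (whether plain filegroups, FILESTREAM, or piecemeal restore fits best depends on edition and recovery model):

        -- Put the images/documents table on its own filegroup (names are placeholders).
        ALTER DATABASE BigDb ADD FILEGROUP Blobs;
        ALTER DATABASE BigDb ADD FILE
            (NAME = BigDb_Blobs, FILENAME = 'D:\Data\BigDb_Blobs.ndf') TO FILEGROUP Blobs;

        -- Create (or rebuild) the image table on that filegroup, LOB data included.
        CREATE TABLE dbo.Images (ImageId INT PRIMARY KEY, Data VARBINARY(MAX))
            ON Blobs TEXTIMAGE_ON Blobs;

        -- Nightly: back up only the data filegroup(s) within the window; the blob
        -- filegroup can be backed up less often and restored later after a disaster.
        BACKUP DATABASE BigDb FILEGROUP = 'PRIMARY' TO DISK = 'E:\Backups\BigDb_data.bak';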

    Read the article

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using powershell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi. Here's the situation: Because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise on very short notice and delivery schedule. Of course, I'm not a windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then on to solaris and cisco, then linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms). So we decided to bring in a contractor to do this for us. and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question. The contractor used powershell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deployment, configuration and updating microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third party stuff like PolicyPak. Are there solid reasons that I've not found yet that powershell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts? or weblinks? Thanks!

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you back up machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you'll need to sign up for a free account. It offers 1GB of free storage, then you can purchase an unlimited backup package for $39.99 for a year subscription. Note: they also offer online storage for individual PCs as well.

    Install ASUS WebStorage for WHS: Browse to your shared folders on the server, open the Add-Ins folder and copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. Now launch the Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you'll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off and when it's complete, you'll need to close out of the console and reconnect.

    Using the ASUS WebStorage WHS Connector: When you reconnect to the WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account, then select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process, and the backup begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), then enabling the backup again.

    ASUS WebStorage Site: After you have files backed up to the ASUS site, log into your account and you're presented with an overview of the amount of storage you're using. It also shows what types of files are taking up certain amounts of space. You can browse through your backed up files and folders, and it allows you to share and sync backed up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you're provided with a link to the file to send out to other users.

    Conclusion: Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala, etc. These services probably do a better job, but can start getting expensive once you start uploading GBs of data. Another disappointment of ASUS WebStorage is that you can only back up your WHS shares (from what we've been able to determine); it's an "all or nothing" type of thing, and you cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with the limited upload speed of the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS add-in is very easy to install and use. If you're looking for an off-site solution to back up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service and it might be exactly what you're looking for. Other users may want a more advanced solution like KeepVault or CloudBerry, which is a front end for Amazon S3 storage.
    Download ASUS WebStorage WHS Addin
    Other WHS Offsite Backup Solutions: CloudBerry, JungleDisk, KeepVault, Wuala

    Read the article

  • Ranking hit after site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed Google Webmaster Tools completely, including 301 redirects from every existing URL to the new domain, and then submitting a change of address. Traffic continued as normal, but then a few days after submitting the change of address traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that for the keywords I had targeted and performed well with before, I am now ranking much, much lower. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that this would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and now in its place is a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit exactly the same page structure as the previous site I have seen a drop in rankings.
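
    For reference, a domain move like this is typically done with a blanket permanent redirect so every old URL maps to the same path on the new domain. A hedged Apache/mod_rewrite sketch (domain names are placeholders; the poster's actual redirect mechanism isn't stated beyond "301"):

        # .htaccess or vhost on the old domain: send every request to the
        # same path on the new domain with a 301 (permanent) redirect.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
        RewriteRule ^(.*)$ http://new-domain.com/$1 [R=301,L]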

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS then triggers a new deployment to host my Node.js application. Someone may notice that an NPM package may contain many files and could be a little bit huge. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post I will introduce how to make WAWS install all required packages for us when deploying.

    Let's Start with a Demo: A demo is most straightforward. Let's create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then create a new Node.js file named "server.js" and paste in the code below.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud. First we need to add a special file named ".gitignore". This cannot easily be done from the file explorer, since the file name consists only of an extension, so we do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore". If the command window asks for input just press Enter.

        echo > .gitignore

    Now open this file, copy in the content below and save.

        node_modules

    If we switch to Git for Windows we will find that the packages under "node_modules" are no longer in the change list. So if we now commit and push, the "express" package will not be uploaded to Windows Azure. Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it and save.

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. Then let's open the WAWS in the developer portal; we will see that a new deployment has finished. Clicking the arrow at the right side of this deployment, we can see how WAWS handled it. In particular we can find that WAWS executed NPM. And if we open the log we can review what command WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole package to Azure.

    Open the website and we can see the result, which proves that "express" was installed successfully.

    What's Happened Under the Hood: Now let's explain a bit about what ".gitignore" and "package.json" mean. ".gitignore" is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" are skipped by git push. In the example above I put "node_modules" into this file in my local repository. This means: do not track or upload any files under the "node_modules" folder. So by using ".gitignore" I kept all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in it in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS will look into this file, find the dependent packages, and execute the NPM command to install them one by one. So in the demo above I put "express" into this file so that WAWS will install it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when installing a package we can specify the "--save" parameter in the "npm install" command, so that NPM will update the dependencies section for us. For example, when I want to install the "azure" package I would execute the command below. Note the "--save" added to the command.

        npm install azure --save

    Once it finishes, my "package.json" will be updated automatically. Each dependent package will be listed there; the JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about "package.json" please refer here.

        version (e.g. "azure": "0.6.7") - must match the version exactly.
        >=version (e.g. "azure": ">0.6.0") - must be equal to or greater than the version.
        1.2.x (e.g. "azure": "0.6.x") - the version number must start with the supplied digits, but any digit may be used in place of the x.
        ~version (e.g. "azure": "~0.6.7") - the version must be at least as high as the range, and it must be less than the next major revision above the range.
        * (e.g. "azure": "*") - matches any version.

    And WAWS will install the proper version of each package based on what you define here. That is the process of WAWS git deployment and NPM installation.

    But Some Packages… As we know, when we specify the dependencies in "package.json", WAWS will download and install them on the cloud. For most packages this works very well. But some special packages may not work. That is, if the package installation has special environment requirements it might fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail since there's no "node-gyp", Python or C++ 2010 on the WAWS virtual machine. For example, the "server.js" file:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    The "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    And it fails to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is, in the ".gitignore" file, to ignore all packages except "msnodesql" and upload that package ourselves. This can be done with the content below. We first un-ignore the "node_modules" folder, then ignore all of its sub folders while still letting git look inside each one, and then un-ignore the one sub folder named "msnodesql", which is the SQL Server Node.js driver.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the syntax of ".gitignore" please refer to this thread. Now if we go to Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS. Now we can see the deployment completed successfully, and we can use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary: In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our Node.js payload and let Windows Azure Web Site download and install them when deploying. Some special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put into the publish payload as well. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker for us to develop and deploy our Node.js applications to the cloud.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Price comparison sites and its effect on Google ranking

    - by Jivago
    I am the webmaster of a website that contains roughly 10,000 products. I would possibly be interested in indexing those products on a price comparison site like PriceGrabber, Nextag, Shopbot, etc. The principle of price comparison sites is great for an actual user who wants to compare prices, but my main concern is the effect it could have on my actual ranking on Google... Since a site like Shopbot uses a CPC (cost-per-click) model, all the links on the website are built to track clicks (e.g. http://www.shopbot.ca/r.html?i=3&catc=2&refshop=5706&refshopcodeid=42587349); it uses redirection, not direct links (so no direct backlinking). In your opinion and/or experience, is this a smart move, business-wise and SEO-wise, or not? THANKS!

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's on WordPress 3.4.2 and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I don't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: The way we have this set up, the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
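
    One way to keep crawlers off the 30,000 archive links is a robots.txt that blocks the deep paths and leaves the front page and new posts open. A hedged sketch with placeholder paths (the real archive URL structure depends on the WordPress permalink settings; note that Googlebot ignores Crawl-delay, so its crawl rate has to be lowered in Webmaster Tools instead):

        # robots.txt sketch: allow the front page and fresh posts, keep bots out
        # of paginated archives and other deep, low-value URLs (paths are examples).
        User-agent: *
        Disallow: /page/
        Disallow: /tag/
        Disallow: /*?
        # Honoured by Bing, Yandex and others; ignored by Googlebot:
        Crawl-delay: 10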

    Read the article

  • How do you prefer to handle image spriting in your web projects?

    - by Macy Abbey
    It seems like these days it is pretty much mandatory for web applications to sprite images if they want many images on their site AND a fast load time. (Spriting is the process of combining all images referenced from a style sheet into one/few image(s), with each reference containing a different background position.) I was wondering what method of implementing sprites you all prefer in your web applications, given that we are referring to non-dynamic images which are included/designed by the programming team and not images which are dynamically uploaded by a third party.
    1. Add new images to an existing sprite by hand, and create the new CSS reference by hand.
    2. Generate a sprite server-side once per build from your CSS files (which all reference single images set as background images of an HTML element the same size as the image being sprited) and update all CSS references programmatically.
    3. Use a sprite-generating program to produce the sprite image once per release and hand-insert the new CSS class/image into your project.
    4. Other methods?
    I prefer two, as it requires very little hand-coding and image editing.
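
    For reference, the end result of any of these methods looks roughly like the sketch below: one combined image, with each icon picked out by a different background-position (the file name and offsets are made up for illustration).

        /* sprites.png is assumed to be a single image holding all 16x16 icons in a row. */
        .icon        { background: url(sprites.png) no-repeat; width: 16px; height: 16px; display: inline-block; }
        .icon-home   { background-position: 0 0; }       /* first icon  */
        .icon-search { background-position: -16px 0; }   /* second icon */
        .icon-user   { background-position: -32px 0; }   /* third icon  */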

    Read the article

  • Ranking hit after WP site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed WMT completely, including 301 redirects from every existing URL to the new domain, and then submitting a change of address. Traffic continued as normal, but then a few days after submitting the change of address traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that for the keywords I had targeted and performed well with before, I am now ranking much, much lower. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that this would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and now in its place is a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit exactly the same page structure as the previous site I have seen a drop in rankings. Any help would be greatly appreciated. Thanks!

    Read the article

  • Why old (301) links stay on Google when breaking site down to multiple domains

    - by Sampo Sarrala
    Some background: we used to have a single site and a single domain (let's call it mainsite.com) with product information; however, things have changed since then and the product database has grown fast. So we decided to move some major products/manufacturers under their own domains (let's call one of them subsite.com) while still using our main database/codebase. What we've done: added the subsite.com domain for product 1 by Great Products Co., with some new nice-looking front pages, info pages, etc., and detail pages that use information from the original db; redirected product/group links from mainsite.com using 301 redirects; verified that the redirects work as expected; and waited some time for Google reindexing (over 30 days, which I've heard should be more than enough). Results: if I search for our moved products on Google it finds and lists them, but with old links to our main site like mainsite.com/group/product1, when it should show links to the new site, subsite.com/product1. Links from Google redirect as they should, since as said the redirects are verified [301]. Main question: any reasons why Google would not follow the 301 redirects and update its links so that they point to our new mfg/product site subsite.com?

    Read the article

  • Advice on software infrastructure for a FLOSS bounty site

    - by michaeljt
    I am planning to set up a simple web site where people can offer bounties for work on FLOSS projects. Unfortunately I have no experience at web development (I am a C/C++ developer), so I was hoping someone might be able to suggest out-of-the-box packages (preferably Debian ones) I could use to build the site from. My idea of how the site would work is to keep things as simple as possible. The person proposing a bounty would enter a description with relevant links (particularly to a bugtracker entry with the project the work is to be done on, where the real discussion and work would take place) and information and place an initial contribution. Other people would be able to add (donate, not pledge) contributions, but any discussion would take place on the project's bugtracker. I am also planning to run a mailing list rather than a forum (at least initially), so that is not a requirement. Paypal seems to me to be the handiest payment mechanism. So overall what I need is probably a simple interface with Paypal integration and a simple database backend. I hope this is the right place for my question, if not I would be grateful for pointers to somewhere better. And of course, this is purely about the technical side, though I am more than happy to discuss other aspects of the project elsewhere.

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems to not be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). The timeline of what I've tried, starting on July 29th:
    Used 301 redirects for all pages (e.g. thenewhive.com/tag/art = newhive.com/tag/art). At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company.
    On August 5th: verified the new domain in Webmaster Tools (the old domain was already verified) and submitted a change of address request with Webmaster Tools / Configuration / Change of Address.
    Then after another week, on August 13th: went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages, all successfully, and clicked "Submit to Index" for the homepage.
    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues; Crawl Errors: no data available. From Health - Index Status: Total indexed 0, Ever crawled 42,490, Not selected 12, Blocked by robots 0. I'm really at a loss here, any help would be appreciated.
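
    As a sanity check while waiting on Google, the 301s themselves can be verified from the command line. A hedged example using the post's own URL mapping (the expected output below is simply what a correct permanent redirect should look like, not an actual capture):

        # Ask for headers only; a correctly configured redirect should answer
        # with a 301 status and a Location header pointing at the new domain.
        curl -I http://thenewhive.com/tag/art
        #   HTTP/1.1 301 Moved Permanently
        #   Location: http://newhive.com/tag/art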

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate the new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g. a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and try to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranged heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). In this new site process I plan to use 301 redirects to point all the old URLs to the new URLs (see the sketch below). My question is: what can I expect to happen to the page rankings in Google, in the short term and long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings have an effect?
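
    Since the new CMS already uses mod_rewrite, the old-to-new mapping can live alongside it. A minimal hedged sketch built from the post's own example URL (rule placement and exact patterns depend on the CMS's existing rewrite setup):

        # .htaccess sketch: permanently redirect an old static page to its new
        # CMS URL, e.g. /about_us.html -> /about-us/ ; repeat per old URL, or use
        # a RewriteMap defined in the server config if there are many pages.
        RewriteEngine On
        RewriteRule ^about_us\.html$ /about-us/ [R=301,L]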

    Read the article

  • SEO: Getting site to show in location-specific searches

    - by willvv
    I'm really new to this SEO world and I've been reading a lot to try and figure it out. We have a site, moodbond.com, that allows users to browse/create events anywhere, and we fill it with content from the main cities in the US. We would like it to show for searches for things like "events in san francisco" or "what to do in new york"; however, since the site is not really location-specific, I'm not really sure where to begin. I've been thinking of a couple of things - maybe you can help me decide if these would be a good way to start or if I should try something different. 1) Allow location-specific URLs (e.g. moodbond.com/browse/san-francisco) that just show the main page centered on San Francisco. 2) Change the headers/title of the page so they adapt automatically to the city being browsed (and change this dynamically as the user changes the location of the map). 3) Add internal links to different locations (e.g. add a link in the footer of the page that says "Events in Seattle" and makes the site load events in that city; this would probably depend on implementing #1). What do you guys think? Will any of these really help or should I look for a different approach? Any advice is welcome. Thanks

    Read the article
