Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.

Page 398/1180 | < Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >

  • WordPress is now nicely supported on SQL Server (and SQL Azure for that matter)

    - by Eric Nelson
    WordPress is enormously popular for blogs and full websites, thanks to the awesome ecosystem that has built up around it, the (relative) simplicity of getting it up and running, plus the flexibility to “bend it” in all sorts of directions. When I say bend, check out the following, which are all WordPress sites:

    My “back up blog” http://iupdateable.wordpress.com/
    My group's “odd site” :) http://ubelly.com
    My favourite “cheap games” site http://www.frugalgaming.co.uk/

    WordPress users typically run their sites on Linux and MySQL, although PHP (the language in which WordPress is written) can happily run on Windows. Both are fine technologies in their own right, but I (and probably a fair few others) would love to use WordPress with the technologies I know best (namely Windows, IIS and SQL Server). However, that has proven rather tricky to get working in practice – until now. Last month OmniTI released a patch for WordPress which provides SQL Server and SQL Azure support. In parallel, some fine folks inside Microsoft have created http://wordpress.visitmix.com, which contains information about running WordPress on the Microsoft platform, with a particular focus on SQL Server and SQL Azure. Top stuff!

    To run WordPress with SQL Server: download and install the WordPress on SQL Server distro/patch. You will then quite likely need to migrate: check out how to migrate to Windows and SQL Server by Zach Owens, who is moving his blog to Windows and SQL Server. Enjoy!

    Related links:
    Running PHP on IIS on Windows: http://php.iis.net/
    If PHP is not your thing, the following blog engines are .NET based:
    BlogEngine http://www.dotnetblogengine.net/
    DasBlog http://www.dasblog.info/
    Subtext http://subtextproject.com/ (which happens to power http://geekswithblogs.net, where my main blog is http://geekswithblogs.net/iupdateable)

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: My site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords.
    Background: My server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was friendly enough to modify the index.php file in such a fashion that it generated random sexually oriented keywords if the website was fetched as Google-bot. So if you fetched it as Google-bot, or it got indexed, you would get randomly generated keywords like:

    sex videos teenager teen sex
    adult sex preteen
    A LINK TO A RANDOM CONTENT OF MY WEBPAGE
    anime sex videos

    a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2,000 of them. I removed all the content, took the site down, replaced index.php with a static HTML page, and added an "ERROR 410" title to the website to signal that the content is no longer here and has been removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there, and, very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools:
    Question: How can I remove these filthy keywords and re-rank my website as a "normal" website as quickly as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you
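
    (A note on the 410 approach: crawlers act on the HTTP status line, not on the page title, so the static page needs to send a real 410. A minimal sketch in PHP, assuming the page is still served as index.php:)

        <?php
        // Send a genuine 410 Gone status; a title alone does not change
        // the HTTP status code that Google-bot sees.
        header('HTTP/1.1 410 Gone');
        ?>
        <!DOCTYPE html>
        <html>
        <head><title>ERROR 410 - Gone</title></head>
        <body><p>This content has been removed permanently.</p></body>
        </html>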

    Read the article

  • Web deploy (msdeploy), syncing everything but sites and pools (but include siteDefaults)

    - by jishi
    Today I do the following to sync two webservers but skip all site configuration:

        msdeploy -verb:sync -source:webServer -dest:webServer,computerName=web25:8080 -skip:objectName=section,absolutePath=system.applicationHost/sites -skip:objectName=section,absolutePath=system.applicationHost/applicationPools

    However, this also effectively skips siteDefaults, which I would like to sync (system.applicationHost/sites/siteDefaults). There doesn't seem to be a way to "include" a section to override the skip directive. And there doesn't seem to be a way to sync only the siteDefaults section from applicationHost either, since the appHostConfig source only seems to sync a specified site, not siteDefaults. Maybe it is possible to "skip" using an XPath expression or similar, so that only the site nodes are skipped but siteDefaults is included, but I find the documentation a bit confusing and my XPath is rusty.

    Read the article

  • How-To Geek Gets the Microsoft MVP Award, Thanks to You

    - by The Geek
    The How-To Geek has won a Microsoft MVP award for the second year in a row, and it’s all thanks to you, our great readers that keep the site going. Join us for some mutual back-patting and some terrible photography of all the award stuff. Of course, if you’re familiar with the MVP award you’ll probably know that it’s actually for a single person, but in my opinion the award belongs to the entire How-To Geek community, without which this site would be nothing.

    Read the article

  • Crawling a Content Folio

    - by Kyle Hatlestad
    Content Folios in WebCenter Content allow you to assemble, track, and access a logical group of documents and/or links. They let you manage the documents as just a list of items (simple folio) or organized as a hierarchy (advanced folio). The built-in UI in content server allows you to work with these folios, but publishing them or consuming them externally can be a bit of a challenge. The folios themselves are actually XML files that contain the structure, attributes, and pointers to the content items. So to publish this somewhere, such as a Site Studio page, you could perhaps use an XML parser to traverse the structure and create your output. But XML parsers are not always the easiest or most efficient to use. In order to more easily crawl and consume a Content Folio, Ed Bryant, Principal Sales Consultant, wrote a component to do just that. His component adds a service which does all the work for you and returns the folio structure as a simple resultset. So consuming and publishing that folio on a Site Studio page or in your portal using RIDC is a breeze! For example, let's take an advanced Content Folio example like this: If we look at the native file, the XML looks like this: But if we access the folio using the new service - http://server/cs/idcplg?IdcService=FOLIO_CRAWL&dDocName=ecm008003&IsPageDebug=1 - this is what the result set looks like (using the IsPageDebug parameter). Given this as the result set, it makes it very easy to consume and repurpose that folio. You can download a copy of the sample component here. Special thanks to Ed for letting me share this component!
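
    (For illustration, a sketch of consuming the service from Java via RIDC, assuming the standard oracle.stellent.ridc client API; the connection string, credentials, and the result-set name "FolioItems" are placeholders, since the component's actual result-set name isn't shown here:)

        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;
        import oracle.stellent.ridc.model.DataObject;
        import oracle.stellent.ridc.model.DataResultSet;
        import oracle.stellent.ridc.protocol.ServiceResponse;

        public class FolioCrawlExample {
            public static void main(String[] args) throws Exception {
                IdcClientManager manager = new IdcClientManager();
                // Connect over the intradoc socket port (adjust host/port).
                IdcClient client = manager.createClient("idc://server:4444");
                IdcContext context = new IdcContext("sysadmin");

                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "FOLIO_CRAWL"); // service added by the component
                binder.putLocal("dDocName", "ecm008003");

                ServiceResponse response = client.sendRequest(context, binder);
                DataBinder result = response.getResponseAsBinder();

                // "FolioItems" is a guess at the result-set name the component returns.
                DataResultSet items = result.getResultSet("FolioItems");
                for (DataObject row : items.getRows()) {
                    System.out.println(row.get("dDocName"));
                }
            }
        }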

    Read the article

  • Any ideas for automatic initial/scripted configuration of a NetApp?

    - by maddenca
    The background is to make a solution where we can automate as much as possible the building of ‘pods’ that include server/network/storage and that will be built at remote sites. The ideal world version would be that we create a single management server which is preconfigured with DHCP/TFTP/or whatever. This management server is racked with a Cisco UCS, FAS31x0, etc. at a build site and is then transported to the final customer site, where on power-up it almost configures itself, or at least bootstraps itself far enough that a remote skilled 'expert' can complete the setup of the pod. Ideas (they don't have to resemble 100% of the above) would be helpful.

    Read the article

  • SVN checkout to Samba directory

    - by Jon H
    I'm trying to svn co into a directory on Ubuntu, shared via Samba to OS X, but I get the following error (on OS X):

        svn: In directory 'site/product/tests'
        svn: Can't open file 'site/product/tests/.svn/tmp/text-base/._base.py.svn-base': No such file or directory

    My smb.conf file includes the following changes:

        unix extensions = no
        browseable = yes
        public = yes
        writable = yes
        delete readonly = yes
        create mask = 0775
        directory mask = 0775
        valid users = %S
        read only = no

    The checkout works fine locally (on the Ubuntu machine). What am I missing?
    More detail: later inspection showed that the svn error couldn't find the file with 3, then 2 underscores:

        .___init__.py.svn-base

    whereas listing the directory in OS X showed 2, then 2 underscores:

        __init__.py.svn-base

    And listing the same directory in a successful checkout on Ubuntu shows nothing (because it's a temporary directory?). I've tried the mangled = no setting in the share settings, to no effect.
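
    (The ._base.py.svn-base name is an OS X AppleDouble/resource-fork companion file. One commonly suggested workaround, offered here as an untested sketch, is to veto those files in the share definition so OS X stops trying to create or see them; whether this actually fixes the checkout depends on whether OS X insists on writing the forks:)

        veto files = /._*/.DS_Store/
        delete veto files = yes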

    Read the article

  • Is there any way to use arrays in a puppet module (not in template)?

    - by KARASZI István
    I want to use Puppet to manage a Hadoop cluster. On the machines we have several directories which must be created and have their permissions set. But I'm unable to pass array values to defined types.

        define hdfs_site( $dirs ) {
          file { $dirs:
            ensure => directory,
            owner  => "hadoop",
            group  => "hadoop",
            mode   => 755;
          }
          file { "/opt/hadoop/conf/hdfs-site.xml":
            content => template("hdfs-site.xml.erb"),
            owner   => "root",
            group   => "root",
            mode    => 644;
          }
        }

        define hadoop_slave( $mem, $cpu, $dirs ) {
          hadoop_base { mem => $mem, cpu => $cpu, }
          hdfs_site { dirs => $dirs, }
        }

    hadoop_base is similar to hdfs_site. Thanks!
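
    (A hedged guess at the underlying problem, separate from the array question: declaring a defined type in Puppet requires a resource title, which the hadoop_slave body above omits. A minimal sketch of the titled form, using $name from the enclosing define:)

        define hadoop_slave( $mem, $cpu, $dirs ) {
          hadoop_base { "${name}-base":
            mem => $mem,
            cpu => $cpu,
          }
          hdfs_site { "${name}-hdfs":
            dirs => $dirs,
          }
        }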

    Read the article

  • 5 Mac Applications For Web And Graphic Design

    - by Jyoti
    In this article: free applications that are useful and effective for developing and creating websites on your Mac. Without further ado, here are 5 excellent Mac applications for web and graphic design.

    Fotoflexer: Fotoflexer claims to be “The world’s most advanced online image editor”. It offers completely free access to numerous features such as photo effects, graphics, shapes, morphing, and the creation of collages. You can also integrate and share your art with social sites like MySpace, Flickr, Facebook, and more. This can be an important app if the site you are creating is going to use applications.

    Simple CSS: With Simple CSS you can create Cascading Style Sheets from scratch or edit them right from the comfort of your desktop. Update styles on multiple pages all at once and reduce the data transfer usage on your page for faster loads.

    Blender: Blender is open source software that allows you to create 3D animation with interactive playback, and leaves you with the option to optimize the style of your site with a few graphics. You can create animations with shades of colors, glossy features, soft shadows and advanced rendering features.

    JAlbum: JAlbum is a very useful app that allows you to create stylish photo galleries to publish on the web. All you have to do is simply drag selected folders into a pane, where any images contained within the folder will automatically be arranged into a photo gallery. You can add several different themes and templates to enhance the appearance of your gallery, then grab the HTML code and publish the complete gallery onto the web.

    Colorate: With Colorate you can create harmonized color palettes along with color schemes. Generate these palettes for images, photographs and more.

    Read the article

  • How do I configure IIS to post logs to SQL Server?

    - by stacker
    How do I configure IIS to post logs to SQL Server? I want to save every log entry for my site. The site has a lot of views, and the data can get big in a very short time. In addition, I want to show usage analytics, in real time and for all time. Does it make sense to save all the logs in SQL Server? What pros and cons does this approach have? What other solutions could I use?
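
    (For reference, IIS ships an ODBC logging option that writes each request to a database table. A sketch of the kind of table it expects, loosely based on the logtemp.sql script in the inetsrv folder; the column names and types here are from memory, so check that script for the authoritative schema:)

        -- Loosely based on IIS's logtemp.sql; verify against your IIS version.
        CREATE TABLE inetlog (
            ClientHost     varchar(255),
            username       varchar(255),
            LogTime        datetime,
            service        varchar(255),
            machine        varchar(255),
            serverip       varchar(50),
            processingtime int,
            bytesrecvd     int,
            bytessent      int,
            servicestatus  int,
            win32status    int,
            operation      varchar(255),
            target         varchar(255),
            parameters     varchar(255)
        )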

    Read the article

  • In IIS why do HTTP requests use the host header, and FTP requests do not

    - by Keeno
    So.... in IIS, if you use the built-in FTP, you need to combine the FTP host header with the FTP username, e.g.

        www.hello.com|domain/username

    So the FTP program gets its "hook" from the username. However, you can connect to the FTP site using www.hello.com:21 over the FTP port. Why then doesn't the FTP service work the same way as the HTTP service? IIS knows what site to serve back based on the host header, after all.... Thanks!

    Read the article

  • Windows XP runs New Hardware Wizard for usb keyboard and mouse, can't find drivers

    - by Randy Orrison
    I have a PC that up until a couple of days ago was working fine. I moved it from one site to another, and now when I plug in the USB mouse or keyboard (the same ones that were working previously), XP brings up the New Hardware Wizard. Going through it, the correct keyboard and mouse are identified, but XP can't find the drivers. I've tried manually searching for the driver (using the Have Disk option); the first file it looks for is in the c:\i386 directory, but that installs a generic HID mouse device, and the system then runs the hardware wizard for a new "unknown" device. The system was SP2; I have installed SP3 in hopes that would help, and I've also downloaded and installed the mouse drivers from Dell's site (there are no specific drivers for the keyboard), with no change. Before I completely reinstall XP, is there anything else I should try?

    Read the article

  • Forwarding a subdomain to main domain using Godaddy

    - by Ryan Hayes
    I have a current blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager? Bonus: what is the background on why I am getting a 403 error the current way?

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    UPDATE 12/7/2010: The error on the site has been fixed; you can no longer view it from my site.
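
    (On the bonus question, a plausible explanation: a CNAME only aliases the name in DNS; the web server answering on that IP must also be configured to recognize the subdomain, otherwise the request falls through to a default host that refuses it. The ErrorDocument message suggests Apache, so a hypothetical virtual host that would accept both names looks like this; the names and path are illustrative only:)

        <VirtualHost *:80>
            ServerName ryanhayes.net
            ServerAlias www.ryanhayes.net blog.ryanhayes.net
            DocumentRoot /var/www/ryanhayes.net
        </VirtualHost>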

    Read the article

  • Regarding compatibility of Intel Pentium D 805 CPU with new motherboard

    - by aniruddhabhide
    I currently have an old configuration with an Intel Pentium D 805 CPU and Intel D101GGC chipset. Now I am planning to upgrade my system, except for the CPU and hard disk, since they don't fit in the budget. QUESTION: I am planning to get a Gigabyte GA-B75M-D3H motherboard, which has an LGA1155 socket, but my processor has a PLGA775 socket type. Will my CPU fit in the new motherboard's socket? LINKS: CPU specs (Intel site): http://ark.intel.com/products/27511/Intel-Pentium-D-Processor-805-2M-Cache-2_66-GHz-533-MHz-FSB New motherboard specs (vendor site): http://www.flipkart.com/gigabyte-ga-b75m-d3h-motherboard/p/itmdacp36gegyeqt?pid=MBDDACP2GUBGFPFM

    Read the article

  • Cloud computing - database loading question

    - by workwise
    The following is the situation; I want to know whether what I want is possible in cloud computing and whether it is the best way for me:
    1) My main site has a database with tables with millions of rows, and entries are added almost every second.
    2) I will set up a MySQL mirror, so there will be a backup database always in sync with the main one.
    3) There are a few tens of thousands of images, and growing, so say the total size of images is a few tens of gigabytes. I will be keeping the image data in sync on the backup server as well.
    4) There can be short periods where traffic goes to 100x the average traffic.
    5) I will be using memcache heavily; most database content and even frequently used disk files/images will be in RAM.
    I want the main site to run on a dedicated server. The backup server is, say, an Amazon EC2 instance. Note that since it is a live backup, I need to run a small instance continuously. When I anticipate high traffic, I want to be able to run a large instance on the cloud and transfer the traffic there. The main point is: I do not want to spend time "loading" the database on the large instance, as that can typically take a few minutes or even hours (from experience). So is it possible to just scale the memory/CPU on demand, without having to load the database or sync up the filesystem? I want to set up my backup scripts etc. just ONCE. Thanks, JP

    Read the article

  • Error while starting web application.

    - by Lalit
    When you right-click a Web site in the Microsoft Internet Information Services (IIS) Microsoft Management Console (MMC) snap-in and then click Start, the Web site does not start, and you receive the following error message:

        The process cannot access the file because it is being used by another process.

    What do I have to do? To resolve this issue I found this solution at http://support.microsoft.com/kb/890015: you must use the Netstat.exe utility at the command line to see if another process is using port 80 or port 443. But how do I check whether these ports are in use, and what should their status be? The second solution involves the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\ListenOnlyList, but this key is not found.
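
    (For reference, a quick way to see what is bound to ports 80 and 443; these are standard Windows commands, and 1234 below is a placeholder for whatever owning PID netstat reports in its last column:)

        netstat -ano | findstr ":80"
        netstat -ano | findstr ":443"
        rem Map the reported PID to a process name:
        tasklist /FI "PID eq 1234"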

    Read the article

  • Identifying the IP address of the folders in the Bluehost server…

    - by Aruna
    Hi, we have hosted our site on a Bluehost server, where we have two websites running. In our Bluehost server file manager we have two separate folders, namely abc and xyz, which point to the sites abc.com and xyz.com. I don't know how to find the IP addresses of those folders. Note: we faced some problems with abc.com and have redirected abc.com to xyz.com. I am trying to find the IP addresses of abc.com and xyz.com. How do I do so on the Bluehost server?
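
    (One note: folders on a shared host don't have IP addresses of their own; the domains that point at them do. A quick check from any machine, using standard commands; on a shared server both will usually resolve to the same IP:)

        nslookup abc.com
        nslookup xyz.com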

    Read the article

  • Finding a laptop to fit my specs

    - by Mick
    I have a small list of characteristics that are required in a new laptop and have been looking for websites where I can specify my requirements and see which laptops fit the bill. I want to specify OS / RAM / HDD size / CD drive / screen resolution / battery life. I have found several sites so far where I can specify everything except battery life. Does such a site exist? I know that this is not a shopping site, but before any moderator rushes to close this question, please note that I am not asking "where do I buy this laptop", merely which laptops fit these specs. I don't even care what country the website is based in.

    Read the article

  • Nagios3 check_httpname gives 503 response; from command line I get a 200 response

    - by Michael T. Smith
    We're using Nagios to monitor our site (and a bunch of other stuff). For some odd reason, when I test out the command

        /usr/lib/nagios/plugins/check_http -H 'domainname.com'

    the response that comes back is HTTP/1.1 200 OK, but when I set up the service to do it:

        # Check that domain is running
        define service {
                hostgroup_name          hostgroup
                service_description     host site
                check_command           check_httpname!domainname.com
                use                     generic-service
                notification_interval   1       ; set > 0 if you want to be renotified
        }

    the response that comes back is HTTP/1.1 503 Service Unavailable. Does anyone know why this would be happening?
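
    (The usual suspect is the check_httpname command definition, which isn't shown in the question: if it passes the address with -I but omits -H, the request carries no matching Host header and can land on a different virtual host, returning 503. A hypothetical definition that mirrors the working command line:)

        define command {
                command_name    check_httpname
                command_line    $USER1$/check_http -H $ARG1$
        }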

    Read the article

  • Where is a good place to start to learn about custom caching in .Net

    - by John
    I'm looking to make some performance enhancements to our site, but I'm not sure exactly where to begin. We have some custom object caching, but I think that we can do better.
    Our business: we aggregate news stories on a news type of web site. We get approximately 500-1000 new stories per week. We have index pages that show various lists of the items and details pages that show the individual stories.
    Our current use case, getting an individual story:
    1. User makes a request.
    2. The Data Access Layer (DAL) checks to see if the item is in cache and if the item is fresh (15 minutes).
    3. If the item is not in cache or is not fresh, retrieve the item from SQL Server, save it to cache and return it to the user.
    Problems with this approach: the pull nature of caching means that users have to pay the waiting cost every time the cache is refreshed. Once a story is published, it changes infrequently, and I think that we should replace the pull model with something better.
    My initial thought is that stories should ALL be stored locally in some type of dictionary. (Cache, or is there another, better way?) If the story is not found, then make a trip to the database, update the local dictionary and send the item back. Since there may be occasional updates to stories, this should be an entirely separate process, invisible to the user. I watched a video by Brent Ozar, How StackOverflow Scales SQL Server, in which Brent states "the fastest database query is the one that you don't make".
    Where do I start? At this point, I don't know exactly what the solution is. Is it caching? Is there a better way of using local storage? Do I use a Dictionary, an OrderedDictionary, a List? It seems daunting, and I'm just looking for some good starting points to learn more about how to do this type of optimization.
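
    (One illustration of the push model the question is circling: keep every story in an in-memory dictionary that is written when a story is published or edited, so readers never pay a refresh cost. A minimal C# sketch; the Story type and store name are hypothetical, and it ignores cross-server invalidation:)

        using System.Collections.Concurrent;

        public class Story
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        // Push-model store: the dictionary is the primary read path and is
        // updated only when a story is published or edited.
        public static class StoryStore
        {
            private static readonly ConcurrentDictionary<int, Story> Stories =
                new ConcurrentDictionary<int, Story>();

            // Called at startup for all stories, and again on every edit/publish.
            public static void Publish(Story story)
            {
                Stories[story.Id] = story;
            }

            public static Story Get(int id)
            {
                Story story;
                return Stories.TryGetValue(id, out story) ? story : null;
            }
        }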

    Read the article

  • new project; entire node.js app

    - by Jared
    I have been looking into Node.js, Express and NowJS and love how easy it is to have real-time interactions between clients. My background is mostly CodeIgniter MVC using PHP and MySQL. I want to remake a current web project of mine from scratch to make everything better and more real-time with this newer technology. After researching and doing test examples, I want to use Node.js, Express and NowJS for the real-time interactions once someone connects via Socket.IO, to pull data back to clients, but use CodeIgniter for control of the site and user management, a possible shopping cart/store, and pretty much everything else. This is purely due to time constraints and the fact that I am already familiar with doing it that way. I have been looking at MongoDB as an alternative to MySQL. Basically the app is going to be multiple chat rooms, all on one page, with notifications and private messaging. Lots of data transfer and images. Before I start piecing it together, I wanted to hear from people who have already done something similar. My model would use CodeIgniter and MySQL to render the page, then connect clients to a Node.js server and broadcast using Express and NowJS. Would MongoDB be better than MySQL for storing tons of messages and data? Also, does it make sense not to build the whole site on Node.js, and to kind of piece it together like that? I was asked to repost this here as it was not up to the format for SO; the OP is at http://stackoverflow.com/questions/12649469/new-project-need-some-start-up-advice-node-js-app#comment17062924_12649469
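
    (For the real-time half, a minimal sketch in the Socket.IO 0.x style of the era; the event names and port are made up, and the CodeIgniter-rendered page would simply open a connection to this server:)

        var io = require('socket.io').listen(8080);

        io.sockets.on('connection', function (socket) {
            // Hypothetical event: a client joins a chat room.
            socket.on('joinRoom', function (room) {
                socket.join(room);
            });
            // Hypothetical event: broadcast a message to everyone in that room.
            socket.on('chatMessage', function (room, message) {
                io.sockets.in(room).emit('chatMessage', message);
            });
        });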

    Read the article

  • Google Mini Search Does Not Return Results

    - by James Lawruk
    I pointed the Google Mini at a small intranet site (8 simple HTML pages). It crawls all 8 pages just fine, but a simple search for a word clearly contained within the body of the pages returns 0 results. If I enter a search like "site:mysite.net", it returns all 8 pages. If I enter a search with a word contained in a URL, it will return that URL. In the search results, only the URL is returned in the list items; there is no page title or the blurb text you normally see in Google search results. It's as if it is only indexing the URL and skipping the page title, body, etc. I have an old software version, 3.4.14, but I wanted to try to get this thing working without an upgrade. It worked before for another domain, so why would it not work now? Any ideas?

    Read the article
