Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Best Free software for hosting user guides

    - by Hippyjim
Hi all. After having to clean up spam from a MediaWiki install for the umpteenth time, despite a reCAPTCHA plugin "preventing" automated signups, I'm wondering if MediaWiki is the right choice of CMS for hosting user manuals and guides. I've always loved the way wikis let guides be edited and commented on collaboratively, but I'm getting tired of dealing with automated vandals. I've disabled edits & signups for now, but as I'm having to go through the pain of cleaning thousands of junk pages, I'm beginning to think I should cut my losses and look for a better alternative. Does anyone know of a suitable FOSS application (preferably PHP/MySQL based) that would be simple for a non-coder (our manual writer) to edit, but that has all the interconnectivity and searchability of a wiki? Or should I just bite the bullet and lock the wiki down even further?

    Read the article

  • User Experience Guidance for Developers: Anti-Patterns

    - by ultan o'broin
Picked this up from a recent Dublin Google Technology User Group meeting: Android App Mistakes: Avoiding the Anti-Patterns by Mark Murphy, CommonsWare. It's an interesting approach of "anti-patterns" aimed at mobile developers (in this case Android), looking at the best way to use code and what's in the SDK while combining it with UX guidance (the premise being that the developer does the lot). Interestingly, the idea came through that developers need to stop trying to make one OS behave like another--on UX grounds. It's also pretty clear that a web-based paradigm is being promoted for Android (translators tell me that translating an Android app reminded them of translating web pages too). I haven't seen the "anti" approach before; developer cookbooks and design patterns, sure. Check out the slideshare presentation.

    Read the article

  • dynamic urls and links on one web page

    - by John
I am trying to figure out how to create dynamic links and URLs on a static webpage. What I want to do is the following: I have a single webpage, for example MYWEBPAGEdotCOM/INDEX.HTML, that will always look the same, except for one link on the page. The link on the page would be, for example: LINK TO AFFILIATE: affiliatedotCOM/my-affiliate_code_here_DYNAMIC_REFERER. The only thing that would change is the "DYNAMIC_REFERER", driven by the dynamic URL used to reach this page: MYWEBPAGEdotCOM/INDEX.PHP_id=test1 MYWEBPAGEdotCOM/INDEX.PHP_id=test2 MYWEBPAGEdotCOM/INDEX.PHP_id=test3 MYWEBPAGEdotCOM/INDEX.PHP_id=test4, which would change only the dynamic link on the page to: affiliatedotCOM/my-affiliate_code_here_test1 affiliatedotCOM/my-affiliate_code_here_test2 affiliatedotCOM/my-affiliate_code_here_test3 affiliatedotCOM/my-affiliate_code_here_test4. Can someone tell me how I could go about doing this? I just don't want to have to make hundreds of pages; doing it dynamically would prevent me from having to do so.
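
    A minimal sketch of how this could work in PHP (assumptions: the parameter name id and the affiliate base URL below are placeholders for the asker's real values):

        <?php
        // index.php -- build the affiliate link from the ?id= query parameter.
        // The parameter name "id" and the affiliate URL are placeholder values.
        $ref = isset($_GET['id']) ? $_GET['id'] : 'default';
        // Whitelist the value so arbitrary input is never echoed into the page.
        if (!preg_match('/^[A-Za-z0-9_-]+$/', $ref)) {
            $ref = 'default';
        }
        $link = 'http://affiliate.example.com/my-affiliate_code_here_' . urlencode($ref);
        ?>
        <a href="<?php echo htmlspecialchars($link); ?>">LINK TO AFFILIATE</a>

    With something like this in place, index.php?id=test1 through index.php?id=test4 all serve the same page while only the link varies, so no extra pages are needed.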

    Read the article

  • .XML Sitemaps and HTML Sitemaps Clarification

    - by MSchumacher
I've got a website with about 170 pages and I want to create an effective sitemap for it, as it is long overdue. The website is internally linked very well, but I still want to take advantage of a sitemap to let search engines crawl my site more easily and hopefully increase my website's PR. Though I am slightly confused about what I must do: Is it necessary to create an XML sitemap AND an HTML sitemap (both)? ... Because I've never worked with XML ... where do I put this file once it's created? In the root folder? So I assume that this sitemap.xml is ONLY to be read by spiders and NOT by website visitors, i.e. no visitor on my website is going to visit the page sitemap.xml, am I correct? ... Hence why I should also create an HTML sitemap (sitemap.htm)?
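
    For reference, a minimal sitemap.xml follows the sitemaps.org protocol (the URL below is illustrative); it conventionally lives in the site root, e.g. http://www.example.com/sitemap.xml, and is indeed read by crawlers rather than visitors:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/</loc>
            <lastmod>2012-10-01</lastmod>
          </url>
          <!-- one <url> entry per page, up to 50,000 per file -->
        </urlset>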

    Read the article

  • Switching CSS to use asset pipeline in Rails?

    - by John
I have a lot of legacy CSS files from what was a Rails 2.x app that got upgraded to Rails 3.2.8, and I want to switch over to using the Rails asset pipeline for stylesheets. The issue is that the CSS is messy: huge files, duplicate file names, and an unorganized folder structure. After looking through individual pages, trying to add individual stylesheets and folders into the asset pipeline, and spending some cycles debugging, I realized there's probably a better approach. Is there a way to test that the old CSS matches the asset pipeline CSS? What are some good tools for testing and debugging CSS?
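
    A rough sketch of the usual first step (assuming the stock Rails 3.2 layout): move the legacy files under app/assets/stylesheets and let a Sprockets manifest pull them all in, so the old and new output can be compared page by page:

        /* app/assets/stylesheets/application.css -- Sprockets manifest */
        /*
         *= require_self
         *= require_tree .
         */

    The layout then needs only <%= stylesheet_link_tag "application" %>. From there, screenshot-diffing key pages before and after the switch is a low-tech but effective way to verify the old CSS matches the pipeline's output.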

    Read the article

  • Running 32-bit SSIS in a 64-bit Environment

    - by John Paul Cook
    After my recent post on where to find the 32-bit ODBC Administrator on a 64-bit SQL Server, a new question was asked about how to get SSIS to run with the 32-bit ODBC instead of the 64-bit ODBC. You need to make a simple configuration change to the properties of your BIDS solution. Here I have a solution called 32bitODBC and it needs to run in 32-bit mode, not 64-bit mode. Since I have a 64-bit SQL Server, BIDS defaults to using the 64-bit runtime. To override this setting, go to the property pages...(read more)
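
    The truncated excerpt is most likely referring to the Run64BitRuntime property on the project's Debugging property page (an assumption, since the excerpt cuts off before naming it). For packages run outside BIDS, a common equivalent is to invoke the 32-bit dtexec directly; the paths below assume a default SQL Server 2008 install and a placeholder package name:

        REM 64-bit dtexec lives here: C:\Program Files\Microsoft SQL Server\100\DTS\Binn\DTExec.exe
        REM 32-bit dtexec (use this one when 32-bit ODBC drivers are required):
        "C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /F "MyPackage.dtsx"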

    Read the article

  • Google webmaster tools: parameters that only apply on one page

    - by Imagine digital
I'm trying to get my e-commerce website onto Google and am still figuring out how it all works. Now, I have seen this feature named URL parameters, allowing me to set different parameters that affect page content to be indexed (one can also set parameters that do not affect the page, but that does not apply to me). The question I have is whether and how I should add parameters that only exist on some pages of my site. Example: the homepage of my site is www.mysite.nl — no parameters at all. But when a user clicks the navigation bar, it links to www.mysite.nl/itemList.php?category=....&subCategory=.... The parameters category and subCategory define whether there is content on my itemList page and what that content is; it gets matching products out of my database based on those two variables. The question: how do I make sure I apply Google's URL Parameters feature correctly for my website?

    Read the article

  • Font rendering in Firefox is blurry

    - by Mehrdad
A picture is worth a thousand words... so does anyone know how to fix this font blurriness in Firefox? (You'll need to right-click the picture below and go to View Image to see it full-size; it's too small to see anything here.) Note: my other applications (and the Firefox non-client area, as you can see in the screenshot) are completely fine, so obviously going to System > Appearance and changing the font settings isn't fixing the situation. Edit: Not letting web pages use their own fonts doesn't help either: see how the upper one is still sharper? Also, Firefox's own menu bar doesn't render the same way as the page content (menu bar below, page content above). They're both Segoe UI.

    Read the article

  • What good social networking site solutions are there?

    - by ZetsubouWebmaster
What good free social networking site solutions are there? I tried many options, but most of them are either too complicated, too simple, or just do not work... I tried: Dolphin, DZOIC Handshakes, Elgg, Oxwall, SocialEngine, and some plugins for WordPress and other CMSs. I don't need much, just: groups, chats, forums, profiles, PMs, photos, pages, comments, search, and statistics — most of which are included in pretty much every CMS out there... but not all... So, what good solutions are there? I don't mind paying some money (I guess no more than $200), but I'd prefer a free open source engine. Of course, it should be PHP + MySQL based.

    Read the article

  • Canonical url for a home page and trailing slashes

    - by serg
My home page could potentially be linked as: http://example.com http://example.com/ http://example.com/?ref=1 http://example.com/index.html http://example.com/index.html?ref=2 (the same page is served for all those URLs). I am thinking about defining a canonical URL to make sure Google doesn't consider those URLs to be different pages: <link rel="canonical" href="/" /> (relative) <link rel="canonical" href="http://example.com/" /> (trailing slash) <link rel="canonical" href="http://example.com" /> (no trailing slash) Which one should be used? I would just go with /, but messing with canonical seems like scary business, so I wanted to double-check first. Is it a good idea at all to define a canonical URL for a home page?
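
    For what it's worth, a sketch of the conventional form: rel=canonical is generally given as an absolute URL (relative canonicals are legal but easy to get wrong), and for the root page the last two variants identify the same resource anyway, since HTTP treats an empty path as /. The trailing-slash spelling is the customary one:

        <link rel="canonical" href="http://example.com/" />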

    Read the article

  • Oracle Solaris 11.1 Security Lab

    - by user12608073
Recently I developed a set of lab exercises for an Oracle OpenWorld Hands On Lab, entitled HOL10201, Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications. This explored the new Extended Policy for privilege assignments in Oracle Solaris 11.1. Today, Oracle Solaris 11.1 has been officially released via the Package Repository. The release and branch are numbered 0.5.11-0.175.1.0.0.24.2, which means it is based on build 24b of 11.1, which is, in turn, based on build 175a of 11.0. There is a good summary of new features available here: Oracle Solaris 11.1 - What's New. Pages 5 through 7 give an overview of some of the new security enhancements. There is much more information available in the newly published documentation for Oracle Solaris 11.1. I plan to explore some of these enhancements in a series of blog entries. Meanwhile, I've published a copy of the lab materials, which you can try out with this new release.

    Read the article

  • "We explore on the iPad, we buy on a computer": a Miratech study

"We explore on the iPad, but we buy on a computer" — a study by Miratech. Miratech has carried out a study comparing web browsing on the iPad and on a computer. Conclusion: the computer is much more efficient for specific tasks, while the iPad is more fun for exploring content. Here is the content of the study: A sample of 20 evenly distributed users, all familiar with the iPad, was tested in our user-testing labs. They were asked to browse five websites well known in France that also have an iPad application (Amazon, La Redoute, Allociné, Les Pages Jaunes and Voyages SNCF). We tested the na...

    Read the article

  • Connecting Google Analytics with Custom Search Engine AdSense

    - by Yochai Timmer
    I have a Custom Search Engine that I've created with AdSense. I've put that search engine as a site search in my Google Sites page. I've connected both the Custom Search Engine and the Google Site to my Analytics page via their settings pages. Now, I'm trying to get Analytics to show me the AdSense for Search statistics. I've managed to connect the Google Sites page, to the Analytics, and I can see the search statistics in the Analytics as well. But I can't get it to show the actual AdSense for Search statistics from the Custom Search Engine. How can I configure everything so I can get the AdSense for Search statistics of my Custom Search Engine in my Analytics page?

    Read the article

  • Ask the Readers: Favorite Web Clipping Tool?

    - by Jason Fitzpatrick
Bookmarking is great if you want a link to visit later, but what if you want to save the page itself for later perusal? This week we want to hear all about your favorite web clipping tool and how you use it to read what you want, when you want it. Web clipping tools are simple tools (browser extensions, bookmarklets, etc.) that make it easy to clip text and multimedia elements from web pages in order to archive them and/or read them at a later date. Whether you clip to a bursting at the seams web-notebook or you clip to send to your Kindle, we want to hear about your favorite tools and how they fit into your reading workflow. Sound off in the comments and then make sure to check back on Friday for the What You Said roundup where we highlight popular picks and clever tips.

    Read the article

  • Is there a Content Management System that allows multiple & independent blogs to be running on one domain?

    - by Ron
Hello webmasters, I am a WordPress fan, and I'm now building a new site and I'm not sure which CMS can achieve what I'm trying to do. I am building a food blog network for a bunch of cities in the US, and I want my city pages to be independently running blogs themselves. So basically... Home page - its own blog with its own users, talking about food in general. Dallas page (child of home page) - its own blog with its own users. Chicago page... so on and so forth. The layout and design will be all the same; I'm just trying to achieve 25-50 independent blogs on one domain. How can I achieve this? I'm hoping that I don't have to install WordPress into as many subdomains as I create... Thank you for your help in advance. -RP
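
    One option worth evaluating here (a sketch, not a definitive recommendation) is WordPress Multisite, which serves many blogs, each with its own user base, from a single installation on one domain; enabling the network setup tools is a one-line change in wp-config.php:

        /* wp-config.php -- enables Tools > Network Setup (WordPress 3.0+) */
        define('WP_ALLOW_MULTISITE', true);

    After running the network setup, sub-directory sites such as example.com/dallas/ and example.com/chicago/ can each run as an independent blog sharing one theme.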

    Read the article

  • Website development from scratch v/s web framework [duplicate]

    - by Ali
This question already has an answer here: What should every programmer know about web development? Do people develop websites from scratch when there are no particular requirements, or do they just pick up an existing web framework like Drupal, Joomla, or WordPress? The requirements are similar in most cases: if personal, it will be a blog or image gallery; if corporate, it will be information pages that can be updated dynamically, along with a news section. And similarly, there are other requirements that can be fulfilled by WordPress, Joomla or Drupal. So, is it advisable to develop a website from scratch, and why? Update: to explain more, as I got a comment from @Raynos (thanks for the comment and for helping me clarify the question), the question is about: Should websites be developed and designed fully from scratch? Should they be done using a framework like Spring, Zend, or CakePHP? Should they be done using a CMS like Joomla, WordPress, or Drupal (people in the East are using these as frameworks)?

    Read the article

  • Developing an analytics's system processing large amounts of data - where to start

    - by Ryan
Imagine you're writing some sort of web analytics system — you're recording raw page hits along with some extra things like tagging cookies, etc., and then producing stats such as: which pages got the most traffic over a time period; which referrers sent the most traffic; goals completed (a goal being a view of a particular page); and more advanced things like which referrers sent the most visitors who later hit a goal. The naive way of approaching this would be to throw it all in a relational database and run queries over it — but that won't scale. You could pre-calculate everything (have a queue of incoming 'hits' and use it to update report tables) — but what if you later change a goal? How could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start — methods and examples, architecture, technologies, etc.?
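
    To make the pre-calculation idea concrete, here is a sketch of one possible MySQL-flavoured rollup (all table and column names are illustrative): raw hits stay append-only, and a per-day summary is rebuilt by an offline job, so redefining a goal means re-running the job over the affected days only rather than recomputing everything:

        -- Append-only raw data.
        CREATE TABLE hits (
          id      BIGINT AUTO_INCREMENT PRIMARY KEY,
          page    VARCHAR(255) NOT NULL,
          referer VARCHAR(255),
          hit_at  DATETIME NOT NULL
        );

        -- Daily summary, safe to rebuild one day at a time.
        CREATE TABLE daily_page_hits (
          day       DATE NOT NULL,
          page      VARCHAR(255) NOT NULL,
          hit_count INT NOT NULL,
          PRIMARY KEY (day, page)
        );

        -- Rebuild a single day's slice of the summary.
        REPLACE INTO daily_page_hits (day, page, hit_count)
        SELECT DATE(hit_at), page, COUNT(*)
        FROM hits
        WHERE hit_at >= '2012-06-01' AND hit_at < '2012-06-02'
        GROUP BY DATE(hit_at), page;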

    Read the article

  • Replace %26 in htaccess to %2526

    - by Patrick
I would like .htaccess to rewrite example.com/something_%26_else into example.com/something_%2526_else. I'm importing a bunch of pages that have ampersands in the title from MediaWiki; these are encoded as %26. Drupal, for various reasons, has decided to double-encode the URL, so it becomes %2526. I simply can't create the aliases within Drupal, so I have to use .htaccess. This is what I have as my rule so far: RewriteRule ^w/([^%26]+)\%26(.*)$ w/$1\%2526$2 [R=301] I asked this question three months ago on Stack Exchange and was not able to get it working. I tried hiring a contractor for this but was unable to find one. So this is my last-ditch effort before I completely give up. I really appreciate the help. All the best, Patrick
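
    One complication worth noting: Apache decodes %26 to & before RewriteRule ever sees the path, so the encoded form is only visible in %{THE_REQUEST}. An untested sketch along those lines (the NE flag stops Apache from re-escaping the literal %2526 in the redirect; treat the whole rule as an assumption to verify, not a working answer):

        RewriteCond %{THE_REQUEST} \s/w/([^\s%]+)%26(\S+)
        RewriteRule ^w/ /w/%1\%2526%2 [R=301,NE,L]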

    Read the article

  • Is it true that the Google spider gives the most relevance to the first 68 characters of the <title>?

    - by leeand00
I am reading documentation about my CMS and it states that an HTML page's <title> tag is really important in SEO. It states that the Google spider gives the most relevance to the first 68 characters of a site title (68 characters being the number of characters that Google will display in its search engine result pages). Can anyone verify this is still true? I read in The Information Diet that content farms were getting too good at gaming Google's algorithm for collecting and posting SERPs, and so Google had to change the search algorithm.

    Read the article

  • URL blocked in robots.txt but still showing up on Google search [closed]

    - by Ahmad Alfy
Possible duplicate: Why do Google search results include pages disallowed in robots.txt? In my robots.txt I am disallowing a lot of URLs. Google Webmaster Tools says there are 750+ URLs blocked. The problem is that the URLs are still showing up in Google search. For example, I have the following rule: Disallow: /entity/child-health/ But when I search for some-keyword + child health, the following URL shows up: http://www.sitename.com/entity/child-health/ Am I doing anything wrong? Is it possible for a URL to be blocked using robots.txt and still show up in search results?
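
    Yes, this is possible, and it is documented behaviour: robots.txt blocks crawling, not indexing, so Google can still list a disallowed URL (typically without a snippet) if other pages link to it. The usual fix is the reverse arrangement — remove the Disallow rule so Googlebot can fetch the page, and put a noindex on the page itself:

        <meta name="robots" content="noindex">

    Note the two must not be combined for the same URL: if crawling stays blocked, Googlebot never sees the noindex.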

    Read the article

  • How do I set a static DNS nameserver address on Ubuntu Server?

    - by Aleks
I am trying to statically set DNS server addresses on my Ubuntu server running as a virtual machine. I followed all the recommendations on the official Ubuntu support pages, but I simply cannot get rid of my ISP's DNS servers set by DHCP. I assigned the br0 interface on my host machine a static IP address, and set eth0 on the VM to use Google DNS and my own local DNS running on the second VM by setting it in /etc/network/interfaces. I tried to fiddle with the head, base and tail files in /etc/resolvconf/resolv.conf.d/ and tried to shuffle /etc/resolvconf/interface-order, but when I restarted the network service I got the ISP's DNS addresses back every time. Is there a way to disable resolvconf and set up my resolv.conf file manually, as I always did on Red Hat? Or can you at least tell me which hook script keeps putting the ISP's DNS servers in resolv.conf? My ISP doesn't allow me to change DHCP settings on my router, so I cannot do it that way. Why has such a simple thing as setting DNS servers become so complicated?
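
    On Ubuntu releases that use resolvconf (12.04 and later), the supported route is the dns-* options in /etc/network/interfaces; a sketch with illustrative addresses:

        # /etc/network/interfaces -- static addressing with explicit DNS
        auto eth0
        iface eth0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 8.8.8.8 192.168.1.2
            dns-search example.lan

    After editing, sudo ifdown eth0 && sudo ifup eth0 regenerates resolv.conf. If the interface stays on dhcp, the ISP's servers will keep coming back, so switching the stanza to static as above removes them at the source.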

    Read the article

  • Tuning Red Gate: #2 of Many

    - by Grant Fritchey
In the last installment, I used the SQL Monitor tool to get a snapshot view of the current state of the servers at Red Gate that are giving us trouble. That snapshot suggested some areas where I should focus some time, primarily in which queries were being called most frequently or were running the longest. But, you don't want to just run off & start tuning queries. Remember, the foundation for query tuning is the server itself. So, I want to be sure I'm not looking at some major hardware or configuration issues that I need to address first. Rather than look at the current status of the server, I'm going to look at historical data. Clicking on the Analysis tab of SQL Monitor, I get a whole list of counters that I can look at. More importantly, I can look at them over a period of time. Even more importantly, I can compare past periods with current periods to see if we're looking at a progressive issue or not. There are counters here that will give me an indication of load, and there are counters here that will tell me specifics about that load. First, I want to just look at the load to understand where the pain points might be. Trying to drill down before you have detailed information is just bad planning.

    First thing I'm going to check is the CPU, just to see what's up there. I have two servers I'm interested in, so I'll show you both: Looking at the last 30 days for both servers, well, let's just say that the first server is about what I would expect. It has an average baseline behavior with occasional, regular peaks. This looks like a system with a fairly steady & predictable load that probably has a nightly batch process that spikes the processor. In short, normal stuff. The points where the CPU drops radically might be worth investigating further, because something changed the processing on this system a lot. But the other server is all over the place. There's no steady CPU behavior at all. It spikes high for long periods of time. It's up, it's down. I'm really going to have to spend time looking at CPU issues on this server to try to figure out what's up. It might be other processes being shared on the server, it might be something else. Either way, I'm going to have to spend time evaluating this CPU, especially those peaks about a week ago.

    Looking at the Pages/sec, again just a measure of load, I see that there are some peaks on the rg-sql02 server, but overall it looks like a fairly standard load. Plus, the peaks are only up to 550 pages/sec. Remember, this isn't a performance measure but just a load measurement; from this, I don't think we're looking at major memory issues, but I may want to correlate these counters with the CPU counters. Again, the other server looks like there's stuff going on. The load is not at all consistent. In fact, there was a point earlier in the year that looks pretty severe. Plus, the spikes here are twice the size of the other system. We've got a lot more load going on here and I will probably need to drill down on memory usage on this server.

    Taking a look at the disk transfers/sec, the load on both systems seems to roughly correspond to the other load indicators. Notice that drop right in the middle of the graph for rg-sql02. I wonder if the office was closed over that period or a system was down for maintenance. If I saw spikes in memory or disk that corresponded to the drop in CPU, you could assume something was using those other resources and causing a drop, but when everything goes down, it just means that the system isn't getting used. The disk on the rg-sql01 system isn't spiking exactly the same way as the memory & CPU, so there's a good chance (chance, mind you) that any performance issues might not be disk related. However, notice that huge jump at the beginning of the month. Several disks were used more than they were for the rest of the month. That's the load on the server. What about the load on SQL Server itself? Next time.

    Read the article

  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspx

    I'm currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a "dominating force" for Pluralsight, but I was curious how many courses other authors have published as well. The Pluralsight Authors page - http://pluralsight.com/training/Authors - shows all 146 authors, and you can click on any author's page to see how many (and which) courses they have authored. The problem is: I don't want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript using the Chrome JavaScript console to do some "detective work." My first step was to figure out how the HTML was structured on this page so I could do some screen-scraping. Right-click the first author - "Inspect Element". I can see there is a primary <div> with a class of "main" which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and the link to their page. This web page already has jQuery loaded, so I can use $ directly from the console. This allows me to use jQuery to inspect items on the current page. Notice this is a multi-line command; in order to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line. Now I can see I'm extracting data just fine. At this point I want to follow each URL. Then I want to screen-scrape this next page to see how many courses each author has done. Let's take a look at the author detail page: I can see we have a table (with a CSS class of "course") that contains rows for each course authored. This means I can get the number of courses pretty easily. Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page, and the "data" variable in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and use jQuery selectors directly on it in conjunction with the find() method. Now I'm getting somewhere. I have every Pluralsight author and how many courses each one has authored. But that's not quite what I'm after - what I want to see are the authors that have the MOST courses in the library. What I'd like to do is put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned, but the catch here is that I can't perform the sort operation until ALL of the author detail pages have been fetched. The jQuery $.get() method is naturally an async method, so I essentially have 146 async calls and I don't want to perform my sort action until ALL have completed (side note: don't run this script too many times or the Pluralsight servers might think you're an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle WaitAll() method here, but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use a $.when.apply() method to execute my descending sort operation once all requests are complete.
Here is my complete console command:

        var authorList = [],
            defList = [];
        $(".main h3 a").each(function() {
            var def = $.Deferred();
            defList.push(def);
            var authorName = $(this).text();
            var authorUrl = $(this).attr('href');
            $.get(authorUrl, function(data) {
                var courseCount = $(data).find("table.course tbody tr").length;
                authorList.push({ name: authorName, numberOfCourses: courseCount });
                def.resolve();
            });
        });
        $.when.apply($, defList).then(function() {
            console.log("*Everything* is complete");
            var sortedList = authorList.sort(function(obj1, obj2) {
                return obj2.numberOfCourses - obj1.numberOfCourses;
            });
            for (var i = 0; i < sortedList.length; i++) {
                console.log(sortedList[i]);
            }
        });

    And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn't the only "dominating force". I would have assumed Scott Allen was #1, but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100, which is incredible!). Given that I'm in the middle of producing only my third course, I better get to work!

    Read the article

  • Announcing the Mastering SharePoint 2013 Development lab

    - by Erwin van Hunen
    If you’re a seasoned SharePoint developer and you’d like to get up and running with all the new goodies that SharePoint 2013 is bringing, make sure you check out the Mastering SharePoint 2013 Development lab I’m giving at LabCenter in Stockholm, Sweden. 3 days of development heaven *and* you take away a brand new laptop, or an iPad, or some of the other perks you decide to go for. Check out: http://www.labcenter.se/Labs#lab=Mastering_Sharepoint_2013_Development The overview of the 3 days: Day 1 Module 1: Comparing SharePoint 2013 to SharePoint 2010 What’s new in SharePoint 2013 Module 2: Installing your SharePoint 2013 development environment How to successfully (and above all correctly) install SharePoint 2013 Day 2 Module 3: Apps, sandboxed or full trust? What’s the difference between the deployment models. Pro’s and con’s Code or no-code solutions? Module 4: Search is the new black Using the new out of the box Search webparts Building a search based solution Day 3 Module 5: Workflows Differences between SharePoint 2010 workflows and 2013 workflows Building a workflow using Visio and SharePoint Designer Building a workflow using Visual Studio Module 6: You’re the master of the design The design manager Master pages Page layouts CSS and HTML5

    Read the article

  • Reducing brightness of large areas containing bright colours

    - by intuited
    I do most of my work in either a terminal or a web browser. I prefer my terminals to use bright colours on dark. I would really prefer that web pages tended to look this way as well, but that's not under my control. The problem is that when I switch from a light-on-dark terminal to a dark-on-light web page (like this one), my eyes have to adjust to the overall rise in screen brightness. Apparently this is bad for your eyes, in addition to being painful and annoying. It would seem to be possible for some layer of the interface to adjust the displayed colours for parts of the screen, or perhaps for particular windows, to reduce the brightness of the brighter areas of the screen. Can this be done, possibly with a Compiz extension?

    Read the article
