Search Results

Search found 11753 results on 471 pages for 'technical links'.


  • Unable to run 'sudo apt-get dist-upgrade' due to authentication issues

    - by TobyG
    I've just attempted to run sudo apt-get dist-upgrade on my Ubuntu box, but I am getting the following error:

        WARNING: The following packages cannot be authenticated!
        librdbmspp php5-ioncube-loader sw-libboost-date-time1.49.0 sw-libboost-system1.49.0
        sw-libboost-filesystem1.49.0 sw-libboost-program-options1.49.0 sw-libboost-regex1.49.0
        sw-libboost-serialization1.49.0 sw-libpoco

    I've tried running:

        $ sudo apt-key update
        $ sudo apt-get update

    as found in this question, but I'm still getting the error. Can anyone help, please?

    Update on 5th June. Repositories currently in /etc/apt/sources.list:

        deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb http://archive.canonical.com/ubuntu precise partner
        deb-src http://archive.canonical.com/ubuntu precise partner
        deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb http://autoinstall.plesk.com/ubuntu/PSA_11.5.30 precise all
        deb http://autoinstall.plesk.com/debian/SITEBUILDER_11.5.10 all all
        deb http://autoinstall.plesk.com/debian/BILLING_11.5.30 all all
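    A hedged first step for this kind of warning is to check whether apt actually has a key for each repository and re-import any that are missing; note that the Plesk autoinstall repositories may simply be unsigned, in which case there is no key to import. KEYID below is a placeholder for whatever id apt reports:

        # list the keys apt currently trusts
        sudo apt-key list

        # import a missing archive key from the Ubuntu keyserver
        # (replace KEYID with the id apt reports as missing)
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEYID

        sudo apt-get update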


  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's Social reports. For example, between 7/20 and 8/19 I'm seeing 123 visits where Facebook is the source in my Campaigns report, but only 29 visits where Facebook is the source in my Social Sources report. Main questions:

    - Does Google exclude campaign traffic from its Social reports?
    - If it does, is there any way to reconcile that so the traffic shows up in both reports?
    - If it doesn't, what could be causing the vast discrepancy?

    One observer noted that we are setting the medium to "Post" when passing the campaign parameters, and speculated that Google may only allow "Referral" traffic in its Social reports. In that case we could potentially change the medium to "Referral", but that would undermine some of our strategy in being able to set different mediums. I have also considered that the campaign traffic may have come to the site several times and the Social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 visits on the Social side for that argument to be valid. I've done several searches for any information on this topic and haven't run across much of anything. I did post the same question, titled 'Facebook Campaign Traffic Not Showing in Social Reports', on Google's product forum and have not gotten a response. The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution to this.
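    For what it's worth, if you do experiment with the medium value, a campaign-tagged Facebook link would look something like this (example.com and the campaign name are placeholders; utm_source, utm_medium and utm_campaign are the standard Google Analytics campaign parameters):

        http://www.example.com/landing-page?utm_source=facebook.com&utm_medium=referral&utm_campaign=summer-promo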


  • Is there a schematic overview of Ubuntu's architecture?

    - by joebuntu
    Hi there. As an enthusiastic, advanced Linux learner, I'd love to get an overview of Linux's architecture/structure in general - you know, the "big picture". I'm thinking of a large schematic graphic showing what is what and who is who: which system (e.g. X) comprises which subsystems (GDM/GNOME/Compiz), on the way from a to z, from boot to interactive desktop, including the most important background services (auth, network, cron, ...). Maybe a bit like this: http://www.flickr.com/photos/pgc/140859386/ but way more detailed. There's bootchart, which produces very comprehensive charts, but they are again too detailed to get the "big picture" from. Is there such a thing? Possibly not for the whole system, but maybe for single subsystems? I had trouble searching for this, because search terms like "scheme" or "architecture" pointed in the wrong direction (a tool called "scheme" or CAD software for Linux). I appreciate any links. If there's interest in these schematic overviews and links, maybe someone could turn this post into a wiki post? Cheers, joebuntu


  • How to get local business nationwide exposure? [closed]

    - by guisasso
    Here's the situation: this company offers local home services (construction, ...), but also fabricates many custom items that can be shipped nationally and even internationally. Since I started working on this website, it has ranked pretty well on Alexa, both globally and locally, and I have made many SEO improvements that doubled visits to the website in six months. The website is listed in many different directories (DMOZ, etc.), maps (Google Maps, etc.), business listing sites (Yelp, etc.), trade-specific websites (Angie's List, Houzz, etc.), state-specific business listings and so on. There are many links to pictures displayed on the website and links to the website itself, and I have Google Analytics and Webmaster Tools accounts, sitemaps, newsletters, a Facebook page... the list goes on and on. All of this has been working pretty well locally. We have had some success doing business in other states and even other countries, but it is still a pretty small percentage of the market. I also advertise on Google AdWords locally, and since that would be the obvious answer, my question is: without paid advertisement, how can I improve the visibility of this local business website nationally to attract customers in all US states?


  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files, and the .htaccess file has had its equivalent code inserted. I have on many occasions removed the malicious code, replaced files, changed the FTP password in my FTP client (CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which found no malware on my computer that might have been capturing my FTP passwords. I used Sucuri SiteCheck to check my website, and it says my website is clean of malware, which is bizarre because I just clicked one of the links on the site a minute ago and it redirected me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (which will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:

        Warning: Something's Not Right Here! oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site. Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else? We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.

    I'm wondering whether it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
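    Assuming shell access to the web server (the web-root path is a placeholder), a couple of hedged checks that can help locate re-infected files between cleanups:

        # find files still carrying the injected marker
        grep -rl 'c3284d' /var/www/

        # list .php and .htaccess files changed in the last 7 days
        find /var/www/ \( -name '*.php' -o -name '.htaccess' \) -mtime -7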


  • Where can I find ad networks with single-line ads?

    - by MaX
    I've developed a site that serves pure HTML weather widgets (and they are great looking, too). Just two months in, I am generating 1.25K hits monthly (Google Analytics). Now I want to generate some money out of it. You can check my service out here. I am looking for an affiliate or ad service that I can hook up with, but there is a twist to the story: I want a single-line text ad in a particular location, otherwise the widgets will look rubbish. I also have some unique places on the site for banner ads. Here are the services I've already tried:

    - AdSense doesn't allow or offer such ad formats.
    - Peefly provides straight links and works best, but I recorded some clicks (through Google Events) that it didn't show me, plus it introduces the overhead of manually going and choosing your links.
    - BidVertise is totally rubbish: it opens popups and whatnot, and makes the site look like spam.

    I am new to this ad stuff, so I have limited knowledge. Suggestions, please? I have one more place in the forecast area, but I want to start simple. P.S. I also have a Metro-UI-like widget coming in the pipeline, but it's not ready yet.


  • Documents stored on separate internal drive, Ubuntu doesn't notice on startup

    - by PlanoAlto
    My machine has Windows 7 Ultimate x64 and Ubuntu 12.04 LTS running side by side on a single hard drive with the GRUB bootloader, each with 500 GB of storage. I keep my personal documents on a separate 1 TB hard drive so they remain isolated from any changes I make to the OS drive, but when Ubuntu starts it does not seem to notice my documents drive. While I've installed and worked with Ubuntu 12.04 Server x32 before, using Ubuntu as a desktop OS is new to me. I use my documents drive for all of my personal data, including wallpapers and music, so it is imperative that Ubuntu recognize it on startup. Two specific examples:

    - Ubuntu loads with the default blue desktop instead of my desired picture of the spectacular Carina galaxy. When I right-click the desktop and select "Change Desktop Background", it wakes up from its amnesia and loads the proper background.
    - Rhythmbox defaults to an empty library on reboot, forcing me to reload the settings manually each time. This gets quite tedious because I certainly can't work to my full potential without my music.

    The second thing I would like to address is making the document directories in ~ point to their counterparts on the 1 TB documents drive. I realize this question is not new, but when I created the symbolic links, they were created inside the directories rather than converting the directories themselves into symbolic links. I would also prefer not to move the files from their current location on the 1 TB drive. I believe this would help the Rhythmbox library problem as well, since ~/Music is the music player's default directory. Excerpt from fstab:

        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sdb6 during installation
        UUID=057ac83e-76ad-460d-86e5-b6d46e9b1d80 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sdb7 during installation
        #UUID=1183df90-23fc-44e4-aa17-4e7c9865d5cb none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    That's enough content for one question. I really like the Ubuntu experience so far, since it doesn't treat me like an idiot out of the box (can't say the same for Windows), so I can't wait to hear from the community! Thanks for your help in advance.
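    A minimal sketch of both pieces, assuming the documents drive is ext4: the UUID shown is a placeholder for the one sudo blkid reports, and /media/documents is an arbitrary mount point:

        # /etc/fstab entry so the 1 TB documents drive is mounted at every boot
        # (create the mount point first: sudo mkdir /media/documents)
        UUID=1111-2222-PLACEHOLDER  /media/documents  ext4  defaults  0  2

        # replace the empty home directory itself with a symbolic link,
        # instead of creating a link inside it
        rmdir ~/Music
        ln -s /media/documents/Music ~/Music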


  • What exactly are Link Relation Values?

    - by bckpwrld
    From REST in Practice: Hypermedia and Systems Architecture:

        For computer-to-computer interactions, we advertise protocol information by embedding links in representations, much as we do with the human Web. To describe a link's purpose, we annotate it. Annotations indicate what the linked resource means to the current resource: "status of your coffee order", "payment", and so on. We call such annotated links hypermedia controls, reflecting their enhanced capabilities over raw URIs. ... link relation values, which describe the roles of linked resources ... Link relation values help consumers understand why they might want to activate a hypermedia control. They do so by indicating the role of the linked resource in the context of the current representation.

    I interpret the above quotes as saying that a hypermedia control contains both a link to a resource and an annotation describing the role of the linked resource in the context of the current representation, and that we call this annotation (which describes the role of the linked resource) a link relation value. Is my interpretation correct, or does the term link relation value actually describe something different? Thank you.
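    As a concrete illustration (the URI is made up): in an HTML representation the link relation value is carried by the rel attribute, and in an HTTP response it can travel in a Link header:

        <a href="http://example.com/orders/123/payment" rel="payment">Pay for this order</a>

        Link: <http://example.com/orders/123/payment>; rel="payment"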


  • Release Notes for 5/18/2012

    Here are the notes for this week's release:

    Pull Requests
    - We've added the ability to see the snippets of code where a user commented inline in the discussion of pull requests. You can also add another line comment directly from the discussion area, rather than navigating to the code diff viewer. Note that there's currently a known issue where the line associated with the comment isn't being properly differentiated for existing pull requests (the line in the middle of each diff preview should be bolded). Apologies for the inconvenience!
    - As part of this work, we also took some time to clean up our diff viewer UI to remove the dots and introduce a new color scheme where green is used for added lines.

    Bug Fixes
    - Fixed an issue affecting the ability to assign pull requests.
    - Fixed an issue where managing various team resources for a project was not working in Chrome or Firefox.
    - Fixed an issue where a project's RSS subscribe dialog popped up in the wrong place.
    - Fixed an issue where editing wiki anchor links would insert extra characters, resulting in broken links.
    - Fixed an issue where project logos did not display correctly when browsing the site with https in Chrome or Firefox.
    - Fixed an issue where users could encounter errors when deleting remote Git branches.
    - Fixed an issue affecting the ability of fork collaborators to push changes to the fork.
    - Fixed an issue where the advanced work item filters would not persist when navigating through result pages.
    - Fixed an issue where the issue tracker notifications link was not clickable in Chrome.
    - Fixed an issue where pull request comments with line breaks would not be formatted properly when viewing the pull request.

    Other
    - We upgraded our Git servers to version 1.7.10.1.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.


  • Website only displays correctly in IE using compatibility mode?

    - by user21318
    As the resident "IT whiz" at work (i.e., I know how to use a computer), I've been asked to develop a website for our small business. I've altered a WordPress theme for the time being and the company is very happy with the results. The only problem I am having at the moment is that, for some reason, the website does not display correctly in Internet Explorer unless I run it in Compatibility Mode. The main problem is that my menu "slider" (it rotates pictures with links to articles, etc.) does not display at all, and neither does the top menu; they are just plain text-based links. With Compatibility Mode enabled, the slider and menus come back, but the page is not centered, unlike in both Firefox and Chrome. My googling suggests the most common cause of this is old code, but I'm not sure where to be looking. Is it likely in the CSS file or the actual PHP? Also, any ideas on how to troubleshoot the cause of this? Is there some dev tool or debugger I can use that would highlight "broken" code for me?
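    One hedged first step: make sure the pages start with a standards-mode doctype and ask IE for its newest engine, since a missing or quirky doctype is a common reason a site only renders under Compatibility Mode. IE8's built-in developer tools (press F12) also let you switch document modes to see which one breaks the layout:

        <!DOCTYPE html>
        <html>
        <head>
            <!-- Ask IE to use its most recent rendering engine -->
            <meta http-equiv="X-UA-Compatible" content="IE=edge" />
            <!-- rest of head -->
        </head>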


  • Lost all non-Linux hard drives

    - by Rick
    I've somehow lost access to all my non-Linux drives, and I'm not sure how. I have two other hard drives set up (a Windows boot drive and an external USB drive I was using under Windows XP). A few days ago I could access both under Linux with no problem. Now I see them listed under /media, but when I click on them, nothing is there (the same thing happens when I go there in a terminal window). The computer (under Linux) sees them, but I can't see anything in them (in Nautilus they show as "(Empty)"). Any ideas what happened or how to get access to them again? My guesses as to what might have caused it:

    - I was trying to set up Adobe AIR (unsuccessfully) and had to make and remove a couple of symbolic links. I never got that to work, but maybe the links I made or removed did something?
    - I was also trying to set up Nautilus so that I could run it with sudo permissions. I didn't figure out how to do that successfully either, but maybe something I did while trying messed things up? (I think I installed and uninstalled the nautilus-actions configuration tool.)

    Note: I'm working on a relatively new install of 12.04.
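    Some hedged first diagnostics from a terminal; /dev/sdb1 and the mount point are placeholders for whatever sudo fdisk -l actually reports on this machine:

        sudo fdisk -l                                  # list the partitions the kernel can see
        mount | grep /media                            # check what is actually mounted
        ls -la /media                                  # a symlink would show as name -> target
        sudo mkdir -p /media/windows
        sudo mount -t ntfs-3g /dev/sdb1 /media/windows # try mounting the Windows partition by hand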


  • How to remove buttons in AnythingSlider and stop the sliding?

    - by user244394
    I'm currently using the AnythingSlider and it works quite well. But if there is only one li, how do I make it stop sliding and remove the buttons displayed below? The li elements are generated from the database, so sometimes there's only one. I want the buttons to show only when there is more than one image; if there is only one image, none of the buttons (back, forward, pause) should be displayed. Does anybody know of a way of stopping the sliding and removing the buttons when there is only one li?

    Thank you. Currently I have the working version posted below; the old code works. I tried to replace it with yours, but it didn't seem to work. Is there something else that needs to be added?

        function formatText(index, panel) {
            return index + "";
        }

        $(function () {
            $('.anythingSlider').anythingSlider({
                easing: "easeInOutExpo",        // Anything other than "linear" or "swing" requires the easing plugin
                autoPlay: true,                 // This turns off the entire FUNCTIONALITY, not just if it starts running or not.
                delay: 7500,                    // How long between slide transitions in AutoPlay mode
                startStopped: false,            // If autoPlay is on, this can force it to start stopped
                animationTime: 1250,            // How long the slide transition takes
                hashTags: true,                 // Should links change the hashtag in the URL?
                buildNavigation: true,          // If true, builds a list of anchor links to link to each slide
                pauseOnHover: true,             // If true, and autoPlay is enabled, the show will pause on hover
                startText: "",                  // Start text
                stopText: "",                   // Stop text
                navigationFormatter: formatText // Details at the top of the file on this use (advanced use)
            });

            $("#slide-jump").click(function(){
                $('.anythingSlider').anythingSlider(6);
            });
        });

    UPDATED WITH YOUR CODE:

        function formatText(index, panel) {
            return index + "";
        }

        $(function () {
            var singleSlide = true,
                options = {
                    autoPlay: false,                // This turns off the entire FUNCTIONALITY, not just if it starts running or not.
                    buildNavigation: false,         // If true, builds a list of anchor links to link to each slide
                    easing: "easeInOutExpo",        // Anything other than "linear" or "swing" requires the easing plugin
                    delay: 3000,                    // How long between slide transitions in AutoPlay mode
                    animationTime: 600,             // How long the slide transition takes
                    hashTags: true,                 // Should links change the hashtag in the URL?
                    pauseOnHover: true,             // If true, and autoPlay is enabled, the show will pause on hover
                    navigationFormatter: formatText // Details at the top of the file on this use (advanced use)
                };

            // Add player options if more than one slide exists
            if ( $('.anythingSlider div ul li').length > 1 ) {
                $.extend(options, {
                    autoPlay: true,
                    startStopped: false, // If autoPlay is on, this can force it to start stopped
                    startText: "",       // Start text
                    stopText: "",        // Stop text
                    buildNavigation: true
                });
                singleSlide = false;
            }

            // Initiate anythingSlider
            $("#slide-jump").click(function(){
                $('.anythingSlider').anythingSlider(6);
            });

            // hide anythingSlider navigation arrows
            if (singleSlide) {
                $('.anythingSlider a.arrow').hide();
            }
        });

    Update May 25, 2010: When using $('.anythingSlider').anythingSlider(options); instead of $('.anythingSlider').anythingSlider(6);, the slider runs, but I noticed that I get a JavaScript error: "Object required". Is there anything else I need to pass? Before, anythingSlider was taking 6 instead of options; where do I pass that 6, since it's looking for it?
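    A hedged sketch of the conditional initialisation, building on the snippets above; the panel-counting selector and the a.arrow control class are taken from those snippets, while a.start-stop is a guess that may need adjusting for your version of the plugin:

        $(function () {
            var slideCount = $('.anythingSlider div ul li').length;

            if (slideCount > 1) {
                // Several slides: full slider with autoplay and navigation
                $('.anythingSlider').anythingSlider({ autoPlay: true, buildNavigation: true });
            } else {
                // Single slide: no autoplay, no navigation, and hide any leftover controls
                $('.anythingSlider').anythingSlider({ autoPlay: false, buildNavigation: false });
                $('.anythingSlider a.arrow, .anythingSlider a.start-stop').hide();
            }
        });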


  • jQuery tabs - opening a link in the current panel does not work

    - by Maik Koster
    Hi, I just started playing around with the jQuery UI tabs. The content of the tabs consists mainly of static content at the beginning. Some of the content within the panels has links to subcontent, so if the user clicks a link in the panel, I would like to replace the content of the current panel with the content coming from the link. I used the script directly from the jQuery UI tabs documentation, but I can't get it to work: it always opens the link directly, not within the panel. The code I use for testing is quite simple:

        <div id="MyTabs">
            <ul>
                <li><a href="#TestTab1">TestTab</a></li>
                <li><a href="#TestTab2">TestTab</a></li>
            </ul>
            <div id="TestTab1">
                Lorem ipsum dolor. dumm di dumm
                <a href="http://mywebserver/somelink">Test</a>
            </div>
            <div id="TestTab2">
                Lorem ipsum dolor. dumm di dumm 2
                <a href="http://mywebserver/somelink2">Test 2</a>
            </div>
        </div>

        <script type="text/javascript">
            $(document).ready(function() {
                $('#MyTabs').tabs({
                    load: function(event, ui) {
                        $('a', ui.panel).click(function() {
                            $(ui.panel).load(this.href);
                            return false;
                        });
                    }
                });
            });
        </script>

    Additionally, if I have the content of the panel loaded via an AJAX call, no link within the panel works whatsoever. Any idea what I'm doing wrong? Help is really appreciated. Regards, Maik

    Edit 1: OK, I got a bit further. I replaced the JavaScript with the following snippet:

        $(function() {
            $("#MyTabs").tabs();
            $("#MyTabs").bind('tabsshow', function(event, ui) {
                AddClickHandler(ui);
            });
        });

        function AddClickHandler(ui) {
            $('a', ui.panel).click(function() {
                MyAlert("AddClickHandler");
                $(ui.panel).load(this.href, AddClickHandler(ui));
                return false;
            });
        }

    After this change, all links on a panel update the content of the current panel. So far so good. Still one problem left: I can't get it to work for subsequent links. I tried to do it with the second AddClickHandler as a callback for when the AJAX call has finished. Using a different function with a simple alert showed that it is actually called when the content of the panel has been updated, but I can't bind anything to the new links in that content; the "$('a', ui.panel)..." selector doesn't work. What would be the correct selector for this? Any hint? Regards, Maik
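    A hedged sketch of one way around this: instead of re-binding after every load, attach a single delegated handler to the tab container so that links in newly loaded content are covered automatically (jQuery 1.4.2+ provides .delegate(); the 'div a' selector assumes the panels are the divs inside #MyTabs, as in the markup above):

        $(function() {
            $('#MyTabs').tabs();

            // One delegated handler covers current and future links inside the panels
            $('#MyTabs').delegate('div a', 'click', function() {
                // Load the link target into the panel that contains the link
                $(this).closest('div').load(this.href);
                return false; // prevent the browser from following the link
            });
        });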


  • Pagination number elements do not work - jQuery

    - by ClarkSKent
    Hello, I am trying to get my pagination links to work. When I click any of the pagination number links to go to the next page, the new content does not load: literally nothing happens, and when looking at the console in Firebug, nothing is sent or loaded. On the main page I have three links to filter the content and display it. When any of these links is clicked, the results are loaded and displayed along with the associated pagination numbers for that specific content. Here is the main page, so you can see what the structure looks like for the jQuery:

        <?php include_once('generate_pagination.php'); ?>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script>
        <script type="text/javascript" src="jquery_pagination.js"></script>

        <div id="loading" ></div>
        <div id="content" data-page="1"></div>
        <ul id="pagination">
            <?php generate_pagination($sql) ?>
        </ul>
        <br />
        <br />
        <a href="#" class="category" id="marketing">Marketing</a>
        <a href="#" class="category" id="automotive">Automotive</a>
        <a href="#" class="category" id="sports">Sports</a>

    The jQuery below is pretty simple. I believe the problem may be with $("#pagination li").click(...), since the li elements are the numbers that do not work when clicked. Even when I try to fadeOut or hide the content on click, nothing happens. I'm pretty new to the jQuery structure, so I do not fully understand where the real problem is occurring; this is just from my observation. The jQuery file looks like this:

        $(document).ready(function(){

            // Display loading image
            function Display_Load()
            {
                $("#loading").fadeIn(900,0);
                $("#loading").html("<img src='bigLoader.gif' />");
            }

            // Hide loading image
            function Hide_Load()
            {
                $("#loading").fadeOut('slow');
            };

            // Default starting page results
            $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'});
            Display_Load();
            $("#content").load("pagination_data.php?page=1", Hide_Load());

            // Pagination click
            $("#pagination li").click(function(){
                Display_Load();

                // CSS styles
                $("#pagination li")
                    .css({'border' : 'solid #dddddd 1px'})
                    .css({'color' : '#0063DC'});
                $(this)
                    .css({'color' : '#FF0084'})
                    .css({'border' : 'none'});

                // Loading data
                var pageNum = this.id;
                $("#content").load("pagination_data.php?page=" + pageNum, function(){
                    $(this).attr('data-page', pageNum);
                    Hide_Load();
                });
            });

            // Editing below.
            // Sort content Marketing
            $("a.category").click(function() {
                Display_Load();
                var this_id = $(this).attr('id');
                $.get("pagination.php", { category: this.id }, function(data){
                    // Load your results into the page
                    var pageNum = $('#content').attr('data-page');
                    $("#pagination").load('generate_pagination.php?category=' + pageNum + '&ids=' + this_id);
                    $("#content").load("filter_marketing.php?page=" + pageNum + '&id=' + this_id, Hide_Load());
                });
            });
        });

    If anyone could help me with this, that would be great. Thanks.

    EDIT: Here are the innards of <ul id="pagination">:

        <?php
        function generate_pagination($sql) {
            include_once('config.php');
            $per_page = 3;

            // Calculating the number of pages
            $result = mysql_query($sql);
            $count = mysql_fetch_row($result);
            $pages = ceil($count[0]/$per_page);

            // Pagination numbers
            for($i=1; $i<=$pages; $i++) {
                echo '<li class="page_numbers" id="'.$i.'">'.$i.'</li>';
            }
        }

        $ids = $_GET['ids'];
        generate_pagination("SELECT COUNT(*) FROM explore WHERE category='$ids'");
        ?>
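    One likely culprit, offered as a hedged sketch: the li elements inside #pagination are replaced whenever generate_pagination.php is reloaded, so handlers bound directly to them are lost. A binding that survives those reloads, assuming Display_Load and Hide_Load are in scope (.live() works with the jQuery 1.4.1 loaded above; on 1.4.2+ .delegate() is preferable):

        $(document).ready(function(){
            // .live() binds for current AND future #pagination li elements
            $("#pagination li").live('click', function(){
                Display_Load();
                var pageNum = this.id;
                $("#content").load("pagination_data.php?page=" + pageNum, function(){
                    $(this).attr('data-page', pageNum);
                    Hide_Load();
                });
            });
        });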


  • Advanced Regex: Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not as simple a task as it may seem at first; Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is what it looks like (I've broken it down into separate lines to make what it does more understandable):

        1  (?<outer>\()?
        2  (?<scheme>http(?<secure>s)?://)?
        3  (?<url>
        4  (?(scheme)
        5  (?:www\.)?
        6  |
        7  www\.
        8  )
        9  [a-z0-9]
        10 (?(outer)
        11 [-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+(?=\))
        12 |
        13 [-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+
        14 )
        15 )
        16 (?<ending>(?(outer)\)))

    As you can see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (čšžćđ) that allow our localised URLs to be parsed as well. You can easily omit them if you'd like. Anyway, here's what it does (referring to line numbers):

    1 - detects whether the URL starts with an opening parenthesis (is contained inside parentheses) and stores it in the "outer" named capture group
    2 - checks if it starts with a URL scheme, also detecting whether the scheme is SSL or not
    3 - starts parsing the URL itself (stored in the "url" named capture group)
    4-8 - an if statement that says: if "scheme" was present, then the www. part is optional; otherwise it is mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www)
    9 - the first character after http:// or www. should be either a letter or a number (this could be extended to cover even more links, but I've decided not to because I can't think of a link that would start with some obscure character)
    10-14 - an if statement that says: if "outer" (parentheses) was present, capture everything up to the last closing parenthesis; otherwise capture all
    15 - closes the named capture group for the URL
    16 - if an opening parenthesis was present, capture the closing parenthesis as well and store it in the "ending" named capture group

    The first and last lines used to have \s* in them as well, so the user could also write an opening parenthesis and put a space inside before pasting a link. Anyway, my code that replaces links with actual anchor HTML elements looks exactly like this:

        value = Regex.Replace(
            value,
            @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+))(?<ending>(?(outer)\)))",
            "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}",
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);

    As you can see, I'm using named capture groups to replace the link with an anchor tag:

        "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}"

    I could also omit the http(s) part in the anchor display to make links look friendlier, but for now I've decided not to.

    Question: I would like my links to be replaced with shortenings as well. When a user copies a very long link (for instance, a link from Google Maps, which usually generates long ones), I would like to shorten the visible part of the anchor tag. The link would still work, but the visible part would be shortened to some number of characters, perhaps with an ellipsis appended at the end. Does the Regex.Replace() method support replacement notations so that I can still use a single call? Something similar to what the string.Format() method does when formatting values (decimals, dates, etc.).
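    Replacement patterns alone cannot conditionally truncate text, but a single call still works if you switch to the Regex.Replace overload that takes a MatchEvaluator. A minimal sketch under that assumption; the 50-character cutoff and the ellipsis are arbitrary choices, and pattern stands for the regular expression above:

        value = Regex.Replace(
            value,
            pattern,
            m =>
            {
                string url = "http" + m.Groups["secure"].Value + "://" + m.Groups["url"].Value;
                // Shorten only what the user sees; the href keeps the full URL
                string display = url.Length > 50 ? url.Substring(0, 47) + "..." : url;
                return m.Groups["outer"].Value
                     + "<a href=\"" + url + "\">" + display + "</a>"
                     + m.Groups["ending"].Value;
            },
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);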


  • R: extracting "clean" UTF-8 text from a web page scraped with RCurl

    - by SlowLearner
    Using R, I am trying to scrape a web page and save the text, which is in Japanese, to a file. Ultimately this needs to be scaled to tackle hundreds of pages on a daily basis. I already have a workable solution in Perl, but I am trying to migrate the script to R to reduce the cognitive load of switching between multiple languages. So far I am not succeeding. Related questions seem to be this one on saving csv files and this one on writing Hebrew to a HTML file. However, I haven't been successful in cobbling together a solution based on the answers there. The pages are from Yahoo! Japan Finance, and my Perl code looks like this:

        use strict;
        use HTML::Tree;
        use LWP::Simple;
        #use Encode;
        use utf8;
        binmode STDOUT, ":utf8";

        my @arr_links = ();
        $arr_links[1] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203";
        $arr_links[2] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201";

        foreach my $link (@arr_links){
            $link =~ s/"//gi;
            print("$link\n");
            my $content = get($link);
            my $tree = HTML::Tree->new();
            $tree->parse($content);
            my $bar = $tree->as_text;
            open OUTFILE, ">>:utf8", join("","c:/", substr($link, -4),"_perl.txt") || die;
            print OUTFILE $bar;
        }

    This Perl script produces a CSV file that looks like the screenshot below, with proper kanji and kana that can be mined and manipulated offline. My R code, such as it is, looks like the following. The R script is not an exact duplicate of the Perl solution just given, as it doesn't strip out the HTML and leave the text (this answer suggests an approach using R, but it doesn't work for me in this case) and it doesn't have the loop and so on, but the intent is the same:

        require(RCurl)
        require(XML)

        links <- list()
        links[1] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        links[2] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201"

        txt <- getURL(links, .encoding = "UTF-8")
        Encoding(txt) <- "bytes"
        write.table(txt, "c:/geturl_r.txt",
                    quote = FALSE, row.names = FALSE,
                    sep = "\t", fileEncoding = "UTF-8")

    This R script generates the output shown in the screenshot below: basically rubbish. I assume that there is some combination of HTML, text and file encoding that will allow me to generate in R a similar result to that of the Perl solution, but I cannot find it. The header of the HTML page I'm trying to scrape says the charset is utf-8, and I have set the encoding in the getURL call and in the write.table function to utf-8, but this alone isn't enough.

    The question: how can I scrape the above web page using R and save the text as CSV in "well-formed" Japanese text rather than something that looks like line noise?

    Edit: I have added a further screenshot to show what happens when I omit the Encoding step. I get what look like Unicode codes, but not the graphical representation of the characters. So it may be some kind of locale-related issue, but in the exact same locale the Perl script does provide useful output. So this is still puzzling.
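    One approach that often resolves this, sketched here as an assumption rather than a verified fix: let the XML package parse the page (instead of setting Encoding(txt) <- "bytes") and write the extracted text through an explicit UTF-8 connection:

        require(RCurl)
        require(XML)

        link <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"
        txt  <- getURL(link, .encoding = "UTF-8")

        # parse the HTML and strip the markup, keeping only the text nodes
        doc  <- htmlParse(txt, asText = TRUE, encoding = "UTF-8")
        text <- xpathSApply(doc, "//body//text()", xmlValue)

        # write through an explicit UTF-8 connection
        con <- file("c:/geturl_r.txt", open = "w", encoding = "UTF-8")
        writeLines(text, con)
        close(con)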


  • Asynchronous web crawling in F# - something wrong?

    - by jlezard
    Not quite sure if it is ok to do this but, my question is: Is there something wrong with my code ? It doesn't go as fast as I would like, and since I am using lots of async workflows maybe I am doing something wrong. The goal here is to build something that can crawl 20 000 pages in less than an hour. open System open System.Text open System.Net open System.IO open System.Text.RegularExpressions open System.Collections.Generic open System.ComponentModel open Microsoft.FSharp open System.Threading //This is the Parallel.Fs file type ComparableUri ( uri: string ) = inherit System.Uri( uri ) let elts (uri:System.Uri) = uri.Scheme, uri.Host, uri.Port, uri.Segments interface System.IComparable with member this.CompareTo( uri2 ) = compare (elts this) (elts(uri2 :?> ComparableUri)) override this.Equals(uri2) = compare this (uri2 :?> ComparableUri ) = 0 override this.GetHashCode() = 0 ///////////////////////////////////////////////Funtions to retreive html string////////////////////////////// let mutable error = Set.empty<ComparableUri> let mutable visited = Set.empty<ComparableUri> let getHtmlPrimitiveAsyncDelay (delay:int) (uri : ComparableUri) = async{ try let req = (WebRequest.Create(uri)) :?> HttpWebRequest // 'use' is equivalent to ‘using’ in C# for an IDisposable req.UserAgent<-"Mozilla" //Console.WriteLine("Waiting") do! Async.Sleep(delay * 250) let! resp = (req.AsyncGetResponse()) Console.WriteLine(uri.AbsoluteUri+" got response after delay "+string delay) use stream = resp.GetResponseStream() use reader = new StreamReader(stream) let html = reader.ReadToEnd() return html with | _ as ex -> Console.WriteLine( ex.ToString() ) lock error (fun () -> error<- error.Add uri ) lock visited (fun () -> visited<-visited.Add uri ) return "BadUri" } ///////////////////////////////////////////////Active Pattern Matching to retreive href////////////////////////////// let (|Matches|_|) (pat:string) (inp:string) = let m = Regex.Matches(inp, pat) // Note the List.tl, since the first group is always the entirety of the matched string. if m.Count > 0 then Some (List.tail [ for g in m -> g.Value ]) else None let (|Match|_|) (pat:string) (inp:string) = let m = Regex.Match(inp, pat) // Note the List.tl, since the first group is always the entirety of the matched string. 
if m.Success then Some (List.tail [ for g in m.Groups -> g.Value ]) else None ///////////////////////////////////////////////Find Bad href////////////////////////////// let isEmail (link:string) = link.Contains("@") let isMailto (link:string) = if Seq.length link >=6 then link.[0..5] = "mailto" else false let isJavascript (link:string) = if Seq.length link >=10 then link.[0..9] = "javascript" else false let isBadUri (link:string) = link="BadUri" let isEmptyHttp (link:string) = link="http://" let isFile (link:string)= if Seq.length link >=6 then link.[0..5] = "file:/" else false let containsPipe (link:string) = link.Contains("|") let isAdLink (link:string) = if Seq.length link >=6 then link.[0..5] = "adlink" elif Seq.length link >=9 then link.[0..8] = "http://adLink" else false ///////////////////////////////////////////////Find Bad href////////////////////////////// let getHref (htmlString:string) = let urlPat = "href=\"([^\"]+)" match htmlString with | Matches urlPat urls -> urls |> List.map( fun href -> match href with | Match (urlPat) (link::[]) -> link | _ -> failwith "The href was not in correct format, there was more than one match" ) | _ -> Console.WriteLine( "No links for this page" );[] |> List.filter( fun link -> not(isEmail link) ) |> List.filter( fun link -> not(isMailto link) ) |> List.filter( fun link -> not(isJavascript link) ) |> List.filter( fun link -> not(isBadUri link) ) |> List.filter( fun link -> not(isEmptyHttp link) ) |> List.filter( fun link -> not(isFile link) ) |> List.filter( fun link -> not(containsPipe link) ) |> List.filter( fun link -> not(isAdLink link) ) let treatAjax (href:System.Uri) = let link = href.ToString() let firstPart = (link.Split([|"#"|],System.StringSplitOptions.None)).[0] new Uri(firstPart) //only follow pages with certain extnsion or ones with no exensions let followHref (href:System.Uri) = let valid2 = set[".py"] let valid3 = set[".php";".htm";".asp"] let valid4 = set[".php3";".php4";".php5";".html";".aspx"] let arrLength = href.Segments |> Array.length let lastExtension = (href.Segments).[arrLength-1] let lengthLastExtension = Seq.length lastExtension if (lengthLastExtension <= 3) then not( lastExtension.Contains(".") ) else //test for the 2 case let last4 = lastExtension.[(lengthLastExtension-1)-3..(lengthLastExtension-1)] let isValid2 = valid2|>Seq.exists(fun validEnd -> last4.EndsWith( validEnd) ) if isValid2 then true else if lengthLastExtension <= 4 then not( last4.Contains(".") ) else let last5 = lastExtension.[(lengthLastExtension-1)-4..(lengthLastExtension-1)] let isValid3 = valid3|>Seq.exists(fun validEnd -> last5.EndsWith( validEnd) ) if isValid3 then true else if lengthLastExtension <= 5 then not( last5.Contains(".") ) else let last6 = lastExtension.[(lengthLastExtension-1)-5..(lengthLastExtension-1)] let isValid4 = valid4|>Seq.exists(fun validEnd -> last6.EndsWith( validEnd) ) if isValid4 then true else not( last6.Contains(".") ) && not(lastExtension.[0..5] = "mailto") //Create the correct links / -> add the homepage , make them a comparabel Uri let hrefLinksToUri ( uri:ComparableUri ) (hrefLinks:string list) = hrefLinks |> List.map( fun link -> try if Seq.length link <4 then Some(new Uri( uri, link )) else if link.[0..3] = "http" then Some(new Uri(link)) else Some(new Uri( uri, link )) with | _ as ex -> Console.WriteLine(link); lock error (fun () ->error<-error.Add uri) None ) |> List.filter( fun link -> link.IsSome ) |> List.map( fun o -> o.Value) |> List.map( fun uri -> new ComparableUri( string uri ) ) //Treat uri , 
removing ajax last part , and only following links specified b Benoit let linksToFollow (hrefUris:ComparableUri list) = hrefUris |>List.map( treatAjax ) |>List.filter( fun link -> followHref link ) |>List.map( fun uri -> new ComparableUri( string uri ) ) |>Set.ofList let needToVisit uri = ( lock visited (fun () -> not( visited.Contains uri) ) ) && (lock error (fun () -> not( error.Contains uri) )) let getLinksToFollowAsyncDelay (delay:int) ( uri: ComparableUri ) = async{ let! links = getHtmlPrimitiveAsyncDelay delay uri lock visited (fun () ->visited<-visited.Add uri) let linksToFollow = getHref links |> hrefLinksToUri uri |> linksToFollow |> Set.filter( needToVisit ) |> Set.map( fun link -> if uri.Authority=link.Authority then link else link ) return linksToFollow } //Add delays if visitng same authority let getDelay(uri:ComparableUri) (authorityDelay:Dictionary<string,int>) = let uriAuthority = uri.Authority let hasAuthority,delay = authorityDelay.TryGetValue(uriAuthority) if hasAuthority then authorityDelay.[uriAuthority] <-delay+1 delay else authorityDelay.Add(uriAuthority,1) 0 let rec getLinksToFollowFromSetAsync maxIteration ( uris: seq<ComparableUri> ) = let authorityDelay = Dictionary<string,int>() if maxIteration = 100 then Console.WriteLine("Finished") else //Unite by authority add delay for those we same authority others ignore let stopwatch= System.Diagnostics.Stopwatch() stopwatch.Start() let newLinks = uris |> Seq.map( fun uri -> let delay = lock authorityDelay (fun () -> getDelay uri authorityDelay ) getLinksToFollowAsyncDelay delay uri ) |> Async.Parallel |> Async.RunSynchronously |> Seq.concat stopwatch.Stop() Console.WriteLine("\n\n\n\n\n\n\nTimeElapse : "+string stopwatch.Elapsed+"\n\n\n\n\n\n\n\n\n") getLinksToFollowFromSetAsync (maxIteration+1) newLinks getLinksToFollowFromSetAsync 0 (seq[ComparableUri( "http://twitter.com/" )]) Console.WriteLine("Finished") Some feedBack would be great ! Thank you (note this is just something I am doing for fun)


  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements

    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:

    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine whether the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature reduces the time needed to get set up and operational.

    Lifecycle Automation Enhancements

    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine whether the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the time needed to get set up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: Automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define and edit his or her own mappings; users can use public mappings, but cannot edit or change their settings.
    - Test Checkout Command for Versions: Allows you to test and verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the destination path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standards Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package Readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced package search (that is, Public, Last Updated By, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, with better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply, supporting highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic checking for superseded patches allows users to easily add them to a patch run, producing more comprehensive and correct patch runs and removing many manual, laborious tasks, which frees up Apps DBAs for higher value-added work.
    - Automatic Primary Node Identification: Users can now specify the "primary node" (that is, the node that hosts the shared APPL_TOP) during the patch run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. This leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse them to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. This leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, providing native support for industry-standard formats.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers.

    User Experience Management Enhancements

    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support.

    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default). It is now also possible to view KPI numerator and denominator information and to have it available for export.
    - Content Message Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files but within the Reporter database (an alternative database can be configured for its storage), and access to the export data is through SQL. With this enhancement, it is easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, providing external health monitoring of the RUEI system processes.
    - Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; for dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). There is also a performance improvement when querying the all-sessions group.

    Technical Prerequisites, Download and Installation Instructions

    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:

    - Oracle Solaris on SPARC (64-bit) (9, 10)
    - HP-UX Itanium (11.23, 11.31)
    - HP-UX PA-RISC (64-bit) (11.23, 11.31)
    - IBM AIX on Power Systems (64-bit) (5.3, 6.1)


  • C++ AMP Video Overview

    - by Daniel Moth
    I hope to be recording some C++ AMP screencasts for Channel 9 soon (you'll find them through my regular screencasts link on the left), and in all of them I will assume you have watched this short interview overview of C++ AMP. Note: I think there were some technical problems with streaming, so it's best to download the "High Quality WMV" or switch to progressive format. Comments about this post by Daniel Moth are welcome at the original blog.


  • T-SQL Tuesday #005: Creating SSMS Custom Reports

    - by Mike C
    This is my contribution to the T-SQL Tuesday blog party, started by Adam Machanic and hosted this month by Aaron Nelson . Aaron announced this month's topic is "reporting" so I figured I'd throw a blog up on a reporting topic I've been interested in for a while -- namely creating custom reports in SSMS. Creating SSMS custom reports isn't difficult, but like most technical work it's very detailed with a lot of little steps involved. So this post is a little longer than usual and includes a lot of...(read more)



  • When a restore isn’t really complete

    - by John Paul Cook
    This week I discovered that restoring from a full backup doesn’t always restore SQL Server to the same state it was in when the backup was made. There are three settings that, if enabled, are not restored after a database restore. Thanks to Greg Low for pointing out that the list of affected settings is found in the SQL Server 2008 Upgrade Technical Reference Guide from which I quote: · is_broker_enabled · is_honor_broker_priority_on · is_trustworthy_on Detaching and attaching a database will also...(read more)
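    For reference, a quick way to check those three settings after a restore and switch them back on (YourDatabase is a placeholder):

        -- inspect the current state of the three affected settings
        SELECT name, is_broker_enabled, is_honor_broker_priority_on, is_trustworthy_on
        FROM sys.databases
        WHERE name = 'YourDatabase';

        -- re-enable each one as needed
        ALTER DATABASE YourDatabase SET ENABLE_BROKER;
        ALTER DATABASE YourDatabase SET HONOR_BROKER_PRIORITY ON;
        ALTER DATABASE YourDatabase SET TRUSTWORTHY ON;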


  • CRMIT Solutions' CRM++ Asterisk Telephony Connector Achieves Oracle Validated Integration with Oracle Sales Cloud

    - by Richard Lefebvre
    To achieve Oracle Validated Integration, Oracle partners are required to meet a stringent set of requirements that are based on the needs and priorities of customers. Based on a Telephony Application Programming Interface (TAPI) framework, the CRM++ Asterisk Telephony Connector integrates Asterisk telephony solutions with Oracle® Sales Cloud. "The CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud showcases CRMIT Solutions' focus and commitment to extend the Customer Experience (CX) expertise to our existing and potential customers," said Vinod Reddy, Founder & CEO, CRMIT Solutions. "Oracle® Validated Integration applies a rigorous technical review and test process," said Kevin O'Brien, senior director, ISV and SaaS Strategy, Oracle®. "Achieving Oracle® Validated Integration through Oracle® PartnerNetwork gives our customers confidence that the CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud has been validated and that the products work together as designed. This helps reduce deployment risk and improves the user experience for our joint customers." CRM++ is a suite of native Customer Experience solutions for Oracle® CRM On Demand, Oracle® Sales Cloud and Oracle® RightNow Cloud Service. With over 3,000 users, the CRM++ framework helps extend the Customer Experience (CX) and the power of Customer Relationship Management features, including Email WorkBench, Self Service Portal, Mobile CRM, Social CRM and Computer Telephony Integration.

    About CRMIT Solutions

    CRMIT Solutions is a pioneer in delivering SaaS-based customer experience (CX) consulting and solutions. With more than 200 certified customer relationship management (CRM) consultants and more than 175 successful CRM deployments globally, CRMIT Solutions offers a range of CRM++ applications for accelerated deployments, including various rapid implementation and migration utilities for Oracle® Sales Cloud, Oracle® CRM On Demand, Oracle® Eloqua, Oracle® Social Relationship Management and Oracle® RightNow Cloud Service.

    About Oracle Validated Integration

    Oracle Validated Integration, available through the Oracle PartnerNetwork (OPN), gives customers confidence that the integration of complementary partner software products with Oracle Applications and specific Oracle Fusion Middleware solutions has been validated, and that the products work together as designed. This can help customers reduce risk, improve system implementation cycles, and provide for smoother upgrades and simpler maintenance. Oracle Validated Integration applies a rigorous technical process to review partner integrations. Partners who have successfully completed the program are authorized to use the "Oracle Validated Integration" logo. For more information, please visit Oracle.com at http://www.oracle.com/us/partnerships/solutions/index.html.

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It’s always fun in the blogosphere, and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled Why Out-of-the-Box Makes No Sense in SharePoint, quickly followed by a rebuttal from Marc Anderson on his blog. Okay, now that we have all the players and the stage, what’s the big deal?
    Bjorn started his post by saying that you don’t use “out-of-the-box” (OOTB) SharePoint because it makes no sense. I have to disagree with his premise, because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he claims that modifying, say, the OOTB contacts list by removing (or I suppose adding) a column now puts you in a situation where you’re no longer using the OOTB functionality. Really?
    Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn’t the way to do this. Last time I checked, most people don’t know how to build houses, and people reading technical SharePoint blogs are generally technical people who understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant>
    Okay, where were we? Right: adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don’t get it.
    Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I’ve been leaning more and more towards OOTB solutions over custom development, for the simple reason that it’s expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open Visual Studio. This might be because my day job is with a regulated company and there’s more scrutiny when spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part, when perhaps with some creative thinking or expectation setting with customers you can meet the need with what you already have?
    The way I read Bjorn’s terminology of “out-of-the-box” is: install the software and tell people to go to a website and admire the OOTB system, but don’t change it! For those that know things like WordPress, DotNetNuke, SubText, or Drupal, it’s akin to installing the software and setting up the “Hello World” blog post or page, then staring at it like it’s useful. “Yes, we are using WordPress!” Then never adding a new post, creating a new category, or adding an About page. Perhaps I’m wrong in my interpretation.
    This leads us to: what is OOTB SharePoint? To many people I’ve talked to over the last few hours on Twitter, email, etc., it is *not* just installing the software but actually using it as fit for purpose. What’s the purpose of SharePoint, then? It has many purposes, but using the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web-based interface, and so on. Microsoft has pretty clear definitions of the different levels of SharePoint we’re talking about, and I think it’s important for everyone to know what they are and what they mean.
    Personalization and Administration
    To me, this is the OOTB experience. You install the product and are then able to do things like create new lists and sites, edit and personalize pages, create new views, and so on. Basically, you use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However, I would argue that anyone who has configured the Outlook reading layout or applied styles to a Word document probably won’t have too much difficulty using SharePoint OUT OF THE BOX.
    Customization
    Here’s where things might get a bit murky, but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into Web Part Pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here, claiming it’s no different from C#, and frankly you can totally screw a site up with jQuery on a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion), the customization label comes in when I need to access the server (for example, to create a custom theme) or add some net-new element to the system that wasn’t there OOTB. It’s not content (like a new list or site); it’s code, and it does something functional.
    Development
    Here’s where the propeller hats come on and we’re talking algorithms and unit tests and compilers, oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be specialized tools used to craft and deploy the solutions, there’s the possibility of exceptions being thrown, and so on. There are a lot of definitions here, and just like customization it can get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?).
    In my experience, it’s much more cost-effective to keep solutions under the first two umbrellas than to leap into the third. Arguably you could say that you can’t build useful solutions without *some* kind of code (even just some simple jQuery), but I think you can get a *lot* of value from the OOTB experience alone, and I don’t think you’re constraining your users that much. I’m not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they’re both correct “from a certain point of view”. To me, SharePoint out of the box makes total sense and should not be dismissed. I just don’t agree with the premise Bjorn bases his statements on, but that’s just my opinion, his is different, and never the twain shall meet.

    Read the article

  • Google Chrome Extensions: Launch Event (part 4)

    Video footage from the Google Chrome Extensions launch event on 12/09/09. Aaron Boodman and Erik Kay, technical leads for the Google Chrome extensions team, discuss the UI surfaces of Google Chrome extensions and the team's "content, not chrome" philosophy. They also highlight the smooth, frictionless install and uninstall process for Google Chrome's extensions system and present the team's initiatives in the space of security and performance. From: GoogleDevelopers | Views: 2963 | Ratings: 12 | Time: 15:44

    Read the article
