Search Results

Search found 11671 results on 467 pages for 'man pages'.


  • What is Site Rubix? Find the Answer Here!

    Online marketing has really taken off lately, and that comes as no surprise. Many people who currently work "for the man" have found that man to be untrustworthy, as so many have lost their jobs through cutbacks and downsizing.

    Read the article

  • Uninstalling MySQL for MariaDB Replacement on cPanel

    - by ImmortalFirefly
    Well, the first part of my day was spent researching how to remove MySQL so I could install MariaDB, and the second part of my day was spent trying to reinstall MySQL because something got messed up. So now I come to the masses for some help.

    I have a box with cPanel/WHM on it, CentOS 5.6 64-bit. I have upgraded MySQL (through WHM) to 5.5.24, and that was successful. After some research, the options I found were an intimidating Linux command full of pipes, greps, and dashes, and another command:

        yum remove mysql

    I tried that out and it appeared to remove MySQL... ish. I then tried installing MariaDB from its instructions page, and it started to do its thing until the zillions of errors came (here's a small sample):

        Transaction Check Error:
          file /etc/init.d/mysql from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysql_convert_table_format from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysql_install_db from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysql_secure_installation from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqlbug from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqld_multi from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqld_safe from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqldumpslow from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqlhotcopy from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/innochecksum.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/my_print_defaults.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/myisam_ftdump.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/myisamchk.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/myisamlog.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64

    So it appeared that MySQL wasn't removed correctly. I've read in tutorials on various sites that to install MariaDB you have to uninstall/remove MySQL first, but no commands were given for how to do this. Does anyone know how to "safely" remove MySQL on a WHM/cPanel server so that I can install MariaDB? Here's my repo file in case anyone needs to know:

        # MariaDB repository list - created 2012-07-10 17:09 UTC
        # http://downloads.mariadb.org/mariadb/repositories/
        [mariadb]
        name = MariaDB
        baseurl = http://yum.mariadb.org/5.5/centos5-x86
        gpgcheck=1
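    One approach that may help (a sketch only, untested against your box: the exact package names are taken from your error log and may differ, so verify with the query first) is to remove the cPanel-built MySQL RPMs directly with rpm, keep yum from pulling them back in, and then install MariaDB:

        # list the exact MySQL RPMs cPanel installed
        rpm -qa | grep -i mysql

        # remove them without dependency checks (name below is an example from the error log)
        rpm -e --nodeps MySQL-server-5.5.24-1.cp.1132.x86_64

        # keep yum from reinstalling MySQL: add "exclude=mysql* MySQL*" to /etc/yum.conf, then
        yum install MariaDB-server MariaDB-client
        service mysql start

    Back up /var/lib/mysql and my.cnf before any of this; MariaDB 5.5 can reuse the existing data directory, but a dump is the only safe rollback.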

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users alike. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about beginning SQL Server architecture. As this is a deep subject, this first article in the series covers only the basic terminology; future articles will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions, a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server training, and PRINCE2 Foundation.

    In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are the Relational Engine, the Storage Engine, and SQL OS. We will discuss and understand each one of them in turn.

    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the Storage Engine and processes the results returned. The tasks of the Relational Engine include query processing, memory management, thread and task management, buffer management, and distributed query processing.

    2) Storage Engine: The Storage Engine is responsible for the storage and retrieval of data on the storage system (disk, SAN, etc.). When we talk about any database in SQL Server, there are two types of files created at the disk level: a data file and a log file. The data file physically stores the data in data pages; the log file, also known as the write-ahead log, stores the transactions performed on the database. Let's look at both in more detail.

    Data File: The data file stores data in the form of 8 KB data pages, which are logically organized into extents.

    Extents: Extents are logical units in the database: a combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages (index, system, object data, etc.); uniform extents, on the other hand, are dedicated to only one type.

    Pages: Some of the page types SQL Server can store are:

        - Data page: holds the data entered by the user, except data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image, and xml.
        - Index page: stores index entries.
        - Text/Image page: stores LOB (large object) data such as text, ntext, varchar(max), nvarchar(max), varbinary(max), image, and xml.
        - GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): record information related to the allocation of extents.
        - PFS (Page Free Space): records information related to page allocation and the unused space available on pages.
        - IAM (Index Allocation Map): records information about the extents used by a table or index, per allocation unit.
        - BCM (Bulk Changed Map): keeps information about the extents changed in a bulk operation.
        - DCM (Differential Change Map): records the extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

    Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to roll back transactions if requested and to recover the database in case of failure. Write-ahead logging is used to create the log entries, transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.

    3) SQL OS: This layer lies between the host machine (Windows OS) and SQL Server, and all the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating-system layer with a powerful API, enabling automatic locality and advanced parallelism. SQL OS provides various operating-system services, such as memory management (covering the buffer pool and log buffer), deadlock detection using the blocking and locking structures, exception handling, and hosting of external components such as the Common Language Runtime (CLR).

    I guess this brief article gives you an idea of the various terms related to SQL Server architecture; in future articles we will explore them further.

    Guest author: Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
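    To make the page/extent arithmetic concrete, here is a small illustrative query (a sketch; it assumes SQL Server 2005 or later, where sys.allocation_units exposes the counts discussed above at 8 KB per page and 8 pages per extent):

        -- pages and approximate extents per user table
        SELECT o.name              AS table_name,
               au.type_desc,                        -- IN_ROW_DATA or ROW_OVERFLOW_DATA here
               au.total_pages,
               au.total_pages / 8  AS approx_extents,
               au.total_pages * 8  AS size_kb
        FROM sys.partitions p
        JOIN sys.allocation_units au ON au.container_id = p.hobt_id  -- in-row/row-overflow units
        JOIN sys.objects o           ON o.object_id = p.object_id
        WHERE o.is_ms_shipped = 0;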

    Read the article

  • Ajax in Zend Framework

    - by rookie
    Hi, I am new to Zend Framework. I am using $ajaxContext = $this->_helper->getHelper('AjaxContext'); to add action contexts. I have one index.phtml page, and all the other views are ajax.phtml pages. I have to run some JavaScript in the ajax.phtml pages, but I couldn't find a way to reference the JS files from them. I tried adding them in the controller's init() and index action using $this->view->headScript()->appendFile(); although the reference shows up in the page source, none of it seems to work on the ajax content. Then I tried adding it in the action for the ajax page, but then it doesn't appear in the page source at all. As far as I understand, $this->view->headScript()->appendFile() appends the file reference to the layout page, and for the ajax.phtml pages the layout is disabled. Is there any way I can reference my JS files in the ajax.phtml pages?
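    One workaround that follows from that observation (a sketch; the file path is hypothetical): since the ajax.phtml views render without the layout, nothing ever echoes the headScript container, so you can append and echo it inside the ajax view itself:

        <?php // at the top of e.g. index.ajax.phtml ?>
        <?php $this->headScript()->appendFile('/js/my-ajax-page.js'); ?>
        <?php echo $this->headScript(); ?>
        <!-- ...the rest of the ajax view's markup... -->

    Note that scripts injected this way into ajax-loaded HTML only execute if the response is inserted in a way that evaluates script tags (jQuery's .html() does, for example).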

    Read the article

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here.

    We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users, and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed.

    One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria: it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress).

    The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.

    The Web Query feature was introduced, ostensibly, to allow Excel to pull in data from Internet Web pages; for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the "From Web" button towards the left; in older versions, use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you'd like to import. Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties... button and configure the data import to be refreshed at an interval in minutes that you select. Now your data's in the spreadsheet and ready to be worked with. Your data may be vulnerable to modification, but if you've set up the data refresh, any accidental or malicious changes will be corrected in time anyway.

    The thing about this feature is that it's most useful not for public Web pages, but for pages behind the firewall. In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that's "published" from an application. Users just need a URL. They don't need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that's not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data.

    The only requirement is that the data must be provided in an HTML table, with the first row providing the column names. From an ASP.NET project, it couldn't be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don't even have to write any code. Users can link to pages that are part of an application's UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards.

    This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an "Astoria" WCF Data Service. And while it's cool that PowerPivot and Excel 2010 can import such OData feeds, it's good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that "older versions" of Excel support this feature, I'll be more specific: to discover which version of Excel was the first to support Web Queries, go to http://bit.ly/OldSchoolXL.
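    As a concrete illustration of the server side (a sketch with a hypothetical schema and connection details), any page that emits a plain HTML table with the column names in the first row is a valid Web Query target; in PHP, for example:

        <?php
        // emit a query's result set as an HTML table for Excel's Web Query feature
        $pdo  = new PDO('mysql:host=localhost;dbname=sales', 'user', 'pass');
        $rows = $pdo->query('SELECT region, revenue FROM quarterly_sales');

        echo "<table>\n<tr><th>Region</th><th>Revenue</th></tr>\n";
        foreach ($rows as $r) {
            printf("<tr><td>%s</td><td>%s</td></tr>\n",
                   htmlspecialchars($r['region']), htmlspecialchars($r['revenue']));
        }
        echo "</table>";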

    Read the article

  • ASP.NET MVC partial views: how to initialise JavaScript

    - by Simon G
    Hi, I have an edit form that uses an Ajax form to submit to the controller. Depending on the data submitted, I redirect the user to one of two pages (by returning a partial view). Both pages rely on JavaScript/jQuery, and they have nothing in common with each other. What is the best way to initialise these scripts on each page? I know there is the AjaxOptions OnComplete callback, but both pages are quite dynamic depending on the model passed, and I would rather keep the JavaScript for the two pages separate than write a common method. Thanks
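    One pattern that keeps the scripts separate (a sketch; the element ID and function names are hypothetical): have each partial mark its root element with a data- attribute, and let a single OnComplete handler dispatch to the matching initialiser:

        // OnComplete = "initLoadedPartial" in the AjaxOptions
        function initLoadedPartial() {
            var $root = $('#result-pane').children().first();
            switch ($root.data('init')) {       // partial's root carries data-init="pageA" or "pageB"
                case 'pageA': initPageA($root); break;
                case 'pageB': initPageB($root); break;
            }
        }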

    Read the article

  • PHP Menu Question

    - by Vecta
    As one of the steps toward a greater website redesign, I am putting the majority of the content of our website into HTML files to be used as includes. I intend to pass a variable to the PHP template page through the URL to call the proper include. Our website has many programs, and each needs an index page as well as about 5 sub-pages; these program pages will need a menu system to navigate between the different pages. I am naming the pages pagex_1, pagex_2, pagex_3, etc., where "pagex" is descriptive of the page content. My question is: what would be the best way to handle this menu system? Is there a way to modify the initial variable used to arrive at the index page, to create links in the menu to the other pages? Thanks for any help!
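    One way to do it (a sketch; the ?page= parameter, file layout, and hard-coded count of 5 are assumptions based on your naming scheme): since every page in a set shares the same base name, the template can split the incoming variable on its last underscore and rebuild links to the sibling pages:

        <?php
        // template.php?page=pagex_2  ->  include pagex_2.html, link to pagex_1..pagex_5
        $page = basename($_GET['page']);      // basename() blocks ../ path traversal
        $pos  = strrpos($page, '_');
        $base = substr($page, 0, $pos);
        $num  = (int) substr($page, $pos + 1);

        for ($i = 1; $i <= 5; $i++) {
            $label = ($i == $num) ? "<strong>Part $i</strong>" : "Part $i";
            echo "<a href=\"template.php?page={$base}_{$i}\">$label</a> ";
        }
        include "includes/{$page}.html";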

    Read the article

  • How to change the page author in WordPress?

    - by GaVrA
    Of course I know how to do that, but the thing is that I am using the "User Role Editor" plugin, and I have one user group that can read and edit published pages. Now, I will be adding all the pages on the site, and we will have several users who each need only one page they can edit, so I need to change the page author of that page to that user. In case you didn't know, when users have "Edit published pages" enabled, they can edit only pages where they are listed as the author. The problem is I can only do that by going into phpMyAdmin and changing the post_author field to the ID of that user, because that user group, like I said, can only read and edit published pages; that is why I cannot change the page author from the "Edit page" screen to a user from that group. So my question is: does anyone know a solution to this problem that does not involve me going into phpMyAdmin and changing the page author's ID there?
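    For what it's worth, one phpMyAdmin-free route (a sketch; it assumes you can run a one-off snippet from a small plugin or your theme's functions.php, and the two IDs are placeholders): WordPress's own wp_update_post() will reassign the author directly:

        <?php
        // one-off: reassign a page to the user who should be able to edit it
        wp_update_post( array(
            'ID'          => 123,   // the page's post ID (placeholder)
            'post_author' => 456,   // the target user's ID (placeholder)
        ) );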

    Read the article

  • Best way to manage connection strings in a project containing both Classic ASP and ASP.Net 1.1 code?

    - by JamesEggers
    I have inherited a project that is primarily a Classic ASP application; however, intermixed in the application are a handful of ASP.NET pages. Some of the ASP.NET pages are 1.1 and do not use a code-behind model. The Classic ASP pages have a number of /include directories where there's a file for database connections, while the ASP.NET pages have the connection string hard-coded into their code. I'm trying to clean up this mess of connection strings so it's easier to manage across development environments. Does anyone have any recommendations on how I might do this effectively, in a way that works for both Classic ASP and ASP.NET pages? Thanks
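    One low-effort consolidation (a sketch; the "MainDB" key is a placeholder): let the Classic ASP pages keep their single /include connection file, and move the ASP.NET 1.1 pages' hard-coded strings into Web.config's appSettings, read through the 1.1-era ConfigurationSettings API, so each side has exactly one place to edit per environment:

        // Web.config:
        //   <appSettings>
        //     <add key="MainDB" value="server=...;database=...;uid=...;pwd=..." />
        //   </appSettings>
        using System.Configuration;

        public class Db
        {
            // single accessor for every ASP.NET page, with or without code-behind
            public static string ConnString
            {
                get { return ConfigurationSettings.AppSettings["MainDB"]; }
            }
        }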

    Read the article

  • Flex 3: passing DataGrid data to a function

    - by codeworxx
    Hey guys, I have a problem sending a value from the DataGrid to a function. This is my function:

        private function browseLoc( location:String ):void {
            Alert.show(location, 'Information');
        }

    Now I have my DataGrid, which receives information from an XML file. Everything works fine, and all information is shown correctly with these tags:

        <mx:Image x="10" y="346" width="157" height="107" scaleContent="true" source="{codeworxx.pages.page[selectedPageIndex].preview}"/>
        <mx:Label x="10" y="492" width="157" fontWeight="bold" text="{codeworxx.pages.page[selectedPageIndex].visible}"/>
        <mx:Text x="10" y="513" width="157" text="{codeworxx.pages.page[selectedPageIndex].description}"/>
        <mx:Button x="10" y="461" label="Visit Website" width="159" click="browseLoc('{codeworxx.pages.page[selectedPageIndex].url}')"/>

    ...except the button. The function browseLoc only receives the text {codeworxx.pages.page[selectedPageIndex].url}, not the value. How do I do it? Hope you can help! Thanks, Sascha
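    A likely fix, for what it's worth: binding braces only apply inside property values, while a click attribute already contains ActionScript, so the quoted '{...}' arrives as a literal string. Passing the expression directly should hand the function the actual value:

        <mx:Button x="10" y="461" label="Visit Website" width="159"
                   click="browseLoc(codeworxx.pages.page[selectedPageIndex].url)"/>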

    Read the article

  • Help me understand Rails eager loading

    - by aaronrussell
    I'm a little confused as to the mechanics of eager loading in ActiveRecord. Let's say a Book model has many Pages and I fetch a book using this query:

        @book = Book.find book_id, :include => :pages

    Now this is where I'm confused. My understanding is that @book.pages is already loaded and won't execute another query. But suppose I want to find a specific page; what should I do?

        @book.pages.find page_id
        # OR...
        @book.pages.to_ary.find { |p| p.id == page_id }

    Am I right in thinking that the first example will execute another query, therefore making the eager loading pointless, or is ActiveRecord clever enough to know that it doesn't need to do another query? Also, a second question: is there an argument that in some cases eager loading is more intensive on the database, and that sometimes multiple small queries will be more efficient than a single large query? Thanks for your thoughts.
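    For reference, a quick sketch of the distinction (Rails 2-era behaviour assumed; detect is the Enumerable equivalent of your to_ary.find and avoids the extra query):

        @book = Book.find(book_id, :include => :pages)

        @book.pages.find(page_id)                    # association finder: issues a new SELECT
        @book.pages.detect { |p| p.id == page_id }   # scans the already-loaded array, no query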

    Read the article

  • Please suggest the best way to design my database

    - by Raymond Ho
    I have a table named "Pages" and a table named "Categories". Each entry of the table "Pages" is linked to the table "Categories". The "Categories" table have 5 entries, they are: "Car", "Websites", "Technology", "Mobile Phones", and "Interest". So each time I put an entry to the "Pages" table, I need to map it to the "Categories" table so are arranged properly. Here's my table: Pages ______ id [PK] name url Categories ______ id [PK] Categoryname Pages2Categories ______ Pages.id Categories.id So my question is, is this the most efficient way to create this kind of relationships between tables? It seems very amateur

    Read the article

  • Namespace with index action in Rails

    - by yuval
    I have an admin controller located at /controllers/admin/admin_controller.rb. I also have a pages controller located at /controllers/admin/pages_controller.rb. In my routes.rb file, I have the following:

        map.namespace :admin do |admin|
          admin.resources :pages
        end

    When the user goes to localhost:3000/admin, I'd like them to see a page with a link to /admin/pages (the Pages CRUD) and to / (to go back home). Since I am using a namespace, I cannot have an index action for /admin. How would I get this done while still keeping my controllers inside my /controllers/admin folder (rather than using admin as a map.resources component with a has_many association to pages)? Please note I am only interested in the show action of admin. Thank you!
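    One sketch that keeps everything under app/controllers/admin/ (Rails 2 routing assumed; 'admin/admin' points at Admin::AdminController and only its show action is exposed):

        map.namespace :admin do |admin|
          admin.resources :pages
        end

        # named route: GET /admin  ->  Admin::AdminController#show
        map.admin 'admin', :controller => 'admin/admin', :action => 'show'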

    Read the article

  • How to integrate "basic" website into Zend Framework

    - by Joel
    Hi guys, I have a website that has around 10 pages. Only one of those pages uses Zend (to integrate with Google GData). Right now it's all coded into that one page, but I want to learn how to use Zend Framework properly. How do you handle basic, relatively static PHP pages within Zend Framework? Do you just stick each individual page into its own view, put the common stuff in the layout, and not worry about a model or controller for those pages? And in general, is MVC an accepted and appropriate technology for general web-design work?

    Read the article

  • In Drupal, can you control block display according to e.g. number of URL parts?

    - by james6848
    I'm having a little trouble controlling page-specific block display in Drupal. My URLs have this typical structure: http://www.mysite.co.uk/section-name/sub-page/sub-sub-page. The 'section-name' part is effectively fixed, but there will be many sub-pages (far too many to reference explicitly). I need to control block display as follows: one block should show on all pages whose URL contains 'section-name/sub-page' but not on 'section-name/sub-page/sub-sub-page' pages; conversely, another block should show on all pages whose URL contains 'section-name/sub-page/sub-sub-page' but not on 'section-name/sub-page' pages. My only idea is a bit of PHP that looks for the string 'section-name' and then counts URL parts (or even the number of slashes), but I'm not sure how to implement that :) Your help would be appreciated!
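    A sketch of that PHP snippet, for the block's PHP visibility setting (Drupal 6-style assumed; 'section-name' stands in for your real section): counting path parts distinguishes the two depths:

        <?php
        // show this block on section-name/sub-page only (not deeper)
        $path  = drupal_get_path_alias($_GET['q']);   // aliased path, e.g. "section-name/foo"
        $parts = explode('/', $path);
        return $parts[0] == 'section-name' && count($parts) == 2;
        // the sibling block returns ... && count($parts) == 3 instead
        ?>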

    Read the article

  • Python: iterating an object that contains a list of objects

    - by nerd
    First time poster, long time reader. Is it possible to iterate through an object that contains a list of objects? For example, I have the following class:

        class Page(object):
            def __init__(self, name):
                self.name = name
                self.pages = []

    I then create a new Page object and add other Page objects to it:

        page = Page('FirstPage')
        apagepage = Page('FirstChild')
        anotherpagepage = Page('SecondChild')
        apagepage.pages.append(Page('FirstChildChild'))
        apagepage.pages.append(Page('SecondChildChild'))
        page.pages.append(apagepage)
        page.pages.append(anotherpagepage)

    What I would like to do is:

        for thispage in page:
            print thispage.name

    and get the following output:

        FirstPage
        FirstChild
        SecondChild
        FirstChildChild
        SecondChildChild

    So I get all of the 1st level, then the 2nd, then the 3rd. However, the following output would be fine as well:

        FirstPage
        FirstChild
        FirstChildChild
        SecondChildChild
        SecondChild
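    One way to get the first (level-by-level) output is a sketch like this (Python 2 style, to match the question): give Page an __iter__ method that does a breadth-first walk; swapping the deque for a stack, or recursing, gives the second, depth-first ordering.

        from collections import deque

        class Page(object):
            def __init__(self, name):
                self.name = name
                self.pages = []

            def __iter__(self):
                # breadth-first: yield this page, then its children level by level
                queue = deque([self])
                while queue:
                    current = queue.popleft()
                    yield current
                    queue.extend(current.pages)

    With that in place, "for thispage in page: print thispage.name" prints FirstPage, FirstChild, SecondChild, FirstChildChild, SecondChildChild.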

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010, and Office 365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases needs to go through DTAP. If you dissect a publishing site, in most cases you will find the following:

        - The publishing site spans a site collection.
        - The branding of the site is specified in the root site, because:
            - master pages live in the root site (/_catalogs/masterpage)
            - page layouts live in the root site (/_catalogs/masterpage)
            - the Style Library lives in the root site (/Style Library) and contains images, CSS, JavaScript, and XSLT transformations for your CQWPs
            - preconfigured web parts live in the root site (/_catalogs/wp)
        - The root site and subsites contain a document library called Pages (or your language-specific version of it), containing publishing pages that use the page layouts and master pages.
        - The site collection contains content types, fields, and lists.

    When using the SharePoint.DesignFactory.ContentFiles tooling, it is very easy to create, test, package, and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way, without the need to create and deploy WSP packages. Looking at the list above, we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the Style Library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI).

    Some artifacts in the list, such as content types, fields, and lists, can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can't be uploaded to the SharePoint content database. The good thing is that these are the artifacts that don't change much during the development of a SharePoint Publishing solution. There are, however, multiple ways to create these artifacts:

        1. Use paper script: create them manually in each of the environments, based on documentation.
        2. Automate the creation of the artifacts using (PowerShell) script.
        3. Develop a WSP package to create these artifacts.

    I'm not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields, and list definitions using all kinds of XML files, and it is not allowed to modify these artifacts when they are in use. I know... SharePoint 2010 has some content-type upgrade possibilities, but I think it is just too cumbersome.

    The first option has the problem that content types and fields get IDs, and these IDs must be used by the metadata on, for example, page layouts. That is no problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolution of these IDs using PowerShell. For example, consider the following metadata definition for the page layout contactpage-wcm.aspx (contactpage-wcm.aspx.properties.ps1):

        # This script must return a hashtable @{ name=value; ... } of field name-value pairs
        # for the content file that this script applies to.
        # On deployment to SharePoint, these values are written as fields in the corresponding
        # list item (if any). Note that fields must exist; they can be updated but not created
        # or deleted. This script is called right after the file is deployed to SharePoint.
        # You can use the script parameters and arbitrary PowerShell code to interact with
        # SharePoint, e.g. to calculate properties and values at deployment time.

        param([string]$SourcePath, [string]$RelativeUrl, $Context)

        @{
            "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
            "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
            "PublishingHidden" = "1";
            "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
        }

    The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can, at deploy time, resolve the required information from the server we are deploying to. I personally prefer the second option, automating creation through PowerShell, because there are PowerShell scripts available to export content types and fields. A typical SharePoint WCM site project follows this structure (note that the example project uses DualLayout).

    So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!

    Read the article

  • WebCenter Customer Spotlight: Ancestry.com

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Ancestry.com Inc. is the largest for-profit genealogy company in the world. It operates a network of genealogical and historical record websites focused on the U.S. and nine foreign countries, develops and markets genealogical software, and offers a wide array of genealogy-related services. As of June 2012, the company provided access to more than 10 billion records and 38 million family trees, and had 2 million paying subscribers. Their main business challenges were to improve time to market and the agility to respond quickly to fast-changing Internet waves, while integrating with their existing content (4 petabytes) and legacy systems. Ancestry.com implemented Oracle WebCenter Sites as their web experience management system for their landing pages and marketing microsites, added dynamic sections to their existing websites, and integrated the existing content and legacy systems through web services. The landing pages and marketing sites are now managed by the business team without any involvement of engineering resources, managed content can quickly be added to existing pages without refactoring the whole page, and the existing content (4 petabytes) is now served through Oracle WebCenter Sites without having to migrate from the existing systems.

    Company Overview
    Ancestry.com Inc. is a publicly traded Internet company (NASDAQ: ACOM) based in Provo, Utah, USA. The largest for-profit genealogy company in the world, it operates a network of genealogical and historical record websites focused on the U.S. and nine foreign countries, develops and markets genealogical software, and offers a wide array of genealogy-related services.

    Business Challenges
    Ancestry's main business challenge was to respond quickly to fast-changing Internet waves. Product marketing could not change website content without going through development, and dedicated developers were needed just to support marketing efforts.

        Technical requirements:
        - support current systems and environments (ASP.NET, MVC.NET, Java, JSP, PHP)
        - scalable and manageable for a worldwide network

        Marketing requirements:
        - easy content entry, without needing a degree in HTML
        - scheduling of content: control when content is visible to users

        Product requirements:
        - easy content management: see when content is out of date
        - rotation of content: produce new content as old content expires

    Solution Deployed
    Ancestry implemented Oracle WebCenter Sites as their web experience management system to manage their landing pages and marketing microsites. These sites are fully managed by the business team without the involvement of any engineering resources. The integration with their existing websites is done through spot management, which allows dynamic content to be added to certain sections of a web page; that dynamic content is managed by Oracle WebCenter Sites. The integration with the existing content (4 petabytes!) is done through a custom content-provider interface, which allows existing content to be mixed with content from Oracle WebCenter Sites.

    Business Results
    Ancestry.com has achieved the following impressive business results:
        - landing pages and marketing sites are now managed by the business team without any involvement of engineering resources
        - managed content can quickly be added to existing pages without having to refactor the whole page
        - access to the existing content (4 petabytes) is provided without having to migrate from the existing systems

    Additional Information
        - Ancestry Webcast
        - Oracle WebCenter Sites

    Read the article

  • Crawler does not create custom crawled properties

    - by user173739
    These days I am facing a very strange problem. I have a development environment with MOSS 2007 SP2 and Windows Server 2008; search is configured and everything works great. I then started configuring a staging environment (MOSS 2007 SP2 with the June CU), creating a new farm and a new SSP. I deployed my changes with a package (WSP) and manually created site collections, subwebs, pages, and so on. When the full crawl finishes, I see in the crawl log that all my pages have been successfully crawled, and when I use some test tools to query search, my pages are found. In the crawl log there are a few errors like http://mysite/sites/de/pages "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly.", but all pages in this Pages library were indexed. The problem is that I use custom managed properties (mapped to custom crawled properties) in search queries, but the crawler didn't create crawled properties for all my new site columns. For example, for the site column IsAccent the crawler didn't create the crawled property ows_IsAccent. I'm sure that I created pages for the specific content type and that all my crawl categories have "Automatically discover new properties when a crawl takes place" checked. In Site Settings - Searchable Columns, I don't have any column selected as NoCrawl. I tried to export my managed and crawled properties from the dev environment to the stage environment, but all my managed properties were empty; after that I recreated the SSP... the result was the same. I checked a specific page with tools like SharePoint Manager 2007 and U2U CAML Query Builder 2007; the content type is correct, and I can see the values of my custom site columns. Using U2U CAML Query Builder 2007 against a Pages library, in the Result tab I can see ows_IsAccent (my site column is IsAccent) and other site columns, but I can't find them in the crawled properties. Any ideas?

    Read the article

  • In Wicket, combine wicket:link with IAuthorizationStrategy

    - by seanizer
    Hi everybody. I use an IAuthorizationStrategy in Wicket to limit access to certain pages. However, I also use HTML menus like this one, which are automatically picked up and expanded through the wicket:link mechanism (as in http://wicket.apache.org/examplenavomatic.html ):

        <div class="siteMenu">
            <wicket:link>
                <a href="Page1.html" class="siteMenuLink">
                    <wicket:message key="pages.page1.title" />
                </a>
                <a href="Page2.html" class="siteMenuLink">
                    <wicket:message key="pages.page2.title" />
                </a>
                <a href="Page3.html" class="siteMenuLink">
                    <wicket:message key="pages.page3.title" />
                </a>
            </wicket:link>
        </div>

    However, the IAuthorizationStrategy may not allow one or more of these target pages, so I may end up either with lots of links that lead to "permission denied" pages or with lots of deactivated links (i.e. em tags or the like), neither of which is pretty. I could of course write an IComponentInstantiationListener that checks all BookmarkableLinks to see whether their target is accessible through the IAuthorizationStrategy and renders them invisible otherwise, but I wonder if there is an out-of-the-box solution to this problem. For clarification: I only use the isInstantiationAuthorized() method of IAuthorizationStrategy.
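    Failing an out-of-the-box option, a sketch of the listener you describe (Wicket 1.4-era API assumed; wicket:link autolinks become BookmarkablePageLink instances, so one listener can hide every unauthorized link):

        import org.apache.wicket.Application;
        import org.apache.wicket.Component;
        import org.apache.wicket.application.IComponentInstantiationListener;
        import org.apache.wicket.authorization.IAuthorizationStrategy;
        import org.apache.wicket.markup.html.link.BookmarkablePageLink;

        // register in Application.init():
        //   addComponentInstantiationListener(new HideUnauthorizedLinks());
        public class HideUnauthorizedLinks implements IComponentInstantiationListener {
            public void onInstantiation(Component component) {
                if (component instanceof BookmarkablePageLink<?>) {
                    BookmarkablePageLink<?> link = (BookmarkablePageLink<?>) component;
                    IAuthorizationStrategy auth =
                        Application.get().getSecuritySettings().getAuthorizationStrategy();
                    if (!auth.isInstantiationAuthorized(link.getPageClass())) {
                        link.setVisible(false);   // hide instead of rendering a dead link
                    }
                }
            }
        }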

    Read the article

  • How to display a UserControl without satisfying all imports

    - by Yost
    Dear SO, I'm having some issues with Silverlight 4 / MEF. I have a basic framework set up, with a Silverlight navigation app at the core (there was an image link to a diagram here for clarification). The main app (Desu) contains some pages and controls that export and import nicely. I dynamically load controls from Desu.Controls (like an image viewer, which I identify with the IImageViewer interface) and some pages from Desu.Pages. The first problem I had was with dynamically loading pages and being able to navigate to them (e.g. using a dummy URL like http://blagh/desutestpage.aspx#/Activation when Desu.Pages was loaded from the XAP). I solved this with a custom meta attribute and a custom content loader. Now for the question part: I want to load the ImageViewerControl from Desu.Controls in HomePage in Desu, but I haven't loaded Desu.Controls into the package. When I try to load the control, it throws a CompositionException because it can't satisfy the ImageViewControl import. I tried setting AllowRecomposition=true, but that didn't help. So: is it possible to load a control without satisfying all imports, and if yes, how does one do this?
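    What you describe sounds like a job for MEF's optional imports (a sketch; the property shape is assumed from your description): marking the import AllowDefault lets composition succeed with a null value until the Desu.Controls XAP is actually downloaded, and AllowRecomposition fills it in afterwards:

        [Import(AllowDefault = true, AllowRecomposition = true)]
        public IImageViewer ImageViewer { get; set; }   // null until Desu.Controls arrives

        // guard the use site:
        // if (ImageViewer != null) { /* display it */ }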

    Read the article

  • Doxygen: grouping documentation by folder in a multi-project codebase

    - by John
    In one project, some pages were added; it was a new project in which doxygen was being tested properly, adding comments and pages, rather than simply auto-generating docs from our existing code base. The problem is that when doxygen is run on the main code base, that project's pages show up at the top level, e.g. they have a main page and some sub-pages. What we'd want is all those pages pushed down one level, so the hierarchy runs main project first and then that project's pages. One question is what happens if multiple projects have a main page: do they get combined, or do they throw errors? Another question is whether you can tell doxygen to use paths and containing folders to auto-generate groups, sections, or page hierarchies in some way. Going through all our projects to properly assign classes to groups is a mammoth task, so ideally everything in a directory would get put into a group of that name, as a way to make the documentation of non-doxygen code bases better. Sorry my question is a bit vague; the problem is that even after reading the docs the terminology isn't totally clear yet. Hopefully the kind of question I'm asking is clear; if not, I'll try to cobble an example file structure together.
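    For the nesting part, a sketch of the mechanism doxygen itself provides (the page names here are made up): demote the sub-project's \mainpage to a named \page and reference it from the top level with \subpage, which pushes it, and everything under it, down one level:

        /** \mainpage Main code-base documentation
         *
         *  Sub-project documentation:
         *  - \subpage subproject_pages
         */

        /** \page subproject_pages Sub-project pages
         *  (this block was the sub-project's \mainpage)
         */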

    Read the article

  • MySQL InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The first database has a table 'pages' that stores various information about web pages, including the URL of each one. The 'URL' column is a varchar(512) and has no index. The second database has a table 'urlHops' defined as:

        CREATE TABLE urlHops (
          dest varchar(512) NOT NULL,
          src varchar(512) DEFAULT NULL,
          timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          KEY dest_key (dest),
          KEY src_key (src)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now, I basically need to issue (efficiently) queries like this:

        select p.id, p.URL from db1.pages p, db2.urlHops u where u.src = p.URL and u.dest = ?

    At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on that table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought of are:

        - adding a column to pages that stores the MD5 hash of the URL, and indexing it; this way I could query using the MD5 of the URL, with the advantage of an index on a smaller column;
        - adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, with the only advantage of not slowing down the inserts and updates I execute on pages.

    I don't want to slow down the inserts and updates, but at the same time I want to be able to query on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards, Davide
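    For the record, a sketch of the MD5 variant (the column and index names are made up; MySQL's MD5() returns a 32-character hex string, so CHAR(32) fits, and BINARY(16) with UNHEX(MD5(...)) would be smaller still):

        ALTER TABLE pages ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '';
        UPDATE pages SET url_md5 = MD5(URL);
        ALTER TABLE pages ADD INDEX idx_url_md5 (url_md5);

        -- the join then filters urlHops via dest_key and reaches pages via idx_url_md5
        SELECT p.id, p.URL
        FROM db2.urlHops u
        JOIN db1.pages  p ON p.url_md5 = MD5(u.src)
        WHERE u.dest = ?;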

    Read the article

  • MySQL database question about large columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of that goes to one particular column, which is a text column holding PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup.

    1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on the table where we store the text versions of the PDF files (from a memory usage/performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and in a couple of months, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text versions of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds and uses less memory.

    Do you think this is a good trade-off? We would have millions of rows instead of 100k-200k, but it would save memory and improve performance. Is this a good approach to the problem, and do you have any ideas on how else to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,

    Read the article

  • Webpage shared with Like button not showing in users' timelines

    - by einar
    I have single pages and one page that lists all of the single pages. On the overview page you can Like each single page via its URL, and of course you can do the same when you are viewing a single page. My issue is very strange: most of the pages show up on my timeline when I Like them, but some don't appear after I click Like, not even if I click "Post to Facebook". This page will not show in a user's timeline when Liked or posted to Facebook: http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/valdimar/ . But this one will: http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/snorri-helgason/ . Yet these pages are exactly the same; they use the same template, so the code should not be any different, and in fact I can't see any difference between the pages that could be causing this kind of problem. You can view the overview page here: http://www.inspiredbyiceland.com/inspiration/iceland-airwaves/ . Most of the single pages work fine and show up in users' timelines, and most of the content on the site works fine as far as I know. There is a Facebook application defined on the page; I'm not sure if that is related to this problem.

    Read the article
