Search Results

Search found 3757 results on 151 pages for 'cms migration'.


  • Drupal 6: creating location list manually or dynamically via cms...

    - by artmania
    Hi friends, I'm starting my first Drupal project, pretty excited :) I have a question; the project is a hotel directory site. In the sidebar I have a list of locations (London, Manchester, Liverpool, etc.), and clicking a location should filter the hotels for it. So, how should I create these cities? Should I enter them manually and hard-code the links by location ID, or is there a better way to create this location list and do the filtering dynamically (via the CMS, a custom module, etc.)? Advice appreciated!
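
    One common way to do this in Drupal 6 is a taxonomy vocabulary rather than hand-made links: tag each hotel node with a "Location" term, and core then gives you a taxonomy/term/<tid> page per city listing the matching hotels, so the sidebar links filter automatically. A minimal sketch, assuming a hotel node type with the hypothetical machine name 'hotel' (e.g. in a small custom module's install hook):

      <?php
      // Create a "Location" vocabulary and attach it to the hotel content type.
      $vocabulary = array(
        'name'  => 'Location',
        'nodes' => array('hotel' => 'hotel'),
      );
      taxonomy_save_vocabulary($vocabulary); // fills in $vocabulary['vid']

      // Add the cities as terms; each term gets a taxonomy/term/<tid> page
      // that lists every hotel tagged with it.
      foreach (array('London', 'Manchester', 'Liverpool') as $city) {
        $term = array('vid' => $vocabulary['vid'], 'name' => $city);
        taxonomy_save_term($term);
      }

    The sidebar list itself can then just be links built with l($city, 'taxonomy/term/' . $tid), or generated by a contributed module such as Views.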

    Read the article

  • Which kind of changes can't I do with lightweight migration in Core Data?

    - by dontWatchMyProfile
    I recently tried a lot of different things with lightweight migration. These all work:
    1) rename attributes (with the renaming identifier specified),
    2) add attributes,
    3) add a new entity plus a new attribute and an inverse relationship to an already existing entity,
    4) remove an existing entity and the relationships to that entity.
    It almost looks like just about anything can be handled with LM. Did I miss something? In which cases will I get into trouble and need a more complex approach?
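
    For reference, lightweight migration is what you get when the store is added with the automatic-migration options; a minimal Objective-C sketch (the coordinator and store URL are placeholders):

      NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
          [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
          [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
          nil];
      NSError *error = nil;
      // Core Data infers the mapping model from the two model versions; if the
      // change set is too complex to infer, this call fails and a custom mapping
      // model (NSMappingModel + NSMigrationManager) is needed instead.
      [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                                configuration:nil
                                          URL:storeURL
                                      options:options
                                        error:&error];

    Broadly, pure schema changes tend to be inferable, while changes that require transforming the stored data (merging or splitting entities, deriving new values) are where a custom mapping model becomes necessary.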

    Read the article

  • What is a good cms that is postgres compatible, open source and either php or python based?

    - by hackg
    Requirements:
    - PHP or Python
    - Able to use and connect to our existing Postgres databases
    - Open source, or very low license fees
    - Common CMS features, with admin tools to help manage/moderate the community
    We have a large member base on a very basic site where members provide us contact info and info about their professional characteristics. We are about to expand and build a new community site (to migrate our member base to) where users will be able to message each other, post to forums, blog, and share private group discussions, and members will be sent invitations to earn compensation for their expertise. Profile pages, job postings, and video chat would be a plus. We already have a team of admins savvy with web apps to help manage it, but our developer resources are limited (3-4 programmers), so we are looking to save development time as opposed to building our new site from scratch.

    Read the article

  • how to call functions/methods within CMS block or page?

    - by latvian
    Hi, we are trying to make all our blocks and pages static so that a designer or anyone else can easily change the content or design of the website. However, there is a feature that uses our own custom module, so the template that we want to make static calls methods on our custom block, for example:

      <!-- some html code -->
      .....
      <?php $this->helpMeBePartOfCMS(); ?>
      .....
      <!-- some html code -->

    How do I incorporate these method calls inside a CMS block or page? Thank you
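
    Assuming this is Magento (where "CMS block" and "CMS page" are the usual terms): CMS content cannot contain raw PHP, but it can pull in a layout block via the {{block}} directive, and that block's template is free to call your module's methods. A sketch, assuming a hypothetical custom module alias 'mymodule' whose block class exposes helpMeBePartOfCMS():

      {{block type="mymodule/feature" template="mymodule/feature.phtml"}}

    with mymodule/feature.phtml containing the original template code:

      <!-- some html code -->
      <?php echo $this->helpMeBePartOfCMS(); ?>
      <!-- some html code -->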

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB; SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database; SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database.

    Many SQL Authority readers have been following me on my journey to evaluate NuoDB. One of the frequently asked questions I've received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database, it dumps data from the source database, and it loads data into the target NuoDB database. Let's see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in MSSQL and later migrate that same database to NuoDB.

    Setup Step 1: Build sample data

      CREATE DATABASE [Test];
      CREATE TABLE [Department](
        [DepartmentID] [smallint] NOT NULL,
        [Name] VARCHAR(100) NOT NULL,
        [GroupName] VARCHAR(100) NOT NULL,
        [ModifiedDate] [datetime] NOT NULL,
        CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ([DepartmentID] ASC)
      ) ON [PRIMARY];
      INSERT INTO Department
      SELECT * FROM AdventureWorks2012.HumanResources.Department;

    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build it any way you prefer.

    Setup Step 2: Install 64-bit Java

    Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer, because the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OS X, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that the path in your environment settings is set to your JAVA_HOME directory, or else the tool will not work. Here is how you can do it: go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME, Variable Value: C:\Program Files\Java\jre7. Make sure you enter your own Java installation directory in the Variable Value field.

    Setup Step 3: Install a JDBC driver for SQL Server
    There are two JDBC drivers available for SQL Server. Select the one you prefer by following one of the two links below: Microsoft JDBC Driver, or jTDS JDBC Driver. In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.

    Migration Step 1: NuoDB Schema Generation

    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test'. Additionally, my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'.

      nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command will generate a schema of all my SQL Server tables and put it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance; you can follow the link here to see how to execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

    Migration Step 2: Generate the Dump File of the Data

    Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file which will contain all the data from all the tables of the SQL Server database. The command to do so is very similar to the previous command. Be aware that this step may take a while, depending on your database size.

      nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command has executed successfully, you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step will complete the migration process.

    Migration Step 3: Load the Data into NuoDB

    After building the schema and taking a dump of the data, the final step takes the CSV file and loads it into the NuoDB database.

      nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name "MyTest". If the database does not exist, create it manually before executing the above command. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.

    Voila! You're Done

    That's it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and build excellent, scale-out applications.
    In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.

    Download NuoDB: I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain.

      http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
      http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Unidata and RDB migrations to Oracle

    - by llaszews
    Here are a couple of unique migrations that don't come along too often: Unidata and RDB migrations. The top three things that make these migrations more challenging are:
    1. No automated data migration tools - Because these migrations don't happen that often, there are no tools in the marketplace to automate the data migration.
    2. The application is tied to the database - The application needs to be re-architected/re-engineered: Unidata Basic, and COBOL for RDB. TSRI can migrate Basic to Java and PL/SQL. Transoft can migrate DEC COBOL to Java.
    3. New client hardware is potentially involved - Many Unidata and RDB based systems use 'green screens' as the front end. These are character-based screens that run on very old dumb terminals such as Wyse and DEC 5250 terminals. The user interface can be replicated in a web browser, but many times these old terminals do not support web browsers.

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code-first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes you make to your database.

    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here.

    Adding EF 4.3.1 to our existing application

    Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application you should type:

      Install-Package EntityFramework

    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the version you have of EntityFramework, and the <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image.

    At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

    Getting a migration-history table into an existing database

    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:

      using System.Data.Entity.Migrations; // add this at the top of your .cs file

      // Make sure to use the name of your existing DbContext
      public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
      {
          public Configuration()
          {
              // Set automatic migrations to false, since we'll be applying the migrations manually in this case.
              this.AutomaticMigrationsEnabled = false;
              SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
          }
      }

    This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this, we can build our application to check that everything is fine.

    Creating our initial migration

    Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate"; you can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made to our application: a new Migrations folder is created, along with a new migration class called InitialCreate, which in most cases should have empty Up and Down methods, as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I find this easier when you don't have any pending updates to make to your database.
    Now we have our empty migration that will make no changes in our database and represents how everything stands at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to delete the EdmMetadata table, which is not needed anymore:

      public override void Up()
      {
          // Just make sure that you used EF 4.1 or a later version
          Sql("DROP TABLE EdmMetadata");
      }

    From our Package Manager Console, let's type:

      Update-Database

    If you'd like to see the operations made on each Update-Database command, you can use the -Verbose flag after Update-Database. This will make two important changes: it will execute the Up method in the initial migration, which makes no changes in the database; and second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database, it will compare the current model to the one stored in the Model column of this table.

    Conclusion

    The important point of this walkthrough is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and please check our forums here, where we keep answering questions for the community. Happy MySQL/Net coding!
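
    From this point on, each change to the model gets its own migration, which Update-Database applies and records in __MigrationHistory. As an illustrative sketch only (the entity and column names below are hypothetical, not part of this walkthrough), a later add-migration might scaffold something like:

      using System.Data.Entity.Migrations;

      public partial class AddCustomerMiddleName : DbMigration
      {
          public override void Up()
          {
              // Adds a nullable string column; rendered as MySQL DDL by MySqlMigrationSqlGenerator
              AddColumn("Customers", "MiddleName", c => c.String(maxLength: 45));
          }

          public override void Down()
          {
              // Reverses the change so the migration can be rolled back
              DropColumn("Customers", "MiddleName");
          }
      }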

    Read the article

  • Apache is sending php files to my browser instead of parsing

    - by justen doherty
    I have to set up PHP on an existing web host. I have made a virtual host entry, but for some reason Apache is sending the PHP to the browser instead of parsing it. From googling around it looks like a problem with the MIME types, but I'm not an Apache expert by any means, so if anyone could help it would be appreciated. I have the following in my httpd.conf:

      AddHandler php5-script php
      DirectoryIndex index.html index.phtml index.php index.phps
      AddType application/x-httpd-php .phtml
      AddType application/x-httpd-php .php
      AddType application/x-httpd-php-source .phps

    The PHP module is loaded into Apache (/usr/sbin/apachectl -M): Loaded Modules: core_module (static) mpm_prefork_module (static) http_module (static) so_module (static) auth_basic_module (shared) auth_digest_module (shared) authn_file_module (shared) authn_alias_module (shared) authn_anon_module (shared) authn_dbm_module (shared) authn_default_module (shared) authz_host_module (shared) authz_user_module (shared) authz_owner_module (shared) authz_groupfile_module (shared) authz_dbm_module (shared) authz_default_module (shared) ldap_module (shared) authnz_ldap_module (shared) include_module (shared) log_config_module (shared) logio_module (shared) env_module (shared) ext_filter_module (shared) mime_magic_module (shared) expires_module (shared) deflate_module (shared) headers_module (shared) usertrack_module (shared) setenvif_module (shared) mime_module (shared) dav_module (shared) status_module (shared) autoindex_module (shared) info_module (shared) dav_fs_module (shared) vhost_alias_module (shared) negotiation_module (shared) dir_module (shared) actions_module (shared) speling_module (shared) userdir_module (shared) alias_module (shared) rewrite_module (shared) proxy_module (shared) proxy_balancer_module (shared) proxy_ftp_module (shared) proxy_http_module (shared) proxy_connect_module (shared) cache_module (shared) suexec_module (shared) disk_cache_module (shared) file_cache_module (shared) mem_cache_module (shared) cgi_module (shared) version_module (shared) fcgid_module (shared) perl_module (shared) php5_module (shared) proxy_ajp_module (shared) ssl_module (shared)

    And this is my virtual host entry:

      <VirtualHost 10.16.140.113:80>
        ServerName viridor-cms.co.uk
        ServerAlias www.viridor-cms.co.uk
        UseCanonicalName Off
        DocumentRoot /var/www/vhosts/viridor-cms.co.uk/httpdocs
        CustomLog /var/www/vhosts/viridor-cms.co.uk/cms-access_log common
        ErrorLog /var/www/vhosts/viridor-cms.co.uk/cms-error_log
        DirectoryIndex index.php index.html
        <IfModule sapi_apache2.c>
          php_admin_flag engine on
          php_admin_flag safe_mode on
        </IfModule>
        <IfModule mod_php5.c>
          php_admin_flag engine on
          php_admin_flag safe_mode on
        </IfModule>
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps
      </VirtualHost>

    Please help, my head is so sore from banging it against the table and the wall!
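
    Worth checking first (an observation, not a confirmed diagnosis): AddHandler normally maps a handler to a file extension written with a leading dot, i.e. AddHandler php5-script .php rather than php. A one-line PHP test file makes it easy to see whether PHP is being parsed at all:

      <?php phpinfo(); // if the browser shows this source instead of the PHP info tables, the handler/type mapping is not being applied to .php files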

    Read the article

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php, being served from this directory: public_html/cms/index.php. In public_html I have this .htaccess:

      RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php. But now, if the 'old' address is requested, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 is redirected to http://www.mysite.com/?q=node/1. How can I make this happen?

    EDIT: Also, in the .htaccess file supplied with Drupal (the CMS), this is written. I've tried enabling it, but it doesn't seem to have any effect.

      # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
      # VirtualDocumentRoot and the rewrite rules are not working properly.
      # For example if your site is at http://example.com/drupal uncomment and
      # modify the following line:
      # RewriteBase /drupal

    EDIT: Including more of my .htaccess file - seems relevant.

      # Block access to "hidden" directories whose names begin with a period.
      RewriteRule "(^|/)\." - [F]

      # Strip cms folder from url
      RewriteRule (.*) cms/$1
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !=/favicon.ico
      RewriteRule ^ index.php [L]

      # Rules to correctly serve gzip compressed CSS and JS files.
      # Requires both mod_rewrite and mod_headers to be enabled.
      <IfModule mod_headers.c>
        # Serve gzip compressed CSS files if they exist and the client accepts gzip.
        RewriteCond %{HTTP:Accept-encoding} gzip
        RewriteCond %{REQUEST_FILENAME}\.gz -s
        RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

        # Serve gzip compressed JS files if they exist and the client accepts gzip.
        RewriteCond %{HTTP:Accept-encoding} gzip
        RewriteCond %{REQUEST_FILENAME}\.gz -s
        RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

        # Serve correct content types, and prevent mod_deflate double gzip.
        RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
        RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

        <FilesMatch "(\.js\.gz|\.css\.gz)$">
          # Serve correct encoding type.
          Header append Content-Encoding gzip
          # Force proxies to cache gzipped & non-gzipped css/js files separately.
          Header append Vary Accept-Encoding
        </FilesMatch>
      </IfModule>
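
    A hedged sketch of one common approach (untested against this exact rule set): match the URL the client actually requested via THE_REQUEST, so the redirect fires only for external /cms/ requests and not for the internal cms/ rewrite, and place it above the strip rule:

      # Externally requested /cms/... URLs get a permanent redirect to the bare URL;
      # the query string (?q=node/1) is carried over automatically.
      RewriteCond %{THE_REQUEST} ^[A-Z]+\ /cms/
      RewriteRule ^cms/(.*)$ /$1 [L,R=301]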

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That's right, it's true: you can use the free version of SharePoint 2010 to meet your document and content management needs, and even run your public-facing website or an internal knowledge bank. SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license, but it still has enough to cater to your needs to build a document management system and replace age-old file shares or folders. I've built a dozen content management sites for internal and public use exploiting SharePoint.

    There are hundreds of web content management systems out there (see CMS Matrix). On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron, which are the most frequently used, and on the other hand there are free options like WordPress, Drupal, Joomla, and Plone, which are pretty popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. In fact, not a lot of people consider SharePoint's free version on the free CMS side, but it's high time organizations benefited from this. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform.

    Even if you knew that there is a free version of SharePoint, what most people don't realize is that SharePoint Foundation is a great option for running web sites of all kinds, not just team sites. It is a great option for many reasons: in reality it is supported by Microsoft, above all it is FREE (yay!), and it is extremely easy to get started. From a functionality perspective it's hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross-browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, a health analyzer, support for presence, and MUCH more.

    I often get asked: "Can I use SharePoint 2010 as a document management system?" The answer really depends on:
    - What are your specific requirements?
    - What systems do you currently have in place for managing documents?
    - And of course, how much money you have :)

    Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it; for others it has been used as a collaborative platform or, in many cases, an extended intranet. SharePoint 2010 has changed the game slightly, as the improvements that Microsoft has made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis; nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place, then it can be a relatively straightforward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information. This also benefits an organization with regard to how it manages its knowledge: if all of the information is in one source, then it is naturally easier to search and manage.

    Is SharePoint right for your organization?
    As just discussed, this can only be determined after defining your requirements and also planning a longer-term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value it would bring to your organization. The amount of data and documents that organizations are creating is increasing rapidly each year, so the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organization's management of its information life cycle. SharePoint is best used for the initial life of business documents, where they need to be referenced and accessed over time. It is often beneficial to archive them later to overcome storage and performance issues.

    FREE CMS - SharePoint, Really?

    In order to show some of the completeness of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since everyone trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components:

    Document Management: CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management: version history, exclusive check-out, security, publication, workflow, and so much more.

    Content Virtualization: CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll back the content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint. The idea of each user having an entire copy of the website virtualized is a bit odd to me; not sure why anyone would need that for anything but the simplest of websites.

    Automated Templates: Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of a site. Of course, the older brother version of SharePoint, SharePoint Server 2010, also introduces the concept of Page Layouts, which allows page-template-level customization and even switching the layout of an individual page using different page templates. I think many organizations really think they want this but rarely end up using this bit of functionality.

    Easy Edits: Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode. Notice the page editing toolbar and the multiple layout options... It's actually easier to use than Microsoft Word.

    Workflow Management: Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS.
    For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow? It's in there. In fact, the same workflow engine runs under SharePoint Foundation as under the other versions of SharePoint. The primary difference is that with SharePoint Foundation you need to configure the workflows yourself.

    Web Standards: Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in its fourth major iteration under Microsoft with the 2010 release. In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available.

    Scalable Expansion: Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install. Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it.

    Delegation & Security: Some CMS software allows various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read more @ http://msdn.microsoft.com/en-us/library/ee537811.aspx

    Content Syndication: CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. It may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it. With RSS syndication and email alerts available out of the box, content syndication is already in the platform.

    Multilingual Support: The ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages.

    Read more @ http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspx. You can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970

    Read the article

  • High Linux loads on low CPU/memory usage

    - by user13323
    Hi. I have quite strange situation, where my CentOS 5.5 box loads are high, but the CPU and memory used are pretty low: top - 20:41:38 up 42 days, 6:14, 2 users, load average: 19.79, 21.25, 18.87 Tasks: 254 total, 1 running, 253 sleeping, 0 stopped, 0 zombie Cpu(s): 3.8%us, 0.3%sy, 0.1%ni, 95.0%id, 0.6%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 4035284k total, 4008084k used, 27200k free, 38748k buffers Swap: 4208928k total, 242576k used, 3966352k free, 1465008k cached free -mt total used free shared buffers cached Mem: 3940 3910 29 0 37 1427 -/+ buffers/cache: 2445 1495 Swap: 4110 236 3873 Total: 8050 4147 3903 Iostat also shows good results: avg-cpu: %user %nice %system %iowait %steal %idle 3.83 0.13 0.41 0.58 0.00 95.05 Here is the ps aux output: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 10348 80 ? Ss 2010 2:11 init [3] root 2 0.0 0.0 0 0 ? S< 2010 0:00 [migration/0] root 3 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/0] root 4 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/0] root 5 0.0 0.0 0 0 ? S< 2010 0:02 [migration/1] root 6 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/1] root 7 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/1] root 8 0.0 0.0 0 0 ? S< 2010 0:02 [migration/2] root 9 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/2] root 10 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/2] root 11 0.0 0.0 0 0 ? S< 2010 0:02 [migration/3] root 12 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/3] root 13 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/3] root 14 0.0 0.0 0 0 ? S< 2010 0:03 [migration/4] root 15 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/4] root 16 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/4] root 17 0.0 0.0 0 0 ? S< 2010 0:01 [migration/5] root 18 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/5] root 19 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/5] root 20 0.0 0.0 0 0 ? S< 2010 0:11 [migration/6] root 21 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/6] root 22 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/6] root 23 0.0 0.0 0 0 ? S< 2010 0:01 [migration/7] root 24 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/7] root 25 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/7] root 26 0.0 0.0 0 0 ? S< 2010 0:00 [migration/8] root 27 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/8] root 28 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/8] root 29 0.0 0.0 0 0 ? S< 2010 0:00 [migration/9] root 30 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/9] root 31 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/9] root 32 0.0 0.0 0 0 ? S< 2010 0:08 [migration/10] root 33 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/10] root 34 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/10] root 35 0.0 0.0 0 0 ? S< 2010 0:05 [migration/11] root 36 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/11] root 37 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/11] root 38 0.0 0.0 0 0 ? S< 2010 0:02 [migration/12] root 39 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/12] root 40 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/12] root 41 0.0 0.0 0 0 ? S< 2010 0:14 [migration/13] root 42 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/13] root 43 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/13] root 44 0.0 0.0 0 0 ? S< 2010 0:04 [migration/14] root 45 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/14] root 46 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/14] root 47 0.0 0.0 0 0 ? S< 2010 0:01 [migration/15] root 48 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/15] root 49 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/15] root 50 0.0 0.0 0 0 ? S< 2010 0:00 [events/0] root 51 0.0 0.0 0 0 ? S< 2010 0:00 [events/1] root 52 0.0 0.0 0 0 ? S< 2010 0:00 [events/2] root 53 0.0 0.0 0 0 ? S< 2010 0:00 [events/3] root 54 0.0 0.0 0 0 ? S< 2010 0:00 [events/4] root 55 0.0 0.0 0 0 ? S< 2010 0:00 [events/5] root 56 0.0 0.0 0 0 ? S< 2010 0:00 [events/6] root 57 0.0 0.0 0 0 ? 
S< 2010 0:00 [events/7] root 58 0.0 0.0 0 0 ? S< 2010 0:00 [events/8] root 59 0.0 0.0 0 0 ? S< 2010 0:00 [events/9] root 60 0.0 0.0 0 0 ? S< 2010 0:00 [events/10] root 61 0.0 0.0 0 0 ? S< 2010 0:00 [events/11] root 62 0.0 0.0 0 0 ? S< 2010 0:00 [events/12] root 63 0.0 0.0 0 0 ? S< 2010 0:00 [events/13] root 64 0.0 0.0 0 0 ? S< 2010 0:00 [events/14] root 65 0.0 0.0 0 0 ? S< 2010 0:00 [events/15] root 66 0.0 0.0 0 0 ? S< 2010 0:00 [khelper] root 107 0.0 0.0 0 0 ? S< 2010 0:00 [kthread] root 126 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/0] root 127 0.0 0.0 0 0 ? S< 2010 0:03 [kblockd/1] root 128 0.0 0.0 0 0 ? S< 2010 0:01 [kblockd/2] root 129 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/3] root 130 0.0 0.0 0 0 ? S< 2010 0:05 [kblockd/4] root 131 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/5] root 132 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/6] root 133 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/7] root 134 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/8] root 135 0.0 0.0 0 0 ? S< 2010 0:02 [kblockd/9] root 136 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/10] root 137 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/11] root 138 0.0 0.0 0 0 ? S< 2010 0:04 [kblockd/12] root 139 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/13] root 140 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/14] root 141 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/15] root 142 0.0 0.0 0 0 ? S< 2010 0:00 [kacpid] root 281 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/0] root 282 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/1] root 283 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/2] root 284 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/3] root 285 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/4] root 286 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/5] root 287 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/6] root 288 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/7] root 289 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/8] root 290 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/9] root 291 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/10] root 292 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/11] root 293 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/12] root 294 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/13] root 295 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/14] root 296 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/15] root 299 0.0 0.0 0 0 ? S< 2010 0:00 [khubd] root 301 0.0 0.0 0 0 ? S< 2010 0:00 [kseriod] root 490 0.0 0.0 0 0 ? S 2010 0:00 [khungtaskd] root 493 0.1 0.0 0 0 ? S< 2010 94:48 [kswapd1] root 494 0.0 0.0 0 0 ? S< 2010 0:00 [aio/0] root 495 0.0 0.0 0 0 ? S< 2010 0:00 [aio/1] root 496 0.0 0.0 0 0 ? S< 2010 0:00 [aio/2] root 497 0.0 0.0 0 0 ? S< 2010 0:00 [aio/3] root 498 0.0 0.0 0 0 ? S< 2010 0:00 [aio/4] root 499 0.0 0.0 0 0 ? S< 2010 0:00 [aio/5] root 500 0.0 0.0 0 0 ? S< 2010 0:00 [aio/6] root 501 0.0 0.0 0 0 ? S< 2010 0:00 [aio/7] root 502 0.0 0.0 0 0 ? S< 2010 0:00 [aio/8] root 503 0.0 0.0 0 0 ? S< 2010 0:00 [aio/9] root 504 0.0 0.0 0 0 ? S< 2010 0:00 [aio/10] root 505 0.0 0.0 0 0 ? S< 2010 0:00 [aio/11] root 506 0.0 0.0 0 0 ? S< 2010 0:00 [aio/12] root 507 0.0 0.0 0 0 ? S< 2010 0:00 [aio/13] root 508 0.0 0.0 0 0 ? S< 2010 0:00 [aio/14] root 509 0.0 0.0 0 0 ? S< 2010 0:00 [aio/15] root 665 0.0 0.0 0 0 ? S< 2010 0:00 [kpsmoused] root 808 0.0 0.0 0 0 ? S< 2010 0:00 [ata/0] root 809 0.0 0.0 0 0 ? S< 2010 0:00 [ata/1] root 810 0.0 0.0 0 0 ? S< 2010 0:00 [ata/2] root 811 0.0 0.0 0 0 ? S< 2010 0:00 [ata/3] root 812 0.0 0.0 0 0 ? S< 2010 0:00 [ata/4] root 813 0.0 0.0 0 0 ? S< 2010 0:00 [ata/5] root 814 0.0 0.0 0 0 ? S< 2010 0:00 [ata/6] root 815 0.0 0.0 0 0 ? S< 2010 0:00 [ata/7] root 816 0.0 0.0 0 0 ? S< 2010 0:00 [ata/8] root 817 0.0 0.0 0 0 ? S< 2010 0:00 [ata/9] root 818 0.0 0.0 0 0 ? S< 2010 0:00 [ata/10] root 819 0.0 0.0 0 0 ? 
S< 2010 0:00 [ata/11] root 820 0.0 0.0 0 0 ? S< 2010 0:00 [ata/12] root 821 0.0 0.0 0 0 ? S< 2010 0:00 [ata/13] root 822 0.0 0.0 0 0 ? S< 2010 0:00 [ata/14] root 823 0.0 0.0 0 0 ? S< 2010 0:00 [ata/15] root 824 0.0 0.0 0 0 ? S< 2010 0:00 [ata_aux] root 842 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_0] root 843 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_1] root 844 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_2] root 845 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_3] root 846 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_4] root 847 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_5] root 882 0.0 0.0 0 0 ? S< 2010 0:00 [kstriped] root 951 0.0 0.0 0 0 ? S< 2010 4:24 [kjournald] root 976 0.0 0.0 0 0 ? S< 2010 0:00 [kauditd] postfix 990 0.0 0.0 54208 2284 ? S 21:19 0:00 pickup -l -t fifo -u root 1013 0.0 0.0 12676 8 ? S<s 2010 0:00 /sbin/udevd -d root 1326 0.0 0.0 90900 3400 ? Ss 14:53 0:00 sshd: root@notty root 1410 0.0 0.0 53972 2108 ? Ss 14:53 0:00 /usr/libexec/openssh/sftp-server root 2690 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/0] root 2691 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/1] root 2692 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/2] root 2693 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/3] root 2694 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/4] root 2695 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/5] root 2696 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/6] root 2697 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/7] root 2698 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/8] root 2699 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/9] root 2700 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/10] root 2701 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/11] root 2702 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/12] root 2703 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/13] root 2704 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/14] root 2705 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/15] root 2706 0.0 0.0 0 0 ? S< 2010 0:00 [kmpath_handlerd] root 2755 0.0 0.0 0 0 ? S< 2010 4:35 [kjournald] root 2757 0.0 0.0 0 0 ? S< 2010 3:38 [kjournald] root 2759 0.0 0.0 0 0 ? S< 2010 4:10 [kjournald] root 2761 0.0 0.0 0 0 ? S< 2010 4:26 [kjournald] root 2763 0.0 0.0 0 0 ? S< 2010 3:15 [kjournald] root 2765 0.0 0.0 0 0 ? S< 2010 3:04 [kjournald] root 2767 0.0 0.0 0 0 ? S< 2010 3:02 [kjournald] root 2769 0.0 0.0 0 0 ? S< 2010 2:58 [kjournald] root 2771 0.0 0.0 0 0 ? S< 2010 0:00 [kjournald] root 3340 0.0 0.0 5908 356 ? Ss 2010 2:48 syslogd -m 0 root 3343 0.0 0.0 3804 212 ? Ss 2010 0:03 klogd -x root 3430 0.0 0.0 0 0 ? S< 2010 0:50 [kondemand/0] root 3431 0.0 0.0 0 0 ? S< 2010 0:54 [kondemand/1] root 3432 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/2] root 3433 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/3] root 3434 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/4] root 3435 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/5] root 3436 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/6] root 3437 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/7] root 3438 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/8] root 3439 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/9] root 3440 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/10] root 3441 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/11] root 3442 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/12] root 3443 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/13] root 3444 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/14] root 3445 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/15] root 3461 0.0 0.0 10760 284 ? Ss 2010 3:44 irqbalance rpc 3481 0.0 0.0 8052 4 ? Ss 2010 0:00 portmap root 3526 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/0] root 3527 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/1] root 3528 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/2] root 3529 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/3] root 3530 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/4] root 3531 0.0 0.0 0 0 ? 
S< 2010 0:00 [rpciod/5] root 3532 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/6] root 3533 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/7] root 3534 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/8] root 3535 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/9] root 3536 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/10] root 3537 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/11] root 3538 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/12] root 3539 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/13] root 3540 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/14] root 3541 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/15] root 3563 0.0 0.0 10160 8 ? Ss 2010 0:00 rpc.statd root 3595 0.0 0.0 55180 4 ? Ss 2010 0:00 rpc.idmapd dbus 3618 0.0 0.0 21256 28 ? Ss 2010 0:00 dbus-daemon --system root 3649 0.2 0.4 563084 18796 ? S<sl 2010 179:03 mfsmount /mnt/mfs -o rw,mfsmaster=web1.ovs.local root 3702 0.0 0.0 3800 8 ? Ss 2010 0:00 /usr/sbin/acpid 68 3715 0.0 0.0 31312 816 ? Ss 2010 3:14 hald root 3716 0.0 0.0 21692 28 ? S 2010 0:00 hald-runner 68 3726 0.0 0.0 12324 8 ? S 2010 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 68 3730 0.0 0.0 12324 8 ? S 2010 0:00 hald-addon-keyboard: listening on /dev/input/event0 root 3773 0.0 0.0 62608 332 ? Ss 2010 0:00 /usr/sbin/sshd ganglia 3786 0.0 0.0 24704 988 ? Ss 2010 14:26 /usr/sbin/gmond root 3843 0.0 0.0 54144 300 ? Ss 2010 1:49 /usr/libexec/postfix/master postfix 3855 0.0 0.0 54860 1060 ? S 2010 0:22 qmgr -l -t fifo -u root 3877 0.0 0.0 74828 708 ? Ss 2010 1:15 crond root 3891 1.4 1.9 326960 77704 ? S<l 2010 896:59 mfschunkserver root 4122 0.0 0.0 18732 176 ? Ss 2010 0:10 /usr/sbin/atd root 4193 0.0 0.8 129180 35984 ? Ssl 2010 11:04 /usr/bin/ruby /usr/sbin/puppetd root 4223 0.0 0.0 18416 172 ? S 2010 0:10 /usr/sbin/smartd -q never root 4227 0.0 0.0 3792 8 tty1 Ss+ 2010 0:00 /sbin/mingetty tty1 root 4230 0.0 0.0 3792 8 tty2 Ss+ 2010 0:00 /sbin/mingetty tty2 root 4231 0.0 0.0 3792 8 tty3 Ss+ 2010 0:00 /sbin/mingetty tty3 root 4233 0.0 0.0 3792 8 tty4 Ss+ 2010 0:00 /sbin/mingetty tty4 root 4234 0.0 0.0 3792 8 tty5 Ss+ 2010 0:00 /sbin/mingetty tty5 root 4236 0.0 0.0 3792 8 tty6 Ss+ 2010 0:00 /sbin/mingetty tty6 root 5596 0.0 0.0 19368 20 ? Ss 2010 0:00 DarwinStreamingServer qtss 5597 0.8 0.9 166572 37408 ? Sl 2010 523:02 DarwinStreamingServer root 8714 0.0 0.0 0 0 ? S Jan31 0:33 [pdflush] root 9914 0.0 0.0 65612 968 pts/1 R+ 21:49 0:00 ps aux root 10765 0.0 0.0 76792 1080 ? Ss Jan24 0:58 SCREEN root 10766 0.0 0.0 66212 872 pts/3 Ss Jan24 0:00 /bin/bash root 11833 0.0 0.0 63852 1060 pts/3 S+ 17:17 0:00 /bin/sh ./launch.sh root 11834 437 42.9 4126884 1733348 pts/3 Sl+ 17:17 1190:50 /usr/bin/java -Xms128m -Xmx512m -XX:+UseConcMarkSweepGC -jar /JavaCore/JavaCore.jar root 13127 4.7 1.1 110564 46876 ? Ssl 17:18 12:55 /JavaCore/fetcher.bin root 19392 0.0 0.0 90108 3336 ? Rs 20:35 0:00 sshd: root@pts/1 root 19401 0.0 0.0 66216 1640 pts/1 Ss 20:35 0:00 -bash root 20567 0.0 0.0 90108 412 ? Ss Jan16 1:58 sshd: root@pts/0 root 20569 0.0 0.0 66084 912 pts/0 Ss Jan16 0:00 -bash root 21053 0.0 0.0 63856 28 ? S Jan30 0:00 /bin/sh /usr/bin/WowzaMediaServerd /usr/local/WowzaMediaServer/bin/setenv.sh /var/run/WowzaM root 21054 2.9 10.3 2252652 418468 ? Sl Jan30 314:25 java -Xmx1200M -server -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true - root 21915 0.0 0.0 0 0 ? S Feb01 0:00 [pdflush] root 29996 0.0 0.0 76524 1004 pts/0 S+ 14:41 0:00 screen -x Any idea what could this be, or where I should look for more diagnostic information? Thanks.
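
    A high load average with a nearly idle CPU usually means tasks stuck in uninterruptible sleep (state D), typically waiting on disk or network I/O; given the mfsmount/mfschunkserver processes above, a stalled network filesystem would be a prime suspect. A hedged one-liner to check (each D-state task contributes 1 to the load average):

      ps -eo state,pid,cmd | awk '$1 == "D"'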

    Read the article

  • Apache conf for high traffic CMS with backend users?

    - by Annan
    I'm in a situation where a website is going to have a high number of web users and a few backend webmasters. The webmasters will upload images (plus other high-memory tasks), and this bumps the memory allocation of the httpd child processes up to 100-150 MB. In order to stop swapping, I'm currently setting MaxClients in httpd.conf to 20. However, this lowers the maximum number of simultaneous requests. Will this be a problem when the website goes live? What is the best configuration? Info: Drupal 6, PHP 5, Apache 2.2 (prefork at the moment). I'm thinking about the worker MPM, two Apache instances, or a low MaxRequestsPerChild.
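
    For reference, a minimal prefork sketch of the kind of tuning being described (illustrative values, not a recommendation for this specific site):

      <IfModule prefork.c>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          MaxClients           20
          # Recycle children periodically so memory ballooned by large uploads is returned to the OS
          MaxRequestsPerChild 500
      </IfModule>

    A common alternative is exactly the two-instance idea: a slim instance (or reverse proxy) for static files and anonymous traffic, with the heavy mod_php instance reserved for the few backend users.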

    Read the article

  • What is "rfcTextOfMessage" value? : Google Apps Email Migration API Developer's Guide

    - by Pari
    I am using the Google API to test the code below:

      MailItemService mailItemService = new MailItemService(domain, "Sample Migration Application");
      mailItemService.setUserCredentials(userEmail, password);

      MailItemEntry entry = new MailItemEntry();
      entry.Rfc822Msg = new Rfc822MsgElement(rfcTextOfMessage);

    Referring to this link, I used the sample value given for "rfcTextOfMessage". But how do I change the To, Sender and Date values for different mails? Is there any way to generate this format? Note: I am using C#.
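
    For what it's worth, rfcTextOfMessage is just the raw RFC 822 text of the message: header lines, a blank line, then the body, so the To/From/Date fields are plain strings you can build per message. A minimal sketch (addresses, subject, and date are placeholder values):

      string rfcTextOfMessage =
          "From: sender@example.com\r\n" +
          "To: recipient@example.com\r\n" +
          "Subject: Sample migrated message\r\n" +
          "Date: Mon, 15 Mar 2010 10:30:00 -0000\r\n" + // RFC 822 date format
          "\r\n" +                                      // blank line separates headers from body
          "This is the message body.";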

    Read the article

  • How do I use a Rails ActiveRecord migration to insert a primary key into a MySQL database?

    - by Terry Lorber
    I need to create an AR migration for a table of image files. The images are being checked into the source tree and should act like attachment_fu files; that being the case, I'm creating a hierarchy for them under /public/system. Because of the way attachment_fu generates links, I need to use the directory naming convention to insert primary key values. How do I override the auto-increment in MySQL, as well as any Rails magic, so that I can do something like this:

      image = Image.create(:id => 42, :filename => "foo.jpg")
      image.id #=> 42
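
    A hedged sketch of the usual workaround: ActiveRecord strips :id from mass assignment, but assigning the id directly is allowed, and MySQL only auto-increments when no explicit value is supplied:

      # In the migration (or a seed script): set the primary key explicitly
      image = Image.new(:filename => "foo.jpg")
      image.id = 42   # direct assignment bypasses mass-assignment protection
      image.save!
      image.id        #=> 42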

    Read the article

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and to set the ownership group to the correct Apache user (using the PuppetLabs Apache module). The output from Puppet says yes:

      [default] Running provisioner: Vagrant::Provisioners::Puppet...
      [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp...
      stdin: is not a tty
      No LSB modules are available.
      warning: require is a metaparam; this value will inherit to all contained resources
      warning: notify is a metaparam; this value will inherit to all contained resources
      notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777'
      notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present'
      notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777'
      notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777'
      notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777'
      notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
      notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data'
      notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777'
      notice: Finished catalog run in 2.29 seconds

    The output from ls -lah says no:

      $ ls -lah /vagrant/src/
      total 36K
      drwxr-xr-x 1 vagrant vagrant  510 2012-07-03 00:11 .
      drwxr-xr-x 1 vagrant vagrant  340 2012-07-03 08:08 ..
      drwxr-xr-x 1 vagrant vagrant  136 2012-07-03 00:11 addons
      drwxr-xr-x 1 vagrant vagrant  102 2012-07-03 00:11 assets
      drwxr-xr-x 1 vagrant vagrant  510 2012-07-03 07:45 .git
      -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore
      -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess
      -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php
      drwxr-xr-x 1 vagrant vagrant  442 2012-07-03 00:11 installer
      -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE
      -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml
      -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md
      drwxr-xr-x 1 vagrant vagrant  204 2012-07-03 00:11 system
      -rw-r--r-- 1 vagrant vagrant   42 2012-07-03 00:11 .travis.yml
      drwxr-xr-x 1 vagrant vagrant  102 2012-07-03 00:11 uploads

    What's up with that? My entire config can be found here.
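
    For reference, the file resources Puppet is applying presumably look something like this (a sketch; the real manifest is in the linked config):

      file { '/vagrant/src/uploads':
        ensure => directory,
        owner  => 'www-data',
        group  => 'www-data',
        mode   => '0777',
      }

    One hedged explanation worth checking: /vagrant is a VirtualBox shared folder, and ownership/permissions on vboxsf mounts are fixed by the mount options, so chown/chmod calls appear to succeed (hence Puppet's notices) without actually persisting, which matches the symptom shown.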

    Read the article

  • How do I use .htaccess RewriteRule to change underscores to dashes

    - by soopadoubled
    I'm working on a site whose CMS used to save new page URLs using the underscore character as a word separator. Despite the fact that Google now treats an underscore as a word separator, the SEO powers that be are demanding the site use dashes instead. This is very easy to do within the CMS, and I can of course change all existing URLs saved in the MySQL database that serves the CMS. My problem lies in writing a .htaccess rule that will 301 old-style underscore-separated links to the new-style hyphenated version. I had success using the answers to this Stack Overflow question on other sites, using:

      RewriteRule ^([^_]*)_([^_]*_.*) $1-$2 [N]
      RewriteRule ^([^_]*)_([^_]*)$ /$1-$2 [L,R=301]

    However, this CMS site uses a lot of existing rules to produce clean URLs, and I can't get this working in conjunction with the existing rule set. The .htaccess currently looks like this:

      Options FollowSymLinks
      # RewriteOptions MaxRedirects=50
      RewriteEngine On
      RewriteBase /
      RewriteCond %{HTTP_HOST} !^www\.mydomain\.co\.uk$ [NC]
      RewriteRule (.*) http://www.mydomain.co.uk/$1 [R=301,L]

      # trailing slash enforcement
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_URI} !#
      RewriteCond %{REQUEST_URI} !(.*)/$
      RewriteRule ^(.*)$ http://www.mydomain.co.uk/$1/ [L,R=301]

      RewriteRule ^test/([0-9]+)(/)?$ test_htaccess.php?year=$1 [nc]
      RewriteRule ^index(/)?$ index.php
      RewriteRule ^department/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=ecom.details&mode=$1&$2=$3 [nc]
      RewriteRule ^department/([^/]*)(/)?$ ecom/index.php?action=ecom.details&mode=$1 [nc]
      RewriteRule ^product/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=ecom.pdetails&mode=$1&$2=$3 [nc]
      RewriteRule ^product/([^/]*)(/)?$ ecom/index.php?action=ecom.pdetails&mode=$1 [nc]
      RewriteRule ^content/([^/]*)(/)?$ ecom/index.php?action=ecom.cdetails&mode=$1 [nc]
      RewriteRule ([^/]*)/action/([^/]*)/([^/]*)/([^/]*)/([^/]*)(/)?$ $1/index.php?action=$2&mode=$3&$4=$5 [nc]
      RewriteRule ([^/]*)/action/([^/]*)/([^/]*)(/)?$ $1/index.php?action=$2&mode=$3 [nc]
      RewriteRule ([^/]*)/action/([^/]*)(/)?$ $1/index.php?action=$2 [nc]
      RewriteRule ^eaction/([^/]*)/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=$1&mode=$2&$3=$4 [nc]
      RewriteRule ^eaction/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=$1&mode=$2 [nc]
      RewriteRule ^action/([^/]*)/([^/]*)(/)?$ index.php?action=$1&mode=$2 [nc]
      RewriteRule ^sid/([^/]*)(/)?$ index.php?sid=$1 [nc]

      ## Error Handling ##
      # RewriteRule ^error/([^/]*)(/)?$ index.php?action=error&mode=$1 [nc]

      # ----------------------------------- Content Section ------------------------------ #
      # RewriteRule ^([^/]*)(/)?$ index.php?action=cms&mode=$1 [nc]
      RewriteRule ^accessibility(/)?$ index.php?action=cms&mode=accessibility
      RewriteRule ^terms(/)?$ index.php?action=cms&mode=conditions
      RewriteRule ^privacy(/)?$ index.php?action=cms&mode=privacy
      RewriteRule ^memberpoints(/)?$ index.php?action=cms&mode=member_points
      RewriteRule ^contactus(/)?$ index.php?action=contactus
      RewriteRule ^sitemap(/)?$ index.php?action=sitemap

      ErrorDocument 404 /index.php?action=error&mode=content
      ExpiresDefault "access plus 3 days"

    All page URLs are in one of the following three formats:

      http://www.mydomain.com/department/some_page_address/
      http://www.mydomain.com/product/some_page_address/
      http://www.mydomain.com/content/some_page_address/

    I'm sure I am missing something obvious, but at this level my regex and mod_rewrite skills clearly aren't up to par. Any ideas would be greatly appreciated!
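
    A hedged sketch of one way to combine them (untested against this exact rule set): scope the underscore-collapsing rules to the three clean-URL prefixes and place them at the top, before the clean-URL rules. Each [N] pass converts one underscore; the last remaining underscore triggers the permanent redirect:

      # Collapse all but the last underscore, restarting rule processing each time
      RewriteRule ^((?:department|product|content)/[^_]*)_([^_]*_.*)$ $1-$2 [N]
      # Final underscore: issue a single 301 to the fully hyphenated URL
      RewriteRule ^((?:department|product|content)/[^_]*)_([^_]*)$ /$1-$2 [L,R=301]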

    Read the article

  • Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kinds of problems?

    To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem: in the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather have to create the user types in the migration (also considered bad practice). Here comes some fictional code representing the situation:

      class CreateUserTypes < ActiveRecord::Migration
        def self.up
          create_table :user_types do |t|
            t.string :name, :null => false, :unique => true
          end
          # Create basic types (can not put these in seeds, because of the future migration dependency)
          UserType.create!(:name => "BASIC")
          UserType.create!(:name => "MODERATOR")
          UserType.create!(:name => "ADMINISTRATOR")
        end

        def self.down
          drop_table :user_types
        end
      end

      class AddTypeIdToUsers < ActiveRecord::Migration
        def self.up
          add_column :users, :type_id, :integer
          # Determine type via the admin flag
          basic = UserType.find_by_name("BASIC")
          admin = UserType.find_by_name("ADMINISTRATOR")
          User.all.each { |u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id) }
          # Remove the admin flag
          remove_column :users, :admin
          # Add foreign key
          execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
        end

        def self.down
          # Re-add the admin flag
          add_column :users, :admin, :boolean, :default => false
          # Reset the admin flag (this is the problematic update code)
          admin = UserType.find_by_name("ADMINISTRATOR")
          execute "update users set admin=true where type_id=#{admin.id}"
          # Remove foreign key constraint
          execute "alter table users drop foreign key fk_user_type_id"
          # Drop the type_id column
          remove_column :users, :type_id
        end
      end

    As you can see, there are two problematic parts: first, the row creation in the first migration, which is necessary if I want to be able to run all migrations in a row; then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?

    Read the article

  • RedHat 5.5 server does not show per-process memory utilization

    - by Mike S
I have been searching all over the internet but am not finding any leads. I have a system with a memory leak that I am trying to troubleshoot. Unfortunately I am not able to see per-process memory utilization. Here are the outputs of the top and ps commands:

Linux SERVER_NAME 2.6.18-194.8.1.el5 #1 SMP Wed Jun 23 10:52:51 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

top - 09:17:13 up 18:43, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 375 total, 1 running, 373 sleeping, 0 stopped, 1 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32922828k total, 32776712k used, 146116k free, 267128k buffers
Swap: 5245212k total, 0k used, 5245212k free, 32141044k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 10348 744 620 S 0.0 0.0 0:05.65 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.05 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1
6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2
9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2
10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2
11 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/3
12 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/3
13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3
14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4
15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4
16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4
17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5
18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5
19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5
20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6

% ps -auxf | sort -nr -k 4 | head -10
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
xfs 6205 0.0 0.0 23316 3892 ? Ss Aug19 0:00 xfs -droppriv -daemon
uuidd 6101 0.0 0.0 60976 224 ? Ss Aug19 0:00 /usr/sbin/uuidd
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
smmsp 6130 0.0 0.0 57900 1784 ? Ss Aug19 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
rpc 5126 0.0 0.0 8052 632 ? Ss Aug19 0:00 portmap
root 99 0.0 0.0 0 0 ? S< Aug19 0:00 [events/1]
root 98 0.0 0.0 0 0 ? S< Aug19 0:00 [events/0]
root 97 0.0 0.0 0 0 ? S< Aug19 0:00 [watchdog/31]
root 96 0.0 0.0 0 0 ? SN Aug19 0:00 [ksoftirqd/31]
root 95 0.0 0.0 0 0 ? S< Aug19 0:00 [migration/31]

Any help with this is appreciated.
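A hedged aside that may help reproduce the per-process view (standard procps invocations, not specific to this box): ps can sort by a memory column itself, which also avoids the bogus-syntax warning caused by mixing BSD- and GNU-style options, e.g.

% ps aux --sort=-rss | head -10
% ps -eo pid,user,rss,vsz,pmem,comm --sort=-rss | head -10

and inside top, pressing Shift+M re-sorts the process list by %MEM.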

    Read the article

  • Entity Framework 4.3.1 Code-based Migrations and Connector/Net 6.6

    - by GABMARTINEZ
Code-based migrations are a new feature of the Connector/Net support for Entity Framework 4.3.1. In this tutorial we'll see how to use them to keep track of the changes made to our database while building a new application with the code-first approach. If you don't have a clear idea of how code first works, we highly recommend reading up on that subject before going further with this tutorial.

Creating our Model and Database with Code First

From VS 2010:

1. Create a new console application.

2. Add the latest official Entity Framework package using the Package Manager Console (Tools menu, then Library Package Manager -> Package Manager Console). In the Package Manager Console, type:

Install-Package EntityFramework

This will add the latest version of the library. We will also need to make some changes to your config file. A <configSections> element is added, which records the version of EntityFramework you have. An <entityFramework> section is also added, where you can set up some initialization. This section is optional and by default is generated to use SQL Express. Since we don't need it for now (we'll see more about it below), let's leave this section empty.

3. Create a new model with a simple entity.

4. Enable migrations to generate our Configuration class. In the Package Manager Console, type:

Enable-Migrations

This makes some changes to our application. It creates a new folder called Migrations, which will hold all the migrations representing changes made to our model. It also creates a Configuration class that we'll use to initialize our SQL generator and to set other values, such as whether automatic migrations are enabled. You can see that it already has the name of our DbContext. You can also create your Configuration class manually.

5. Specify our model provider. We need to specify in our Configuration class that we'll be using MySQLClient, since this is not part of the generated code. Also please make sure you have added the MySql.Data and MySql.Data.Entity references to your project.

using MySql.Data.Entity;   // Add the MySql.Data.Entity namespace

public Configuration()
{
    this.AutomaticMigrationsEnabled = false;
    SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL generator
}

6. Add our data provider and set up our connection string:

<connectionStrings>
  <add name="PersonelContext" connectionString="server=localhost;User Id=root;database=Personal;" providerName="MySql.Data.MySqlClient" />
</connectionStrings>
<system.data>
  <DbProviderFactories>
    <remove invariant="MySql.Data.MySqlClient" />
    <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
  </DbProviderFactories>
</system.data>

* The recommended version of Connector/Net is 6.6.2 or later. Note that the connection string name must match the DbContext class name (PersonelContext in the code shown later in this post).

At this point we can create our database and then start working with migrations, so let's do some data access to get the database created. Run your application and you'll get your database Personal, as specified in the config file.
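The post doesn't show the data-access code itself, so here is a minimal sketch of what that step might look like; the Person entity, its properties, and the PersonelContext name are assumptions taken from the code shown later in this post:

using (var context = new PersonelContext())
{
    // The first use of the context triggers creation of the Personal database
    context.Persons.Add(new Person { Name = "Gabriela", Address = "Somewhere 123" });
    context.SaveChanges();
}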
Add our first migration

Migrations are a great resource: we get a record of all the changes made, and they generate the MySQL statements required to apply those changes to the database. Let's add a new property to our Person class:

public string Email { get; set; }

If you try to run your application it will throw an exception saying:

The model backing the 'PersonelContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).

So, as suggested, let's add our first migration for this change. In the Package Manager Console, type:

Add-Migration AddEmailColumn

Now we have the corresponding class, which generates the operations necessary to update our database:

namespace MigrationsFromScratch.Migrations
{
    using System.Data.Entity.Migrations;

    public partial class AddEmailColumn : DbMigration
    {
        public override void Up()
        {
            AddColumn("People", "Email", c => c.String(unicode: false));
        }

        public override void Down()
        {
            DropColumn("People", "Email");
        }
    }
}

In the Package Manager Console, type:

Update-Database

Now you can check your database to see that all changes were successfully applied. Let's add a second change and generate our second migration:

public class Person
{
    [Key]
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string Email { get; set; }
    public List<Skill> Skills { get; set; }
}

public class Skill
{
    [Key]
    public int SkillId { get; set; }
    public string Description { get; set; }
}

public class PersonelContext : DbContext
{
    public DbSet<Person> Persons { get; set; }
    public DbSet<Skill> Skills { get; set; }
}

If you would like to customize any part of this code, you can do so at this step. You can see there is the Up method, which updates your database, and the Down method, which reverts the changes; if you customize any code, make sure to customize both methods. Now let's apply this change:

Update-Database -Verbose

I added the verbose flag so you can see all the generated SQL statements to be run.

Downgrading changes

So far we have always upgraded to the latest migration, but there may be times when you want to downgrade to a specific migration. Let's say we want to return to the state we were in before our last migration. We can use the -TargetMigration option to specify the migration we'd like to return to (the -Verbose flag works here too). If you'd like to go back to the initial state you can do:

Update-Database -TargetMigration:$InitialDatabase

or, equivalently:

Update-Database -TargetMigration:0

By default, Migrations will refuse to run a migration that would result in data loss. One case where you can hit this error is a DropColumn operation. You can override this behavior by setting AutomaticMigrationDataLossAllowed to true in the Configuration class. You can also set a database initializer so that migrations are applied automatically, without having to create a migration and update the database by hand each time. Let's see how.

Database initialization by code

We can specify an initialization strategy by using Database.SetInitializer (http://msdn.microsoft.com/en-us/library/gg679461(v=vs.103)). One strategy that I found very useful at the development stage (that is, not for production) is MigrateDatabaseToLatestVersion. This strategy performs all the necessary migrations each time there is a change in our model that needs to be replicated in the database; this also implies that we have to enable the AutomaticMigrationsEnabled flag in our Configuration class.
public Configuration()
{
    AutomaticMigrationsEnabled = true;
    AutomaticMigrationDataLossAllowed = true;
    SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL generator
}

In the new entityFramework section of your config file we can set this on a per-context basis. The syntax is as follows:

<contexts>
  <context type="Custom DbContext name, Assembly name">
    <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[Custom DbContext name, Assembly name], [Configuration class name, Assembly name]], EntityFramework" />
  </context>
</contexts>

In our example this would be the same pattern filled in with our context and configuration types (see the sketch below). The syntax is kind of odd but very convenient: this way all changes are always applied whenever we do any data access in our application. There are a lot of new things to explore in EF 4.3.1 and Migrations, so we'll continue writing more posts about it. Please let us know if you have any questions or comments, and please check our forums here, where we keep answering questions for the community. Hope you found this information useful. Happy MySQL/.Net coding!
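For reference, a sketch of what the context registration might look like for this example; the MigrationsFromScratch namespace and assembly names are assumptions taken from the migration code above:

<contexts>
  <context type="MigrationsFromScratch.PersonelContext, MigrationsFromScratch">
    <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[MigrationsFromScratch.PersonelContext, MigrationsFromScratch], [MigrationsFromScratch.Migrations.Configuration, MigrationsFromScratch]], EntityFramework" />
  </context>
</contexts>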

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
Preface

This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy.

Table of Contents
Exercise Z.1: ZFS Pools
Exercise Z.2: ZFS File Systems
Exercise Z.3: ZFS Compression
Exercise Z.4: ZFS Deduplication
Exercise Z.5: ZFS Encryption
Exercise Z.6: Solaris 11 Shadow Migration

Introduction

This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with.

Exercise Z.1: ZFS Pools

Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.

Lab: You will check the status of existing zpools, create your own pool and expand it.

Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:

root@solaris:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE -

root@solaris:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c3t0d0s0 ONLINE 0 0 0
errors: No known data errors

Note the disk device the root pool is on - c3t0d0s0.

Now you will create your own ZFS pool. First you will check what disks are available:

root@solaris:~# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0
1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0
2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0
3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0
4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0
5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0
6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0
Specify disk (enter its number): Specify disk (enter its number):

The root disk is numbered 0. The others are free for use.

Try creating a simple pool and observe the error message:

root@solaris:~# zpool create mypool c3t2d0 c3t3d0
'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool

So destroy that pool and create a mirrored pool instead:

root@solaris:~# zpool destroy mypool
root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
root@solaris:~# zpool status mypool
pool: mypool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
errors: No known data errors
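The lab description above promises that you will expand the pool, but the expansion step is not shown. A hedged sketch of what it could look like, using two of the still-unused virtual disks from the format listing (note that if you actually run this, the pool sizes printed in the later exercises will no longer match):

root@solaris:~# zpool add mypool mirror c3t5d0 c3t6d0
root@solaris:~# zpool status mypool

zpool add stripes the new mirror alongside the existing one, growing capacity while keeping redundancy.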
Exercise Z.2: ZFS File Systems

Task: You have to create file systems for later exercises.

You can see that when a pool is created, a file system of the same name is created:

root@solaris:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 86.5K 2.94G 31K /mypool

Create your filesystems and mountpoints as follows:

root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

The -o option sets the mount point and automatically creates the necessary directory.

root@solaris:~# zfs list mypool/mydata1
NAME USED AVAIL REFER MOUNTPOINT
mypool/mydata1 31K 2.94G 31K /data1

Exercise Z.3: ZFS Compression

Task: Try out the different forms of compression available in ZFS.

Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, and observe the results.

You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

compression=on | off | lzjb | gzip | gzip-N | zle
Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)).

Create a second filesystem with compression turned on. Note how you set and get your values separately:

root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
root@solaris:~# zfs get compression mypool/mydata1
NAME PROPERTY VALUE SOURCE
mypool/mydata1 compression off default
root@solaris:~# zfs get compression mypool/mydata2
NAME PROPERTY VALUE SOURCE
mypool/mydata2 compression gzip-9 local

Now you can copy the contents of /usr/lib into both your normal and compressing filesystems and observe the results. Don't forget the dot or period (".") in the find(1) command below:

root@solaris:~# cd /usr/lib
root@solaris:/usr/lib# find . -print | cpio -pdv /data1
root@solaris:/usr/lib# find . -print | cpio -pdv /data2

The copy into the compressing file system takes longer - it has to perform the compression - but the results show the effect:

root@solaris:/usr/lib# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.35G 1.59G 31K /mypool
mypool/mydata1 1.01G 1.59G 1.01G /data1
mypool/mydata2 341M 1.59G 341M /data2

Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administration Guide.

Exercise Z.4: ZFS Deduplication

The deduplication property is used to remove redundant data from a ZFS file system.
With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.

Task: See how to implement deduplication and its effects.

Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

root@solaris:/usr/lib# zfs destroy mypool/mydata2
root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
root@solaris:/usr/lib# rm -rf /data1/*
root@solaris:/usr/lib# mkdir /data1/2nd-copy
root@solaris:/usr/lib# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.02M 2.94G 31K /mypool
mypool/mydata1 43K 2.94G 43K /data1
root@solaris:/usr/lib# find . -print | cpio -pd /data1
2142768 blocks
root@solaris:/usr/lib# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.02G 1.99G 31K /mypool
mypool/mydata1 1.01G 1.99G 1.01G /data1
root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
2142768 blocks
root@solaris:/usr/lib# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 1.99G 1.96G 31K /mypool
mypool/mydata1 1.98G 1.96G 1.98G /data1

You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio:

root@solaris:/usr/lib# zpool get dedupratio mypool
NAME PROPERTY VALUE SOURCE
mypool dedupratio 4.30x -

Deduplication can also be checked using "zpool list":

root@solaris:/usr/lib# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE -
rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE -

Before moving on to the next topic, destroy that dataset and free up some space:

root@solaris:~# zfs destroy mypool/mydata1

Exercise Z.5: ZFS Encryption

Task: Encrypt sensitive data.

Lab: Explore basic ZFS encryption.

This lab only covers the basics of ZFS encryption. In particular, it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

root@solaris:~# zfs create -o encryption=on mypool/data2
Enter passphrase for 'mypool/data2': ********
Enter again: ********
root@solaris:~#

Creation of a descendent dataset shows that encryption is inherited from the parent:

root@solaris:~# zfs create mypool/data2/data3
root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
NAME PROPERTY VALUE SOURCE
mypool/data2 encryption on local
mypool/data2 keysource passphrase,prompt local
mypool/data2 keystatus available -
mypool/data2 checksum sha256-mac local
mypool/data2/data3 encryption on inherited from mypool/data2
mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2
mypool/data2/data3 keystatus available -
mypool/data2/data3 checksum sha256-mac inherited from mypool/data2

You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session, you may wish to explore changing a key using "zfs key -c mypool/data2".

Exercise Z.6: Shadow Migration

Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access to and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.

Lab: Create the infrastructure for shadow migration and transfer one file system into another.

First create the file system you want to migrate:

root@solaris:~# zpool create oldstuff c3t4d0
root@solaris:~# zfs create oldstuff/forgotten

Then populate it with some files:

root@solaris:~# cd /var/adm
root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

You need the shadow-migration package installed:

root@solaris:~# pkg install shadow-migration
Packages to install: 1
Create boot environment: No
Create backup boot environment: No
Services to change: 1
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 14/14 0.2/0.2
PHASE ACTIONS
Install Phase 39/39
PHASE ITEMS
Package State Update Phase 1/1
Image State Update Phase 2/2

You then enable the shadowd service:

root@solaris:~# svcadm enable shadowd
root@solaris:~# svcs shadowd
STATE STIME FMRI
online 7:16:09 svc:/system/filesystem/shadowd:default

Set the file system to be migrated to read-only:

root@solaris:~# zfs set readonly=on oldstuff/forgotten

Create a new zfs file system with the shadow property set to the file system to be migrated:

root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

Use the shadowstat(1M) command to see the progress of the migration:

root@solaris:~# shadowstat
EST BYTES BYTES ELAPSED
DATASET XFRD LEFT ERRORS TIME
mypool/remembered 92.5M - - 00:00:59
mypool/remembered 99.1M 302M - 00:01:09
mypool/remembered 109M 260M - 00:01:19
mypool/remembered 133M 304M - 00:01:29
mypool/remembered 149M 339M - 00:01:39
mypool/remembered 156M 86.4M - 00:01:49
mypool/remembered 156M 8E 29 (completed)

Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual.

That concludes this lab session.

    Read the article

  • Crashes in Core Data's Inferred Mapping Model Creation (Lightweight Migration). Threading Issue?

    - by enchilada
I'm getting random crashes when creating an inferred mapping model (with Core Data's lightweight migration) within my application. By the way, I have to do it programmatically while the application is running. This is how I create the model (after I have made proper currentModel and newModel objects, of course):

NSMappingModel *mappingModel = [NSMappingModel inferredMappingModelForSourceModel:currentModel destinationModel:newModel error:&error];

The problem is this: the method crashes randomly. When it works, it works just fine without issues. But when it crashes, it crashes my application (instead of returning nil to signify that the method failed, as it should). By randomly, I mean that sometimes it happens and sometimes not; it is unpredictable. Now, here is the deal: I'm running this method on another thread. More precisely, it is located inside a block that is passed via GCD to run on a global background queue. I need to do this for my UI to appear crisp to the user, i.e. so that I can display a progress indicator while the work is underway. The strange thing is that if I remove the GCD stuff and just let it run on the main thread, it seems to work fine and never crashes. Thus, could it be crashing because I'm running this on a different thread? I find that weird, because I don't believe I'm breaking any Core Data rules regarding multi-threading. In particular, I'm not passing any managed objects around, and whenever I need access to the MOC, I create a new MOC, i.e. I'm not relying on any MOC (or, for that matter, anything) that was created earlier on the main thread. Besides, the little MOC work that does occur happens after the mapping-model creation method, i.e. after the point at which the app crashes, so it can't possibly be a cause of the crashes under consideration here. All I'm doing is taking two MOMs and asking for a mapping model between them. That can't be wrong even under threading, now can it? Any ideas on what could be going on?
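For illustration, a sketch of the workaround implied by the observation above: do the (comparatively fast) model inference on the main thread, and push only the heavyweight migration itself onto a background queue. This is an untested sketch, not a confirmed fix; sourceURL and destURL are placeholder store locations:

// On the main thread, which the question reports as stable:
NSError *error = nil;
NSMappingModel *mappingModel = [NSMappingModel inferredMappingModelForSourceModel:currentModel
                                                                 destinationModel:newModel
                                                                            error:&error];
if (mappingModel == nil) { /* inference failed - inspect error */ }

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // The migration manager does the slow work and needs no main-thread state.
    NSMigrationManager *manager = [[NSMigrationManager alloc] initWithSourceModel:currentModel
                                                                 destinationModel:newModel];
    NSError *migrationError = nil;
    [manager migrateStoreFromURL:sourceURL type:NSSQLiteStoreType options:nil
                withMappingModel:mappingModel toDestinationURL:destURL
                 destinationType:NSSQLiteStoreType destinationOptions:nil
                           error:&migrationError];
});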

    Read the article

  • Git Project Dependencies on GitHub

    - by VirtuosiMedia
    I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want to have the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git - I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also greatly appreciated. If you need more info on my setup, just ask in the comments.
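One standard technique that matches this description is a Git submodule: a pointer in the CMS repository to a specific commit of the framework repository, checked out into a sub-directory. A hedged sketch (the repository URL and directory name are placeholders):

cd cms-repo
git submodule add git://github.com/yourname/framework.git framework
git commit -m "Track framework as a submodule"

# When the framework moves forward and the CMS should follow:
cd framework
git pull origin master
cd ..
git add framework
git commit -m "Update framework pointer"

# Collaborators cloning the CMS then run:
git submodule update --init

The CMS history only ever records which framework commit it expects, never the framework files themselves, so updating the framework does not bloat the CMS project.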

    Read the article

  • Python platform

    - by LazyTiberius
I'm looking for a Python platform or environment - something like or similar to EasyPHP or XAMPP - so I can try out and learn some CMSes. I've found the Mezzanine CMS (http://mezzanine.jupo.org/) and SkeletonZ (http://orangoo.com/skeletonz/). Usually I use and know the Apache environment, but Python is new to me, and I'm a noob with these two CMSes. My OS configuration is Windows 7 and Windows 8. I need something easy that simulates a Python hosting environment. Thanks all for your help.
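For what it's worth, Python CMSes are usually tried out with pip and the framework's built-in development server rather than a XAMPP-style bundle. A hedged sketch for Mezzanine, assuming Python and pip are already installed on Windows (myproject is a placeholder name):

pip install mezzanine
mezzanine-project myproject
cd myproject
python manage.py createdb --noinput
python manage.py runserver

Django's runserver then plays the role Apache plays in XAMPP, serving the site locally at http://127.0.0.1:8000/.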

    Read the article

  • vmdk Recovery after migration from 3.5 to 4 and a fallback attempt

    - by olgirard
Hi, I've tried to migrate some VMs from my 3.5i environment to a brand new vSphere 4.0 U1. The two platforms are running simultaneously, sharing the same SAN. I migrated my VM by stopping it, unregistering it in vCenter (ESX ver. 3.5, which I call esx3), registering it in vSphere (ESX ver. 4, which I call esx4), and upgrading the virtual hardware before powering it up (first mistake). vMotion was enabled on esx4, which seems to have been a second mistake. After a day or so, I encountered problems reaching the ESX server (esx4) and decided to unregister my VM from esx4 and fall back to esx3. The VM refused to boot on esx3; I supposed this was due to the virtual hardware being at version 7, so I created a new VM pointing to the vmdk of the old one. Everything seemed fine until I logged into the server and discovered that I was running on the original disk, with every snapshot ignored - even those created on esx3. I tried to reboot the VM on esx4, but it doesn't power up because "The parent virtual disk has been modified since the child was created". I've got a copy of a later state of the drive as a backup, but it was generated between two snapshots (an OVF created with Converter Standalone). Do I have a chance of recovering at least some files from the virtual drive, or (as I suspect) is it all over? I've made enough mistakes for this time. Thanks for your help.
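For context, the quoted error usually indicates a CID mismatch between a snapshot delta and its parent disk. A hedged sketch of the usual first diagnostic step (paths are placeholders; VMware's knowledge base article on this error describes the full procedure, and descriptor edits should only ever be done on copies of the files):

# On the ESX host, inspect the CID chain in the disk descriptors
grep -iE "^(CID|parentCID)" /vmfs/volumes/datastore1/myvm/*.vmdk
# Each delta's parentCID must equal the CID of the disk it points to;
# re-aligning them on copies can make the snapshot chain readable again.

Whether any data survives depends on how much the base disk changed after the snapshots were orphaned.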

    Read the article
