Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has 4 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I can currently do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only moves a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would immediately transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive; that, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers, and I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor; it looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or other?
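
    For what it's worth, the ZFS flow being described would look roughly like this (a minimal sketch, assuming ZFS on both ends; the pool/dataset tank/vmstore, the snapshot names, and the backup host are all placeholders):

        # snapshot at each point in time (e.g. hourly from cron)
        zfs snapshot tank/vmstore@0600
        zfs snapshot tank/vmstore@0700

        # send only what changed between the two snapshots,
        # compressing it in transit
        zfs send -i tank/vmstore@0600 tank/vmstore@0700 \
          | bzip2 \
          | ssh backup.example.com 'bzcat | zfs receive tank/vmstore-backup'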

    Read the article

  • Amazon EC2 root defaults on EBS

    - by CodeShining
    I'm trying to understand why, when launching a new instance, Amazon defaults to EBS (an 8 GB root volume) instead of instance storage. Why do they sell instance storage, then, if it isn't also used to boot the base system? Is it safe to uncheck "delete on termination", make the volume bigger (~50 GiB), and keep all files on that EBS volume instead of creating a new one, so that the data persists and can also be used by another instance?
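
    If you do keep the root EBS volume, the DeleteOnTermination flag can also be flipped on a running instance from the AWS CLI (a hedged sketch; the instance ID is a placeholder and the device name varies by AMI, so check it with describe-instances first):

        aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":false}}]'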

    Read the article

  • USB Flash not recognised by Windows and BIOS, but works fine in Linux

    - by bbalegere
    I have a Transcend JetFlash 2 GB USB drive. It was working fine and I had been using it occasionally. All of a sudden it stopped working in all versions of Windows. The USB drive is also not recognised by the BIOS: it does not show in the list of bootable devices (it used to show up in that list earlier). However, the drive works fine in my Linux Mint 11 OS. Running dmesg gives this:

        [  941.812192] usb 1-2: new high speed USB device using ehci_hcd and address 4
        [  941.936178] usb 1-2: device descriptor read/64, error -71
        [  942.164188] usb 1-2: device descriptor read/64, error -71
        [  942.380189] usb 1-2: new high speed USB device using ehci_hcd and address 5
        [  942.504138] usb 1-2: device descriptor read/64, error -71
        [  942.732179] usb 1-2: device descriptor read/64, error -71
        [  942.948154] usb 1-2: new high speed USB device using ehci_hcd and address 6
        [  943.364134] usb 1-2: device not accepting address 6, error -71
        [  943.476172] usb 1-2: new high speed USB device using ehci_hcd and address 7
        [  943.892140] usb 1-2: device not accepting address 7, error -71
        [  943.892191] hub 1-0:1.0: unable to enumerate USB device on port 2
        [  944.296190] usb 2-2: new full speed USB device using uhci_hcd and address 3
        [  944.438251] usb 2-2: not running at top speed; connect to a high speed hub
        [  944.709928] usbcore: registered new interface driver uas
        [  944.729999] Initializing USB Mass Storage driver...
        [  944.730509] scsi6 : usb-storage 2-2:1.0
        [  944.730908] usbcore: registered new interface driver usb-storage
        [  944.730917] USB Mass Storage support registered.
        [  945.736320] scsi 6:0:0:0: Direct-Access JetFlash Transcend 2GB 8.07 PQ: 0 ANSI: 2
        [  945.744547] sd 6:0:0:0: Attached scsi generic sg1 type 0
        [  945.753316] sd 6:0:0:0: [sdb] 3944448 512-byte logical blocks: (2.01 GB/1.88 GiB)
        [  945.758274] sd 6:0:0:0: [sdb] Write Protect is off
        [  945.758288] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        [  945.765167] sd 6:0:0:0: [sdb] No Caching mode page present
        [  945.765181] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [  945.784309] sd 6:0:0:0: [sdb] No Caching mode page present
        [  945.784323] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [  946.239512] sdb: sdb1
        [  946.257279] sd 6:0:0:0: [sdb] No Caching mode page present
        [  946.257292] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [  946.257302] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    Looks like there is something wrong with the USB drive; it is not recognised by any computer running Windows. Is there any way to fix this? Any idea why this problem occurred?
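
    Since Linux still enumerates the stick, one hedged thing to try before binning it is rebuilding the partition table and filesystem from Linux, which sometimes clears corruption that trips up Windows (this assumes the stick really is /dev/sdb, and it destroys everything on it):

        # triple-check the device name with dmesg before running this!
        sudo dd if=/dev/zero of=/dev/sdb bs=1M count=4    # blank the partition table area
        sudo fdisk /dev/sdb                               # create one primary partition, type 0b (FAT32)
        sudo mkfs.vfat -F 32 /dev/sdb1                    # fresh FAT32 filesystem

    That said, the repeated "error -71" enumeration failures on the high-speed (ehci) bus, while the full-speed (uhci) path still works, smell like failing hardware, in which case no software fix will stick.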

    Read the article

  • Optiplex can't find SATA III Controller

    - by Joel Rodgers
    I just purchased a HighPoint Rocket 620 storage controller (Serial ATA 600, 600 MB/s, OEM version) and an OWC SSD. For some reason, my Dell Optiplex 755 BIOS sees this card as a storage device installed in the x1 PCI Express slot, but I can't get it to boot from it. In fact, I don't even see the boot screen mentioned in the manual. Any help would be greatly appreciated. FYI, I tried every imaginable BIOS setting, including using legacy mode instead of AHCI.

    Read the article

  • How do I enable InnoDB on Ubuntu Server 10.04?

    - by Matt
    Here is my entire my.cnf:

        [client]
        port     = 3306
        socket   = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket   = /var/run/mysqld/mysqld.sock
        nice     = 0

        [mysqld]
        key_buffer              = 224M
        sort_buffer_size        = 4M
        read_buffer_size        = 4M
        read_rnd_buffer_size    = 4M
        myisam_sort_buffer_size = 12M
        query_cache_size        = 44M
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user      = mysql
        socket    = /var/run/mysqld/mysqld.sock
        port      = 3306
        basedir   = /usr
        datadir   = /var/lib/mysql
        tmpdir    = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        #key_buffer        = 16M
        max_allowed_packet = 16M
        thread_stack       = 192K
        thread_cache_size  = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections    = 100
        #table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        #query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log      = 1
        log_error = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time  = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin   = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size  = 100M
        #binlog_do_db     = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        # !includedir /etc/mysql/conf.d/

    And here is my show engines call.... I have no idea what I need to do to enable InnoDB:

        mysql> show engines;
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        | Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        | MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         |
        | MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
        | BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
        | CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         |
        | MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
        | FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
        | ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         |
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        7 rows in set (0.00 sec)
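
    Nothing in the my.cnf above disables InnoDB, so when it is missing from SHOW ENGINES the usual cause on this vintage of MySQL is that mysqld disabled the engine at startup, most often because the existing ib_logfile sizes no longer match the config. A hedged checklist, assuming the Debian/Ubuntu paths shown above:

        # 1. find out why InnoDB was disabled
        sudo grep -i innodb /var/log/mysql/error.log

        # 2. make sure nothing sneaks in skip-innodb
        grep -ri skip-innodb /etc/mysql/

        # 3. if the error log complains about log file sizes, move the old
        #    ib_logfiles aside and let mysqld recreate them
        sudo service mysql stop
        sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
        sudo mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
        sudo service mysql start

        # 4. verify
        mysql -e "SHOW ENGINES;" | grep -i InnoDB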

    Read the article

  • Can I find the web hosting company from an IP address?

    - by ufk
    Hiya. I really hope this question suits Server Fault; if not, my apologies! I have an IP address; is there a way to find the web hosting service that this IP address belongs to? I tried using whois and traceroute, but no luck so far. The case is that my friend bought a domain and storage several years ago and he can't remember where he bought the storage from. Thanks!
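
    For reference, the owner of the surrounding netblock (usually the hosting company or its upstream provider) is what the regional registries' whois returns, and reverse DNS often names the hoster outright (a minimal sketch; 203.0.113.10 is a documentation placeholder):

        # who is the netblock registered to?
        whois 203.0.113.10 | grep -iE 'orgname|org-name|netname|descr'

        # reverse DNS frequently embeds the hosting company's domain
        dig -x 203.0.113.10 +short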

    Read the article

  • Freeware (preferably open-source) tool for creating multi-file spanning archives as a self-merging SFX

    - by Lockszmith
    I have a large file I want to transfer using either Internet storage hosting, DVD-Rs, or USB storage, which is sometimes limited to FAT file systems (for example: mobile phones). What I'm basically looking for is a tool that creates multiple files/volumes (less than 2 GB each, FAT's file size limit) which are packed with a self-extracting executable. Currently the only tool I have found that does this is WinRAR, but that's shareware, not free. Is there any free, preferably open-source tool that does that? Thanks in advance
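
    For the splitting half of this, the open-source 7-Zip (p7zip on Linux) can cut an archive into FAT-safe volumes, and extracting from the first volume reassembles the set automatically. As far as I know it cannot combine split volumes with a self-extracting stub in one step, so this is an approximation of the WinRAR behaviour rather than a full replacement:

        # create 2000 MB volumes: bigfile.7z.001, bigfile.7z.002, ...
        7z a -v2000m bigfile.7z bigfile.iso

        # on the receiving side, extract starting from the first volume
        7z x bigfile.7z.001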

    Read the article

  • How to attach files to an email in Windows Phone 7.5 Mango

    - by Vaibhav Garg
    In the default email client in Windows Phone 7.5 Mango, how can arbitrary files (.zip, .mp3, .txt, .pdf, etc.) be attached? As the storage is sandboxed, a file handler can implement hooks into the email client, as MS Office does and Adobe Reader doesn't, but the email client cannot access files in the phone's storage. Is there a way, or a workaround? In my usage pattern I tend to send a lot of PDFs, and am unable to do that!

    Read the article

  • Interview question: Develop an application that can display "trial period expired" after 30 days without external storage

    - by Algorist
    Hi, I saw this question in a forum about how an application can be developed that keeps track of the installation date and shows "trial period expired" after 30 days of usage. The only constraint is not to use external storage of any kind. Question: how do you achieve this? Thanks, Bala

    --Edit: I think it's easy to figure out where to insert a question mark. Anyway, I will state the question clearly: "external storage" means don't use any kind of storage like a file, the registry, the network, or anything else. You only have your program.

    Read the article

  • Splitting assemblies - finding the balance (avoiding overkill)

    - by M.A. Hanin
    I'm writing a wide component infrastructure to be used in my projects. Since not all projects will require every component created, I've been thinking of splitting the components into discrete assemblies, so that every application developed is deployed with only the assemblies it requires. I assume that creating an assembly has some storage overhead (the assembly's code wrapping whatever is inside). Therefore, there must be some limit to the advantage gained by splitting an assembly: a certain point where splitting the assembly is worse than keeping it united (storage-wise and performance-wise). Now, here is the question: how do I know when splitting an assembly is overkill? P.S. I guess there are other overheads to assembly splitting, aside from the storage overhead. If anyone can point out these overheads, it would be much appreciated.

    Read the article

  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that, in the final application, uses SQLite storage (based on Xcode template code). For unit tests I have created a separate init method for the 'database' that uses in-memory storage (the storage URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit test target, tests fail in the expected place with the message:

        executeFetchRequest:error: A fetch request must have an entity.

    If I add the model file to the test target, the tests execute as expected (the Core Data part looks OK), but after test execution I get:

        RunIPhoneUnitTest.sh: line 123:  9487 Segmentation fault  "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents
        Command /bin/sh failed with exit code 139

    This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!

    Read the article

  • Python optparse: how to include additional info in the usage output?

    - by CarpeNoctem
    Using Python's optparse module, I would like to add extra example lines below the regular usage output. My current print_help() output looks like this:

        usage: check_dell.py [options]

        options:
          -h, --help     show this help message and exit
          -s, --storage  checks virtual and physical disks
          -c, --chassis  checks specified chassis components

    I would like it to include usage examples for the less *nix-literate users at my work. Something like this:

        usage: check_dell.py [options]

        options:
          -h, --help     show this help message and exit
          -s, --storage  checks virtual and physical disks
          -c, --chassis  checks specified chassis components

        Examples:
          check_dell -c all
          check_dell -c fans memory voltage
          check_dell -s

    How would I accomplish this? What optparse options allow for such? Current code:

        import optparse

        def main():
            parser = optparse.OptionParser()
            parser.add_option('-s', '--storage', action='store_true', default=False,
                              help='checks virtual and physical disks')
            parser.add_option('-c', '--chassis', action='store_true', default=False,
                              help='checks specified chassis components')
            (opts, args) = parser.parse_args()

    Read the article

  • Remote DocumentRoot in Apache gives a 404

    - by kshouler
    I have the following specified in my httpd.conf, but I get a 404 when attempting to connect to the server from another machine. If I set the DocumentRoot to the default htdocs directory, everything works fine. (Note: I've also tried replacing the "//storage/data1" part of the path with the network drive letter "U:".)

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        DocumentRoot "//storage/data1/Engineering/Product Development"
        <Directory "//storage/data1/Engineering/Product Development">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
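
    One cause worth ruling out on Windows (hedged, since the question doesn't say how Apache is started): when Apache runs as a service under the LocalSystem account it has no network credentials, so UNC DocumentRoots return 404 even though the same config works from a console session. The usual fix is running the service as a domain account that can read the share; the service name below is the Apache 2.2 installer default and may differ:

        sc config Apache2.2 obj= "DOMAIN\apacheuser" password= "secret"
        net stop Apache2.2
        net start Apache2.2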

    Read the article

  • ClearCase option for moving a view from one host path to another

    - by wrapperm
    Hi all, I have created a ClearCase dynamic view for my development, named "view1". I mistakenly selected a local PC on my network (made sharable by its owner) as the view storage location, when I was supposed to select a server. Now, the issue is that I have done a lot of development in the view and have plenty of view DOs and view-private files in it, so I'm ruling out the option of deleting the view from the PC's local storage (host path) and creating another view on the server with the same config spec. Please let me know if there is any method of editing the view properties (or doing something else) by which I could move the view to the server, with all the DOs and view-private files retained. Thanks in advance, Rahamath
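
    For reference, the usual way to relocate a dynamic view's storage without losing anything is to copy the whole .vws directory to the server and re-register it; view-private files and DOs live inside that directory, so they travel with it. A hedged sketch (paths and the view tag are placeholders; worth confirming against the cleartool documentation for your release):

        cleartool endview -server view1
        # copy the entire view1.vws directory to the server, preserving permissions
        cleartool unregister -view \\oldpc\views\view1.vws
        cleartool register -view \\server\views\view1.vws
        cleartool mktag -view -tag view1 -replace \\server\views\view1.vws
        cleartool startview view1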

    Read the article

  • Why SELECT N + 1 with no foreign keys and LINQ?

    - by Daniel Flöijer
    I have a database that unfortunately has no real foreign keys (I plan to add them later, but prefer not to do it right now, to make migration easier). I have manually written domain objects that map to the database to set up relationships (following this tutorial: http://www.codeproject.com/Articles/43025/A-LINQ-Tutorial-Mapping-Tables-to-Objects), and I've finally gotten the code to run properly. However, I've noticed I now have the SELECT N + 1 problem. Instead of selecting all Products, they're selected one by one with this SQL:

        SELECT [t0].[id] AS [ProductID], [t0].[Name], [t0].[info] AS [Description]
        FROM [products] AS [t0]
        WHERE [t0].[id] = @p0
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [65]

    Controller:

        public ViewResult List(string category, int page = 1)
        {
            var cat = categoriesRepository.Categories
                .SelectMany(c => c.LocalizedCategories)
                .Where(lc => lc.CountryID == 1)
                .First(lc => lc.Name == category).Category;
            var productsToShow = cat.Products;
            var viewModel = new ProductsListViewModel
            {
                Products = productsToShow.Skip((page - 1) * PageSize).Take(PageSize).ToList(),
                PagingInfo = new PagingInfo
                {
                    CurrentPage = page,
                    ItemsPerPage = PageSize,
                    TotalItems = productsToShow.Count()
                },
                CurrentCategory = cat
            };
            return View("List", viewModel);
        }

    Since I wasn't sure if my LINQ expression was correct, I tried to just use this, but I still got N+1:

        var cat = categoriesRepository.Categories.First();

    Domain objects:

        [Table(Name = "products")]
        public class Product
        {
            [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
            public int ProductID { get; set; }

            [Column]
            public string Name { get; set; }

            [Column(Name = "info")]
            public string Description { get; set; }

            private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>();

            [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "productId", ThisKey = "ProductID")]
            private ICollection<ProductCategory> ProductCategories
            {
                get { return _productCategories; }
                set { _productCategories.Assign(value); }
            }

            public ICollection<Category> Categories
            {
                get { return (from pc in ProductCategories select pc.Category).ToList(); }
            }
        }

        [Table(Name = "products_menu")]
        class ProductCategory
        {
            [Column(IsPrimaryKey = true, Name = "products_id")]
            private int productId;

            private EntityRef<Product> _product = new EntityRef<Product>();

            [System.Data.Linq.Mapping.Association(Storage = "_product", ThisKey = "productId")]
            public Product Product
            {
                get { return _product.Entity; }
                set { _product.Entity = value; }
            }

            [Column(IsPrimaryKey = true, Name = "products_types_id")]
            private int categoryId;

            private EntityRef<Category> _category = new EntityRef<Category>();

            [System.Data.Linq.Mapping.Association(Storage = "_category", ThisKey = "categoryId")]
            public Category Category
            {
                get { return _category.Entity; }
                set { _category.Entity = value; }
            }
        }

        [Table(Name = "products_types")]
        public class Category
        {
            [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
            public int CategoryID { get; set; }

            private EntitySet<ProductCategory> _productCategories = new EntitySet<ProductCategory>();

            [System.Data.Linq.Mapping.Association(Storage = "_productCategories", OtherKey = "categoryId", ThisKey = "CategoryID")]
            private ICollection<ProductCategory> ProductCategories
            {
                get { return _productCategories; }
                set { _productCategories.Assign(value); }
            }

            public ICollection<Product> Products
            {
                get { return (from pc in ProductCategories select pc.Product).ToList(); }
            }

            private EntitySet<LocalizedCategory> _LocalizedCategories = new EntitySet<LocalizedCategory>();

            [System.Data.Linq.Mapping.Association(Storage = "_LocalizedCategories", OtherKey = "CategoryID")]
            public ICollection<LocalizedCategory> LocalizedCategories
            {
                get { return _LocalizedCategories; }
                set { _LocalizedCategories.Assign(value); }
            }
        }

        [Table(Name = "products_types_localized")]
        public class LocalizedCategory
        {
            [Column(Name = "id", IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
            public int LocalizedCategoryID { get; set; }

            [Column(Name = "products_types_id")]
            private int CategoryID;

            private EntityRef<Category> _Category = new EntityRef<Category>();

            [System.Data.Linq.Mapping.Association(Storage = "_Category", ThisKey = "CategoryID")]
            public Category Category
            {
                get { return _Category.Entity; }
                set { _Category.Entity = value; }
            }

            [Column(Name = "country_id")]
            public int CountryID { get; set; }

            [Column]
            public string Name { get; set; }
        }

    I've tried commenting out everything in my View, so nothing there seems to influence this. The ViewModel is as simple as it looks, so it shouldn't be anything there either. When reading this (http://www.hookedonlinq.com/LinqToSQL5MinuteOVerview.ashx) I started suspecting it might be because I have no real foreign keys in the database and that I might need to use manual joins in my code. Is that correct? How would I go about it? Should I remove my mapping code from my domain model, or is it something I need to add/change in it?

    Note: I've stripped out parts of the code that I don't think are relevant, to make this question cleaner. Please let me know if something is missing.

    Read the article

  • Can I write to different Jetty databases using JPA with the same "entity class"?

    - by Per
    I am using Java persistence and its EntityManager class, and have it assigned to store a class object that shall be written to the database. My problem is that I want to write to different databases using the same storage class. My solution was to write a StorageManager factory holding a Map of all EntityManagers. The solution looked good until I looked at the databases and realized that all information, regardless of which (correct) value the Map returned, was written to the same database (one of those initialised in the Map). So my question is: can I write to different databases using JPA with the same storage class (the class holding the structure of my database)? Thanks

    Read the article

  • Apache redirect when a user's home directory is completely empty

    - by Scott M
    I work for an ISP, and I have a server where thousands of users each get 10 MB of free storage. They get this free storage with every e-mail account they have with us. An example of a user's storage address: http://users.example.com/~username/

    One problem I can see is scanning the server for usernames to see which accounts are available, basically harvesting a list of all our customers' valid e-mail addresses. This would be very, very bad. So I want to redirect to our homepage if someone comes across a user account that is empty (I'd say 90% of them are completely empty). I also do not want to simply -Indexes them and use a custom 403, because the few customers that do use them want +Indexes. I know I can always just tell customers to put an htaccess file in their directory with Options +Indexes if they want directory listing, but that's a last resort. How can I make it pretty much impossible to tell which accounts exist on the server but are not in use at all?
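
    Apache can't easily test "directory is empty" on its own, so one hedged approach (paths, the target URL, and the cron cadence are all assumptions) is a periodic shell job that drops a redirecting .htaccess into every empty home directory and removes it again once the user uploads content, leaving +Indexes users untouched:

        #!/bin/sh
        # assumed layout: /home/users/<name>/public_html served as ~<name>
        for dir in /home/users/*/public_html; do
            # everything except a previously dropped .htaccess
            content=$(ls -A "$dir" | grep -v '^\.htaccess$')
            if [ -z "$content" ]; then
                # empty: send visitors to the homepage
                echo 'RedirectMatch 302 .* http://www.example.com/' > "$dir/.htaccess"
            elif grep -q 'RedirectMatch 302 .\* http://www.example.com/' "$dir/.htaccess" 2>/dev/null; then
                # user has real content now; drop our redirect
                rm -f "$dir/.htaccess"
            fi
        done

    This needs AllowOverride FileInfo in those directories, and it only hides which accounts exist if non-existent usernames also land on the homepage (e.g. via a matching ErrorDocument or RedirectMatch for unknown ~users).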

    Read the article

  • DualLayout for SharePoint 2010 WCM Quick Start

    - by svdoever
    DualLayout for SharePoint 2010 WCM is a solution that gives you complete HTML freedom in your SharePoint Server 2010 publishing pages. In this post I provide a quick-start guide to get you up and running quickly so you can try it out for yourself. This quick start creates a simple HTML5 site with a page to showcase the basics and the power of DualLayout. We will create the site in its own web application. Normally there are many things you would do to create a clean starting point for your SharePoint 2010 WCM site; all those steps will be covered in later posts. For now, here is the minimal set of steps to get DualLayout working on your machine.

    1. Create an authenticated web application with host header cms.html5demo.local on port 80 for the CMS side of the site.
    2. Click the Create Site Collection link in the Application Created dialog box and create a site collection based on the Publishing Portal site template.
    3. Before clicking the site link in the Top-Level Site Successfully Created dialog, add the new host header cms.html5demo.local to the hosts file: 127.0.0.1        cms.html5demo.local
    4. Navigate to the site at http://cms.html5demo.local to see the out-of-the-box Adventure Works publishing site.
    5. Download and add the DualLayout solution package designfactory.duallayout.sps2010.trial.1.2.0.0.wsp to the farm's solution store: on the Start menu, click All Programs > Microsoft SharePoint 2010 Products > SharePoint 2010 Management Shell, and at the Windows PowerShell command prompt type: Add-SPSolution -LiteralPath designfactory.duallayout.sps2010.trial.1.2.0.0.wsp
    6. In SharePoint 2010 Central Administration, deploy the solution to the web application http://cms.html5demo.local.
    7. Navigate to the site at http://cms.html5demo.local, and in the Site Settings screen select Site Collection Administration > Site collection features and activate the DualLayout feature.
    8. Open the site http://cms.html5demo.local in SharePoint Designer 2010.
    9. Create a view-mode master page html5simple.master with the following code:

        <%@ Master language="C#" %>
        <%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register TagPrefix="sdl" Namespace="DesignFactory.DualLayout" Assembly="DesignFactory.DualLayout, Version=1.2.0.0, Culture=neutral, PublicKeyToken=077f92bbf864a536" %>

        <!DOCTYPE html>
        <html class="no-js">
          <head>
            <meta charset="utf-8" />
            <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
            <title><SharePointWebControls:FieldValue FieldName="Title" runat="server"/></title>

            <script type="text/javascript">
              document.createElement('header');
              document.createElement('nav');
              document.createElement('article');
              document.createElement('hgroup');
              document.createElement('aside');
              document.createElement('section');
              document.createElement('footer');
              document.createElement('figure');
              document.createElement('time');
            </script>

            <asp:ContentPlaceHolder id="PlaceHolderAdditionalPageHead" runat="server"/>
          </head>

          <body>
            <header>
              <div class="logo">Logo</div>
              <h1>SiteTitle</h1>
              <nav>
                <a href="#">SiteMenu 1</a>
                <a href="#">SiteMenu 2</a>
                <a href="#">SiteMenu 3</a>
                <a href="#">SiteMenu 4</a>
                <a href="#">SiteMenu 5</a>
                <sdl:SwitchToWcmModeLinkButton runat="server" Text="…"/>
              </nav>
              <div class="tagline">Tagline</div>
              <form>
                <label>Zoek</label>
                <input type="text" placeholder="Voer een zoekterm in...">
                <button>Zoek</button>
              </form>
            </header>

            <div class="content">
              <div class="pageContent">
                <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" />
              </div>
            </div>

            <footer>
              <nav>
                <ul>
                  <li><a href="#">FooterMenu 1</a></li>
                  <li><a href="#">FooterMenu 2</a></li>
                  <li><a href="#">FooterMenu 3</a></li>
                  <li><a href="#">FooterMenu 4</a></li>
                  <li><a href="#">FooterMenu 5</a></li>
                </ul>
              </nav>
              <small>Copyright &copy; 2011 Macaw</small>
            </footer>
          </body>
        </html>

    Note that if no specific WCM-mode master page is specified (html5simple-wcm.master), the default v4.master master page will be used in WCM-mode.
    10. Create a WCM-mode page layout html5simplePage-wcm.aspx with the following code:

        <%@ Page language="C#"
            Inherits="DesignFactory.DualLayout.WcmModeLayoutPage, DesignFactory.DualLayout, Version=1.2.0.0, Culture=neutral, PublicKeyToken=077f92bbf864a536" %>
        <%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register Tagprefix="PublishingWebControls" Namespace="Microsoft.SharePoint.Publishing.WebControls" Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation" Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

        <asp:Content ContentPlaceholderID="PlaceHolderPageTitle" runat="server">
            <SharePointWebControls:FieldValue id="PageTitle" FieldName="Title" runat="server"/>
        </asp:Content>
        <asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
        </asp:Content>

    Notice the Inherits at line two: instead of inheriting from Microsoft.SharePoint.Publishing.PublishingLayoutPage, we need to inherit from DesignFactory.DualLayout.WcmModeLayoutPage.

    11. Create a view-mode page layout html5simplePage.aspx with the following code:

        <%@ Page language="C#"
            Inherits="DesignFactory.DualLayout.ViewModeLayoutPage, DesignFactory.DualLayout, Version=1.2.0.0, Culture=neutral, PublicKeyToken=077f92bbf864a536" %>
        <%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register Tagprefix="PublishingWebControls" Namespace="Microsoft.SharePoint.Publishing.WebControls" Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
        <%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation" Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

        <asp:Content ContentPlaceholderID="PlaceHolderAdditionalPageHead" runat="server" />
        <asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
            The title of the page is: <SharePointWebControls:FieldValue id="PageTitleInContent" FieldName="Title" runat="server"/>
        </asp:Content>

    Notice the Inherits at line two: instead of inheriting from Microsoft.SharePoint.Publishing.PublishingLayoutPage, we need to inherit from DesignFactory.DualLayout.ViewModeLayoutPage.
    12. Set the html5simple.master master page as the Site Master Page.
    13. Set the allowed page layouts to the Html5 Simple Page page layout, and set New Page Default Settings to Html5 Simple Page as well, so newly created pages also get this page layout. Note that the Html5 Simple Page page layout is initially not selectable for New Page Default Settings: save the configuration page first after selecting the allowed page layouts, then open it again and select the default new page.
    14. Under Site Actions select the New Page action and create a page Home.aspx of the default page layout type Html5 Simple Page.
    15. Set the newly created Home.aspx page as the Welcome Page.
    16. Navigate to the site http://cms.html5demo.local and see the home page in WCM display and edit mode. Select Switch to View Mode under Site Actions to see the resulting page in view-mode; select the three dots (…) at the right side of the menu to switch back to WCM-mode.

    Have a look at the source view of the resulting web page and admire the clean HTML. No SharePoint-specific markup or CSS files!

        <!DOCTYPE html>
        <html class="no-js">
          <head>
            <meta charset="utf-8" />
            <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
            <title>Home</title>
            <script type="text/javascript">
              document.createElement('header');
              document.createElement('nav');
              document.createElement('article');
              document.createElement('hgroup');
              document.createElement('aside');
              document.createElement('section');
              document.createElement('footer');
              document.createElement('figure');
              document.createElement('time');
            </script>
          </head>

          <body>
            <header>
              <div class="logo">Logo</div>
              <h1>SiteTitle</h1>
              <nav>
                <a href="#">SiteMenu 1</a>
                <a href="#">SiteMenu 2</a>
                <a href="#">SiteMenu 3</a>
                <a href="#">SiteMenu 4</a>
                <a href="#">SiteMenu 5</a>
                <a href="/Pages/Home.aspx?DualLayout_ShowInWcmMode=true">…</a>
              </nav>
              <div class="tagline">Tagline</div>
              <form>
                <label>Zoek</label>
                <input type="text" placeholder="Voer een zoekterm in...">
                <button>Zoek</button>
              </form>
            </header>

            <div class="content">
              <div class="pageContent">
                The title of the page is: Home
              </div>
            </div>

            <footer>
              <nav>
                <ul>
                  <li><a href="#">FooterMenu 1</a></li>
                  <li><a href="#">FooterMenu 2</a></li>
                  <li><a href="#">FooterMenu 3</a></li>
                  <li><a href="#">FooterMenu 4</a></li>
                  <li><a href="#">FooterMenu 5</a></li>
                </ul>
              </nav>
              <small>Copyright &copy; 2011 Macaw</small>
            </footer>
          </body>
        </html>
        <!-- Macaw DesignFactory DualLayout for SharePoint 2010 Trial version -->

    Note the switch link in the nav (the href carrying DualLayout_ShowInWcmMode=true): it is only rendered for authenticated users and is our way to switch back to WCM-mode. This concludes our quick start to get DualLayout up and running in a matter of minutes. And what is the result?

    - You can have the full SharePoint 2010 WCM publishing-page editing experience to manage the content in your pages.
    - You don't have to delve into large SharePoint-specific master pages and page layouts, or master the dos and don'ts of SharePoint controls, scripts and stylesheets.
    - The end user gets a clean and light HTML page.

    Get your fully functional, non-timebombed trial copy of DualLayout and start creating!

    Read the article

  • 'Make' command compiling errors

    - by G_T
    I'm trying to locally install a program which is written in C++. I have downloaded the program and am attempting to use the "make" command to compile it, as the program's instructions dictate. However, when I do, I get this error:

        /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory
        compilation terminated.

    Looking around on the internet, some people seem to address this problem with:

        sudo apt-get install libc6-dev-i386

    I checked to see if this package was installed, and it was not. When I try to install it, I get:

        E: Unable to locate package libc6-dev-i386

    I have already run sudo apt-get update. I'm sure this is a rookie question, but any help is appreciated. I'm running 13.10 32-bit.

    UPDATE: I've tried other suggestions I've found on similar errors. All I have managed is a different but similar error. Here is what I get:

        Geoffrey@Geoffrey-Latitude-E6400:/usr/local/src/trinityrnaseq_r2013_08_14$ make
        Using gnu compiler for Inchworm and Chrysalis
        cd Inchworm && (test -e configure || autoreconf) \
                && ./configure --prefix=`pwd` && make install
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for g++... g++
        checking for C++ compiler default output file name... a.out
        checking whether the C++ compiler works... yes
        checking whether we are cross compiling... no
        checking for suffix of executables...
        checking for suffix of object files... o
        checking whether we are using the GNU C++ compiler... yes
        checking whether g++ accepts -g... yes
        checking for style of include used by make... GNU
        checking dependency style of g++... gcc3
        checking for library containing cos... none required
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating src/Makefile
        config.status: creating config.h
        config.status: config.h is unchanged
        config.status: executing depfiles commands
        make[1]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm'
        Making install in src
        make[2]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src'
        if g++ -DHAVE_CONFIG_H -I. -I. -I.. -pedantic -fopenmp -Wall -Wextra -Wno-long-long -Wno-deprecated -m64 -g -O2 -MT Fasta_entry.o -MD -MP -MF ".deps/Fasta_entry.Tpo" -c -o Fasta_entry.o Fasta_entry.cpp; \
        then mv -f ".deps/Fasta_entry.Tpo" ".deps/Fasta_entry.Po"; else rm -f ".deps/Fasta_entry.Tpo"; exit 1; fi
        In file included from Fasta_entry.hpp:4:0,
                         from Fasta_entry.cpp:1:
        /usr/include/c++/4.8/string:38:28: fatal error: bits/c++config.h: No such file or directory
         #include <bits/c++config.h>
                                    ^
        compilation terminated.
        make[2]: *** [Fasta_entry.o] Error 1
        make[2]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src'
        make[1]: *** [install-recursive] Error 1
        make[1]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm'
        make: *** [inchworm] Error 2
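
    The failing compile line passes -m64, and a missing bits/c++config.h is the classic symptom of building 64-bit without the matching multilib headers, which a 32-bit 13.10 install won't have. Two hedged ways forward (package names are the usual Ubuntu ones; on a 32-bit host the realistic fix is dropping the flag):

        # option 1 (64-bit hosts): install the multilib C/C++ support
        sudo apt-get update
        sudo apt-get install gcc-multilib g++-multilib

        # option 2 (32-bit hosts): build natively by removing -m64 from the build files
        cd /usr/local/src/trinityrnaseq_r2013_08_14
        grep -rl -- '-m64' . | xargs sed -i 's/-m64//g'
        make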

    Read the article

  • How do NTP Servers Manage to Stay so Accurate?

    - by Akemi Iwaya
    Many of us have had the occasional problem with our computers and other devices retaining accurate time settings, but a quick sync with an NTP server makes all well again. But if our own devices can lose accuracy, how do NTP servers manage to stay so accurate?

    Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Photo courtesy of LEOL30 (Flickr).

    The Question

    SuperUser reader Frank Thornton wants to know how NTP servers are able to remain so accurate:

        I have noticed that on my servers and other machines, the clocks always drift so that they have to sync up to remain accurate. How do the NTP server clocks keep from drifting and always remain so accurate?

    The Answer

    SuperUser contributor Michael Kjorling has the answer for us:

    NTP servers rely on highly accurate clocks for precision timekeeping. A common time source for central NTP servers are atomic clocks, or GPS receivers (remember that GPS satellites have atomic clocks onboard). These clocks are defined as accurate since they provide a highly exact time reference.

    There is nothing magical about GPS or atomic clocks that makes them tell you exactly what time it is. Because of how atomic clocks work, they are simply very good at, having once been told what time it is, keeping accurate time (since the second is defined in terms of atomic effects). In fact, it is worth noting that GPS time is distinct from the UTC that we are more used to seeing. These atomic clocks are in turn synchronized against International Atomic Time (TAI) in order to not only accurately tell the passage of time, but also the time.

    Once you have an exact time on one system connected to a network like the Internet, it is a matter of protocol engineering enabling transfer of precise times between hosts over an unreliable network. In this regard a Stratum 2 (or farther from the actual time source) NTP server is no different from your desktop system syncing against a set of NTP servers.

    By the time you have a few accurate times (as obtained from NTP servers or elsewhere) and know the rate of advancement of your local clock (which is easy to determine), you can calculate your local clock's drift rate relative to the "believed accurate" passage of time. Once locked in, this value can then be used to continuously adjust the local clock to make it report values very close to the accurate passage of time, even if the local real-time clock itself is highly inaccurate. As long as your local clock is not highly erratic, this should allow keeping accurate time for some time even if your upstream time source becomes unavailable for any reason.

    Some NTP client implementations (probably most ntpd daemon or system service implementations) do this, and others (like ntpd's companion ntpdate, which simply sets the clock once) do not. This is commonly referred to as a drift file because it persistently stores a measure of clock drift, but strictly speaking it does not have to be stored as a specific file on disk.

    In NTP, Stratum 0 is by definition an accurate time source. Stratum 1 is a system that uses a Stratum 0 time source as its time source (and is thus slightly less accurate than the Stratum 0 time source). Stratum 2 again is slightly less accurate than Stratum 1 because it is syncing its time against the Stratum 1 source, and so on. In practice, this loss of accuracy is so small that it is completely negligible in all but the most extreme of cases.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
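
    As a concrete footnote to the drift mechanism described in the answer, the numbers are easy to inspect on an ordinary ntpd host (a hedged illustration; the paths are common Debian/Ubuntu defaults and may differ on other systems):

        # /etc/ntp.conf typically names a drift file:
        #   driftfile /var/lib/ntp/ntp.drift
        # its single value is the local clock's frequency error in parts per million
        cat /var/lib/ntp/ntp.drift

        # live view of peers, offsets and jitter (milliseconds)
        ntpq -pn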

    Read the article

  • Heading out to Dallas GiveCamp 2011

    - by dotgeek
    The day has finally arrived for twelve local charities here in the Dallas area, when they'll get help from various local developers with their website initiatives at this year's Dallas GiveCamp. I'm really looking forward to helping out at the event, and to what I hope will be the start of many more GiveCamps to follow.

    Similar to Habitat for Humanity, where people gather to help build and improve homes for people in need, GiveCamp brings together programmers and equips them with the virtual tools they need to build and improve these charities' websites. Tonight is when things kick off for this weekend's events, and teams will start working on their various projects. The building continues on through the night, and all the way through until Sunday afternoon. The end goal for the teams and charities is to have a completed, working website for each charity to begin using, and to turn over all the production code and digital assets to them.

    None of this would be possible without the great sponsors we have returning once again and their donations of various products to help these charities with their projects, like Telerik's CMS product Sitefinity 4.0, paired with a year of hosting from Verio, to mention just a few. Just as skilled builders might train volunteers in the use of a nail gun when building a house, training is also available on site for the developers and these local charities, giving them the skills to manage and use these products, from site development through to actual production; that training is a key to the success of this weekend's event.

    Tonight's training sessions kick off with a real treat from Giovanni Gallucci, speaking about social media for NPOs, and later Gabe Sumner from Telerik begins a training session on Sitefinity for developers. The training sessions continue throughout the weekend, with DotNetNuke and Mojo Portal sessions planned as well.

    If you're a developer and would like to help out in the future, check with your local user groups to find out if there is already a GiveCamp near you. If there isn't one, consider starting a local GiveCamp, and then you too can help Code It Forward.

    About GiveCamp

    GiveCamp is a weekend-long event where software developers, designers, and database administrators donate their time to create custom software for non-profit organizations. This custom software could be a new website for the non-profit organization, a small data-collection application to keep track of members, or an application for the Red Cross that automatically emails a blood donor three months after they've donated blood to remind them that they are now eligible to donate again. The only limitation is that the project should be scoped so it can be completed in a weekend.

    During GiveCamp, developers are welcome to go home in the evenings or camp out all weekend long. Food and drink are usually provided at the event, and sometimes there are even game systems set up for when you and your team need a little break! Overall, it's a great opportunity for people to work together, develop new friendships, and do something important for their community.

    At GiveCamp, there is an expectation of "What Happens at GiveCamp, Stays at GiveCamp".
Therefore, all source code must be turned over to the charities at the end of the weekend (developers cannot ask for payment) and the charities are responsible for maintaining the code moving forward (charities cannot expect the developers to maintain the codebase).

    Read the article

  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my cisco router to my Web Server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config: ! version 12.3 service config no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug uptime service timestamps log uptime service password-encryption no service dhcp ! hostname QUESTMOUNT ! logging buffered 16386 informational logging rate-limit 100 except warnings no logging console no logging monitor enable secret 5 -removed- ! username administrator secret 5 -removed- username manager secret 5 -removed- clock timezone NZST 12 clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00 aaa new-model ! ! aaa authentication login default local aaa authentication login userlist local aaa authentication ppp default local aaa authorization network grouplist local aaa session-id common ip subnet-zero no ip source-route no ip domain lookup ip domain name quest.local ! ! no ip bootp server ip inspect name firewall tcp ip inspect name firewall udp ip inspect name firewall cuseeme ip inspect name firewall h323 ip inspect name firewall rcmd ip inspect name firewall realaudio ip inspect name firewall streamworks ip inspect name firewall vdolive ip inspect name firewall sqlnet ip inspect name firewall tftp ip inspect name firewall ftp ip inspect name firewall icmp ip inspect name firewall sip ip inspect name firewall fragment maximum 256 timeout 1 ip inspect name firewall netshow ip inspect name firewall rtsp ip inspect name firewall skinny ip inspect name firewall http ip audit notify log ip audit po max-events 100 ip audit name intrusion info list 3 action alarm ip audit name intrusion attack list 3 action alarm drop reset no ftp-server write-enable ! ! ! ! crypto isakmp policy 1 authentication pre-share ! crypto isakmp policy 2 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group staff key 0 qS;,sc:q<skro1^, domain quest.local pool vpnclients acl 106 ! ! crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac ! crypto dynamic-map vpnusers 1 description Client to Site VPN Users set transform-set tr-des-md5 ! ! crypto map cm-cryptomap client authentication list userlist crypto map cm-cryptomap isakmp authorization list grouplist crypto map cm-cryptomap client configuration address respond crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers ! ! ! ! interface Ethernet0 ip address 192.168.0.254 255.255.255.0 ip access-group 102 in ip nat inside hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive dsl operating-mode auto ! interface ATM0.1 point-to-point pvc 0/100 encapsulation aal5mux ppp dialer dialer pool-member 1 ! ! interface Dialer0 bandwidth 640 ip address negotiated ip access-group 101 in no ip redirects no ip unreachables ip nat outside ip inspect firewall out ip audit intrusion in encapsulation ppp no ip route-cache no ip mroute-cache dialer pool 1 dialer-group 1 no cdp enable ppp pap sent-username -removed- password 7 -removed- ppp ipcp dns request crypto map cm-cryptomap ! 
ip local pool vpnclients 192.168.99.1 192.168.99.254
ip nat inside source list 105 interface Dialer0 overload
ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444
ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000
ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433
ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433
ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25
ip classless
ip route 0.0.0.0 0.0.0.0 Dialer0
ip http server
no ip http secure-server
!
ip access-list logging interval 10
logging 192.168.0.2
access-list 1 remark The local LAN.
access-list 1 permit 192.168.0.0 0.0.0.255
access-list 2 permit 192.168.0.0
access-list 2 remark Where management can be done from.
access-list 2 permit 192.168.0.0 0.0.0.255
access-list 3 remark Traffic not to check for intrusion detection.
access-list 3 deny 192.168.99.0 0.0.0.255
access-list 3 permit any
access-list 101 remark Traffic allowed to enter the router from the Internet
access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255
access-list 101 deny ip 0.0.0.0 0.255.255.255 any
access-list 101 deny ip 10.0.0.0 0.255.255.255 any
access-list 101 deny ip 127.0.0.0 0.255.255.255 any
access-list 101 deny ip 169.254.0.0 0.0.255.255 any
access-list 101 deny ip 172.16.0.0 0.15.255.255 any
access-list 101 deny ip 192.0.2.0 0.0.0.255 any
access-list 101 deny ip 192.168.0.0 0.0.255.255 any
access-list 101 deny ip 198.18.0.0 0.1.255.255 any
access-list 101 deny ip 224.0.0.0 0.15.255.255 any
access-list 101 deny ip any host 255.255.255.255
access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433
access-list 101 permit tcp host 120.136.2.22 any eq 1433
access-list 101 permit tcp host 123.100.90.58 any eq 1433
access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433
access-list 101 permit udp host 120.136.2.22 any eq 1433
access-list 101 permit udp host 123.100.90.58 any eq 1433
access-list 101 permit tcp any any eq 444
access-list 101 permit tcp any any eq 9000
access-list 101 permit tcp any any eq smtp
access-list 101 permit udp any any eq non500-isakmp
access-list 101 permit udp any any eq isakmp
access-list 101 permit esp any any
access-list 101 permit tcp any any eq 1723
access-list 101 permit gre any any
access-list 101 permit tcp any any eq 22
access-list 101 permit tcp any any eq telnet
access-list 102 remark Traffic allowed to enter the router from the Ethernet
access-list 102 permit ip any host 192.168.0.254
access-list 102 deny ip any host 192.168.0.255
access-list 102 deny udp any any eq tftp log
access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
access-list 102 deny ip any 0.0.0.0 0.255.255.255 log
access-list 102 deny ip any 10.0.0.0 0.255.255.255 log
access-list 102 deny ip any 127.0.0.0 0.255.255.255 log
access-list 102 deny ip any 169.254.0.0 0.0.255.255 log
access-list 102 deny ip any 172.16.0.0 0.15.255.255 log
access-list 102 deny ip any 192.0.2.0 0.0.0.255 log
access-list 102 deny ip any 192.168.0.0 0.0.255.255 log
access-list 102 deny ip any 198.18.0.0 0.1.255.255 log
access-list 102 deny udp any any eq 135 log
access-list 102 deny tcp any any eq 135 log
access-list 102 deny udp any any eq netbios-ns log
access-list 102 deny udp any any eq netbios-dgm log
access-list 102 deny tcp any any eq 445 log
access-list 102 permit ip 192.168.0.0 0.0.0.255 any
access-list 102 permit ip any host 255.255.255.255
access-list 102 deny ip any any log
access-list 105 remark Traffic to NAT
access-list 105 deny ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
access-list 105 permit ip 192.168.0.0 0.0.0.255 any
access-list 106 remark User to Site VPN Clients
access-list 106 permit ip 192.168.0.0 0.0.0.255 any
dialer-list 1 protocol ip permit
!
line con 0
 no modem enable
line aux 0
line vty 0 4
 access-class 2 in
 transport input telnet ssh
 transport output none
!
scheduler max-task-time 5000
!
end

any ideas? :)
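For anyone picking this up: a quick way to see which of these rules traffic is actually hitting is to zero the ACL counters, reproduce the problem, and read the counters back along with the NAT table. These are standard IOS exec commands (output format varies by release); this is only a diagnostic sketch, not a fix:

clear access-list counters
! ...reproduce the failing traffic, then inspect per-line hit counts:
show access-lists 101
show access-lists 102
! Verify the static 444/9000/1433/25 maps above are producing translations
show ip nat translations
! Watch translations being built in real time (noisy; stop with "undebug all")
debug ip nat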

    Read the article

  • How can I get FreeNAS to respond to libvirt shutdown requests

    - by ptomli
I have a KVM VM of FreeNAS 0.7.1 Shere (revision 5127) running on Ubuntu Server 10.04, and I'm unable to convince the VM to shut down from the host:

virsh shutdown freenas

I would expect this to send an ACPI trigger to the VM, and FreeNAS to then do what it's told. I'm not a FreeBSD fundi, so I don't really know which packages or processes to poke to get this working. I have tried to convince powerd to run, but the VM's CPUs don't have the required freq entry.

Sysctl HW

$ sysctl hw
hw.machine: amd64
hw.model: QEMU Virtual CPU version 0.12.3
hw.ncpu: 1
hw.byteorder: 1234
hw.physmem: 523116544
hw.usermem: 463806464
hw.pagesize: 4096
hw.floatingpoint: 1
hw.machine_arch: amd64
hw.realmem: 536850432
hw.aac.iosize_max: 65536
hw.amr.force_sg32: 0
hw.an.an_cache_iponly: 1
hw.an.an_cache_mcastonly: 0
hw.an.an_cache_mode: dbm
hw.an.an_dump: off
hw.ata.to: 15
hw.ata.wc: 1
hw.ata.atapi_dma: 1
hw.ata.ata_dma_check_80pin: 1
hw.ata.ata_dma: 1
hw.ath.txbuf: 200
hw.ath.rxbuf: 40
hw.ath.regdomain: 0
hw.ath.countrycode: 0
hw.ath.xchanmode: 1
hw.ath.outdoor: 1
hw.ath.calibrate: 30
hw.ath.hal.swba_backoff: 0
hw.ath.hal.sw_brt: 10
hw.ath.hal.dma_brt: 2
hw.bce.msi_enable: 1
hw.bce.tso_enable: 1
hw.bge.allow_asf: 0
hw.cardbus.cis_debug: 0
hw.cardbus.debug: 0
hw.cs.recv_delay: 570
hw.cs.ignore_checksum_failure: 0
hw.cs.debug: 0
hw.cxgb.snd_queue_len: 50
hw.cxgb.use_16k_clusters: 1
hw.cxgb.force_fw_update: 0
hw.cxgb.singleq: 0
hw.cxgb.ofld_disable: 0
hw.cxgb.msi_allowed: 2
hw.cxgb.txq_mr_size: 1024
hw.cxgb.sleep_ticks: 1
hw.cxgb.tx_coalesce: 0
hw.firewire.hold_count: 3
hw.firewire.try_bmr: 1
hw.firewire.fwmem.speed: 2
hw.firewire.fwmem.eui64_lo: 0
hw.firewire.fwmem.eui64_hi: 0
hw.firewire.phydma_enable: 1
hw.firewire.nocyclemaster: 0
hw.firewire.fwe.rx_queue_len: 128
hw.firewire.fwe.tx_speed: 2
hw.firewire.fwe.stream_ch: 1
hw.firewire.fwip.rx_queue_len: 128
hw.firewire.sbp.tags: 0
hw.firewire.sbp.use_doorbell: 0
hw.firewire.sbp.scan_delay: 500
hw.firewire.sbp.login_delay: 1000
hw.firewire.sbp.exclusive_login: 1
hw.firewire.sbp.max_speed: -1
hw.firewire.sbp.auto_login: 1
hw.mfi.max_cmds: 128
hw.mfi.event_class: 0
hw.mfi.event_locale: 65535
hw.pccard.cis_debug: 0
hw.pccard.debug: 0
hw.cbb.debug: 0
hw.cbb.start_32_io: 4096
hw.cbb.start_16_io: 256
hw.cbb.start_memory: 2281701376
hw.pcic.pd6722_vsense: 1
hw.pcic.intr_mask: 57016
hw.pci.honor_msi_blacklist: 1
hw.pci.enable_msix: 1
hw.pci.enable_msi: 1
hw.pci.do_power_resume: 1
hw.pci.do_power_nodriver: 0
hw.pci.enable_io_modes: 1
hw.pci.host_mem_start: 2147483648
hw.syscons.kbd_debug: 1
hw.syscons.kbd_reboot: 1
hw.syscons.bell: 1
hw.syscons.saver.keybonly: 1
hw.syscons.sc_no_suspend_vtswitch: 0
hw.usb.uplcom.interval: 100
hw.usb.uvscom.interval: 100
hw.usb.uvscom.opktsize: 8
hw.wi.debug: 0
hw.wi.txerate: 0
hw.xe.debug: 0
hw.intr_storm_threshold: 1000
hw.availpages: 127714
hw.bus.devctl_disable: 0
hw.ste.rxsyncs: 0
hw.busdma.total_bpages: 32
hw.busdma.zone0.total_bpages: 32
hw.busdma.zone0.free_bpages: 32
hw.busdma.zone0.reserved_bpages: 0
hw.busdma.zone0.active_bpages: 0
hw.busdma.zone0.total_bounced: 0
hw.busdma.zone0.total_deferred: 0
hw.busdma.zone0.lowaddr: 0xffffffff
hw.busdma.zone0.alignment: 2
hw.busdma.zone0.boundary: 65536
hw.clockrate: 2808
hw.instruction_sse: 1
hw.apic.enable_extint: 0
hw.kbd.keymap_restrict_change: 0
hw.acpi.supported_sleep_state: S3 S4 S5
hw.acpi.power_button_state: S5
hw.acpi.sleep_button_state: S3
hw.acpi.lid_switch_state: NONE
hw.acpi.standby_state: S1
hw.acpi.suspend_state: S3
hw.acpi.sleep_delay: 1
hw.acpi.s4bios: 0
hw.acpi.verbose: 0
hw.acpi.disable_on_reboot: 0
hw.acpi.handle_reboot: 0
hw.acpi.cpu.cx_lowest: C1

Processes

$ ps ax
  PID  TT  STAT      TIME COMMAND
    0  ??  DLs    0:00.00 [swapper]
    1  ??  ILs    0:00.00 /sbin/init --
    2  ??  DL     0:00.08 [g_event]
    3  ??  DL     0:00.29 [g_up]
    4  ??  DL     0:00.33 [g_down]
    5  ??  DL     0:00.00 [crypto]
    6  ??  DL     0:00.00 [crypto returns]
    7  ??  DL     0:00.00 [xpt_thrd]
    8  ??  DL     0:00.00 [kqueue taskq]
    9  ??  DL     0:00.00 [acpi_task_0]
   10  ??  RL    34:12.42 [idle: cpu0]
   11  ??  WL     0:01.13 [swi4: clock sio]
   12  ??  WL     0:00.00 [swi3: vm]
   13  ??  WL     0:00.00 [swi1: net]
   14  ??  DL     0:00.04 [yarrow]
   15  ??  WL     0:00.00 [swi6: task queue]
   16  ??  WL     0:00.00 [swi2: cambio]
   17  ??  DL     0:00.00 [acpi_task_1]
   18  ??  DL     0:00.00 [acpi_task_2]
   19  ??  WL     0:00.00 [swi5: +]
   20  ??  DL     0:00.01 [thread taskq]
   21  ??  WL     0:00.00 [swi6: Giant taskq]
   22  ??  WL     0:00.00 [irq9: acpi0]
   23  ??  WL     0:00.09 [irq14: ata0]
   24  ??  WL     0:00.11 [irq15: ata1]
   25  ??  WL     0:00.57 [irq11: ed0 uhci0]
   26  ??  DL     0:00.00 [usb0]
   27  ??  DL     0:00.00 [usbtask-hc]
   28  ??  DL     0:00.00 [usbtask-dr]
   29  ??  WL     0:00.01 [irq1: atkbd0]
   30  ??  WL     0:00.00 [swi0: sio]
   31  ??  DL     0:00.00 [sctp_iterator]
   32  ??  DL     0:00.00 [pagedaemon]
   33  ??  DL     0:00.00 [vmdaemon]
   34  ??  DL     0:00.00 [idlepoll]
   35  ??  DL     0:00.00 [pagezero]
   36  ??  DL     0:00.01 [bufdaemon]
   37  ??  DL     0:00.00 [vnlru]
   38  ??  DL     0:00.14 [syncer]
   39  ??  DL     0:00.01 [softdepflush]
 1221  ??  Is     0:00.00 /sbin/devd
 1289  ??  Is     0:00.01 /usr/sbin/syslogd -ss -f /var/etc/syslog.conf
 1608  ??  Is     0:00.00 /usr/sbin/cron -s
 1692  ??  Ss     0:00.03 /usr/local/sbin/mDNSResponderPosix -b -f /var/etc/mdn
 1730  ??  S      0:00.43 /usr/local/sbin/lighttpd -f /var/etc/lighttpd.conf -m
 1882  ??  DL     0:00.00 [system_taskq]
 1883  ??  DL     0:00.00 [arc_reclaim_thread]
 4139  ??  S      0:00.03 /usr/local/bin/php /usr/local/www/exec.php
 4144  ??  S      0:00.00 sh -c ps ax
 4145  ??  R      0:00.00 ps ax
 1816  v0  Is     0:00.01 login [pam] (login)
 1818  v0  I+     0:00.03 -tcsh (csh)
 1817  v1  Is+    0:00.00 /usr/libexec/getty Pc ttyv1
 1402 con- I      0:00.00 /usr/local/sbin/afpd -F /var/etc/afpd.conf
 1404 con- S      0:00.00 /usr/local/sbin/cnid_metad
 1682 con- I      0:02.78 /usr/local/sbin/mt-daapd -m -c /var/etc/mt-daapd.conf
 1789 con- S      0:00.18 /usr/local/bin/fuppesd --config-dir /var/etc --config

libvirt snippet

<domain type='kvm'>
  <name>freenas</name>
  <uuid>********-****-****-****-************</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-0.12'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>

Is this possible? Ideally I'd like to be able to stop the host without having to manually deal with shutting down the VM.
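One avenue I'm considering: virsh shutdown makes QEMU inject an ACPI power-button press, and hw.acpi.power_button_state: S5 above suggests the kernel should already power off when it sees one, so an explicit devd(8) rule may be redundant; still, it makes the handling visible. A sketch, assuming the ACPI Button notify shown in FreeBSD's stock devd.conf samples fires on this FreeNAS build (the file placement is a guess, since FreeNAS 0.7 regenerates much of its configuration at boot):

# Sketch: react to the ACPI power-button notify with a clean poweroff.
# Per the commented samples in devd.conf(5), notify 0x00 is the power
# button and 0x01 the sleep button (assumed to hold on this build).
notify 10 {
        match "system"          "ACPI";
        match "subsystem"       "Button";
        match "notify"          "0x00";
        action "/sbin/shutdown -p now";
};

After appending that to devd's configuration and restarting it inside the VM (/etc/rc.d/devd restart), virsh shutdown freenas on the host should inject the button press this rule catches.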

    Read the article

  • The Oracle Enterprise Linux Software and Hardware Ecosystem

    - by sergio.leunissen
It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux. As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing. Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities. The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via [email protected]

Chip Manufacturers: Intel, Intel Enabled Server Acceleration Alliance, AMD
Server vendors: Cisco Unified Computing System, Dawning, Dell, Egenera, Fujitsu, HP, Huawei, IBM, NEC, Sun/Oracle
Storage Systems, Volume Management and File Systems: 3Par, Compellent, EMC VPLEX, FalconStor, Fusion-io, Hitachi Data Systems, HP Storage Array Systems, Lustre, Network Appliance, OCFS2, PillarData, Symantec Veritas Storage Foundation
Networking (Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand): Brocade, Emulex, Mellanox, QLogic, Voltaire
SOA and Middleware: ActiveState (ActivePerl, ActivePython), Tibco, Zend
Backup, Recovery & Replication: Arkeia Network Backup Suite, BakBone NetVault, CommVault Simpana 8, EMC (Networker, Replication Manager), FalconStor Continuous Data Protector, HP Data Protector, NetApp Snapmanager, Quest LiteSpeed Engine, Steeleye (Data Replication, Disaster Recovery), Symantec (NetBackup, Veritas Volume Replicator, Backup Exec), Zmanda Amanda Enterprise
Data Center Automation: BMC, CA Unicenter, HP (Server Automation (formerly Opsware), System Management Homepage), Oracle Enterprise Manager Ops Center, Quest Vizioncore vFoglight Pro, TeamQuest Manager
Clustering & High Availability: FUJITSU x10sure, NEC Express Cluster X, Steeleye Lifekeeper, Symantec Cluster Server, Univa UniCluster
Virtualization Platforms and Cloud Providers: Amazon EC2, Citrix XenServer, Rackspace Cloud, VirtualBox, VMWare ESX
Security Management: ArcSight (Enterprise Security Manager, Logger), CA Access Control, Centrify Suite, Ecora Auditor, FoxT Manager, Likewise (Unix Account Management), Lumension Endpoint Management and Security Suite, QualysGuard Suite, Quest Privilege Manager, McAfee (Application Control, Change Control, Integrity Monitor, Integrity Control, PCI Pro), Solidcore S3, Symantec Enterprise Security Manager (ESM), Tripwire, Trusted Computer Solutions

    Read the article
