Search Results

Search found 18174 results on 727 pages for 'eclipse plugin dev'.


  • Delphi App has "No Debug Info" when Debugging

    - by James L.
    We have built an application that uses packages and components. When we debug the application, the "Event Log" in the IDE often shows that our BPLs are being loaded without debug information ("No Debug Info"). This doesn't make sense, because all our packages and EXEs are built with debug:

    _(each project) | Options | Compiling_
      [ x ] Assertions
      [ x ] Debug information
      [ x ] Local symbols
      Symbol reference info = "Reference info"
      [ ] Use debug .dcus
      [ x ] Use imported data references

    _(each project) | Options | Linking_
      [ x ] Debug information
      Map file = Detailed

    We have 4 projects, all built with runtime packages:
      1. Core.bpl
      2. Components.bpl
      3. Plugin.bpl (uses both #1 & #2)
      4. MainApp.exe (uses #1)

    Problems Observed
    1) Many times when we debug, Components.bpl is loaded with debug info, but all values in the "Local Variables" window are blank. If you hover your mouse over a variable in the code, there is no popup, and the Evaluate window also shows nothing (the "Result" pane is always blank).
    2) Sometimes the Event Log shows "No Debug Info" for various BPLs. For instance, if we activate the Plugin.bpl project, set its Run | Parameters' Host Application to MainApp.exe, and then press F9, all modules seem to load with "Has Debug Info" except for the Plugin.bpl module. When it loads, the Event Log shows "No Debug Info". However, if we close the app and immediately press F9, it runs again without recompiling anything, and this time Plugin.bpl is loaded with debug ("Has Debug Info").

    Questions
    1) What would cause the "Local Variables" window to not display the values?
    2) Why are BPLs sometimes loaded without debug info when the BPL was compiled with debug and all the debug files (dcu, map, etc.) are available?

    Read the article

  • Are MEF's ComposableParts contracts instance-based?

    - by Dave
    I didn't really know how to phrase the title of my question, so my apologies in advance. I read through parts of the MEF documentation to try to find the answer, but couldn't find it.

    I'm using ImportMany to allow MEF to create multiple instances of a specific plugin. That plugin Imports several parts, and within calls to a specific instance, it wants these Imports to be singletons. However, what I don't want is for all instances of this plugin to use the same singleton.

    For example, let's say my application ImportManys Blender appliances. Every time I ask for one, I want a different Blender. However, each Blender Imports a ControlPanel, and I want each Blender to have its own ControlPanel. To make things a little more interesting, each Blender can load BlendPrograms, which are also contained within their own assemblies, and MEF takes care of this loading. A BlendProgram might need to access the ControlPanel to get the speed, but I want to ensure that it is accessing the correct ControlPanel (i.e. the one that is associated with the Blender that is associated with the program!) This diagram might clear things up a little bit:

    As the note shows, I believe that the confusion could come from an inherently poor design. The BlendProgram shouldn't touch the ControlPanel directly; instead, perhaps the BlendProgram should get the speed via the Blender, which would then delegate the request to its ControlPanel. If this is the case, then I assume the BlendProgram needs a reference to a specific Blender. In order to do this, is the right approach to leverage MEF and use an ImportingConstructor for BlendProgram, i.e.

        [ImportingConstructor]
        public class BlendProgram : IBlendProgram
        {
            public BlendProgram( Blender blender) {}
        }

    And if this is the case, how do I know that MEF will use the intended Blender plugin?

    Read the article

  • jQuery .trigger() or $(window) not working in Google Chrome

    - by Jonathan
    I have a jQuery ajax navigation tabs plugin that I created with some help from CSS-Tricks.com and the jQuery hashchange event plugin (which detects hash changes in browsers that don't support the event natively). The code is a little long to post here, but it goes something like this:

    Part 1) When a tab is clicked, it takes the href attribute of that tab and adds it to the browser's address bar as '#tab_name':

        window.location.hash = $(this).attr("href");

    Part 2) When the address bar changes (hash change), it gets the new hash like this:

        window.location.hash.substring(1);

    (substring is there to get only 'tab_name' without the '#'), and then calls the ajax function to fetch the content to display.

    I want to automatically trigger the plugin to load the first tab when the page is accessed, so at the start of the code I put:

        if (window.location.hash === '') {       // If no '#' is in the address bar
            window.location.hash = '#tab_1';     // Add #tab_1 to the address bar
            $(window).trigger('hashchange');     // Trigger a hashchange so 'Part 2' of the plugin calls the ajax function using the '#tab_1' just added
        }

    The problem is that it works in FF but not in Chrome. I mean, everything works, but it seems like $(window).trigger('hashchange'); is not doing anything, because the first tab never loads. Any suggestions?
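
    A minimal sketch of one possible cause, assuming the hashchange plugin is loaded: if the trigger fires before the handler is bound, the event appears to be swallowed. The loadTabFromHash function below is a hypothetical stand-in for 'Part 2' of the plugin; binding it first and calling it directly removes any dependence on the synthetic event:

        $(function () {
            // Hypothetical stand-in for 'Part 2' of the plugin.
            function loadTabFromHash() {
                var tab = window.location.hash.substring(1) || 'tab_1';
                // ...call the ajax function for 'tab' and display it here...
            }

            // Bind the handler before anything tries to trigger it.
            $(window).bind('hashchange', loadTabFromHash);

            if (window.location.hash === '') {
                window.location.hash = '#tab_1';   // fires a real hashchange where supported
            }

            // Calling the loader directly does not depend on the synthetic event at all.
            loadTabFromHash();
        });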

    Read the article

  • Using JRE 1.5, still maven says annotation not supported in -source 1.3

    - by Abhijeet
    Hi, I am using JRE 1.5. Still when I try to compile my code it fails by saying to use JRE 1.5 instead of 1.3 C:\temp\SpringExamplemvn -e clean install + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building SpringExample [INFO] task-segment: [clean, install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean {execution: default-clean}] [INFO] Deleting directory C:\temp\SpringExample\target [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 6 resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Compiling 6 source files to C:\temp\SpringExample\target\classes [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.BuildFailureException: Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:715) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation failure C:\temp\SpringExample\src\main\java\com\mkyong\stock\model\Stock.java:[45,9] annotations are not supported in -source 1.3 (try -source 1.5 to enable annotations) @Override at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:516) at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:114) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) 
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2 seconds [INFO] Finished at: Wed Dec 22 10:04:53 IST 2010 [INFO] Final Memory: 9M/16M [INFO] ------------------------------------------------------------------------ C:\temp\SpringExamplejavac -version javac 1.5.0_08 javac: no source files
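
    A hedged aside: with older Maven versions the maven-compiler-plugin defaults to -source 1.3 regardless of which JDK is installed, which matches the error above. A pom.xml fragment along these lines (a sketch; adjust the plugin version to whatever your build already resolves) raises the source/target level so annotations such as @Override compile:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-compiler-plugin</artifactId>
              <configuration>
                <source>1.5</source>
                <target>1.5</target>
              </configuration>
            </plugin>
          </plugins>
        </build>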

    Read the article

  • Checkstyle not working

    - by user330060
    Hi, I am new to maven and checkstyle, so I need to ask some questions... I want to use checkstyle in my maven-based project, so in my pom.xml I have added the dependency

        <dependency>
            <groupId>checkstyle</groupId>
            <artifactId>checkstyle</artifactId>
            <version>2.4</version>
        </dependency>

    and I have also added the entry in the plugin tag:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-checkstyle-plugin</artifactId>
            <version>2.4</version>
            <configuration>
                <enableRulesSummary>true</enableRulesSummary>
                <configLocation>checkstyle.xml</configLocation>
            </configuration>
        </plugin>

    But when I run my maven build with the command mvn clean install, checkstyle doesn't do anything. And since I don't have any checkstyle.xml on my system yet, shouldn't it at least complain about that? What other configuration am I missing?
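
    One possible explanation, offered as a sketch rather than a definitive diagnosis: a plugin declared under <build><plugins> with no <executions> block has no goal bound to the lifecycle, so mvn clean install never invokes it (and therefore never notices the missing checkstyle.xml). Binding the check goal explicitly makes it run as part of the build:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <version>2.4</version>
          <configuration>
            <configLocation>checkstyle.xml</configLocation>
          </configuration>
          <executions>
            <execution>
              <goals>
                <!-- 'check' binds to the verify phase by default, so it now runs on 'mvn install' -->
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>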

    Read the article

  • Jquery grid overlay in wordpress

    - by Anders Kitson
    I have this simple plugin working in a static html site, and am trying to add it to a wordpress development site based off of 960.gs. The jquery script links are correct, but the console gives me this error: "Uncaught TypeError: Cannot call method 'addGrid' of null". I got the code from this tutorial: http://www.badlydrawntoy.com/2009/04/21/960gs-grid-overlay-a-jquery-plugin/

    Here is the code I am using:

        /*<![CDATA[*/
        // onload
        $(function() {
            $("body").addGrid(12, {img_path: 'img/'});
        });
        /*]]>*/

    Here is the code for the plugin:

        /*
         * @ description: Plugin to display 960.gs gridlines See http://960.gs/
         * @author: badlyDrawnToy sharp / http://www.badlydrawntoy.com
         * @license: Creative Commons License - ShareAlike http://creativecommons.org/licenses/by-sa/3.0/
         * @version: 1.0 20th April 2009
         */
        (function($){$.fn.addGrid=function(cols,options){var defaults={default_cols:12,z_index:999,img_path:'/images/',opacity:.6};var opts=$.extend(defaults,options);var cols=cols!=null&&(cols===12||cols===16)?cols:12;var cols=cols===opts.default_cols?'12_col':'16_col';return this.each(function(){var $el=$(this);var height=$el.height();var wrapper=$('<div id="'+opts.grid_id+'"/>').appendTo($el).css({'display':'none','position':'absolute','top':0,'z-index':(opts.z_index-1),'height':height,'opacity':opts.opacity,'width':'100%'});$('<div/>').addClass('container_12').css({'margin':'0 auto','width':'960px','height':height,'background-image':'url('+opts.img_path+cols+'.png)','background-repeat':'repeat-y'}).appendTo(wrapper);$('<div>grid on</div>').appendTo($el).css({'position':'absolute','top':0,'left':0,'z-index':opts.z_index,'background':'#222','color':'#fff','padding':'3px 6px','width':'40px','text-align':'center'}).hover(function(){$(this).css("cursor","pointer");},function(){$(this).css("cursor","default");}).toggle(function(){$(this).text("grid off");$('#'+opts.grid_id).slideDown();},function(){$(this).text("grid on");$('#'+opts.grid_id).slideUp();});});};})(jQuery);
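
    A hedged possibility worth ruling out: WordPress loads jQuery in noConflict mode, so a bare $ at the top level may belong to another bundled library (Prototype's $, for instance, looks elements up by id and returns null, which would explain the "of null" message). Passing jQuery into the ready callback keeps the call identical while sidestepping that:

        // noConflict-safe wrapper: $ is an alias for jQuery only inside this callback.
        jQuery(function($) {
            $("body").addGrid(12, {img_path: 'img/'});
        });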

    Read the article

  • How to extract part of the path and the ending file name with Regex?

    - by brasofilo
    I need to build an associative array with the plugin name and the language file it uses, from paths in the following form:

        /whatever/path/length/public_html/wp-content/plugins/adminimize/languages/adminimize-en_US.mo
        /whatever/path/length/public_html/wp-content/plugins/audio-tube/lang/atp-en_US.mo
        /whatever/path/length/public_html/wp-content/languages/en_US.mo
        /whatever/path/length/public_html/wp-content/themes/twentyeleven/languages/en_US.mo

    Those are the language files WordPress is loading. They are all inside /wp-content/, but with variable server paths. I'm looking only for the ones inside the plugins folder, and I want to grab the plugin folder name and the filename.

    Hypothetical case in PHP, where the reg_extract_* functions are the parts I'm missing:

        $plugins = array();
        foreach( $big_array as $item )
        {
            $folder = reg_extract_folder( $item );
            if( 'plugin' == $folder )
            {
                // "folder-name-after-plugins-folder"
                $plugin_name = reg_extract_pname( $item );
                // "ending-mo-file.mo"
                $file_name = reg_extract_fname( $item );
                $plugins[] = array( 'name' => $plugin_name, 'file' => $file_name );
            }
        }

    [update] Ok, so I was missing quite a basic function, pathinfo... :/ No problem detecting whether /plugins/ is contained in the path. But what about the plugin folder name?

    Read the article

  • How to remove erroneous dependency from tycho build?

    - by sfinnie
    Context: I have built an eclipse update site using tycho, but trying to install it into a target IDE fails. The update site builds fine; I can see it from a target eclipse installation and select the feature for installation. However, the dependency check fails at the start of the install because it can't find a declared dependency (org.eclipselabs.xtext.utils.unittesting). This shouldn't be a dependency: it was erroneously included in the MANIFEST.MF of one of my eclipse plugin projects. I removed the dependency from the manifest and ran mvn clean install. The build reported success. However, when I try to use the newly built update site, it still complains that the dependency on org.eclipselabs.xtext.utils.unittesting (a) exists and (b) can't be satisfied.

    So the question is: what else do I need to do to remove the dependency from the generated update site? Thanks for any pointers.

    PS: I know I could add the site for o.e.x.u.unittesting to the target eclipse installation so it can satisfy the dependency. However, I don't want to do that; it's not needed for the feature to work, and I don't want other users to have to add an unnecessary dependency.

    Read the article

  • jQuery: bind generated elements

    - by superUntitled
    Hello, thank you for taking the time to look at this. I am trying to code my very first jQuery plugin and have run into my first problem. The plugin is called like this:

        <div id="kneel"></div>
        <script type="text/javascript">
            $("#kneel").zod(1, { });
        </script>

    It takes the first option (an integer) and returns html content that is dynamically generated by php. The generated content needs to be bound by a variety of functions (such as disabling form buttons and click events that return ajax data). The plugin looks like this (I have included the whole plugin in case that matters)...

        (function( $ ){
            $.fn.zod = function(id, options ) {
                var settings = {
                    'next_button_text': 'Next',
                    'submit_button_text': 'Submit'
                };
                return this.each(function() {
                    if ( options ) {
                        $.extend( settings, options );
                    }
                    // variables
                    var obj = $(this);

                    /* these functions contain html elements that are generated by the get() function below */

                    // disable some buttons
                    $('div.bario').children('.button').attr('disabled', true);

                    // once an option is selected, enable the button
                    $('input[type="radio"]').live('click', function(e) {
                        $('div.bario').children('.button').attr('disabled', false);
                    })

                    // when a button is clicked, return some data
                    $('.button').bind('click', function(e) {
                        e.preventDefault();
                        $.getJSON('/returnSomeData.php', function(data) {
                            $('.text').html('<p>Hello: ' + data + '</p>');
                        });

                        // generate content, this is the content that needs binding...
                        $.get('http://example.com/script.php?id='+id, function(data) {
                            $(obj).html(data);
                        });
                    });
                });
            };
        })( jQuery );

    The problem I am having is that the functions created to run on the generated content are not binding to that content. How do I bind to the content created by the get() function?
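
    A sketch of two common ways around this, using the selectors from the question (the placement and option names are illustrative only): either bind inside the $.get success callback, after the new html exists, or delegate the events from the plugin element so they also apply to elements injected later (.delegate needs jQuery 1.4.2+):

        // Option 1: bind after the generated content exists.
        $.get('http://example.com/script.php?id=' + id, function(data) {
            obj.html(data);
            obj.find('div.bario .button').attr('disabled', true);   // these elements exist now
        });

        // Option 2: delegate from the plugin element so later-injected elements are covered too.
        obj.delegate('input[type="radio"]', 'click', function() {
            obj.find('div.bario .button').attr('disabled', false);
        });
        obj.delegate('.button', 'click', function(e) {
            e.preventDefault();
            // handle the click here, even though .button was injected after binding
        });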

    Read the article

  • firefox does not load large size images

    - by Pradeep
    I am stuck with what looks like a bug in FF, where it is unable to load a large image (an 8 MB image in my case) from the server. The image loads fine in IE. I am still looking for ways to get rid of this problem. I changed the server (IIS) settings to allow bigger file sizes. I also used the "load" event on the image with jQuery and tried all the options listed here http://api.jquery.com/load-event/, but nothing has worked so far. If any of you have come across a similar problem and a way to resolve it, it would be nice to hear from you.

    Please note: high resolution images are part of the requirement.

    Code:

        <style>
            img {
                background-color: #FFFFFF;
                background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
                background-repeat: no-repeat;
                background-position: center center;
            }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
            jQuery(document).ready(function($){
                ///var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
                // set up the node / element
                _im = $("#main");
                //_im.bind("load",function(){ $(this).fadeIn(); });
                // set the src attribute now, after insertion to the DOM
                //_im.attr('src',_url);
                $("#main").one("load", function(){
                    alert('loaded');
                })
                .each(function(){
                    if(this.complete){
                        $(this).trigger("load");
                    }
                });
            });
        </script>
        </head>
        <body>
            <div id="target"><img id='main' src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"> </img></div>
        </body>
        </html>
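
    A small diagnostic sketch that may help narrow this down (the URL is the one from the question; the handler bodies are illustrative): preload the file with a detached Image object, attaching onload/onerror before assigning src, to see whether Firefox fails to fetch the image at all or merely never fires load on the in-page element:

        jQuery(document).ready(function($) {
            var probe = new Image();

            probe.onload = function() {
                // The browser fetched and decoded the file; swap it into the page.
                $('#main').attr('src', probe.src);
            };
            probe.onerror = function() {
                // The fetch itself failed - check server limits, mime type and the network tab.
                alert('preload failed');
            };

            // Attach handlers first, then set src, so a cached image cannot fire load too early.
            probe.src = 'http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg';
        });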

    Read the article

  • .NET: Calling GetInterface method of Assembly obj with a generic interface argument

    - by Khnle
    I have the following interface:

        public interface PluginInterface<T> where T : MyData
        {
            List<T> GetTableData();
        }

    In a separate assembly, I have a class that implements this interface. In fact, all classes that implement this interface are in separate assemblies. The reason is to architect my app as a plugin host, where plugins can be added in the future as long as they implement the above interface and the assembly DLLs are copied to the appropriate folder. My app discovers the plugins by first loading the assembly and then performing the following:

        List<PluginInterface<MyData>> Plugins = new List<PluginInterface<MyData>>();
        string FileName = ...; // name of the DLL file that contains classes that implement the interface
        Assembly Asm = Assembly.LoadFile(FileName);
        foreach (Type AsmType in Asm.GetTypes())
        {
            //Type type = AsmType.GetInterface("PluginInterface", true);
            // Type type = AsmType.GetInterface("PluginInterface<T>", true);
            if (type != null)
            {
                PluginInterface<MyData> Plugin = (PluginInterface<MyData>)Activator.CreateInstance(AsmType);
                Plugins.Add(Plugin);
            }
        }

    The trouble is that neither of the lines where I get the type (Type type = ...) seems to work; both return null. I have a feeling that the generics somehow contribute to the trouble. Do you know why?

    Read the article

  • Bind an Incode DataTemplate in WPF

    - by Mike Bynum
    I have a WPF application which is using MVVM. I know there are ways of doing this in XAML, but I am working on a plugin architecture and came up with a solution where a plugin exposes its viewmodel and its datatemplate to my plugin host's viewmodel. I want to leave the lifetime management of the plugin view up to WPF. I tried having the plugins expose a UserControl, but ran into issues when WPF decided to dispose of my UserControl, so I could not reattach it without weird, hacky workarounds.

    I am having trouble getting some sort of binding working where I can bind a control to the data and its template to my data template. I have a ViewModel which looks something like:

        public class MyViewModel
        {
            public DataTemplate SelectedTemplate { get; set; }
            public object SelectedViewModel { get; set; }
        }

    The selected template and viewmodel are determined somewhere else in the code but are irrelevant to my question. My question is how I can bind to a DataTemplate so that I know how to display the data held in SelectedViewModel. The DataTemplate is created in code and represents:

        <DataTemplate DataType="{x:Type vm:MyViewModel}">
            <v:MyUserControl />
        </DataTemplate>

    I have tried:

        <UserControl Template="{Binding Path=SelectedTemplate}" Content="{Binding Path=SelectedViewModel}" />

    But UserControl expects a control template and not a data template.

    Read the article

  • JS file called twice in Wordpress

    - by Dxr Tw
    I'm trying to reduce the number of requests by reducing the number of JS files on a WP site. I successfully combined around 7 javascript files into one (site.js). Now, I'm using a plugin which has its own JS file (pluginA.js), and I want to include pluginA.js in that site.js. However, if I simply copy the pluginA JS content into site.js and then change the location to /files/site.js, firebug's NET tab shows that site.js is requested/called twice. I presume this is due to wp_enqueue_script. How can I make it not call site.js a second time, but instead use the already loaded site.js? Maybe there's an alternative to wp_enqueue_script?

    Plugin's php file:

        add_action( 'wp_enqueue_scripts', 'testplugin_scripts');

        function testplugin_scripts() {
            /*global $testplugin_version; */
            $default_selector = 'li:has(ul) > a';
            $default_selector_leaf = 'li li li:not(:has(ul)) > a';

            wp_enqueue_scripts('test-plugin', site_url('/files/site.js', __FILE__), array('jquery'), $testplugin_version);

            $params = array(
                'selector' => apply_filters('testplugin_selector', $default_selector),
                'selector_leaf' => apply_filters('testplugin_selector_leaf', $default_selector_leaf)
            );
            wp_localize_script('test-plugin', 'testplugin_params', $params);
        }

    Read the article

  • How to determine which element(s) are visible in an overflowed <div>

    - by jjross
    Basically, I'm trying to implement a system that behaves similar to the reading pane built into the Google Reader interface. If you haven't seen it, Google Reader presents each article in a separate box, and as you scroll it highlights the current box (and marks the article as read). In addition to this, you can move forward or backward in the article list by clicking the previous and next buttons in the UI.

    I've basically figured out how to do most of the functionality. However, I'm not sure how I can determine which of my divs is currently visible in the scrollable pane. I have a div that is set to overflow:auto. Inside of this div there are other divs, each one containing a piece of content. I've used the following jQuery plugin to make everything scroll based on a click of the "next" or "previous" button, and it works like a charm: http://demos.flesler.com/jquery/serialScroll/

    But I can't tell which div has "focus" in the scrollable pane. I'd like to be able to do this for two reasons:

    1. I'd like to highlight the item that the user is currently reading (similar to Google Reader). I need to do this regardless of whether they used the plugin to get there or used the browser's scroll bar.
    2. I need to be able to tell the plugin which item has focus, so that my call to scroll to the "next" pane actually uses the currently viewed pane (and not just the previous pane that the plugin scrolled from).

    I've tried doing some searching, but I can't seem to figure out a way to do this. I found lots of ways to scroll to a particular item, but I can't find a way to determine which element is visible in an overflowed div. If I can determine which items are visible, I can (probably) figure out the rest. I'm using jquery if that helps. Thanks!
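
    A minimal sketch of one way to compute this with jQuery (the #pane and .item selectors below are hypothetical stand-ins for the overflowed div and its child boxes): compare each child's vertical extent against the container's current scroll window and keep the ones that overlap it:

        function visibleItems($pane) {
            var paneTop = $pane.scrollTop();
            var paneBottom = paneTop + $pane.height();

            return $pane.children('.item').filter(function() {
                // Offset of this item from the top of the pane's scrollable content.
                var top = $(this).offset().top - $pane.offset().top + paneTop;
                var bottom = top + $(this).outerHeight();
                return bottom > paneTop && top < paneBottom;   // any overlap counts as visible
            });
        }

        // Example: highlight the first visible item whenever the pane is scrolled.
        $('#pane').scroll(function() {
            $('.item').removeClass('current');
            visibleItems($(this)).first().addClass('current');
        });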

    Read the article

  • How to subscribe to the free Oracle Linux errata yum repositories

    - by Lenz Grimmer
    Now that updates and errata for Oracle Linux are available for free (both as in beer and freedom), here's a quick HOWTO on how to subscribe your Oracle Linux system to the newly added yum repositories on our public yum server, assuming that you just installed Oracle Linux from scratch, e.g. by using the installation media (ISO images) available from the Oracle Software Delivery Cloud You need to download the appropriate yum repository configuration file from the public yum server and install it in the yum repository directory. For Oracle Linux 6, the process would look as follows: as the root user, run the following command: [root@oraclelinux62 ~]# wget http://public-yum.oracle.com/public-yum-ol6.repo \ -P /etc/yum.repos.d/ --2012-03-23 00:18:25-- http://public-yum.oracle.com/public-yum-ol6.repo Resolving public-yum.oracle.com... 141.146.44.34 Connecting to public-yum.oracle.com|141.146.44.34|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1461 (1.4K) [text/plain] Saving to: “/etc/yum.repos.d/public-yum-ol6.repo” 100%[=================================================>] 1,461 --.-K/s in 0s 2012-03-23 00:18:26 (37.1 MB/s) - “/etc/yum.repos.d/public-yum-ol6.repo” saved [1461/1461] For Oracle Linux 5, the file name would be public-yum-ol5.repo in the URL above instead. The "_latest" repositories that contain the errata packages are already enabled by default — you can simply pull in all available updates by running "yum update" next: [root@oraclelinux62 ~]# yum update Loaded plugins: refresh-packagekit, security ol6_latest | 1.1 kB 00:00 ol6_latest/primary | 15 MB 00:42 ol6_latest 14643/14643 Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package at.x86_64 0:3.1.10-43.el6 will be updated ---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be an update ---> Package autofs.x86_64 1:5.0.5-39.el6 will be updated ---> Package autofs.x86_64 1:5.0.5-39.el6_2.1 will be an update ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package cvs.x86_64 0:1.11.23-11.el6_0.1 will be updated ---> Package cvs.x86_64 0:1.11.23-11.el6_2.1 will be an update [...] ---> Package yum.noarch 0:3.2.29-22.0.1.el6 will be updated ---> Package yum.noarch 0:3.2.29-22.0.2.el6_2.2 will be an update ---> Package yum-plugin-security.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 will be an update ---> Package yum-utils.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-utils.noarch 0:1.1.30-10.0.1.el6 will be an update --> Finished Dependency Resolution Dependencies Resolved ===================================================================================== Package Arch Version Repository Size ===================================================================================== Installing: kernel x86_64 2.6.32-220.7.1.el6 ol6_latest 24 M kernel-uek x86_64 2.6.32-300.11.1.el6uek ol6_latest 21 M kernel-uek-devel x86_64 2.6.32-300.11.1.el6uek ol6_latest 6.3 M Updating: at x86_64 3.1.10-43.el6_2.1 ol6_latest 60 k autofs x86_64 1:5.0.5-39.el6_2.1 ol6_latest 470 k bind-libs x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 839 k bind-utils x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 178 k cvs x86_64 1.11.23-11.el6_2.1 ol6_latest 711 k [...] 
xulrunner x86_64 10.0.3-1.0.1.el6_2 ol6_latest 12 M yelp x86_64 2.28.1-13.el6_2 ol6_latest 778 k yum noarch 3.2.29-22.0.2.el6_2.2 ol6_latest 987 k yum-plugin-security noarch 1.1.30-10.0.1.el6 ol6_latest 36 k yum-utils noarch 1.1.30-10.0.1.el6 ol6_latest 94 k Transaction Summary ===================================================================================== Install 3 Package(s) Upgrade 96 Package(s) Total download size: 173 M Is this ok [y/N]: y Downloading Packages: (1/99): at-3.1.10-43.el6_2.1.x86_64.rpm | 60 kB 00:00 (2/99): autofs-5.0.5-39.el6_2.1.x86_64.rpm | 470 kB 00:01 (3/99): bind-libs-9.7.3-8.P3.el6_2.2.x86_64.rpm | 839 kB 00:02 (4/99): bind-utils-9.7.3-8.P3.el6_2.2.x86_64.rpm | 178 kB 00:00 [...] (96/99): yelp-2.28.1-13.el6_2.x86_64.rpm | 778 kB 00:02 (97/99): yum-3.2.29-22.0.2.el6_2.2.noarch.rpm | 987 kB 00:03 (98/99): yum-plugin-security-1.1.30-10.0.1.el6.noarch.rpm | 36 kB 00:00 (99/99): yum-utils-1.1.30-10.0.1.el6.noarch.rpm | 94 kB 00:00 ------------------------------------------------------------------------------------- Total 306 kB/s | 173 MB 09:38 warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Importing GPG key 0xEC551F03: Userid: "Oracle OSS group (Open Source Software group) " From : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Updating : yum-3.2.29-22.0.2.el6_2.2.noarch 1/195 Updating : xorg-x11-server-common-1.10.4-6.el6_2.3.x86_64 2/195 Updating : kernel-uek-headers-2.6.32-300.11.1.el6uek.x86_64 3/195 Updating : 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 4/195 Updating : tzdata-java-2011n-2.el6.noarch 5/195 Updating : tzdata-2011n-2.el6.noarch 6/195 Updating : glibc-common-2.12-1.47.el6_2.9.x86_64 7/195 Updating : glibc-2.12-1.47.el6_2.9.x86_64 8/195 [...] Cleanup : kernel-firmware-2.6.32-220.el6.noarch 191/195 Cleanup : kernel-uek-firmware-2.6.32-300.3.1.el6uek.noarch 192/195 Cleanup : glibc-common-2.12-1.47.el6.x86_64 193/195 Cleanup : glibc-2.12-1.47.el6.x86_64 194/195 Cleanup : tzdata-2011l-4.el6.noarch 195/195 Installed: kernel.x86_64 0:2.6.32-220.7.1.el6 kernel-uek.x86_64 0:2.6.32-300.11.1.el6uek kernel-uek-devel.x86_64 0:2.6.32-300.11.1.el6uek Updated: at.x86_64 0:3.1.10-43.el6_2.1 autofs.x86_64 1:5.0.5-39.el6_2.1 bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 cvs.x86_64 0:1.11.23-11.el6_2.1 dhclient.x86_64 12:4.1.1-25.P1.el6_2.1 [...] xorg-x11-server-common.x86_64 0:1.10.4-6.el6_2.3 xulrunner.x86_64 0:10.0.3-1.0.1.el6_2 yelp.x86_64 0:2.28.1-13.el6_2 yum.noarch 0:3.2.29-22.0.2.el6_2.2 yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 yum-utils.noarch 0:1.1.30-10.0.1.el6 Complete! At this point, your system is fully up to date. As the kernel was updated as well, a reboot is the recommended next action. If you want to install the latest release of the Unbreakable Enterprise Kernel Release 2 as well, you need to edit the .repo file and enable the respective yum repository (e.g. "ol6_UEK_latest" for Oracle Linux 6 and "ol5_UEK_latest" for Oracle Linux 5) manually, by setting enabled to "1". The next yum update run will download and install the second release of the Unbreakable Enterprise Kernel, which will be enabled after the next reboot. -Lenz

    Read the article

  • PPTP connection disconnect

    - by Vladimir Franciz S. Blando
    My pptp connection wont stay connected, it will disconnect in less than a minute here are some relevant log entries May 31 13:32:31 localhost NetworkManager[931]: <info> Starting VPN service 'pptp'... May 31 13:32:31 localhost NetworkManager[931]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 15216 May 31 13:32:31 localhost NetworkManager[931]: <info> VPN service 'pptp' appeared; activating connections May 31 13:32:31 localhost NetworkManager[931]: <info> VPN plugin state changed: init (1) May 31 13:32:31 localhost NetworkManager[931]: <info> VPN plugin state changed: starting (3) May 31 13:32:31 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (Connect) reply received. May 31 13:32:31 localhost pppd[15221]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded. May 31 13:32:31 localhost pppd[15221]: pppd 2.4.5 started by root, uid 0 May 31 13:32:31 localhost pptp[15224]: nm-pptp-service-15216 log[main:pptp.c:314]: The synchronous pptp option is NOT activated May 31 13:32:31 localhost pppd[15221]: Using interface ppp0 May 31 13:32:31 localhost pppd[15221]: Connect: ppp0 <--> /dev/pts/5 May 31 13:32:31 localhost NetworkManager[931]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0) May 31 13:32:31 localhost NetworkManager[931]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found. May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request' May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply May 31 13:32:32 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established. May 31 13:32:33 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' May 31 13:32:34 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. May 31 13:32:34 localhost pptp[15235]: nm-pptp-service-15216 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1536). May 31 13:32:37 localhost pppd[15221]: CHAP authentication succeeded May 31 13:32:37 localhost kernel: [54007.078553] PPP MPPE Compression module registered May 31 13:32:40 localhost pppd[15221]: MPPE 128-bit stateless compression enabled May 31 13:32:42 localhost pppd[15221]: local IP address 10.100.0.52 May 31 13:32:42 localhost pppd[15221]: remote IP address 10.100.0.1 May 31 13:32:42 localhost pppd[15221]: primary DNS address 4.2.2.1 May 31 13:32:42 localhost pppd[15221]: secondary DNS address 255.255.255.255 May 31 13:32:42 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (IP Config Get) reply received. 
May 31 13:32:42 localhost NetworkManager[931]: <info> VPN Gateway: 103.28.219.2 May 31 13:32:42 localhost NetworkManager[931]: <info> Tunnel Device: ppp0 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Address: 10.100.0.52 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Prefix: 32 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 Point-to-Point Address: 10.100.0.1 May 31 13:32:42 localhost NetworkManager[931]: <info> Maximum Segment Size (MSS): 0 May 31 13:32:42 localhost NetworkManager[931]: <info> Forbid Default Route: no May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 DNS: 4.2.2.1 May 31 13:32:42 localhost NetworkManager[931]: <info> Internal IP4 DNS: 255.255.255.255 May 31 13:32:42 localhost NetworkManager[931]: <info> DNS Domain: '(none)' May 31 13:32:43 localhost dnsmasq[2127]: exiting on receipt of SIGTERM May 31 13:32:43 localhost NetworkManager[931]: <info> DNS: starting dnsmasq... May 31 13:32:43 localhost NetworkManager[931]: <info> (ppp0): writing resolv.conf to /sbin/resolvconf May 31 13:32:43 localhost dnsmasq[15290]: error at line 2 of /var/run/nm-dns-dnsmasq.conf May 31 13:32:43 localhost dnsmasq[15290]: FAILED to start up May 31 13:32:43 localhost NetworkManager[931]: <info> VPN connection 'Dynalabs' (IP Config Get) complete. May 31 13:32:43 localhost NetworkManager[931]: <info> Policy set 'Dynalabs' (ppp0) as default for IPv4 routing and DNS. May 31 13:32:43 localhost NetworkManager[931]: <info> VPN plugin state changed: started (4) May 31 13:32:43 localhost NetworkManager[931]: <warn> dnsmasq exited with error: Configuration problem (1) May 31 13:32:43 localhost NetworkManager[931]: <info> (ppp0): writing resolv.conf to /sbin/resolvconf May 31 13:32:43 localhost dbus[872]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) May 31 13:32:43 localhost dbus[872]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' May 31 13:33:00 localhost ntpdate[15370]: step time server 91.189.94.4 offset -1.110301 sec May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd6d6 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x93aa May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xcc83 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2031 May 31 13:33:21 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x13d4 May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x5b11 May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x414b May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2f5f May 31 13:33:22 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe9ff May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8e20 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8f0 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf166 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x36e6 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xdd19 May 31 13:33:23 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xda26 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xac5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 
0x53a5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x507e May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x1dc5 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf87b May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2f27 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd10c May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x66ef May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xa294 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xb15 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x52a2 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd863 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8a96 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xde19 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x9763 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xb23 May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x83ca May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x964e May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe8ae May 31 13:33:24 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf614 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x9b1 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf086 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xbff4 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x66c5 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xe42 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf295 May 31 13:33:25 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x86fe May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x3bc1 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xbaad May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x88b5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xd7a May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x30d5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x2d8f May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x3933 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x8d42 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x4b4 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xa205 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x7cc5 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x1b6a May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0xf004 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x21b6 May 31 13:33:26 localhost pppd[15221]: Protocol-Reject for unsupported protocol 0x51eb

    Read the article

  • Using Open MQ as an Oracle CEP Event Source

    - by seth.white
    I helped an Oracle CEP customer recently who wanted to use Open MQ has an event source for their Oracle CEP application.  In this case, the Oracle CEP application was being used to provide monitoring for an electronic commerce website, however, the steps for configuring Open MQ are entirely independent of the application logic. I thought I would list the configuration steps in a blog post in case they might help others in the future. Note that although the Oracle CEP documentation states that only WebLogic and Tibco JMS are "officially" supported, any JMS implementation that provides a Java client should work with Oracle CEP. The first step is to add an adapter to the application's EPN. This can be done in the usual way, using the Eclipse IDE. The end result is something like the following bit of configuration in the application's Spring application context. Note that the provider attribute value of 'jms-inbound' specifies that the out-of-the-box JMS adapter is being used. <wlevs:adapter id="helloworldAdapter" provider="jms-inbound"> </wlevs:adapter>   Next, configure the inbound adapter so that it can connect to Open MQ in the Oracle CEP configuration file (config.xml). The snippet below provides an example of what this configuration should look like. The exact values specified for jndi-provider-url, jndi-factory, connection-jndi-name, destination-jndi-name elements will depend on your Open MQ configuration.  For example , if the name of your Open MQ topic destination is 'ElectronicCommerceTopic', then you would specify that as the destination-jndi-name.  The name of your Open MQ connection factory goes in the connection-jndi-name element. In my simple example, I also specify in event-type element so that the out-of-the-box JMS adapter will attempt to automatically convert incoming messages to events of type HelloWorldEvent. In a more complex application, one would configure a custom converter on the JMS adapter to convert from messages to events.  The Oracle CEP 11.1.3 documentation describes how to do this.   <jms-adapter> <name>helloworldAdapter</name> <event-type>HelloWorldEvent</event-type> <jndi-provider-url>file:///C:/Temp</jndi-provider-url> <jndi-factory>com.sun.jndi.fscontext.RefFSContextFactory</jndi-factory> <connection-jndi-name>YourJMSConnectionFactoryName</connection-jndi-name> <destination-jndi-name>YourJMSDestinationName</destination-jndi-name> </jms-adapter>   Finally, one needs to package the client-side Open MQ jars so that the classes that they contain are available to the Oracle CEP runtime. The recommended way for doing this in the Oracle CEP 11.1.3 release is to package the classes as a library module or simply place them in the application bundle.  The advantage of deploying the classes as a library module is that they are available to any application that wants to connect to Open MQ. In my case, I packaged the classes in my application bundle. A best practice when you want to include additional jars in your application bundle is to create a 'lib' directory in your Eclipse project and then copy the required jars into that directory.  Then, use the support that Eclipse provides to add the jars to the bundle classpath (which makes the classes part of your application in the same way that regular application classes are), and export all of the classes from your application bundle so that they are available to the Oracle CEP server runtime.  The screenshot below Illustrates how this is done in Eclipse.  
The bundle classpath contains two Open MQ jars and all packages in the jars are exported. Finally, import the javax.jms and javax.naming packages into the application module as these are needed by the Open MQ classes. The screenshot below shows the complete list of package imports for my sample application. Once you have completed these steps, you should be able to build and deploy your application and begin receiving inbound messages from Open MQ. Technorati Tags: CEP,JMS,Adapter,Open MQ,Eclipse

    Read the article

  • How to Run Low-Cost Minecraft on a Raspberry Pi for Block Building on the Cheap

    - by Jason Fitzpatrick
    We’ve shown you how to run your own blocktastic personal Minecraft server on a Windows/OSX box, but what if you crave something lighter weight, more energy efficient, and always ready for your friends? Read on as we turn a tiny Raspberry Pi machine into a low-cost Minecraft server you can leave on 24/7 for around a penny a day. Why Do I Want to Do This? There’s two aspects to this tutorial, running your own Minecraft server and specifically running that Minecraft server on a Raspberry Pi. Why would you want to run your own Minecraft server? It’s a really great way to extend and build upon the Minecraft play experience. You can leave the server running when you’re not playing so friends and family can join and continue building your world. You can mess around with game variables and introduce mods in a way that isn’t possible when you’re playing the stand-alone game. It also gives you the kind of control over your multiplayer experience that using public servers doesn’t, without incurring the cost of hosting a private server on a remote host. While running a Minecraft server on its own is appealing enough to a dedicated Minecraft fan, running it on the Raspberry Pi is even more appealing. The tiny little Pi uses so little resources that you can leave your Minecraft server running 24/7 for a couple bucks a year. Aside from the initial cost outlay of the Pi, an SD card, and a little bit of time setting it up, you’ll have an always-on Minecraft server at a monthly cost of around one gumball. What Do I Need? For this tutorial you’ll need a mix of hardware and software tools; aside from the actual Raspberry Pi and SD card, everything is free. 1 Raspberry Pi (preferably a 512MB model) 1 4GB+ SD card This tutorial assumes that you have already familiarized yourself with the Raspberry Pi and have installed a copy of the Debian-derivative Raspbian on the device. If you have not got your Pi up and running yet, don’t worry! Check out our guide, The HTG Guide to Getting Started with Raspberry Pi, to get up to speed. Optimizing Raspbian for the Minecraft Server Unlike other builds we’ve shared where you can layer multiple projects over one another (e.g. the Pi is more than powerful enough to serve as a weather/email indicator and a Google Cloud Print server at the same time) running a Minecraft server is a pretty intense operation for the little Pi and we’d strongly recommend dedicating the entire Pi to the process. Minecraft seems like a simple game, with all its blocky-ness and what not, but it’s actually a pretty complex game beneath the simple skin and required a lot of processing power. As such, we’re going to tweak the configuration file and other settings to optimize Rasbian for the job. The first thing you’ll need to do is dig into the Raspi-Config application to make a few minor changes. If you’re installing Raspbian fresh, wait for the last step (which is the Raspi-Config), if you already installed it, head to the terminal and type in “sudo raspi-config” to launch it again. One of the first and most important things we need to attend to is cranking up the overclock setting. We need all the power we can get to make our Minecraft experience enjoyable. In Raspi-Config, select option number 7 “Overclock”. Be prepared for some stern warnings about overclocking, but rest easy knowing that overclocking is directly supported by the Raspberry Pi foundation and has been included in the configuration options since late 2012. Once you’re in the actual selection screen, select “Turbo 1000MhHz”. 
Again, you’ll be warned that the degree of overclocking you’ve selected carries risks (specifically, potential corruption of the SD card, but no risk of actual hardware damage). Click OK and wait for the device to reset. Next, make sure you’re set to boot to the command prompt, not the desktop. Select number 3 “Enable Boot to Desktop/Scratch”  and make sure “Console Text console” is selected. Back at the Raspi-Config menu, select number 8 “Advanced Options’. There are two critical changes we need to make in here and one option change. First, the critical changes. Select A3 “Memory Split”: Change the amount of memory available to the GPU to 16MB (down from the default 64MB). Our Minecraft server is going to ruin in a GUI-less environment; there’s no reason to allocate any more than the bare minimum to the GPU. After selecting the GPU memory, you’ll be returned to the main menu. Select “Advanced Options” again and then select A4 “SSH”. Within the sub-menu, enable SSH. There is very little reason to keep this Pi connected to a monitor and keyboard, by enabling SSH we can remotely access the machine from anywhere on the network. Finally (and optionally) return again to the “Advanced Options” menu and select A2 “Hostname”. Here you can change your hostname from “raspberrypi” to a more fitting Minecraft name. We opted for the highly creative hostname “minecraft”, but feel free to spice it up a bit with whatever you feel like: creepertown, minecraft4life, or miner-box are all great minecraft server names. That’s it for the Raspbian configuration tab down to the bottom of the main screen and select “Finish” to reboot. After rebooting you can now SSH into your terminal, or continue working from the keyboard hooked up to your Pi (we strongly recommend switching over to SSH as it allows you to easily cut and paste the commands). If you’ve never used SSH before, check out how to use PuTTY with your Pi here. Installing Java on the Pi The Minecraft server runs on Java, so the first thing we need to do on our freshly configured Pi is install it. Log into your Pi via SSH and then, at the command prompt, enter the following command to make a directory for the installation: sudo mkdir /java/ Now we need to download the newest version of Java. At the time of this publication the newest release is the OCT 2013 update and the link/filename we use will reflect that. Please check for a more current version of the Linux ARMv6/7 Java release on the Java download page and update the link/filename accordingly when following our instructions. At the command prompt, enter the following command: sudo wget --no-check-certificate http://www.java.net/download/jdk8/archive/b111/binaries/jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz Once the download has finished successfully, enter the following command: sudo tar zxvf jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz -C /opt/ Fun fact: the /opt/ directory name scheme is a remnant of early Unix design wherein the /opt/ directory was for “optional” software installed after the main operating system; it was the /Program Files/ of the Unix world. 
After the file has finished extracting, enter: sudo /opt/jdk1.8.0/bin/java -version This command will return the version number of your new Java installation like so: java version "1.8.0-ea" Java(TM) SE Runtime Environment (build 1.8.0-ea-b111) Java HotSpot(TM) Client VM (build 25.0-b53, mixed mode) If you don’t see the above printout (or a variation thereof if you’re using a newer version of Java), try to extract the archive again. If you do see the readout, enter the following command to tidy up after yourself: sudo rm jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz At this point Java is installed and we’re ready to move onto installing our Minecraft server! Installing and Configuring the Minecraft Server Now that we have a foundation for our Minecraft server, it’s time to install the part that matter. We’ll be using SpigotMC a lightweight and stable Minecraft server build that works wonderfully on the Pi. First, grab a copy of the the code with the following command: sudo wget http://ci.md-5.net/job/Spigot/lastSuccessfulBuild/artifact/Spigot-Server/target/spigot.jar This link should remain stable over time, as it points directly to the most current stable release of Spigot, but if you have any issues you can always reference the SpigotMC download page here. After the download finishes successfully, enter the following command: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Note: if you’re running the command on a 256MB Pi change the 256 and 496 in the above command to 128 and 256, respectively. Your server will launch and a flurry of on-screen activity will follow. Be prepared to wait around 3-6 minutes or so for the process of setting up the server and generating the map to finish. Future startups will take much less time, around 20-30 seconds. Note: If at any point during the configuration or play process things get really weird (e.g. your new Minecraft server freaks out and starts spawning you in the Nether and killing you instantly), use the “stop” command at the command prompt to gracefully shutdown the server and let you restart and troubleshoot it. After the process has finished, head over to the computer you normally play Minecraft on, fire it up, and click on Multiplayer. You should see your server: If your world doesn’t popup immediately during the network scan, hit the Add button and manually enter the address of your Pi. Once you connect to the server, you’ll see the status change in the server status window: According to the server, we’re in game. According to the actual Minecraft app, we’re also in game but it’s the middle of the night in survival mode: Boo! Spawning in the dead of night, weaponless and without shelter is no way to start things. No worries though, we need to do some more configuration; no time to sit around and get shot at by skeletons. Besides, if you try and play it without some configuration tweaks first, you’ll likely find it quite unstable. We’re just here to confirm the server is up, running, and accepting incoming connections. Once we’ve confirmed the server is running and connectable (albeit not very playable yet), it’s time to shut down the server. Via the server console, enter the command “stop” to shut everything down. 
When you’re returned to the command prompt, enter the following command: sudo nano server.properties When the configuration file opens up, make the following changes (or just cut and paste our config file minus the first two lines with the name and date stamp): #Minecraft server properties #Thu Oct 17 22:53:51 UTC 2013 generator-settings= #Default is true, toggle to false allow-nether=false level-name=world enable-query=false allow-flight=false server-port=25565 level-type=DEFAULT enable-rcon=false force-gamemode=false level-seed= server-ip= max-build-height=256 spawn-npcs=true white-list=false spawn-animals=true texture-pack= snooper-enabled=true hardcore=false online-mode=true pvp=true difficulty=1 player-idle-timeout=0 gamemode=0 #Default 20; you only need to lower this if you're running #a public server and worried about loads. max-players=20 spawn-monsters=true #Default is 10, 3-5 ideal for Pi view-distance=5 generate-structures=true spawn-protection=16 motd=A Minecraft Server In the server status window, seen through your SSH connection to the Pi, enter the following command to give yourself operator status on your Minecraft server (so that you can use more powerful commands in game, without always returning to the server status window). op [your minecraft nickname] At this point things are looking better but we still have a little tweaking to do before the server is really enjoyable. To that end, let’s install some plugins. The first plugin, and the one you should install above all others, is NoSpawnChunks. To install the plugin, first visit the NoSpawnChunks webpage and grab the download link for the most current version. As of this writing the current release is v0.3. Back at the command prompt (the command prompt of your Pi, not the server console; if your server is still active, shut it down) enter the following commands: cd /home/pi/plugins sudo wget http://dev.bukkit.org/media/files/586/974/NoSpawnChunks.jar Next, visit the ClearLag plugin page, and grab the latest link (as of this tutorial, it’s v2.6.0). Enter the following at the command prompt: sudo wget http://dev.bukkit.org/media/files/743/213/Clearlag.jar Because the files aren’t compressed in a .ZIP or similar container, that’s all there is to it: the plugins are parked in the plugin directory. (Remember this for future plugin downloads: the file needs to be whateverplugin.jar, so if it’s compressed you need to uncompress it in the plugin directory.) Restart the server: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Be prepared for a slightly longer startup time (closer to the 3-6 minutes and much longer than the 30 seconds you just experienced) as the plugins affect the world map and need a minute to massage everything. After the spawn process finishes, type the following at the server console: plugins This lists all the plugins currently active on the server. You should see something like this: If the plugins aren’t loaded, you may need to stop and restart the server. After confirming your plugins are loaded, go ahead and join the game. You should notice significantly snappier play. In addition, you’ll get occasional messages from the plugins indicating they are active, as seen below: At this point Java is installed, the server is installed, and we’ve tweaked our settings for the Pi. It’s time to start building with friends!
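One last optional tweak: if you start the server over SSH as described above, it will stop when you close the SSH session. A common workaround (assuming you don’t mind installing one extra package) is to run the server console inside a screen session so you can detach and reattach at will; the session name “minecraft” below is arbitrary:
sudo apt-get install screen
screen -S minecraft      # open a named session, start the server inside it, then detach with Ctrl-A followed by D
screen -r minecraft      # reattach to the running server console later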

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e.
the root file system; does not contain any content not related to the zone; and can only be mounted by one Solaris instance at a time. Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shut down the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, log in and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
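If you would rather script the VirtualBox side than click through the GUI, the same create-and-attach can be done from the host with VBoxManage. This is only a rough sketch: the guest name sysA, the controller name "SATA", the port number and the 2048 MB size are assumptions that you must match to your own VM settings.
VBoxManage createhd --filename zoss-shared.vdi --size 2048
VBoxManage storageattach sysA --storagectl "SATA" --port 2 --device 0 --type hdd --medium zoss-shared.vdi
That attaches the shared VDI to the first guest; the second guest is attached in the same way, as described next.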
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format> fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format> label ... Specify Label type[1]: 1 Ready to label disk, continue? y format> quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1> create create: Using system default template 'SYSdefault' zonecfg:zone1> set zonename=zone1 zonecfg:zone1> set zonepath=/zones/zone1 zonecfg:zone1> add rootzpool zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool> end zonecfg:zone1> exit root@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
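As a concrete starting point for the first two items in the Next Steps list above, a two-disk rootzpool is configured simply by adding a second storage resource. The sketch below follows the pattern of Step 3; the zonename and both device names are placeholders that you must replace with values valid on your own systems.
root@sysA:~# zonecfg -z zone2
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> add rootzpool
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t4d0
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t5d0
zonecfg:zone2:rootzpool> end
zonecfg:zone2> exit
When a zone configured this way is installed, zoneadm builds its zpool as a two-way mirror across the two devices, as noted above.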

    Read the article

  • CodePlex Daily Summary for Saturday, November 17, 2012

    CodePlex Daily Summary for Saturday, November 17, 2012Popular ReleasesPaint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...Water Entity for SunBurn: Sunburn Water Entity For 2.0.1.8 (Deffered Only): Sunburn water entity for Sunburn 2.0.1.8 for deffered rendering only, forward water is not working yet. You need to download water normal maps, from Sunburn Reflection/Refraction example from Here.CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException [FIX] Fixed bug in rule CrmOfflineAccessStateRule which had incorrect State attribute name [FIX] Fixed bug in rule EntityPropertyRule which was missing PropertyValue attribute [FIX] Current connection information was not displayed in status bar while refreshing list of entitiesSuper Metroid Randomizer: Super Metroid Randomizer v5: v5 -Added command line functionality for automation purposes. -Implented Krankdud's change to randomize the Etecoon's item. NOTE: this version will not accept seeds from a previous version. The seed format has changed by necessity. v4 -Started putting version numbers at the top of the form. -Added a warning when suitless Maridia is required in a parsed seed. v3 -Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. -Files can now be saved...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8. Download Contents Debug and Release Assemblies Samples Readme.txt License.txt Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...DirectX Tool Kit: November 15, 2012: November 15, 2012 Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838 Cleaned up warning level 4 warningsDotNetNuke® Community Edition CMS: 06.02.05: Major Highlights Updated the system so that it supports nested folders in the App_Code folder Updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code Fixed issue that stopped users from specifying Link URLs that open on a new window Security FixesFixed issue in the Member Directory module that could show members to non authenticated users Fixed issue in the Lists modul...xUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner) This release provides a test runner plugin for Resharper 7.1 RTM and 6.1.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.0 and 6.1.1). 
Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...MVC Bootstrap: MVC Boostrap 0.5.6: A small demo site, based on the default ASP.NET MVC 3 project template, showing off some of the features of MVC Bootstrap. Added features to the Membership provider, a "lock out" feature to help fight brute force hacking of accounts. After a set number of log in attempts, the account is locked for a set time. If you download and use this project, please give some feedback, good or bad!Home Access Plus+: v8.4: This release only contains fixes for the 97576 release, you can download the v8.3 release files which aren't in this release from 97576 Changes: Fixed: Setup.aspx wrong jquery reference Fixed: Issue with loading the user's photo Changed: The JSON Urls to use a number of a date rather than a string Added: Code to hopefully, finally, fix the AD Browser not working some times File Changes: ~/bin/hap.ad.dll ~/bin/hap.web.dll ~/bin/hap.web.configuration.dll ~/bin/hap.web.livetiles.dl...OnTopReplica: Release 3.4: Update to the 3 version with major fixes and improvements. Compatible with Windows 8. Now runs (and requires) .NET Framework v.4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed size controls, like video players). Improved window seeking when restoring cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...DotSpatial: DotSpatial 1.4: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes Docum...AcDown?????: AcDown????? v4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...????: ???? 1.0: ????Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office ??? 
?????、Unicode IVS?????????????????Unicode IVS???????????????。??、??????????????、?????????????????????????????。Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.74: fix for issue #18836 - sometimes throws null-reference errors in ActivationObject.AnalyzeScope method. add back the Context object's 8-parameter constructor, since someone has code that's using it. throw a low-pri warning if an expression statement is == or ===; warn that the developer may have meant an assignment (=). if window.XXXX or window"XXXX" is encountered, add XXXX (as long as it's a valid JavaScript identifier) to the known globals so subsequent references to XXXX won't throw ...???????: Monitor 2012-11-11: This is the first releasehttpclient?????????: httpclient??????? 1.0: httpclient??????? (1)?????????? (2)????????? (3)??2012-11-06??,???????。VidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...New Projects40Fingers Page Language module for DotNetNuke: DotNetNuke module that allows you to overrule the language a specific page, without using any other localization solutions. Brought to you by 40FINGERSAnonymous Interfaces (by Stephen Cleary): A fluent API for dynamically creating an anonymous type implementing a specific interface.Asynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.BizTalkWCFOperationPromoteEncoder: If you seen the BizTalk error: " An action mapping was defined but BTS.Operation was not found in the Message context ", When you use the BizTalk WCF-Customer, bLaugh Plugin for Windows Live Writer: The bLaugh Plugin for Windows Live Writer provides a quick and easy way to find and insert bLaugh (www.blaugh.com) comics into a blog post.CaptureWebPage: Code Between The Lines @ http://www.codebetweenthelines.com Capture an Entire Web Page in a C# Console Application Sample application that fixes most problems.CF-Soft: this is my workspace to study C# programming.Cloud Backup Scheduler: Desktop utility to auto schedule online backups with popular cloud services. Cloud Giraffe: Cloud Giraffe is a library which provides easy collection and analysis of Windows Azure Diagnostics data.dbscript: dbscript is a simple scripting language for DBAs, devs, and others who have need to routinely deliver the results of SQL queries in Excel spreadsheet form.Drill: Drill is the Dependency Resolution and Instance Location Layer, an abstraction to DI containers, service locators and the like facilitating loose coupling.Drzewa decyzyjne WPF: Praca inzE-Commerce Gado: E commerce voltado para o mercado bovinoELearningTutorial: A list of tutorialsElmas Kitap: Kitapliginizi düzenleyebileceginiz bir program.Epi Info iOS Companion: iPad and iPhone companion for Epi Info 7.FreeBlokus: A simple version of Blokus Duo.g1: test GGCStatus: Global Gift Card by We Solve IT. 
Server Status ApplicationGiveGraph: Social Managed Resources Distribution SystemGlobal String Formatter: The Global String Formatter library allows developers to deal with conditional string formatting in an elegant fashion. Developers specify a predicate and a corresponding string output function for each case of the formatting. The library plays well with DI frameworks.HTWKAidStation: The HTWKAidStation shows schedules and the menu of local mensae in Leipzig, Germany.JsViewEngine - a ASP.NET View Engine using JsRender on both client and server: An ASP.NET View Engine that specializes in sharing partial views between client and server.Kexp.ShoutcastRunner: ShoutcastRunner is a quick and simple Windows service that runs the SHOUTcast DNAS 1.x mp3 server and provides some extra features. MDrive - Controlling Precision Electric Motors: This project demonstrates how to control MDrive(r) precision electric motors from Schneider Electric.MyNet Project: MyNet is an undergraduate project developed in the context of the Cloud Data Management cours at Grenoble INP.Passwords Thief: Sometimes you need to see what is behind asteriks in password edit box. It will help you to resolve this problem. Pet Shop Web: Projeto para gerenciamento de Pet Shop Desenvolvido em ASP.NET com Framework 4.0primeiros passos: aRCVersion: A smart tool to modify version information in RC file.recycling20: Recycling 2.0replaceSID: replaceSID replaces an SID in a SDDL fileshmapcha: Cool toolSMC (Social Media Connector): Social Media APIs are developed to provide Join-In Game providers to access Social Media Portal. Textline Processor: Textline processor is a utility to execute C# codes dynamically on each line of texts, and get output to replace lines in the text. treadmill project: Treadmill ProjectTriathlon Checklist: Triathlon Checklist is a Windows Phone application for triathletes. Download it for free: http://bit.ly/TriathlonChecklistwarhamer40kListBuilder: projet personnel de geston de liste d'armée pour le jeux warhammer 40.000.Wonder: WonderWP7GBAEmulator: A C# GBA Emulator For Windows Phone 7WPF Message Box: WPF Message Box is a simple and free message box for WPF using MVVM pattern. Image, buttons, message, and caption can be set.????: ??????·???????????????

    Read the article

  • Sharepoint Error: [COMException (0x80004005): Cannot complete this action.

    - by ifunky
    Hi, I've created a site basic definition that uses different master and default pages. Everything works quite well except for whenever I create a new site based on the definition I receive the following error when browsing to the new site: [COMException (0x80004005): Cannot complete this action. Please try again.] Please try again.] Microsoft.SharePoint.Library.SPRequestInternalClass.GetFileAndMetaInfo(String bstrUrl, Byte bPageView, Byte bPageMode, Byte bGetBuildDependencySet, String bstrCurrentFolderUrl, Boolean& pbCanCustomizePages, Boolean& pbCanPersonalizeWebParts, Boolean& pbCanAddDeleteWebParts, Boolean& pbGhostedDocument, Boolean& pbDefaultToPersonal, String& pbstrSiteRoot, Guid& pgSiteId, UInt32& pdwVersion, String& pbstrTimeLastModified, String& pbstrContent, Byte& pVerGhostedSetupPath, UInt32& pdwPartCount, Object& pvarMetaData, Object& pvarMultipleMeetingDoclibRootFolders, String& pbstrRedirectUrl, Boolean& pbObjectIsList, Guid& pgListId, UInt32& pdwItemId, Int64& pllListFlags, Boolean& pbAccessDenied, Guid& pgDocId, Byte& piLevel, UInt64& ppermMask, Object& pvarBuildDependencySet, UInt32& pdwNumBuildDependencies, Object& pvarBuildDependencies, String& pbstrFolderUrl, String& pbstrContentTypeOrder) +0 Microsoft.SharePoint.Library.SPRequest.GetFileAndMetaInfo(String bstrUrl, Byte bPageView, Byte bPageMode, Byte bGetBuildDependencySet, String bstrCurrentFolderUrl, Boolean& pbCanCustomizePages, Boolean& pbCanPersonalizeWebParts, Boolean& pbCanAddDeleteWebParts, Boolean& pbGhostedDocument, Boolean& pbDefaultToPersonal, String& pbstrSiteRoot, Guid& pgSiteId, UInt32& pdwVersion, String& pbstrTimeLastModified, String& pbstrContent, Byte& pVerGhostedSetupPath, UInt32& pdwPartCount, Object& pvarMetaData, Object& pvarMultipleMeetingDoclibRootFolders, String& pbstrRedirectUrl, Boolean& pbObjectIsList, Guid& pgListId, UInt32& pdwItemId, Int64& pllListFlags, Boolean& pbAccessDenied, Guid& pgDocId, Byte& piLevel, UInt64& ppermMask, Object& pvarBuildDependencySet, UInt32& pdwNumBuildDependencies, Object& pvarBuildDependencies, String& pbstrFolderUrl, String& pbstrContentTypeOrder) +219 [SPException: Cannot complete this action. Please try again.] I'm able to work around this by checking out the new master page and checking it back in again and after doing so there are no further issues at all. Any ideas to what could cause this? 
Thanks Dan ONET.XML module section: <Modules> <Module Name="CustomMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE"> <File Url="Shoes.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" /> </Module> <Module Name="Default" List="116" Url=""> <File Url="default.aspx" Name="default.aspx" NavBarHome="True" IgnoreIfAlreadyExists="FALSE"> <AllUsersWebPart WebPartZoneID="Left" WebPartOrder="1"> &lt;webParts&gt;&lt;webPart xmlns="http://schemas.microsoft.com/WebPart/v3"&gt;&lt;metaData&gt;&lt;type name="BCM.SharePoint.Shoes.ShoesComponents.FooterLinks, BCM.SharePoint.Shoes.ShoesComponents, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2881713f39360b71" /&gt;&lt;importErrorMessage&gt;Cannot import this Web Part.&lt;/importErrorMessage&gt;&lt;/metaData&gt;&lt;data&gt;&lt;properties&gt;&lt;property name="AllowClose" type="bool"&gt;True&lt;/property&gt;&lt;property name="Width" type="string" /&gt;&lt;property name="MyProperty" type="string"&gt;Hello SharePoint&lt;/property&gt;&lt;property name="AllowMinimize" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowConnect" type="bool"&gt;True&lt;/property&gt;&lt;property name="ChromeType" type="chrometype"&gt;None&lt;/property&gt;&lt;property name="TitleIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_FooterLinks.gif&lt;/property&gt;&lt;property name="Description" type="string"&gt;FooterLinks Description&lt;/property&gt;&lt;property name="Hidden" type="bool"&gt;False&lt;/property&gt;&lt;property name="TitleUrl" type="string" /&gt;&lt;property name="AllowEdit" type="bool"&gt;True&lt;/property&gt;&lt;property name="Height" type="string" /&gt;&lt;property name="MissingAssembly" type="string"&gt;Cannot import this Web Part.&lt;/property&gt;&lt;property name="HelpUrl" type="string" /&gt;&lt;property name="Title" type="string" /&gt;&lt;property name="CatalogIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_FooterLinks.gif&lt;/property&gt;&lt;property name="Direction" type="direction"&gt;NotSet&lt;/property&gt;&lt;property name="ChromeState" type="chromestate"&gt;Normal&lt;/property&gt;&lt;property name="AllowZoneChange" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowHide" type="bool"&gt;True&lt;/property&gt;&lt;property name="HelpMode" type="helpmode"&gt;Modeless&lt;/property&gt;&lt;property name="ExportMode" type="exportmode"&gt;All&lt;/property&gt;&lt;/properties&gt;&lt;/data&gt;&lt;/webPart&gt;&lt;/webParts&gt; </AllUsersWebPart> <AllUsersWebPart WebPartZoneID="Left" WebPartOrder="0"> &lt;webParts&gt;&lt;webPart xmlns="http://schemas.microsoft.com/WebPart/v3"&gt;&lt;metaData&gt;&lt;type name="BCM.SharePoint.Shoes.ShoesComponents.SubFooterLinks, BCM.SharePoint.Shoes.ShoesComponents, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2881713f39360b71" /&gt;&lt;importErrorMessage&gt;Cannot import this Web Part.&lt;/importErrorMessage&gt;&lt;/metaData&gt;&lt;data&gt;&lt;properties&gt;&lt;property name="AllowClose" type="bool"&gt;True&lt;/property&gt;&lt;property name="Width" type="string" /&gt;&lt;property name="MyProperty" type="string"&gt;Hello SharePoint&lt;/property&gt;&lt;property name="AllowMinimize" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowConnect" type="bool"&gt;True&lt;/property&gt;&lt;property name="ChromeType" type="chrometype"&gt;None&lt;/property&gt;&lt;property name="TitleIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_SubFooterLinks.gif&lt;/property&gt;&lt;property name="Description" type="string"&gt;Shoes 
home page links (under the hero image)&lt;/property&gt;&lt;property name="Hidden" type="bool"&gt;False&lt;/property&gt;&lt;property name="TitleUrl" type="string" /&gt;&lt;property name="AllowEdit" type="bool"&gt;True&lt;/property&gt;&lt;property name="Height" type="string" /&gt;&lt;property name="MissingAssembly" type="string"&gt;Cannot import this Web Part.&lt;/property&gt;&lt;property name="HelpUrl" type="string" /&gt;&lt;property name="Title" type="string" /&gt;&lt;property name="CatalogIconImageUrl" type="string"&gt;/_layouts/images/BCM_SharePoint_Shoes/wp_SubFooterLinks.gif&lt;/property&gt;&lt;property name="Direction" type="direction"&gt;NotSet&lt;/property&gt;&lt;property name="ChromeState" type="chromestate"&gt;Normal&lt;/property&gt;&lt;property name="AllowZoneChange" type="bool"&gt;True&lt;/property&gt;&lt;property name="AllowHide" type="bool"&gt;True&lt;/property&gt;&lt;property name="HelpMode" type="helpmode"&gt;Modeless&lt;/property&gt;&lt;property name="ExportMode" type="exportmode"&gt;All&lt;/property&gt;&lt;/properties&gt;&lt;/data&gt;&lt;/webPart&gt;&lt;/webParts&gt; </AllUsersWebPart> <NavBarPage Name="$Resources:core,nav_Home;" ID="1002" Position="Start" /> <NavBarPage Name="$Resources:core,nav_Home;" ID="0" Position="Start" /> </File> </Module> </Modules> LOG OUTPUT: 05/21/2010 12:22:55.11 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72nz Medium Videntityinfo::isFreshToken reported failure. 05/21/2010 12:22:55.19 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yv Medium Creating default lists 05/21/2010 12:22:55.19 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72lp Medium Creating directory Lists 05/21/2010 12:22:55.26 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yf Medium Creating list "Master Page Gallery" in web "http://mmm-dev-ll/sites/Shoes/test" at URL "_catalogs/masterpage", (setuppath: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\global\lists\mplib") 05/21/2010 12:22:55.28 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88y1 Medium No document templates uploaded for list "Master Page Gallery" -- none found for list template "100". 05/21/2010 12:22:55.28 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72kc Medium Failed to find generic XML file at "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\xml\onet.xml", falling back to global site definition. 05/21/2010 12:22:56.22 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yz Medium Creating default modules at URL "http://mmm-dev-ll/sites/Shoes/test" 05/21/2010 12:22:56.22 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8e27 Medium Ensuring module folder _catalogs/masterpage 05/21/2010 12:22:56.89 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72h7 Medium Applying template "SubSite#1" to web at URL "http://mmm-dev-ll/sites/Shoes/test". 
05/21/2010 12:22:57.09 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yy Medium Activating web-scoped features for template "SubSite#1" at URL "http://mmm-dev-ll/sites/Shoes/test" 05/21/2010 12:22:57.12 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1c Medium Preparing 20 features for activation 05/21/2010 12:22:57.14 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1d Medium Feature Activation: Batch Activating Features at URL http://mmm-dev-ll/sites/Shoes/test 'AnnouncementsList' (ID: '00bfea71-d1ce-42de-9c63-a44004ce0104'), 'ContactsList' (ID: '00bfea71-7e6d-4186-9ba8-c047ac750105'), 'CustomList' (ID: '00bfea71-de22-43b2-a848-c05709900100'), 'DataSourceLibrary' (ID: '00bfea71-f381-423d-b9d1-da7a54c50110'), 'DiscussionsList' (ID: '00bfea71-6a49-43fa-b535-d15c05500108'), 'DocumentLibrary' (ID: '00bfea71-e717-4e80-aa17-d0c71b360101'), 'EventsList' (ID: '00bfea71-ec85-4903-972d-ebe475780106'), 'GanttTasksList' (ID: '00bfea71-513d-4ca0-96c2-6a47775c0119'), 'GridList' (ID: '00bfea71-3a1d-41d3-a0ee-651d11570120'), 'IssuesList' (ID: '00bfea71-5932-4f9c-ad71-1557e5751100'), 'LinksList' (ID: '00bfea71-2062-426c-90bf-714c59600103'), 'NoCodeWorkflowLibrary' (ID: '00bfe... 05/21/2010 12:22:57.14* w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1d Medium ...a71-f600-43f6-a895-40c0de7b0117'), 'PictureLibrary' (ID: '00bfea71-52d4-45b3-b544-b1c71b620109'), 'SurveysList' (ID: '00bfea71-eb8a-40b1-80c7-506be7590102'), 'TasksList' (ID: '00bfea71-a83e-497e-9ba0-7a5c597d0107'), 'WebPageLibrary' (ID: '00bfea71-c796-4402-9f2f-0eb9a6e71b18'), 'workflowProcessList' (ID: '00bfea71-2d77-4a75-9fca-76516689e21a'), 'WorkflowHistoryList' (ID: '00bfea71-4ea5-48d4-a4ad-305cf7030140'), 'XmlFormLibrary' (ID: '00bfea71-1e1d-4562-b56a-f05371bb0115'), 'TeamCollab' (ID: '00bfea71-4ea5-48d4-a4ad-7ea5c011abe5'), . 05/21/2010 12:22:57.15 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1f Medium Feature Activation: Batch Activated Features at URL http://mmm-dev-ll/sites/Shoes/test 'AnnouncementsList' (ID: '00bfea71-d1ce-42de-9c63-a44004ce0104'), 'ContactsList' (ID: '00bfea71-7e6d-4186-9ba8-c047ac750105'), 'CustomList' (ID: '00bfea71-de22-43b2-a848-c05709900100'), 'DataSourceLibrary' (ID: '00bfea71-f381-423d-b9d1-da7a54c50110'), 'DiscussionsList' (ID: '00bfea71-6a49-43fa-b535-d15c05500108'), 'DocumentLibrary' (ID: '00bfea71-e717-4e80-aa17-d0c71b360101'), 'EventsList' (ID: '00bfea71-ec85-4903-972d-ebe475780106'), 'GanttTasksList' (ID: '00bfea71-513d-4ca0-96c2-6a47775c0119'), 'GridList' (ID: '00bfea71-3a1d-41d3-a0ee-651d11570120'), 'IssuesList' (ID: '00bfea71-5932-4f9c-ad71-1557e5751100'), 'LinksList' (ID: '00bfea71-2062-426c-90bf-714c59600103'), 'NoCodeWorkflowLibrary' (ID: '00bfea... 05/21/2010 12:22:57.15* w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8l1f Medium ...71-f600-43f6-a895-40c0de7b0117'), 'PictureLibrary' (ID: '00bfea71-52d4-45b3-b544-b1c71b620109'), 'SurveysList' (ID: '00bfea71-eb8a-40b1-80c7-506be7590102'), 'TasksList' (ID: '00bfea71-a83e-497e-9ba0-7a5c597d0107'), 'WebPageLibrary' (ID: '00bfea71-c796-4402-9f2f-0eb9a6e71b18'), 'workflowProcessList' (ID: '00bfea71-2d77-4a75-9fca-76516689e21a'), 'WorkflowHistoryList' (ID: '00bfea71-4ea5-48d4-a4ad-305cf7030140'), 'XmlFormLibrary' (ID: '00bfea71-1e1d-4562-b56a-f05371bb0115'), 'TeamCollab' (ID: '00bfea71-4ea5-48d4-a4ad-7ea5c011abe5'), . 
05/21/2010 12:22:57.15 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 88jb Medium Feature Activation: Activating Feature 'RadEditorFeatureRichText' (ID: '747755cd-d060-4663-961c-9b0cc43724e9') at URL http://mmm-dev-ll/sites/Shoes/test. 05/21/2010 12:22:57.15 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 75fb Medium Calling 'FeatureActivated' method of SPFeatureReceiver for Feature 'RadEditorFeatureRichText' (ID: '747755cd-d060-4663-961c-9b0cc43724e9'). 05/21/2010 12:22:57.20 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 75f8 Medium Feature Activation: Feature 'RadEditorFeatureRichText' (ID: '747755cd-d060-4663-961c-9b0cc43724e9') was activated at URL http://mmm-dev-ll/sites/Shoes/test. 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yv Medium Creating default lists 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72lp Medium Creating directory Lists 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services Fields 88yz Medium Creating default modules at URL "http://mmm-dev-ll/sites/Shoes/test" 05/21/2010 12:22:57.51 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 8e27 Medium Ensuring module folder _catalogs/masterpage 05/21/2010 12:22:57.56 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72ix Medium Not enough information to determine a list for module "Default". Assuming no list for this module. 05/21/2010 12:22:57.87 w3wp.exe (0x1E40) 0x18A4 Windows SharePoint Services General 72h8 Medium Successfully applied template "SubSite#1" to web at URL "http://mmm-dev-ll/sites/Shoes/test". 05/21/2010 12:22:59.48 w3wp.exe (0x1E40) 0x0980 Windows SharePoint Services General 8e2s Medium Unknown SPRequest error occurred. More information: 0x80070057 05/21/2010 12:23:07.06 OWSTIMER.EXE (0x0884) 0x106C Office Server Setup and Upgrade 8u3j High Registry key value {SearchThrottled} was not found under registry hive {Software\Microsoft\Office Server\12.0}. Assuming search sku is not throttled. 05/21/2010 12:23:07.08 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 90gf Medium SQL: dbo.proc_MSS_PropagationGetQueryServers 05/21/2010 12:23:07.09 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 8wni High Resuming default catalog with reason 'GPR_PROPAGATION' for application 'SharedServices1'... 05/21/2010 12:23:07.11 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 8wnj High Resuming anchor text catalog with reason GPR_PROPAGATION' for application 'SharedServices1'... 05/21/2010 12:23:07.14 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 8dvl Medium Search application '3c6751cc-37b0-470a-bfa2-bfd0b5635fe1': Provision start addresses in default content source. 
05/21/2010 12:23:07.15 OWSTIMER.EXE (0x0884) 0x106C Search Server Common MS Search Administration 7hmh High exception in SearchUpgradeProvisioner Keyword Config System.InvalidOperationException: jobServerSearchServiceInstance is null at Microsoft.Office.Server.Search.Administration.SearchUpgradeProvisioner..ctor(SearchServiceInstance searchServiceInstance) at Microsoft.Office.Server.Search.Administration.OSSPrimaryGathererProject.ProvisionContentSources() 05/21/2010 12:23:29.19 OWSTIMER.EXE (0x0884) 0x0FFC SharePoint Portal Server Business Data 79bv High Initiating BDC Cache Invalidation Check in AppDomain 'DefaultDomain' 05/21/2010 12:23:29.19 OWSTIMER.EXE (0x0884) 0x0FFC SharePoint Portal Server Business Data 79bx High Completed BDC Cache Invalidation Check in AppDomain 'DefaultDomain' 05/21/2010 12:23:45.84 OWSTIMER.EXE (0x0884) 0x08A8 SharePoint Portal Server SSO 8inc Medium In SSOService::Synch(), sso database conn string: 05/21/2010 12:23:50.80 OWSTIMER.EXE (0x0884) 0x0F14 Excel Services Excel Services Administration 8tqi Medium ExcelServerSharedWebApplication.Synchronize: Starting synchronize for instance of Excel Services in SSP 'SharedServices1'. 05/21/2010 12:23:50.80 OWSTIMER.EXE (0x0884) 0x0F14 Excel Services Excel Services Administration 8tqj Medium ExcelServerSharedWebApplication.Synchronize: Successfully synchronized instance of Excel Services in SSP 'SharedServices1'. 05/21/2010 12:23:52.31 w3wp.exe (0x1E40) 0x0980 SharePoint Portal Server Runtime 8gp7 Medium Topology cache updated. (AppDomain: /LM/W3SVC/1963195510/Root-1-129188762904047141)

    Read the article

  • Warning flagged by the 'rkhunter'

    - by gkt.pro
    When I scanned my Ubuntu 10.04 with rkhunter, a rootkit hunter toolkit, it gave the following warnings. Is there anything here that I have to worry about? [23:06:19] /usr/sbin/adduser [ Warning ] [23:06:19] Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable [23:06:20] /usr/sbin/rsyslogd [ Warning ] [23:06:20] Warning: The file properties have changed: [23:06:22] /usr/bin/dpkg [ Warning ] [23:06:22] Warning: The file properties have changed: [23:06:22] /usr/bin/dpkg-query [ Warning ] [23:06:22] Warning: The file properties have changed: [23:06:24] /usr/bin/ldd [ Warning ] [23:06:24] Warning: The file properties have changed: [23:06:24] Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable [23:06:24] /usr/bin/logger [ Warning ] [23:06:24] Warning: The file properties have changed: [23:06:25] /usr/bin/mail [ Warning ] [23:06:25] Warning: The file '/usr/bin/mail' exists on the system, but it is not present in the rkhunter.dat file. [23:06:27] /usr/bin/sudo [ Warning ] [23:06:27] Warning: The file properties have changed: [23:06:29] /usr/bin/whereis [ Warning ] [23:06:29] Warning: The file properties have changed: [23:06:29] /usr/bin/lwp-request [ Warning ] [23:06:29] Warning: The command '/usr/bin/lwp-request' has been replaced by a script: /usr/bin/lwp-request: a /usr/bin/perl -w script text executable [23:06:29] /usr/bin/bsd-mailx [ Warning ] [23:06:29] Warning: The file '/usr/bin/bsd-mailx' exists on the system, but it is not present in the rkhunter.dat file. [23:06:30] /sbin/fsck [ Warning ] [23:06:30] Warning: The file properties have changed: [23:06:30] /sbin/ifdown [ Warning ] [23:06:30] Warning: The file properties have changed: [23:06:31] /sbin/ifup [ Warning ] [23:06:31] Warning: The file properties have changed: [23:06:34] /bin/dmesg [ Warning ] [23:06:34] Warning: The file properties have changed: [23:06:35] /bin/more [ Warning ] [23:06:35] Warning: The file properties have changed: [23:06:36] /bin/mount [ Warning ] [23:06:36] Warning: The file properties have changed: [23:06:37] /bin/which [ Warning ] [23:06:37] Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable [23:08:58] Checking /dev for suspicious file types [ Warning ] [23:08:58] Warning: Suspicious file types found in /dev: [23:08:58] Checking for hidden files and directories [ Warning ] [23:08:58] Warning: Hidden directory found: /etc/.java [23:08:58] Warning: Hidden directory found: /dev/.udev [23:08:58] Warning: Hidden directory found: /dev/.initramfs [23:09:01] Checking version of Exim MTA [ Warning ] [23:09:01] Warning: Application 'exim', version '4.71', is out of date, and possibly a security risk. [23:09:01] Checking version of GnuPG [ Warning ] [23:09:01] Warning: Application 'gpg', version '1.4.10', is out of date, and possibly a security risk. [23:09:01] Checking version of OpenSSL [ Warning ] [23:09:01] Warning: Application 'openssl', version '0.9.8k', is out of date, and possibly a security risk.
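    A follow-up in case it matters: if these warnings are just fallout from normal package updates, is the right response simply to re-baseline rkhunter along the following lines (these are the standard flags from the rkhunter man page), or should I verify the changed binaries some other way first?
    sudo rkhunter --update    # refresh rkhunter's own data files
    sudo rkhunter --propupd   # rebuild the file-properties baseline (only once the binaries are trusted)
    sudo rkhunter --check     # re-run the scan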

    Read the article

  • Understanding Linux SCSI queue depths

    - by Troels Arvin
    I'm experimenting with the effects of different SCSI queue depth values on a Dell server running CentOS Linux 5.4 (x86_64). The server has two QLogic QLE2560 FC HBAs connected via multipathing to a storage system. The storage system has allocated two LUNs to the server, each connected through four paths in an active-active-active-active round-robin configuration. All in all, the two LUNs exist as eight /dev/sdX devices, represented by two devices in /dev/mpath. I currently adjust the queue depth values in /etc/modprobe.conf and check the result (after rebooting) by looking in the seventh column of /proc/scsi/sg/devices. Two questions related to that: Is there a way to adjust queue depths without rebooting or unloading the qla2xxx kernel module? E.g., can I echo a new queue depth value into some /proc or /sys-like file to update the queue depth? If I set the queue depth to 128, is that 128 in total for all devices handled by the qla2xxx module, or 128 for each HBA (256 in total), or 128 for each of the eight /dev/sdX devices (1024 in total), or 128 for each of the two /dev/mpath/... devices (256 in total)? This is important for me to know so that my server doesn't flood the storage system, affecting other servers connected to it.
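    For question 1, what I am hoping for is something along the lines of the per-device sysfs attribute below (assuming it exists on this kernel and that qla2xxx actually honors a change made through it):
    cat /sys/block/sdb/device/queue_depth        # read the current depth for one of the eight /dev/sdX devices
    echo 64 > /sys/block/sdb/device/queue_depth  # as root: try to set a new depth without reloading qla2xxx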

    Read the article

  • openvpn TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about the strange behavior of my OpenVPN configuration on Debian lenny. I have 2 server configs (one proto tcp-server based and one proto udp based). ISP bandwidth is 7Mbit/7Mbit. When I use proto tcp-server my server-side download rate is fine, around 6.4 Mbit/s, but the upload rate is about 3 Mbit/s. When I use proto udp, my server-side download rate is around 3 Mbit/s and the upload rate around 6.4 Mbit/s. I tried tuning the MTU, MSSFIX and cipher settings on and off in the server and client configs to even out the rates, but without success. Here is the TCP based SERVER config: mode server tls-server port 1194 proto tcp-server dev tap0 ifconfig 11.10.15.1 255.255.255.0 ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0 push "route 192.168.1.0 255.255.255.0" push "dhcp-option DNS 192.168.1.200" push "route-gateway 11.10.15.1" push "dhcp-option WINS 192.168.1.200" route-up /etc/openvpn/routeup.sh duplicate-cn ca /etc/openvpn/ca.crt cert /etc/openvpn/server.crt key /etc/openvpn/server.key dh /etc/openvpn/dh2048.pem log-append /var/log/openvpn.log status /var/run/vpn.status 10 user nobody group nogroup keepalive 10 120 comp-lzo verb 3 script-security 3 plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth persist-tun persist-key mssfix cipher BF-CBC Here is the UDP based SERVER config: port 1194 proto udp dev tun0 local xx.xx.xx.xx server 11.10.15.0 255.255.255.0 ca /etc/openvpn/ca.crt cert /etc/openvpn/server.crt key /etc/openvpn/server.key dh /etc/openvpn/dh2048.pem log-append /var/log/openvpn.log status /var/run/vpn.status 10 user nobody group nogroup keepalive 10 120 comp-lzo verb 3 duplicate-cn script-security 3 plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth persist-tun persist-key tun-mtu 1500 mssfix 1212 client-to-client ifconfig-pool-persist ipp.txt Here is the TCP/UDP based Windows CLIENT config: remote xx.xx.xx.xx --socket-flags TCP_NODELAY tls-client port 1194 proto tcp-client #proto udp dev tap #dev tun pull ca ca.crt cert latis.crt key latis.key mute 0 comp-lzo adaptive verb 3 resolv-retry infinite nobind persist-key auth-user-pass auth-nocache script-security 2 mssfix cipher BF-CBC
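    In case it is useful for diagnosis: raw throughput can be measured with iperf both across the tunnel and against the public address, which should separate OpenVPN overhead from the link itself (11.10.15.1 is the server's tunnel address from the configs above, xx.xx.xx.xx its public one):
    iperf -s                  # on the server
    iperf -c 11.10.15.1 -r    # on the client: test both directions through the tunnel
    iperf -c xx.xx.xx.xx -r   # repeat against the public address as a baseline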

    Read the article

  • CentOS 5.5 ext4 conversion problem - ext4 partition is recognized as ext3

    - by FractalizeR
    Hello. I had a 5.4 machine. I upgraded to 5.5 today via yum upgrade. All went fine. Rebooted. I then wanted to convert the root partition to ext4 (I have three partitions: /boot, / and swap). All of them are on software RAID 1 (root is /dev/md2). I did the following to convert: yum install e4fsprogs tune2fs -O extents,uninit_bg,dir_index /dev/md2 nano /etc/fstab # I indicated here that my /dev/md2 is of ext4 uname -a mkinitrd -f /boot/initrd-2.6.18-194.3.1.el5.img 2.6.18-194.3.1.el5 Rebooted. I expected fsck to start automatically, as said on some site, but it did not. It threw some error (I don't remember exactly which). OK, I booted linux rescue and executed fsck: fsck -t ext4 -fy /dev/md2 The partition checked out fine. But still, when I boot the main system, it says in the log "ext3-fs:" and then something about not being able to mount an ext3 partition due to unknown extended attributes (200). I booted linux rescue again. It loads fine and correctly detects all my machine's partitions, both ext3 (boot) and ext4 (/), under /mnt/sysimage just fine. I retried the mkinitrd step, watching its output, and ensured the ext4 module is included. I also edited the menu.lst grub file to include the rootfstype=ext4 kernel parameter. Bad luck. I still get the message from ext3-fs about not being able to mount the filesystem because of attributes, and a kernel panic immediately after. I checked /etc/fstab - it's fine and says that root is ext4. What did I do wrong? This machine is empty, so I can just reformat it with 5.5 and recreate the partitions as ext4 from the start. But... I just want to know what I did wrong.
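    In case it helps with diagnosis, the following checks can still be run from linux rescue (prefix the paths with /mnt/sysimage where appropriate) to confirm that the initrd and the boot configuration actually agree about ext4; menu.lst is the grub file referenced above, named grub.conf on some CentOS installs:
    blkid /dev/md2                                                      # should now report TYPE="ext4"
    zcat /boot/initrd-2.6.18-194.3.1.el5.img | cpio -t | grep -i ext4   # is the ext4 module packed into the initrd?
    grep -E 'md2|rootfstype' /etc/fstab /boot/grub/menu.lst             # do fstab and the kernel line agree?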

    Read the article

< Previous Page | 270 271 272 273 274 275 276 277 278 279 280 281  | Next Page >