Search Results

Search found 26412 results on 1057 pages for 'product key'.


  • PARTNER WEBCAST: INNOVATIONS IN PRODUCTS - PROGRAM

    - by mseika
    October 1st, 2012 at 04:00 PM CET (03:00 PM GMT)

    Dear partner,

    I am pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents the new functions and features of Oracle products, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales work in their customer projects.

    Innovations in Products is presented on the first Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but it is open to other System Integrator personnel as well. First, two Oracle representatives discuss Oracle's contribution to partners. Then you will see product breakout sessions followed by Q&A with Oracle experts. Each session lasts a maximum of one hour, and a Q&A document covering all questions and answers is made available after the webcast.

    What are the benefits for partners?
    - Find out how Innovations in Products helps you improve your after-sales work.
    - Discover new functions and features so you can enrich your customers' solutions.
    - Learn more about Oracle products, especially sales positioning.
    - Hear the crucial questions raised by colleagues and learn from their interests.
    - Engage and present your questions to subject-matter experts.
    - Be inspired by the richness of Oracle's product portfolio, for your and your customers' benefit.

    Note: Should you already be familiar with a specific product, choose another one; doing so expands your knowledge of the overall product portfolio. Some presentations contain product demonstrations, although they are not intended to be deeply technical.

    To access the 23 Applications Products presentations and 6 Public Sector Value Proposition presentations presented previously, please click here. You may want to bookmark the overall registration page, Innovations in Products October 1st, and the global event calendar page, events.oracle.com.

    Delivery format: Innovations in Products is a series of FREE pre-recorded Oracle product presentations followed by Q&A, delivered over the web. Participants can submit questions during the webcast via chat, and subject-matter experts provide verbal answers live. The program consists of several parallel pre-recorded product breakout sessions, each lasting a maximum of one hour. You can also watch Innovations in Products afterwards, as the content remains available online for the next 6-12 months.

    The next Innovations in Products webcasts will be presented as follows: October 1st 2012, January 14th 2013, April 8th 2013.

    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You may want to refresh your screen by pressing F5.

    Duration: Maximum 1 hour.

    For further information please contact me, Markku Rouhiainen.

    Best regards,
    Markku Rouhiainen
    Director, Applications Partner Enablement EMEA

    Read the article

  • .htaccess modify rules and redirect if there's .php in the url

    - by Ron
    Hello everyone. I have the following code in my .htaccess:

    Options +FollowSymlinks
    RewriteBase /temp/test/
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME}\.php -f
    RewriteRule ^about/(.*)/$ $1.php [L]
    RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/downloadfile/$ file-download.php?product=$1&version=$2&os=$3&method=$4 [L]
    RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/$ download-donate.php?product=$1&version=$2&os=$3&method=$4 [L]
    RewriteRule ^(.*)/download/(.*)/$ download.php?product=$1&version=$2 [L]
    RewriteRule ^newsletter-confirm/(.*)/$ newsletter-confirm.php?email=$1 [L]
    RewriteRule ^newsletter-remove/(.*)/$ newsletter-remove.php?email=$1 [L]
    RewriteRule ^(.*)/screenshots/$ screenshots.php?product=$1 [L]
    RewriteRule ^(.*)/(.*)/$ products.php?product=$1&page=$2 [L]
    RewriteRule ^schedule-manager/$ products.php?product=schedule-manager&page=view [L]
    RewriteRule ^visual-command-line/$ products.php?product=visual-command-line&page=view [L]
    RewriteRule ^windows-hider/$ products.php?product=windows-hider&page=view [L]
    RewriteRule ^(.*)/$ $1.php [L]
    RewriteRule ^products/$ products.php [L]

    Everything works perfectly. I would like to know how I can modify it so that it uses fewer lines. I am pretty sure I can remove at least 4-5 lines, but I don't know how (merge the schedule-manager, visual-command-line and windows-hider rules, and some more). I know that the order of the rules is important; this order works, although I have no idea why - I just played with the rules until it worked. If you think there will be a bug with this order, please tell me where. Another thing: I would like to redirect, for example, www.myweb.com/products.php to www.myweb.com/products/ (I mean that the URL in the address bar should change). I don't know if the redirect can go along with my rewrite rules. Thank you.
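
    A possible consolidation, sketched under the assumption that the product slug in the URL always matches the product parameter exactly; the %{THE_REQUEST} condition is there so the external redirect only fires for URLs typed into the address bar, not for internal rewrites back to products.php. Treat it as a starting point rather than a tested drop-in:

    # One rule instead of three hard-coded product pages
    RewriteRule ^(schedule-manager|visual-command-line|windows-hider)/$ products.php?product=$1&page=view [L]

    # Redirect direct requests for /products.php to /products/ (visible in the address bar)
    RewriteCond %{THE_REQUEST} /temp/test/products\.php [NC]
    RewriteRule ^products\.php$ products/ [R=301,L]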

    Read the article

  • Excel or Access: how to group several lines in a table and insert contents in columns? ("split column")

    - by Martin
    I have a table containing data on sold products (shown in the example on the left). Its columns are:

    - Number of the order (OrderNo)
    - Product Name
    - Attribute - specifies what is given in the following field "Value", e.g. Customer Name or Product Variant
    - Value - the value of the Attribute
    - Count - the number of products of this variant sold in the order

    That means Product B has two variants, "c" and "d". Note that in Order 1 Product B was sold in Variant d only, because the letter "N" in field "D4" means "none". Note also that in OrderNo 3 Product B was sold only in Variant c, because for Variant d field "D9" is "N". This is confusing, but it is the structure of the original data (which I cannot change). I need a way to convert the table on the left into a table like the one on the right - one line for each product type, with:

    - Order Number
    - Product Name
    - Customer Name
    - Count (number of products sold in this order)
    - Variant - this is the problem, as it has to be filled with the variant that was actually sold (where the product has one)

    So all rows with the same OrderNo and same product have to be grouped into one, and I hope it is clear what I need. I tried to do it with Pivot Tables, but that fails, as the Count appears in each line no matter whether the Value is "N" or not, and for products without variants there is only one line per order, whereas for products with variants there are several. So how could I create the right table with a VBA macro in MS Excel, or is there perhaps a trick in MS Access to do it directly or with an SQL query?
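
    If the data can be queried in Access (or via SQL from Excel), conditional aggregation is one way to collapse the attribute rows into one line per order and product. The table name SalesData and the attribute labels 'Customer Name' and 'Product Variant' below are assumptions about the source data, so this is a sketch rather than a ready-made query:

    SELECT OrderNo,
           [Product Name],
           MAX(IIf([Attribute] = 'Customer Name', [Value], Null)) AS CustomerName,
           MAX(IIf([Attribute] = 'Product Variant' AND [Value] <> 'N', [Value], Null)) AS Variant,
           MAX([Count]) AS UnitsSold
    FROM SalesData
    GROUP BY OrderNo, [Product Name];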

    Read the article

  • Scripting with the Sun ZFS Storage 7000 Appliance

    - by Geoff Ongley
    The Sun ZFS Storage 7000 appliance has a user friendly and easy to understand graphical web based interface we call the "BUI" or "Browser User Interface".This interface is very useful for many tasks, but in some cases a script (or workflow) may be more appropriate, such as:Repetitive tasksTasks which work on (or obtain information about) a large number of shares or usersTasks which are triggered by an alert threshold (workflows)Tasks where you want a only very basic input, but a consistent output (workflows)The appliance scripting language is based on ECMAscript 3 (close to javascript). I'm not going to cover ECMAscript 3 in great depth (I'm far from an expert here), but I would like to show you some neat things you can do with the appliance, to get you started based on what I have found from my own playing around.I'm making the assumption you have some sort of programming background, and understand variables, arrays, functions to some extent - but of course if something is not clear, please let me know so I can fix it up or clarify it.Variable Declarations and ArraysVariablesECMAScript is a dynamically and weakly typed language. If you don't know what that means, google is your friend - but at a high level it means we can just declare variables with no specific type and on the fly.For example, I can declare a variable and use it straight away in the middle of my code, for example:projects=list();Which makes projects an array of values that are returned from the list(); function (which is usable in most contexts). With this kind of variable, I can do things like:projects.length (this property on array tells you how many objects are in it, good for for loops etc). Alternatively, I could say:projects=3;and now projects is just a simple number.Should we declare variables like this so loosely? In my opinion, the answer is no - I feel it is a better practice to declare variables you are going to use, before you use them - and given them an initial value. You can do so as follows:var myVariable=0;To demonstrate the ability to just randomly assign and change the type of variables, you can create a simple script at the cli as follows (bold for input):fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("Number of projects is: %d\n",projects.length);("." to run)> projects=152;("." to run)> printf("Value of the projects variable as an integer is now: %d\n",projects);("." to run)> .Number of projects is: 7Value of the projects variable as an integer is now: 152You can also confirm this behaviour by checking the typeof variable we are dealing with:fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> projects=152;("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> .var projects is of type objectvar projects is of type numberArraysSo you likely noticed that we have already touched on arrays, as the list(); (in the shares context) stored an array into the 'projects' variable.But what if you want to declare your own array? Easy! This is very similar to Java and other languages, we just instantiate a brand new "Array" object using the keyword new:var myArray = new Array();will create an array called "myArray".A quick example:fishy10:> script("." to run)> testArray = new Array();("." to run)> testArray[0]="This";("." 
to run)> testArray[1]="is";("." to run)> testArray[2]="just";("." to run)> testArray[3]="a";("." to run)> testArray[4]="test";("." to run)> for (i=0; i < testArray.length; i++)("." to run)> {("." to run)>    printf("Array element %d is %s\n",i,testArray[i]);("." to run)> }("." to run)> .Array element 0 is ThisArray element 1 is isArray element 2 is justArray element 3 is aArray element 4 is testWorking With LoopsFor LoopFor loops are very similar to those you will see in C, java and several other languages. One of the key differences here is, as you were made aware earlier, we can be a bit more sloppy with our variable declarations.The general way you would likely use a for loop is as follows:for (variable; test-case; modifier for variable){}For example, you may wish to declare a variable i as 0; and a MAX_ITERATIONS variable to determine how many times this loop should repeat:var i=0;var MAX_ITERATIONS=10;And then, use this variable to be tested against some case existing (has i reached MAX_ITERATIONS? - if not, increment i using i++);for (i=0; i < MAX_ITERATIONS; i++){ // some work to do}So lets run something like this on the appliance:fishy10:> script("." to run)> var i=0;("." to run)> var MAX_ITERATIONS=10;("." to run)> for (i=0; i < MAX_ITERATIONS; i++)("." to run)> {("." to run)>    printf("The number is %d\n",i);("." to run)> }("." to run)> .The number is 0The number is 1The number is 2The number is 3The number is 4The number is 5The number is 6The number is 7The number is 8The number is 9While LoopWhile loops again are very similar to other languages, we loop "while" a condition is met. For example:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." to run)> printf("Loop has ended and Counter is %d\n",counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Counter is 10Loop has ended and Counter is 11So what do we notice here? Something has actually gone wrong - counter will technically be 11 once the loop completes... Why is this?Well, if we have a loop like this, where the 'while' condition that will end the loop may be set based on some other condition(s) existing (such as the counter has reached 10) - we must ensure that we  terminate this iteration of the loop when the condition is met - otherwise the rest of the code will be followed which may not be desirable. In other words, like in other languages, we will only ever check the loop condition once we are ready to perform the next iteration, so any other code after we set "isTen" to be true, will still be executed as we can see it was above.We can avoid this by adding a break into our loop once we know we have set the condition - this will stop the rest of the logic being processed in this iteration (and as such, counter will not be incremented). So lets try that again:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>            break;("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." 
to run)> printf("Loop has ended and Counter is %d\n", counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Loop has ended and Counter is 10Much better!Methods to Obtain and Manipulate DataGet MethodThe get method allows you to get simple properties from an object, for example a quota from a user. The syntax is fairly simple:var myVariable=get('property');An example of where you may wish to use this, is when you are getting a bunch of information about a user (such as quota information when in a shares context):var users=list();for(k=0; k < users.length; k++){     user=users[k];     run('select ' + user);     var username=get('name');     var usage=get('usage');     var quota=get('quota');...Which you can then use to your advantage - to print or manipulate infomation (you could change a user's information with a set method, based on the information returned from the get method). The set method is explained next.Set MethodThe set method can be used in a simple manner, similar to get. The syntax for set is:set('property','value'); // where value is a string, if it was a number, you don't need quotesFor example, we could set the quota on a share as follows (first observing the initial value):fishy10:shares default/test-geoff> script("." to run)> var currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> set('quota','30G');("." to run)> run('commit');("." to run)> currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> .Current Quota is: 0Current Quota is: 32212254720This shows us using both the get and set methods as can be used in scripts, of course when only setting an individual share, the above is overkill - it would be much easier to set it manually at the cli using 'set quota=3G' and then 'commit'.List MethodThe list method can be very powerful, especially in more complex scripts which iterate over large amounts of data and manipulate it if so desired. The general way you will use list is as follows:var myVar=list();Which will make "myVar" an array, containing all the objects in the relevant context (this could be a list of users, shares, projects, etc). You can then gather or manipulate data very easily.We could list all the shares and mountpoints in a given project for example:fishy10:shares another-project> script("." to run)> var shares=list();("." to run)> for (i=0; i < shares.length; i++)("." to run)> {("." to run)>    run('select ' + shares[i]);("." to run)>    var mountpoint=get('mountpoint');("." to run)>    printf("Share %s discovered, has mountpoint %s\n",shares[i],mountpoint);("." to run)>    run('done');("." to run)> }("." to run)> .Share and-another discovered, has mountpoint /export/another-project/and-anotherShare another-share discovered, has mountpoint /export/another-project/another-shareShare bob discovered, has mountpoint /export/another-projectShare more-shares-for-all discovered, has mountpoint /export/another-project/more-shares-for-allShare yep discovered, has mountpoint /export/another-project/yepWriting More Complex and Re-Usable CodeFunctionsThe best way to be able to write more complex code is to use functions to split up repeatable or reusable sections of your code. 
This also makes your more complex code easier to read and understand for other programmers.We write functions as follows:function functionName(variable1,variable2,...,variableN){}For example, we could have a function that takes a project name as input, and lists shares for that project (assuming we're already in the 'project' context - context is important!):function getShares(proj){        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                printf("Discovered share: %s\n",shares[i]);        }        run('done'); // exit selected project}Commenting your CodeLike any other language, a large part of making it readable and understandable is to comment it. You can use the same comment style as in C and Java amongst other languages.In other words, sngle line comments use://at the beginning of the comment.Multi line comments use:/*at the beginning, and:*/ at the end.For example, here we will use both:fishy10:> script("." to run)> // This is a test comment("." to run)> printf("doing some work...\n");("." to run)> /* This is a multi-line("." to run)> comment which I will span across("." to run)> three lines in total */("." to run)> printf("doing some more work...\n");("." to run)> .doing some work...doing some more work...Your comments do not have to be on their own, they can begin (particularly with single line comments this is handy) at the end of a statement, for examplevar projects=list(); // The variable projects is an array containing all projects on the system.Try and Catch StatementsYou may be used to using try and catch statements in other languages, and they can (and should) be utilised in your code to catch expected or unexpected error conditions, that you do NOT wish to stop your code from executing (if you do not catch these errors, your script will exit!):try{  // do some work}catch(err) // Catch any error that could occur{ // do something here under the error condition}For example, you may wish to only execute some code if a context can be reached. 
If you can't perform certain actions under certain circumstances, that may be perfectly acceptable.For example if you want to test a condition that only makes sense when looking at a SMB/NFS share, but does not make sense when you hit an iscsi or FC LUN, you don't want to stop all processing of other shares you may not have covered yet.For example we may wish to obtain quota information on all shares for all users on a share (but this makes no sense for a LUN):function getShareQuota(shar) // Get quota for each user of this share{        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","----");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}Running Scripts Remotely over SSHAs you have likely noticed, writing and running scripts for all but the simplest jobs directly on the appliance is not going to be a lot of fun.There's a couple of choices on what you can do here:Create scripts on a remote system and run them over sshCreate scripts, wrapping them in workflow code, so they are stored on the appliance and can be triggered under certain circumstances (like a threshold being reached)We'll cover the first one here, and then cover workflows later on (as these are for the most part just scripts with some wrapper information around them).Creating a SSH Public/Private SSH Key PairLog on to your handy Solaris box (You wouldn't be using any other OS, right? :P) and use ssh-keygen to create a pair of ssh keys. I'm storing this separate to my normal key:[geoff@lightning ~] ssh-keygen -t rsa -b 1024Generating public/private rsa key pair.Enter file in which to save the key (/export/home/geoff/.ssh/id_rsa): /export/home/geoff/.ssh/nas_key_rsaEnter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /export/home/geoff/.ssh/nas_key_rsa.Your public key has been saved in /export/home/geoff/.ssh/nas_key_rsa.pub.The key fingerprint is:7f:3d:53:f0:2a:5e:8b:2d:94:2a:55:77:66:5c:9b:14 geoff@lightningInstalling the Public Key on the ApplianceOn your Solaris host, observe the public key:[geoff@lightning ~] cat .ssh/nas_key_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc= geoff@lightningNow, copy and paste everything after "ssh-rsa" and before "user@hostname" - in this case, geoff@lightning. 
That is, this bit:AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc=Logon to your appliance and get into the preferences -> keys area for this user (root):[geoff@lightning ~] ssh [email protected]: Last login: Mon Dec  6 17:13:28 2010 from 192.168.0.2fishy10:> configuration usersfishy10:configuration users> select rootfishy10:configuration users root> preferences fishy10:configuration users root preferences> keysOR do it all in one hit:fishy10:> configuration users select root preferences keysNow, we create a new public key that will be accepted for this user and set the type to RSA:fishy10:configuration users root preferences keys> createfishy10:configuration users root preferences key (uncommitted)> set type=RSASet the key itself using the string copied previously (between ssh-rsa and user@host), and set the key ensuring you put double quotes around it (eg. set key="<key>"):fishy10:configuration users root preferences key (uncommitted)> set key="AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc="Now set the comment for this key (do not use spaces):fishy10:configuration users root preferences key (uncommitted)> set comment="LightningRSAKey" Commit the new key:fishy10:configuration users root preferences key (uncommitted)> commitVerify the key is there:fishy10:configuration users root preferences keys> lsKeys:NAME     MODIFIED              TYPE   COMMENT                                  key-000  2010-10-25 20:56:42   RSA    cycloneRSAKey                           key-001  2010-12-6 17:44:53    RSA    LightningRSAKey                         As you can see, we now have my new key, and a previous key I have created on this appliance.Running your Script over SSH from a Remote SystemHere I have created a basic test script, and saved it as test.ecma3:[geoff@lightning ~] cat test.ecma3 script// This is a test script, By Geoff Ongley 2010.printf("Testing script remotely over ssh\n");.Now, we can run this script remotely with our keyless login:[geoff@lightning ~] ssh -i .ssh/nas_key_rsa root@fishy10 < test.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Testing script remotely over sshPutting it Together - An Example Completed Quota Gathering ScriptSo now we have a lot of the basics to creating a script, let us do something useful, like, find out how much every user is using, on every share on the system (you will recognise some of the code from my previous examples): script/************************************** Quick and Dirty Quota Check script ** Written By Geoff Ongley            ** 25 October 2010                    **************************************/function getUserQuota(usr){        run('select ' + usr);        var username=get('name');        var usage=get('usage');        var quota=get('quota');        var usage_g=usage / 1073741824; // convert bytes to gigabytes        var quota_g=quota / 1073741824; // as above        var quota_percent=0        if (quota > 0)        {                quota_percent=(usage / quota)*(100/1);        }        printf("    %20s        %8.2f           %8.2f           %d%%\n",username,usage_g,quota_g,quota_percent);        run('done'); // done with this selected user}function getShareQuota(shar){        //printf("DEBUG: selecting share 
%s\n", shar);        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","--------");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}function getShares(proj){        //printf("DEBUG: selecting project %s\n",proj);        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                share=shares[j];                getShareQuota(share);        }        run('done'); // exit selected project}function getProjects(){        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                var project=projects[i];                getShares(project);        }        run('done'); // exit context for all projects}getProjects();.Which can be run as follows, and will print information like this:[geoff@lightning ~/FISHWORKS_SCRIPTS] ssh -i ~/.ssh/nas_key_rsa root@fishy10 < get_quota_utilisation.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Project: another-project  SHARE: and-another                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                 geoffro            0.05            0.00        0%                   Billy            0.10            0.00        0%                    root            0.00            0.00        0%            testing-user            0.05            0.00        0%  SHARE: another-share                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                 geoffro            0.05            0.49        9%            testing-user            0.05            0.02        249%                   Billy            0.10            0.29        33%  SHARE: bob                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%  SHARE: more-shares-for-all                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                   Billy            0.10            0.00        0%            testing-user            0.05            0.00        0%                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%                 geoffro            0.05            0.00        0%  SHARE: yep                Username           Usage(G)       Quota(G)    
Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                   Billy            0.10            0.01        999%            testing-user            0.05            0.49        9%                 geoffro            0.05            0.00        0%Project: default  SHARE: Test-LUN    SKIPPING Test-LUN - This is NOT a NFS or CIFs share, not looking for users  SHARE: test-geoff                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                 geoffro            0.05            0.00        0%                    root            3.18           10.00        31%                    uucp            0.00            0.00        0%                  nobody            0.59            0.49        119%^CKilled by signal 2.Creating a WorkflowWorkflows are scripts that we store on the appliance, and can have the script execute either on request (even from the BUI), or on an event such as a threshold being met.Workflow BasicsA workflow allows you to create a simple process that can be executed either via the BUI interface interactively, or by an alert being raised (for some threshold being reached, for example).The basics parameters you will have to set for your "workflow object" (notice you're creating a variable, that embodies ECMAScript) are as follows (parameters is optional):name: A name for this workflowdescription: A Description for the workflowparameters: A set of input parameters (useful when you need user input to execute the workflow)execute: The code, the script itself to execute, which will be function (parameters)With parameters, you can specify things like this (slightly modified sample taken from the System Administration Guide):          ...parameters:        variableParam1:         {                             label: 'Name of Share',                             type: 'String'                  },                  variableParam2                  {                             label: 'Share Size',                             type: 'size'                  },execute: ....};  Note the commas separating the sections of name, parameters, execute, and so on. This is important!Also - there is plenty of properties you can set on the parameters for your workflow, these are described in the Sun ZFS Storage System Administration Guide.Creating a Basic Workflow from a Basic ScriptTo make a basic script into a basic workflow, you need to wrap the following around your script to create a 'workflow' object:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function() {// (basic script goes here, minus the "script" at the beginning, and "." at the end)}};However, it appears (at least in my experience to date) that the workflow object may only be happy with one function in the execute parameter - either that or I'm doing something wrong. 
As far as I can tell, after execute: you should only have a basic one function context like so:execute: function(){}To deal with this, and to give an example similar to our script earlier, I have created another simple quota check, to show the same basic functionality, but in a workflow format:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function () {        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                run('select ' + projects[i]);                shares=list('filesystem');                printf("Project: %s\n", projects[i]);                for(j=0; j < shares.length; j++)                {                        run('select ' +shares[j]);                        try                        {                                run('users');                                printf("  SHARE: %s\n", shares[j]);                                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","-------");                                users=list();                                for(k=0; k < users.length; k++)                                {                                        run('select ' + users[k]);                                        username=get('name');                                        usage=get('usage');                                        quota=get('quota');                                        usage_g=usage / 1073741824; // convert bytes to gigabytes                                        quota_g=quota / 1073741824; // as above                                        quota_percent=0                                        if (quota > 0)                                        {                                                quota_percent=(usage / quota)*(100/1);                                        }                                        printf("    %20s        %8.2f   %8.2f   %d%%\n",username,usage_g,quota_g,quota_percent);                                        run('done');                                }                                run('done'); // exit user context                        }                        catch(err)                        {                        //      printf("    %s is a LUN, Not looking for users\n", shares[j]);                        }                        run('done'); // exit selected share context                }                run('done'); // exit project context        }        }};SummaryThe Sun ZFS Storage 7000 Appliance offers lots of different and interesting features to Sun/Oracle customers, including the world renowned Analytics. Hopefully the above will help you to think of new creative things you could be doing by taking advantage of one of the other neat features, the internal scripting engine!Some references are below to help you continue learning more, I'll update this post as I do the same! 
    Enjoy!

    More information on ECMAScript 3
    A complete reference to ECMAScript 3, which will help you learn more of the details you may be interested in, can be found here:
    http://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf

    More information on administering the Sun ZFS Storage 7000
    The Sun ZFS Storage 7000 System Administration Guide can be a useful reference point, and can be found here:
    http://wikis.sun.com/download/attachments/186238602/2010_Q3_2_ADMIN.pdf

    Read the article

  • TripleDES encryption not yielding the same results in PHP and C#

    - by Jones
    When I encrypt with C# I get arTdPqWOg6VppOqUD6mGITjb24+x5vJjfAufNQ4DN7rVEtpDmhFnMeJGg4n5y1BN static void Main(string[] args) { Encoding byteEncoder = Encoding.Default; String key = "ShHhd8a08JhJiho98ayslcjh"; String message = "Let us meet at 9 o'clock at the secret place."; String encryption = Encrypt(message, key, false); String decryption = Decrypt(encryption , key, false); Console.WriteLine("Message: {0}", message); Console.WriteLine("Encryption: {0}", encryption); Console.WriteLine("Decryption: {0}", decryption); } public static string Encrypt(string toEncrypt, string key, bool useHashing) { byte[] keyArray; byte[] toEncryptArray = UTF8Encoding.UTF8.GetBytes(toEncrypt); if (useHashing) { MD5CryptoServiceProvider hashmd5 = new MD5CryptoServiceProvider(); keyArray = hashmd5.ComputeHash(UTF8Encoding.UTF8.GetBytes(key)); } else keyArray = UTF8Encoding.UTF8.GetBytes(key); TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider(); tdes.Key = keyArray; tdes.Mode = CipherMode.ECB; tdes.Padding = PaddingMode.PKCS7; ICryptoTransform cTransform = tdes.CreateEncryptor(); byte[] resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length); return Convert.ToBase64String(resultArray, 0, resultArray.Length); } public static string Decrypt(string toDecrypt, string key, bool useHashing) { byte[] keyArray; byte[] toEncryptArray = Convert.FromBase64String(toDecrypt); if (useHashing) { MD5CryptoServiceProvider hashmd5 = new MD5CryptoServiceProvider(); keyArray = hashmd5.ComputeHash(UTF8Encoding.UTF8.GetBytes(key)); } else keyArray = UTF8Encoding.UTF8.GetBytes(key); TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider(); tdes.Key = keyArray; tdes.Mode = CipherMode.ECB; tdes.Padding = PaddingMode.PKCS7; ICryptoTransform cTransform = tdes.CreateDecryptor(); byte[] resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length); return UTF8Encoding.UTF8.GetString(resultArray); } When I encrypt with PHP I get: arTdPqWOg6VppOqUD6mGITjb24+x5vJjfAufNQ4DN7rVEtpDmhFnMVM+W/WFlksR <?php $key = "ShHhd8a08JhJiho98ayslcjh"; $input = "Let us meet at 9 o'clock at the secret place."; $td = mcrypt_module_open('tripledes', '', 'ecb', ''); $iv = mcrypt_create_iv (mcrypt_enc_get_iv_size($td), MCRYPT_RAND); mcrypt_generic_init($td, $key, $iv); $encrypted_data = mcrypt_generic($td, $input); mcrypt_generic_deinit($td); mcrypt_module_close($td); echo base64_encode($encrypted_data); ?> I don't know enough about cryptography to figure out why. Any ideas? Thanks.
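
    The two Base64 strings agree on every block except the last, which is the classic sign of a padding mismatch: the C# code uses PKCS7 padding, while mcrypt pads with zero bytes. A minimal sketch of applying PKCS#7 padding by hand on the PHP side (the helper name pkcs7_pad is invented for illustration; in ECB mode the IV is ignored, so it is omitted):

    <?php
    // Hypothetical helper: pad $data to a multiple of $blockSize with
    // PKCS#7 bytes, mirroring PaddingMode.PKCS7 in the C# code.
    function pkcs7_pad($data, $blockSize) {
        $pad = $blockSize - (strlen($data) % $blockSize);
        return $data . str_repeat(chr($pad), $pad);
    }

    $key   = "ShHhd8a08JhJiho98ayslcjh";
    $input = "Let us meet at 9 o'clock at the secret place.";

    $padded    = pkcs7_pad($input, 8); // 3DES block size is 8 bytes
    $encrypted = mcrypt_encrypt(MCRYPT_3DES, $key, $padded, MCRYPT_MODE_ECB);
    echo base64_encode($encrypted);
    ?>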

    Read the article

  • Is this AES encryption wrapper safe? - yet another take...

    - by user393087
    After taking into accound answers for my questions here and here I created (well may-be) improved version of my wrapper. The key issue was what if an attacker is knowing what is encoded - he might then find the key and encode another messages. So I added XOR before encryption. I also in this version prepend IV to the data as was suggested. sha256 on key is only for making sure the key is as long as needed for the aes alg, but I know that key should not be plain text but calculated with many iterations to prevent dictionary attack function aes192ctr_en($data,$key) { $iv = mcrypt_create_iv(24,MCRYPT_DEV_URANDOM); $xor = mcrypt_create_iv(24,MCRYPT_DEV_URANDOM); $key = hash_hmac('sha256',$key,$iv,true); $data = $xor.((string)$data ^ (string)str_repeat($xor,(strlen($data)/24)+1)); $data = hash('md5',$data,true).$data; return $iv.mcrypt_encrypt('rijndael-192',$key,$data,'ctr',$iv); } function aes192ctr_de($data,$key) { $iv = substr($data,0,24); $data = substr($data,24); $key = hash_hmac('sha256',$key,$iv,true); $data = mcrypt_decrypt('rijndael-192',$key,$data,'ctr',$iv); $md5 = substr($data,0,16); $data = substr($data,16); if (hash('md5',$data,true)!==$md5) return false; $xor = substr($data,0,24); $data = substr($data,24); $data = ((string)$data ^ (string)str_repeat($xor,(strlen($data)/24)+1)); return $data; } $encrypted = aes192ctr_en('secret text','password'); echo $encrypted; echo aes192ctr_de($encrypted,'password'); another question is if ctr mode is ok in this context, would it be better if I use cbc mode ? Again, by safe I mean if an attacter could guess password if he knows exact text that was encrypted and knows above method. I assume random and long password here. Maybe instead of XOR will be safer to random initial data with another run of aes or other simpler alg like TEA or trivium ?
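
    Without judging the whole scheme, one change the question is circling around is to authenticate the ciphertext rather than embedding an MD5 of the plaintext, i.e. encrypt-then-MAC with a MAC key derived separately from the encryption key. This is only an illustrative sketch, not vetted cryptography: the _v2 function names are hypothetical, the password should really go through a slow KDF, and the tag comparison should ideally be constant-time (hash_equals() on PHP 5.6+):

    <?php
    // Sketch: CTR encryption followed by an HMAC over IV + ciphertext.
    function aes192ctr_en_v2($data, $password) {
        $iv     = mcrypt_create_iv(24, MCRYPT_DEV_URANDOM);
        $encKey = hash_hmac('sha256', 'enc' . $password, $iv, true);
        $macKey = hash_hmac('sha256', 'mac' . $password, $iv, true);
        $ct     = mcrypt_encrypt('rijndael-192', $encKey, $data, 'ctr', $iv);
        $tag    = hash_hmac('sha256', $iv . $ct, $macKey, true);
        return $iv . $ct . $tag;
    }

    function aes192ctr_de_v2($blob, $password) {
        $iv  = substr($blob, 0, 24);
        $tag = substr($blob, -32);
        $ct  = substr($blob, 24, -32);
        $encKey = hash_hmac('sha256', 'enc' . $password, $iv, true);
        $macKey = hash_hmac('sha256', 'mac' . $password, $iv, true);
        // Verify the tag before decrypting; reject tampered or mis-keyed input.
        if (hash_hmac('sha256', $iv . $ct, $macKey, true) !== $tag) {
            return false;
        }
        return mcrypt_decrypt('rijndael-192', $encKey, $ct, 'ctr', $iv);
    }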

    Read the article

  • SEO Friendly URL Rewriter Parameters

    - by Kristen
    I would appreciate your advice on how to incorporate parameters into SEO-friendly URLs. We have decided to have the "techie" parameters first, followed by the "SEO slug":

    \product\ABC123\fly-your-own-helicopter

    much like S.O. - if the SEO slug changes, or is truncated, or missing, we still have the Product and ABC123 parameters; various articles say that having such extra data doesn't harm SEO ranking. We need to have additional parameters; we could use "-" to separate parameters as it makes them look similar to the SEO slug, or we could/should use something else?

    \product\ABC123-BOYTOY-2\boys\toys\fly-your-own-helicopter

    This is product=ABC123, in Category=BOYTOY and Page=2. We also want to keep the hierarchy as flat as possible, and thus I think:

    \product-ABC123-BOYTOY-2\boys\toys\fly-your-own-helicopter

    would be better - one level less. We have a number of "zones", e.g.

    \product-ABC123\seo-slug-for-product
    \category-BOYTOY\seo-slug-for-category
    \article-54321\terms-and-conditions

    It would help us a lot if we could just use our 5-digit Page ID number instead, so these would become

    \12345-ABC123\seo-slug-for-product
    \23456-BOYTOY\seo-slug-for-category
    \54321\terms-and-conditions

    (Products and Categories have a number of different Page IDs for different templates; this style would take us straight to the right one.) I would appreciate your insight into which parameter separators to use, and whether the leading techie data is going to work well for us. In case relevant: Classic ASP application on IIS7 + MSSQL 2008. Product and Category codes contain A-Z, 0-9, "_" only.
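
    Since the site runs on IIS7, a URL Rewrite rule along these lines could route the proposed "5-digit Page ID + code + slug" form. Everything here is an assumption for illustration: the handler page default.asp and its query-string parameters are invented, and the rule presumes the IIS URL Rewrite module is installed:

    <rewrite>
      <rules>
        <rule name="PageId-Code-Slug" stopProcessing="true">
          <!-- matches e.g. 12345-ABC123/fly-your-own-helicopter or 54321/terms-and-conditions -->
          <match url="^(\d{5})(?:-([A-Z0-9_]+))?(?:/(.*))?$" />
          <action type="Rewrite" url="default.asp?pageid={R:1}&amp;code={R:2}&amp;slug={R:3}" />
        </rule>
      </rules>
    </rewrite>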

    Read the article

  • PHP .htaccess issue, specific/dynamic keywords

    - by Kunal
    Here is the content of my .htaccess file:

    Options +FollowSymLinks
    RewriteEngine On
    RewriteRule ^online-products$ products.php?type=online
    RewriteRule ^land-products$ products.php?type=land
    RewriteRule ^payment-methods$ payment-methods.php
    RewriteRule ^withdrawal-methods$ withdrawal-methods.php
    RewriteRule ^deposit-methods$ deposit-methods.php
    RewriteRule ^product-bonuses$ product-bonuses.php
    RewriteRule ^law-and-regulations$ law-and-regulations.php
    RewriteRule ^product-news$ product-news.php
    RewriteRule ^product-games$ product-games.php
    RewriteRule ^no-products$ no-products.php
    RewriteRule ^page-not-found$ notfound.php
    RewriteCond %{SCRIPT_FILENAME} !-f
    RewriteCond %{SCRIPT_FILENAME} !-d
    RewriteRule ^casinos/(.*)$ product.php?id=$1
    RewriteCond %{SCRIPT_FILENAME} !-f
    RewriteCond %{SCRIPT_FILENAME} !-d
    RewriteRule ^(.*)$ cms.php?link=$1
    ErrorDocument 404 /notfound.php

    What I am trying to achieve is that the first set of rules applies to specific keywords, which should be rewritten to specific hard-coded pages, while anything apart from those keywords should be passed to cms.php as a parameter, as you can see. The problem is that every keyword is getting redirected to cms.php, whereas I want only the keywords that are not already hard coded in the .htaccess file to go to cms.php - not every keyword. Example:

    www.sitename.com/online-products -> www.sitename.com/products.php?type=online
    www.sitename.com/about-the-website -> www.sitename.com/cms.php?id=about-the-website
    www.sitename.com/product-news -> www.sitename.com/product-news.php

    Another issue I am facing is that I cannot use any keyword with a space: "online-products" is fine, but I can't use "online products". Please help me out with your expert knowledge. Many thanks in advance for your kind help. Appreciate it.
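
    One thing worth checking (offered as a sketch, not a confirmed diagnosis): none of the keyword rules carries the [L] flag, so after a match mod_rewrite keeps running the rewritten URL through the remaining rules, including the catch-all. Marking each hard-coded rule as last, and testing the file system with %{REQUEST_FILENAME}, usually keeps the catch-all from swallowing them:

    RewriteRule ^online-products$ products.php?type=online [L,QSA]
    RewriteRule ^land-products$ products.php?type=land [L,QSA]
    RewriteRule ^product-news$ product-news.php [L]
    # ...add the same [L] flag to the rest of the keyword rules...

    # Catch-all only for requests that do not map to a real file or directory
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ cms.php?link=$1 [L,QSA]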

    Read the article

  • Jersey, JAXB and getting an object extending an abstract class as a parameter

    - by krajol
    I want to get an object as a parameter of a POST request. I got an abstract superclass that is called Promotion and subclasses Product and Percent. Here's how I try to get a request: @POST @Consumes(MediaType.APPLICATION_XML) @Produces(MediaType.APPLICATION_XML) @Path("promotion/") public Promotion createPromotion(Promotion promotion) { Product p = (Product) promotion; System.out.println(p.getPriceAfter()); return promotion; } and here's how I use JAXB in classes' definitions: @XmlRootElement(name="promotion") @XmlSeeAlso({Product.class,Percent.class}) public abstract class Promotion { //body } @XmlRootElement(name="promotion") public class Product extends Promotion { //body } @XmlRootElement(name="promotion") public class Percent extends Promotion { //body } So the problem now is when I send a POST request with a body like this: <promotion> <priceBefore>34.5</priceBefore> <marked>false</marked> <distance>44</distance> </promotion> and I try to cast it to Product (as in this case, fields 'marked' and 'distance' are from Promotion class and 'priceBefore' is from Product class) I get an Exception: java.lang.ClassCastException: Percent cannot be cast to Product. It seems like Percent is chosen as a 'default' subclass. Why is that and how can I get an object that is a Product?
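
    With every subclass sharing the same @XmlRootElement name, JAXB cannot tell from <promotion> alone which concrete class to build, so it picks one and the cast to Product fails. The usual way to disambiguate is an xsi:type attribute naming the subclass's XML type, which by default is the decapitalized class name; the request body below is a sketch assuming those default @XmlType names:

    <promotion xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="product">
        <priceBefore>34.5</priceBefore>
        <marked>false</marked>
        <distance>44</distance>
    </promotion>

    On the server side it is also safer to guard the cast with "if (promotion instanceof Product)" and handle the Percent case separately, rather than casting unconditionally.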

    Read the article

  • PHP class_exists always returns true

    - by Ali
    I have a PHP class that needs some pre-defined globals before the file is included: File: includes/Product.inc.php if (class_exists('Product')) { return; } // This class requires some predefined globals if ( !isset($gLogger) || !isset($db) || !isset($glob) ) { return; } class Product { ... } The above is included in other PHP files that need to use Product using require_once. Anyone who wants to use Product must however ensure those globals are available, at least that's the idea. I recently debugged an issue in a function within the Product class which was caused because $gLogger was null. The code requiring the above Product.inc.php had not bothered to create the $gLogger. So The question is how was this class ever included if $gLogger was null? I tried to debug the code (xdebug in NetBeans), put a breakpoint at the start of Product.inc.php to find out and every time it came to the if (class_exists('Product')) clause it would simply step in and return thus never getting to the global checks. So how was it ever included the first time? This is PHP 5.1+ running under MAMP (Apache/MySQL). I don't have any auto loaders defined.
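
    One explanation consistent with what the debugger shows: PHP binds unconditional top-level class declarations at compile time when the file is included, so class Product exists the moment Product.inc.php is parsed, before either return statement runs, which is why class_exists('Product') is already true and the global checks never take effect. A sketch of making the declaration itself conditional so the guards actually matter (structure only, not a drop-in for the real file):

    <?php
    // Declared only at runtime, and only when the required globals exist.
    if (!class_exists('Product') && isset($gLogger, $db, $glob)) {
        class Product {
            // ... original class body ...
        }
    }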

    Read the article

  • Group by distinct - how can I do that?

    - by Christophe Debove
    Hello,

    <?xml version="1.0"?>
    <Products>
      <product>
        <productId>1</productId>
        <textdate>11/11/2011</textdate>
        <price>200</price>
      </product>
      <product>
        <productId>6</productId>
        <textdate>11/11/2011</textdate>
        <price>100</price>
      </product>
      <product>
        <productId>1</productId>
        <textdate>16/11/2011</textdate>
        <price>290</price>
      </product>
    </Products>

    I have this XML and I want an XSLT transformation that regroups the products, something like this:

    { product 1 :
    11/11/2011 - 200
    16/11/2011 - 290
    }
    { product 6
    11/11/2011 - 100
    }

    I work with XSLT 1.0, ASP.NET, C# and XslCompiledTransformation.
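
    In XSLT 1.0 this kind of grouping is normally done with Muenchian grouping: declare a key on productId, then visit only the first product of each group and pull in the rest via the key. A minimal sketch that emits text close to the desired layout:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <!-- Index all product elements by their productId -->
      <xsl:key name="by-product" match="product" use="productId"/>

      <xsl:template match="/Products">
        <!-- Visit only the first product of each productId group -->
        <xsl:for-each select="product[generate-id() = generate-id(key('by-product', productId)[1])]">
          <xsl:text>{ product </xsl:text>
          <xsl:value-of select="productId"/>
          <xsl:text>&#10;</xsl:text>
          <!-- List every row belonging to this group -->
          <xsl:for-each select="key('by-product', productId)">
            <xsl:value-of select="concat(textdate, ' - ', price, '&#10;')"/>
          </xsl:for-each>
          <xsl:text>}&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>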

    Read the article

  • When destroying one record, another one gets destroyed

    - by normalocity
    Products (like an iPod Classic): has_many :listings, :dependent => :destroy
    Listings (like "My name is Joe, and I have an iPod for sale"): belongs_to :product

    So, if I delete a given Product, all the Listings that point to it get deleted. That makes sense, and is by design. However, I am writing a "merge" function, where you merge two Products into one and combine their Listings. So, let's say my two products are "iPod Color" and "iPod Classic", and I want to merge the two. What I want to do is say "iPod Color, merge into iPod Classic", and the result should be that:

    1. All the iPod Color Listings are re-pointed to the iPod Classic product
    2. After the product_id change, the Listing(s) are saved
    3. I then delete the "iPod Color" product

    Well, that should all work fine, without deleting any Listings. However, I've got this controller, and for whatever reason, when I destroy the "iPod Color" Product, even after confirming that the Listings have been moved to "iPod Classic" and saved to the database, the Listings that were previously pointed at "iPod Color" get destroyed as well, and I can't figure out why. It's as if they retain some kind of link to the destroyed product and therefore get destroyed themselves. What painfully obvious thing am I missing?

    def merge
      merging_from = Product.find(params[:id])
      merging_to = Product.find_by_model(params[:merging_to])
      unless merging_to.nil?
        unless merging_from.nil?
          unless merging_from == merging_to # you don't want to merge something with itself
            merging_from.listings.each do |l|
              l.product = merging_to
              l.save
            end
            # through some debugging, I've confirmed that my missing Listings are
            # disappearing as a result of the following destroy call
            merging_from.destroy
          end
        end
      end
    end
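
    One explanation consistent with the behaviour (hedged, since the details vary by Rails version): iterating over merging_from.listings loads and caches that association, and the :dependent => :destroy callback later walks the cached collection instead of re-querying, so the re-pointed Listings are destroyed along with the Product. A sketch of the same action that clears the stale association before destroying - reload is standard ActiveRecord, and the surrounding names simply mirror the question:

    def merge
      merging_from = Product.find(params[:id])
      merging_to   = Product.find_by_model(params[:merging_to])

      if merging_from && merging_to && merging_from != merging_to
        merging_from.listings.each do |l|
          l.product = merging_to
          l.save!
        end

        # The listings association on merging_from is now stale; reloading the
        # record clears the cached collection, so :dependent => :destroy finds
        # nothing left to destroy.
        merging_from.reload
        merging_from.destroy
      end
    end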

    Read the article

  • Which is faster: Appropriate data input or appropriate data structure?

    - by Anon
    I have a dataset whose columns look like this:

    Consumer ID | Product ID | Time Period | Product Score
    1           | 1          | 1           | 2
    2           | 1          | 2           | 3

    and so on. As part of a program (written in C) I need to process the product scores given by all consumers for a particular product and time period combination, for all possible combinations. Suppose that there are 3 products and 2 time periods. Then I need to process the product scores for all possible combinations, as shown below:

    Product ID | Time Period
    1          | 1
    1          | 2
    2          | 1
    2          | 2
    3          | 1
    3          | 2

    I will need to process the data along the above lines many times (> 10k) and the dataset is fairly large (e.g., 48k consumers, 100 products, 24 time periods, etc.), so speed is an issue. I came up with two ways to process the data and am wondering which is the faster approach, or perhaps it does not matter much (speed matters, but not at the cost of undue maintenance/readability):

    1. Sort the data on product id and time period and then loop through the data to extract data for all possible combinations.
    2. Store the consumer ids of all consumers who provided product scores for a particular combination of product id and time period and process the data accordingly.

    Any thoughts? Any other way to speed up the processing? Thanks
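
    A sketch of the first approach in C, assuming the records are already in memory; the Record struct and the process_group callback are placeholders for whatever the real program does with each (product, period) group. Sorting once and sweeping group boundaries costs O(n log n + n) instead of rescanning the whole array for every combination:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int consumer_id;
        int product_id;
        int time_period;
        int score;
    } Record;

    /* Order by (product_id, time_period) so each combination is contiguous. */
    static int cmp_product_period(const void *a, const void *b)
    {
        const Record *x = a, *y = b;
        if (x->product_id != y->product_id)
            return x->product_id - y->product_id;
        return x->time_period - y->time_period;
    }

    /* Placeholder for the real per-combination work. */
    static void process_group(const Record *r, size_t n)
    {
        printf("product %d, period %d: %zu scores\n",
               r[0].product_id, r[0].time_period, n);
    }

    static void process_all(Record *data, size_t n)
    {
        qsort(data, n, sizeof *data, cmp_product_period);
        size_t start = 0;
        for (size_t i = 1; i <= n; i++) {
            if (i == n ||
                data[i].product_id  != data[start].product_id ||
                data[i].time_period != data[start].time_period) {
                process_group(&data[start], i - start);
                start = i;
            }
        }
    }

    int main(void)
    {
        Record sample[] = {
            {1, 1, 1, 2}, {2, 1, 2, 3}, {3, 2, 1, 5}, {4, 1, 1, 4},
        };
        process_all(sample, sizeof sample / sizeof sample[0]);
        return 0;
    }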

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams and I will not be going in depth with Scrum but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010 Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010 Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs own project, backlog, budget and Team. Scenario: A product is developed RTM 1.0 is released and gets great sales.  Extra features are demanded but the new version will have double to price to pay to recover costs, work is approved by the guys with budget and a few sprints later RTM 2.0 is released.  Sales a very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms merge (forward integration) from parent to child (same direction as the branch), and merge  (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds don’t you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching  structures from he... We should somehow convince everyone that there is a happy between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: The Main line. This is where your stable code lives, where any build has known entities, always passes, and has a happy test suite that passes as well.

Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:

- you and your team are working on a new set of features and the customer wants a change to his current version?
- you are working on two features and the customer decides to abandon one of them?
- you have two teams working on different feature sets and their changes start interfering with each other?
- I just use labels instead of branches?

That's a lot of "what ifs", but there is a simple way of preventing this: branching.

In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

Figure: Creating a single feature branch means you can isolate the development work on that branch.

It's standard practice for large projects with lots of developers to use feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch that contains all the changes for the current sprint. The Main branch has its permissions changed so contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or check-outs on the "Main" line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint.

Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would not yet have artifacts in version control and therefore no need for isolation.

Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product.

The process of completing your sprint development (a rough command-line sketch follows the list):

1. The Team completes their work according to their definition of done.
2. Merge from "Main" into "Sprint1" (Forward Integration). Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
3. Merge from "Sprint1" into "Main" (Reverse Integration) to commit your changes, and check in.
4. Delete the Sprint1 branch and check in. Note: the Sprint 1 branch is no longer required, as its useful life has been concluded.
5. Done.
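As a rough illustration only (this sketch is not from the original post), the forward and reverse integration steps above might look something like the following with the TFS 2010 command-line client; the $/Product server paths and the check-in comments are assumptions:

rem Forward Integration: pull Main into the sprint branch and stabilize
tf merge $/Product/Main $/Product/Sprint1 /recursive
tf resolve
tf checkin /comment:"FI: Main into Sprint1"

rem Reverse Integration: push the finished sprint back into Main
tf merge $/Product/Sprint1 $/Product/Main /recursive
tf checkin /comment:"RI: Sprint1 into Main"

rem The sprint branch has served its purpose
tf delete $/Product/Sprint1
tf checkin /comment:"Delete Sprint1 branch"

These commands assume a workspace that maps both branches; the Source Control Explorer merge wizard does roughly the same thing for you through the UI.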
But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.

Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

In 99% of all projects I have been involved in or watched, a "shippable product" only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80s brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what we at SSW call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please in our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: If you find a deviation from the expected result you fix it on the Release branch.

If during your final testing or your "Test Please" you find there are issues or bugs, then you should fix them on the Release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.

Even once you have stabilized and released, you should not delete the Release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

Note: Don't get forced by the business into adding features to a Release branch; that indicates the unspoken requirement is really a product spin-off. In this case you can create a new Team Project, branch from the required Release branch to create a new Main branch for that product, and create a whole new backlog to work from.

Figure: When the Team decides it is happy with the product you can create an RTM branch.

Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable; with a branch, if a dispute is raised by the customer, you can produce a verifiable version of the source code for an independent party to check.
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal retention policy, which in the UK is usually 7 years.

Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes.

Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment.

Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance Tests, Acceptance Test tracking, Unit Tests, Load Tests, Web Tests and all the other good engineering practices that help produce reliable software.

Figure: After the Sprint Planning meeting the process begins again.

Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

How do we handle bugs in production that can't wait?

Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FIs from Main to Sprint 2, I guess. Your "Figure: Merging Sprint 2 back into Main" covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

I agree, and if you have a single Scrum Team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more runs are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
3. Make Time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: the Team must agree that it can still meet the Sprint Goal.
4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: this can be very costly if the current sprint has already had a lot of work completed, as that work will be lost.

The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe.

Figure: Real world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please in our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

Figure: After you have fixed the bug you need to ship again.

You then need to again create an RTM branch to hold, in escrow, the version of the code you released.

Figure: Main is now out of sync with your Release.

We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the Sprint branch; are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

Figure: Getting the changes in Main into Sprint 2 is very important.

The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they get any further out of sync with Main. Note: avoid the "Big Bang Merge" at all costs.

Figure: Merging Sprint 2 back into Main (the Reverse Integration), and R0 terminates.

Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

Figure: The logical conclusion. This then allows the creation of the next release.

By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010

    Read the article

  • Steve Miranda is the Next Guest on The Bill Kutik Radio Show®

    - by Jay Richey, HCM Product Marketing
    Be sure to catch Steve Miranda, Senior Vice President for Oracle Fusion Development, tomorrow on The Bill Kutik Radio Show®.  Bill will be asking the tough questions once again and Steve will be answering.  It is sure to be a lively discussion, with more details on Fusion and Oracle's co-existence strategy with PeopleSoft, E-Business Suite, and JD Edwards HCM applications.  Wednesday, March 28, at noon ET, 9 am PT.  Listen live, afterward to the replay, or download from iTunes. http://www.knowledgeinfusion.com/ondemand/docs/DOC-9903 Produced by Knowledge Infusion and hosted by independent industry analyst Bill Kutik, the bi-weekly interview show provides leading HR business content and insight into up-to-the-minute trends.

    Read the article

  • WPF ListView as a DataGrid – Part 3

    - by psheriff
    I have had a lot of great feedback on the blog post about turning the ListView into a DataGrid by creating GridViewColumn objects on the fly. So, in the last 2 parts, I showed a couple of different methods for accomplishing this. Let's now look at one more, and that is to use Reflection to extract the properties from a Product, Customer, or Employee object to create the columns. Yes, Reflection is a slower approach, but you could create the columns one time then cache the View object for re-use. Another potential drawback is you may have columns in your object that you do not wish to display on your ListView. But, just because so many people asked, here is how to accomplish this using Reflection.

Figure 1: Use Reflection to create GridViewColumns.

Using Reflection to gather property names is actually quite simple. First you need to pass any type (Product, Customer, Employee, etc.) to a method like I did in my last two blog posts on this subject. Below is the method that I created in the WPFListViewCommon class that now uses Reflection.

C#
// Requires: using System; using System.Reflection;
//           using System.Windows.Controls; using System.Windows.Data;
public static GridView CreateGridViewColumns(Type anyType)
{
  // Create the GridView
  GridView gv = new GridView();
  GridViewColumn gvc;

  // Get the public instance properties.
  PropertyInfo[] propInfo =
         anyType.GetProperties(BindingFlags.Public |
                               BindingFlags.Instance);

  foreach (PropertyInfo item in propInfo)
  {
    gvc = new GridViewColumn();
    gvc.DisplayMemberBinding = new Binding(item.Name);
    gvc.Header = item.Name;
    gvc.Width = Double.NaN;
    gv.Columns.Add(gvc);
  }

  return gv;
}

VB.NET
Public Shared Function CreateGridViewColumns( _
  ByVal anyType As Type) As GridView
  ' Create the GridView
  Dim gv As New GridView()
  Dim gvc As GridViewColumn

  ' Get the public instance properties.
  Dim propInfo As PropertyInfo() = _
    anyType.GetProperties(BindingFlags.Public Or _
                          BindingFlags.Instance)

  For Each item As PropertyInfo In propInfo
    gvc = New GridViewColumn()
    gvc.DisplayMemberBinding = New Binding(item.Name)
    gvc.Header = item.Name
    gvc.Width = [Double].NaN
    gv.Columns.Add(gvc)
  Next

  Return gv
End Function

The key to using Reflection is the GetProperties method on the type you pass in. When you pass in a Product object as a Type, you can use the GetProperties method and specify, via flags, which properties you wish to return. In the code that I wrote, I am retrieving only the public properties and only those that are instance properties. I do not want any static/Shared properties or private properties. GetProperties returns an array of PropertyInfo objects. You can loop through this array and build your GridViewColumn objects by reading the Name property from each PropertyInfo object.

Build the Product Screen

To populate the ListView shown in Figure 1, you might write code like the following:

C#
private void CollectionSample()
{
  Product prod = new Product();

  // Setup the GridView Columns
  lstData.View =
     WPFListViewCommon.CreateGridViewColumns(typeof(Product));
  lstData.DataContext = prod.GetProducts();
}

VB.NET
Private Sub CollectionSample()
  Dim prod As New Product()

  ' Setup the GridView Columns
  lstData.View = WPFListViewCommon.CreateGridViewColumns( _
       GetType(Product))
  lstData.DataContext = prod.GetProducts()
End Sub

All you need to do now is to pass in a Type object from your Product class, which you can get by using the typeof() operator in C# or the GetType() function in VB. That's all there is to it!
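For context, a minimal XAML sketch of the ListView that the code above populates might look like the following; the lstData name matches the code-behind, while the window class name, title and sizes are assumptions:

<Window x:Class="WpfListViewSample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="WPF ListView as a DataGrid" Height="350" Width="525">
  <Grid>
    <!-- ItemsSource binds to the DataContext set in CollectionSample() -->
    <ListView x:Name="lstData" ItemsSource="{Binding}" />
  </Grid>
</Window>

The View property is deliberately left out of the XAML; it is assigned at runtime by CreateGridViewColumns.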
Summary

There are so many different ways to approach the same problem in programming. That is what makes programming so much fun! In this blog post I showed you how to create ListView columns on the fly using Reflection. This gives you a lot of flexibility without having to write extra code as was done previously.

NOTE: You can download the complete sample code (in both VB and C#) at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid – Part 3" from the drop-down.

Good Luck with your Coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".

    Read the article

  • Catch the Replay! Steve Miranda on The Bill Kutik Radio Show®

    - by Jay Richey, HCM Product Marketing
    Steve Miranda, Senior Vice President for Oracle Fusion Development, was the guest star on this past Wednesday's The Bill Kutik Radio Show®.  Catch the replay or download to iTunes to hear Bill's hard-hitting questions and Steve's candid answers.  http://www.knowledgeinfusion.com/ondemand/docs/DOC-9903 Produced by Knowledge Infusion and hosted by independent industry analyst Bill Kutik, the bi-weekly interview show provides leading HR business content and insight into up-to-the-minute trends.

    Read the article

  • I can't install using Wubi due to permission denied error

    - by Taksh Sharma
    I can't install ubuntu 11.10 inside my windows 7. It shows permission denied while installation. It gave a log file having the following data: 03-29 20:19 DEBUG TaskList: # Running tasklist... 03-29 20:19 DEBUG TaskList: ## Running select_target_dir... 03-29 20:19 INFO WindowsBackend: Installing into D:\ubuntu 03-29 20:19 DEBUG TaskList: ## Finished select_target_dir 03-29 20:19 DEBUG TaskList: ## Running create_dir_structure... 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\disks 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\install 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\install\boot 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\disks\boot 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\disks\boot\grub 03-29 20:19 DEBUG CommonBackend: Creating dir D:\ubuntu\install\boot\grub 03-29 20:19 DEBUG TaskList: ## Finished create_dir_structure 03-29 20:19 DEBUG TaskList: ## Running uncompress_target_dir... 03-29 20:19 DEBUG TaskList: ## Finished uncompress_target_dir 03-29 20:19 DEBUG TaskList: ## Running create_uninstaller... 03-29 20:19 DEBUG WindowsBackend: Copying uninstaller E:\wubi.exe -> D:\ubuntu\uninstall-wubi.exe 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString D:\ubuntu\uninstall-wubi.exe 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir D:\ubuntu 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon D:\ubuntu\Ubuntu.ico 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 11.10-rev241 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 03-29 20:19 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 03-29 20:19 DEBUG TaskList: ## Finished create_uninstaller 03-29 20:19 DEBUG TaskList: ## Running copy_installation_files... 03-29 20:19 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pylB911.tmp\data\custom-installation -> D:\ubuntu\install\custom-installation 03-29 20:19 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pylB911.tmp\winboot -> D:\ubuntu\winboot 03-29 20:19 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pylB911.tmp\data\images\Ubuntu.ico -> D:\ubuntu\Ubuntu.ico 03-29 20:19 DEBUG TaskList: ## Finished copy_installation_files 03-29 20:19 DEBUG TaskList: ## Running get_iso... 03-29 20:19 DEBUG TaskList: New task copy_file 03-29 20:19 DEBUG TaskList: ### Running copy_file... 
03-29 20:23 ERROR TaskList: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:23 DEBUG TaskList: # Cancelling tasklist 03-29 20:23 DEBUG TaskList: New task check_iso 03-29 20:23 ERROR root: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 130, in select_task File "\lib\wubi\application.py", line 205, in run_cd_menu File "\lib\wubi\application.py", line 120, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:23 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path' Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 579, in get_iso File "\lib\wubi\backends\common\backend.py", line 565, in use_iso AttributeError: 'WindowsBackend' object has no attribute 'iso_path' 03-29 20:23 DEBUG TaskList: # Cancelling tasklist 03-29 20:23 DEBUG TaskList: # Finished tasklist 03-29 20:29 INFO root: === wubi 11.10 rev241 === 03-29 20:29 DEBUG root: Logfile is c:\users\home\appdata\local\temp\wubi-11.10-rev241.log 03-29 20:29 DEBUG root: sys.argv = ['main.pyo', '--exefile="E:\\wubi.exe"'] 03-29 20:29 DEBUG CommonBackend: data_dir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data 03-29 20:29 DEBUG WindowsBackend: 7z=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\bin\7z.exe 03-29 20:29 DEBUG WindowsBackend: startup_folder=C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup 03-29 20:29 DEBUG CommonBackend: Fetching basic info... 03-29 20:29 DEBUG CommonBackend: original_exe=E:\wubi.exe 03-29 20:29 DEBUG CommonBackend: platform=win32 03-29 20:29 DEBUG CommonBackend: osname=nt 03-29 20:29 DEBUG CommonBackend: language=en_IN 03-29 20:29 DEBUG CommonBackend: encoding=cp1252 03-29 20:29 DEBUG WindowsBackend: arch=amd64 03-29 20:29 DEBUG CommonBackend: Parsing isolist=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\isolist.ini 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-i386 03-29 20:29 DEBUG WindowsBackend: Fetching host info... 
03-29 20:29 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 03-29 20:29 DEBUG WindowsBackend: windows version=vista 03-29 20:29 DEBUG WindowsBackend: windows_version2=Windows 7 Home Basic 03-29 20:29 DEBUG WindowsBackend: windows_sp=None 03-29 20:29 DEBUG WindowsBackend: windows_build=7601 03-29 20:29 DEBUG WindowsBackend: gmt=5 03-29 20:29 DEBUG WindowsBackend: country=IN 03-29 20:29 DEBUG WindowsBackend: timezone=Asia/Calcutta 03-29 20:29 DEBUG WindowsBackend: windows_username=Home 03-29 20:29 DEBUG WindowsBackend: user_full_name=Home 03-29 20:29 DEBUG WindowsBackend: user_directory=C:\Users\Home 03-29 20:29 DEBUG WindowsBackend: windows_language_code=1033 03-29 20:29 DEBUG WindowsBackend: windows_language=English 03-29 20:29 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 03-29 20:29 DEBUG WindowsBackend: bootloader=vista 03-29 20:29 DEBUG WindowsBackend: system_drive=Drive(C: hd 61135.1523438 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(C: hd 61135.1523438 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(D: hd 12742.5507813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free cdfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(F: cd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(G: hd 93.22265625 mb free fat32) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(Q: hd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: uninstaller_path=D:\ubuntu\uninstall-wubi.exe 03-29 20:29 DEBUG WindowsBackend: previous_target_dir=D:\ubuntu 03-29 20:29 DEBUG WindowsBackend: previous_distro_name=Ubuntu 03-29 20:29 DEBUG WindowsBackend: keyboard_id=67699721 03-29 20:29 DEBUG WindowsBackend: keyboard_layout=us 03-29 20:29 DEBUG WindowsBackend: keyboard_variant= 03-29 20:29 DEBUG CommonBackend: python locale=('en_IN', 'cp1252') 03-29 20:29 DEBUG CommonBackend: locale=en_IN 03-29 20:29 DEBUG WindowsBackend: total_memory_mb=3893.859375 03-29 20:29 DEBUG CommonBackend: Searching ISOs on USB devices 03-29 20:29 DEBUG CommonBackend: Searching for local CDs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid 
Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: parsing info from str=Ubuntu 11.10 "Oneiric Ocelot" - Release i386 (20111012) 03-29 20:29 DEBUG Distro: parsed info={'name': 'Ubuntu', 'subversion': 'Release', 'version': '11.10', 'build': '20111012', 'codename': 'Oneiric Ocelot', 'arch': 'i386'} 03-29 20:29 INFO Distro: Found a valid CD for Ubuntu: E:\ 03-29 20:29 INFO root: Running the CD menu... 03-29 20:29 DEBUG WindowsFrontend: __init__... 03-29 20:29 DEBUG WindowsFrontend: on_init... 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO root: CD menu finished 03-29 20:29 INFO root: Already installed, running the uninstaller... 03-29 20:29 INFO root: Running the uninstaller... 03-29 20:29 INFO CommonBackend: This is the uninstaller running 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO root: Received settings 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 DEBUG TaskList: # Running tasklist... 03-29 20:29 DEBUG TaskList: ## Running Remove bootloader entry... 
03-29 20:29 DEBUG WindowsBackend: Could not find bcd id 03-29 20:29 DEBUG WindowsBackend: undo_bootini C: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(C: hd 61135.1523438 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: undo_bootini D: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(D: hd 12742.5507813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: undo_bootini G: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(G: hd 93.22265625 mb free fat32) 03-29 20:29 DEBUG WindowsBackend: undo_bootini Q: 03-29 20:29 DEBUG WindowsBackend: undo_configsys Drive(Q: hd 0.0 mb free ) 03-29 20:29 DEBUG TaskList: ## Finished Remove bootloader entry 03-29 20:29 DEBUG TaskList: ## Running Remove target dir... 03-29 20:29 DEBUG CommonBackend: Deleting D:\ubuntu 03-29 20:29 DEBUG TaskList: ## Finished Remove target dir 03-29 20:29 DEBUG TaskList: ## Running Remove registry key... 03-29 20:29 DEBUG TaskList: ## Finished Remove registry key 03-29 20:29 DEBUG TaskList: # Finished tasklist 03-29 20:29 INFO root: Almost finished uninstalling 03-29 20:29 INFO root: Finished uninstallation 03-29 20:29 DEBUG CommonBackend: Fetching basic info... 03-29 20:29 DEBUG CommonBackend: original_exe=E:\wubi.exe 03-29 20:29 DEBUG CommonBackend: platform=win32 03-29 20:29 DEBUG CommonBackend: osname=nt 03-29 20:29 DEBUG WindowsBackend: arch=amd64 03-29 20:29 DEBUG CommonBackend: Parsing isolist=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\isolist.ini 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Xubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Ubuntu-i386 03-29 20:29 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 03-29 20:29 DEBUG CommonBackend: Adding distro Kubuntu-i386 03-29 20:29 DEBUG WindowsBackend: Fetching host info... 
03-29 20:29 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 03-29 20:29 DEBUG WindowsBackend: windows version=vista 03-29 20:29 DEBUG WindowsBackend: windows_version2=Windows 7 Home Basic 03-29 20:29 DEBUG WindowsBackend: windows_sp=None 03-29 20:29 DEBUG WindowsBackend: windows_build=7601 03-29 20:29 DEBUG WindowsBackend: gmt=5 03-29 20:29 DEBUG WindowsBackend: country=IN 03-29 20:29 DEBUG WindowsBackend: timezone=Asia/Calcutta 03-29 20:29 DEBUG WindowsBackend: windows_username=Home 03-29 20:29 DEBUG WindowsBackend: user_full_name=Home 03-29 20:29 DEBUG WindowsBackend: user_directory=C:\Users\Home 03-29 20:29 DEBUG WindowsBackend: windows_language_code=1033 03-29 20:29 DEBUG WindowsBackend: windows_language=English 03-29 20:29 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 03-29 20:29 DEBUG WindowsBackend: bootloader=vista 03-29 20:29 DEBUG WindowsBackend: system_drive=Drive(C: hd 61134.8632813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(C: hd 61134.8632813 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(D: hd 12953.140625 mb free ntfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free cdfs) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(F: cd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(G: hd 93.22265625 mb free fat32) 03-29 20:29 DEBUG WindowsBackend: drive=Drive(Q: hd 0.0 mb free ) 03-29 20:29 DEBUG WindowsBackend: uninstaller_path=None 03-29 20:29 DEBUG WindowsBackend: previous_target_dir=None 03-29 20:29 DEBUG WindowsBackend: previous_distro_name=None 03-29 20:29 DEBUG WindowsBackend: keyboard_id=67699721 03-29 20:29 DEBUG WindowsBackend: keyboard_layout=us 03-29 20:29 DEBUG WindowsBackend: keyboard_variant= 03-29 20:29 DEBUG WindowsBackend: total_memory_mb=3893.859375 03-29 20:29 DEBUG CommonBackend: Searching ISOs on USB devices 03-29 20:29 DEBUG CommonBackend: Searching for local CDs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG 
Distro: checking whether C:\Users\Home\AppData\Local\Temp\pyl3487.tmp is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 03-29 20:29 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 03-29 20:29 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 03-29 20:29 INFO Distro: Found a valid CD for Ubuntu: E:\ 03-29 20:29 INFO root: Running the installer... 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:29 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_IN', 'en'] 03-29 20:30 DEBUG WinuiInstallationPage: target_drive=C:, installation_size=8000MB, distro_name=Ubuntu, language=en_US, locale=en_US.UTF-8, username=taksh 03-29 20:30 INFO root: Received settings 03-29 20:30 INFO WinuiPage: appname=wubi, localedir=C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\translations, languages=['en_US', 'en'] 03-29 20:30 DEBUG TaskList: # Running tasklist... 03-29 20:30 DEBUG TaskList: ## Running select_target_dir... 03-29 20:30 INFO WindowsBackend: Installing into C:\ubuntu 03-29 20:30 DEBUG TaskList: ## Finished select_target_dir 03-29 20:30 DEBUG TaskList: ## Running create_dir_structure... 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\disks 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\install 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\install\boot 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\disks\boot 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\disks\boot\grub 03-29 20:30 DEBUG CommonBackend: Creating dir C:\ubuntu\install\boot\grub 03-29 20:30 DEBUG TaskList: ## Finished create_dir_structure 03-29 20:30 DEBUG TaskList: ## Running uncompress_target_dir... 03-29 20:30 DEBUG TaskList: ## Finished uncompress_target_dir 03-29 20:30 DEBUG TaskList: ## Running create_uninstaller... 
03-29 20:30 DEBUG WindowsBackend: Copying uninstaller E:\wubi.exe -> C:\ubuntu\uninstall-wubi.exe 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString C:\ubuntu\uninstall-wubi.exe 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir C:\ubuntu 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon C:\ubuntu\Ubuntu.ico 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 11.10-rev241 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 03-29 20:30 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 03-29 20:30 DEBUG TaskList: ## Finished create_uninstaller 03-29 20:30 DEBUG TaskList: ## Running copy_installation_files... 03-29 20:30 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\custom-installation -> C:\ubuntu\install\custom-installation 03-29 20:30 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\winboot -> C:\ubuntu\winboot 03-29 20:30 DEBUG WindowsBackend: Copying C:\Users\Home\AppData\Local\Temp\pyl3487.tmp\data\images\Ubuntu.ico -> C:\ubuntu\Ubuntu.ico 03-29 20:30 DEBUG TaskList: ## Finished copy_installation_files 03-29 20:30 DEBUG TaskList: ## Running get_iso... 03-29 20:30 DEBUG TaskList: New task copy_file 03-29 20:30 DEBUG TaskList: ### Running copy_file... 03-29 20:34 ERROR TaskList: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:34 DEBUG TaskList: # Cancelling tasklist 03-29 20:34 DEBUG TaskList: New task check_iso 03-29 20:34 ERROR root: [Errno 13] Permission denied Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 130, in select_task File "\lib\wubi\application.py", line 205, in run_cd_menu File "\lib\wubi\application.py", line 120, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\utils.py", line 202, in copy_file IOError: [Errno 13] Permission denied 03-29 20:34 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path' Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 579, in get_iso File "\lib\wubi\backends\common\backend.py", line 565, in use_iso AttributeError: 'WindowsBackend' object has no attribute 'iso_path' 03-29 20:34 DEBUG TaskList: # Cancelling tasklist 03-29 20:34 DEBUG TaskList: # Finished tasklist I have no idea what's the problem is. I'm a kind of newbie. I'm using win7 64bit, and installing as an administrator. Please help me out!

    Read the article

  • Ldap ssh authentication is super slow... any way to speed it up?

    - by Johnathon
    I am running OpenSUSE. Here is the output of ssh -vvv: OpenSSH_5.8p1, OpenSSL 1.0.0c 2 Dec 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to <ipaddress> [ipaddress] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug3: Incorrect RSA1 identifier debug3: Could not load "/root/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /root/.ssh/id_rsa type 1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "ipaddress" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /root/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: 
kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 138/256 debug2: bits set: 529/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA cb:7f:ff:2e:65:28:f0:95:e6:8a:71:24:2a:67:02:2b debug3: load_hostkeys: loading entries for host "<ipaddress>" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /root/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug1: Host '<ipaddress>' is known and matches the RSA host key. 
debug1: Found key in /root/.ssh/known_hosts:4 debug2: bits set: 504/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/id_rsa (0xb789d5c8) debug2: key: /root/.ssh/id_dsa ((nil)) debug2: key: /root/.ssh/id_ecdsa ((nil)) debug1: Authentications that can continue: publickey,keyboard-interactive debug3: start over, passed a different list publickey,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /root/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply It hangs here for a good 30 seconds to a minute then debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug1: Trying private key: /root/.ssh/id_ecdsa debug3: no such identity: /root/.ssh/id_ecdsa debug2: we did not send a packet, disable method debug3: authmethod_lookup keyboard-interactive debug3: remaining preferred: password debug3: authmethod_is_enabled keyboard-interactive debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 I added PubkeyAuthentication no to the /etc/ssh/ssh_config and the /etc/ssh/sshd_config which makes it faster getting to the password prompt, but the password prompt still takes some time. Any way to fix that? Here is where the password hangs debug3: packet_send2: adding 32 (len 25 padlen 7 extra_pad 64) debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 0 debug3: packet_send2: adding 48 (len 10 padlen 6 extra_pad 64) debug1: Authentication succeeded (keyboard-interactive). Authenticated to ipaddress ([ipaddress]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. FIXED!!!!!!!!!!!!!! What is did... In the nsswitch_conf I had ldap included in the group and passwd which slows it down a lot. Thank you everybody for your input passwd: compat group: files hosts: files dns networks: files dns
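To make the fix above concrete, the change amounts to something like this in /etc/nsswitch.conf; the "before" lines are an assumption based on the description, while the "after" lines are the ones quoted in the fix:

# Before: every passwd/group lookup also queried LDAP, stalling sshd
passwd:   compat ldap
group:    files ldap
hosts:    files dns
networks: files dns

# After: local files only, so lookups return immediately
passwd:   compat
group:    files
hosts:    files dns
networks: files dns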

    Read the article

  • Integrating NetBeans for Raspberry Pi Java Development

    - by speakjava
    Raspberry Pi IDE Java Development

The Raspberry Pi is an incredible device for building embedded Java applications but, despite being able to run an IDE on the Pi, it really pushes things to the limit. It's much better to use a PC or laptop to develop the code and then deploy and test on the Pi. What I thought I'd do in this blog entry was to run through the steps necessary to set up NetBeans on a PC for Java code development, with automatic deployment to the Raspberry Pi as part of the build process.

I will assume that your starting point is a Raspberry Pi with an SD card that has one of the latest Raspbian images on it. This is good because this now includes the JDK 7 as part of the distro, so there is no need to download and install a separate JDK. I will also assume that you have installed the JDK and NetBeans on your PC. These can be downloaded here.

There are numerous approaches you can take to this, including mounting the file system from the Raspberry Pi remotely on your development machine. I tried this and I found that NetBeans got rather upset if the file system disappeared, either through network interruption or the Raspberry Pi being turned off. The following method uses copying over SSH, which will fail more gracefully if the Pi is not responding.

Step 1: Enable SSH on the Raspberry Pi

To run the Java applications you create, you will need to start Java on the Raspberry Pi with the appropriate class name, classpath and parameters. For non-JavaFX applications you can do this either from the Raspberry Pi desktop or, if you do not have a monitor connected, through a remote command line. To execute the remote command line you need to enable SSH (a secure shell login over the network) and connect using an application like PuTTY.

You can enable SSH when you first boot the Raspberry Pi, as the raspi-config program runs automatically. You can also run it at any time afterwards by running the command:

sudo raspi-config

This will bring up a menu of options. Select '8 Advanced Options' and on the next screen select 'A4 SSH'. Select 'Enable' and the task is complete.

Step 2: Configure Raspberry Pi Networking

By default, the Raspbian distribution configures the ethernet connection to use DHCP rather than a static IP address. You can continue to use DHCP if you want, but to avoid having to potentially change settings whenever you reboot the Pi, using a static IP address is simpler.

To configure this on the Pi you need to edit the /etc/network/interfaces file. You will need to do this as root using the sudo command, so something like sudo vi /etc/network/interfaces. In this file you will see this line:

iface eth0 inet dhcp

This needs to be changed to the following:

iface eth0 inet static
    address 10.0.0.2
    gateway 10.0.0.254
    netmask 255.255.255.0

You will need to change the values in red to an appropriate IP address and to match the address of your gateway.

Step 3: Create a Public-Private Key Pair On Your Development Machine

How you do this will depend on which operating system you are using:

Mac OSX or Linux

Run the command:

ssh-keygen -t rsa

Press ENTER/RETURN to accept the default destination for saving the key. We do not need a passphrase, so simply press ENTER/RETURN for an empty one and once more to confirm.

The key will be created in the file .ssh/id_rsa.pub in your home directory. Display the contents of this file using the cat command:

cat ~/.ssh/id_rsa.pub

Open a window, SSH to the Raspberry Pi and login.
Change directory to .ssh and edit the authorized_keys file (don't worry if the file does not exist).  Copy and paste the contents of the id_rsa.pub file to the authorized_keys file and save it. Windows Since Windows is not a UNIX derivative operating system it does not include the necessary key generating software by default.  To generate the key I used puttygen.exe which is available from the same site that provides the PuTTY application, here. Download this and run it on your Windows machine.  Follow the instructions to generate a key.  I remove the key comment, but you can leave that if you want. Click "Save private key", confirm that you don't want to use a passphrase and select a filename and location for the key. Copy the public key from the part of the window marked, "Public key for pasting into OpenSSH authorized_keys file".  Use PuTTY to connect to the Raspberry Pi and login.  Change directory to .ssh and edit the authorized_keys file (don't worry if this does not exist).  Paste the key information at the end of this file and save it. Logout and then start PuTTY again.  This time we need to create a saved session using the private key.  Type in the IP address of the Raspberry Pi in the "Hostname (or IP address)" field and expand "SSH" under the "Connection" category.  Select "Auth" (see the screen shot below). Click the "Browse" button under "Private key file for authentication" and select the file you saved from puttygen. Go back to the "Session" category and enter a short name in the saved sessions field, as shown below.  Click "Save" to save the session. Step 4: Test The Configuration You should now have the ability to use scp (Mac/Linux) or pscp.exe (Windows) to copy files from your development machine to the Raspberry Pi without needing to authenticate by typing in a password (so we can automate the process in NetBeans).  It's a good idea to test this using something like: scp /tmp/foo [email protected]:/tmp on Linux or Mac or pscp.exe foo pi@raspi:/tmp on Windows (Note that we use the saved configuration name instead of the IP address or hostname so the public key is picked up). pscp.exe is another tool available from the creators of PuTTY. Step 5: Configure the NetBeans Build Script Start NetBeans and create a new project (or open an existing one that you want to deploy automatically to the Raspberry Pi). Select the Files tab in the explorer window and expand your project.  You will see a build.xml file.  Double click this to edit it. This file will mostly be comments.  At the end (but within the </project> tag) add the XML for <target name="-post-jar">, shown below Here's the code again in case you want to use cut-and-paste: <target name="-post-jar">   <echo level="info" message="Copying dist directory to remote Pi"/>   <exec executable="scp" dir="${basedir}">     <arg line="-r"/>     <arg value="dist"/>     <arg value="[email protected]:NetBeans/CopyTest"/>   </exec>  </target> For Windows it will be slightly different: <target name="-post-jar">   <echo level="info" message="Copying dist directory to remote Pi"/>   <exec executable="C:\pi\putty\pscp.exe" dir="${basedir}">     <arg line="-r"/>     <arg value="dist"/>     <arg value="pi@raspi:NetBeans/CopyTest"/>   </exec> </target> You will also need to ensure that pscp.exe is in your PATH (or specify a fully qualified pathname). From now on when you clean and build the project the dist directory will automatically be copied to the Raspberry Pi ready for testing.
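Once the dist directory has been copied across, you can launch the application remotely over SSH rather than from the Pi's desktop. A minimal sketch, assuming the project is called CopyTest and NetBeans produced an executable jar (the jar name, paths and main class are assumptions):

ssh [email protected] 'java -jar NetBeans/CopyTest/dist/CopyTest.jar'

On Windows you can do the same with plink.exe (another tool from the PuTTY suite), reusing the saved session so the private key is picked up:

plink.exe -load raspi java -jar NetBeans/CopyTest/dist/CopyTest.jar

If the jar does not carry a Main-Class manifest entry, swap -jar for -cp and name the main class explicitly.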

    Read the article

  • [GEEK SCHOOL] Network Security 3: Windows Defender and a Malware-Free System

    - by Ciprian Rusen
    In this second lesson we are going to talk about one of the most confusing security products bundled with Windows: Windows Defender. In the past, this product had a bad reputation, and for good reason – it was very limited in its capacity to protect your computer from real-world malware. However, the latest version included in the Windows 8.x operating systems is much different from its predecessors and provides real protection to its users. The nice thing about Windows Defender in its current incarnation is that it protects your system from the start, so there are never gaps in coverage.
We will start this lesson by explaining what Windows Defender is in Windows 7 and Vista versus what it is in Windows 8, and which product to use if you are running an earlier version. Next we will explore how to use Windows Defender, how to improve its default settings, and how to deal with the alerts that it displays. As you will see, Windows Defender will have you using its list of quarantined items a lot more often than other security products, which is why we explain in detail how to work with it and remove malware for good, or restore items that are only false alarms. Lastly, you will learn how to turn off Windows Defender if you no longer want to use it and prefer a third-party security product in its place, and then how to enable it again if you change your mind. Upon completion, you should have a thorough understanding of your system's default anti-malware options and how to protect your system with them.
What is Windows Defender?
Unfortunately there is no single clear answer to this question, because of the confusing way Microsoft has chosen to name its security products. Windows Defender is a different product depending on the Windows operating system you are using.
If you use Windows Vista or Windows 7, then Windows Defender is a security tool that protects your computer from spyware. This is but one form of malware, made up of tools and applications that monitor your movements on the Internet or your activities on your computer. Spyware tends to send the information it collects to a remote server, where it is later used for all kinds of malicious purposes, from displaying advertising you don't want to exploiting your personal data. However, there are many other types of malware on the Internet, and this version of Windows Defender cannot protect users from any of them. That's why, if you are using Windows 7 or earlier, we strongly recommend that you disable Windows Defender and install a more complete security product such as Microsoft Security Essentials, or a third-party security product from a specialized security vendor.
If you use a Windows 8.x operating system, then Windows Defender is the same thing as Microsoft Security Essentials: a decent security product that protects your computer in real time from viruses and spyware. The fact that this product also protects your computer from viruses, not just spyware, makes a huge difference. If you don't want to pay for security products, Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) are good alternatives.
Windows Defender in Windows 8.x and Microsoft Security Essentials are the same product; only the name is different. In this lesson we will use the Windows Defender version from Windows 8.x, but our instructions also apply to Microsoft Security Essentials (MSE) in Windows 7 and Windows Vista.
If you want to download Microsoft Security Essentials and try it out, we recommend that you use this page: Download Microsoft Security Essentials. There you will find both 32-bit and 64-bit editions of this product, as well as versions in multiple languages.
How to Use and Configure Windows Defender
Windows Defender (MSE) is very easy to use. To start it, search for "defender" on the Windows 8.x Start screen and click or tap the "Windows Defender" search result. In Windows 7, search for "security" in the Start Menu search box and click "Microsoft Security Essentials".
Windows Defender has four tabs which give you access to the following tools and options:
Home – here you can view the security status of your system. If everything is alright, it will be colored green. If there are warnings to consider, it will be colored yellow, and if there are threats that must be dealt with, everything will be colored red. On the right side of the "Home" tab you will find options for scanning your computer for viruses and spyware. At the bottom of the tab you will find information about when the last scan was performed and what type of scan it was.
Update – here you will find information on whether the product is up to date. You will see when it was last updated and the versions of the definitions it is using. You can also trigger a manual update.
History – here you can access quarantined items, see which items you've allowed to run on your PC even though they were identified as malware by Windows Defender, and view a complete list of all the malicious items Windows Defender has detected on your PC. To access all these lists and work with them, you need to be signed in as an administrator.
Settings – this is the tab where you can turn the real-time protection service on, exclude files, file types, processes, and locations from its scans, and access a couple of more advanced settings.
The only difference between Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) is that, in the "Settings" tab, Microsoft Security Essentials allows you to set when to run scheduled scans, while Windows Defender lacks this option.

    Read the article

  • Discover How to Deliver Measurable Business Value from your HCM Strategy

    - by Jay Richey, HCM Product Marketing
    Join our live Webcast on Wednesday, July 13 to learn how to fine-tune your HCM strategy and better utilize your Oracle HCM investment. In this session you'll learn how to access, analyze and act on information from multiple sources to ensure that all workforce decisions are focused on meeting overall business objectives.
Date: Wednesday, July 13, 2011
Time: 10:00 a.m. PT / 1:00 p.m. ET
Register now!

    Read the article

  • Cloud Without Compromise – Oracle Fusion HCM

    - by Jay Richey, HCM Product Marketing
    We’ve all heard about the cloud, and many HR organizations have already launched cloud initiatives. But too many cloud HCM vendors can’t deliver on their promise to lower costs, reduce risk and improve efficiency. When only 5% of CEOs are satisfied with HR*, something needs to change. Only Oracle delivers the promise of the cloud in deployment models tailored to your needs – giving you cloud without compromise. Oracle Fusion HCM provides a unified system with all the analytics and reporting tools you need. Join us for an engaging and insightful webcast this Wednesday, November 16th, at 9am Pacific to learn more about how Oracle Fusion HCM can fulfill that promise. http://www.oracle.com/us/dm/sev100018463-wwmk11040178mpp002-521274.html

    Read the article

  • Staying Ahead of the Curve - Deloitte's 2012 Human Capital Trends Webcast | June 13th

    - by Jay Richey, HCM Product Marketing
    Businesses today are calling on HR to leap ahead and help to manage change in the face of complex challenges that touch so many parts of the enterprise. This webinar will provide an overview of eight major Human Capital Trends surfacing in 2012. Understanding the trends — what they mean for both leading HR and for leading the business — is an opportunity for organizations to be proactive and stay ahead of the curve.
June 13, 2012, 12:00 p.m. – 2:00 p.m. CT, Online
Featured Speakers:
Michael Gretczko, Principal, Deloitte Consulting LLP, Human Capital Practice
Dan Helfrich, Principal, Deloitte Consulting LLP, Federal Human Capital Practice Leader
Greg Vert, Senior Consultant, Deloitte Consulting
Evite & Registration: http://www.oracle.com/us/dm/75810-wwmk11040178mpp035c007-oem-1633667.html

    Read the article

  • Class Design and Structure Online Web Store

    - by Phorce
    I hope I have asked this in the right forum. Basically, we're designing an online store and I am designing the class structure for ordering a product, and I want some clarification on what I have so far. A customer comes, selects their product, chooses the quantity and selects 'Purchase' (I am using the Facade pattern, so subsystems execute when this action is performed). My class structure is made up of three classes: Order, Product and Customer. There is no inheritance, more association: Order has Product, and Customer has Order. Does this structure look OK? I've noticed that I don't handle the "Quantity" separately; I was just going to add this into the "Product" class, but do you think it should be a class of its own? Hope someone can help.
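One way to picture the structure being asked about is sketched below. This is only an illustration (the question doesn't name a language, so Java is assumed), and the OrderLine class is one common way to give quantity a home of its own instead of folding it into Product.
import java.util.ArrayList;
import java.util.List;

// Catalogue entry: knows nothing about quantities or orders.
class Product {
    final String name;
    final double unitPrice;
    Product(String name, double unitPrice) { this.name = name; this.unitPrice = unitPrice; }
}

// One line of an order: associates a Product with a quantity.
class OrderLine {
    final Product product;
    final int quantity;
    OrderLine(Product product, int quantity) { this.product = product; this.quantity = quantity; }
    double total() { return product.unitPrice * quantity; }
}

// Order has OrderLines (and through them Products); association only, no inheritance.
class Order {
    private final List<OrderLine> lines = new ArrayList<>();
    void add(Product product, int quantity) { lines.add(new OrderLine(product, quantity)); }
    double total() { return lines.stream().mapToDouble(OrderLine::total).sum(); }
}

// Customer has Orders.
class Customer {
    private final List<Order> orders = new ArrayList<>();
    void place(Order order) { orders.add(order); }
}
With this shape, the 'Purchase' facade only needs a finished Order, and Product stays a pure catalogue entry that can appear in many orders with different quantities.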

    Read the article
