Search Results

Search found 17265 results on 691 pages for 'oracle partnernetwork exchange'.

  • Adding attachments to HumanTasks *beforehand*

    - by ccasares
    For a demo I'm preparing along with a partner, we need to add some attachments to a HumanTask beforehand; that is, the attachment must already be associated with the Task by the time the user opens its Form. How to achieve this? Indeed, it's quite simple and just a matter of some mappings to the Task's input execData structure. Oracle BPM supports "default" attachments (which use BPM tables) or UCM-based ones. The way to insert attachments is pretty similar for both methods.

    With default attachments

    When using default attachments, we first need to have the attachment payload as part of the BPM process, that is, it must be contained in a variable. Normally the attachment content is binary, so we'll first need to convert it to a base64 string (not covered in this blog entry). What we need to do is just map the following execData parameters as part of the input of the HumanTask:

        execData.attachment[n].content            <-- the base64 payload data
        execData.attachment[n].mimeType           <-- depends on your attachment (e.g.: "application/pdf")
        execData.attachment[n].name               <-- attachment name (just the name you want to use; no need to be the original filename)
        execData.attachment[n].attachmentScope    <-- BPM or TASK (depending on your needs)
        execData.attachment[n].storageType        <-- TASK
        execData.attachment[n].doesBelongToParent <-- false (not sure if this one is really needed, but it definitely doesn't hurt)
        execData.attachment[n].updatedBy          <-- username of whoever is attaching it
        execData.attachment[n].updatedDate        <-- dateTime of when this attachment is attached

    Bear in mind that the attachment structure is a repeating one. So if you need to add more than one attachment, you'll need to use XSLT mapping. If not, the Assign mapper automatically adds [1] for the iteration.
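    The binary-to-base64 conversion itself is not covered in the entry. As a minimal sketch of one way to do it (assuming a Java 8+ runtime and that the binary content is available as a byte array; the file name here is purely illustrative), the standard java.util.Base64 encoder does the job:

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Base64;

        public class AttachmentEncoder {
            public static void main(String[] args) throws Exception {
                // Hypothetical source file; in a real process the bytes would come
                // from wherever the attachment payload originates.
                byte[] payload = Files.readAllBytes(Paths.get("invoice.pdf"));

                // Encode to a base64 string so it can be carried in a string-typed
                // process variable and mapped to execData.attachment[n].content.
                String base64 = Base64.getEncoder().encodeToString(payload);
                System.out.println(base64.substring(0, Math.min(40, base64.length())) + "...");
            }
        }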
    With UCM-based attachments

    With UCM-based attachments, the procedure is basically the same. We'll need to map some extra fields and leave others unmapped. The tricky part with UCM-based attachments is what we need to know beforehand about the attachment itself. Of course, we don't need the payload, but we do need a couple of pieces of information about the attachment, which must already be checked in to UCM. First, let's see the mappings:

        execData.attachment[n].mimeType                 <-- Document's dFormat attribute (1)
        execData.attachment[n].name                     <-- attachment name (just the name you want to use; no need to be the original filename)
        execData.attachment[n].attachmentScope          <-- BPM or TASK (depending on your needs)
        execData.attachment[n].storageType              <-- UCM
        execData.attachment[n].doesBelongToParent       <-- false (not sure if this one is really needed, but it definitely doesn't hurt)
        execData.attachment[n].updatedBy                <-- username of whoever is attaching it
        execData.attachment[n].updatedDate              <-- dateTime of when this attachment is attached
        execData.attachment[n].uri                      <-- "ecm://<dID>" where dID is the document's dID attribute (2)
        execData.attachment[n].ucmDocType               <-- Document's dDocType attribute (3)
        execData.attachment[n].securityGroup            <-- Document's dSecurityGroup attribute (4)
        execData.attachment[n].revision                 <-- Document's dRevisionID attribute (5)
        execData.attachment[n].ucmMetadataItem[1].name  <-- "DocUrl"
        execData.attachment[n].ucmMetadataItem[1].type  <-- STRING
        execData.attachment[n].ucmMetadataItem[1].value <-- Document's url attribute (6)

    Where do we get the values for the numbered fields above? In my case, I get them from a Search call to UCM (not covered in this blog entry). As I mentioned above, we must know which UCM document we're going to attach. We may know its ID, its name... whatever we need to uniquely identify it when calling the IDC Search method. This method returns ALL the info we need to fill in the different fields labeled with a number above.

    The only tricky one is (6). The UCM Search service returns the url attribute as a context-root path without hostname:port, e.g.: /cs/groups/public/documents/document/dgvs/mdaw/~edisp/ccasareswcptel000239.pdf. However, we do need to include the fully qualified URL when mapping (6). Where to get the http://<hostname>:<port> value? Honestly, I have no clue. What I usually do is use a BPM property that can always be modified at runtime if needed.

    There are some other fields that might be needed in the execData.attachment structure, like account (if UCM is using Accounts). But for demos I've never needed to use them, so I'm not sure whether they're necessary or not. Feel free to add some comments to this entry if you know ;-)

    That's all folks. Should you need help with the UCM Search service, let me know and I can write a quick entry on that topic.
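    As a small illustration of the two derived values above, here is a sketch of the string handling involved; the host/port value and the helper names are assumptions made for this example (in practice the host/port would come from the BPM property mentioned above, and the rest from the IDC Search response):

        public class UcmAttachmentHelper {

            // Value for execData.attachment[n].uri, built from the document's dID.
            public static String ecmUri(String dID) {
                return "ecm://" + dID;
            }

            // Fully qualified value for the DocUrl metadata item: the host/port prefix
            // (e.g. read from a runtime-modifiable BPM property) plus the relative url
            // returned by the UCM Search service.
            public static String fullDocUrl(String hostPort, String relativeUrl) {
                return hostPort + relativeUrl;
            }

            public static void main(String[] args) {
                System.out.println(ecmUri("12345"));
                System.out.println(fullDocUrl("http://ucmhost:16200",
                    "/cs/groups/public/documents/document/dgvs/mdaw/~edisp/ccasareswcptel000239.pdf"));
            }
        }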

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last two weeks immersing myself in a number of Windows Azure and SQL Azure technologies. And in setting up a new business (I'll speak more about that in the future), I have also become a customer of Microsoft's BPOS (Business Productivity Online Services). In short, it has been a fortnight of Microsoft cloud computing.

    On the Azure side, I've looked, of course, at Web Roles and Worker Roles. But I've also looked at Azure Storage's REST API (including coding to it directly), I've looked at Azure Drive and the new VM Role; I've looked quite a bit at SQL Azure (including the project "Houston" Silverlight UI) and I've looked at SQL Azure Labs' OData service too. I've also looked at DataMarket and its integration with both PowerPivot and native Excel. Then there's AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials. And to round it out with some user stuff, on the BPOS side, I've been working with Exchange Online, SharePoint Online and LiveMeeting.

    I have to say I like a lot of what I've been seeing. Azure's not perfect, and BPOS certainly isn't either. But there's good stuff in all these products, and there's a lot of value.

    Azure Goes Deep

    Most people know that Web and Worker Roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now. The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon's EC2), though it takes away the platform's self-managing attributes. It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable. Whether with Azure Storage or SQL Azure, Azure does data. And OData is everywhere. Azure Table Storage supports an OData interface. So does SQL Azure, and so does DataMarket (the former project "Dallas"). That means that Azure data repositories aren't just straightforward to provision and configure... they're also easy to program against, from just about any programming environment, in a RESTful manner. And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as "Velocity" and throws it up into the cloud, speeding data access even more.

    Snapping in Place

    Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand. I wasn't expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond's innovation right now. The products bear this out, and so do my observations of the product teams' motivation and high morale. It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so. With Azure, both requirements are in place.

    BPOS: Bad Acronym, Easy Setup

    BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server. As such, it's hard not to be underwhelmed by BPOS. Until you realize how easy it makes it to get all that stuff set up.
    I would say that from sign-up to productive use took me about 45 minutes... and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I want from my Azure Christmas Next Year

    Not everything about Microsoft's cloud is good. I close this post with a list of things I'd like to see addressed:

    - BPOS offerings are still based on the 2007 wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones. Office 365 can't come fast enough.
    - Azure's Internet tooling and domain naming are scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure Storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there's appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
    - Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces, and do so much CTRL-C/CTRL-V work. I'd like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
    - Beta programs, if they're open, should onboard me quickly. I know the process is difficult and everyone's going as fast as they can. But I don't know why it's so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.
    - Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now, but we need more. WebMatrix has none, and that's just silly now that the Extra Small Instance is being introduced.
    - The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better, and I want them to evangelize it much more aggressively. There's a lot of good material on Azure development out there, but it's scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

    Should Old Acquaintance Be Forgot

    All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

  • Sorting and Filtering By Model-Based LOV Display Value

    - by Steven Davelaar
    If you use a model-based LOV with display type "choice", then ADF nicely displays the display value, even if the table is read-only. In the screen shot below, you see the RegionName attribute displayed instead of the RegionId. This is accomplished by the model-based LOV; I did not modify the Countries view object to include a join with Regions. Also note the sort icon: the table is sorted by RegionId. This sorting typically results in a bug reported by your test team. Europe really shouldn't come before America when sorting ascending, right?

    To fix this, we could of course change the Countries view object query and add a join with the Regions table to include the RegionName attribute. If the table is updateable, we still need the choice list, so we would need to move the model-based LOV from the RegionId attribute to the RegionName attribute and hide the RegionId attribute in the table. But that is a lot of work for such a simple requirement, in particular if we have lots of model-based choice lists in our view object. Fortunately, there is an easier way to do this, with some generic code in your view object base class that fixes this at once for all model-based choice lists defined in our application.

    The trick is to override the method getSortCriteria() in the base view object class. By default, this method returns null because the sorting is done in the database through a SQL Order By clause. However, if the getSortCriteria method does return sort criteria, the framework will perform in-memory sorting, which is what we need to achieve sorting by region name. So, inside this method we need to evaluate the Order By clause, and if the order-by column matches an attribute that has a model-based LOV choice list defined with a display attribute that differs from the value attribute, we need to return sort criteria.
    Here is the complete code of this method:

        public SortCriteria[] getSortCriteria()
        {
          String orderBy = getOrderByClause();
          if (orderBy != null)
          {
            boolean descending = false;
            if (orderBy.endsWith(" DESC"))
            {
              descending = true;
              orderBy = orderBy.substring(0, orderBy.length() - 5);
            }
            // extract column name, it is the part after the dot
            int dotpos = orderBy.lastIndexOf(".");
            String columnName = orderBy.substring(dotpos + 1);
            // loop over attributes and find matching attribute
            AttributeDef orderByAttrDef = null;
            for (AttributeDef attrDef : getAttributeDefs())
            {
              if (columnName.equals(attrDef.getColumnName()))
              {
                orderByAttrDef = attrDef;
                break;
              }
            }
            if (orderByAttrDef != null && "choice".equals(orderByAttrDef.getProperty("CONTROLTYPE"))
                && orderByAttrDef.getListBindingDef() != null)
            {
              String orderbyAttr = orderByAttrDef.getName();
              String[] displayAttrs = orderByAttrDef.getListBindingDef().getListDisplayAttrNames();
              String[] listAttrs = orderByAttrDef.getListBindingDef().getListAttrNames();
              // if the first list display attribute is not the same as the first list attribute,
              // then the value displayed is different from the value copied back to the order-by
              // attribute, in which case we need to use our custom comparator
              if (displayAttrs != null && listAttrs != null && displayAttrs.length > 0
                  && !displayAttrs[0].equals(listAttrs[0]))
              {
                SortCriteriaImpl sc1 = new SortCriteriaImpl(orderbyAttr, descending);
                SortCriteria[] sc = new SortCriteriaImpl[] { sc1 };
                return sc;
              }
            }
          }
          return super.getSortCriteria();
        }

    If this method returns sort criteria, the framework will call the sort method on the view object. The sort method uses a Comparator object to determine the sequence in which the rows should be returned. This comparator is retrieved by calling the getRowComparator method on the view object. So, to ensure sorting by our display value, we need to override this method to return our custom comparator:

        public Comparator getRowComparator()
        {
          return new LovDisplayAttributeRowComparator(getSortCriteria());
        }

    The custom comparator class extends the default RowComparator class, overrides the method compareRows, and looks up the choice display value to compare the two rows. The complete code of this class is included in the sample application. With this code in place, clicking on the Region sort icon nicely sorts the countries by RegionName, as you can see below.

    When using the Query-By-Example table filter at the top of the table, you typically want to use the same choice list to filter the rows. One way to do that is documented in ADF Code Corner sample 16 - How To Customize the ADF Faces Table Filter. The solution in this sample is perfectly fine to use. This sample requires you to define a separate iterator binding and associated tree binding to populate the choice list in the table filter area using the af:iterator tag. You might be able to reuse the same LOV view object instance in this iterator binding that is used as the view accessor for the model-based LOV. However, I have seen quite a few customers who have a generic LOV view object (mapped to one "refcodes" table) with the bind variable values set in the LOV view accessor. In such a scenario, some duplicate work is needed to get a dedicated view object instance with the correct bind variables that can be used in the iterator binding.
    Looking for ways to maximize reuse, wouldn't it be nice if we could just reuse our model-based LOV to populate this filter choice list? Well, we can. Here are the basic steps:

    1. Create an attribute list binding in the page definition that we can use to retrieve the list of SelectItems needed to populate the choice list:

        <list StaticList="false" Uses="LOV_RegionId"
              IterBinding="CountriesView1Iterator" id="RegionId"/>

    We need this "current row" list binding because the implicit list binding used by the item in the table is not accessible outside a table row; we cannot use the expression #{row.bindings.RegionId} in the table filter facet.

    2. Create a Map-style managed bean with the get method taking the list binding as key and returning the list of SelectItems. To return this list, we take the list of SelectItems contained by the list binding and replace the index number that is normally used as key value with the actual attribute value that is set by the choice list. Here is the code of the get method:

        public Object get(Object key)
        {
          if (key instanceof FacesCtrlListBinding)
          {
            // we need to cast to internal class FacesCtrlListBinding rather than JUCtrlListBinding to
            // be able to call the getItems method. To prevent this import, we could evaluate an EL
            // expression to get the list of items
            FacesCtrlListBinding lb = (FacesCtrlListBinding) key;
            if (cachedFilterLists.containsKey(lb.getName()))
            {
              return cachedFilterLists.get(lb.getName());
            }
            List<SelectItem> items = (List<SelectItem>) lb.getItems();
            if (items == null || items.size() == 0)
            {
              return items;
            }
            List<SelectItem> newItems = new ArrayList<SelectItem>();
            JUCtrlValueDef def = ((JUCtrlValueDef) lb.getDef());
            String valueAttr = def.getFirstAttrName();
            // the items list has an index number as value, we need to replace this with the actual
            // value of the attribute that is copied back by the choice list
            for (int i = 0; i < items.size(); i++)
            {
              SelectItem si = (SelectItem) items.get(i);
              Object value = lb.getValueFromList(i);
              if (value instanceof Row)
              {
                Row row = (Row) value;
                si.setValue(row.getAttribute(valueAttr));
              }
              else
              {
                // this is the "empty" row, set value to empty string so all rows will be returned,
                // as the user no longer wants to filter on this attribute
                si.setValue("");
              }
              newItems.add(si);
            }
            cachedFilterLists.put(lb.getName(), newItems);
            return newItems;
          }
          return null;
        }

    Note that we added caching to speed up performance, and to handle the situation where table filters or search criteria are set such that no rows are retrieved in the table. When there are no rows, there is no current row, and the getItems method on the list binding will return no items.

    An alternative approach to creating the list of SelectItems would be to retrieve the iterator binding from the list binding and loop over the rows in the iterator binding row set. Then we wouldn't need the import of the internal ADF class oracle.adfinternal.view.faces.model.binding.FacesCtrlListBinding, but we would need to figure out the display attributes from the list binding definition, and possibly separate them with a dash if multiple display attributes are defined in the LOV. Doable, but less reuse and more work.
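    The post shows only the get method of that Map-style bean. As a rough sketch of the surrounding shell it might live in (the class name matches the viewScope.TableFilterChoiceList expression used in step 3 below; everything else here is an illustrative assumption, not code from the sample application), one common idiom is to extend HashMap and override get so that EL map access lands in our method:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.faces.model.SelectItem;

        // Registered as a viewScope managed bean named "TableFilterChoiceList", so an
        // EL expression like #{viewScope.TableFilterChoiceList[bindings.RegionId]}
        // calls get() with the list binding as the key.
        public class TableFilterChoiceList extends HashMap<Object, Object>
        {
          // cache keyed by list binding name; lives as long as the view scope
          private Map<String, List<SelectItem>> cachedFilterLists =
              new HashMap<String, List<SelectItem>>();

          @Override
          public Object get(Object key)
          {
            // ... body as shown above: cast the key to FacesCtrlListBinding,
            // build the SelectItem list and cache it ...
            return null; // placeholder in this sketch
          }
        }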
    3. Inside the filter facet for the column, create an af:selectOneChoice with the value property of the f:selectItems tag referencing the get method of the managed bean:

        <f:facet name="filter">
          <af:selectOneChoice id="soc0" autoSubmit="true"
                              value="#{vs.filterCriteria.RegionId}">
            <!-- attention: the RegionId list binding must be created manually in the page definition! -->
            <f:selectItems id="si0"
                           value="#{viewScope.TableFilterChoiceList[bindings.RegionId]}"/>
          </af:selectOneChoice>
        </f:facet>

    Note that the managed bean is defined in viewScope for the caching to take effect. Here is a screen shot of the table filter in action:

    You can download the sample application here.

  • Connecting SceneBuilder edited FXML to Java code

    - by daniel
    Recently I had to answer several questions regarding how to connect a UI built with the JavaFX SceneBuilder 1.0 Developer Preview to Java code. So I figured that a short overview might be helpful. But first, let me state the obvious.

    What is FXML?

    To make it short, FXML is an XML-based declaration format for JavaFX. JavaFX provides an FXML loader which will parse FXML files and from that construct a graph of Java objects. It may sound complex when stated like that, but it is actually quite simple. Here is an example of an FXML file, which instantiates a StackPane and puts a Button inside it:

        <?xml version="1.0" encoding="UTF-8"?>

        <?import java.lang.*?>
        <?import java.util.*?>
        <?import javafx.scene.control.*?>
        <?import javafx.scene.layout.*?>
        <?import javafx.scene.paint.*?>

        <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml">
          <children>
            <Button mnemonicParsing="false" text="Button" />
          </children>
        </StackPane>

    ... and here is the code I would have had to write if I had chosen to do the same thing programmatically:

        import javafx.scene.control.*;
        import javafx.scene.layout.*;
        ...
        final Button button = new Button("Button");
        button.setMnemonicParsing(false);
        final StackPane stackPane = new StackPane();
        stackPane.setPrefWidth(200.0);
        stackPane.setPrefHeight(150.0);
        stackPane.getChildren().add(button);

    As you can see, FXML is rather simple to understand, as it is quite close to the JavaFX API. So OK, FXML is simple, but why would I use it? Well, there are several answers to that, but my own favorite is: because you can make it with SceneBuilder.

    What is SceneBuilder?

    In short, SceneBuilder is a layout tool that will let you graphically build JavaFX user interfaces by dragging and dropping JavaFX components from a library, and save the result as an FXML file. SceneBuilder can also be used to load and modify JavaFX scenegraphs declared in FXML. Here is how I made the small FXML file above:

    1. Start the JavaFX SceneBuilder 1.0 Developer Preview.
    2. In the Library on the left hand side, click on 'StackPane' and drag it onto the content view (the white rectangle).
    3. In the Library, select a Button and drag it onto the StackPane on the content view.
    4. In the Hierarchy Panel on the left hand side, select the StackPane component, then invoke 'Edit > Trim To Selected' from the menubar.

    That's it - you can now save, and you will obtain the small FXML file shown above. Of course this is only a trivial sample, made for the sake of the example, and SceneBuilder will let you create much more complex UIs. So, I now have an FXML file. But what do I do with it? How do I include it in my program? How do I write my main class?

    Loading an FXML file with JavaFX

    Well, that's the easy part, because the piece of code you need to write never changes. You can download and look at the SceneBuilder samples if you need to get convinced, but here is the short version:

    1. Create a Java class (let's call it 'Main.java') which extends javafx.application.Application.
    2. In the same directory, copy/save the FXML file you just created using SceneBuilder. Let's name it "simple.fxml".

    Now here is the Java code for the Main class, which simply loads the FXML file and puts it as root in a stage's scene:
        /*
         * Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
         */
        package simple;

        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javafx.application.Application;
        import javafx.fxml.FXMLLoader;
        import javafx.scene.Scene;
        import javafx.scene.layout.StackPane;
        import javafx.stage.Stage;

        public class Main extends Application {

            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) {
                Application.launch(Main.class, (java.lang.String[]) null);
            }

            @Override
            public void start(Stage primaryStage) {
                try {
                    StackPane page = (StackPane) FXMLLoader.load(Main.class.getResource("simple.fxml"));
                    Scene scene = new Scene(page);
                    primaryStage.setScene(scene);
                    primaryStage.setTitle("FXML is Simple");
                    primaryStage.show();
                } catch (Exception ex) {
                    Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    Great! Now I only have to use my favorite IDE to compile the class and run it. But... wait... what does it do? Well, nothing. It just displays a button in the middle of a window. There's no logic attached to it. So how do we do that? How can I connect this button to my application logic? Here is how.

    Connection to code

    First let's define our application logic. Since this post is only intended to give a very brief overview, let's keep things simple. Let's say that the only thing I want to do is print a message on System.out when the user clicks on my button. To do that, I'll need to register an action handler with my button. And to do that, I'll need to somehow get a handle on my button. I'll need some kind of controller logic that will get my button and add my action handler to it. So how do I get a handle on my button and pass it to my controller? Once again, this is easy: I just need to write a controller class for my FXML.

    With each FXML file, it is possible to associate a controller class defined for that FXML. That controller class will make the link between the UI (the objects defined in the FXML) and the application logic. To each object defined in FXML we can associate an fx:id. The value of the id must be unique within the scope of the FXML, and is the name of an instance variable inside the controller class, in which the object will be injected. Since I want to have access to my button, I will need to add an fx:id to my button in FXML, and declare an @FXML variable in my controller class with the same name. In other words, I will need to add fx:id="myButton" to my button in FXML:

        <Button fx:id="myButton" mnemonicParsing="false" text="Button" />

    ... and declare myButton in my controller class:

        @FXML
        private Button myButton; // value will be injected by the FXMLLoader

    Let's see how to do this.

    Add an fx:id to the Button object

    1. Load "simple.fxml" in SceneBuilder, if not already done.
    2. In the hierarchy panel (bottom left), or directly on the content view, select the Button object.
    3. Open the Properties section of the inspector (right panel) for the button object.
    4. At the top of the section, you will see a text field labelled fx:id. Enter myButton in that field and validate.

    Associate a controller class with the FXML file

    1. Still in SceneBuilder, select the top root object (in our case, that's the StackPane), and open the Code section of the inspector (right hand side).
    2. At the top of the section you should see a text field labelled Controller Class. In the field, type simple.SimpleController. This is the name of the class we're going to create manually.
    If you save at this point, the FXML will look like this:

        <?xml version="1.0" encoding="UTF-8"?>

        <?import java.lang.*?>
        <?import java.util.*?>
        <?import javafx.scene.control.*?>
        <?import javafx.scene.layout.*?>
        <?import javafx.scene.paint.*?>

        <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml" fx:controller="simple.SimpleController">
          <children>
            <Button fx:id="myButton" mnemonicParsing="false" text="Button" />
          </children>
        </StackPane>

    As you can see, the name of the controller class has been added to the root object: fx:controller="simple.SimpleController".

    Coding the controller class

    In your favorite IDE, create an empty SimpleController.java class. Now what does a controller class look like? What should we put inside? Well, SceneBuilder will help you there: it will show you an example of a controller skeleton tailored for your FXML. In the menu bar, invoke View > Show Sample Controller Skeleton. A popup appears, displaying a suggestion for the controller skeleton. Copy the code displayed there, and paste it into your SimpleController.java:

        /**
         * Sample Skeleton for "simple.fxml" Controller Class
         * Use copy/paste to copy paste this code into your favorite IDE
         **/
        package simple;

        import java.net.URL;
        import java.util.ResourceBundle;
        import javafx.fxml.FXML;
        import javafx.fxml.Initializable;
        import javafx.scene.control.Button;

        public class SimpleController implements Initializable {

            @FXML // fx:id="myButton"
            private Button myButton; // Value injected by FXMLLoader

            @Override // This method is called by the FXMLLoader when initialization is complete
            public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
                assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'simple.fxml'.";
                // initialize your logic here: all @FXML variables will have been injected
            }
        }

    Note that the code displayed by SceneBuilder is there only for educational purposes: SceneBuilder does not create and does not modify Java files. This is simply a hint of what you can use, given the fx:id present in your FXML file. You are free to copy all or part of the displayed code and paste it into your own Java class.

    Now at this point, it only remains to add our logic to the controller class. Quite easy: in the initialize method, I will register an action handler with my button (this also requires imports for javafx.event.ActionEvent and javafx.event.EventHandler):

        // initialize your logic here: all @FXML variables will have been injected
        myButton.setOnAction(new EventHandler<ActionEvent>() {
            @Override
            public void handle(ActionEvent event) {
                System.out.println("That was easy, wasn't it?");
            }
        });

    That's it - if you now compile everything in your IDE and run your application, clicking on the button should print a message on the console!
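    As a side note, the handler can also be declared in the FXML itself rather than registered in initialize. This is a standard FXML feature, shown here only as a sketch (the onAction value and method name are illustrative, not taken from the post): give the Button onAction="#handleButtonAction" in SceneBuilder, and the FXMLLoader wires it to a matching annotated controller method:

        package simple;

        import java.net.URL;
        import java.util.ResourceBundle;
        import javafx.event.ActionEvent;
        import javafx.fxml.FXML;
        import javafx.fxml.Initializable;
        import javafx.scene.control.Button;

        public class SimpleController implements Initializable {

            @FXML // fx:id="myButton"
            private Button myButton; // Value injected by FXMLLoader

            // Called by the FXMLLoader because the FXML declares onAction="#handleButtonAction";
            // no explicit setOnAction call is needed with this approach.
            @FXML
            private void handleButtonAction(ActionEvent event) {
                System.out.println("That was easy, wasn't it?");
            }

            @Override
            public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
                assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'simple.fxml'.";
            }
        }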
    Summary

    What happens is that in Main.java, the FXMLLoader will load simple.fxml from the jar/classpath, as specified by 'FXMLLoader.load(Main.class.getResource("simple.fxml"))'. When loading simple.fxml, the loader will find the name of the controller class, as specified by 'fx:controller="simple.SimpleController"' in the FXML. Upon finding the name of the controller class, the loader will create an instance of that class, in which it will try to inject all the objects that have an fx:id in the FXML. Thus, after having created '<Button fx:id="myButton" ... />', the FXMLLoader will inject the button instance into the '@FXML private Button myButton;' instance variable found on the controller instance. This is because:

    - the instance variable has an @FXML annotation, and
    - the name of the variable exactly matches the value of the fx:id.

    Finally, when the whole FXML has been loaded, the FXMLLoader will call the controller's initialize method, and our code that registers an action handler with the button will be executed.

    For a complete example, take a look at the HelloWorld SceneBuilder sample. Also make sure to follow the SceneBuilder Get Started guide, which will guide you through a much more complete example. Of course, there are more elegant ways to set up an event handler using FXML and SceneBuilder. There are also many different ways to work with the FXMLLoader. But since it's starting to be very late here, I think that will have to wait for another post. I hope you have enjoyed the tour! --daniel

  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled, including gcc, g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use.

    The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc. is not part of this example code.

    Let's start by looking at the most basic implementation of our plugin code, as seen below:

        /*
           Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
           Author:  Jonathon Coombes
           Licence: GPL
           Description: An auditing plugin that logs to syslog and
                        can adjust the loglevel via the system variables.
        */
        #include <stdio.h>
        #include <string.h>
        #include <mysql/plugin_audit.h>
        #include <syslog.h>

    There is a commented header detailing copyright/licencing and meta-data information, and then the include headers. The two important include statements for our plugin are the syslog.h header, which gives us the structures for syslog, and the plugin_audit.h include, which has details regarding the audit-specific plugin API. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already.

    To implement our plugin within the current implementation we need to add it into our source code and compile:

        > cd /usr/local/src/mysql-5.5.28/plugin
        > mkdir audit_syslog
        > cd audit_syslog

    A simple CMakeLists.txt file is created to manage the plugin compilation:

        MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY)

    Run the cmake command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as there is no level of API defined to communicate with the MySQL service.

    Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin, be able to install/uninstall it, and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.
        /*
          Plugin library descriptor
        */
        mysql_declare_plugin(audit_syslog)
        {
          MYSQL_AUDIT_PLUGIN,           /* plugin type          */
          &audit_syslog_descriptor,     /* descriptor handle    */
          "audit_syslog",               /* plugin name          */
          "Author Name",                /* author               */
          "Simple Syslog Audit",        /* description          */
          PLUGIN_LICENSE_GPL,           /* licence              */
          audit_syslog_init,            /* init function        */
          audit_syslog_deinit,          /* deinit function      */
          0x0001,                       /* plugin version       */
          NULL,                         /* status variables     */
          NULL,                         /* system variables     */
          NULL,                         /* no reserves          */
          0,                            /* no flags             */
        }
        mysql_declare_plugin_end;

    The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata. The descriptors have an internally recognised version number so that plugins can be matched against the API on the running server. The other details are usually related to the type-specific methods and structures used to implement the plugin. Each plugin also has a type-specific descriptor, which details how the plugin is implemented for the specific purpose of that plugin type.

        /*
          Plugin type-specific descriptor
        */
        static struct st_mysql_audit audit_syslog_descriptor=
        {
          MYSQL_AUDIT_INTERFACE_VERSION,                      /* interface version    */
          NULL,                                               /* release_thd function */
          audit_syslog_notify,                                /* notify function      */
          { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |
                            MYSQL_AUDIT_CONNECTION_CLASSMASK }/* class mask           */
        };

    In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function, which is activated when an event occurs on the system. The notify function is designed to activate on an event, and the implementation will determine how it is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event:

    1. General events: this includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database.
    2. Connection events: this group is based around user logins. It monitors connections and disconnections, but also whether somebody changes user while connected.

    With most audit plugins, the principle behind the plugin is to track changes to the system over time, and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are three counters defined in total for our plugin: the number of general events, the number of connection events, and the total number of events.
        static volatile int total_number_of_calls;

        /* Count MYSQL_AUDIT_GENERAL_CLASS event instances */
        static volatile int number_of_calls_general;

        /* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */
        static volatile int number_of_calls_connection;

    The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin:

        /*
          Initialize the plugin at server start or plugin installation.
        */
        static int audit_syslog_init(void *arg __attribute__((unused)))
        {
            openlog("mysql_audit:", LOG_PID|LOG_PERROR|LOG_CONS, LOG_USER);
            total_number_of_calls= 0;
            number_of_calls_general= 0;
            number_of_calls_connection= 0;
            return(0);
        }

    The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags, and the facility for the logging. Then each of the counters is initialised to zero and success is returned. If the init function is not defined, it will return success by default.

        /*
          Terminate the plugin at server shutdown or plugin deinstallation.
        */
        static int audit_syslog_deinit(void *arg __attribute__((unused)))
        {
            closelog();
            return(0);
        }

    The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external dependencies. The function names are what we define in the general plugin structure, so these have to match, otherwise there will be errors.

    The next step is to implement the event notifier function that was defined in the type-specific descriptor (audit_syslog_descriptor), which is audit_syslog_notify:

        /* Event notifier function */
        static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)),
                                        unsigned int event_class,
                                        const void *event)
        {
          total_number_of_calls++;
          if (event_class == MYSQL_AUDIT_GENERAL_CLASS)
          {
            const struct mysql_event_general *event_general=
                (const struct mysql_event_general *) event;
            number_of_calls_general++;
            syslog(audit_loglevel, "%lu: User: %s  Command: %s  Query: %s\n",
                   event_general->general_thread_id,
                   event_general->general_user,
                   event_general->general_command,
                   event_general->general_query);
          }
          else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS)
          {
            const struct mysql_event_connection *event_connection=
                (const struct mysql_event_connection *) event;
            number_of_calls_connection++;
            syslog(audit_loglevel, "%lu: User: %s@%s[%s]  Event: %d  Status: %d\n",
                   event_connection->thread_id,
                   event_connection->user,
                   event_connection->host,
                   event_connection->ip,
                   event_connection->event_subclass,
                   event_connection->status);
          }
        }

    In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database. The event argument is then cast into the appropriate event structure depending on the class type: a general event or a connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned, since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc.

    On compiling the code now, there should be no errors, and the resulting audit_syslog.so can be loaded into the server, ready to use.
    Log into the server and type:

        mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so';

    This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNINSTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following:

        Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
        Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
        Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables
        Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables
        Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1
        Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1

    It appears that two of each event is being shown, but in actuality these are two separate event sub-types: the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.

    So far, it seems that the logging is working, with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible while the plugin is running. Instead, there needs to be a way to expose the plugin-specific information to the service and vice versa. This could be done via the information_schema plugin API, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration:

        /*
          Plugin status variables for SHOW STATUS
        */
        static struct st_mysql_show_var audit_syslog_status[]=
        {
          { "Audit_syslog_total_calls",
            (char *) &total_number_of_calls,
            SHOW_INT },
          { "Audit_syslog_general_events",
            (char *) &number_of_calls_general,
            SHOW_INT },
          { "Audit_syslog_connection_events",
            (char *) &number_of_calls_connection,
            SHOW_INT },
          { 0, 0, SHOW_INT }
        };

    The structure is simply the name that will be displayed in the mysql service, the address of the associated variable, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name as variables from other plugins, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following:

        mysql> show global status like "audit%";
        +--------------------------------+-------+
        | Variable_name                  | Value |
        +--------------------------------+-------+
        | Audit_syslog_connection_events | 1     |
        | Audit_syslog_general_events    | 2     |
        | Audit_syslog_total_calls       | 3     |
        +--------------------------------+-------+
        3 rows in set (0.00 sec)

    The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system.
    This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface and defining a single variable to keep track of the active logging level for the facility:

        /* Plugin system variables for SHOW VARIABLES */
        static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,
                                PLUGIN_VAR_RQCMDARG,
                                "User can specify the log level for auditing",
                                audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE");

        static struct st_mysql_sys_var* audit_syslog_sysvars[] = {
            MYSQL_SYSVAR(loglevel),
            NULL
        };

    So now the system variable 'loglevel' is defined for the plugin and associated with the global variable 'audit_loglevel'. The check (validation) function is defined to make sure that no garbage values are attempted in the update of the variable. The update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is referenced in the general plugin descriptor to establish the link between the plugin and the system and how much they interact.

    Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric type such as an integer for the variable, the validate function is often not required, as MySQL will handle the automatic check and validation of simple types.

        /* longest valid value */
        #define MAX_LOGLEVEL_SIZE 100

        /* hold the valid values */
        static const char *possible_modes[]= {
            "LOG_ERROR",
            "LOG_WARNING",
            "LOG_NOTICE",
            NULL
        };

        static int audit_loglevel_check(
            THD*                        thd,    /*!< in: thread handle */
            struct st_mysql_sys_var*    var,    /*!< in: pointer to system
                                                     variable */
            void*                       save,   /*!< out: immediate result
                                                     for update function */
            struct st_mysql_value*      value)  /*!< in: incoming string */
        {
            char buff[MAX_LOGLEVEL_SIZE];
            const char *str;
            const char **found;
            int length;

            length= sizeof(buff);
            if (!(str= value->val_str(value, buff, &length)))
                return 1;

            /*
                We need to return a pointer to a locally allocated value in "save".
                Here we pick to search for the supplied value in a global array of
                constant strings and return a pointer to one of them.
                The other possibility is to use the thd_alloc() function to allocate
                a thread local buffer instead of the global constants.
            */
            for (found= possible_modes; *found; found++)
            {
                if (!strcmp(*found, str))
                {
                    *(const char**)save= *found;
                    return 0;
                }
            }
            return 1;
        }

    The validation function simply takes the value being passed in via the SET GLOBAL VARIABLE command and checks whether it is one of the pre-defined values allowed in our possible_modes array. If it is found to be valid, then the value is assigned to the save variable, ready for passing through to the update function.
        static void audit_loglevel_update(
            THD*                        thd,        /*!< in: thread handle */
            struct st_mysql_sys_var*    var,        /*!< in: system variable
                                                         being altered */
            void*                       var_ptr,    /*!< out: pointer to
                                                         dynamic variable */
            const void*                 save)       /*!< in: pointer to
                                                         temporary storage */
        {
            /* assign the new value so that the server can read it */
            *(char **) var_ptr= *(char **) save;

            /* assign the new value to the internal variable */
            audit_loglevel= *(char **) save;
        }

    Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables.

    Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows:

        mysql> show global variables like "audit%";
        +-----------------------+------------+
        | Variable_name         | Value      |
        +-----------------------+------------+
        | audit_syslog_loglevel | LOG_NOTICE |
        +-----------------------+------------+
        1 row in set (0.00 sec)

        mysql> set global audit_syslog_loglevel="LOG_ERROR";
        Query OK, 0 rows affected (0.00 sec)

        mysql> show global status like "audit%";
        +--------------------------------+-------+
        | Variable_name                  | Value |
        +--------------------------------+-------+
        | Audit_syslog_connection_events | 1     |
        | Audit_syslog_general_events    | 11    |
        | Audit_syslog_total_calls       | 12    |
        +--------------------------------+-------+
        3 rows in set (0.00 sec)

        mysql> show global variables like "audit%";
        +-----------------------+-----------+
        | Variable_name         | Value     |
        +-----------------------+-----------+
        | audit_syslog_loglevel | LOG_ERROR |
        +-----------------------+-----------+
        1 row in set (0.00 sec)

    So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details, and provides a mechanism to change the logging level interactively via the standard system method of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved, simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available for purchase.

  • SQLAuthority News – Top 5 Latest Microsoft Certifications of 2013 – Guest Post

    - by Pinal Dave
    With the IT job market getting more and more competitive by the day, certifications are a must for anyone who wishes to get a strong foothold in the industry. The Microsoft community comes up with regular updates and enhancements to its existing products to keep up with the rapidly evolving requirements of the ICT industry. We bring you a list of the latest Microsoft certifications that you must consider acquiring this year.

    MCSE: SharePoint

    Learn all about Windows Server 2012 and Microsoft SharePoint 2013, which brings an advanced set of features to the fore in this latest version. It introduces new capabilities for business intelligence, social media, branding, search, identity management and mobile devices, among other features. Enjoy a great user experience with sharing and collaboration in community forums, within a pixel-perfect SharePoint website. Data connectivity and business intelligence tools allow users to process and access data, analyze reports, and share and collaborate with each other more conveniently.

    Microsoft Specialist: Microsoft Project 2013

    The only project management system that works seamlessly with other Microsoft applications and cloud solutions, MS Project 2013 offers more than meets the eye. It provides for easier management and monitoring of projects so that users can ensure timely delivery while improving productivity significantly. So keep all your projects on track and collaborate with your team like never before with this enhanced release! This one's a must for all project managers.

    MCSE Messaging

    Another one of Microsoft's gems is its messaging environment, which has also seen a new release, Microsoft Exchange Server 2013. Messaging administrators can take up this training and validate their expertise in Unified Messaging, Exchange Online, PowerShell and virtualization strategies, through the MCSE Messaging certification in Exchange Server. If you wish to enhance the productivity and data security of your organization while being flexible and extremely efficient, this is the right certification for you.

    MCSE Communication

    An enterprise can function optimally on the strength of its information flow and communication systems. With Lync Server 2013, you can introduce a whole new world of unified communications, consisting of audio/video conferencing, dial-in, Persistent Chat, instant chat, and EDGE services, in your organization. Utilize IT to serve and support business objectives by mastering this UC technology with this latest MCSE Communication course on using Microsoft Lync Server 2013.

    MCSE: SQL Server 2012 BI Platform

    The decision-making process is largely influenced by the underlying enterprise information used by management for business intelligence. Therefore, a robust business intelligence platform that anchors enterprise IT and transforms it into operational efficiencies is the need of the hour. The SQL Server 2012 BI Platform certification helps professionals implement, manage and maintain a BI database infrastructure effectively. IT professionals with BI skills are highly sought after these days.

    MCSD: Windows Store Apps

    A Microsoft Certified Solutions Developer certification in Windows Store apps validates your potential in designing interactive apps. Learn the Essentials of Developing Windows Store Apps using HTML5 and JavaScript, and establish yourself as an ace developer capable of creating fast and fluid Metro-style apps for Windows 8 that are accessible on a variety of devices.
    You can also go ahead and learn the Essentials of Developing Windows Store Apps using C# if you are already familiar and working with the C# programming language. Developers are thus free to choose their own favorite development stream, which opens doors for them to get ready for the latest and most exciting application development platform, Windows Store apps. Software developers with these skills are in great demand in the industry today.

    In order to remain competitive in their respective fields, it is imperative that IT personnel update their knowledge on a regular basis. Certifications are a means to achieve this goal. No longer considered an optional pre-requisite, major IT certifications such as these are now essential to stay afloat in a cut-throat industry where technologies change on a daily basis.

    This blog is written by Aruneet Anand of Koenig Solutions. Koenig Solutions does training for all of the above courses. For more information, visit the website.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Access Denied

    - by Tony Davis
    When Microsoft executives wake up in the night screaming, I suspect they are having a nightmare about their own version of Frankenstein's monster. Created with the best of intentions, without thinking too hard about the long-term strategy, and having long outlived its usefulness, the monster still lives on, occasionally wreaking vengeance on the innocent. Its name is Access; a living synthesis of disparate body parts that is resistant to all attempts at a mercy-killing.

    In 1986, Microsoft had no database products, and needed one for their new OS/2 operating system, the successor to MS-DOS. They bought exclusive rights to Sybase DataServer, and were also intent on developing a desktop database to capture Ashton-Tate's dominance of that market with dBASE. This project, first called 'Omega' and later 'Cirrus', eventually spawned two products: Visual Basic in 1991 and Access in late 1992. Whereas Visual Basic battled with PowerBuilder for dominance in the client-server market, Access easily won the desktop database battle, with dBASE III and DataEase falling away.

    Access did an excellent job of abstracting and simplifying the task of building small database applications in a short amount of time, for a small number of departmental users, and often for a transient requirement. There is an excellent front end and forms generator; we see it not only in Access, but parts of it also reappear in SSMS. It's good. A business user can pull together useful reports without relying on extensive technical support. A skilled Access programmer can deliver a fairly sophisticated application whilst the traditional client-server programmer is still sharpening his pencil. Even for the SQL Server programmer, the forms generator of Access is useful for sketching out application designs.

    So far, so good, but here's where the problems start: Access ties together two different products, and the back end of Access is the bugbear. The limitations of Jet/ACE are well known and documented. They range from MDB files that are prone to corruption, especially as they grow in size, to pathetic security and "copy and paste" backups. The biggest problem, though, was an infamous lack of scalability. Because Microsoft never realized how long the product would last, they put little energy into improving the beast.

    Microsoft 'ate their own dog food' by using Access for Microsoft Exchange and Outlook. They choked on it. For years, scalability and performance problems with Exchange Server have been laid at the door of the Jet Blue engine on which it relies. Substantial development work in Exchange 2010 was required just to improve the engine and storage schema so that it handled the reading and writing of mails more efficiently. The alternative of using SQL Server just never panned out.

    The Jet engine was designed to limit concurrent users to a small number (10-20). When Access applications outgrew this, bitter experience proved that there really is no easy upgrade path from Access to SQL Server, beyond rewriting the whole lot from scratch. The various initiatives to do this never quite bridged the cultural gulf between Access and a true relational database.

    So, what are the obvious alternatives for small, strategic database applications? I know many users who, for simple 'list maintenance' requirements, are very happy using Excel databases. Surely, now that PowerPivot has led the way, it is time for Microsoft to offer a new RAD package for database application development; namely an Excel-based front end for SQL Server Express. That way, we'd have a powerful and familiar front end to a scalable database, and a clear upgrade path when an app takes off and needs to go enterprise.

    Cheers,
    Tony.

    Read the article

  • wsdl2java exception

    - by Daniel
    java org.apache.axis2.wsdl.WSDL2Java -s -p studs.exchange -uri https://api.betfair.com/exchange/v5/BFExchangeService.wsdl

    Retrieving document at 'https://api.betfair.com/exchange/v5/BFExchangeService.wsdl'.
    log4j:WARN No appenders could be found for logger (org.apache.axis2.description.WSDL11ToAllAxisServicesBuilder).
    log4j:WARN Please initialize the log4j system properly.
    Exception in thread "main" org.apache.axis2.wsdl.codegen.CodeGenerationException: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.generate(CodeGenerationEngine.java:271)
        at org.apache.axis2.wsdl.WSDL2Code.main(WSDL2Code.java:35)
        at org.apache.axis2.wsdl.WSDL2Java.main(WSDL2Java.java:24)
    Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.axis2.wsdl.codegen.extension.SimpleDBExtension.engage(SimpleDBExtension.java:53)
        at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.generate(CodeGenerationEngine.java:224)
        ... 2 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.axis2.wsdl.codegen.extension.SimpleDBExtension.engage(SimpleDBExtension.java:50)
        ... 3 more
    Caused by: java.lang.NoSuchMethodError: org.apache.ws.commons.schema.XmlSchema.getTypeByName(Ljava/lang/String;)Lorg/apache/ws/commons/schema/XmlSchemaType;
        at org.apache.axis2.schema.SchemaCompiler.isComponetExists(SchemaCompiler.java:2728)
        at org.apache.axis2.schema.SchemaCompiler.getParentSchemaFromIncludes(SchemaCompiler.java:2670)
        at org.apache.axis2.schema.SchemaCompiler.getParentSchemaFromIncludes(SchemaCompiler.java:2704)
        at org.apache.axis2.schema.SchemaCompiler.getParentSchema(SchemaCompiler.java:2644)
        at org.apache.axis2.schema.SchemaCompiler.processElement(SchemaCompiler.java:758)
        at org.apache.axis2.schema.SchemaCompiler.processElement(SchemaCompiler.java:552)
        at org.apache.axis2.schema.SchemaCompiler.process(SchemaCompiler.java:1991)
        at org.apache.axis2.schema.SchemaCompiler.processParticle(SchemaCompiler.java:1874)
        at org.apache.axis2.schema.SchemaCompiler.processComplexType(SchemaCompiler.java:1081)
        at org.apache.axis2.schema.SchemaCompiler.processAnonymousComplexSchemaType(SchemaCompiler.java:980)
        at org.apache.axis2.schema.SchemaCompiler.processSchema(SchemaCompiler.java:934)
        at org.apache.axis2.schema.SchemaCompiler.processElement(SchemaCompiler.java:592)
        at org.apache.axis2.schema.SchemaCompiler.processElement(SchemaCompiler.java:563)
        at org.apache.axis2.schema.SchemaCompiler.compile(SchemaCompiler.java:370)
        at org.apache.axis2.schema.SchemaCompiler.compile(SchemaCompiler.java:280)
        at org.apache.axis2.schema.ExtensionUtility.invoke(ExtensionUtility.java:103)
        ... 8 more

    What is happening here? And what about the log4j warnings?

    Read the article

  • Guidelines for using Merge task in SSIS

    - by thursdaysgeek
    I have a table with three fields, one an identity field, and I need to add some new records from a source that has the other two fields. I'm using SSIS, and I think I should use the Merge tool, because one of the sources is not in the local database. But I'm confused by the Merge tool and the proper process.

    I have my one source (an Oracle table), and I get two fields, well_id and well_name, with a sort after, sorting by well_id. I have the destination table (SQL Server), and I'm also using that as a source. It has three fields: well_key (identity field), well_id, and well_name, and I then have a sort task, sorting on well_id. Both of those are input to my Merge task. I was going to output to a temporary table, and then somehow get the new records back into the SQL Server table.

    Oracle Well     SQL Well
        |               |
        V               V
    Sort Source     Sort Well
        |               |
        +---> Merge* <--+
                |
                V
         Temp well table

    I suspect this isn't the best way to use this tool, however. What are the proper steps for a merge like this? One of my reasons for questioning this method is that my Merge has an error, telling me that "Merge Input 2" must be sorted, but its source is a sort task, so it IS sorted.

    Example data:

    SQL Well (before merge)
    well_key  well_id  well_name
       1        123    well k
       2        292    well c
       3        344    well t
       5        439    well d

    Oracle Well
    well_id  well_name
      123    well k
      292    well c
      311    well y
      344    well t
      439    well d
      532    well j

    SQL Well (after merge)
    well_key  well_id  well_name
       1        123    well k
       2        292    well c
       3        344    well t
       5        439    well d
       6        311    well y
       7        532    well j

    Would it be better to load my Oracle Well to a temporary local file, and then just use a SQL insert statement on it?
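
    For reference, the set logic being modelled here is a simple anti-join on well_id: insert the source rows whose keys are missing from the destination. A minimal illustrative sketch of that logic (plain Python on the sample data above, not SSIS) looks like this:

        # Illustrative only: the anti-join on well_id that the Merge step is
        # being used to approximate. well_key is an identity column, so new
        # rows carry only (well_id, well_name) and the key is assigned on insert.
        sql_wells    = {123: "well k", 292: "well c", 344: "well t", 439: "well d"}
        oracle_wells = {123: "well k", 292: "well c", 311: "well y",
                        344: "well t", 439: "well d", 532: "well j"}

        new_ids = sorted(set(oracle_wells) - set(sql_wells))
        rows_to_insert = [(wid, oracle_wells[wid]) for wid in new_ids]
        print(rows_to_insert)  # [(311, 'well y'), (532, 'well j')]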

    Read the article

  • Posting messages in two RabbitMQ queues, instead of one (using py-amqp)

    - by Khelben
    I've got this strange problem using py-amqp and the Flopsy module. I have written a publisher that sends messages to a RabbitMQ server, and I wanted to be able to send them to a specified queue. In the Flopsy module that is not possible, so I tweaked it, adding a parameter and a line to declare the queue in the __init__ method of the Publisher object:

        def __init__(self, routing_key=DEFAULT_ROUTING_KEY,
                     exchange=DEFAULT_EXCHANGE, connection=None,
                     delivery_mode=DEFAULT_DELIVERY_MODE, queue=DEFAULT_QUEUE):
            self.connection = connection or Connection()
            self.channel = self.connection.connection.channel()
            self.channel.queue_declare(queue)  # ADDED TO SET UP QUEUE
            self.exchange = exchange
            self.routing_key = routing_key
            self.delivery_mode = delivery_mode

    The channel object is part of the py-amqplib library. The problem I've got is that, even though it's sending the messages to the specified queue, it's ALSO sending the messages to the default queue. As this system is expected to send quite a lot of messages, we don't want to stress it by creating useless duplicates. I've tried to debug the code and step into the py-amqplib library, but I'm not able to figure out any error or missing step. Also, I'm not able to find any documentation for py-amqplib outside the code. Any ideas on why this is happening and how to correct it?
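
    For reference, one thing worth checking in setups like this: in AMQP, queue_declare only creates the queue; which queues actually receive a message is decided by the exchange and its bindings, so a freshly declared queue usually needs an explicit queue_bind for the routing key being published. A minimal sketch of declare-plus-bind, assuming py-amqplib's 0.8 client API and purely hypothetical broker and queue names:

        from amqplib import client_0_8 as amqp

        # Hypothetical connection details; adjust for your broker.
        conn = amqp.Connection(host="localhost:5672", userid="guest",
                               password="guest", virtual_host="/")
        chan = conn.channel()

        # Declare the queue and the exchange, then bind them so that messages
        # published to this exchange with this routing key reach only this queue.
        chan.queue_declare(queue="my_queue", durable=True,
                           exclusive=False, auto_delete=False)
        chan.exchange_declare(exchange="my_exchange", type="direct",
                              durable=True, auto_delete=False)
        chan.queue_bind(queue="my_queue", exchange="my_exchange",
                        routing_key="my_routing_key")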

    Read the article

  • How to exploit Diffie-Hellman to perform a man-in-the-middle attack

    - by jfisk
    I'm doing a project where Alice and Bob send each other messages using the Diffie-Hellman key exchange. What is throwing me for a loop is how to incorporate the certificate they are using, so that I can obtain their secret messages. From what I understand about MITM attacks, the MITM acts as an impostor, as seen in this diagram:

    Below are the details for my project. I understand that they both have g and p agreed upon before communicating, but how would I be able to implement this when they both have a certificate to verify their signatures?

    Alice prepares ⟨signA(NA, Bob), pkA, certA⟩, where signA is the digital signature algorithm used by Alice, "Bob" is Bob's name, pkA is the public key of Alice, which equals g^x mod p encoded according to X.509 for fixed g, p as specified in the Diffie-Hellman key exchange, and certA is the certificate of Alice that contains Alice's public key that verifies the signature. Finally, NA is a nonce (random string) that is 8 bytes long.

    Bob checks Alice's signature and responds with ⟨signB{NA, NB, Alice}, pkB, certB⟩.

    Alice gets the message, checks her nonce NA, and calculates the joint key based on pkA, pkB according to the Diffie-Hellman key exchange. Then Alice submits the message ⟨signA{NA, NB, Bob}, EK(MA), certA⟩ to Bob, and Bob responds with ⟨signB{NA, NB, Alice}, EK(MB), certB⟩, where MA and MB are their corresponding secret messages.
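
    As an aside on the arithmetic underneath (the signed certificates in the protocol above exist precisely to defeat this), here is a toy sketch, with deliberately tiny and insecure numbers, of how a classic MITM ends up holding one shared key with each party:

        # Toy Diffie-Hellman MITM arithmetic with deliberately tiny, insecure numbers.
        p, g = 23, 5                       # public group parameters (toy values)
        x, y, m = 6, 15, 13                # Alice's, Bob's and Mallory's secrets

        pkA = pow(g, x, p)                 # Alice sends g^x mod p
        pkB = pow(g, y, p)                 # Bob sends g^y mod p
        pkM = pow(g, m, p)                 # Mallory substitutes g^m mod p both ways

        key_with_alice = pow(pkA, m, p)    # Mallory's key shared with Alice
        key_with_bob   = pow(pkB, m, p)    # Mallory's key shared with Bob

        assert key_with_alice == pow(pkM, x, p)   # what Alice computes
        assert key_with_bob == pow(pkM, y, p)     # what Bob computes
        print(key_with_alice, key_with_bob)       # two distinct session keys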

    Read the article

  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests we experience significant performance problems with handling JMS messages (queues). The problem was localized to blocking in the database connection pool when getting or releasing connections to the pool. The blocking prevented concurrent MDB instances (threads) from running, hence performance suffered 10-fold or worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all.

    Example of a blocked thread:

    Name: JMS Resource Adapter-worker-23
    State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a owned by: JMS Resource Adapter-worker-19
    Total blocked: 18,426 Total waited: 0
    Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
        - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well).

    Are there any DBCP configurations that may fix the blocking? Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) would it be to replace it with another library?

    Just in case, this is how we define the data source in OpenEJB (openejb.xml):

    <Resource id="MyDataSource" type="DataSource">
        JdbcDriver oracle.jdbc.driver.OracleDriver
        JdbcUrl ${oracle.jdbc}
        UserName ${oracle.user}
        Password ${oracle.password}
        JtaManaged true
        InitialSize 5
        MaxActive 30
        ValidationQuery SELECT 1 FROM DUAL
        TestOnBorrow true
    </Resource>

    Read the article

  • How to test a struts 2.1.x developer?

    - by Jason Pyeron
    We employ tests to filter out those who can't. The tests are designed to be very low effort for those who can and too much effort for those who can't. Here is an example for a Java web application developer on an Oracle project:

    We only work with contractors who can use the tools we use; to determine whether you can use the tools, we have devised some very simple tests.

    Instructions
    If you are prepared and knowledgeable, this will take you about 2-5 minutes. Suggested knowledge and tools:
    * Subversion 1.6, see http://subversion.tigris.org/ or http://cygwin.com/setup.exe
    * Java 1.6, see http://java.sun.com/javase/downloads/index.jsp
    * Oracle >= 10g, see http://www.oracle.com/technology/software/products/jdev/index.html
    * a J2EE server, see http://tomcat.apache.org/download-55.cgi or http://www.jboss.org/jbossas/downloads/

    Steps
    1. Check out svn://statics32.pdinc.us/home/subversion/guest
    2. Deploy the war file found at trunk/test.war
    3. Browse to the web application you installed from the war file and answer the one SQL question: how many rows are in the table 'testdata' where column 'value' ends with either an 'A' or an 'a'? The login credentials are in trunk/doc/oracle.txt
    4. Make a RESULTS HASH by submitting your answer to the form.
    5. Create a file in tmp/YourUserName.txt and put the RESULTS HASH in it, not the answer.
    6. Check in your file (don't forget to add the file first).
    7. Message me with the revision number of your check-in.

    As such, I am looking for ideas on how to test whether someone can work as a Struts 2.1 with-annotations developer.
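
    As an aside, the SQL check in step 3 boils down to a case-insensitive suffix match; a throwaway sketch of the same condition on made-up data (plain Python, not part of the actual test):

        # Made-up rows standing in for the 'value' column of 'testdata'.
        values = ["alpha", "BETA", "Gamma", "delta", "epsilon"]
        # Count values ending with 'A' or 'a', i.e. a case-insensitive suffix match.
        count = sum(1 for v in values if v.lower().endswith("a"))
        print(count)  # 4 for this sample data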

    Read the article

  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... We have an order refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange... Verification has to be attempted at most N times, at intervals of T, for all orders. If verification succeeds for all the orders within N retries, fine; otherwise we need to flag the verification as unsuccessful. I have done some basic coding, in a hurry, like below:

        for( N times )
        {
            for_each ( sent_request_order ) // SENT
            {
                1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                2) find the current sent order in REFRESHED
                if( not_found ) // not refreshed from exchange, continue to next order
                if( found )
                    case NEW    : // check for new status, mark verification done
                    case MODIFY : // check for modified status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
                    case CANCEL : // check for cancelled status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
            }
            if( all_verified ) exit from verification.
            wait ( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified... a lot of boolean flags used for indication. Is there any better approach for doing this, such as splitting responsibilities across the classes? I know this is not a general question, but still, the flags are making this tedious to handle...
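
    One common way out of boolean-flag soup is to give each order a single state drawn from a small enumeration and drive the retry loop off those states. A minimal sketch of the idea, written in Python for brevity (the same shape maps directly onto a C++ enum class plus a std::map), with all names hypothetical:

        from enum import Enum

        class VerifyState(Enum):
            NOT_VISITED = 0
            PENDING = 1
            DONE = 2
            FAILED = 3

        # One state per order replaces the pile of paired boolean flags.
        states = {order_id: VerifyState.NOT_VISITED for order_id in ("ord-1", "ord-2")}

        def verify_pass(refreshed):
            """One verification pass; returns True when every order is DONE."""
            for order_id, state in states.items():
                if state is VerifyState.DONE:
                    continue                                  # already confirmed
                if order_id not in refreshed:
                    states[order_id] = VerifyState.PENDING    # revisit after T
                else:
                    states[order_id] = VerifyState.DONE       # status matched
            return all(s is VerifyState.DONE for s in states.values())

        print(verify_pass({"ord-1": "NEW_CONFIRMED"}))         # False: ord-2 pending
        print(verify_pass({"ord-1": "NEW_CONFIRMED",
                           "ord-2": "CANCEL_CONFIRMED"}))      # True: all verified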

    Read the article

  • Import-PSSession is not importing cmdlets when used in a custom module

    - by Douglas Plumley
    I have a PowerShell script/function that works great when I use it in my PowerShell profile or manually copy/paste the function into the PowerShell window. I'm trying to make the function accessible to other members of my team as a module. I want to have the module stored in a central place so we can all add it to our PSModulePath. Here is a copy of the basic function:

        Function Connect-O365{
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }

    If I save this function in my PowerShell profile it works fine. I can dot source a *.ps1 script with this function in it and it works as well. The issue is when I save the function as a *.psm1 PowerShell script module. The function runs fine, but none of the exported commands from the Import-PSSession are available. I think this may have something to do with the module scope. I'm looking for suggestions on how to get around this.

    EDIT
    When I create the following module and run Connect-O365, the imported cmdlets will not be available:

        $scriptblock = {
            Function Connect-O365{
                $o365cred = Get-Credential [email protected]
                $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
                Import-PSSession $session365 -AllowClobber
            }
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    When I import the next module, without the Connect-O365 function, the imported cmdlets are available:

        $scriptblock = {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    This appears to be a scope issue of some sort, just not sure how to get around it.

    Read the article

  • Duplicate C# web service proxy classes generated for Java types

    - by Sergey
    My question is about integration between a Java web service and a C# .NET client.
    Service: CXF 2.2.3 with Aegis databinding
    Client: C#, .NET 3.5 SP1

    For some reason Visual Studio generates two C# proxy enums for each Java enum, and the generated C# classes do not compile. For example, this Java enum:

        public enum SqlDialect {
            GENERIC, SYBASE, SQL_SERVER, ORACLE;
        }

    Produces this WSDL:

        <xsd:simpleType name="SqlDialect">
            <xsd:restriction base="xsd:string">
                <xsd:enumeration value="GENERIC" />
                <xsd:enumeration value="SYBASE" />
                <xsd:enumeration value="SQL_SERVER" />
                <xsd:enumeration value="ORACLE" />
            </xsd:restriction>
        </xsd:simpleType>

    For this WSDL Visual Studio generates two partial C# classes (generated comments removed):

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")]
        [System.Runtime.Serialization.DataContractAttribute(Name="SqlDialect", Namespace="http://somenamespace")]
        public enum SqlDialect : int
        {
            [System.Runtime.Serialization.EnumMemberAttribute()]
            GENERIC = 0,
            [System.Runtime.Serialization.EnumMemberAttribute()]
            SYBASE = 1,
            [System.Runtime.Serialization.EnumMemberAttribute()]
            SQL_SERVER = 2,
            [System.Runtime.Serialization.EnumMemberAttribute()]
            ORACLE = 3,
        }

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")]
        [System.SerializableAttribute()]
        [System.Xml.Serialization.XmlTypeAttribute(Namespace="http://somenamespace")]
        public enum SqlDialect
        {
            GENERIC,
            SYBASE,
            SQL_SERVER,
            ORACLE,
        }

    The resulting C# code does not compile: The namespace 'somenamespace' already contains a definition for 'SqlDialect'. I will appreciate any ideas...

    Read the article

  • Scripting with the Sun ZFS Storage 7000 Appliance

    - by Geoff Ongley
    The Sun ZFS Storage 7000 appliance has a user friendly and easy to understand graphical web based interface we call the "BUI" or "Browser User Interface".This interface is very useful for many tasks, but in some cases a script (or workflow) may be more appropriate, such as:Repetitive tasksTasks which work on (or obtain information about) a large number of shares or usersTasks which are triggered by an alert threshold (workflows)Tasks where you want a only very basic input, but a consistent output (workflows)The appliance scripting language is based on ECMAscript 3 (close to javascript). I'm not going to cover ECMAscript 3 in great depth (I'm far from an expert here), but I would like to show you some neat things you can do with the appliance, to get you started based on what I have found from my own playing around.I'm making the assumption you have some sort of programming background, and understand variables, arrays, functions to some extent - but of course if something is not clear, please let me know so I can fix it up or clarify it.Variable Declarations and ArraysVariablesECMAScript is a dynamically and weakly typed language. If you don't know what that means, google is your friend - but at a high level it means we can just declare variables with no specific type and on the fly.For example, I can declare a variable and use it straight away in the middle of my code, for example:projects=list();Which makes projects an array of values that are returned from the list(); function (which is usable in most contexts). With this kind of variable, I can do things like:projects.length (this property on array tells you how many objects are in it, good for for loops etc). Alternatively, I could say:projects=3;and now projects is just a simple number.Should we declare variables like this so loosely? In my opinion, the answer is no - I feel it is a better practice to declare variables you are going to use, before you use them - and given them an initial value. You can do so as follows:var myVariable=0;To demonstrate the ability to just randomly assign and change the type of variables, you can create a simple script at the cli as follows (bold for input):fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("Number of projects is: %d\n",projects.length);("." to run)> projects=152;("." to run)> printf("Value of the projects variable as an integer is now: %d\n",projects);("." to run)> .Number of projects is: 7Value of the projects variable as an integer is now: 152You can also confirm this behaviour by checking the typeof variable we are dealing with:fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> projects=152;("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> .var projects is of type objectvar projects is of type numberArraysSo you likely noticed that we have already touched on arrays, as the list(); (in the shares context) stored an array into the 'projects' variable.But what if you want to declare your own array? Easy! This is very similar to Java and other languages, we just instantiate a brand new "Array" object using the keyword new:var myArray = new Array();will create an array called "myArray".A quick example:fishy10:> script("." to run)> testArray = new Array();("." to run)> testArray[0]="This";("." 
to run)> testArray[1]="is";("." to run)> testArray[2]="just";("." to run)> testArray[3]="a";("." to run)> testArray[4]="test";("." to run)> for (i=0; i < testArray.length; i++)("." to run)> {("." to run)>    printf("Array element %d is %s\n",i,testArray[i]);("." to run)> }("." to run)> .Array element 0 is ThisArray element 1 is isArray element 2 is justArray element 3 is aArray element 4 is testWorking With LoopsFor LoopFor loops are very similar to those you will see in C, java and several other languages. One of the key differences here is, as you were made aware earlier, we can be a bit more sloppy with our variable declarations.The general way you would likely use a for loop is as follows:for (variable; test-case; modifier for variable){}For example, you may wish to declare a variable i as 0; and a MAX_ITERATIONS variable to determine how many times this loop should repeat:var i=0;var MAX_ITERATIONS=10;And then, use this variable to be tested against some case existing (has i reached MAX_ITERATIONS? - if not, increment i using i++);for (i=0; i < MAX_ITERATIONS; i++){ // some work to do}So lets run something like this on the appliance:fishy10:> script("." to run)> var i=0;("." to run)> var MAX_ITERATIONS=10;("." to run)> for (i=0; i < MAX_ITERATIONS; i++)("." to run)> {("." to run)>    printf("The number is %d\n",i);("." to run)> }("." to run)> .The number is 0The number is 1The number is 2The number is 3The number is 4The number is 5The number is 6The number is 7The number is 8The number is 9While LoopWhile loops again are very similar to other languages, we loop "while" a condition is met. For example:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." to run)> printf("Loop has ended and Counter is %d\n",counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Counter is 10Loop has ended and Counter is 11So what do we notice here? Something has actually gone wrong - counter will technically be 11 once the loop completes... Why is this?Well, if we have a loop like this, where the 'while' condition that will end the loop may be set based on some other condition(s) existing (such as the counter has reached 10) - we must ensure that we  terminate this iteration of the loop when the condition is met - otherwise the rest of the code will be followed which may not be desirable. In other words, like in other languages, we will only ever check the loop condition once we are ready to perform the next iteration, so any other code after we set "isTen" to be true, will still be executed as we can see it was above.We can avoid this by adding a break into our loop once we know we have set the condition - this will stop the rest of the logic being processed in this iteration (and as such, counter will not be incremented). So lets try that again:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>            break;("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." 
to run)> printf("Loop has ended and Counter is %d\n", counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Loop has ended and Counter is 10Much better!Methods to Obtain and Manipulate DataGet MethodThe get method allows you to get simple properties from an object, for example a quota from a user. The syntax is fairly simple:var myVariable=get('property');An example of where you may wish to use this, is when you are getting a bunch of information about a user (such as quota information when in a shares context):var users=list();for(k=0; k < users.length; k++){     user=users[k];     run('select ' + user);     var username=get('name');     var usage=get('usage');     var quota=get('quota');...Which you can then use to your advantage - to print or manipulate infomation (you could change a user's information with a set method, based on the information returned from the get method). The set method is explained next.Set MethodThe set method can be used in a simple manner, similar to get. The syntax for set is:set('property','value'); // where value is a string, if it was a number, you don't need quotesFor example, we could set the quota on a share as follows (first observing the initial value):fishy10:shares default/test-geoff> script("." to run)> var currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> set('quota','30G');("." to run)> run('commit');("." to run)> currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> .Current Quota is: 0Current Quota is: 32212254720This shows us using both the get and set methods as can be used in scripts, of course when only setting an individual share, the above is overkill - it would be much easier to set it manually at the cli using 'set quota=3G' and then 'commit'.List MethodThe list method can be very powerful, especially in more complex scripts which iterate over large amounts of data and manipulate it if so desired. The general way you will use list is as follows:var myVar=list();Which will make "myVar" an array, containing all the objects in the relevant context (this could be a list of users, shares, projects, etc). You can then gather or manipulate data very easily.We could list all the shares and mountpoints in a given project for example:fishy10:shares another-project> script("." to run)> var shares=list();("." to run)> for (i=0; i < shares.length; i++)("." to run)> {("." to run)>    run('select ' + shares[i]);("." to run)>    var mountpoint=get('mountpoint');("." to run)>    printf("Share %s discovered, has mountpoint %s\n",shares[i],mountpoint);("." to run)>    run('done');("." to run)> }("." to run)> .Share and-another discovered, has mountpoint /export/another-project/and-anotherShare another-share discovered, has mountpoint /export/another-project/another-shareShare bob discovered, has mountpoint /export/another-projectShare more-shares-for-all discovered, has mountpoint /export/another-project/more-shares-for-allShare yep discovered, has mountpoint /export/another-project/yepWriting More Complex and Re-Usable CodeFunctionsThe best way to be able to write more complex code is to use functions to split up repeatable or reusable sections of your code. 
This also makes your more complex code easier to read and understand for other programmers.We write functions as follows:function functionName(variable1,variable2,...,variableN){}For example, we could have a function that takes a project name as input, and lists shares for that project (assuming we're already in the 'project' context - context is important!):function getShares(proj){        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                printf("Discovered share: %s\n",shares[i]);        }        run('done'); // exit selected project}Commenting your CodeLike any other language, a large part of making it readable and understandable is to comment it. You can use the same comment style as in C and Java amongst other languages.In other words, sngle line comments use://at the beginning of the comment.Multi line comments use:/*at the beginning, and:*/ at the end.For example, here we will use both:fishy10:> script("." to run)> // This is a test comment("." to run)> printf("doing some work...\n");("." to run)> /* This is a multi-line("." to run)> comment which I will span across("." to run)> three lines in total */("." to run)> printf("doing some more work...\n");("." to run)> .doing some work...doing some more work...Your comments do not have to be on their own, they can begin (particularly with single line comments this is handy) at the end of a statement, for examplevar projects=list(); // The variable projects is an array containing all projects on the system.Try and Catch StatementsYou may be used to using try and catch statements in other languages, and they can (and should) be utilised in your code to catch expected or unexpected error conditions, that you do NOT wish to stop your code from executing (if you do not catch these errors, your script will exit!):try{  // do some work}catch(err) // Catch any error that could occur{ // do something here under the error condition}For example, you may wish to only execute some code if a context can be reached. 
If you can't perform certain actions under certain circumstances, that may be perfectly acceptable.For example if you want to test a condition that only makes sense when looking at a SMB/NFS share, but does not make sense when you hit an iscsi or FC LUN, you don't want to stop all processing of other shares you may not have covered yet.For example we may wish to obtain quota information on all shares for all users on a share (but this makes no sense for a LUN):function getShareQuota(shar) // Get quota for each user of this share{        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","----");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}Running Scripts Remotely over SSHAs you have likely noticed, writing and running scripts for all but the simplest jobs directly on the appliance is not going to be a lot of fun.There's a couple of choices on what you can do here:Create scripts on a remote system and run them over sshCreate scripts, wrapping them in workflow code, so they are stored on the appliance and can be triggered under certain circumstances (like a threshold being reached)We'll cover the first one here, and then cover workflows later on (as these are for the most part just scripts with some wrapper information around them).Creating a SSH Public/Private SSH Key PairLog on to your handy Solaris box (You wouldn't be using any other OS, right? :P) and use ssh-keygen to create a pair of ssh keys. I'm storing this separate to my normal key:[geoff@lightning ~] ssh-keygen -t rsa -b 1024Generating public/private rsa key pair.Enter file in which to save the key (/export/home/geoff/.ssh/id_rsa): /export/home/geoff/.ssh/nas_key_rsaEnter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /export/home/geoff/.ssh/nas_key_rsa.Your public key has been saved in /export/home/geoff/.ssh/nas_key_rsa.pub.The key fingerprint is:7f:3d:53:f0:2a:5e:8b:2d:94:2a:55:77:66:5c:9b:14 geoff@lightningInstalling the Public Key on the ApplianceOn your Solaris host, observe the public key:[geoff@lightning ~] cat .ssh/nas_key_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc= geoff@lightningNow, copy and paste everything after "ssh-rsa" and before "user@hostname" - in this case, geoff@lightning. 
That is, this bit:AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc=Logon to your appliance and get into the preferences -> keys area for this user (root):[geoff@lightning ~] ssh [email protected]: Last login: Mon Dec  6 17:13:28 2010 from 192.168.0.2fishy10:> configuration usersfishy10:configuration users> select rootfishy10:configuration users root> preferences fishy10:configuration users root preferences> keysOR do it all in one hit:fishy10:> configuration users select root preferences keysNow, we create a new public key that will be accepted for this user and set the type to RSA:fishy10:configuration users root preferences keys> createfishy10:configuration users root preferences key (uncommitted)> set type=RSASet the key itself using the string copied previously (between ssh-rsa and user@host), and set the key ensuring you put double quotes around it (eg. set key="<key>"):fishy10:configuration users root preferences key (uncommitted)> set key="AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc="Now set the comment for this key (do not use spaces):fishy10:configuration users root preferences key (uncommitted)> set comment="LightningRSAKey" Commit the new key:fishy10:configuration users root preferences key (uncommitted)> commitVerify the key is there:fishy10:configuration users root preferences keys> lsKeys:NAME     MODIFIED              TYPE   COMMENT                                  key-000  2010-10-25 20:56:42   RSA    cycloneRSAKey                           key-001  2010-12-6 17:44:53    RSA    LightningRSAKey                         As you can see, we now have my new key, and a previous key I have created on this appliance.Running your Script over SSH from a Remote SystemHere I have created a basic test script, and saved it as test.ecma3:[geoff@lightning ~] cat test.ecma3 script// This is a test script, By Geoff Ongley 2010.printf("Testing script remotely over ssh\n");.Now, we can run this script remotely with our keyless login:[geoff@lightning ~] ssh -i .ssh/nas_key_rsa root@fishy10 < test.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Testing script remotely over sshPutting it Together - An Example Completed Quota Gathering ScriptSo now we have a lot of the basics to creating a script, let us do something useful, like, find out how much every user is using, on every share on the system (you will recognise some of the code from my previous examples): script/************************************** Quick and Dirty Quota Check script ** Written By Geoff Ongley            ** 25 October 2010                    **************************************/function getUserQuota(usr){        run('select ' + usr);        var username=get('name');        var usage=get('usage');        var quota=get('quota');        var usage_g=usage / 1073741824; // convert bytes to gigabytes        var quota_g=quota / 1073741824; // as above        var quota_percent=0        if (quota > 0)        {                quota_percent=(usage / quota)*(100/1);        }        printf("    %20s        %8.2f           %8.2f           %d%%\n",username,usage_g,quota_g,quota_percent);        run('done'); // done with this selected user}function getShareQuota(shar){        //printf("DEBUG: selecting share 
%s\n", shar);        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","--------");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}function getShares(proj){        //printf("DEBUG: selecting project %s\n",proj);        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                share=shares[j];                getShareQuota(share);        }        run('done'); // exit selected project}function getProjects(){        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                var project=projects[i];                getShares(project);        }        run('done'); // exit context for all projects}getProjects();.Which can be run as follows, and will print information like this:[geoff@lightning ~/FISHWORKS_SCRIPTS] ssh -i ~/.ssh/nas_key_rsa root@fishy10 < get_quota_utilisation.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Project: another-project  SHARE: and-another                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                 geoffro            0.05            0.00        0%                   Billy            0.10            0.00        0%                    root            0.00            0.00        0%            testing-user            0.05            0.00        0%  SHARE: another-share                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                 geoffro            0.05            0.49        9%            testing-user            0.05            0.02        249%                   Billy            0.10            0.29        33%  SHARE: bob                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%  SHARE: more-shares-for-all                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                   Billy            0.10            0.00        0%            testing-user            0.05            0.00        0%                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%                 geoffro            0.05            0.00        0%  SHARE: yep                Username           Usage(G)       Quota(G)    
Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                   Billy            0.10            0.01        999%            testing-user            0.05            0.49        9%                 geoffro            0.05            0.00        0%Project: default  SHARE: Test-LUN    SKIPPING Test-LUN - This is NOT a NFS or CIFs share, not looking for users  SHARE: test-geoff                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                 geoffro            0.05            0.00        0%                    root            3.18           10.00        31%                    uucp            0.00            0.00        0%                  nobody            0.59            0.49        119%^CKilled by signal 2.Creating a WorkflowWorkflows are scripts that we store on the appliance, and can have the script execute either on request (even from the BUI), or on an event such as a threshold being met.Workflow BasicsA workflow allows you to create a simple process that can be executed either via the BUI interface interactively, or by an alert being raised (for some threshold being reached, for example).The basics parameters you will have to set for your "workflow object" (notice you're creating a variable, that embodies ECMAScript) are as follows (parameters is optional):name: A name for this workflowdescription: A Description for the workflowparameters: A set of input parameters (useful when you need user input to execute the workflow)execute: The code, the script itself to execute, which will be function (parameters)With parameters, you can specify things like this (slightly modified sample taken from the System Administration Guide):          ...parameters:        variableParam1:         {                             label: 'Name of Share',                             type: 'String'                  },                  variableParam2                  {                             label: 'Share Size',                             type: 'size'                  },execute: ....};  Note the commas separating the sections of name, parameters, execute, and so on. This is important!Also - there is plenty of properties you can set on the parameters for your workflow, these are described in the Sun ZFS Storage System Administration Guide.Creating a Basic Workflow from a Basic ScriptTo make a basic script into a basic workflow, you need to wrap the following around your script to create a 'workflow' object:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function() {// (basic script goes here, minus the "script" at the beginning, and "." at the end)}};However, it appears (at least in my experience to date) that the workflow object may only be happy with one function in the execute parameter - either that or I'm doing something wrong. 
As far as I can tell, after execute: you should only have a basic one function context like so:execute: function(){}To deal with this, and to give an example similar to our script earlier, I have created another simple quota check, to show the same basic functionality, but in a workflow format:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function () {        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                run('select ' + projects[i]);                shares=list('filesystem');                printf("Project: %s\n", projects[i]);                for(j=0; j < shares.length; j++)                {                        run('select ' +shares[j]);                        try                        {                                run('users');                                printf("  SHARE: %s\n", shares[j]);                                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","-------");                                users=list();                                for(k=0; k < users.length; k++)                                {                                        run('select ' + users[k]);                                        username=get('name');                                        usage=get('usage');                                        quota=get('quota');                                        usage_g=usage / 1073741824; // convert bytes to gigabytes                                        quota_g=quota / 1073741824; // as above                                        quota_percent=0                                        if (quota > 0)                                        {                                                quota_percent=(usage / quota)*(100/1);                                        }                                        printf("    %20s        %8.2f   %8.2f   %d%%\n",username,usage_g,quota_g,quota_percent);                                        run('done');                                }                                run('done'); // exit user context                        }                        catch(err)                        {                        //      printf("    %s is a LUN, Not looking for users\n", shares[j]);                        }                        run('done'); // exit selected share context                }                run('done'); // exit project context        }        }};SummaryThe Sun ZFS Storage 7000 Appliance offers lots of different and interesting features to Sun/Oracle customers, including the world renowned Analytics. Hopefully the above will help you to think of new creative things you could be doing by taking advantage of one of the other neat features, the internal scripting engine!Some references are below to help you continue learning more, I'll update this post as I do the same! 
Enjoy...

More information on ECMAScript 3
A complete reference to ECMAScript 3, which will help you learn more of the details you may be interested in, can be found here:
http://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf

More Information on Administering the Sun ZFS Storage 7000
The Sun ZFS Storage 7000 System Administration guide can be a useful reference point, and can be found here:
http://wikis.sun.com/download/attachments/186238602/2010_Q3_2_ADMIN.pdf

    Read the article

  • Can't install Perl DBD module on Ubuntu (for Bugzilla)

    - by Mara
    Trying to install bugzilla-4.2.2 on Ubuntu 12.04. When I run checksetup.pl I get the following error:

        YOU MUST RUN ONE OF THE FOLLOWING COMMANDS (depending on which database you use):
        PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
        MySQL: /usr/bin/perl install-module.pl DBD::mysql
        SQLite: /usr/bin/perl install-module.pl DBD::SQLite
        Oracle: /usr/bin/perl install-module.pl DBD::Oracle
        To attempt an automatic install of every required and optional module with one command, do:
        /usr/bin/perl install-module.pl --all

    I have installed MySQL via XAMPP, so I run:

        /usr/bin/perl install-module.pl DBD::mysql

    And get the following error:

        perl Makefile.PL --testuser=username
        Can't exec "mysql_config": No such file or directory at Makefile.PL line 479.
        Can't find mysql_config. Use --mysql_config option to specify where mysql_config is located
        Can't exec "mysql_config": No such file or directory at Makefile.PL line 479.
        Can't find mysql_config. Use --mysql_config option to specify where mysql_config is located
        Can't exec "mysql_config": No such file or directory at Makefile.PL line 479.
        Can't find mysql_config. Use --mysql_config option to specify where mysql_config is located
        Failed to determine directory of mysql.h. Use perl Makefile.PL --cflags=-I<dir> to set this directory. For details see the INSTALL.html file, section "C Compiler flags" or type perl Makefile.PL --help
        Warning: No success on command[/usr/bin/perl Makefile.PL LIB="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib" INSTALLMAN1DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man1" INSTALLMAN3DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man3" INSTALLBIN="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLSCRIPT="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLDIRS=perl] CAPTTOFU/DBD-mysql-4.021.tar.gz
        /usr/bin/perl Makefile.PL LIB="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib" INSTALLMAN1DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man1" INSTALLMAN3DIR="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/man/man3" INSTALLBIN="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLSCRIPT="/opt/lampp/htdocs/bugzilla/4.2.2/bugzilla-4.2.2/lib/bin" INSTALLDIRS=perl -- NOT OK
        Skipping test because of notest pragma
        Running make install
        Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites

    So then I tried checksetup.pl's suggestion and ran:

        /usr/bin/perl install-module.pl --all

    And it seems to have installed DBD::SQLite without any problems, but again I see a warning that says it's skipping tests because of the notest pragma. When I re-run checksetup.pl, it shows three of the four original DB drivers in the "not found" list:

        PostgreSQL: /usr/bin/perl install-module.pl DBD::Pg
        MySQL: /usr/bin/perl install-module.pl DBD::mysql
        Oracle: /usr/bin/perl install-module.pl DBD::Oracle

    So running it with --all seems to have installed the SQLite driver without any problems, but for some reason I can't seem to install the MySQL driver. Again, I need MySQL because that's what XAMPP uses and because I prefer MySQL regardless. I have a feeling it has something to do with this notest pragma error. Any ideas? Thanks in advance!

    Read the article

  • LDAP ssh authentication is super slow... any way to speed it up?

    - by Johnathon
    I am running OpenSUSE. Here is the output of ssh -vvv:

OpenSSH_5.8p1, OpenSSL 1.0.0c 2 Dec 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to <ipaddress> [ipaddress] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug3: Incorrect RSA1 identifier debug3: Could not load "/root/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /root/.ssh/id_rsa type 1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "ipaddress" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /root/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 138/256 debug2: bits set: 529/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA cb:7f:ff:2e:65:28:f0:95:e6:8a:71:24:2a:67:02:2b debug3: load_hostkeys: loading entries for host "<ipaddress>" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /root/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug1: Host '<ipaddress>' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:4 debug2: bits set: 504/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/id_rsa (0xb789d5c8) debug2: key: /root/.ssh/id_dsa ((nil)) debug2: key: /root/.ssh/id_ecdsa ((nil)) debug1: Authentications that can continue: publickey,keyboard-interactive debug3: start over, passed a different list publickey,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /root/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply

It hangs here for a good 30 seconds to a minute, then:

debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug1: Trying private key: /root/.ssh/id_ecdsa debug3: no such identity: /root/.ssh/id_ecdsa debug2: we did not send a packet, disable method debug3: authmethod_lookup keyboard-interactive debug3: remaining preferred: password debug3: authmethod_is_enabled keyboard-interactive debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1

I added PubkeyAuthentication no to /etc/ssh/ssh_config and /etc/ssh/sshd_config, which makes it faster getting to the password prompt, but the password prompt still takes some time. Any way to fix that? Here is where the password hangs:

debug3: packet_send2: adding 32 (len 25 padlen 7 extra_pad 64) debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 0 debug3: packet_send2: adding 48 (len 10 padlen 6 extra_pad 64) debug1: Authentication succeeded (keyboard-interactive). Authenticated to ipaddress ([ipaddress]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session.

FIXED! What I did: in /etc/nsswitch.conf I had ldap included in the group and passwd lines, which slows lookups down a lot. Thank you everybody for your input.

passwd: compat
group: files
hosts: files dns
networks: files dns
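For reference, here is a before/after sketch of the /etc/nsswitch.conf change described above. The "before" lines are an assumption (the post only quotes the fixed file); the point is that an ldap entry on passwd or group forces an LDAP round-trip on every lookup sshd performs during login:

        # /etc/nsswitch.conf -- before (assumed)
        # passwd: compat ldap
        # group:  files ldap

        # after -- local files only, so logins no longer wait on the LDAP server
        passwd:   compat
        group:    files
        hosts:    files dns
        networks: files dns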

    Read the article

  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. The problem seems completely illogical to me, anyways: SBS 2011, vanilla install; I haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer), except to import an existing wildcard certificate for *.example.com (which is valid; Remote Web Workplace and Outlook Web Access work fine).

On the two test machines and one production machine, running a mixture of Windows XP Pro, Windows 7 and Outlook 2003 through to 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat: I have had no issues using the wrong authentication method on these machines. The password saves the first time, no problem; I can verify it exists in the credentials manager (Start > Run > control userpasswords2), close Outlook, reboot, go make a sammie, come back, and the credentials are still saved. When I say wrong, it's because I was choosing NTLM while Exchange (under Exchange Console > Server Configuration > Client Access) was set by default to use Basic.

On two completely different machines set up by a co-worker, they had (under my guidance) used NTLM as well... except that, frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010, the other was Windows 7 with Outlook 2003. When these two machines were set to use Basic -- the correct setting -- the option to save was there and now works without issue.

Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go under Outlook and change the authentication to the correct setting (Basic), it fails to save the password and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke, but something just isn't right here. My two hunches are either a missing/installed KB update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.
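As a side note for anyone comparing client and server settings: on SBS 2011 (Exchange 2010), the server-side Outlook Anywhere authentication method can be read and set from the Exchange Management Shell. A minimal sketch -- the identity string below is the usual single-server default and an assumption here:

        # Show what the server expects clients to use
        Get-OutlookAnywhere | Format-List Identity,ClientAuthenticationMethod,IISAuthenticationMethods

        # Align the server with the clients (Basic was the SBS default in this case)
        Set-OutlookAnywhere -Identity "SERVER\Rpc (Default Web Site)" -ClientAuthenticationMethod Basic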

    Read the article

  • Unable to SSH into EC2 instance on Fedora 17

    - by abhishek
    I did the following steps, but I am not able to SSH to it (the same steps work fine on a Fedora 14 image); I am getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic). I created a new instance using the Fedora 17 Amazon community image (ami-2ea50247), created a usertest user and copied my SSH keys under /home/usertest/.ssh/, and I have SELINUX=disabled. Here is the debug info:

$ ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com
OpenSSH_5.2p1, OpenSSL 1.0.0b-fips 16 Nov 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-243-101-41.compute-1.amazonaws.com [54.243.101.41] port 22. debug1: Connection established. debug1: identity file /home/usertest/.ssh/identity type -1 debug1: identity file /home/usertest/.ssh/id_rsa type -1 debug3: Not a RSA1 key file /home/usertest/.ssh/id_dsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/usertest/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9 debug1: match: OpenSSH_5.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 131/256 debug2: bits set: 506/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug1: Host 'ec2-54-243-101-41.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/usertest/.ssh/known_hosts:17 debug2: bits set: 500/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/usertest/.ssh/identity ((nil)) debug2: key: /home/usertest/.ssh/id_rsa ((nil)) debug2: key: /home/usertest/.ssh/id_dsa (0x7f904b5ae260) debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug3: Trying to reverse map address 54.243.101.41. debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure.
Minor code may provide more information debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/usertest/.ssh/identity debug3: no such identity: /home/usertest/.ssh/identity debug1: Trying private key: /home/usertest/.ssh/id_rsa debug3: no such identity: /home/usertest/.ssh/id_rsa debug1: Offering public key: /home/usertest/.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
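For others hitting the same Permission denied on a fresh Fedora AMI: when keys are copied into a new user's home by hand, the usual culprits are ownership, permissions, and (even with SELINUX=disabled in the config) stale file contexts. A hedged checklist, run as root on the instance, using this question's paths:

        # sshd (with the default StrictModes) refuses keys unless these are tight
        chown -R usertest:usertest /home/usertest/.ssh
        chmod 700 /home/usertest/.ssh
        chmod 600 /home/usertest/.ssh/authorized_keys

        # cheap to run even if SELinux is believed to be off
        restorecon -R -v /home/usertest/.ssh

        # watch sshd's side of a failed attempt
        tail -f /var/log/secure

It is also worth confirming the public half of the key actually landed in /home/usertest/.ssh/authorized_keys rather than sitting there as a loose file.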

    Read the article

  • Moving data files failing

    - by Miles Hayler
    Trying to migrate data from C: to D: via the SBS console is failing. The wizard starts running but drops out in the first few seconds. I'll post the full logs, but the important lines appear to be as follows:

An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred. Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002) Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task --- ErrorCode:0

I've been googling for days with no luck. I have found that mscorlib is a component of .NET, and I've discovered multiple instances of the file in %windir%, %windir%\winsxs, and %windir%\Microsoft.net. Has anyone come across and fixed this one before?

--------------------------------------- [1516] 110315.190856.1105: Storage: Initializing...C:\Program Files\Windows Small Business Server\Bin\MoveData.exe [1516] 110315.190856.2875: Storage: Data Store to be moved: Exchange [1516] 110315.190856.5305: TaskScheduler: Exception System.IO.FileNotFoundException: [1516] 110315.190856.5605: Exception: --------------------------------------- An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred. Timestamp: 03/15/2011 19:08:56 Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002) Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) [1516] 110315.190856.5625: Storage: Exception Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException: [1516] 110315.190856.5635: Exception: --------------------------------------- An exception of type 'Type: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException, Common, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' has occurred. Timestamp: 03/15/2011 19:08:56 Message: Failed to find the task path Stack: at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus() --------------------------------------- An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred. Timestamp: 03/15/2011 19:08:56 Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002) Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) [1516] 110315.190856.5665: Storage: Error Retrieving Server Backup Task Status: ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException: Failed to find the task path ---> System.IO.FileNotFoundException: The system cannot find the file specified.
(Exception from HRESULT: 0x80070002) at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus() --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus() at Microsoft.WindowsServerSolutions.Storage.MoveData.Helper.get_ServerBackupTaskState() [1516] 110315.190857.6216: Storage: Backup Task State: Unknown [1516] 110315.190857.9347: Storage: Launching the Move Data Wizard! [1516] 110315.190857.9397: Wizard: Admin:QueryNextPage(null) = Storage.MoveDataWizard.GettingStartedPage [1516] 110315.190857.9417: Wizard: TOC Storage.MoveDataWizard.GettingStartedPage is on ExpectedPath [1516] 110315.190857.9577: Wizard: Storage.MoveDataWizard.GettingStartedPage entered [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.DiagnoseDataStorePage is on ExpectedPath [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.NewDataStoreLocationPage is on ExpectedPath [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null [1516] 110315.190857.9697: Wizard: ---------------------------------- [1516] 110315.190857.9697: Wizard: The pages visted: [1516] 110315.190857.9697: Wizard: Current Page := [TOC Storage.MoveDataWizard.GettingStartedPage] [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190857.9697: Wizard: Step 1 of 3 [1516] 110315.190907.0406: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190907.0416: Wizard: Storage.MoveDataWizard.GettingStartedPage exited with the button: Next [1516] 110315.190907.0416: WizardChainEngine Next Clicked: Going to page {0}.: Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190907.0496: Wizard: Storage.MoveDataWizard.DiagnoseDataStorePage entered [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null [1516] 110315.190907.0606: Wizard: ---------------------------------- [1516] 110315.190907.0606: Wizard: The pages visted: [1516] 110315.190907.0606: Wizard: [TOC] visited: TOC Storage.MoveDataWizard.GettingStartedPage [1516] 110315.190907.0606: Wizard: Current Page := [TOC Storage.MoveDataWizard.DiagnoseDataStorePage] [1516] 110315.190907.0616: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190907.0616: Wizard: Step 2 of 3 [19772] 110315.190907.0656: Storage: Starting System Diagnosis [19772] 110315.190907.0656: Storage: Getting Data Store Information [19772] 110315.190907.1086: Storage: Create the list of storage 
and DB directory path [19772] 110315.190907.1246: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks..ctor [19772] 110315.190907.1546: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize [19772] 110315.190907.1596: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize [19772] 110315.190907.1606: Messaging: Exchange install path: C:\Program Files\Microsoft\Exchange Server\bin [19772] 110315.190908.4157: Messaging: E12 Monad runspace created ID: Microsoft.PowerShell [19772] 110315.190908.4237: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190908.4287: Messaging: Executed management shell command: get-exchangeserver [19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize [19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo [19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190910.5719: Messaging: Executed management shell command: get-user -Identity "dmagroup.local\Administrator" [19772] 110315.190911.0870: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.0880: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.0880: Messaging: Executed management shell command: get-mailbox -Identity "d2ae2bf0-48a7-4ce9-9e72-bb3c765454ac" [19772] 110315.190911.1300: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.1310: Messaging: User Administrator is mail enabled and can use MessagingManagement to send mail. [19772] 110315.190911.1310: Messaging: Email address used for user: [email protected] [19772] 110315.190911.1440: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.1440: Messaging: Executed management shell command: get-group -Identity "Domain Admins" [19772] 110315.190911.1630: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.1640: Messaging: User Administrator is a member of Domain Admins and can use MessagingManagement to manage Exchange. 
[19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo [19772] 110315.190911.1640: Messaging: MessagingManagement enabled for Exchange management: True [19772] 110315.190911.1640: Messaging: MessagingManagement enabled for mail submission: True [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize [19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Tasks.TaskMoveExchangeData.CreateDataStoreDriveList [19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize [19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.1670: Messaging: Executed management shell command: get-storagegroup -Server "SERVER01" [19772] 110315.190911.2990: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize [19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.3070: Messaging: Executed management shell command: get-mailboxdatabase -Server "SERVER01" [19772] 110315.190911.4440: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize [19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.4520: Messaging: Executed management shell command: get-publicfolderdatabase -Server "SERVER01" [19772] 110315.190911.5240: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute [19772] 110315.190911.5510: Storage: Data Store Drive/s Details:Name=C:\,Size=12675712420 [19772] 110315.190911.5510: Storage: Data Store Size Details: Current Total Size=12675712420 Required Size=12675712420 [19772] 110315.190911.5510: Storage: MoveData Task can move the Data Store=True [19772] 110315.190911.8401: Storage: An error was encountered when performing system diagnosis : ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode) at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext() at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive) at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives() at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error) at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args) [1516] 110315.190912.0331: Storage: An error occured during the execution: System.Reflection.TargetInvocationException: Exception has been 
thrown by the target of an invocation. ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: Diagnosing the Data Store failed (see the inner exception) ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode) at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext() at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive) at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives() at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error) at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.backgroundWorker_RunWorkerCompleted(Object sender, RunWorkerCompletedEventArgs e) --- End of inner exception stack trace --- at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Delegate.DynamicInvokeImpl(Object[] args) at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbacks() at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardFrameView.Create() at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch() at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.LaunchMoveDataWizard() at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.Main(String[] args)
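The second stack trace shows the wizard dying inside a WMI query while enumerating drives (DriveUtil.IsDriveRemovable throws ManagementException: Not found), so before chasing mscorlib it may be worth checking whether WMI itself can enumerate the disks cleanly. A diagnostic sketch using standard Windows tooling (nothing SBS-specific assumed):

        # List logical disks the way the wizard's WMI query sees them;
        # an error or a missing drive here reproduces the wizard's failure
        Get-WmiObject -Class Win32_LogicalDisk | Select-Object DeviceID, DriveType, VolumeName

        # Check the WMI repository for corruption
        winmgmt /verifyrepository

The first trace (GetServerBackupTaskStatus: fail to find the task) suggests the scheduled server backup task is missing, which schtasks /query /fo LIST can help confirm.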

    Read the article

  • Why doesn't the People Pane in Outlook 2010 show appointments for individual contacts?

    - by Curtis
    Outlook 2010 will not show appointments in the People Pane. Under the Activities tab, if I go to All Items, the appointments do eventually show, but this takes a seriously long time. If I then click off the tab to look at another field and return to All Items, all the appointments are gone again. I need to be able to: open a contact and see when that contact has appointments; and open an appointment and see which contacts are attached to that appointment. It works well going from the appointment card to the contact, but going from the contact card to find the appointments has me completely frustrated. I have tried many things but cannot solve this problem. My setup is as follows: Exchange Server, Windows 7 Ultimate, indexing enabled, Cached Exchange Mode enabled. Help! This is the whole reason I installed Outlook 2010.

    Read the article

  • SSH Login to an EC2 instance failing with previously working keys...

    - by Matthew Savage
    We recently had an issue where I had rebooted our EC2 instance (Ubuntu 9.10 server, x86_64) and, due to an EC2 problem, the instance needed to be stopped and was down for a few days. Now that I have been able to bring the instance back online, I cannot connect over SSH using the keypair that previously worked. Unfortunately SSH is the only way into this server, and while I have another system running in its place, there are a number of things I would like to try and retrieve from the machine. Running SSH in verbose mode yields the following:

[Broc-MBP.local]: Broc:~/.ssh ? ssh -i ~/.ssh/EC2Keypair.pem -l ubuntu ec2-xxx.compute-1.amazonaws.com -vvv OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /Users/Broc/.ssh/config debug1: Reading configuration data /etc/ssh_config debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-xxx.compute-1.amazonaws.com [184.73.109.130] port 22. debug1: Connection established. debug3: Not a RSA1 key file /Users/Broc/.ssh/EC2Keypair.pem. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /Users/Broc/.ssh/EC2Keypair.pem type -1 debug3: Not a RSA1 key file /Users/Broc/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /Users/Broc/.ssh/id_rsa type 1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2 debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 123/256 debug2: bits set: 500/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /Users/Broc/.ssh/known_hosts debug3: check_host_in_hostfile: match line 106 debug3: check_host_in_hostfile: filename /Users/Broc/.ssh/known_hosts debug3: check_host_in_hostfile: match line 106 debug1: Host 'ec2-xxx.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /Users/Broc/.ssh/known_hosts:106 debug2: bits set: 521/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/Broc/.ssh/id_rsa (0x100125f70) debug2: key: /Users/Broc/.ssh/EC2Keypair.pem (0x0) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /Users/Broc/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/Broc/.ssh/EC2Keypair.pem debug1: read PEM private key done: type RSA debug3: sign_and_send_pubkey debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). [Broc-MBP.local]: Broc:~/.ssh ?

So, right now I'm really at a loss and not sure what to do. While I've already got another system taking the place of this one, I'd really like to have access back :|
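One cheap check before writing the key off: derive the public half from the local .pem and compare it with what should be in ~ubuntu/.ssh/authorized_keys on the instance (depending on how the instance is backed, the stop may have rebuilt the root volume and dropped the injected key). A sketch with stock OpenSSH tools:

        # Print the public key this keypair file corresponds to; this is what
        # must appear in ~ubuntu/.ssh/authorized_keys on the instance
        ssh-keygen -y -f ~/.ssh/EC2Keypair.pem

        # Offer only this identity, so a mismatched id_rsa can't get in the way
        ssh -i ~/.ssh/EC2Keypair.pem -o IdentitiesOnly=yes -l ubuntu ec2-xxx.compute-1.amazonaws.com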

    Read the article

  • sendmail rules for filtering spam

    - by user71061
    Hi! Can anyone help me with constructing sendmail rules for limiting spam? Assuming that the name of my domain is my.domain.com, I want to use the following rules:

1. If BOTH the sender and recipient address are from my.domain.com, the message should be rejected (the sendmail server only relays messages between my internal Exchange server and the outside world, so sending messages between users from my.domain.com always occurs on the Exchange server and never on the sendmail server).
2. If the recipient list contains AT LEAST ONE invalid address, the whole message should be rejected (even for the valid recipient addresses).
3. If the sending server uses a HELO message with a bogus domain name (other than the domain of that server), the message should be rejected.
4. Any server attempting to send mail to a dedicated address (e.g. [email protected]) should be automatically blacklisted (a partial access-map sketch follows below).

Any other suggested rules?
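Not a full answer, but the reject half of rule 4 can be expressed with the stock access map; the automatic blacklisting, and rules 1-3, need custom rulesets or a milter and are not covered by this sketch. Assuming FEATURE(`access_db') and FEATURE(`blacklist_recipients') are enabled in sendmail.mc, and using a hypothetical trap address:

        # /etc/mail/access -- rebuild with: makemap hash /etc/mail/access < /etc/mail/access
        To:spamtrap@my.domain.com       ERROR:"550 Address blacklisted"

        # hosts that hit the trap can then be added here (by hand or from a log watcher)
        Connect:spammer.example.net     REJECT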

    Read the article

< Previous Page | 521 522 523 524 525 526 527 528 529 530 531 532  | Next Page >