Search Results

Search found 3479 results on 140 pages for 'sequence diagram'.


  • Searching for tasks with code – Executables and Event Handlers

    Searching packages or just enumerating through all tasks is not quite as straightforward as it may first appear, mainly because of the way you can nest tasks within other containers. You can see this illustrated in the sample package below where I have used several sequence containers and loops. To complicate this further, all container types, including packages and tasks, can have event handlers which can then support the full range of nested containers again. Towards the lower right, the task called SQL In FEL also has an event handler (not shown), within which is another Execute SQL Task, so that makes a total of 6 Execute SQL Tasks spread across the package. In my previous post about adding a property expression I kept it simple and just looked at tasks at the package level, but what if you wanted to find any or all tasks in a package? For this post I've written a console program that will search a package looking at all tasks no matter how deeply nested, and check to see if the name starts with "SQL". When it finds a matching task it writes out the hierarchy by name for that task, starting with the package and working down to the task itself. The output for our sample package is shown below; note it has found all 6 tasks, including the one on the OnPreExecute event of the SQL In FEL task: TaskSearch v1.0.0.0 (1.0.0.0) Copyright (C) 2009 Konesans Ltd Processing File - C:\Projects\Alpha\Packages\MyPackage.dtsx MyPackage\FOR Counter Loop\SQL In Counter Loop MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL\OnPreExecute\SQL On Pre Execute for FEL SQL Task MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SEQ Nested Lvl 2\SQL In Nested Lvl 2 MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #1 MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #2 6 matching tasks found in package. The full project and code are available for download below, but first we can walk through the project to highlight the most important sections of code. This code has been abbreviated for this description, but is complete in the download. First of all we load the package, and then start by looking at the Executables for the package. // Load the package file Application application = new Application(); using (Package package = application.LoadPackage(filename, null)) { int matchCount = 0; // Look in the package's executables ProcessExecutables(package.Executables, ref matchCount); ... // // ... // Write out final count Console.WriteLine("{0} matching tasks found in package.", matchCount); } The ProcessExecutables method is a key method, as an executable could be described as the highest level of working functionality or container. There are several types of executables, such as tasks, sequence containers and loops. To know what to do next we need to work out what type of executable we are dealing with, as the abbreviated version of the method below shows. private static void ProcessExecutables(Executables executables, ref int matchCount) { foreach (Executable executable in executables) { TaskHost taskHost = executable as TaskHost; if (taskHost != null) { ProcessTaskHost(taskHost, ref matchCount); ProcessEventHandlers(taskHost.EventHandlers, ref matchCount); continue; } ... // // ... 
ForEachLoop forEachLoop = executable as ForEachLoop; if (forEachLoop != null) { ProcessExecutables(forEachLoop.Executables, ref matchCount); ProcessEventHandlers(forEachLoop.EventHandlers, ref matchCount); continue; } } } As you can see, if the executable we find is a task we then call out to our ProcessTaskHost method. As with all of our executables, a task can have event handlers which themselves contain more executables such as tasks and loops, so we also make a call out to our ProcessEventHandlers method. The other types of executables such as loops can also have event handlers as well as executables. As shown with the example for the ForEachLoop, we call the same ProcessExecutables and ProcessEventHandlers methods again to drill down into the hierarchy of objects that the package may contain. This code needs to explicitly check for each type of executable (TaskHost, Sequence, ForLoop and ForEachLoop) because, whilst they all have an Executables property, this is not from a common base class or interface. This example just finds a task by its name, so ProcessTaskHost really just does that. We also get the hierarchy of objects so we can write it out for information; obviously you can adapt this method to do something more interesting, such as adding a property expression. private static void ProcessTaskHost(TaskHost taskHost, ref int matchCount) { if (taskHost == null) { return; } // Check if the task matches our match name if (taskHost.Name.StartsWith(TaskNameFilter, StringComparison.OrdinalIgnoreCase)) { // Build up the full object hierarchy of the task // so we can write it out for information StringBuilder path = new StringBuilder(); DtsContainer container = taskHost; while (container != null) { path.Insert(0, container.Name); container = container.Parent; if (container != null) { path.Insert(0, "\\"); } } // Write the task path // e.g. Package\Container\Event\Task Console.WriteLine(path); Console.WriteLine(); // Increment match counter for info matchCount++; } } Just for completeness, the other processing method we covered above is for event handlers, but really that just calls back to the executables. This same method is called in our main package method, but it was omitted for brevity here. private static void ProcessEventHandlers(DtsEventHandlers eventHandlers, ref int matchCount) { foreach (DtsEventHandler eventHandler in eventHandlers) { ProcessExecutables(eventHandler.Executables, ref matchCount); } } As hopefully the code demonstrates, executables (Microsoft.SqlServer.Dts.Runtime.Executable) are the workers, but within them you can nest more executables (except for tasks). Executables themselves can have event handlers which can in turn hold more executables. I have tried to illustrate these relationships in the following diagram. Download Sample code project TaskSearch.zip (11KB)
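    The post mentions adapting ProcessTaskHost to do something more useful, such as adding a property expression. A minimal sketch of what that adaptation might look like is shown below; it is illustrative only - the property name, variable name and save call are assumptions, and the property being set must exist on the task type you are targeting (SqlStatementSource is shown because it applies to the Execute SQL Task).

        // Requires: using Microsoft.SqlServer.Dts.Runtime;
        // Illustrative sketch: instead of just counting matches, attach a property
        // expression to each matching task. Property and variable names are assumptions.
        private static void AddExpressionToTaskHost(TaskHost taskHost, ref int matchCount)
        {
            if (taskHost == null) { return; }

            if (taskHost.Name.StartsWith(TaskNameFilter, StringComparison.OrdinalIgnoreCase))
            {
                // TaskHost implements IDTSPropertiesProvider, so expressions are set by property name
                taskHost.SetExpression("SqlStatementSource", "@[User::DynamicSql]");
                matchCount++;
            }
        }

    After processing, the modified package would then need to be saved, for example with application.SaveToXml(filename, package, null).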

    Read the article

  • Silverlight Cream for November 08, 2011 -- #1165

    - by Dave Campbell
    In this Issue: Brian Noyes, Michael Crump, WindowsPhoneGeek, Erno de Weerd, Jesse Liberty, Derik Whittaker, Sumit Dutta, Asim Sajjad, Dhananjay Kumar, Kunal Chowdhury, and Beth Massi. Above the Fold: Silverlight: "Working with Prism 4 Part 1: Getting Started" Brian Noyes WP7: "Getting Started with the Coding4Fun toolkit Tile Control" WindowsPhoneGeek LightSwitch: "How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch" Beth Massi Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com: Working with Prism 4 Part 1: Getting Started Brian Noyes has a series starting at SilverlightShow about Prism 4 ... this is the first one, so a good time to jump in and pick up on an intro and basic info about Prism plus building your first Prism app. 10 Laps around Silverlight 5 (Part 5 of 10) Michael Crump has Part 5 of his 10-part Silverlight 5 investigation up at SilverlightShow talking about all the various text features added in Silverlight 5 Beta: Text Tracking and Leading, Linked and MultiColumn, OpenType, etc. Getting Started with the Coding4Fun toolkit Tile Control WindowsPhoneGeek takes on the Tile control from the Coding4Fun toolkit... as usual, great tutorial... diagrams, code, explanation Using AppHarbor, Bitbucket and Mercurial with ASP.NET and Silverlight – Part 2 CouchDB, Cloudant and Hammock Erno de Weerd has Part 2 of his trilogy and he's trying to beat David Anson for the long title record :) ... in this episode, he's adding in cloud storage to the mix in a 35-step tutorial. Background Audio Jesse Liberty's talking about background Audio... and no not the Muzak in the elevator (do they still have that?) ... he's talking about the WP7.1 BackgroundAudioPlayer Using the ToggleSwitch in WinRT/Metro (for C#) Derik Whittaker shows off the ToggleSwitch for WinRT/Metro... not a lot to be said about it, but he says it all :) Part 19 - Windows Phone 7 - Access Phone Contacts Sumit Dutta has Part 19! of his WP7 series up... talking today about getting a phone number from the directory using the PhoneNumberChooserTask ContextMenu using MVVM Asim Sajjad shows how to make the Context Menu ViewModel friendly in this short tutorial. Code to make call in Windows Phone 7 Dhananjay Kumar's latest WP7 post is explaining how to make a call programmatically using the PhoneCallTask launcher. Silverlight Page Navigation Framework - Basic Concept Kunal Chowdhury has a 3-part tutorial series on Silverlight Navigation up. This is the first in the series, and he hits the basics... what constitutes a Page, and how to get started with the navigation framework. How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch Beth Massi's latest LightSwitch post is on using the Data Designer to easily create and model database tables... during development this is in SQL Express, but it can be deployed to whichever SQL Server database you like.

    Read the article

  • Identity Propagation across Web and Web Service - 11g

    - by Prakash Yamuna
    I was on a customer call recently and this topic came up. In fact, since this topic seems to come up fairly frequently, I thought I would describe the recommended model for doing SSO for Web Apps and then doing Identity Propagation across the back-end web services. The image below shows a typical flow: Here is a more detailed drill down of what happens at each step of the flow (the number in red in the diagram maps to the description below of the behind-the-scenes processing that happens in the stack). [1] The Web App is protected with OAM and so the typical SSO scenario is applicable. The Web App URL is protected in OAM. The Web Gate intercepts the request from the Browser to the Web App - if there is an OAM (SSO) token - then the Web Gate validates the OAM token. If there is no SSO token - then the user is directed to the login page - the user enters credentials, the user is authenticated and an OAM token is created for that browser session. [2] Once the Web Gate validates the OAM token - the token is propagated to the WLS Server where the Web App is running. You need to ensure that you have configured the OAM Identity Asserter in the WebLogic domain. If the OAM Identity Asserter is configured, this will end up creating a JAAS Subject. Details can be found at: http://docs.oracle.com/cd/E23943_01/doc.1111/e15478/webgate.htm#CACIAEDJ [3] The Web Service client (in the Web App) is secured with one of the OWSM SAML Client Policies. If secured in this fashion, the OWSM Agent creates a SAML Token from the JAAS Subject (created in [2] by the OAM Identity Asserter) and injects it into the SOAP message. Steps for securing a JEE JAX-WS Proxy Client using OWSM Policies are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#BABBHHHC Note: As shown in the diagram - instead of building a JEE Web App - you can also use WebCenter and build portlets. If you are using WebCenter then you can follow the same architecture. Only the steps for securing WebCenter Portlets with OWSM are different: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_wss.htm#CIHEBAHB [4] The SOA Composite App is secured with OWSM SAML Service policy. The OWSM Agent intercepts the incoming SOAP request, validates the SAML token and creates a JAAS Subject. [5] When the SOA Composite App tries to invoke the OSB Proxy Service, the SOA Composite App "Reference" is secured with OWSM SAML Client Policy. Here again the OWSM Agent will create a new SAML Token from the JAAS Subject created in [4] by the OWSM Agent and inject it into the SOAP message. 
    Steps for securing SOA Composite Apps (Service, Reference, Component) are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#CEGDGIHD [6] When the request reaches the OSB Proxy Service, the Proxy Service is again secured with the OWSM SAML Token Service Policy. So the same steps are performed as in [4]. The end result is a JAAS Subject. [7] When OSB needs to invoke the Business App Web Service, it goes through the OSB Business Service. The OSB Business Service is secured with OWSM SAML Client Policy and step [5] is repeated. Steps for securing OSB Proxy Services and OSB Business Services are documented at: http://docs.oracle.com/cd/E23943_01/admin.1111/e15867/proxy_services.htm#OSBAG1097 [8] Finally, when the message reaches the Business App Web Service, this service is protected by OWSM SAML Service policy and step [4] is repeated by the OWSM Agent. Steps for securing WebLogic Web Services, ADF Web Services, etc. are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#CEGCJDIF In the above description, for purposes of brevity, I have not described which OWSM SAML policies one should use; OWSM ships with a number of SAML policies, and I briefly described some of the trade-offs involved with the various SAML policies here. The diagram above and the accompanying description of what is happening in each step of the flow assume you are using "SAML SV" or "SAML Bearer" based policies without an STS.

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL is stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format. PE files .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, had to execute native code to start the CLR to read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It still is part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other datastructures in the file, which I will generally be glossing over in this initial post. Structure of a PE file Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset. PE headers As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly. I've highlighted the relevant bytes that I will refer to in this post: Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram. The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80). 
    Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies if this is a GUI or console app - 0x02 for a GUI app, 0x03 for a console app. Data directories & section headers After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address table are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories come the section headers; one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections: .text: base RVA 0x2000, file offset 0x200 .rsrc: base RVA 0x4000, file offset 0xa00 .reloc: base RVA 0x6000, file offset 0x1000 The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page in the right-click file properties page, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub. What about the CLR? As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly. Conclusion To summarize, the important data in the first 512 bytes of a file is: DOS header. This contains a pointer to the PE signature. DOS stub, which we'll be looking at in a later post. PE signature PE file header (aka COFF header). This specifies whether the file is an exe or a dll. PE standard fields. This specifies whether the file is AnyCPU/32bit or 64bit. PE NT-specific fields. This specifies what type of app this is, if it is an app. Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section. Section headers. These are used to map between RVA and file offset. 
The important one is .text, which is where all the CLR data is stored. In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
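    To make the offsets above concrete, here is a small C# sketch that reads the fields discussed in this post (the MZ magic, the PE signature offset stored at 0x3C, the Characteristics DLL bit, the PE32/PE32+ magic and the Subsystem value) straight from a file with a BinaryReader. It is a simplified illustration with error handling omitted, not a full PE parser.

        // Minimal sketch: dump the PE header fields discussed above.
        using System;
        using System.IO;

        static class PeHeaderDump
        {
            static void Main(string[] args)
            {
                using (var reader = new BinaryReader(File.OpenRead(args[0])))
                {
                    // DOS header: 'MZ' magic, then the PE signature offset stored at 0x3C
                    if (reader.ReadUInt16() != 0x5A4D) throw new BadImageFormatException("No MZ magic");
                    reader.BaseStream.Seek(0x3C, SeekOrigin.Begin);
                    uint peOffset = reader.ReadUInt32();

                    // PE signature 'PE\0\0'
                    reader.BaseStream.Seek(peOffset, SeekOrigin.Begin);
                    if (reader.ReadUInt32() != 0x00004550) throw new BadImageFormatException("No PE signature");

                    // COFF (PE file) header: the last two bytes are the Characteristics flags
                    reader.ReadUInt16();                                  // Machine
                    ushort sections = reader.ReadUInt16();                // NumberOfSections
                    reader.BaseStream.Seek(12, SeekOrigin.Current);       // timestamp, symbol table info
                    reader.ReadUInt16();                                  // SizeOfOptionalHeader
                    bool isDll = (reader.ReadUInt16() & 0x2000) != 0;     // Characteristics

                    // PE standard fields: magic 0x10b = PE32 (x86/AnyCPU), 0x20b = PE32+ (x64).
                    // The Subsystem field sits 68 bytes into the optional header in both formats.
                    long optionalStart = reader.BaseStream.Position;
                    ushort magic = reader.ReadUInt16();
                    reader.BaseStream.Seek(optionalStart + 68, SeekOrigin.Begin);
                    ushort subsystem = reader.ReadUInt16();               // 0x02 = GUI, 0x03 = console

                    Console.WriteLine("Sections: {0}, DLL: {1}, PE32+: {2}, Subsystem: 0x{3:X2}",
                        sections, isDll, magic == 0x20b, subsystem);
                }
            }
        }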

    Read the article

  • how to develop a common pool of functions?

    - by user975234
    I need to develop an application which runs on the web as well as on a mobile platform. I want to make something like a directory where I hold my common functions with respect to the web and mobile platforms. This is the diagram which describes exactly what I want: I just want to know how I would implement this. If you can help me with the technical details that would be great! P.S: I am new to web apps and stuff!

    Read the article

  • Undocumented Query Plans: Equality Comparisons

    - by Paul White
    The diagram below shows two data sets, with differences highlighted: To find changed rows using TSQL, we might write a query like this: The logic is clear: join rows from the two sets together on the primary key column, and return rows where a change has occurred in one or more data columns.  Unfortunately, this query only finds one of the expected four rows: The problem, of course, is that our query does not correctly handle NULLs.  The ‘not equal to’ operators <> and != do not evaluate...(read more)

    Read the article

  • 3rd Party Tools: dbForge Studio for SQL Server

    - by Greg Low
    I've been taking a look at some of the 3rd party tools for SQL Server. Today, I looked at DBForge Studio for SQL Server from the team at DevArt. Installation was smooth. I did find it odd that it defaults to SQL authentication, not to Windows but either works fine. I like the way they have followed the SQL Server Management Studio visual layout. That will make the product familiar to existing SQL Server Management Studio users. I was keen to see what the database diagram tools are like. I found that the layouts generated where quite good, and certainly superior to the built-in SQL Server ones in SSMS. I didn't find any easy way to just add all tables to the diagram though. (That might just be me). One thing I did like was that it doesn't get confused when you have role playing dimensions. Multiple foreign key relationships between two tables display sensibly, unlike with the standard SQL Server version. It was pleasing to see a printing option in the diagramming tool. I found the database comparison tool worked quite well. There are a few UI things that surprised me (like when you add a new connection to a database, it doesn't select the one you just added by default) but generally it just worked as advertised, and the code that was generated looked ok. I used the SQL query editor and found the code formatting to be quite fast and while I didn't mind the style that it used by default, it wasn't obvious to me how to change the format. In Tools/Options I found things that talked about Profiles but I wasn't sure if that's what I needed. The help file pointed me in the right direction and I created a new profile. It's a bit odd that when you create a new profile, that it doesn't put you straight into editing the profile. At first I didn't know what I'd done. But as soon as I chose to edit it, I found that a very good range of options were available. When entering SQL code, the code completion options are quick but even though they are quite complete, one of the real challenges is in making them useful. Note in the following that while the options shown are correct, none are actually helpful: The Query Profiler seemed to work quite well. I keep wondering when the version supplied with SQL Server will ever have options like finding the most expensive operators, etc. Now that it's deprecated, perhaps never but it's great to see the third party options like this one and like SQL Sentry's Plan Explorer having this functionality. I didn't do much with the reporting options as I use SQL Server Reporting Services. Overall, I was quite impressed with this product and given they have a free trial available, I think it's worth your time taking a look at it.

    Read the article

  • Algorithm to find average position

    - by Simran kaur
    In the given diagram, I have the extreme left and right points, that is -2 and 4 in this case. So, obviously, I can calculate the width, which is 6 in this case. What we know: the number of partitions (3 in this case), and the partition number at any point, i.e. which one is the 1st, 2nd or 3rd partition (numbered starting from the left). What I want: the position of the purple line drawn, which is the position of the average of a particular partition. So, basically, I just want a generalized formula to calculate the position of the average at any point.
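    A minimal sketch of one possible generalization, assuming the partitions all have equal width and the purple line marks the midpoint (average position) of a partition counted 1-based from the left:

        // Sketch: midpoint of partition k when [left, right] is split into n equal partitions.
        static double PartitionMidpoint(double left, double right, int n, int k)
        {
            double width = (right - left) / n;   // e.g. (4 - (-2)) / 3 = 2
            return left + (k - 0.5) * width;     // k = 1, 2, 3  ->  -1, 1, 3
        }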

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed.  To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
    It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net. Special Thanks I would like to thank those who helped me figure out how Azure Connect works: Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/ Michael Wood - http://www.mvwood.com Glenn Block - http://www.codebetter.com/glennblock Yves Goeleven - http://cloudshaper.wordpress.com/ Sandrino Di Mattia - http://fabriccontroller.net/ Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry: Sequence ID Username Start Time End Time Request Type A request type can be one of the following categories: Calculations, including the default calculation as well as both server and client side calculations Data loads, including data imports as well as data loaded using a load rule Data clears as well as outline resets Locking and sending data from SmartView and the Spreadsheet Add-In.  Changes from Planning web forms are also tracked since a lock and send operation occurs during this process. You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries. Enabling Transaction Logging Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax: TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example: TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance. Configuring Transaction Log Replay Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases. Replaying the Transaction Log and Transaction Log Security Considerations To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. 
Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user’s security settings cannot be used. Removing Transaction Logs and Archived Replay Data Load and Rules Files Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file. Partitioned Database Considerations For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases. References For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article

  • WebLogic Server 11gR1 Interactive Quick Reference

    - by JuergenKress
    The WebLogic Server 11gR1 Administration interactive quick reference is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner. Each interactive type presents data that may be available in the documentation (in the case of Oracle products), but presents it in a way that is more intuitive and useful to a user of Oracle products because it displays data the way it is used in a real world, best practice scenario. For example, the architectural diagram interactive type provides an image of an architectural diagram that is typically larger than a single slide or paper. The image is scrollable and provides zoom capabilities to easily and clearly view any part of the image. The image itself contains a hotspot map that you can click to get more information about a feature, including reference links to the documentation in question. Linking the visual image of the component and where it fits in the overall architecture of the product, or technology in use, to the technical explanation and how-to materials related to that component is something not offered by the documentation. In a future release, the poster will also enable you to drill down even further into the individual subsystems in nested diagrams to look at the details of that subsystem. In short, the interactive posters are good at showing you the big picture, then quickly and easily getting you to the detailed information you need. In an instant, you can see where a technical component fits into an overall architecture, and zero in on the nitty-gritty details that show you how to do it yourself. Note: This is a first initial release with more features in development. Currently known information: Only Firefox 8.0 and higher is known to work with this product. This product may work with Chrome and Safari browsers, but is known to have issues in Internet Explorer at this time. Smartphones, such as iPads and iPhones, are partially supported WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. BlogTwitterLinkedInMixForumWiki Technorati Tags: WebLogic server quick reference,weblogic overview,weblogic 12c,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • How to process large files in NetLogo? [closed]

    - by user65597
    I am running into problems in NetLogo with large *.csv / *.txt files. The documents can consist of about 1 million data sets and I need to read them (to eventually create a diagram based on the data). With the most straightforward source code, my program needs about 2 minutes to process these files. How should I approach reading such large data files faster in NetLogo? Is NetLogo even suitable for such tasks (as it seems to be designed more for teaching and learning)?

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspxSoftware is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I’ve created a list of user stories that I think should be included and sometimes are just assumed. Note this is a work in progress, so I’m looking for your feedback. I’m curious what you would add or change in my list. · As a DBA I am working with a Normalized data model that reflects an agreed upon logical model for the system · As a DBA I am using consistent names for my fields which match the naming standards of my organization · As a DBA my model supports simple CRUD operations against all the entities · As an Application Architect the UI has been validated against the Business requirements and a complete set of user story’s have been created · As an Application Architect the database model has been validated against the UI · As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software’s expected lifecycle · As an Application Architect we have a Deployment diagram that describes how the application components will be deployed · As an Application Architect we have a navigation diagram that describes the typical application flow · As an Application Architect we have identified points of interaction which describes how the UI interacts with the services and the data storage · As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions · As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve it’s business goals · As a Project Manager all team members understand the goals of each release and iteration as they are planned · As a Project Manager all team members understand their role and the roles of others · As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing · As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system · As a Test/QA Analyst we have identified the matrix of supported clients used to access the system including the likely browsers, mobile devices and other interfaces to work with the application · As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create them · As a Test/QA Analyst we have access to a Test environment that is isolated from production and staging environments · As a Test/QA Analyst there we have a way to reset the environment so we can rerun tests when a new version of the software becomes available · As a Test/QA Analyst I am able to automate portions of the test process Thoughts? -mike

    Read the article

  • Designing Efficient SQL: A Visual Approach

    Sometimes, it is a great idea to push away the keyboard when tackling the problems of an ill-performing, complex query, and take up pencil and paper instead. By drawing a diagram to show all the tables involved, the joins, the volume of data involved, and the indexes, you'll more easily see the relative efficiency of the possible paths that your query could take through the tables.

    Read the article

  • Using Drupal to build a directory listing? [on hold]

    - by Jim
    I am trying to create a form inside Drupal that will allow me to create a directory similar to this diagram: http://i.imgur.com/EtChBbG.jpg I tried looking into HTML tables but it is too basic for what I'm trying to do. How do I create a directory that can archive data in an alphabetical ordering? It also has to be able to sort by letters and other categories. Does anyone have an idea of how I should go about doing this? Thanks!

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing Search Platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search into your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to a user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches on the other hand provide a comprehensive list of product properties, grouped together by similarity to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search from a business perspective is also to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing and profiling data, you can still achieve the same result by recording search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-Commerce environments but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca, however you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; in this specific example I have used the SolrNet C# library to interface to the Java platform. Also I am not going to talk about Solr configuration or indexing – but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP Post to send it to the search engine for indexing. 
Essentially a sequance component is a step in a serial workflow used to call a data repository (which in most cases is usually the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library as Sequence Components will need to be strongly named to be deployed. Once you are inside of your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and the Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequanceComponent and overide the ExecuteQueryMethod. Your screen will then look something similar ot this: As all we are doing on this component is conducting a search we are only interested in the ExecuteQuery method. This method accepts three arguments, queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regulary accessed information, and the response object is the object which we will return the result of our search upon. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner-workings of actually making a SOLR call, I'll simply provide the sample code here. I would highly recommend however looking at the SolrNet wiki as they have some great explinations of how the API works. What you will find however is that there are some further extensions required when attempting to integrate a custom search provider. Firstly you out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text Search with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria and within this you need to detail the properties you will require to allow you to submit as parameters to the SOLR search API. My exmaple looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>();   }     public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase{ get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class to derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder and is simply used to construct an instance of the CommerceQueryOperation you have just created and expose the properties you want set. 
My Message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overidden ExecuteQuery method in the SequenceComponent to the CommerceCatalogSolrSearch operation you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now you have all of your search parameters present, you can go off an call the external search platform API. You will of-course get proprietry objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translatators. Translators are another separate class, this time derived inheriting the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator . As you can imaginge this interface is specific for the conversion of the object TO a CommerceEntity, you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interace you will get a single translate method which has a source onkect, destination CommerceEntity, and a collection of properties as arguments. For simplicity sake in this example I have hard-coded the mappings, however best practice would dictate you map the objects using your metadatadefintions.xml file . 
Once complete, your translator would look something like the following:

public class SolrEntityTranslator : IToCommerceEntityTranslator
{
    #region IToCommerceEntityTranslator Members

    public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
    {
        if (source.GetType().Equals(typeof(SearchProduct)))
        {
            var searchResult = (SearchProduct)source;

            destinationCommerceEntity.Id = searchResult.ProductId;
            destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
            destinationCommerceEntity.ModelName = "Product";
        }
    }

    #endregion
}

Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

foreach (SearchProduct result in matchingProducts)
{
    var destinationEntity = new CommerceEntity(_returnModelName);

    Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
    response.CommerceEntities.Add(destinationEntity);
}

In SOLR I actually have two objects being returned – a product, and a collection of facets – so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from SOLR is passed into the Translator helper class separately. When all of this is pieced together you have successfully completed the extensibility point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next you need to add a reference to your new assembly in the Channel.Config configuration file, replacing that of the existing SQL full-text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly, add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components to the OperationSequence node of your Channel.Config file (obviously keeping the search component first). Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity in return:

public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
{
    var products = new List<Product>();
    var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

    query.SearchCriteria.PageIndex = recordIndex;
    query.SearchCriteria.PageSize = recordsPerPage;
    query.SearchCriteria.SearchPhrase = searchPhrase;
    query.SearchCriteria.FacetQueries = facetQueries;

    totalItemCount = 0;
    CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
    var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;
    // No results: return the empty result
    if (queryResponse == null || queryResponse.CommerceEntities.Count == 0)
        return new KeyValuePair<FacetCollection, List<Product>>();

    totalItemCount = (int)queryResponse.TotalItemCount;

    // Prepare a multi-operation to retrieve the product variants
    var multiOperation = new CommerceMultiOperation();

    // Add products to results
    foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
    {
        var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
        productQuery.SearchCriteria.Model.Id = product.Id;
        productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

        var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);

        productQuery.RelatedOperations.Add(variantQuery);

        multiOperation.Add(productQuery);
    }

    CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
    foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
    {
        if (queryOpResponse.CommerceEntities.Count() > 0)
            products.Add(queryOpResponse.CommerceEntities[0]);
    }

    // Get facet collection
    FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

    return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
}

And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. This logic stands for any extensibility within Commerce Server that requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example on other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!
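To make it clearer how the classes above hang together at runtime, here is a minimal sketch of what the sequence component itself might look like. Treat it as illustrative only: the class name SolrCatalogSearchComponent and the SolrSearchGateway helper are hypothetical, the exact parameter types of the ExecuteQuery override are assumptions based on the description earlier in this post, and the actual SOLR call is stubbed out rather than shown.

using System.Collections.Generic;
// Namespaces below are assumed from the assemblies referenced earlier in this post
using Microsoft.Commerce.Contracts.Messages;
using Microsoft.Commerce.Providers.Components;

public class SolrCatalogSearchComponent : OperationSequenceComponent
{
    // Invoked by the Commerce Foundation when a CommerceQueryOperation reaches this component.
    // The parameter types here are assumptions based on the description above.
    public override void ExecuteQuery(CommerceQueryOperation queryOperation,
                                      OperationCacheDictionary operationCache,
                                      CommerceQueryOperationResponse response)
    {
        // Pull the strongly typed criteria out of the incoming operation (see TryGetSearchCriteria above)
        CommerceCatalogSolrSearch criteria = TryGetSearchCriteria(queryOperation);

        // Call the external search platform; SolrSearchGateway is a hypothetical wrapper around SolrNet
        IEnumerable<SearchProduct> matchingProducts = SolrSearchGateway.Search(
            criteria.SearchPhrase,
            criteria.FacetQueries,
            criteria.FirstItemIndex,
            criteria.PageSize);

        // Map each proprietary result onto a CommerceEntity via the translator helper, as shown earlier
        foreach (SearchProduct result in matchingProducts)
        {
            var destinationEntity = new CommerceEntity("Product");
            Translator.ToCommerceEntity(result, destinationEntity, queryOperation.Model.Properties);
            response.CommerceEntities.Add(destinationEntity);
        }
    }
}

Once a component along these lines is registered in Channel.Config in place of the SQL full-text component, every catalog CommerceQueryOperation whose search criteria is a CommerceCatalogSolrSearch is answered from SOLR instead of SQL Server, while callers keep using the standard Commerce Foundation API.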

    Read the article

  • CodePlex Daily Summary for Wednesday, November 09, 2011

    CodePlex Daily Summary for Wednesday, November 09, 2011Popular ReleasesMapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What’s New in 4.8.6 (Final release)A few minor issues have been fixed What’s New in 4.8.5 (Beta release)Assign projection tool. (Sergei Leschinsky) Projection dialects. (Sergei Leschinsky) Projections database converted to SQLite format. (Sergei Leschinsky) Basic code for database support - will be developed further (ShapefileDataClient class, IDataProvider interface). (Sergei Leschinsky) 'Export shapefile to database' tool. (Sergei Leschinsky) Made the GEOS library static. geos.dl...NewLife XCode ??????: XCode v8.2.2011.1107、XCoder v4.5.2011.1108: v8.2.2011.1107 ?IEntityOperate.Create?Entity.CreateInstance??????forEdit,????????(FindByKeyForEdit)???,???false ??????Entity.CreateInstance,????forEdit,???????????????????? v8.2.2011.1103 ??MS????,??MaxMin??(????????)、NotIn??(????)、?Top??(??NotIn)、RowNumber??(?????) v8.2.2011.1101 SqlServer?????????DataPath,?????????????????????? Oracle?????????DllPath,????OCI??,???????????ORACLE_HOME?? Oracle?????XCode.Oracle.IsUseOwner,???????????Ow...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. 
Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedAcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root nodeKinect Paint: Kinect Paint 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Async Executor: 1.0: Source code of the AsyncExecutorMedia Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Fix to show the season-specials.tbn when selecting an episode from season 00. Before, MC would try & load season00.tbn Fix for issue #197 - new show added by 'Manually Add Path' not being picked up. Also made non-visible the same thing in Root Folders...Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: UI: Flexible layout to handle both list and table-like template layouts. Theming - Visual choice of themes: Deliver some templates on installation, export/import functionality, preview. Allow site owners to choose default list sort order for the forums. Forum latest activity. Visit the project Roadmap for more details. Webdeploy packages sha1 checksum: e6bb913e591543ab292a753d1a16cdb779488c10?????????? - ????????: All-In-One Code Framework ??? 
2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Microsoft Media Platform: Player Framework: MMP Player Framework 2.6 (Silverlight and WP7): Additional DownloadsSMFv2.6 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...New ProjectsBBCode Orchard module: Makes the usage of various BBCodes available by adding a new text flavor (bbcode)Binary to Bytes: Covert a binary file to a .byte declaration for assembler systems.BrainfuckOS: -- English -- An operating system developed with Cosmos in C# and runnning special brainfuck-code. -- Deutsch -- Ein mit Cosmos in C# entwickeltes Betriebssystem, welches spezialisierten Brainfuck-Code ausführt.Cardiff WPUG: This is a Windows Phone 7 app, written using C# and using the MVVMLight framework. This app is intended as a training app for anyone interested in programming for Windows Phone and looking for a place to start. The app is used to show upcoming user group meetings, showcase members apps and show the latest news from the Windows Phone User Group in the UK. I'll be keeping this app up to date with the latest framework and will also be submitting versions to the marketplace (the current ve...Chapters2Markers: A simple utility to convert from the chapters format used by mkvextract to the marker format used by Expression Encoder.Confirm Leave Orchard Module: Add a content part that when attached to types, displays a confirm message when the user wants to leave the page despite having modified content in the editor.cp2011project: cp2011 projectCP3046 Sample Project in Beijing: This is a sample for CP3046 ICT Project in Beijing.Current Cost plug-in for HouseBot: This plug-in will interface with cc128 current cost module and display in HouseBot automation software. Please note that it requires the c# wrapper to work found here: http://www.housebot.com/forums/viewtopic.php?f=4&t=856395Delete Inactive TS Ports: The presence of Inactive TS Ports causes systems to hang or become sluggish and unresponsive. 
Printer redirection also suffers when there are a lot of Inactive TS Ports presents in the machine.Demo Team Exploer: demoDNN Quick Form: DNN Quick Form is a module that makes creating simple contact forms extremely easy. This is designed for those that extra flexibility when creating a simple contact form. Build your own form in the environment of your choice. Use all the standard ASP.NET Form controls to build the perfect form. The initial release only supports emailing the form to designated users. REQUIRES DotNetNuke 6.1Duet Enterprise: Create external list item workflow activity for SPD: This workflow activity for SharePoint designer creates item in exteral list and set list item property values from activity properties. Initially I develop this workflow activity to use it with Duet Enterprise. It creates new SAP entity. This activity utilize BCS object model.EntGP: Este es proyecto es para gestionar proyectosFly2Cloud: fly2cloud use for upload all file in folder to cloud blob storage with '/' delimiters. Usage 1. open _Fly2Cloud.exe.config_ for edit _ACCOUNTNAME_ & _ACCESSKEY_ 2. excute -> fly2cloud x:\xxx containerFolders size WPF: Simple WPF application to graphical show the occupied space by files and folders.Jogo: Jogo das coresLast Light: Last LightmEdit: simple text editor with notes organized with help of tree-view control.M-Files Autotask Connector: Connector project to read data from Autotask to M-Files using the connections to external databases. Using the connector you can e.g. replicate accounts, contacts and projects from Autotask to M-Files.My KB: Just my KBmysoccerdata: Allow owner to collect soccer schedule as matches going on around the worldNGS: Sistema ERP para agronegócio.NTerm: An open source .NET based terminal emulator.PC Trabalho 1 .Net: PC Trabalho 1 .NetPega Topeira: Jogo desenvolvido como projeto final do curso de C# da UFSCar Sorocaba (2011).Sequence Quality Control Studio (SeQCoS): Sequence Quality Control Studio (SeQCoS) is an open source .NET software suite designed to perform quality control (QC) of massively parallel sequencing reads. It includes tools for evaluating sequence and base quality of reads, as well as a set of basic post-QC sequence manipulation tools. SeQCoS was written in C# and integrates functionality provided by .NET Bio and Sho libraries.series operation system: series operation system. contact:694643393 ??SharePoint MUI Resource File Helper: This simple Windows Application allows you to easily compare SharePoint Resource files, allowing you to easily replicate keys across various files.SSIS Business Rules Component: A SSIS Custom Component that allows for the development and application of complex and hierarchical business rules within the SSIS data flow.TestBox: testbox

    Read the article

  • VLAN support on Juniper EX - 2200 switches

    - by liv2hak
    I have 6 Juniper switches (EX-2200) connected to each other as shown in the network topology below. I have two PCs that I am using: PC1 (used for configuring the 6 switches via minicom) and PC2 (used to monitor the traffic between the switches via the ports that are marked with arrows in the diagram). STEP 1: I create a new VLAN on Switch 3 (SW3) that includes Port 12 and Port 22. I also assign an l3-interface to the VLAN (vlan_2) with IP address 192.168.1.7. Now I plug Port 0 of Switch 3 into PC2 and try pinging 192.168.1.7 from PC2 (192.168.1.10). I want to know what will happen. My postulation is that I will not be able to ping SW3 from PC2. This is because SW3 (Port 12 and Port 22) is part of vlan_2, and vlan_2 logically breaks up broadcast domains, so 192.168.1.7 will not be reachable from 192.168.1.10. Now I have an l3-interface on SW1 with IP 192.168.1.1 using the default VLAN (vlan-id 0). Similarly I have enabled IP on SW2 - 192.168.1.2, SW3 - 192.168.1.3, SW4 - 192.168.1.4, SW5 - 192.168.1.5, SW6 - 192.168.1.6, all using the default VLAN. Now I plug Port 12 of SW3 (blue cable) into PC2 and try to ping 192.168.1.1 from PC2. What will happen at this stage? My postulation is that I will be able to ping switch 1. Is this correct? Another question: can a single port on a switch be added to multiple VLANs? I am a beginner at network configuration. Any help would be highly appreciated. (Please ignore the Cisco symbol on the switches in the diagram; all switches are Juniper EX-2200.)

    Read the article

  • VLAN with trunk to avoid broadcast storm in a network with redundant paths

    - by liv2hak
    I have 6 Juniper switches (EX-2200) connected to each other as shown in the network topology. I have two PCs that I am using: PC1 (used for configuring the 6 switches via minicom) and PC2 (used to monitor the traffic between the switches via the ports that are marked with arrows in the diagram). STEP 1: I create a new VLAN on Switch 3 (SW3) that includes Port 12 and Port 22. I also assign an l3-interface to the VLAN (vlan_2) with IP address 192.168.1.7. Now I plug Port 0 of Switch 3 into PC2 and try pinging 192.168.1.7 from PC2 (192.168.1.10). I want to know what will happen. My postulation is that I will not be able to ping SW3 from PC2. This is because SW3 (Port 12 and Port 22) is part of vlan_2, and vlan_2 logically breaks up broadcast domains, so 192.168.1.7 will not be reachable from 192.168.1.10. Now I have an l3-interface on SW1 with IP 192.168.1.1 using the default VLAN (vlan-id 0). Similarly I have enabled IP on SW2 - 192.168.1.2, SW3 - 192.168.1.3, SW4 - 192.168.1.4, SW5 - 192.168.1.5, SW6 - 192.168.1.6, all using the default VLAN. I create VLAN2 with the following configuration: SW3 - Port 12, Port 22; SW6 - Port 14. I create VLAN3 with the following configuration: SW3 - Port 0; SW6 - Port 0. I also configure a VLAN trunk between SW3 and SW6 using the following commands:
    edit interfaces ge-0/0/12 set unit 0 family ethernet-switching port-mode trunk
    edit interfaces ge-0/0/12 set unit 0 family ethernet-switching vlan members all
    There is a redundant path in the network as the loop between SW3 and SW6 is closed, yet there is no broadcast storm in the network. What is the reason for this? I have not enabled STP or RSTP, and still there is no broadcast storm; what is the reason for this? (Please ignore the Cisco symbol on the switches in the diagram; all switches are Juniper EX-2200.)

    Read the article

  • Rack layout for future growth

    - by bleything
    We're getting ready to move to a new colo facility and I'm designing the rack layout. While we have a full rack, we only have 12U worth of hardware right now:
    1x 1U switch
    7x 1U servers
    1x 2U server
    1x 2U disk shelf
    The colo facility requires us to front-mount the switch and use a 1U brush strip, so we'll be using a total of 13U of space. Regarding growth, I'm reasonably sure we'll be adding another 4U in servers, 1-2U of network gear, and 2-4U of storage in the mid-term. Specific questions I'm hoping to get help with:
    where should I mount the switch? the LEDs are on top...
    should I group the servers by function with space for adding new machines?
    as an alternative, should I group servers based on whether they are production or staging?
    where in the rack should I start? in the middle? at the top? at the bottom? equally spaced?
    Here's a silly little ASCII diagram of what I'm thinking right now. Please feel free to tear my design apart, I've really no idea what I'm doing :) Any advice is very welcome. edit: to be clear, the colo is providing redundant power with UPS and generator, so that's why there's no power gear in the plan, except for the 0U PDU that I didn't diagram.
    42 | -- switch ----------------------
    41 | -- brush strip -----------------
    40 | ~~ reserved for second switch ~~
    39 | ~~ reserved for firewall ~~~~~~~
    38 |
    37 | -- admin01 ---------------------
    36 |
    35 | -- vm01 ------------------------
    34 | -- vm02 ------------------------
    33 | ~~ reserved for vm03 ~~~~~~~~~~~
    32 | ~~ reserved for vm04 ~~~~~~~~~~~
    31 | ~~ reserved for vm05 ~~~~~~~~~~~
    30 |
    29 | -- web01 -----------------------
    28 | -- web02 -----------------------
    27 | ~~ reserved for web03 ~~~~~~~~~~
    26 | ~~ reserved for web04 ~~~~~~~~~~
    25 |
    24 |
    23 |
    22 |
    21 |
    20 |
    19 |
    18 |
    17 |
    16 | -- db01 ------------------------
    15 | +- disks ----------------------+
    14 | +------------------------------+
    13 | ~~ reserved for more ~~~~~~~~~~~
    12 | ~~ db01 disks ~~~~~~~~~~~~~~~~~~
    11 |
    10 | +- db02 -----------------------+
     9 | +------------------------------+
     8 | ~~ reserved for db02 ~~~~~~~~~~~
     7 | ~~ disks ~~~~~~~~~~~~~~~~~~~~~~~
     6 | ~~ reserved for more ~~~~~~~~~~~
     5 | ~~ db02 disks ~~~~~~~~~~~~~~~~~~
     4 |
     3 |
     2 |
     1 |

    Read the article

  • Where do I connect the HDD LED wires on my RAID adapter?

    - by Giffyguy
    I'm using a Promise FastTrak TX8660 with RAID 5. The manual (and Google) just doesn't seem to explain how exactly to connect a standard two-pin HDD LED wire to the eight available pins on the card. The manual just says to connect your LED following this diagram: The card itself resembles the diagram: But it doesn't make any sense to me. All I have is a two-pin connector for the HDD LED on the front of my computer case. I don't need anything fancy like the fault LED or separate indicators for each drive. I just want to be able to see when my RAID 5 array is working, that's all. I don't know what the "R" and "G" stand for, but my HDD LED wires are red and white. I tried connecting the red wire to the "R" pin and the white wire to the "G" pin, but that just makes the LED on the front of my case light up indefinitely, even when the computer is idle. Which pins am I supposed to connect the HDD LED header to for basic activity indication?

    Read the article

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram: I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram below is the route I would like the traffic to take when the user is inside the LAN, and the red route is seemingly what it does take. The website has a domain name (like most websites do). From the user PC, if I ping the domain name I get the internal IP of the webserver (because of a hosts file entry), and if I nslookup the domain name I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP. (Going well so far.) If I use PHP's gethostbyname() on the Moodle website with the domain name as a parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver (good news, DNS seems to be doing what I want it to). All things so far seem to be going well. The only thing that is confusing me and preventing the Moodle single sign-on from working is the fact that if I get Moodle to show my IP address, it says that it is an external one (outside my NATting firewall) when it should show an internal IP. This is the issue; any ideas on how to go about resolving it? Any ideas on tests I can perform (I have also tried a tracert and the request goes directly to the webserver), anything? Thanks all!

    Read the article

  • netkit: why can't my router 4 pc4 ping my router 1 pc1 - how can I solve this please?

    - by donok
    Below I have four routers connected, but my pc1 on r1 cannot ping my pc4 on r4, my pc2 on r2 can't ping my pc4 on r4, and vice versa. Below is a network diagram, and the configurations are below that; could anyone please help me with making them accessible? ![connecting 4 routers][1] I can't post my diagram on Server Fault (less than 10 rep) so I posted it on Stack Overflow and asked the same question there.
    pc1:
    ifconfig eth0 195.11.14.5 netmask 255.255.255.0 broadcast 195.11.14.255 up
    route add default gw 195.11.14.1 dev eth0
    pc2.start:
    ifconfig eth0 200.1.1.7 netmask 255.255.255.0 broadcast 200.1.1.255 up
    route add default gw 200.1.1.1 dev eth0
    pc3:
    ifconfig eth0 195.20.14.9 netmask 255.255.255.0 broadcast 195.20.1.255 up
    route add default gw 195.20.14.1 dev eth0
    pc4:
    ifconfig eth0 200.2.1.11 netmask 255.255.255.0 broadcast 200.2.1.255 up
    route add default gw 200.2.1.1 dev eth0
    r1:
    ifconfig eth0 195.11.14.1 netmask 255.255.255.0 broadcast 195.11.14.255 up
    ifconfig eth1 100.0.0.9 netmask 255.255.255.252 broadcast 100.0.0.11 up
    route add -net 200.1.1.0 netmask 255.255.255.0 gw 100.0.0.10 dev eth1
    route add default gw 100.0.0.10
    lab.conf: if you need more on that I'll post it up, but I think most of the info is there. Any help would be greatly appreciated, especially with trying to make a connection between pc4 and pc1; even if you think it does not make sense, please explain why. Thank you.

    Read the article

  • ActiveMQ - "Cannot send, channel has already failed" every 2 seconds?

    - by quanta
    ActiveMQ 5.7.0 In the activemq.log, I'm seeing this exception every 2 seconds: 2013-11-05 13:00:52,374 | DEBUG | Transport Connection to: tcp://127.0.0.1:37501 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Due to this keyword InactivityIOException, the first thing comes to my mind is InactivityMonitor, but the strange thing is MaxInactivityDuration=30000: 2013-11-05 13:11:02,672 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-2 Moreover, I also didn't see something like this: No message received since last read check for ... or: Channel was inactive for too (30000) long Do a netstat, I see these connections in TIME_WAIT state: tcp 0 0 127.0.0.1:38545 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38544 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38522 127.0.0.1:61616 TIME_WAIT - Here're the output when running tcpdump: Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1) Version: 4 Header length: 20 bytes Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport)) 0000 00.. = Differentiated Services Codepoint: Default (0x00) .... ..00 = Explicit Congestion Notification: Not-ECT (Not ECN-Capable Transport) (0x00) Total Length: 296 Identification: 0x7b6a (31594) Flags: 0x02 (Don't Fragment) 0... .... = Reserved bit: Not set .1.. .... = Don't fragment: Set ..0. .... 
= More fragments: Not set Fragment offset: 0 Time to live: 64 Protocol: TCP (6) Header checksum: 0xc063 [correct] [Good: True] [Bad: False] Source: 127.0.0.1 (127.0.0.1) Destination: 127.0.0.1 (127.0.0.1) Transmission Control Protocol, Src Port: 61616 (61616), Dst Port: 54669 (54669), Seq: 1, Ack: 2, Len: 244 Source port: 61616 (61616) Destination port: 54669 (54669) [Stream index: 11] Sequence number: 1 (relative sequence number) [Next sequence number: 245 (relative sequence number)] Acknowledgement number: 2 (relative ack number) Header length: 32 bytes Flags: 0x018 (PSH, ACK) 000. .... .... = Reserved: Not set ...0 .... .... = Nonce: Not set .... 0... .... = Congestion Window Reduced (CWR): Not set .... .0.. .... = ECN-Echo: Not set .... ..0. .... = Urgent: Not set .... ...1 .... = Acknowledgement: Set .... .... 1... = Push: Set .... .... .0.. = Reset: Not set .... .... ..0. = Syn: Not set .... .... ...0 = Fin: Not set Window size value: 256 [Calculated window size: 32768] [Window size scaling factor: 128] Checksum: 0xff1c [validation disabled] [Good Checksum: False] [Bad Checksum: False] Options: (12 bytes) No-Operation (NOP) No-Operation (NOP) Timestamps: TSval 2304161892, TSecr 2304161891 Kind: Timestamp (8) Length: 10 Timestamp value: 2304161892 Timestamp echo reply: 2304161891 [SEQ/ACK analysis] [Bytes in flight: 244] Constrained Application Protocol, TID: 240, Length: 244 00.. .... = Version: 0 ..00 .... = Type: Confirmable (0) .... 0000 = Option Count: 0 Code: Unknown (0) Transaction ID: 240 Payload Content-Type: text/plain (default), Length: 240, offset: 4 Line-based text data: text/plain [truncated] \001ActiveMQ\000\000\000\t\001\000\000\000<DE>\000\000\000\t\000\fMaxFrameSize\006\177<FF><FF><FF><FF> <FF><FF><FF>\000\tCacheSize\005\000\000\004\000\000\fCacheEnabled\001\001\000\022SizePrefixDisabled\001\000\000 MaxInactivityDurationInitalDelay\006\ It is very likely a tcp port check. 
This is what I see when trying telnet from another host: 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: java.io.EOFException | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///10.8.20.9:46775@61616 java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:275) at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:229) at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:221) at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:204) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Unregistering MBean org.apache.activemq:BrokerName=localhost,Type=Connection,ConnectorName=ope nwire,ViewType=address,Name=tcp_//10.8.20.9_46775 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:/ //10.8.20.9:46775@61616 2013-11-05 16:12:41,073 | DEBUG | Stopping connection: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Stopping transport tcp:///10.8.20.9:46775@61616 | org.apache.activemq.transport.tcp.TcpTranspo rt | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.Threa dPoolExecutor@23cc2a28 | org.apache.activemq.thread.TaskRunnerFactory | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Closed socket Socket[addr=/10.8.20.9,port=46775,localport=61616] | org.apache.activemq.transpo rt.tcp.TcpTransport | ActiveMQ Task-1 2013-11-05 16:12:41,074 | DEBUG | Forcing shutdown of ExecutorService: java.util.concurrent.ThreadPoolExecutor@23cc2a28 | org.apache.activemq.util.ThreadPoolUtils | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Stopped transport: tcp://10.8.20.9:46775 | 
org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Connection Stopped: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,902 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-5 So the question is: how can I find out the process that is trying to connect to my ActiveMQ (from localhost) every 2 seconds?

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >