Search Results

Search found 9467 results on 379 pages for 'tutor flowchart policy policy and procedure manual procedure'.


  • How to add an extra plist property using CMake?

    - by Jesse Beder
    I'm trying to add the item <key>UIStatusBarHidden</key><true/> to my plist that's auto-generated by CMake. For certain keys, it appears there are pre-defined ways to add an item; for example: set(MACOSX_BUNDLE_ICON_FILE ${ICON}) But I can't find a way to add an arbitrary property. I tried using the MACOSX_BUNDLE_INFO_PLIST target property as follows: I'd like the resulting plist to be identical to the old one, except with the new property I want, so I just copied the auto-generated plist and set that as my template. But the plist uses some Xcode variables, which also look like ${foo}, and CMake grumbles about this: Syntax error in cmake code when parsing string <string>com.bedaire.${PRODUCT_NAME:identifier}</string> syntax error, unexpected cal_SYMBOL, expecting } (47) Policy CMP0010 is not set: Bad variable reference syntax is an error. Run "cmake --help-policy CMP0010" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. In any case, I'm not even sure that this is the right thing to do. I can't find a good example or any good documentation about this. Ideally, I'd just let CMake generate everything as before, and just add a single extra line. What can I do?
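    A sketch of one possible approach, assuming you copy the auto-generated plist into your own template (MacOSXBundleInfo.plist.in and the MyApp target are placeholder names). CMake only chokes on the ${...} spelling, and Xcode generally accepts the $(...) form for the same build-setting references, so rewriting the Xcode variables in the template sidesteps the CMP0010 warning:

        # CMakeLists.txt - point the bundle at the custom plist template
        set_target_properties(MyApp PROPERTIES
            MACOSX_BUNDLE_INFO_PLIST ${CMAKE_SOURCE_DIR}/MacOSXBundleInfo.plist.in)

    Then, inside MacOSXBundleInfo.plist.in, add the extra key and switch the variable syntax:

        <!-- extra key that CMake knows nothing about -->
        <key>UIStatusBarHidden</key>
        <true/>
        <!-- $(...) instead of ${...} so CMake's parser leaves it alone -->
        <key>CFBundleIdentifier</key>
        <string>com.bedaire.$(PRODUCT_NAME:identifier)</string>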

    Read the article

  • How do I call +class methods in Objective C without referencing the class?

    - by TimM
    I have a series of "policy" objects which I thought would be convenient to implement as class methods on a set of policy classes. I have specified a protocol for this, and created classes to conform to it (just one shown below): @protocol Counter +(NSInteger) countFor: (Model *)model; @end @interface CurrentListCounter : NSObject <Counter> +(NSInteger) countFor: (Model *)model; @end I then have an array of the classes that conform to this protocol (like CurrentListCounter does): +(NSArray *) availableCounters { return [[[NSArray alloc] initWithObjects: [CurrentListCounter class],[AllListsCounter class], nil] autorelease]; } Notice how I am using the classes like objects (and this might be my problem - in Smalltalk classes are objects like everything else - I'm not sure if they are in Objective-C?). My exact problem is when I want to call the method after I take one of the policy objects out of the array: id<Counter> counter = [[MyModel availableCounters] objectAtIndex: self.index]; return [counter countFor: self]; I get a warning on the return statement - it says -countFor: not found in protocol (so it's assuming it's an instance method where I want to call a class method). However, as the objects in my array are class objects, the class methods should behave like instance methods on them (or conceptually they should). Is there a magic way to call class methods? Or is this just a bad idea and I should just create instances of my policy objects (and not use class methods)?
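    A minimal sketch of one way to avoid the warning: since the array elements are class objects, typing the element as Class<Counter> rather than id<Counter> lets the compiler resolve countFor: against the class (+) side of the protocol (Counter, Model and MyModel are the asker's own types):

        Class<Counter> counter = [[MyModel availableCounters] objectAtIndex: self.index];
        return [counter countFor: self];

    In Objective-C, classes are objects too (each is an instance of its metaclass), so keeping them in an NSArray is fine; the warning is purely the compiler resolving the message against the wrong side of the protocol.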

    Read the article

  • Remote Desktop Connection Only Works One Way

    - by advocate
    I can't get my desktop to connect to my laptop through Remote Desktop Connection; unfortunately I can only get my laptop to connect to my desktop (quite useless).

    Desktop: Windows 7 Ultimate 64 Bit SP1. Windows Firewall is off for all 3 profiles (domain / private / public). Remote Desktop Connection is installed and set to allow all connections. Service states:
    - Running: Remote Desktop Configuration, Remote Desktop Services, Remote Desktop Services UserMode Port Redirector, Remote Procedure Call (RPC)
    - Stopped: Remote Access Auto Connection Manager, Remote Access Connection Manager, Remote Procedure Call (RPC) Locator, Remote Registry, Routing and Remote Access, Windows Remote Management (WS-Management)

    Laptop: Windows 7 Home Premium 64 Bit SP1. Windows Firewall is off for all 3 profiles (domain / private / public). Remote Desktop Connection is installed and set to 'Allow Remote Assistance connections to this computer'. Service states:
    - Running: Remote Procedure Call (RPC)
    - Stopped: Remote Access Auto Connection Manager, Remote Access Connection Manager, Remote Desktop Configuration, Remote Desktop Services, Remote Procedure Call (RPC) Locator, Remote Registry, Routing and Remote Access, Windows Remote Management (WS-Management)

    It should be noted that the laptop I'm trying to connect to is an Alienware and might be running some wonky Dell settings. Also, the Remote Desktop settings are slightly different as it's a Home edition of Windows and not Ultimate like my desktop. Finally, both computers are in the same Homegroup, so RDC can be accessed with one click through the network section of Windows. They're also in the same workgroup, MSHOME, just to see if that helps.
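    A quick diagnostic sketch for comparing the two machines from an elevated command prompt (TermService, fDenyTSConnections and port 3389 are the standard Windows names for the Remote Desktop service, its "deny connections" flag and its listening port):

        REM Is the Remote Desktop Services service actually running?
        sc query TermService
        REM fDenyTSConnections: 0 = inbound Remote Desktop allowed, 1 = denied
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections
        REM Is anything listening on the default RDP port?
        netstat -an | find "3389"

    One caveat worth checking first: as far as I know, Windows 7 Home Premium ships only the Remote Desktop client, not the host, which would explain why the laptop's Remote Desktop Services service stays stopped and only Remote Assistance can be enabled there.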

    Read the article

  • What's the proper way to prepare chroot to recover a broken Linux installation?

    - by ~quack
    This question relates to questions that come up often. The procedure is frequently mentioned or linked to offsite, but is not often clearly and correctly stated. With the objective of concentrating useful information in one place, this question seeks to provide a clear, correct reference for this procedure. What are the proper steps to prepare a chroot environment for a recovery procedure? In many situations, repairing a broken Linux installation is best done from within the installation. But if the system won't boot, how do you fix it from within? Let's assume you manage to boot into an alternate system. Once there, you need to access your broken installation in order to fix it. Many recovery How-Tos recommend using chroot in order to run programs as if you were actually booted into the broken installation. What is the basic procedure? Are there accepted best practices to follow? What variables need to be considered in order to adapt the basic preparation steps to a particular recovery task? As this is Community Wiki, feel free to edit this question to improve it as well.
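    A minimal sketch of the common preparation steps, assuming the broken installation's root filesystem is on /dev/sda1 (a hypothetical device name - substitute your own, and mount /boot or other split filesystems separately if you have them):

        sudo mount /dev/sda1 /mnt                # the broken installation's root
        sudo mount --bind /dev  /mnt/dev         # device nodes from the running system
        sudo mount --bind /proc /mnt/proc        # kernel process information
        sudo mount --bind /sys  /mnt/sys         # kernel/sysfs information
        sudo chroot /mnt /bin/bash               # commands now run "inside" the broken install
        # ...perform repairs (reinstall the bootloader, fix /etc/fstab, etc.)...
        exit                                     # leave the chroot, then unmount in reverse order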

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

    New Projects
    - Application Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...
    - Audio Service - Play Wave Files From Windows Service: This is a windows service that Check a registry key, when the key is updated with a new wave file path the service plays the wave file.
    - Aviamodels: 3d drawing Aviamodels
    - Control of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.
    - Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...
    - DevBoard: DevBoard is a webbased scrum tool that helps developers/team get a clear overview of the project progress. It's developed in C# and silverlight.
    - Flex AdventureWorks: The is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...
    - GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.
    - Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...
    - Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.
    - Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.
    - Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...
    - Silverlight Google Search Application: The Silverlight Google Search Application uses Google Search API and behaves like Internet Search Application with option to preview desired page i...
    - Weather Forecast Control: MyWeather forecast control pulls up to date weather forecast information from The Weather Channel for your website.

    New Releases
    - Application Management Library: ApplicationManagement v1.0: First Release
    - Audio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.
    - AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping; IListSource mapping. All other features are supported.
    - Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)
    - Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) and canvas-vsdoc.js (must ...
    - Control of payment proofs program for Greek citizens: Payment Proofs: source code
    - Courier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works. Blog post explaining the updates and re-factoring c...
    - Cover Creator: Initial release: This is initial stable release. For now only in Polish language.
    - Employee Scheduler: Employee Scheduler 2.2: Small Bug found. Small total hour calculation bug. See http://employeescheduler.codeplex.com/WorkItem/View.aspx?WorkItemId=6059 Extract the files...
    - EnhSim: Release v1.9.7.1: Implemented Dislodged Foreign Object trinket; Whispering Fanged Skull now also procs off Flame shock dots; You can toggle bloodlust o...
    - Extend SmallBasic: Teaching Extensions v.007: added SimpleSquareTest; added Tortoise.Approve() for virtual proctor; how to use virtual proctor: change the path in the proctor.txt file (located i...
    - FolderSize: FolderSize.Win32.1.0.1.0: A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...
    - GLB Virtual Player Builder: v0.4.0 Beta: Allows for user to import and use archetypes for building players. The archetypes are contained in the file "archetypes.xml". This file is editab...
    - Google Map WebPart from SharePoint List: GMap Stable Release
    - Henge3D Physics Library for XNA: Henge3D Source (2010-02 R2): Fixed a build error related to an assembly attribute in XBOX 360 builds. Tweaked the controls in the sample when targeting the 360. Reduced the...
    - Indexer: Beta Release 1: Just the initial/rough cut.
    - NukeCS: NukeCS 5.2.3 Source Code: update version to 5.2.3
    - ODOS: ODOS STABLE 1.5.0: Thank you for your patience while we develop this version. Not that much has been added, though. Just doing some sub-conscious stuff to make life...
    - PoshBoard: PoshBoard 3.0 Beta 1: Welcome to the first beta release of PoshBoard 3.0! IMPORTANT WARNING: this release is absolutly not feature complete and is error-prone. Okay, ...
    - Restart SQL Audit Policy and Job: Restart SQL 2008 Audit Policy and Job: This folder contains three pieces of source code: Server Audit Status (Started).xml - Import this on-schedule policy into your server's Policy-Ba...
    - SAL - Self Artificial Learning: Artificial Learning 2AQV Working Proof Of Concept: This is the Simulation proof of concept version that comes after the 1aq version. AQ stands for Anwering Questions.
    - SharePoint 2010 Word Automation: SP 2010 Word Automation - Workflow Actions 1.1: This release includes two new custom workflow activities for SharePoint designer: Convert Folder, Convert Library. More information about these new...
    - SharePoint Outlook Connector: Version 1.0.1.1: Exception Logging has been improved.
    - Sharpy: Sharpy 1.2 Alpha: This is the third Sharpy release. A change has been made to allow overriding the master page from the controller. The release contains the single ...
    - Silverlight Google Search Application: SL Google Search App Alpha: This is just a first alpha version of the application, as it looks like when I uploaded it to CodePlex. The application works, requires Silverlight...
    - Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity 1.0 RC: EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser VS 2010 RC MVC 2 RC db MSSQL2008
    - thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.6): This release is an update for WSCF.blue V1. Below are the bug fixes made since the V1.0.5 release: The data contract type filter was not including...
    - TS3QueryLib.Net: TS3QueryLib.Net Version 0.18.13.0: Changelog: Added overloads to all methods of QueryRunenr class handling permission tasks to allow passing of permission name instead of permissionid...
    - Umbraco CMS: Umbraco 4.1 Beta 2: This is the second beta of Umbraco 4.1. Umbraco 4.1 is more advanced than ever, yet faster, lighter and simpler to use than ever. We, on behalf of...
    - VCC: Latest build, v2.1.30218.0: Automatic drop of latest build
    - Zack's Fiasco - Code Generated DAL: v1.2.4: Enhancements: SQL Server CRUD Stored Procedures; added option for USE <db>; added option to create or not create INSERT sprocs; added option to cr...

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - Image Resizer Powertoy Clone for Windows
    - ASP.NET
    - Microsoft SQL Server Community & Samples
    - DotNetNuke® Community Edition

    Most Active Projects
    - Rawr
    - Sharpy
    - DinnerNow.net
    - BlogEngine.NET
    - jQuery Library for SharePoint Web Services
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module
    - patterns & practices – Enterprise Library
    - PHPExcel
    - Facebook Developer Toolkit
    - Fluent Ribbon Control Suite

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important role in programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are datetime fields – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar, Inline Table-Valued, and Multi-statement Table-Valued. I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
    It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.0337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one about twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".
    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and Memory processes in real time or at a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - View Solaris services status details
    - Drill down into a process's details
    - View the busiest zones if applicable

    Where to start
    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information for all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file).
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Identity Propagation across Web and Web Service - 11g

    - by Prakash Yamuna
    I was on a customer call recently and this topic came up. In fact, since this topic seems to come up fairly frequently, I thought I would describe the recommended model for doing SSO for Web Apps and then doing Identity Propagation across the back-end web services. The image below shows a typical flow. Here is a more detailed drill down of what happens at each step of the flow (the number in red in the diagram maps to the description below of the behind-the-scenes processing that happens in the stack). [1] The Web App is protected with OAM and so the typical SSO scenario is applicable. The Web App URL is protected in OAM. The Web Gate intercepts the request from the Browser to the Web App - if there is an OAM (SSO) token - then the Web Gate validates the OAM token. If there is no SSO token - then the user is directed to the login page - user enters credentials, user is authenticated and an OAM token is created for that browser session. [2] Once the Web Gate validates the OAM token - the token is propagated to the WLS Server where the Web App is running. You need to ensure that you have configured the OAM Identity Asserter in the Weblogic domain. If the OAM Identity Asserter is configured, this will end up creating a JAAS Subject. Details can be found at: http://docs.oracle.com/cd/E23943_01/doc.1111/e15478/webgate.htm#CACIAEDJ [3] The Web Service client (in the Web App) is secured with one of the OWSM SAML Client Policies. If secured in this fashion, the OWSM Agent creates a SAML Token from the JAAS Subject (created in [2] by the OAM Identity Asserter) and injects it into the SOAP message. Steps for securing a JEE JAX-WS Proxy Client using OWSM Policies are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#BABBHHHC Note: As shown in the diagram - instead of building a JEE Web App - you can also use WebCenter and build portlets. If you are using WebCenter then you can follow the same architecture. Only the steps for securing WebCenter Portlets with OWSM are different: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_wss.htm#CIHEBAHB [4] The SOA Composite App is secured with the OWSM SAML Service policy. The OWSM Agent intercepts the incoming SOAP request, validates the SAML token and creates a JAAS Subject. [5] When the SOA Composite App tries to invoke the OSB Proxy Service, the SOA Composite App "Reference" is secured with the OWSM SAML Client Policy. Here again the OWSM Agent will create a new SAML Token from the JAAS Subject created in [4] and inject it into the SOAP message.
    Steps for securing SOA Composite Apps (Service, Reference, Component) are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#CEGDGIHD [6] When the request reaches the OSB Proxy Service, the Proxy Service is again secured with the OWSM SAML Token Service Policy. So the same steps are performed as in [4]. The end result is a JAAS Subject. [7] When OSB needs to invoke the Business App Web Service, it goes through the OSB Business Service. The OSB Business Service is secured with the OWSM SAML Client Policy and step [5] is repeated. Steps for securing OSB Proxy Services and OSB Business Services are documented at: http://docs.oracle.com/cd/E23943_01/admin.1111/e15867/proxy_services.htm#OSBAG1097 [8] Finally when the message reaches the Business App Web Service, this service is protected by the OWSM SAML Service policy and step [4] is repeated by the OWSM Agent. Steps for securing Weblogic Web Services, ADF Web Services, etc. are documented at: http://docs.oracle.com/cd/E23943_01/web.1111/b32511/attaching.htm#CEGCJDIF In the above description, for purposes of brevity, I have not described which OWSM SAML policies one should use; OWSM ships with a number of SAML policies, and I briefly described some of the trade-offs involved with the various SAML policies here. The diagram above and the accompanying description of what is happening in each step of the flow assume you are using "SAML SV" or "SAML Bearer" based policies without an STS.

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open-sourced the BizTalk Data Services. We were meanwhile working on other artifacts which come along with BizTalk Server, like the "Business Rules Engine". We are happy to announce the first version of BRE Data Services. BRE Data Services follows the same concept we covered with BTS Data Services, providing a RESTful OData-based API to interact with the Business Rules Engine via HTTP, using the ATOM Publishing Protocol or JSON as the encoding mechanism.

    In this first version release, we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute Business Rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services.

    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts etc. The following are some examples which detail some of the available features in the current version of the API.

    Basic Querying:
    - Querying BRE Policies: http://localhost/BREDataServices/BREMananagementService.svc/Policies
    - Querying BRE Rules: http://localhost/BREDataServices/BREMananagementService.svc/Rules
    - Querying BRE Vocabularies: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation:
    The BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects.
    - Querying a specific Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')
    - Querying a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')
    - Querying all Rules under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules
    - Querying all Facts under a Policy: http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts
    - Querying all Actions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions
    - Querying all Conditions for a specific Rule: http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions
    - Querying a specific Vocabulary: http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Implementation:
    With the BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.

    The first is through the Service Operations feature of WCF Data Services, in which you can execute the Facts by passing them in the URL itself. This is a very simple implementation of executing the policies, due to the current limitations & restrictions of the Service Operations of WCF Data Services (only primitive types can be passed as input parameters). Below is a code sample, followed by a traced Request/Response message.

    The second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule which you want to execute, then insert Facts for that particular rule, and finally, when you perform the SaveChanges() call on the IUpdatable interface API, it executes the policy with the facts which you inserted at runtime. Below is a sample of client-side code.
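    The original client-side sample was an image that did not survive aggregation, so here is a hypothetical C# sketch of the IUpdatable pattern just described (the Fact type with ID, FactType and FactInstance properties is inferred from the AtomPub trace below; the service URL and values are the post's own examples):

        // connect to the BRE Data Services endpoint
        var ctx = new DataServiceContext(new Uri("http://localhost/BREDataServices/BREMananagementService.svc"));

        // query the Fact entry for the policy we want to execute
        Fact fact = ctx.CreateQuery<Fact>("Facts").Where(f => f.ID == "TestPolicy").First();

        // insert the fact instance (a serialized XML message) for the policy
        fact.FactType = "TestSchema";
        fact.FactInstance = "<ns0:LoanStatus xmlns:ns0=\"http://tellago.com\">" +
                            "<Age>10</Age><Status>true</Status></ns0:LoanStatus>";
        ctx.UpdateObject(fact);

        // SaveChanges() sends the MERGE request and executes the policy with the inserted facts
        ctx.SaveChanges();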
    Due to the limitations of the current version of WCF Data Services, there is no way to return the updates happening on the service side back to the client via the SaveChanges() method. Here we are executing the rule passing a serialized XML as Facts, and no changes are made to any data which we could query back to fetch the changes. This is overcome through the first way of executing the policies, which is executing them as a Service Operation call. This actually generates an AtomPub message as shown below:

    POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
    User-Agent: Microsoft ADO.NET Data Services
    DataServiceVersion: 1.0;NetFx
    MaxDataServiceVersion: 2.0;NetFx
    Accept: application/atom+xml,application/xml
    Accept-Charset: UTF-8
    Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Host: localhost:8080
    Content-Length: 1481
    Expect: 100-continue

    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
    Content-Type: application/http
    Content-Transfer-Encoding: binary

    MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
    Content-ID: 4
    Content-Type: application/atom+xml;type=entry
    Content-Length: 927

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
      <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
      <title />
      <author>
        <name />
      </author>
      <updated>2011-01-31T20:09:15.0023982Z</updated>
      <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
      <content type="application/xml">
        <m:properties>
          <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
          <d:FactType>TestSchema</d:FactType>
          <d:ID>TestPolicy</d:ID>
        </m:properties>
      </content>
    </entry>
    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation:
    The installation of the BRE Data Services is pretty straightforward.
    - Create a new IIS website, say BREDataServices.
    - Download the source code from Tellago Codeplex and copy the content from Tellago.BRE.REST.ServiceHost to the physical location of the above created website.
    - The app pool account running the website should have admin access to the BizTalkRuleEngineDb database.
    - Then right-click the BREManagementService.svc in the IIS content view for the website and voila...

    Conclusion:
    The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server and we have uploaded the source code to Tellago's DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases @ Tellago Codeplex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling...!!!

    Thanks,
    Vishal Mody

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA Users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post deployment script option, without any additional work for the SSA Users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post deployment script option.

    Using Data Pump to populate databases
    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:
    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:
    -- Full export
    expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull
    Figure-1: Full export of database using data pump
    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned:
    -- Full import
    declare
       h1 NUMBER;
    begin
    -- Creating the directory object where the source database dump is backed up.
       execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
    -- Running import
       h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
       dbms_datapump.set_parallel(handle => h1, degree => 1);
       dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
       dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
       dbms_datapump.start_job(handle => h1);
       dbms_datapump.detach(handle => h1);
    end;
    /
    Figure-2: Importing using data pump pl/sql procedures
    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles & their sizes.
    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.
    Figure-3: Using Custom script option for calling Import SQL
    5. Now, an SSA user can login to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    Using Transportable tablespaces to populate databases
    A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:
    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from Big Endian to Little Endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out if any conversion is required, describes the steps required to convert the datafiles, and backs up the tablespace.
    2. The SSA Administrator should copy the database (tablespaces) backup datafiles and export dumps to the backup location accessible from the hosts participating in the database zone(s).
    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.
    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post deployment script.
    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.
    6. Now, an SSA user can login to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team so I'm not wholly sure on the details, but we are in a situation where there are currently 2 Exchange email servers – the old one and the new one. Users' mail boxes are being transferred in a planned process, but as we approach the old server being turned off we have to also make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help in my opinion. It's a nightmare of back and forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we were going to be working in MSDB, and Books OnLine was remarkably helpful and amongst a lot of information told me about a couple of procedures that can be used to interrogate DBMail settings. USE [msdb] -- It's where all the good stuff is kept GO EXEC dbo.sysmail_help_profile_sp; EXEC dbo.sysmail_help_account_sp; Both of these procedures take optional parameters with the same names – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see (click for a bigger image), the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below. SELECT [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [last_mod_datetime] , [last_mod_user] FROM dbo.sysmail_account AS sa; SELECT [account_id] , [servertype] , [servername] , [port] , [username] , [credential_id] , [use_default_credentials] , [enable_ssl] , [flags] , [last_mod_datetime] , [last_mod_user] , [timeout] FROM dbo.sysmail_server AS sms Now, we have no real idea how these tables are linked and whether making an update directly to one or other of them is going to do what we want, or whether it will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books OnLine then, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.
Not wanting to totally obliterate the existing values by not passing values in all of the parameters I set to writing some code to gather the existing data from the tables and re-write the SMTP server name and then execute the resulting TSQL. IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL DROP TABLE #sysmailprofiles GO CREATE TABLE #sysmailprofiles ( account_id INT , [name] VARCHAR(50) , [description] VARCHAR(500) , email_address VARCHAR(500) , display_name VARCHAR(500) , replyto_address VARCHAR(500) , servertype VARCHAR(10) , servername VARCHAR(100) , port INT , username VARCHAR(100) , use_default_credentials VARCHAR(1) , ENABLE_ssl VARCHAR(1) ) INSERT [#sysmailprofiles] ( [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [servertype] , [servername] , [port] , [username] , [use_default_credentials] , [ENABLE_ssl] ) EXEC [dbo].[sysmail_help_account_sp] DECLARE @TSQL NVARCHAR(1000) SELECT TOP 1 @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = ' + CAST([s].[account_id] AS VARCHAR(20)) + ', @account_name = ''' + [s].[name] + '''' + ', @email_address = N''' + [s].[email_address] + '''' + ', @display_name = N''' + [s].[display_name] + '''' + ', @replyto_address = N''' + s.replyto_address + '''' + ', @description = N''' + [s].[description] + '''' + ', @mailserver_name = ''NEWSMTP.contoso.com''' + +', @mailserver_type = ' + [s].[servertype] + ', @port = ' + CAST([s].[port] AS VARCHAR(20)) + ', @username = ' + COALESCE([s].[username], '''''') + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1)) + ', @enable_ssl =' + [s].[ENABLE_ssl] FROM [#sysmailprofiles] AS s WHERE [s].[servername] = 'SMTP.Contoso.com' SELECT @tsql EXEC [sys].[sp_executesql] @tsql This worked well for me and testing the email function EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later in writing this blog that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this and only the values for parameters specified get changed. If a parameter is not specified in the execution of the procedure then the values remain unchanged. This renders most of the above script unnecessary as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update. EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com' This wasn’t going to be the main reason for this post, it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code you are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example that had fewer than all parameters specified to give a lead to this method existing.

    Read the article

  • AVG Rescue CD: detecting viruses without relying on your OS, by Poinsot Benjamin

    Hello, today I am sharing my latest article, AVG Rescue CD: detecting viruses without relying on your OS. You can read it at this address: http://bpoinsot.developpez.com/tutor...avg-rescue-cd/. Synopsis: Quote: AVG Rescue CD offers the ability to run an antivirus scan of your computer from a LiveCD Linux distribution. This way, you benefit from the...

    Read the article

  • Summary of the Day One keynote of the MIX10 conference, by Jérôme Lambert

    Hello everyone, below you will find the link to the full summary of the first keynote of the MIX10 conference, held in Las Vegas on 15 March 2010. Feel free to use this discussion to ask any questions about what was said at the conference, whether about Silverlight or Windows Phone 7 Series. Link to the article: http://jlambert.developpez.com/tutor...ference-mix10/ Jérôme...

    Read the article

  • Is there a procedural graphical programming environment?

    - by Marc
    I am searching for a graphical programming environment for procedural programming in which you can integrate some or all of the common sources of calculation procedures, such as Excel sheets, MATLAB scripts or even .NET assemblies. I am thinking of something like a flowchart configurator in which you define the procedures via drag & drop using flow statements (if-else, loops, etc.). Do you know of any systems heading in this direction?

    Read the article

  • How can I manage my personal notes and code snippet files in one place online [closed]

    - by user1758043
    Whenever I work on a project, I have lots of notes, diagram files, images and brainstorming ideas that I want to keep. I want to put them in one place so that I can see the history of my work. Is there any tool where I can store this online? My company is using Confluence, but that's too costly for me. I want something for a single user, but online in the cloud, where I can store:
    Notes
    Code snippets
    Diagrams, flowcharts
    Attached files, images
    Bookmarks, sites

    Read the article

  • How to practice typing of programmer keys such as tilde, pipe and programmer quote?

    - by user7893
    It is nice that there are services such as TypeRacer where you can practice casual writing, but I want to practice programmer keys: more numbers and the symbols that a regular typist rarely uses. There was a typing tutor with which I practiced some programmer keys, and I noticed that my speed dropped dramatically from 70-80 wpm to about 15-30 wpm; it also trains different muscles. So how can I practice just programming keys, with programming texts or just random pieces of code?

    Read the article

  • What legal considerations do I need to have when programming?

    - by JustJohn
    Recently I worked with a group to create a logging system for tutoring labs to track tutor hours and usage of the labs. We had to change the design a fair amount to be in compliance with FERPA. My question stems from the fact that in my course of study there has never been any real mention of how people in this field have to deal with complying with the law in their work. So I would like to know how much programmers have to consider the law in the work they do.

    Read the article

  • Choosing the right Design Pattern

    - by Carl Sagan
    I've always recognized the importance of utilizing design patterns. I'm curious as to how other developers go about choosing the most appropriate one. Do you use a series of characteristics (like a flowchart) to help you decide? For example:
    If objects are related, but we do not want to specify the concrete class, consider Abstract Factory.
    When instantiation is left to derived classes, consider Factory Method.
    Need to access elements of an aggregate object sequentially? Try Iterator.
    Or something similar?

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.

    A simple 2-table scenario: I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order. An Order can have 1 to n Items.

    I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned by the query, as well as its parent Order.

    Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is two-fold, and you may or may not agree. The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining Orders to Items, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Item client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code that populates my custom Order and Item objects from the two datatables returned.

    What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:

        TempSQL = '
            INSERT INTO #ItemsToQuery (OrderId, ItemId)
            SELECT Orders.OrderId, Items.ItemId
            FROM Orders, Items
            WHERE Orders.OrderId = Items.OrderId
              AND /* some unpredictable Order filters go here */
              AND /* some unpredictable Item filters go here */'

    Then, I would call a stored procedure:

        CREATE PROCEDURE GetItemsAndOrders (@tempSql AS NVARCHAR(MAX))
        AS
        BEGIN
            EXECUTE (@tempSql) -- to fill the #ItemsToQuery table

            SELECT * FROM Items
            WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)

            SELECT * FROM Orders
            WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
        END

    The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.

    Three workarounds come to mind, but I'm looking for a better one:

    1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL which, because it was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.

    2) I could perform all the queries from the client. The first would be something like this:

        SELECT Items.* FROM Orders, Items
        WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)

        SELECT Orders.* FROM Orders, Items
        WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)

    This still provides me with the ability to reuse my client-side object-population code, because the Orders and Items continue to be returned in two different tables.

    I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon-feeding on that one. If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
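    For what it's worth, one workaround sometimes suggested for exactly this scoping behaviour is a global temporary table, since a ##-table created inside an EXECUTE'd batch remains visible after that batch ends. The sketch below uses placeholder filters and the table names from the question; note that global temp table names are shared across all sessions, so a real implementation would need a uniquified name (or the table-valued parameter approach mentioned above) to be safe under concurrency.

        DECLARE @tempSQL NVARCHAR(MAX) = N'
            SELECT  Orders.OrderId, Items.ItemId
            INTO    ##ItemsToQuery
            FROM    Orders
            JOIN    Items ON Orders.OrderId = Items.OrderId
            WHERE   1 = 1 /* unpredictable Order and Item filters appended here */';

        EXECUTE (@tempSQL);   -- ##ItemsToQuery survives the dynamic batch

        -- Static SQL in the same procedure can now reference it
        SELECT * FROM Items
        WHERE  Items.ItemId IN (SELECT ItemId FROM ##ItemsToQuery);

        SELECT * FROM Orders
        WHERE  Orders.OrderId IN (SELECT DISTINCT OrderId FROM ##ItemsToQuery);

        DROP TABLE ##ItemsToQuery;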

    Read the article

  • TStringList and TThread that does not free all of its memory

    - by VanillaH
    Version used: Delphi 7. I'm working on a program that runs a simple for loop over a virtual ListView. The data is stored in the following record:

        type
          TList = record
            Item: Integer;
            SubItem1: String;
            SubItem2: String;
          end;

    Item is the index, SubItem1 the status of the operations (success or not), and SubItem2 the path to the file. The for loop loads each file, does a few operations and then saves it. The operations take place in a TStringList. Files are about 2 MB each. Now, if I do the operations on the main form, it works perfectly. Multi-threaded, there is a huge memory problem. Somehow, the TStringList doesn't seem to be freed completely. After 3-4k files, I get an EOutOfMemory exception. Sometimes the software is stuck at 500-600 MB, sometimes not. In any case, the TStringList always raises an EOutOfMemory exception and no file can be loaded any more. On computers with more memory, it takes longer to get the exception. The same thing happens with other components. For instance, if I use THTTPSend from Synapse, after a while the software cannot create any new threads because the memory consumption is too high. It's around 500-600 MB when it should be, at most, 100 MB. On the main form, everything works fine. I guess the mistake is on my side; maybe I don't understand threads well enough. I tried to free everything in the Destroy event. I tried the FreeAndNil procedure. I tried with only one thread at a time. I tried freeing the thread manually (no FreeOnTerminate...). No luck. So here is the thread code. It's only the basic idea, not the full code with all the operations. If I remove the LoadFile procedure, everything works fine. A thread is created for each file, according to a thread pool.

        unit OperationsFiles;

        interface

        uses
          Classes, SysUtils, Windows;

        type
          TOperationFile = class(TThread)
          private
            Position: Integer;
            TPath, StatusMessage: String;
            FileStringList: TStringList;
            procedure UpdateStatus;
            procedure LoadFile;
          protected
            procedure Execute; override;
          public
            constructor Create(Path: String; LNumber: Integer);
          end;

        implementation

        uses
          Form1;

        procedure TOperationFile.LoadFile;
        begin
          try
            FileStringList.LoadFromFile(TPath);
            // Operations...
            StatusMessage := 'Success';
          except
            on E: Exception do
              StatusMessage := E.ClassName;
          end;
        end;

        constructor TOperationFile.Create(Path: String; LNumber: Integer);
        begin
          inherited Create(False);
          TPath := Path;
          Position := LNumber;
          FreeOnTerminate := True;
        end;

        procedure TOperationFile.UpdateStatus;
        begin
          FileList[Position].SubItem1 := StatusMessage;
          Form1.ListView4.UpdateItems(Position, Position);
        end;

        procedure TOperationFile.Execute;
        begin
          FileStringList := TStringList.Create;
          LoadFile;
          Synchronize(UpdateStatus);
          FileStringList.Free;
        end;

        end.

    What could be the problem? I thought at one point that maybe too many threads were being created; if a user loads 1 million files, ultimately 1 million threads are going to be created, although only 50 threads are created and running at the same time. Thanks for your input.

    Read the article

  • BPA scan did not complete for one or more servers

    - by Hossein Aarabi
    In Windows Server 2012 RTM, I am trying to run the BPA. It fails, saying "BPA scan did not complete for one or more servers". Try #1 and Try #2 (screenshots omitted) both ended with that error. So, I decided to enable Turn on Script Execution (with Allow all scripts) in the Local Group Policy Editor; now I get a very nice exception message :) Clicking on the Ignore button, BPA logs the error message shown in Try #3 (screenshot omitted). So, I decided to go ahead and set the execution policy for all the scopes in PowerShell to Unrestricted. Again no luck. What is going on?

    Read the article

  • ASA hairpinning: I basically want to allow 2 spokes to be able to communicate with each other.

    - by Thirst4Knowledge
    ASA spoke-to-spoke communication. I have been looking at spoke-to-spoke comms, or "hairpinning", for months and have posted on numerous forums but to no avail. I have a hub-and-spoke network where the hub is an ASA firewall, version 8.2. I basically want to allow 2 spokes to be able to communicate with each other. I think that I have got the concept of the ASA config, for example:

        same-security-traffic permit intra-interface
        access-list HQ-LAN extended permit ip ASA-LAN 255.255.248.0 HQ-LAN 255.255.255.0
        access-list HQ-LAN extended permit ip 192.168.99.0 255.255.255.0 HQ-LAN 255.255.255.0
        access-list no-nat extended permit ip ASA_LAN 255.255.248.0 HQ-LAN 255.255.255.0
        access-list no-nat extended permit ip HQ-LAN 255.255.255.0 192.168.99.0 255.255.255.0
        access-list no-nat extended permit ip 192.168.99.0 255.255.255.0 HQ-LAN 255.255.255.0

    I think my problem may be that the other spokes are not Cisco firewalls and I need to work out how to do the alternative setups. I want to at least make sure that my firewall setup is correct, then I can move on to the other spokes. Here is my config:

    Hostname ASA domain-name mydomain.com names ! interface Ethernet0/0 speed 100 duplex full nameif outside security-level 0 ip address 1.1.1.246 255.255.255.224 ! interface Ethernet0/1 speed 100 duplex full nameif inside security-level 100 ip address 192.168.240.33 255.255.255.224 ! interface Ethernet0/2 description DMZ VLAN-253 speed 100 duplex full nameif DMZ security-level 50 ip address 192.168.254.1 255.255.255.0 ! interface Ethernet0/3 no nameif no security-level no ip address ! boot system disk0:/asa821-k8.bin ftp mode passive clock timezone GMT/BST 0 dns server-group DefaultDNS domain-name mydomain.com same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network ASA_LAN_Plus_HQ_LAN network-object ASA_LAN 255.255.248.0 network-object HQ-LAN 255.255.255.0 access-list outside_acl remark Exchange web access-list outside_acl extended permit tcp any host MS-Exchange_server-NAT eq https access-list outside_acl remark PPTP Encapsulation access-list outside_acl extended permit gre any host MS-ISA-Server-NAT access-list outside_acl remark PPTP access-list outside_acl extended permit tcp any host MS-ISA-Server-NAT eq pptp access-list outside_acl remark Intra Http access-list outside_acl extended permit tcp any host MS-ISA-Server-NAT eq www access-list outside_acl remark Intra Https access-list outside_acl extended permit tcp any host MS-ISA-Server-NAT eq https access-list outside_acl remark SSL Server-Https 443 access-list outside_acl remark Https 8443(Open VPN Custom port for SSLVPN client downlaod) access-list outside_acl remark FTP 20 access-list outside_acl remark Http access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT object-group DM_INLINE_TCP_1 access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT eq 8443 access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT eq www access-list outside_acl remark For secure remote Managment-SSH access-list outside_acl extended permit tcp any host OpenVPN-Srvr-NAT eq ssh access-list outside_acl extended permit ip Genimage_Anyconnect 255.255.255.0 ASA_LAN 255.255.248.0 access-list ASP-Live remark Live ASP access-list ASP-Live extended permit ip ASA_LAN 255.255.248.0 192.168.60.0 255.255.255.0 access-list Bo remark Bo access-list Bo extended permit ip ASA_LAN 255.255.248.0 192.168.169.0 255.255.255.0 access-list Bill remark Bill access-list Bill extended permit ip ASA_LAN 255.255.248.0
Bill.15 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 Bill.5 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.149.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.160.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.165.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.144.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.140.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.152.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.153.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.163.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.157.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.167.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.156.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 North-Office-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.161.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.143.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.137.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.159.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 HQ-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.169.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.150.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.162.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.166.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.168.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.174.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.127.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.173.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.175.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.176.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.100.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 192.168.99.0 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 10.10.10.0 255.255.255.0 access-list no-nat extended permit ip host 192.168.240.34 Cisco-admin-LAN 255.255.255.0 access-list no-nat extended permit ip ASA_LAN 255.255.248.0 Genimage_Anyconnect 255.255.255.0 access-list no-nat extended permit ip host Tunnel-DC host HQ-SDSL-Peer access-list no-nat extended permit ip HQ-LAN 255.255.255.0 North-Office-LAN 255.255.255.0 access-list no-nat extended permit ip North-Office-LAN 255.255.255.0 HQ-LAN 255.255.255.0 access-list Car remark Car access-list Car extended permit ip ASA_LAN 255.255.248.0 192.168.165.0 255.255.255.0 access-list Che remark Che access-list Che extended permit ip ASA_LAN 255.255.248.0 192.168.144.0 255.255.255.0 access-list Chi remark Chi access-list Chi extended permit ip ASA_LAN 255.255.248.0 192.168.140.0 255.255.255.0 access-list Cla remark Cla access-list Cla 
extended permit ip ASA_LAN 255.255.248.0 192.168.152.0 255.255.255.0 access-list Eas remark Eas access-list Eas extended permit ip ASA_LAN 255.255.248.0 192.168.149.0 255.255.255.0 access-list Ess remark Ess access-list Ess extended permit ip ASA_LAN 255.255.248.0 192.168.153.0 255.255.255.0 access-list Gat remark Gat access-list Gat extended permit ip ASA_LAN 255.255.248.0 192.168.163.0 255.255.255.0 access-list Hud remark Hud access-list Hud extended permit ip ASA_LAN 255.255.248.0 192.168.157.0 255.255.255.0 access-list Ilk remark Ilk access-list Ilk extended permit ip ASA_LAN 255.255.248.0 192.168.167.0 255.255.255.0 access-list Ken remark Ken access-list Ken extended permit ip ASA_LAN 255.255.248.0 192.168.156.0 255.255.255.0 access-list North-Office remark North-Office access-list North-Office extended permit ip ASA_LAN 255.255.248.0 North-Office-LAN 255.255.255.0 access-list inside_acl remark Inside_ad access-list inside_acl extended permit ip any any access-list Old_HQ remark Old_HQ access-list Old_HQ extended permit ip ASA_LAN 255.255.248.0 HQ-LAN 255.255.255.0 access-list Old_HQ extended permit ip HQ-LAN 255.255.255.0 192.168.99.0 255.255.255.0 access-list She remark She access-list She extended permit ip ASA_LAN 255.255.248.0 192.168.150.0 255.255.255.0 access-list Lit remark Lit access-list Lit extended permit ip ASA_LAN 255.255.248.0 192.168.143.0 255.255.255.0 access-list Mid remark Mid access-list Mid extended permit ip ASA_LAN 255.255.248.0 192.168.137.0 255.255.255.0 access-list Spi remark Spi access-list Spi extended permit ip ASA_LAN 255.255.248.0 192.168.162.0 255.255.255.0 access-list Tor remark Tor access-list Tor extended permit ip ASA_LAN 255.255.248.0 192.168.166.0 255.255.255.0 access-list Tra remark Tra access-list Tra extended permit ip ASA_LAN 255.255.248.0 192.168.168.0 255.255.255.0 access-list Tru remark Tru access-list Tru extended permit ip ASA_LAN 255.255.248.0 192.168.174.0 255.255.255.0 access-list Yo remark Yo access-list Yo extended permit ip ASA_LAN 255.255.248.0 192.168.127.0 255.255.255.0 access-list Nor remark Nor access-list Nor extended permit ip ASA_LAN 255.255.248.0 192.168.159.0 255.255.255.0 access-list Nor extended permit ip ASA_LAN 255.255.248.0 192.168.173.0 255.255.255.0 inactive access-list ST remark ST access-list ST extended permit ip ASA_LAN 255.255.248.0 192.168.175.0 255.255.255.0 access-list Le remark Le access-list Le extended permit ip ASA_LAN 255.255.248.0 192.168.161.0 255.255.255.0 access-list DMZ-ACL remark DMZ access-list DMZ-ACL extended permit ip host OpenVPN-Srvr any access-list no-nat-dmz remark DMZ -No Nat access-list no-nat-dmz extended permit ip 192.168.250.0 255.255.255.0 HQ-LAN 255.255.255.0 access-list Split_Tunnel_List remark ASA-LAN access-list Split_Tunnel_List standard permit ASA_LAN 255.255.248.0 access-list Split_Tunnel_List standard permit Genimage_Anyconnect 255.255.255.0 access-list outside_cryptomap_30 remark Po access-list outside_cryptomap_30 extended permit ip ASA_LAN 255.255.248.0 Po 255.255.255.0 access-list outside_cryptomap_24 extended permit ip ASA_LAN 255.255.248.0 192.168.100.0 255.255.255.0 access-list outside_cryptomap_16 extended permit ip ASA_LAN 255.255.248.0 192.168.99.0 255.255.255.0 access-list outside_cryptomap_34 extended permit ip ASA_LAN 255.255.248.0 10.10.10.0 255.255.255.0 access-list outside_31_cryptomap extended permit ip host 192.168.240.34 Cisco-admin-LAN 255.255.255.0 access-list outside_32_cryptomap extended permit ip host Tunnel-DC host HQ-SDSL-Peer access-list 
Genimage_VPN_Any_connect_pix_client remark Genimage "Any Connect" VPN access-list Genimage_VPN_Any_connect_pix_client standard permit Genimage_Anyconnect 255.255.255.0 access-list Split-Tunnel-ACL standard permit ASA_LAN 255.255.248.0 access-list nonat extended permit ip HQ-LAN 255.255.255.0 192.168.99.0 255.255.255.0 pager lines 24 logging enable logging timestamp logging console notifications logging monitor notifications logging buffered warnings logging asdm informational no logging message 106015 no logging message 313001 no logging message 313008 no logging message 106023 no logging message 710003 no logging message 106100 no logging message 302015 no logging message 302014 no logging message 302013 no logging message 302018 no logging message 302017 no logging message 302016 no logging message 302021 no logging message 302020 flow-export destination inside MS-ISA-Server 2055 flow-export destination outside 192.168.130.126 2055 flow-export template timeout-rate 1 flow-export delay flow-create 15 mtu outside 1500 mtu inside 1500 mtu DMZ 1500 mtu management 1500 ip local pool RAS-VPN 10.0.0.1.1-10.0.0.1.254 mask 255.255.255.255 icmp unreachable rate-limit 1 burst-size 1 icmp permit any unreachable outside icmp permit any echo outside icmp permit any echo-reply outside icmp permit any outside icmp permit any echo inside icmp permit any echo-reply inside icmp permit any echo DMZ icmp permit any echo-reply DMZ asdm image disk0:/asdm-621.bin no asdm history enable arp timeout 14400 nat-control global (outside) 1 interface global (inside) 1 interface nat (inside) 0 access-list no-nat nat (inside) 1 0.0.0.0 0.0.0.0 nat (DMZ) 0 access-list no-nat-dmz static (inside,outside) MS-ISA-Server-NAT MS-ISA-Server netmask 255.255.255.255 static (DMZ,outside) OpenVPN-Srvr-NAT OpenVPN-Srvr netmask 255.255.255.255 static (inside,outside) MS-Exchange_server-NAT MS-Exchange_server netmask 255.255.255.255 access-group outside_acl in interface outside access-group inside_acl in interface inside access-group DMZ-ACL in interface DMZ route outside 0.0.0.0 0.0.0.0 1.1.1.225 1 route inside 10.10.10.0 255.255.255.0 192.168.240.34 1 route outside Genimage_Anyconnect 255.255.255.0 1.1.1.225 1 route inside Open-VPN 255.255.248.0 OpenVPN-Srvr 1 route inside HQledon-Voice-LAN 255.255.255.0 192.168.240.34 1 route outside Bill 255.255.255.0 1.1.1.225 1 route outside Yo 255.255.255.0 1.1.1.225 1 route inside 192.168.129.0 255.255.255.0 192.168.240.34 1 route outside HQ-LAN 255.255.255.0 1.1.1.225 1 route outside Mid 255.255.255.0 1.1.1.225 1 route outside 192.168.140.0 255.255.255.0 1.1.1.225 1 route outside 192.168.143.0 255.255.255.0 1.1.1.225 1 route outside 192.168.144.0 255.255.255.0 1.1.1.225 1 route outside 192.168.149.0 255.255.255.0 1.1.1.225 1 route outside 192.168.152.0 255.255.255.0 1.1.1.225 1 route outside 192.168.153.0 255.255.255.0 1.1.1.225 1 route outside North-Office-LAN 255.255.255.0 1.1.1.225 1 route outside 192.168.156.0 255.255.255.0 1.1.1.225 1 route outside 192.168.157.0 255.255.255.0 1.1.1.225 1 route outside 192.168.159.0 255.255.255.0 1.1.1.225 1 route outside 192.168.160.0 255.255.255.0 1.1.1.225 1 route outside 192.168.161.0 255.255.255.0 1.1.1.225 1 route outside 192.168.162.0 255.255.255.0 1.1.1.225 1 route outside 192.168.163.0 255.255.255.0 1.1.1.225 1 route outside 192.168.165.0 255.255.255.0 1.1.1.225 1 route outside 192.168.166.0 255.255.255.0 1.1.1.225 1 route outside 192.168.167.0 255.255.255.0 1.1.1.225 1 route outside 192.168.168.0 255.255.255.0 1.1.1.225 1 route outside 
192.168.173.0 255.255.255.0 1.1.1.225 1 route outside 192.168.174.0 255.255.255.0 1.1.1.225 1 route outside 192.168.175.0 255.255.255.0 1.1.1.225 1 route outside 192.168.99.0 255.255.255.0 1.1.1.225 1 route inside ASA_LAN 255.255.255.0 192.168.240.34 1 route inside 192.168.124.0 255.255.255.0 192.168.240.34 1 route inside 192.168.50.0 255.255.255.0 192.168.240.34 1 route inside 192.168.51.0 255.255.255.128 192.168.240.34 1 route inside 192.168.240.0 255.255.255.224 192.168.240.34 1 route inside 192.168.240.164 255.255.255.224 192.168.240.34 1 route inside 192.168.240.196 255.255.255.224 192.168.240.34 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa-server vpn protocol radius max-failed-attempts 5 aaa-server vpn (inside) host 192.168.X.2 timeout 60 key a5a53r3t authentication-port 1812 radius-common-pw a5a53r3t aaa authentication ssh console LOCAL aaa authentication http console LOCAL http server enable http 0.0.0.0 0.0.0.0 inside http 1.1.1.2 255.255.255.255 outside http 1.1.1.234 255.255.255.255 outside http 0.0.0.0 0.0.0.0 management http 1.1.100.198 255.255.255.255 outside http 0.0.0.0 0.0.0.0 outside crypto map FW_Outside_map 1 match address Bill crypto map FW_Outside_map 1 set peer x.x.x.121 crypto map FW_Outside_map 1 set transform-set SECURE crypto map FW_Outside_map 2 match address Bo crypto map FW_Outside_map 2 set peer x.x.x.202 crypto map FW_Outside_map 2 set transform-set SECURE crypto map FW_Outside_map 3 match address ASP-Live crypto map FW_Outside_map 3 set peer x.x.x.113 crypto map FW_Outside_map 3 set transform-set SECURE crypto map FW_Outside_map 4 match address Car crypto map FW_Outside_map 4 set peer x.x.x.205 crypto map FW_Outside_map 4 set transform-set SECURE crypto map FW_Outside_map 5 match address Old_HQ crypto map FW_Outside_map 5 set peer x.x.x.2 crypto map FW_Outside_map 5 set transform-set SECURE WG crypto map FW_Outside_map 6 match address Che crypto map FW_Outside_map 6 set peer x.x.x.204 crypto map FW_Outside_map 6 set transform-set SECURE crypto map FW_Outside_map 7 match address Chi crypto map FW_Outside_map 7 set peer x.x.x.212 crypto map FW_Outside_map 7 set transform-set SECURE crypto map FW_Outside_map 8 match address Cla crypto map FW_Outside_map 8 set peer x.x.x.215 crypto map FW_Outside_map 8 set transform-set SECURE crypto map FW_Outside_map 9 match address Eas crypto map FW_Outside_map 9 set peer x.x.x.247 crypto map FW_Outside_map 9 set transform-set SECURE crypto map FW_Outside_map 10 match address Ess crypto map FW_Outside_map 10 set peer x.x.x.170 crypto map FW_Outside_map 10 set transform-set SECURE crypto map FW_Outside_map 11 match address Hud crypto map FW_Outside_map 11 set peer x.x.x.8 crypto map FW_Outside_map 11 set transform-set SECURE crypto map FW_Outside_map 12 match address Gat crypto map FW_Outside_map 12 set peer x.x.x.212 crypto map FW_Outside_map 12 set transform-set SECURE crypto map FW_Outside_map 13 match address Ken crypto map FW_Outside_map 13 set peer x.x.x.230 crypto map FW_Outside_map 13 set transform-set SECURE crypto map FW_Outside_map 14 match address She crypto map FW_Outside_map 14 set peer x.x.x.24 crypto map FW_Outside_map 14 set transform-set SECURE crypto map 
FW_Outside_map 15 match address North-Office crypto map FW_Outside_map 15 set peer x.x.x.94 crypto map FW_Outside_map 15 set transform-set SECURE crypto map FW_Outside_map 16 match address outside_cryptomap_16 crypto map FW_Outside_map 16 set peer x.x.x.134 crypto map FW_Outside_map 16 set transform-set SECURE crypto map FW_Outside_map 16 set security-association lifetime seconds crypto map FW_Outside_map 17 match address Lit crypto map FW_Outside_map 17 set peer x.x.x.110 crypto map FW_Outside_map 17 set transform-set SECURE crypto map FW_Outside_map 18 match address Mid crypto map FW_Outside_map 18 set peer 78.x.x.110 crypto map FW_Outside_map 18 set transform-set SECURE crypto map FW_Outside_map 19 match address Sp crypto map FW_Outside_map 19 set peer x.x.x.47 crypto map FW_Outside_map 19 set transform-set SECURE crypto map FW_Outside_map 20 match address Tor crypto map FW_Outside_map 20 set peer x.x.x.184 crypto map FW_Outside_map 20 set transform-set SECURE crypto map FW_Outside_map 21 match address Tr crypto map FW_Outside_map 21 set peer x.x.x.75 crypto map FW_Outside_map 21 set transform-set SECURE crypto map FW_Outside_map 22 match address Yo crypto map FW_Outside_map 22 set peer x.x.x.40 crypto map FW_Outside_map 22 set transform-set SECURE crypto map FW_Outside_map 23 match address Tra crypto map FW_Outside_map 23 set peer x.x.x.145 crypto map FW_Outside_map 23 set transform-set SECURE crypto map FW_Outside_map 24 match address outside_cryptomap_24 crypto map FW_Outside_map 24 set peer x.x.x.46 crypto map FW_Outside_map 24 set transform-set SECURE crypto map FW_Outside_map 24 set security-association lifetime seconds crypto map FW_Outside_map 25 match address Nor crypto map FW_Outside_map 25 set peer x.x.x.70 crypto map FW_Outside_map 25 set transform-set SECURE crypto map FW_Outside_map 26 match address Ilk crypto map FW_Outside_map 26 set peer x.x.x.65 crypto map FW_Outside_map 26 set transform-set SECURE crypto map FW_Outside_map 27 match address Nor crypto map FW_Outside_map 27 set peer x.x.x.240 crypto map FW_Outside_map 27 set transform-set SECURE crypto map FW_Outside_map 28 match address ST crypto map FW_Outside_map 28 set peer x.x.x.163 crypto map FW_Outside_map 28 set transform-set SECURE crypto map FW_Outside_map 28 set security-association lifetime seconds crypto map FW_Outside_map 28 set security-association lifetime kilobytes crypto map FW_Outside_map 29 match address Lei crypto map FW_Outside_map 29 set peer x.x.x.4 crypto map FW_Outside_map 29 set transform-set SECURE crypto map FW_Outside_map 30 match address outside_cryptomap_30 crypto map FW_Outside_map 30 set peer x.x.x.34 crypto map FW_Outside_map 30 set transform-set SECURE crypto map FW_Outside_map 31 match address outside_31_cryptomap crypto map FW_Outside_map 31 set pfs crypto map FW_Outside_map 31 set peer Cisco-admin-Peer crypto map FW_Outside_map 31 set transform-set ESP-AES-256-SHA crypto map FW_Outside_map 32 match address outside_32_cryptomap crypto map FW_Outside_map 32 set pfs crypto map FW_Outside_map 32 set peer HQ-SDSL-Peer crypto map FW_Outside_map 32 set transform-set ESP-AES-256-SHA crypto map FW_Outside_map 34 match address outside_cryptomap_34 crypto map FW_Outside_map 34 set peer x.x.x.246 crypto map FW_Outside_map 34 set transform-set ESP-AES-128-SHA ESP-AES-192-SHA ESP-AES-256-SHA crypto map FW_Outside_map 65535 ipsec-isakmp dynamic dynmap crypto map FW_Outside_map interface outside crypto map FW_outside_map 31 set peer x.x.x.45 crypto isakmp identity address crypto isakmp enable 
outside crypto isakmp policy 9 webvpn enable outside svc enable group-policy ASA-LAN-VPN internal group-policy ASA_LAN-VPN attributes wins-server value 192.168.x.1 192.168.x.2 dns-server value 192.168.x.1 192.168.x.2 vpn-tunnel-protocol IPSec svc split-tunnel-policy tunnelspecified split-tunnel-network-list value Split-Tunnel-ACL default-domain value MYdomain username xxxxxxxxxx password privilege 15 tunnel-group DefaultRAGroup ipsec-attributes isakmp keepalive threshold 30 retry 2 tunnel-group DefaultWEBVPNGroup ipsec-attributes isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.121 type ipsec-l2l tunnel-group x.x.x..121 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.202 type ipsec-l2l tunnel-group x.x.x.202 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.113 type ipsec-l2l tunnel-group x.x.x.113 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.205 type ipsec-l2l tunnel-group x.x.x.205 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.204 type ipsec-l2l tunnel-group x.x.x.204 ipsec-attributes pre-shared-key * isakmp keepalive threshold 30 retry 2 tunnel-group x.x.x.212 type ipsec-l2l tunnel-group x.x.x.212 ipsec-attributes pre-shared-key * tunnel-group x.x.x.215 type ipsec-l2l tunnel-group x.x.x.215 ipsec-attributes pre-shared-key * tunnel-group x.x.x.247 type ipsec-l2l tunnel-group x.x.x.247 ipsec-attributes pre-shared-key * tunnel-group x.x.x.170 type ipsec-l2l tunnel-group x.x.x.170 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x..8 type ipsec-l2l tunnel-group x.x.x.8 ipsec-attributes pre-shared-key * tunnel-group x.x.x.212 type ipsec-l2l tunnel-group x.x.x.212 ipsec-attributes pre-shared-key * tunnel-group x.x.x.230 type ipsec-l2l tunnel-group x.x.x.230 ipsec-attributes pre-shared-key * tunnel-group x.x.x.24 type ipsec-l2l tunnel-group x.x.x.24 ipsec-attributes pre-shared-key * tunnel-group x.x.x.46 type ipsec-l2l tunnel-group x.x.x.46 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.4 type ipsec-l2l tunnel-group x.x.x.4 ipsec-attributes pre-shared-key * tunnel-group x.x.x.110 type ipsec-l2l tunnel-group x.x.x.110 ipsec-attributes pre-shared-key * tunnel-group 78.x.x.110 type ipsec-l2l tunnel-group 78.x.x.110 ipsec-attributes pre-shared-key * tunnel-group x.x.x.47 type ipsec-l2l tunnel-group x.x.x.47 ipsec-attributes pre-shared-key * tunnel-group x.x.x.34 type ipsec-l2l tunnel-group x.x.x.34 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x..129 type ipsec-l2l tunnel-group x.x.x.129 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.94 type ipsec-l2l tunnel-group x.x.x.94 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.40 type ipsec-l2l tunnel-group x.x.x.40 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.65 type ipsec-l2l tunnel-group x.x.x.65 ipsec-attributes pre-shared-key * tunnel-group x.x.x.70 type ipsec-l2l tunnel-group x.x.x.70 ipsec-attributes pre-shared-key * tunnel-group x.x.x.134 type ipsec-l2l tunnel-group x.x.x.134 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.163 type ipsec-l2l tunnel-group x.x.x.163 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.2 type ipsec-l2l tunnel-group x.x.x.2 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group 
ASA-LAN-VPN type remote-access tunnel-group ASA-LAN-VPN general-attributes address-pool RAS-VPN authentication-server-group vpn authentication-server-group (outside) vpn default-group-policy ASA-LAN-VPN tunnel-group ASA-LAN-VPN ipsec-attributes pre-shared-key * tunnel-group x.x.x.184 type ipsec-l2l tunnel-group x.x.x.184 ipsec-attributes pre-shared-key * tunnel-group x.x.x.145 type ipsec-l2l tunnel-group x.x.x.145 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.75 type ipsec-l2l tunnel-group x.x.x.75 ipsec-attributes pre-shared-key * tunnel-group x.x.x.246 type ipsec-l2l tunnel-group x.x.x.246 ipsec-attributes pre-shared-key * isakmp keepalive disable tunnel-group x.x.x.2 type ipsec-l2l tunnel-group x.x.x..2 ipsec-attributes pre-shared-key * tunnel-group x.x.x.98 type ipsec-l2l tunnel-group x.x.x.98 ipsec-attributes pre-shared-key * ! ! ! policy-map global_policy description Netflow class class-default flow-export event-type all destination MS-ISA-Server policy-map type inspect dns migrated_dns_map_1 parameters message-length maximum 512 Anyone have a clue because Im on the verge of going postal.....

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >