Search Results

Search found 16568 results on 663 pages for 'warning level'.


  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
Background

Designing stored procedures that are safe for multiple subscribers to call simultaneously can be challenging. For example, suppose you want multiple worker processes to poll a shared work queue that is encapsulated as a SQL table. This is a common scenario, and through experience you'll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table.

There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST allow you to SELECT from a table without placing a lock on that table. However, a SELECT with the READPAST hint ignores any records that are locked because they are being updated or inserted (or are otherwise "dirty"), whereas a SELECT with NOLOCK ignores all locks, including dirty reads.

For the initial update of the flag (the one that marks a record as available for subscription) I don't use the NOLOCK hint, because I want to be sensitive to the "active" records in the table and exclude them. Instead I use an update lock (UPDLOCK) in conjunction with a WHERE clause whose sub-select uses a READPAST hint, so that I explicitly lock the records I'm updating (UPDLOCK) without placing a lock on the table while selecting the records to update (READPAST).

UPDATEs should be allowed to lock the affected rows, because we're typically changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use UPDLOCK to guard against lock escalation. A SELECT that checks for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (the SQL table), and it is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process already holds a shared read lock. Without the UPDLOCK hint the result would be a deadlock; with the UPDLOCK hint this condition is avoided.

Note that the READPAST table hint requires the transaction ISOLATION LEVEL to be READ COMMITTED (or REPEATABLE READ); if the calling context defaults to SERIALIZABLE (as distributed transactions often do), set it to READ COMMITTED explicitly.

Guidance

In the stored procedure that returns records to the multiple subscribers:

1. Perform the UPDATE first: change the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that "got stuck" in an intermediate state, or for other auditing purposes.
2. In the UPDATE statement, use the (UPDLOCK) table hint to prevent lock escalation.
3. In the UPDATE statement, also use a WHERE clause whose sub-select uses a (READPAST) table hint to select the records that you're going to update.
4. In the UPDATE statement, use the OUTPUT clause in conjunction with a temporary table (or table variable) to isolate the record(s) you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and get their identifiers within the same operation.
5. Finally, do a set-based SELECT on the main table (using the temporary table to identify the records in the set) with either a READPAST or NOLOCK table hint.
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data you want to return to the subscribers; use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario. See the following as an example:

CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
AS
BEGIN
    -- OVERRIDE THE DEFAULT ISOLATION LEVEL
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    -- SET NOCOUNT ON
    SET NOCOUNT ON
    -- DECLARE TEMP TABLE
    -- Note that this example uses CustomerId as an identifier;
    -- you could just use the Identity column Id if that's all you need.
    DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) )
    -- PERFORM UPDATE FIRST
    -- [Customers] is the name of the table
    -- [Id] is the Identity Column on the table
    -- [CustomerId] is the business document key used to identify the
    --   record globally, i.e. in other systems or across SQL tables
    -- [Status] is an INT or BIT field (BIT if the status is a binary state)
    -- [LastUpdated] is a datetime field used to record the time of the
    --   last update
    UPDATE [Customers] WITH (UPDLOCK)
    SET [Status] = 1, [LastUpdated] = GETDATE()
    OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
    WHERE ([Id] IN (SELECT TOP 100 [Id]
                    FROM [Customers] WITH (READPAST)
                    WHERE ([Status] = 0)
                    ORDER BY [Id] ASC))
    -- PERFORM SELECT FROM ENTITY TABLE
    SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName],
           [C].[Address1], [C].[Address2], [C].[City], [C].[State],
           [C].[Zip], [C].[ShippingMethod], [C].[Id]
    FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP]
    WHERE ([C].[CustomerId] = [TEMP].[CustomerId])
END

In a system that has been designed with multiple status values for records in the work queue, it is necessary to have a "watchdog" process that detects "stale" records stuck in intermediate states (such as "In Progress"), e.g. a [Status] of 0 = New or Unprocessed, 1 = In Progress, 2 = Processed, and so on. Thus, if you have a business rule that states that the application should only process new records once all of the old records have been processed successfully (or marked as errors), then it will be necessary to build a monitoring process to detect stalled or stale records in the work queue; hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled or stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above.
The example below looks for records that have been in the "In Progress" state ([Status] = 1) for greater than 60 seconds:

CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
AS
BEGIN
    -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    -- SET NOCOUNT ON
    SET NOCOUNT ON
    DECLARE @MaxWait int;
    SET @MaxWait = 60
    IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST)
               WHERE ([Status] = 1)
               AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait))
    BEGIN
        SELECT 1 AS [IsWatchDogError]
    END
    ELSE
    BEGIN
        SELECT 0 AS [IsWatchDogError]
    END
END

Downloads

The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records. I am very grateful to Red-Gate software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.

References

http://msdn.microsoft.com/en-us/library/ms187373.aspx
http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
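The original article stops at detecting stale records. As a complement, the procedure below is a minimal sketch (not part of the article) that reuses the same UPDLOCK/READPAST semantics to reset stuck "In Progress" records back to "New" so subscribers can pick them up again; it assumes the same [Customers] table, [Status] values and [LastUpdated] column used above.

CREATE PROCEDURE [dbo].[usp_NewCustomersResetStale]
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    SET NOCOUNT ON
    DECLARE @MaxWait int;
    SET @MaxWait = 60
    -- Reset stale "In Progress" records (Status = 1) back to "New" (Status = 0),
    -- locking only the rows being reset and skipping rows locked by active workers.
    UPDATE [Customers] WITH (UPDLOCK)
    SET [Status] = 0, [LastUpdated] = GETDATE()
    WHERE ([Id] IN (SELECT [Id]
                    FROM [Customers] WITH (READPAST)
                    WHERE ([Status] = 1)
                    AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)))
END

Whether resetting is appropriate depends on the business rule; some systems would instead flag such records as errors for manual review.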


  • Projected Results

    - by Sylvie MacKenzie, PMP
Excerpt from PROFIT - ORACLE - by Monica Mehta

Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has observed a transformation not only in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. "15 years ago project management was the domain of the project management office (PMO)," Mahmud recalls of earlier days. "But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what's kind of going on in the organization—at a level of detail that you're going to consume that information."

Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

Profit: What do you feel are the market dynamics that are changing project management today?

Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies to project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive.

Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

Profit: What is the role of project management in this new landscape?

Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap.

Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, "Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?" This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done.

Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks—even for a few hours—that don't line up with corporate goals will, in many ways, become a competitive differentiator.

Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise, and also making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective.

Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so—you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces, such as native smartphone applications for the iOS and Android platforms.

Profit: How does this translate into the development of Oracle's Primavera solutions?

Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.


  • URL Rewrite – Protocol (http/https) in the Action

    - by OWScott
IIS URL Rewrite supports server variables for pretty much every part of the URL and HTTP header. However, there is one commonly used server variable that isn't readily available: the protocol, HTTP or HTTPS. You can easily check whether a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule. There isn't a variable available to dynamically set the protocol in the action part of the rule. What I wish for is a variable like {HTTP_PROTOCOL} with a value of 'HTTP' or 'HTTPS'. There is a server variable called {HTTPS}, but its values of 'on' and 'off' aren't practical in the action. You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren't useful in the action.

Let me illustrate. The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.

<rule name="Redirect to www">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
  </conditions>
  <action type="Redirect" url="http://www.localtest.me/{R:1}" />
</rule>

The problem is that it forces the request to HTTP even if the original request was for HTTPS.

Interestingly enough, I planned to blog about this topic this week when I noticed in my Twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic. He beat me to the punch by just a couple of days. However, I figured I would still write my blog post on this topic. While his solution is an excellent one, I personally handle this another way most of the time. Plus, it's a commonly asked question that isn't documented well enough on the web yet, so having another article on the web won't hurt.

I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four. Don't let the choices overwhelm you though. Let's keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation. All four options will work for most situations.

Option 1 – CACHE_URL, single rule

There is a server variable that has the protocol in it: {CACHE_URL}. This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5). All we need to do is extract the HTTP or HTTPS and we'll be set. This tends to be my preferred way to handle this situation. Indeed, Jeff did briefly mention this in his blog post:

… you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical "or" match for conditions.

Thus the problem. If you have multiple conditions set to "Match Any" rather than "Match All" then this option won't work. However, I find that 95% of all rules that I write use "Match All", and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule. The caveat is that if you use "Match Any" then you must consider one of the next two options.

Enough with the preamble. Here's how it works. Add a condition that checks {CACHE_URL} with a pattern of "^(.+)://". Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS.
In URL Rewrite 2.0 or greater you can check the "Track capture groups across conditions" box, make that condition the first condition, and you have yourself a back-reference of {C:1}. The "Redirect to www" example, with support for maintaining the protocol, becomes:

<rule name="Redirect to www" stopProcessing="true">
  <match url="(.*)" />
  <conditions trackAllCaptures="true">
    <add input="{CACHE_URL}" pattern="^(.+)://" />
    <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
  </conditions>
  <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
</rule>

It's not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it's pretty close. I also like this option since I often create rule examples for other people, and this type of rule is portable because it's self-contained within a single rule.

Option 2 – Using a Rewrite Map

For a safer rule that works for both "Match Any" and "Match All" situations, you can use the Rewrite Map solution that Jeff proposed. It's a perfectly good solution, with the only drawback being the ever-so-slight extra effort to set it up, since you need to create a rewrite map before you create the rule. In other words, if you choose to use this as your sole method of handling the protocol, you'll be safe.

After you create a Rewrite Map called MapProtocol, you can use "{MapProtocol:{HTTPS}}" for the protocol within any rule action. Following is an example using a Rewrite Map.

<rewrite>
  <rules>
    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions trackAllCaptures="false">
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
    </rule>
  </rules>
  <rewriteMaps>
    <rewriteMap name="MapProtocol">
      <add key="on" value="https" />
      <add key="off" value="http" />
    </rewriteMap>
  </rewriteMaps>
</rewrite>

Option 3 – CACHE_URL, Multi-rule

If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}. The potential issue with this option is that if you don't have access to the server level (e.g. in a shared environment) then you cannot set server variables without permission.

First, create a rule and place it at the top of the set of rules. You can create this at the server, site or subfolder level. However, if you create it at the site or subfolder level then the HTTP_PROTOCOL server variable needs to be approved at the server level. This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking on "View Server Variables" from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level then this step is not necessary. Following is an example of the first rule, which creates HTTP_PROTOCOL, and then a rule that uses it. The "Create HTTP_PROTOCOL" rule only needs to be created once on the server.
<rule name="Create HTTP_PROTOCOL">
  <match url=".*" />
  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
    <add input="{CACHE_URL}" pattern="^(.+)://" />
  </conditions>
  <serverVariables>
    <set name="HTTP_PROTOCOL" value="{C:1}" />
  </serverVariables>
  <action type="None" />
</rule>

<rule name="Redirect to www" stopProcessing="true">
  <match url="(.*)" />
  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
    <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
  </conditions>
  <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" />
</rule>

Option 4 – Multi-rule

Just to be complete, I'll include an example of how to achieve the same thing with multiple rules. I don't see any reason to use it over the previous examples, but I'll include it anyway. Note that it will only work with the "Match All" setting for the conditions.

<rule name="Redirect to www - http" stopProcessing="true">
  <match url="(.*)" />
  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
    <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
    <add input="{HTTPS}" pattern="off" />
  </conditions>
  <action type="Redirect" url="http://www.localtest.me/{R:1}" />
</rule>
<rule name="Redirect to www - https" stopProcessing="true">
  <match url="(.*)" />
  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
    <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
    <add input="{HTTPS}" pattern="on" />
  </conditions>
  <action type="Redirect" url="https://www.localtest.me/{R:1}" />
</rule>

Conclusion

Above are four working examples of methods to obtain the protocol (HTTP or HTTPS) in the action of a URL Rewrite rule. You can use whichever method you most prefer. I've listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice. In any case, hopefully you can use this as a reference for when you need to use the protocol in the rule's action when writing your URL Rewrite rules.

Further information:
Viewing all Server Variables for a site
URL Parts available to URL Rewrite Rules
Further URL Rewrite articles
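For reference, rules like the ones above live under system.webServer in web.config. The skeleton below is not from the article; it is a minimal sketch, assuming IIS 7 or later with the URL Rewrite module installed, showing where the rule and rewrite map elements sit:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- <rule> elements from the options above go here -->
      </rules>
      <rewriteMaps>
        <!-- <rewriteMap> elements (e.g. MapProtocol from Option 2) go here -->
      </rewriteMaps>
    </rewrite>
  </system.webServer>
</configuration>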


  • BAD DC transfering FSMO Roles to ADC

    - by Suleman
    I have a DC (FQDN:server.icmcpk.local) and an ADC (FQDN:file-server.icmcpk.local). Recently my DC is facing a bad sector problem so I changed the Operation Masters to file-server for all five roles. but when ever i turn off the OLD DC the file-server also stops wroking with AD and GPMC further i m also unable to join any other computer to this domain. For Test purpose i also added a new ADC (FQDN:wds-server.icmcpk.local) but no succes with the old DC off i had to turn the old DC on and then joined it. I m attaching the Dcdiags for all three servers. Kindly help me so that i b able to reinstall new HDD and it can go online again. --------------------------------------- Server --------------------------------------- C:\Program Files\Support Tools>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\SERVER Starting test: Connectivity ......................... SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\SERVER Starting test: Replications [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: DC=ForestDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=ForestDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: DC=DomainDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=DomainDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: CN=Schema,CN=Configuration,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. 
A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: CN=Schema,CN=Configuration,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. ......................... SERVER passed test Replications Starting test: NCSecDesc ......................... SERVER passed test NCSecDesc Starting test: NetLogons ......................... SERVER passed test NetLogons Starting test: Advertising ......................... SERVER passed test Advertising Starting test: KnowsOfRoleHolders ......................... SERVER passed test KnowsOfRoleHolders Starting test: RidManager ......................... SERVER passed test RidManager Starting test: MachineAccount ......................... SERVER passed test MachineAccount Starting test: Services ......................... SERVER passed test Services Starting test: ObjectsReplicated ......................... SERVER passed test ObjectsReplicated Starting test: frssysvol ......................... SERVER passed test frssysvol Starting test: frsevent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. ......................... SERVER failed test frsevent Starting test: kccevent ......................... SERVER passed test kccevent Starting test: systemlog An Error Event occured. EventID: 0x80001778 Time Generated: 05/04/2012 14:05:39 Event String: The previous system shutdown at 1:26:31 PM on An Error Event occured. EventID: 0x825A0011 Time Generated: 05/04/2012 14:07:45 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:13:40 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:25 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:25 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:38 (Event String could not be retrieved) An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:16:14 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Generate Activation Context failed for An Error Event occured. 
EventID: 0xC1010020 Time Generated: 05/04/2012 14:16:14 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Generate Activation Context failed for An Error Event occured. EventID: 0x825A0011 Time Generated: 05/04/2012 14:22:57 (Event String could not be retrieved) An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:22:59 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Generate Activation Context failed for An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:22:59 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Generate Activation Context failed for ......................... SERVER failed test systemlog Starting test: VerifyReferences ......................... SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Running partition tests on : DomainDnsZones Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : icmcpk Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Running enterprise tests on : icmcpk.local Starting test: Intersite ......................... icmcpk.local passed test Intersite Starting test: FsmoCheck ......................... icmcpk.local passed test FsmoCheck ---------------------- File-Server ---------------------- C:\Users\Administrator.ICMCPK>dcdiag Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = FILE-SERVER * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\FILE-SERVER Starting test: Connectivity ......................... FILE-SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\FILE-SERVER Starting test: Advertising Warning: DsGetDcName returned information for \\Server.icmcpk.local, when we were trying to reach FILE-SERVER. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. 
......................... FILE-SERVER failed test Advertising Starting test: FrsEvent ......................... FILE-SERVER passed test FrsEvent Starting test: DFSREvent ......................... FILE-SERVER passed test DFSREvent Starting test: SysVolCheck ......................... FILE-SERVER passed test SysVolCheck Starting test: KccEvent ......................... FILE-SERVER passed test KccEvent Starting test: KnowsOfRoleHolders ......................... FILE-SERVER passed test KnowsOfRoleHolders Starting test: MachineAccount ......................... FILE-SERVER passed test MachineAccount Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=icmcpk,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=icmcpk,DC=local ......................... FILE-SERVER failed test NCSecDesc Starting test: NetLogons Unable to connect to the NETLOGON share! (\\FILE-SERVER\netlogon) [FILE-SERVER] An net use or LsaPolicy operation failed with error 67, The network name cannot be found.. ......................... FILE-SERVER failed test NetLogons Starting test: ObjectsReplicated ......................... FILE-SERVER passed test ObjectsReplicated Starting test: Replications ......................... FILE-SERVER passed test Replications Starting test: RidManager ......................... FILE-SERVER passed test RidManager Starting test: Services ......................... FILE-SERVER passed test Services Starting test: SystemLog An Error Event occurred. EventID: 0x00000469 Time Generated: 05/04/2012 14:01:10 Event String: The processing of Group Policy failed because of lack of network con nectivity to a domain controller. This may be a transient condition. A success m essage would be generated once the machine gets connected to the domain controll er and Group Policy has succesfully processed. If you do not see a success messa ge for several hours, then contact your administrator. An Warning Event occurred. EventID: 0x8000A001 Time Generated: 05/04/2012 14:07:11 Event String: The Security System could not establish a secured connection with th e server ldap/icmcpk.local/[email protected]. No authentication protocol was available. An Warning Event occurred. EventID: 0x00000BBC Time Generated: 05/04/2012 14:30:34 Event String: Windows Defender Real-Time Protection agent has detected changes. Mi crosoft recommends you analyze the software that made these changes for potentia l risks. You can use information about how these programs operate to choose whet her to allow them to run or remove them from your computer. Allow changes only if you trust the program or the software publisher. Windows Defender can't undo changes that you allow. An Warning Event occurred. EventID: 0x00000BBC Time Generated: 05/04/2012 14:30:36 Event String: Windows Defender Real-Time Protection agent has detected changes. Mi crosoft recommends you analyze the software that made these changes for potentia l risks. You can use information about how these programs operate to choose whet her to allow them to run or remove them from your computer. Allow changes only if you trust the program or the software publisher. Windows Defender can't undo changes that you allow. ......................... FILE-SERVER failed test SystemLog Starting test: VerifyReferences ......................... 
FILE-SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : icmcpk Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Running enterprise tests on : icmcpk.local Starting test: LocatorCheck ......................... icmcpk.local passed test LocatorCheck Starting test: Intersite ......................... icmcpk.local passed test Intersite --------------------- WDS-Server --------------------- C:\Users\Administrator.ICMCPK>dcdiag Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = WDS-SERVER * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\WDS-SERVER Starting test: Connectivity ......................... WDS-SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\WDS-SERVER Starting test: Advertising Warning: DsGetDcName returned information for \\Server.icmcpk.local, when we were trying to reach WDS-SERVER. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... WDS-SERVER failed test Advertising Starting test: FrsEvent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. ......................... WDS-SERVER passed test FrsEvent Starting test: DFSREvent ......................... WDS-SERVER passed test DFSREvent Starting test: SysVolCheck ......................... WDS-SERVER passed test SysVolCheck Starting test: KccEvent ......................... WDS-SERVER passed test KccEvent Starting test: KnowsOfRoleHolders ......................... WDS-SERVER passed test KnowsOfRoleHolders Starting test: MachineAccount ......................... WDS-SERVER passed test MachineAccount Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=icmcpk,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=icmcpk,DC=local ......................... WDS-SERVER failed test NCSecDesc Starting test: NetLogons Unable to connect to the NETLOGON share! (\\WDS-SERVER\netlogon) [WDS-SERVER] An net use or LsaPolicy operation failed with error 67, The network name cannot be found.. ......................... 
WDS-SERVER failed test NetLogons Starting test: ObjectsReplicated ......................... WDS-SERVER passed test ObjectsReplicated Starting test: Replications ......................... WDS-SERVER passed test Replications Starting test: RidManager ......................... WDS-SERVER passed test RidManager Starting test: Services ......................... WDS-SERVER passed test Services Starting test: SystemLog An Error Event occurred. EventID: 0x0000041E Time Generated: 05/04/2012 14:02:55 Event String: The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name Sysytem (DNS) is configured and working correctly. An Error Event occurred. EventID: 0x0000041E Time Generated: 05/04/2012 14:08:33 Event String: The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name Sysytem (DNS) is configured and working correctly. ......................... WDS-SERVER failed test SystemLog Starting test: VerifyReferences ......................... WDS-SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : icmcpk Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Running enterprise tests on : icmcpk.local Starting test: LocatorCheck ......................... icmcpk.local passed test LocatorCheck Starting test: Intersite ......................... icmcpk.local passed test Intersite


  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
Selling Federal Enterprise Architecture

A taxonomy of subject areas from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available.

"Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment Control (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard-to-decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article.

Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and is visible via tools like PortfolioStat and ITDashboard.gov), but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle where bottom-up input can't be accurately aligned, analyzed and reported via a standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio").

Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said.

It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a.
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how is the EA program informed, accept submissions, collect data? Who are the experts? 3.3.2. Review – how is the EA validated, against what criteria?  Taxonomy Usage Example:   1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ....the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example -- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community. 
In short, if sold effectively, the EA will perform and be recognized. It will then be used not only for compliance but also, in line with a validated, stated purpose, to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • Mail not being sent when using phpmailer

    - by mithunmo
    I am using the following code to send mail to a smtp server . <?php // example on using PHPMailer with GMAIL include("PHPMailer/class.phpmailer.php"); include("PHPMailer/class.smtp.php"); // note, this is optional - gets called from main class if not already loaded $mail = new PHPMailer(); $mail->IsSMTP(); $mail->SMTPAuth = true; // enable SMTP authentication $mail->Host = "mucse491.eu.xxxxx.com"; // sets GMAIL as the SMTP server $mail->Port = 143; // set the SMTP port $mail->Username = "[email protected]"; // GMAIL username $mail->Password = "xxxx"; // GMAIL password $mail->From = "[email protected]"; $mail->FromName = "mithun"; $mail->Subject = "This is the subject"; $mail->AltBody = "This is the body when user views in plain text format"; //Text Body $mail->WordWrap = 50; // set word wrap $mail->AddAddress("[email protected]","First Last"); $mail->IsHTML(true); // send as HTML if(!$mail->Send()) { echo "Mailer Error: " . $mail->ErrorInfo; } else { echo "Message has been sent"; } ?> When I run it from the command line I get the following error PHP Deprecated: Function eregi() is deprecated in C:\wamp\www\phpmailer\class.p hpmailer.php on line 593 Deprecated: Function eregi() is deprecated in C:\wamp\www\phpmailer\class.phpmai ler.php on line 593 PHP Warning: fputs() expects parameter 1 to be resource, integer given in C:\wa mp\www\phpmailer\class.smtp.php on line 213 Warning: fputs() expects parameter 1 to be resource, integer given in C:\wamp\ww w\phpmailer\class.smtp.php on line 213 Mailer Error: SMTP Error: Could not connect to SMTP host. When i run from the browser I get the following error Deprecated: Function eregi() is deprecated in C:\wamp\www\phpmailer\class.phpmailer.php on line 593 Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server. in C:\wamp\www\phpmailer\class.smtp.php on line 122 Warning: fsockopen() [function.fsockopen]: unable to connect to mucse491.xx.xxxxx.com:143 (php_network_getaddresses: getaddrinfo failed: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server. ) in C:\wamp\www\phpmailer\class.smtp.php on line 122 Mailer Error: SMTP Error: Could not connect to SMTP host. Please someone guide me .
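    One thing to check: port 143 is the IMAP port, not an SMTP port, and the fsockopen error shows the host name is not resolving from the web server at all. Below is a minimal sketch, assuming the mail server accepts authenticated SMTP on port 587 and that the host name resolves (host, addresses and password are placeholders, not values from the question):

        <?php
        require("PHPMailer/class.phpmailer.php");

        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->Host     = "smtp.example.com"; // must resolve from the web server; try an IP address to rule out DNS
        $mail->Port     = 587;                // SMTP submission port (25 and 465 are the other common choices)
        $mail->SMTPAuth = true;
        $mail->Username = "user@example.com";
        $mail->Password = "secret";
        $mail->From     = "user@example.com";
        $mail->FromName = "mithun";
        $mail->AddAddress("recipient@example.com", "First Last");
        $mail->Subject  = "SMTP connectivity test";
        $mail->Body     = "If this arrives, host, port and credentials are correct.";

        if (!$mail->Send()) {
            echo "Mailer Error: " . $mail->ErrorInfo;
        } else {
            echo "Message has been sent";
        }
        ?>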

    Read the article

  • NUnit.Framework.Assert.IsInstanceOfType() is obsolete

    - by Malice
    I'm currently reading the book Professional Enterprise .NET and I've noticed this warning in some of the example programs: 'NUnit.Framework.Assert.IsInstanceOfType(System.Type, object)' is obsolete Now I may have already answered my own question but, to fix this warning is it simply a case of replacing Assert.IsInstanceOfType() with Assert.IsInstanceOf()? For example this: Assert.IsInstanceOfType(typeof(ClassName), variableName); would become: Assert.IsInstanceOf(typeof(ClassName), variableName);
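    Yes, that is the intended replacement; NUnit 2.5 also added a generic overload, so either of the following forms should compile without the obsolete warning (ClassName and variableName as in the snippet above):

        // Direct replacement for the obsolete call
        Assert.IsInstanceOf(typeof(ClassName), variableName);

        // Generic form introduced alongside it
        Assert.IsInstanceOf<ClassName>(variableName);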

    Read the article

  • Silverlight Faults

    - by Bram
    I'm trying to get WCF Silverlight faults working as per this MSDN article. After adding the SL fault behavior to my Web.config file I get the following warning: The element 'behavior' has invalid child element 'silverlightFaults'. List of possible elements expected: 'serviceAuthorization, serviceCredentials, serviceMetadata, serviceSecurityAudit, serviceThrottling, dataContractSerializer, serviceDebug, serviceTimeouts, persistenceProvider, workflowRuntime'. Ignoring the warning doesn't work and my Silverlight application cannot add the WCF service. Any ideas?
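    For what it's worth, the Visual Studio schema validator flags any custom behavior extension this way, so the warning itself is usually noise; what tends to actually break the service is a missing or mismatched behaviorExtensions registration, or placing the element under a service behavior instead of an endpoint behavior. A sketch of the registration, assuming the behavior class from the MSDN article is named SilverlightFaultBehavior (the namespace and assembly name below are placeholders to adjust):

        <system.serviceModel>
          <extensions>
            <behaviorExtensions>
              <!-- the type attribute must match the assembly-qualified name exactly -->
              <add name="silverlightFaults"
                   type="MyServices.SilverlightFaultBehavior, MyServices, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            </behaviorExtensions>
          </extensions>
          <behaviors>
            <endpointBehaviors>
              <behavior name="SilverlightFaultBehavior">
                <silverlightFaults />
              </behavior>
            </endpointBehaviors>
          </behaviors>
        </system.serviceModel>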

    Read the article

  • How do you get Matlab to write the BOM (byte order markers) for UTF-16 text files?

    - by Richard Povinelli
    I am creating UTF16 text files with Matlab, which I am later reading in using Java. In Matlab, I open a file called fileName and write to it as follows: fid = fopen(fileName, 'w','n','UTF16-LE'); fprintf(fid,"Some stuff."); In Java, I can read the text file using the following code: FileInputStream fileInputStream = new FileInputStream(fileName); Scanner scanner = new Scanner(fileInputStream, "UTF-16LE"); String s = scanner.nextLine(); Here is the hex output: Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 00000000 73 00 6F 00 6D 00 65 00 20 00 73 00 74 00 75 00 66 00 66 00 s.o.m.e. .s.t.u.f.f. The above approach works fine. But, I want to be able to write out the file using UTF16 with a BOM to give me more flexibility so that I don't have to worry about big or little endian. In Matlab, I've coded: fid = fopen(fileName, 'w','n','UTF16'); fprintf(fid,"Some stuff."); In Java, I change the code to: FileInputStream fileInputStream = new FileInputStream(fileName); Scanner scanner = new Scanner(fileInputStream, "UTF-16"); String s = scanner.nextLine(); In this case, the string s is garbled, because Matlab is not writing the BOM. I can get the Java code to work just fine if I add the BOM manually. With the added BOM, the following file works fine. Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 00000000 FF FE 73 00 6F 00 6D 00 65 00 20 00 73 00 74 00 75 00 66 00 66 00 ÿþs.o.m.e. .s.t.u.f.f. How can I get Matlab to write out the BOM? I know I could write the BOM out separately, but I'd rather have Matlab do it automatically. Addendum I selected the answer below from Amro because it exactly solves the question I posed. One key discovery for me was the difference between the Unicode Standard and a UTF (Unicode transformation format) (see http://unicode.org/faq/utf_bom.html). The Unicode Standard provides unique identifiers (code points) for characters. UTFs provide mappings of every code point "to a unique byte sequence." Since all but a handful of the characters I am using are in the first 128 code points, I'm going to switch to using UTF-8 as Romeo suggests. UTF-8 is supported by Matlab (The warning shown below won't need to be suppressed.) and Java, and for my application will generate smaller text files. I suppress the Matlab warning Warning: The encoding 'UTF-16LE' is not supported. with warning off MATLAB:iofun:UnsupportedEncoding;
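    For the record, if the UTF-16 route is kept, one way to get Matlab to emit the BOM is to write U+FEFF yourself as the first character; the encoding layer then turns it into the FF FE bytes shown above. A sketch, reusing the same fopen call as in the question:

        fid = fopen(fileName, 'w', 'n', 'UTF16-LE');
        fprintf(fid, '%c', char(65279));   % U+FEFF; written as FF FE by the UTF16-LE layer
        fprintf(fid, 'Some stuff.');
        fclose(fid);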

    Read the article

  • How to write a buffer-overflow exploit in windows XP,x86?

    - by Mask
    void function(int a, int b, int c) { char buffer1[5]; char buffer2[10]; int *ret; ret = buffer1 + 12; (*ret) += 8;//why is it 8?? } void main() { int x; x = 0; function(1,2,3); x = 1; printf("%d\n",x); } The above demo is from here: http://insecure.org/stf/smashstack.html But it's not working here: D:\test>gcc -Wall -Wextra hw.cpp && a.exe hw.cpp: In function `void function(int, int, int)': hw.cpp:6: warning: unused variable 'buffer2' hw.cpp: At global scope: hw.cpp:4: warning: unused parameter 'a' hw.cpp:4: warning: unused parameter 'b' hw.cpp:4: warning: unused parameter 'c' 1 And I don't understand why it's 8 though the author thinks: A little math tells us the distance is 8 bytes. My gdb dump as called: Dump of assembler code for function main: 0x004012ee <main+0>: push %ebp 0x004012ef <main+1>: mov %esp,%ebp 0x004012f1 <main+3>: sub $0x18,%esp 0x004012f4 <main+6>: and $0xfffffff0,%esp 0x004012f7 <main+9>: mov $0x0,%eax 0x004012fc <main+14>: add $0xf,%eax 0x004012ff <main+17>: add $0xf,%eax 0x00401302 <main+20>: shr $0x4,%eax 0x00401305 <main+23>: shl $0x4,%eax 0x00401308 <main+26>: mov %eax,0xfffffff8(%ebp) 0x0040130b <main+29>: mov 0xfffffff8(%ebp),%eax 0x0040130e <main+32>: call 0x401b00 <_alloca> 0x00401313 <main+37>: call 0x4017b0 <__main> 0x00401318 <main+42>: movl $0x0,0xfffffffc(%ebp) 0x0040131f <main+49>: movl $0x3,0x8(%esp) 0x00401327 <main+57>: movl $0x2,0x4(%esp) 0x0040132f <main+65>: movl $0x1,(%esp) 0x00401336 <main+72>: call 0x4012d0 <function> 0x0040133b <main+77>: movl $0x1,0xfffffffc(%ebp) 0x00401342 <main+84>: mov 0xfffffffc(%ebp),%eax 0x00401345 <main+87>: mov %eax,0x4(%esp) 0x00401349 <main+91>: movl $0x403000,(%esp) 0x00401350 <main+98>: call 0x401b60 <printf> 0x00401355 <main+103>: leave 0x00401356 <main+104>: ret 0x00401357 <main+105>: nop 0x00401358 <main+106>: add %al,(%eax) 0x0040135a <main+108>: add %al,(%eax) 0x0040135c <main+110>: add %al,(%eax) 0x0040135e <main+112>: add %al,(%eax) End of assembler dump. Dump of assembler code for function function: 0x004012d0 <function+0>: push %ebp 0x004012d1 <function+1>: mov %esp,%ebp 0x004012d3 <function+3>: sub $0x38,%esp 0x004012d6 <function+6>: lea 0xffffffe8(%ebp),%eax 0x004012d9 <function+9>: add $0xc,%eax 0x004012dc <function+12>: mov %eax,0xffffffd4(%ebp) 0x004012df <function+15>: mov 0xffffffd4(%ebp),%edx 0x004012e2 <function+18>: mov 0xffffffd4(%ebp),%eax 0x004012e5 <function+21>: movzbl (%eax),%eax 0x004012e8 <function+24>: add $0x5,%al 0x004012ea <function+26>: mov %al,(%edx) 0x004012ec <function+28>: leave 0x004012ed <function+29>: ret In my case the distance should be - = 5,right?But it seems not working..
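    The magic numbers in the article are specific to the author's compiler, so they have to be re-derived from your own dump. Reading the dump above (the offsets will differ with other gcc versions and flags): buffer1 sits at ebp-24 (the lea 0xffffffe8(%ebp)), the saved return address sits at ebp+4, so the distance is 24+4 = 28 rather than 12; and the instruction to skip, movl $0x1,0xfffffffc(%ebp), runs from 0x40133b to 0x401342, i.e. 7 bytes. A sketch of the adjusted function for this particular build only:

        void function(int a, int b, int c)
        {
            char buffer1[5];
            char buffer2[10];
            int *ret;

            /* buffer1 is at ebp-24 in this dump and the saved return address
               at ebp+4, so the return address lives 28 bytes past buffer1 */
            ret = (int *)(buffer1 + 28);

            /* skip the 7-byte "movl $0x1,0xfffffffc(%ebp)" that follows the
               call in main, so the x = 1 assignment never executes */
            (*ret) += 7;
        }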

    Read the article

  • How to write a buffer-overflow exploit in GCC,windows XP,x86?

    - by Mask
    void function(int a, int b, int c) { char buffer1[5]; char buffer2[10]; int *ret; ret = buffer1 + 12; (*ret) += 8;//why is it 8?? } void main() { int x; x = 0; function(1,2,3); x = 1; printf("%d\n",x); } The above demo is from here: http://insecure.org/stf/smashstack.html But it's not working here: D:\test>gcc -Wall -Wextra hw.cpp && a.exe hw.cpp: In function `void function(int, int, int)': hw.cpp:6: warning: unused variable 'buffer2' hw.cpp: At global scope: hw.cpp:4: warning: unused parameter 'a' hw.cpp:4: warning: unused parameter 'b' hw.cpp:4: warning: unused parameter 'c' 1 And I don't understand why it's 8 though the author thinks: A little math tells us the distance is 8 bytes. My gdb dump as called: Dump of assembler code for function main: 0x004012ee <main+0>: push %ebp 0x004012ef <main+1>: mov %esp,%ebp 0x004012f1 <main+3>: sub $0x18,%esp 0x004012f4 <main+6>: and $0xfffffff0,%esp 0x004012f7 <main+9>: mov $0x0,%eax 0x004012fc <main+14>: add $0xf,%eax 0x004012ff <main+17>: add $0xf,%eax 0x00401302 <main+20>: shr $0x4,%eax 0x00401305 <main+23>: shl $0x4,%eax 0x00401308 <main+26>: mov %eax,0xfffffff8(%ebp) 0x0040130b <main+29>: mov 0xfffffff8(%ebp),%eax 0x0040130e <main+32>: call 0x401b00 <_alloca> 0x00401313 <main+37>: call 0x4017b0 <__main> 0x00401318 <main+42>: movl $0x0,0xfffffffc(%ebp) 0x0040131f <main+49>: movl $0x3,0x8(%esp) 0x00401327 <main+57>: movl $0x2,0x4(%esp) 0x0040132f <main+65>: movl $0x1,(%esp) 0x00401336 <main+72>: call 0x4012d0 <function> 0x0040133b <main+77>: movl $0x1,0xfffffffc(%ebp) 0x00401342 <main+84>: mov 0xfffffffc(%ebp),%eax 0x00401345 <main+87>: mov %eax,0x4(%esp) 0x00401349 <main+91>: movl $0x403000,(%esp) 0x00401350 <main+98>: call 0x401b60 <printf> 0x00401355 <main+103>: leave 0x00401356 <main+104>: ret 0x00401357 <main+105>: nop 0x00401358 <main+106>: add %al,(%eax) 0x0040135a <main+108>: add %al,(%eax) 0x0040135c <main+110>: add %al,(%eax) 0x0040135e <main+112>: add %al,(%eax) End of assembler dump. Dump of assembler code for function function: 0x004012d0 <function+0>: push %ebp 0x004012d1 <function+1>: mov %esp,%ebp 0x004012d3 <function+3>: sub $0x38,%esp 0x004012d6 <function+6>: lea 0xffffffe8(%ebp),%eax 0x004012d9 <function+9>: add $0xc,%eax 0x004012dc <function+12>: mov %eax,0xffffffd4(%ebp) 0x004012df <function+15>: mov 0xffffffd4(%ebp),%edx 0x004012e2 <function+18>: mov 0xffffffd4(%ebp),%eax 0x004012e5 <function+21>: movzbl (%eax),%eax 0x004012e8 <function+24>: add $0x5,%al 0x004012ea <function+26>: mov %al,(%edx) 0x004012ec <function+28>: leave 0x004012ed <function+29>: ret In my case the distance should be - = 5,right?But it seems not working.. Why function needs 56 bytes for local variables?( sub $0x38,%esp )

    Read the article

  • PHP5.3 is not working with MySQL5.1 IIS7 Times out

    - by Thorn007
    I have set up PHP5.3, MySQL5.1, and IIS7 on Window 7 but php doesn't want to work with MySQL. I'm assuming it is a configuration error or an incomplete install on my part. MySQL5.1 is working PHP5.3 is working, phpinfo() shows info and that i have enabled MySQL IIS is setup and using fastCgiModule to run PHP IIS registers php.ini updates port 3306 is firewall free and open to the world php.ini is configured correctly I have added c:\php to the Windows systems PATH In the past I remember moving a file, libmysql.dll, to System32 but I doesn't look like that come with php5.3.1, as the driver comes built in now http://us3.php.net/manual/en/mysqlnd.install.php. (This has been giving me so much trouble I have been documenting my findings on my blog as http://inteldesigner.com/2010/code/having-problems-getting-php5-3-to-work-with-mysql5-1 ) NEED: I need to install PHP manually, don't want to use the quick installer or an older version I need to get PHP5.3 to work with MySQL5.1 so i can install Wordpress2.9 and Drupal7a Any links or suggestion would be great, I have already done everything on the iis web site, nothing is working. I'm guessing they have not updated for new software. BUGS/SOLUTION: The solution is here: http://bugs.php.net/bug.php?id=50172 thanks go to don.raman on the iis.net forums http://forums.iis.net/p/1164911/1933894.aspx SYMPTOMS: The php function mysql_connect() in conjunction with php5.3 locks up sever and returns error 500. (IPv6 is the problem see above link) TEST CODE: <?php $con = mysql_connect("localhost","root","***"); if (!$con) { die('Could not connect: ' . mysql_error()); } // some code mysql_close($con); ?> ERRORS: From Browser: HTTP Error 500.0 - Internal Server Error C:\php\php-cgi.exe - The FastCGI process exceeded configured activity timeout When i run php -f c:\public_html\index.php from the command line i got: PHP Warning: mysql_connect(): [2002] A connection attempt failed because the co nnected party did not (trying to connect via tcp://localhost:3306) in C:\public _html\index.php on line 10 Warning: mysql_connect(): [2002] A connection attempt failed because the connect ed party did not (trying to connect via tcp://localhost:3306) in C:\public_html \index.php on line 10 PHP Warning: mysql_connect(): A connection attempt failed because the connected party did not properly respond after a period of time, or established connectio n failed because connected host has failed to respond. in C:\public_html\index.php on line 10 Warning: mysql_connect(): A connection attempt failed because the connected part y did not properly respond after a period of time, or established connection fai led because connected host has failed to respond. in C:\public_html\index.php on line 10 Could not connect: A connection attempt failed because the connected party did n ot properly respond after a period of time, or established connection failed bec ause connected host has failed to respond. C:\Users\Kevin>
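    Since the bug report above points at IPv6 (on Windows 7 "localhost" resolves to ::1 first, and if MySQL is only listening on IPv4 the attempt hangs until FastCGI times out), a quick way to confirm it is to bypass the host name entirely (password placeholder as in the test code):

        <?php
        // Connecting by IPv4 address sidesteps the localhost / ::1 resolution issue.
        $con = mysql_connect("127.0.0.1", "root", "***");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        echo 'Connected';
        mysql_close($con);
        ?>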

    Read the article

  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
    I am trying to run a program with Boost MPI, but the thing is I don't have the .lib. So I try to create one by following the instruction at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config The instruction says "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic", I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Beside that, another statement from the instruction: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply do what it says. I got myself "user-config.jam" in C:\boost_1_43_0 along with "using mpi ;" into the file. Next, this is what I've done: bjam --with-mpi C:\boost_1_43_0>bjam --with-mpi WARNING: No python installation configured and autoconfiguration failed. See http://www.boost.org/libs/python/doc/building.html for configuration instructions or pass --without-python to suppress this message and silently skip all Boost.Python targets Building the Boost C++ Libraries. warning: skipping optional Message Passing Interface (MPI) library. note: to enable MPI support, add "using mpi ;" to user-config.jam. note: to suppress this message, pass "--without-mpi" to bjam. note: otherwise, you can safely ignore this message. warning: Unable to construct ./stage-unversioned warning: Unable to construct ./stage-unversioned Component configuration: - date_time : not building - filesystem : not building - graph : not building - graph_parallel : not building - iostreams : not building - math : not building - mpi : building - program_options : not building - python : not building - random : not building - regex : not building - serialization : not building - signals : not building - system : not building - test : not building - thread : not building - wave : not building ...found 1 target... The Boost C++ Libraries were successfully built! The following directory should be added to compiler include paths: C:\boost_1_43_0 The following directory should be added to linker library paths: C:\boost_1_43_0\stage\lib C:\boost_1_43_0> I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib at all. These are the libraries required for linking in mpi applications. What could possibly gone wrong when libraries are not being built?
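    The "skipping optional Message Passing Interface (MPI) library" note suggests bjam never read the "using mpi ;" line: the instruction you quoted says to put user-config.jam in the home directory, whereas here it went into C:\boost_1_43_0. A sketch of what to try (the OpenMPI path below is an assumption for illustration):

        rem copy the file to the directory Boost.Build actually searches
        C:\> copy C:\boost_1_43_0\user-config.jam %HOMEDRIVE%%HOMEPATH%

        rem if automatic detection of OpenMPI still fails, point the mpi toolset
        rem at the compiler wrapper inside user-config.jam, e.g.:
        rem     using mpi : "C:/OpenMPI/bin/mpic++.exe" ;

        C:\boost_1_43_0> bjam --with-mpi

    Once the configuration is picked up, the libboost_mpi-*.lib files should appear under stage\lib alongside the others.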

    Read the article

  • php Undefined variable: article_id

    - by mmcgrail
    I keep getting this as a warning, but I don't care if it's not set, so how do I get rid of the warning and still achieve my goal? Here is the context: $url_items = array("foo"); $article_id = db_escape($url_items[1]); $article = get_article($article_id); function get_article($article_id = NULL) {.....}
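    If a missing element is acceptable, the usual fix is to test the index before reading it rather than suppressing the notice; a sketch built around the snippet above (db_escape and get_article come from that snippet):

        <?php
        $url_items = array("foo");

        // Only dereference the index if it exists; otherwise fall back to NULL,
        // which get_article() already accepts as its default.
        $article_id = isset($url_items[1]) ? db_escape($url_items[1]) : NULL;
        $article    = get_article($article_id);
        ?>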

    Read the article

  • Compile error with acml.h

    - by Aslan986
    I'm having some problems compiling a Visual Studio 2008 project in which I use acml.h. I correctly installed the ACML package on my computer, version 5.1.0-ifort64. I added the reference to the acml.h directory in the Visual Studio compiler properties, but when I try to compile I get these errors: 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1510) : error C2144: syntax error: 'char' should be preceded by ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1510) : error C2144: syntax error: 'char' should be preceded by ';' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1510) : warning C4091: '': ignored on left of 'char' when no variable is declared 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1510) : error C2143: syntax error: missing ';' before ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1510) : error C2059: syntax error: ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1510) : error C2059: syntax error: ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1675) : error C2144: syntax error: 'char' should be preceded by ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1675) : error C2144: syntax error: 'char' should be preceded by ';' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1675) : warning C4091: '': ignored on left of 'char' when no variable is declared 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1675) : error C2143: syntax error: missing ';' before ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1675) : error C2059: syntax error: ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(1675) : error C2059: syntax error: ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3511) : error C2144: syntax error: 'char' should be preceded by ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3511) : error C2144: syntax error: 'char' should be preceded by ';' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3511) : warning C4091: '': ignored on left of 'char' when no variable is declared 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3511) : error C2143: syntax error: missing ';' before ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3511) : error C2059: syntax error: ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3511) : error C2059: syntax error: ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3676) : error C2144: syntax error: 'char' should be preceded by ')' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3676) : error C2144: syntax error: 'char' should be preceded by ';' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3676) : warning C4091: '': ignored on left of 'char' when no variable is declared 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3676) : error C2143: syntax error: missing ';' before ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3676) : error C2059: syntax error: ',' 1>C:\AMD\acml5.1.0\ifort64_mp\include\acml.h(3676) : error C2059: syntax error: ')' The compiler output was originally in Italian; they are all syntax errors. How can I fix this? Can someone help me?

    Read the article

  • Quoting of decimal values

    - by Peter
    I am storing a running total in a Decimal(10,2) field and adding to it as items are processed: update foo set bar = bar + '3.15' About 20% of the time a warning is issued: "Data truncated for column 'bar' at row 4". This warning is never issued if the update value is not quoted. Should decimal values be quoted?
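    No quoting is needed; an unquoted numeric literal stays on the numeric path and, as observed above, the truncation warning is then not raised:

        -- unquoted decimal literal: no string-to-decimal conversion, no warning
        UPDATE foo SET bar = bar + 3.15;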

    Read the article

  • Getting syntax errors when building with python 3.1 and not with python 2.6

    - by mathan
    Hi all, I am using Mac OS X 10.6, Python 3.1 and gcc 4.0, and I am trying to build pyfsevent (implemented for Python 2.6) after converting it to Python 3.1. I am wondering why I get the following errors when building under Python 3.1 but not when building with Python 2.6: error: syntax error before ‘CFFileDescriptorRef’ warning: no semicolon at end of struct or union error: syntax error before ‘}’ token warning: data definition has no type or storage class

    Read the article

  • Count warnings in Python 2.4

    - by l0b0
    I've got some tests that need to count the number of warnings raised by a function. In Python 2.6 this is simple, using with warnings.catch_warnings(record=True) as warn: ... self.assertEquals(len(warn), 2) Unfortunately, with is not available in Python 2.4, so what else could I use? I can't simply check if there's been a single warning (using warning filter with action='error' and try/catch), because the number of warnings is significant.
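    One 2.4-compatible approach is to temporarily swap warnings.showwarning for a recording function and force the 'always' filter so repeated warnings are not collapsed; a sketch (count_warnings is a hypothetical helper name, not part of the warnings module):

        import warnings

        def count_warnings(func, *args, **kwargs):
            """Run func and return (result, number of warnings raised). Works on Python 2.4."""
            caught = []
            def record(message, category, filename, lineno, file=None):
                caught.append(message)

            original = warnings.showwarning
            warnings.filterwarnings('always')   # keep duplicate warnings from being suppressed
            warnings.showwarning = record
            try:
                result = func(*args, **kwargs)
            finally:
                warnings.showwarning = original
                warnings.resetwarnings()        # drop the filter added above
            return result, len(caught)

    The assertion then becomes self.assertEquals(count_warnings(function_under_test)[1], 2).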

    Read the article

  • getaddrinfo is not statically compiled

    - by bobby
    I have included the header netdb.h, where getaddrinfo is declared, but gcc issues this warning: warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking gcc -m32 -static -s -O2 -std=c99 -D_POSIX_C_SOURCE=200112L myprogram.c How can I statically link whatever is missing?

    Read the article

  • range of line numbers in if condition C programming

    - by nour
    Hello, I'm working on a simple C program, and I'm stuck with an if test: int line_number = 0; if ((line_number >= argv[2]) && (line_number <= argv[4]) ) gcc says: cp.c:25: warning: comparison between pointer and integer cp.c:25: warning: comparison between pointer and integer What can I do to properly express the range of lines I want to deal with? Thank you!
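    The argv entries are strings (char pointers), so they have to be converted to integers before the comparison makes sense; a sketch using atoi (strtol would give better error checking), assuming the bounds really do arrive in argv[2] and argv[4]:

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            int first, last, line_number = 0;

            if (argc < 5) {
                fprintf(stderr, "expected numeric range bounds in argv[2] and argv[4]\n");
                return 1;
            }

            /* convert the text arguments into ints before comparing */
            first = atoi(argv[2]);
            last  = atoi(argv[4]);

            if (line_number >= first && line_number <= last) {
                printf("line %d is inside the range %d..%d\n", line_number, first, last);
            }
            return 0;
        }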

    Read the article

  • Compilation errors for a c api

    - by sam
    What would be the reason for the following errors, even though the syntax is right and I have included the CoreServices framework, in which some data types and constants are declared? " c.c:22: error: syntax error before ‘CFFileDescriptorRef’ c.c:22: warning: no semicolon at end of struct or union c.c:24: error: syntax error before ‘}’ token c.c:24: warning: data definition has no type or storage class lipo: can't figure out the architecture type of: /var/folders/fF/fFgga6+-E48RL+iXKLFmAE+++TI/-Tmp-//ccFzQIAj.out "
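    That gcc message means the compiler has never seen the type name, so everything after it is parsed as garbage: CFFileDescriptorRef is declared in CoreFoundation's CFFileDescriptor.h and only exists in the Mac OS X 10.5 and later SDKs. A sketch of a translation unit that compiles once those headers (and a 10.5+ SDK) are in place; the struct is only a stand-in for whatever c.c declares at line 22:

        /* build: gcc sketch.c -framework CoreFoundation */
        #include <CoreFoundation/CoreFoundation.h>
        #include <CoreFoundation/CFFileDescriptor.h>   /* declares CFFileDescriptorRef on 10.5+ */

        struct fd_watcher {
            CFFileDescriptorRef fdref;
            CFRunLoopSourceRef  source;
        };

        int main(void)
        {
            struct fd_watcher w = { NULL, NULL };
            (void)w;   /* silence unused-variable warnings */
            return 0;
        }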

    Read the article

  • In Haskell, what does it mean if a binding "shadows an existing binding"?

    - by Alistair
    I'm getting a warning from GHC when I compile: Warning: This binding for 'pats' shadows an existing binding in the definition of 'match_ignore_ancs' Here's the function: match_ignore_ancs (TextPat _ c) (Text t) = c t match_ignore_ancs (TextPat _ _) (Element _ _ _) = False match_ignore_ancs (ElemPat _ _ _) (Text t) = False match_ignore_ancs (ElemPat _ c pats) (Element t avs xs) = c t avs && match_pats pats xs Any idea what this means and how I can fix it? Cheers.
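    The warning only means the pattern variable pats hides another binding named pats that is already in scope, most likely a top-level definition elsewhere in the module; the code is legal, and renaming the pattern variable silences it. A sketch of the last clause with the rename (childPats is an arbitrary new name):

        match_ignore_ancs (ElemPat _ c childPats) (Element t avs xs) =
            c t avs && match_pats childPats xs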

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >