Search Results

Search found 2118 results on 85 pages for 'trust metric'.


  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems, from runaway budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision. Why now?

    No matter the industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials in the manufacture of electronic and electrical equipment. RoHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in medical devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. Further evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO).

    Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by the Carbon Disclosure Project, co-written by PwC (CDP Global 500 Climate Change Report 2012, entitled "Business Resilience in an Uncertain, Resource-Constrained World"), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change; just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The report cites Dell as an example: Dell has invested in research to develop new products designed to reduce its customers’ emissions by more than 10 million metric tons of CO2e per year, a reduction that should save Dell’s customers over $1 billion per year. Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value.

    How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, an inability to quantify noncompliance, or not enough resources to justify the investment. To make things even more difficult, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. With all that overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge.

    Companies are beginning to adapt and even thrive in today’s highly regulated and transparent environment by implementing systematic approaches to product compliance that act not as functional bandages but as revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster. In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation phase to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives.

    Options are critical, but so is ease of use. Anyone who’s grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold is exceeded. Set as many regulations as specifications as you need to make sure a product can be sold in your target countries. Another option is to follow the lead of one of our leading consumer electronics customers and define your own "catch-all" specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data, or integrate a third party’s data. With Agile PG&C you are able to design compliance into your products earlier, reducing cost and improving quality downstream when the stakes are higher.

    Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation. Don’t wait any longer! To find out more about Agile Product Governance & Compliance, download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article.

    Read the article

  • Sending notification after an event has remained open for a specified period

    - by Loc Nhan
    Enterprise Manager (EM) 12c allows you to create an incident rule that sends a notification and/or creates an incident after an event has been open for a specified period. Such an incident rule helps prevent premature alerts on issues that may correct themselves within a certain amount of time. For example, suppose some agents sit in an unstable network area, where communication failures between the agents and the OMS often last three or four minutes at a time. In this scenario, you may only want to receive alerts after an agent in that area has been in the Agent Unreachable status for at least five minutes.

    Note: Many non-target availability metrics allow users to specify the "number of occurrences", the number of consecutive times metric values reach thresholds before a notification is sent. It is best to use that feature for such metrics.

    This article provides a step-by-step guide for creating an incident rule set to cater for the above scenario, that is, to create an incident and send a notification after the Agent Unreachable event has remained open for a five-minute duration.

    Steps to create the incident rule:

    1. Log on to the console and navigate to Setup -> Incidents -> Incident Rules. The Incident Rules - All Enterprise Rules page displays. (Note: a non-super user requires the Create Enterprise Rule Set privilege, which is a resource privilege, to create an incident rule.)
    2. Click Create Rule Set... The Create Rule Set page displays.
    3. Enter a name for the rule set (e.g. "Rule set for agents in flaky network areas"), optionally enter a description, leave everything else at default values, and click + Add. The Search and Select: Targets page pops up. (Note: while you can create a rule set for individual targets, it is a best practice to use a group for this purpose.)
    4. Select an appropriate group, e.g. the AgentsInFlakyNetwork group. The Select button becomes enabled; click it. The Create Rule Set page displays.
    5. Leave everything at default values and click the Rules tab.
    6. Click Create... The Select Type of Rule to Create page pops up.
    7. Leave the "Incoming events and updates to events" option selected and click Continue. The Create New Rule: Select Events page displays.
    8. Select Target Availability from the Type drop-down list. The page shows more options for Target Availability.
    9. Select the "Specific events of type Target Availability" option and click + Add. The Select Target Availability events page pops up.
    10. Select Agent from the Target Type drop-down list. The page expands.
    11. Check the Agent unreachable checkbox and click OK. (Note: if you also want to receive a notification when the event is cleared, check the Agent unreachable end checkbox as well before clicking OK.) The Create New Rule: Select Events page displays.
    12. Click Next. The Create New Rule: Add Actions page displays.
    13. Click + Add. The Add Actions page displays.
    14. Do the following:
       a. Select the "Only execute the actions if specified conditions match" option (you don't want the action to trigger always). The Conditions for Actions section appears.
       b. Select the "Event has been open for specified duration" option. The Conditions for Actions section expands.
       c. Change the value of "Event has been open for" to 5 Minutes.
       d. In the Create Incident or Update Incident section, check the Create Incident checkbox.
       e. In the Notifications section, enter an appropriate EM user or email address in the E-mail To field.
       f. Click Continue (in the top right-hand corner). The Create New Rule: Add Actions page displays.
    15. Click Next. The Create New Rule: Specify Name and Description page displays.
    16. Enter a rule name and click Next. The Create New Rule: Review page appears.
    17. Click Continue and proceed to save the rule set. The incident rule set creation completes.

    After one of the agents in the group specified in the rule set has been stopped for over 5 minutes, EM will send a mail notification and create an incident.

    In conclusion, you have seen the steps to create an example incident rule set that only creates an incident and triggers a notification after an event has been open for a specified period. Such an incident rule can help prevent unnecessary incidents and alert notifications, freeing EM administrators for more important tasks.

    - Loc Nhan

    Read the article

  • ReportBuilder.application fails on my PC - but works on localhost

    - by JayTee
    We're running SQL 2005 on a Win2K3 server and are using SSRS. Here's the situation:

    - I can run Report Builder from localhost
    - My coworker can run Report Builder on his Vista computer
    - Another coworker can run Report Builder on his XP SP3 computer (IE7)
    - I can NOT run Report Builder on my XP SP3 computer (IE7)

    I'm told that it could be anything from an errant registry entry to a group policy problem. Here is what I've tried:

    - Put the site into "Trusted Sites" with "low" security
    - Re-install .NET
    - Create a new local user account and attempt to run it

    The results? Every single time, I get a dialog box: "Application cannot be started. Contact the application vendor". I click the Details button and get this:

        PLATFORM VERSION INFO
            Windows : 5.1.2600.196608 (Win32NT)
            Common Language Runtime : 2.0.50727.3607
            System.Deployment.dll : 2.0.50727.3053 (netfxsp.050727-3000)
            mscorwks.dll : 2.0.50727.3607 (GDR.050727-3600)
            dfdll.dll : 2.0.50727.3053 (netfxsp.050727-3000)
            dfshim.dll : 2.0.50727.3053 (netfxsp.050727-3000)
        SOURCES
            Deployment url : http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application
            Server : Microsoft-IIS/6.0
            X-Powered-By : ASP.NET
            X-AspNet-Version : 2.0.50727
        IDENTITIES
            Deployment Identity : ReportBuilder.application, Version=9.0.3042.0, Culture=neutral, PublicKeyToken=c3bce3770c238a49, processorArchitecture=msil
        APPLICATION SUMMARY
            * Online only application.
            * Trust url parameter is set.
        ERROR SUMMARY
            Below is a summary of the errors; details of these errors are listed later in the log.
            * Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application resulted in exception. Following failure messages were detected:
                + Value does not fall within the expected range.
        COMPONENT STORE TRANSACTION FAILURE SUMMARY
            No transaction error was detected.
        WARNINGS
            There were no warnings during this operation.
        OPERATION PROGRESS STATUS
            * [4/7/2010 2:53:57 PM] : Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application has started.
            * [4/7/2010 2:53:58 PM] : Processing of deployment manifest has successfully completed.
        ERROR DETAILS
            Following errors were detected during this operation.
            * [4/7/2010 2:53:58 PM] System.ArgumentException - Value does not fall within the expected range.
                - Source: System.Deployment
                - Stack trace:
                    at System.Deployment.Application.NativeMethods.CorLaunchApplication(UInt32 hostType, String applicationFullName, Int32 manifestPathsCount, String[] manifestPaths, Int32 activationDataCount, String[] activationData, PROCESS_INFORMATION processInformation)
                    at System.Deployment.Application.ComponentStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter)
                    at System.Deployment.Application.SubscriptionStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter)
                    at System.Deployment.Application.ApplicationActivator.Activate(DefinitionAppId appId, AssemblyManifest appManifest, String activationParameter, Boolean useActivationParameter)
                    at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
                    at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)
        COMPONENT STORE TRANSACTION DETAILS
            * Transaction at [4/7/2010 2:53:58 PM]
                + System.Deployment.Internal.Isolation.StoreOperationSetDeploymentMetadata
                    - Status: Set
                    - HRESULT: 0x0
                + System.Deployment.Internal.Isolation.StoreTransactionOperationType (27)
                    - HRESULT: 0x0

    I'm really at a loss. I'm certain there is something on my PC preventing the application from running - but I just don't know what. Google hasn't been much of a help because most problems are related to the server configuration (which I know is correct since it works on other PCs). Help me, Overflow Kenobi, you're my only hope...
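    Update: one thing I'm going to try next (found while reading about ClickOnce store corruption, so treat it as an assumption rather than a known fix): flushing the local ClickOnce online application cache from a command prompt, which is supposed to force a clean re-download of ReportBuilder.application on the next activation:

        rundll32 %windir%\system32\dfshim.dll CleanOnlineAppCache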

    Read the article

  • ASP.NET Web Service Throws 401 (unauthorized) Error

    - by user268611
    Hi experts, I have a .NET application to be run in an intranet environment. It is configured to require Windows Authentication before you can access the website (anonymous access is disabled). This website calls a web service (with anonymous access enabled), and the web service calls the DB. We do have token-based authentication between the web application and the web service to secure the communication between them.

    The issue I'm facing is that when I deploy this to production, the communication between the web application and the web service fails intermittently with a 401 error. This works fine in our QA environment. Is this an issue with Active Directory? Or could it be an issue with FQDN, as mentioned here: http://support.microsoft.com/default.aspx?scid=kb;en-us;896861? The weirdest thing is that this happens intermittently when tested both on the server itself and on a remote workstation in my client's environment, but it works perfectly in my environment.

    OS: Windows Server SP1, IIS 6, .NET 3.5 Framework

    Any idea about the 401 (Unauthorized) issue? Thanks for the help. This is from the log:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 4/5/2010 10:44:57 AM
        Event time (UTC): 4/5/2010 2:44:57 AM
        Event ID: 6c8ea2607b8d4e29a7f0b1c392b1cb21
        Event sequence: 155112
        Event occurrence: 2
        Event detail code: 0

        Application information:
            Application domain: xxx
            Trust level: Full
            Application Virtual Path: xxx
            Application Path: xxx
            Machine name: xxx

        Process information:
            Process ID: 4424
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE

        Exception information:
            Exception type: WebException
            Exception message: The request failed with HTTP status 401: Unauthorized.

        Request information:
            Request URL: http://[ip]/[app_path]
            Request path: xxx
            User host address: [ip]
            User: xxx
            Is authenticated: True
            Authentication Type: Negotiate
            Thread account name: xxx

        Thread information:
            Thread ID: 6
            Thread account name: xxx
            Is impersonating: False
            Stack trace:
                at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
                at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
                at wsVulnerabilityAdvisory.Service.test()
                at test.Page_Load(Object sender, EventArgs e)
                at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
                at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
                at System.Web.UI.Control.OnLoad(EventArgs e)
                at System.Web.UI.Control.LoadRecursive()
                at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
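    In case it helps, here is a simplified sketch of how I could pin the credentials on the generated proxy (not yet tried in production; wsVulnerabilityAdvisory.Service is the proxy class from the stack trace above):

        // Sketch: explicitly forward the web application's Windows credentials
        // to the web service instead of relying on ambient defaults.
        var proxy = new wsVulnerabilityAdvisory.Service();
        proxy.Credentials = System.Net.CredentialCache.DefaultCredentials;
        proxy.PreAuthenticate = true; // avoid an extra 401 handshake per call
        proxy.test();

    And if the KB article's loopback check is the culprit, its registry fix (the BackConnectionHostNames value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0) would only matter when calling the service by FQDN on the server itself.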

    Read the article

  • Android Marketplace Error: "The server could not process your apk. Try again."

    - by jdandrea
    I have an updated apk - tested successfully on various devices and simulator instances - with the following manifest:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.myCompany.appName"
            android:versionCode="2"
            android:versionName="1.0.1">
            <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="5" />
            <uses-permission android:name="android.permission.INTERNET" />
            <supports-screens android:largeScreens="true" android:normalScreens="true" android:smallScreens="true" />
            <application android:icon="@drawable/icon" android:label="@string/icon_name" android:debuggable="false">
                <activity android:name=".myActivity" android:configChanges="keyboardHidden|orientation">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
        </manifest>

    When I post to Android Marketplace as an upgrade to my existing 1.0 app, I get the aforementioned ambiguous message: "The server could not process your apk. Try again." I've searched elsewhere for this message in hopes of finding out what might be happening, to no avail. (A popular suggestion is to move the uses-sdk element to the top of the manifest, but as you can see it's already at the top.) Clues welcome/appreciated.

    Update: I just tried to upload the same file again. Now I get a new message: "The new apk's versionCode (2) in AndroidManifest.xml must be higher than the old apk's versionCode (2). The server could not process your apk. Try again." Soooo Marketplace did get my upgraded apk after all? (The very first accepted apk's versionCode was 1, so this update was of course bumped to 2.) Confused ... Bumping it up to 3 and trying again. Surprise surprise, I get the original "could not process" error all over again. Going in circles. Hmm ... :(

    Nuther update: If I exit and re-enter the Marketplace page, now it shows that the app has been uploaded! Except there's no app icon. Curiouser and curiouser ... and this is all happening with a cache-cleared (standards-friendly) browser to boot. So - do I trust the upload? Or start over ... with versionCode="4"? All I want is a solid "Upload successful, here's the icon, ready to publish" type of response.

    Read the article

  • Entrepreneur Needs Programmers, Architects, or Engineers?

    - by brand-newbie
    Hi guys (ladies included). I posted on a related site, but THIS is the place to be. I want to build a specialized website. I am an entrepreneur, currently refining valuations for venture capitalists: i.e., determining how much cash I will need. I need help understanding what human resources I need (i.e., software programmers, architects, engineers, etc.). Trust me, I have read most - if not all - of the threads here on the subject, and I can tell you I am no closer to the answer than ever.

    Here's my technology problem: the website will include two main components: a search engine (web crawler) ... and a very large database. The search engine will not be a competitor to Google, obviously; however, it "will" require bots to scour the web. The website will be, basically, a statistical database ... where users should be able to pull up any statistic from "numerous" fields. Like any entrepreneur with a web-based vision, I'm "hoping" to get 100+ million registered users eventually. However, practically, we will start as small as feasible.

    As regards the technology (database architecture, servers, etc.), I do want quality, quality, quality. My priorities are speed and the capability to scale ... so that if I "did" get globally large, we could do it without having to re-engineer anything. In other words, I want the back-end and the "infrastructure" to be scalable and professional ... with emphasis on quality.

    I am not an IT professional. Although I've built several Joomla-based websites, I'm just a rookie who's only used minor JavaScript coding to modify a few plug-ins and components. The business I'm trying to create requires specialization and experts. I want to define the problem and let a capable team create the final product, and I will stay totally hands off.

    So who do you guys suggest I hire to run this thing? A software engineer? I was thinking I would need a "database engineer," a "systems security engineer," and maybe 2 or 3 "programmers" for the search engine. Also a web designer ... and maybe a part-time graphic designer ... everyone working under a single software engineer. What do you guys think? Who should I hire? ... I REALLY need help from some people in the industry (YOU guys) on this. Is this project do-able in 6 months? If so, how many people will I need? Who exactly needs to head up this thing? ... A senior software engineer, an embedded engineer, a C/C++ engineer, a Java engineer, a database engineer? And do I build this thing in Ruby or Java?

    Read the article

  • Performing dynamic sorts on EF4 data

    - by Jaxidian
    I'm attempting to perform dynamic sorting of data that I'm putting into grids in our MVC UI. Since MVC is abstracted from everything else via WCF, I've created a couple of utility classes and extensions to help with this. The two most important things (slightly simplified) are as follows:

        public static IQueryable<T> ApplySortOptions<T, TModel, TProperty>(this IQueryable<T> collection, IEnumerable<ISortOption<TModel, TProperty>> sortOptions) where TModel : class
        {
            var sortedSortOptions = (from o in sortOptions orderby o.Priority ascending select o).ToList();
            var results = collection;
            foreach (var option in sortedSortOptions)
            {
                var currentOption = option;
                var propertyName = currentOption.Property.MemberWithoutInstance();
                var isAscending = currentOption.IsAscending;
                if (isAscending)
                {
                    results = from r in results orderby propertyName ascending select r;
                }
                else
                {
                    results = from r in results orderby propertyName descending select r;
                }
            }
            return results;
        }

        public interface ISortOption<TModel, TProperty> where TModel : class
        {
            Expression<Func<TModel, TProperty>> Property { get; set; }
            bool IsAscending { get; set; }
            int Priority { get; set; }
        }

    I've not given you the implementation for MemberWithoutInstance(), but just trust me in that it returns the name of the property as a string. :-)

    Following is an example of how I would consume this (using a non-interesting, basic implementation of ISortOption<TModel, TProperty>):

        var query = from b in CurrentContext.Businesses
                    select b;
        var sortOptions = new List<ISortOption<Business, object>>
        {
            new SortOption<Business, object> { Property = (x => x.Name), IsAscending = true, Priority = 0 }
        };
        var results = query.ApplySortOptions(sortOptions);

    As I discovered with this question, the problem is specific to my "orderby propertyName ascending" and "orderby propertyName descending" lines (everything else works great as far as I can tell). How can I do this in a dynamic/generic way that works properly?
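    Update: the direction I'm currently exploring as a fix (an untested sketch; the OrderByProperty name and the firstClause handling are mine) is to build the key selector as an expression tree, so the LINQ provider sees a real member access instead of a constant string:

        // Requires: using System.Linq; using System.Linq.Expressions;
        private static IQueryable<T> OrderByProperty<T>(
            IQueryable<T> source, string propertyName, bool ascending, bool firstClause)
        {
            // x => x.<propertyName>, built at runtime
            var parameter = Expression.Parameter(typeof(T), "x");
            var property = Expression.Property(parameter, propertyName);
            var keySelector = Expression.Lambda(property, parameter);

            // First clause uses OrderBy*, later clauses ThenBy* so earlier
            // sorts are not discarded.
            string method = firstClause
                ? (ascending ? "OrderBy" : "OrderByDescending")
                : (ascending ? "ThenBy" : "ThenByDescending");

            var call = Expression.Call(
                typeof(Queryable), method,
                new[] { typeof(T), property.Type },
                source.Expression, Expression.Quote(keySelector));

            return source.Provider.CreateQuery<T>(call);
        }

    Inside the foreach, the two query expressions would then collapse to a single call, something like: results = OrderByProperty(results, propertyName, isAscending, option == sortedSortOptions[0]).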

    Read the article

  • Code Golf: Evaluating Mathematical Expressions

    - by Noldorin
    Challenge

    Here is the challenge (of my own invention, though I wouldn't be surprised if it has previously appeared elsewhere on the web). Write a function that takes a single argument that is a string representation of a simple mathematical expression and evaluates it as a floating point value. A "simple expression" may include any of the following: positive or negative decimal numbers, +, -, *, /, (, ). Expressions use (normal) infix notation. Operators should be evaluated in the order they appear, i.e. not as in BODMAS, though brackets should be correctly observed, of course. The function should return the correct result for any possible expression of this form. However, the function does not have to handle malformed expressions (i.e. ones with bad syntax).

    Examples of expressions:

        1 + 3 / -8 = -0.5 (No BODMAS)
        2*3*4*5+99 = 219
        4 * (9 - 4) / (2 * 6 - 2) + 8 = 10
        1 + ((123 * 3 - 69) / 100) = 4
        2.45/8.5*9.27+(5*0.0023) = 2.68...

    Rules

    I anticipate some form of "cheating"/craftiness here, so please let me forewarn against it! By cheating, I refer to the use of the eval or equivalent function in dynamic languages such as JavaScript or PHP, or equally compiling and executing code on the fly. (I think my specification of "no BODMAS" has pretty much guaranteed this, however.) Apart from that, there are no restrictions. I anticipate a few regex solutions here, but it would be nice to see more than just that. Now, I'm mainly interested in a C#/.NET solution here, but any other language would be perfectly acceptable too (in particular, F# and Python for the functional/mixed approaches). I haven't yet decided whether I'm going to accept the shortest or most ingenious solution (at least for the language) as the answer, but I would welcome any form of solution in any language, except what I've just prohibited above!

    My Solution

    I've now posted my C# solution here (403 chars). Update: My new solution has beaten the old one significantly at 294 chars, with the help of a bit of lovely regex! I suspected that this would get easily beaten by some of the languages out there with lighter syntax (particularly the functional/dynamic ones), and have been proved right, but I'd be curious if someone could beat this in C# still.

    Update

    I've seen some very crafty solutions already. Thanks to everyone who has posted one. Although I haven't tested any of them yet, I'm going to trust people and assume they at least work with all of the given examples. Just for the note, re-entrancy (i.e. thread-safety) is not a requirement for the function, though it is a bonus.

    Format

    Please post all answers in the following format for the purpose of easy comparison:

        Language
        Number of characters: ???
        Fully obfuscated function: (code here)
        Clear/semi-obfuscated function: (code here)
        Any notes on the algorithm/clever shortcuts it takes.
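    For comparison, here is a clear, deliberately un-golfed C# reference sketch of the required semantics (left-to-right evaluation, brackets honored; unary signs on numbers only, which covers the examples above). Consider it a baseline rather than an entry:

        using System;
        using System.Globalization;

        static class Evaluator
        {
            public static double Eval(string expression)
            {
                int i = 0;
                return ParseExpr(expression.Replace(" ", ""), ref i);
            }

            // expr := term (op term)*, evaluated strictly left to right
            private static double ParseExpr(string s, ref int i)
            {
                double value = ParseTerm(s, ref i);
                while (i < s.Length && "+-*/".IndexOf(s[i]) >= 0)
                {
                    char op = s[i++];
                    double rhs = ParseTerm(s, ref i);
                    value = op == '+' ? value + rhs
                          : op == '-' ? value - rhs
                          : op == '*' ? value * rhs
                          : value / rhs;
                }
                return value;
            }

            // term := '(' expr ')' | [+-] digits [ '.' digits ]
            private static double ParseTerm(string s, ref int i)
            {
                if (s[i] == '(')
                {
                    i++; // consume '('
                    double value = ParseExpr(s, ref i);
                    i++; // consume ')'
                    return value;
                }
                int start = i;
                if (s[i] == '-' || s[i] == '+') i++; // optional sign
                while (i < s.Length && (char.IsDigit(s[i]) || s[i] == '.')) i++;
                return double.Parse(s.Substring(start, i - start), CultureInfo.InvariantCulture);
            }
        }

    Against the examples above it returns -0.5, 219, 10, 4, and approximately 2.6834.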

    Read the article

  • How to create a Uri instance parsed with GenericUriParserOptions.DontCompressPath

    - by Andrew Arnott
    When the .NET System.Uri class parses strings, it performs some normalization on the input, such as lower-casing the scheme and hostname. It also trims trailing periods from each path segment. This latter feature is fatal to OpenID applications, because some OpenIDs (like those issued from Yahoo) include base64-encoded path segments which may end with a period. How can I disable this period-trimming behavior of the Uri class?

    Registering my own scheme using UriParser.Register, with a parser initialized with GenericUriParserOptions.DontCompressPath, avoids the period trimming and some other operations that are also undesirable for OpenID. But I cannot register a new parser for existing schemes like HTTP and HTTPS, which I must do for OpenIDs. Another approach I tried was registering my own new scheme and programming the custom parser to change the scheme back to the standard HTTP(S) schemes as part of parsing:

        public class MyUriParser : GenericUriParser
        {
            private string actualScheme;

            public MyUriParser(string actualScheme)
                : base(GenericUriParserOptions.DontCompressPath)
            {
                this.actualScheme = actualScheme.ToLowerInvariant();
            }

            protected override string GetComponents(Uri uri, UriComponents components, UriFormat format)
            {
                string result = base.GetComponents(uri, components, format);
                // Substitute our actual desired scheme in the string if it's in there.
                if ((components & UriComponents.Scheme) != 0)
                {
                    string registeredScheme = base.GetComponents(uri, UriComponents.Scheme, format);
                    result = this.actualScheme + result.Substring(registeredScheme.Length);
                }
                return result;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                UriParser.Register(new MyUriParser("http"), "httpx", 80);
                UriParser.Register(new MyUriParser("https"), "httpsx", 443);
                Uri z = new Uri("httpsx://me.yahoo.com/b./c.#adf");
                var req = (HttpWebRequest)WebRequest.Create(z);
                req.GetResponse();
            }
        }

    This actually almost works. The Uri instance reports https instead of httpsx everywhere -- except the Uri.Scheme property itself. That's a problem when you pass this Uri instance to the HttpWebRequest to send a request to this address. Apparently it checks the Scheme property and doesn't recognize it as 'https', because it just sends plaintext to the 443 port instead of SSL. I'm happy with any solution that:

    - Preserves trailing periods in path segments in Uri.Path
    - Includes these periods in outgoing HTTP requests
    - Ideally works under ASP.NET medium trust (but not absolutely necessary)

    Read the article

  • Using OpenID as the only authentication method

    - by iconiK
    I have read the other questions, and they mostly talk about the security of doing so. That's not entirely my concern, mostly because the website in question is a browser-based game. However, the larger issue is the user - not every user is literate enough to understand OpenID. Sure, RPX makes this pretty easy, which is what I'll use, but what if the user does not have an account at Google or Facebook or whatever, or does not trust the system to log in with an existing account? They'd have to get an account at another provider - I'm sure most will know how to do it, let alone be bothered to do it.

    There is also the problem of how to manage it in the application. A user might want to use multiple identities with a single account, so it's not as simple as username + password to deal with. How do I store the OpenID identities of a user in the database? Using OpenID gives me a benefit too: RPX can provide extensive profile information, so I can just prefill the profile form and ask the user to edit as required. I currently have this:

        UserID  Email
        ------  ---------------
        86000   [email protected]
        86001   [email protected]

        UserOpenID  OpenID
        ----------  ------
        86000       16733
        86001       16839
        86002       19361

        OpenID  Provider  Identifier
        ------  --------  ----------------
        16733   Yahoo     https:\\me.yahoo.com\bob#d36bd
        16839   Yahoo     https:\\me.yahoo.com\bigbobby#x75af
        19361   Yahoo     https:\\me.yahoo.com\alice#c19fd

    Is that the right way to store OpenID identifiers in the database? How would I match the identifier RPX gave me with one in the database to log in the user (if the identifier is known)?

    So here are concrete questions:

    1. How would I make it accessible to users not having an OpenID or not wanting to use one (security concerns over, say, logging in with their Google account)?
    2. How do I store the identifier in the database? (I'm not sure if the tables above are right.)
    3. What measures do I need to take in order to prevent someone from logging in as another user and happily doing anything with their account? (As I understand, RPX sends the identifier via HTTP, so all anyone would have to do is somehow grab it and then enter it in the "OpenID" field.)
    4. What else do I need to be aware of when using OpenID?
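    Update: for question 2, this is the kind of lookup I had in mind (a sketch; the join column names are my guesses from the tables above, and connectionString / rpxIdentifier are placeholders):

        // Requires: using System.Data.SqlClient;
        // Map the identifier returned by RPX to a local user, if one exists.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            @"SELECT uo.UserOpenID
                FROM OpenID o
                JOIN UserOpenID uo ON uo.OpenID = o.OpenID
               WHERE o.Identifier = @identifier", connection))
        {
            command.Parameters.AddWithValue("@identifier", rpxIdentifier);
            connection.Open();
            object userId = command.ExecuteScalar();
            // null => no local account yet: fall into the registration flow instead.
        }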

    Read the article

  • Visual Studio 2005 - Debugger stopped working.

    - by eric
    More fun and pain with Visual Studio - Visual Studio 2005. About two months ago, I started an assignment. In my role, I cannot install or configure development software. Trust me, this has given me plenty of heartburn. No IIS is involved here, just file sharing.

    That being said, when I first started, I had a problem with my debugger not working. The debugger just stopped. I was able to get it working. Now the problem has returned, and I am pulling every last hair on my head. Almost none of my symbols load. It can't find the PDB files. In Debugger options, I checked the Symbol section. My symbol file location entry is completely blank. I don't know why; I did not touch this prior to the problem occurring. I have cleared the Temporary ASP.NET folders.

    Example - here is my Module output:

        CppCodeProvider.dll  C:\Windows\assembly\GAC_MSIL\CppCodeProvider\8.0.0.0__b03f5f7f11d50a3a\CppCodeProvider.dll  No  No  Cannot find or open the PDB file.  17  8.0.50727.762  12/2/2006 4:23 AM  6A510000-6A52C000  [1844] WebDev.WebServer.EXE: Managed
        WebDev.WebHost.dll  C:\Windows\assembly\GAC_32\WebDev.WebHost\8.0.0.0__b03f5f7f11d50a3a\WebDev.WebHost.dll  No  No  Cannot find or open the PDB file.  3  8.0.50727.42  9/23/2005 4:20 AM  6D040000-6D050000  [1844] WebDev.WebServer.EXE: Managed

    So I enabled SHFUSION.dll in the version of the Framework I am using... In my GAC, I can see this version of WebDev.WebHost.dll, for example:

        ProcessArchitecture(x86)
        Public key token matches: b03f5f7f11d50a3a
        8.0.50727.42

    I then see some custom DLLs. I should note that I created a new project and recreated my files manually by importing them. The debugger worked 5 times and died. I'm at a loss as to what to do next. The obvious has been checked:

    - The project is set to Debug
    - Configuration Manager: Configuration = Debug, Platform = .NET, Build checked
    - Web.Config

    I have attempted to manually attach to the WebDev process from the Debug window, and that doesn't work. I have googled this, and this problem seems to occur quite a bit.

    Read the article

  • Is Java assert broken?

    - by BlairHippo
    While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning! But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed-out completely" by one simple fact: you can turn assertions off.*

    This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely.

    And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all.

    All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!): What am I missing? Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();? I just feel like there's something important here that I'm not getting.

    *Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why Start Now?

    - The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs.
    - I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
    - Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today.
    - It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why Wait?

    - Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache?
    - One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA?
    - Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations

    Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.
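    For reference, here is the sort of "relatively simple test" I mean (a minimal sketch assuming the .NET 4.0 System.Threading.Tasks API):

        using System;
        using System.Threading.Tasks;

        class TplTest
        {
            static void Main()
            {
                const int n = 10000000;
                var results = new double[n];

                // Parallel.For partitions independent iterations across
                // the available cores.
                Parallel.For(0, n, i =>
                {
                    results[i] = Math.Sqrt(i) * Math.Sin(i); // independent per-element work
                });

                Console.WriteLine(results[n - 1]);
            }
        }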

    Read the article

  • Using a function with reference as a function with pointers?

    - by epatel
    Today I stumbled over a piece of code that looked horrifying to me. The pieces were scattered across different files; I have tried to write the gist of it in a simple test case below. The code base is routinely scanned with FlexeLint on a daily basis, but this construct has been lying in the code since 2004.

    The thing is that a function implemented with a parameter passed by reference is called as a function with a parameter passed by pointer... due to a function cast. The construct has worked since 2004 on Irix, and now, when porting, it actually works on Linux/gcc too.

    My question now: is this a construct one can trust? I can understand if compiler writers implement reference passing as if it were a pointer, but is it reliable? Are there hidden risks? Should I change fref(..) to use pointers and risk breaking anything in the process? What do you think?

        #include <iostream>
        using namespace std;

        // This will be passed as a reference in fref(..)
        struct string_struct {
            char str[256];
        };

        // Using pointer here!
        void fptr(const char *str)
        {
            cout << "fptr: " << str << endl;
        }

        // Using reference here!
        void fref(string_struct &str)
        {
            cout << "fref: " << str.str << endl;
        }

        // Cast to f(const char*) and call with pointer
        void ftest(void (*fin)())
        {
            void (*fcall)(const char*) = (void(*)(const char*))fin;
            fcall("Hello!");
        }

        // Let's go for a test
        int main()
        {
            ftest((void (*)())fptr); // test with fptr that's using pointer
            ftest((void (*)())fref); // test with fref that's using reference
            return 0;
        }

    Read the article

  • Excel string manipulation to check data consistency

    - by chefsmart
    Background information: there are nearly 7000 individuals, and there is data about their performances in one, two, or three tests. Every individual has taken the first test (let's call it Test M). Some of those who have taken Test M have also taken Test I, and some of those who have taken Test I have also taken Test B. For the first two tests (M and I), students can score grades I, II, or III. Depending on the grades, they are awarded points: 3 for grade I, 2 for II, 1 for III. The last test, Test B, is just a pass-or-fail result with no grades. Those passing this test get 1 point, with no points for failure. (Well, actually, grades are awarded, but all grades are given a common 1 point.)

    An amateur has entered data to represent these students and their grades in an Excel file. Problem is, this person has done the worst thing possible: he has developed his own notation and entered all test information in a single cell... and made my life hell. The file originally had two text columns, one for the individual's ID and the second for test info, if one could call it that. It's horrible, I know, and I am suffering. In the image, if you see "M-II-2 I-III-1", it means the person got grade II in Test M for 2 points and grade III in Test I for 1 point. Some have taken only one test, some two, and some three.

    When the file came to me for processing and analyzing the performance of students, I sent it back with instructions to insert 3 additional columns with only the grades for the three tests. The file now looks as follows: columns C and D represent grades I, II, and III using 1, 2, and 3 respectively. Column C is for Test M, column D for Test I. Column E says BA (B Achieved!) if the individual has passed Test B.

    Now that you have the above information, let's get to the problem. I don't trust this and want to check whether the data in column B matches the data in columns C, D, and E. That is, I want to examine the string in column B and find out whether the figures in columns C, D, and E are correct. All help is really appreciated.

    P.S. - I had exported this to MySQL via ODBC, and that is why you are seeing those NULLs. I tried doing this in MySQL too, and I will really accept a MySQL or an Excel solution; I don't have a preference.

    Edit: see file with sample data.

    Read the article

  • How to define an extern, C struct returning function in C++ using MSVC?

    - by DK
    The following source file will not compile with the MSVC compiler (v15.00.30729.01):

        /* stest.c */
        #ifdef __cplusplus
        extern "C" {
        #endif

        struct Test;
        extern struct Test make_Test(int x);

        struct Test
        {
            int x;
        };

        extern struct Test make_Test(int x)
        {
            struct Test r;
            r.x = x;
            return r;
        }

        #ifdef __cplusplus
        }
        #endif

    Compiling with cl /c /Tpstest.c produces the following error:

        stest.c(8) : error C2526: 'make_Test' : C linkage function cannot return C++ class 'Test'
        stest.c(6) : see declaration of 'Test'

    Compiling without /Tp (which tells cl to treat the file as C++) works fine. The file also compiles fine in DigitalMars C and GCC (from MinGW) in both C and C++ modes. I also used -ansi -pedantic -Wall with GCC, and it had no complaints.

    For reasons I will go into below, we need to compile this file as C++ for MSVC (not for the others), but with functions being compiled as C. In essence, we want a normal C compiler... except for about six lines. Is there a switch or attribute or something I can add that will allow this to work?

    The code in question (though not the above; that's just a reduced example) is being produced by a code generator. As part of this, we need to be able to generate floating-point NaNs and infinities as constants (long story), meaning we have to compile with MSVC in C++ mode in order to actually do this. We only found one solution that works, and it only works in C++ mode. We're wrapping the code in extern "C" {...} because we want to control the mangling and calling convention so that we can interface with existing C code... also because I trust C++ compilers about as far as I could throw a smallish department store. I also tried wrapping just the reinterpret_cast line in extern "C++" {...}, but of course that doesn't work. Pity.

    There is a potential solution I found which requires reordering the declarations such that the full struct definition comes before the function forward declaration, but this is very inconvenient due to the way the codegen is performed, so I'd really like to avoid having to go down that road if I can.

    Read the article

  • Algorithm to split an article without breaking the reading flow or HTML code

    - by Victor Stanciu
    Hello, I have a very large database of articles, of varying lengths. The articles have HTML elements in them. I have to insert some ads (simple <script> elements) in the body of each article when it is displayed (I know, I hate ads that interrupt my reading too). Now, the problem is that each ad must be inserted at about the same position in each article. The simplest solution is to simply split the article on a fixed number of characters (without breaking words), and insert the ad code. This, however, runs the risk of inserting the ad in the middle of a HTML tag. I could go the regex way, but I was thinking about the following solution, using JS: Establish a character count threshold. For example, "the add should be inserted at about 200 words" Set accepted deviations in each direction, say -20, +20 characters. Loop through each text node inside the article, and while doing so, keep count of the total number of characters so far Once the count exceeds the threshold, make the following decision: 4.1. If count exceeds the threshold by a value lower that the positive accepted deviation (for example, 17 characters), insert the ad code just after the current text node. 4.2. If the count is greater than the sum of the threshold and the deviation, roll back to the previous text node, and make the same decision, only this time use the previous count and check if it's lower than the difference between the threshold and the deviation, and if not, insert the ad between the current node and the previous one. 4.3. If the 4.1 and 4.2 fail (which means that the previous node reached a too low character count and the current node a too high one), insert the ad after whatever character count is needed inside the current element. I know it's convoluted, but it's the first thing out of my mind and it has the advantage that, by trying to insert the ad between text nodes, perhaps it will not break the flow of the article as bad as it would if I would just stick it in (like the final 4.3 case) Here is some pseudo-code I put together, I don't trust my english-explaining skills: threshold = 200 deviation = 20 current_count = 0 for each node in article_nodes { previous_count = current_count current_count = current_count + node.length if current_count < threshold { continue // next interation } if current_count > threshold + deviation { if previous_count < threshdold - deviation { // insert ad in current node } else { // insert ad between the current and previous nodes } } else { // insert ad after the current node } break; } Am I over-complicating stuff, or am I missing a simpler, more elegant solution?

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters:

        h0: ['-0.010597', '0.032883', '0.030841', '-0.187035', '-0.027984', '0.630881', '0.714847', '0.230378']
        h1: ['-0.230378', '0.714847', '-0.630881', '-0.027984', '0.187035', '0.030841', '-0.032883', '-0.010597']

    Here they are on a graph (image not shown). I'm using them to obtain the approximation (lower subband) of an image. This is a(m,n) in the diagram from the book (image not shown). I got the coefficients and the diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one-dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows or columns).

    My problem is that the filter coefficients for h0 and h1 sum to greater than 1 (approximately 1.4, or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter. Indeed, that is what I get (brightened output where the expected result is not brightened; images not shown).

    Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1? I have the source code, but it's quite long, so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later.

    EDIT: What I'm doing is:

    1. Decompose into subbands
    2. Filter one of the subbands
    3. Recompose subbands into original image

    Note that the point isn't just to have a displayable subband-decomposed image -- I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image in order to compensate for my decomposition filter making the image brighter, this is what I will have to do:

    1. Decompose into subbands
    2. Apply intensity scaling
    3. Filter one of the subbands
    4. Apply inverse intensity scaling
    5. Recompose subbands into original image

    Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that then step 4 becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. However, what's more possible is that I'm misunderstanding something about the way this all works -- this is why I'm asking this question.
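    Update: to make the normalization concrete (this is my understanding of the orthonormal convention, so corrections welcome), summing the h0 coefficients gives exactly

        $\sum_n h_0[n] = \sqrt{2} \approx 1.414214$

    so one lowpass filtering pass has DC gain $\sqrt{2}$, and a separable 2-D stage (rows then columns) has DC gain $(\sqrt{2})^2 = 2$. If the book uses the orthonormal convention, the matching factor of $1/\sqrt{2}$ per pass is supplied by the synthesis filters, so analysis plus synthesis reconstruct with unity gain even though the analysis half alone brightens the approximation subband.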

    Read the article

  • How do I query delegation properties of an active directory user account?

    - by Mark J Miller
    I am writing a utility to audit the configuration of a WCF service. In order to properly pass credentials from the client, through the WCF service, back to the SQL back end, the domain account used to run the service must be configured in Active Directory with the setting "Trust this user for delegation" (Properties - Delegation tab). Using C#, how do I access the settings on this tab in Active Directory? I've spent the last 5 hours trying to track this down on the web and can't seem to find it. Here's what I've done so far:

        using (Domain domain = Domain.GetCurrentDomain())
        {
            Console.WriteLine(domain.Name);

            // get domain "dev" from MSSQLSERVER service account
            DirectoryEntry ouDn = new DirectoryEntry("LDAP://CN=Users,dc=dev,dc=mydomain,dc=lcl");
            DirectorySearcher search = new DirectorySearcher(ouDn);

            // get sAMAccountName "dev.services" from MSSQLSERVER service account
            search.Filter = "(sAMAccountName=dev.services)";
            search.PropertiesToLoad.Add("displayName");
            search.PropertiesToLoad.Add("userAccountControl");

            SearchResult result = search.FindOne();
            if (result != null)
            {
                Console.WriteLine(result.Properties["displayName"][0]);
                DirectoryEntry entry = result.GetDirectoryEntry();
                int userAccountControlFlags = (int)entry.Properties["userAccountControl"].Value;

                if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_FOR_DELEGATION)
                    Console.WriteLine("TRUSTED_FOR_DELEGATION");
                else if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION)
                    Console.WriteLine("TRUSTED_TO_AUTH_FOR_DELEGATION");
                else if ((userAccountControlFlags & (int)UserAccountControl.NOT_DELEGATED) == (int)UserAccountControl.NOT_DELEGATED)
                    Console.WriteLine("NOT_DELEGATED");

                foreach (PropertyValueCollection pvc in entry.Properties)
                {
                    Console.WriteLine(pvc.PropertyName);
                    for (int i = 0; i < pvc.Count; i++)
                    {
                        Console.WriteLine("\t{0}", pvc[i]);
                    }
                }
            }
        }

    The "userAccountControl" property does not seem to be the correct one. I think it is tied to the "Account Options" section on the Account tab, which is not what we're looking for, but this is the closest I've gotten so far.

    The justification for all this: we do not have permission to set up the service in QA or in production, so along with our written instructions (which are notoriously only followed in part), I am creating a tool that will audit the setup (WCF and SQL) to determine if the setup is correct. This will allow the person deploying the service to run this utility and verify everything is set up correctly, saving us hours of headaches and reducing downtime during deployment.
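    Update: one more thing I found while digging (unverified, so treat it as an assumption): the unconstrained "Trust this user for delegation" setting maps to the TRUSTED_FOR_DELEGATION bit in userAccountControl, which the code above already tests, while constrained delegation targets appear to live in the msDS-AllowedToDelegateTo attribute. So I'm now also loading that attribute:

        // Also request the constrained-delegation target list (assumption:
        // populated only when "delegate to specified services only" is set).
        search.PropertiesToLoad.Add("msDS-AllowedToDelegateTo");

        // ... then, after FindOne():
        if (result.Properties.Contains("msDS-AllowedToDelegateTo"))
        {
            foreach (object spn in result.Properties["msDS-AllowedToDelegateTo"])
                Console.WriteLine("Constrained delegation target: {0}", spn);
        }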

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.

        void CountByTwosStartingAt(byte i)
        {
            // If i is even, it never exceeds 254
            for (; i < 255; i += 2)
            {
                Console.WriteLine(i);
            }
        }

    Sometimes there are edge cases that are extremely unlikely, but technically constitute non-exiting conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:

        int CountZeros(Stream s)
        {
            int total = 0;
            while (s.ReadByte() == 0)
                total++;
            return total;
        }

    Now, suppose you feed it this thing:

        class InfiniteEmptyStream : Stream
        {
            // ... Other members ...

            public override int Read(byte[] buffer, int offset, int count)
            {
                Array.Clear(buffer, offset, count); // Output zeros
                return count;                       // Never returns -1 (end of stream)
            }
        }

    Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way, we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example, as in an app I'm writing: an endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source).

    How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness"? Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API?

    Edit: One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
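    Edit 2: for discussion, here is one defensive variant I'm considering (my own sketch, not production code): bound the loop explicitly so a pathological input produces a diagnosable failure instead of a hang.

        // Counts leading zeros but refuses to spin forever on a degenerate stream.
        int CountZeros(Stream s, int maxZeros)
        {
            int total = 0;
            while (s.ReadByte() == 0)
            {
                if (++total > maxZeros)
                    throw new InvalidOperationException(
                        "Zero run exceeded " + maxZeros + " bytes; input looks degenerate.");
            }
            return total;
        }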

    Read the article

  • Help Desk Database Design

    - by user237244
    The company I work at has very specific and unique needs for a help desk system, so none of the open source systems will work for us. That being the case, I created a custom system using PHP and MySQL. It's far from perfect, but it's infinitely better than the last system they were using; trust me! It meets most of our needs quite nicely, but I have a question about the way I have the database set up. Here are the main tables:

        ClosedTickets
        ClosedTicketSolutions
        Locations
        OpenTickets
        OpenTicketSolutions
        Statuses
        Technicians

    When a user submits a help request, it goes in the "OpenTickets" table. As the technicians work on the problem, they submit entries with a description of what they've done. These entries go in the "OpenTicketSolutions" table. When the problem has been resolved, the last technician to work on the problem closes the ticket and it gets moved to the "ClosedTickets" table. All of the solution entries get moved to the "ClosedTicketSolutions" table as well. The other tables (Locations, Statuses, and Technicians) exist as a means of normalization (each location, status, and technician has an ID which is referenced).

    The problem I'm having now is this: when I want to view a list of all the open tickets, the SQL statement is somewhat complicated because I have to left join the "Locations", "Statuses", and "Technicians" tables. Fields from various tables need to be searchable as well. Check out how complicated the SQL statement is to search closed tickets for tickets submitted by anybody with a first name containing "John":

        SELECT ClosedTickets.*,
               date_format(ClosedTickets.EntryDate, '%c/%e/%y %l:%i %p') AS Formatted_Date,
               date_format(ClosedDate, '%c/%e/%y %l:%i %p') AS Formatted_ClosedDate,
               Concat(Technicians.LastName, ', ', Technicians.FirstName) AS TechFullName,
               Locations.LocationName,
               date_format(ClosedTicketSolutions.EntryDate, '%c/%e/%y') AS Formatted_Solution_EntryDate,
               ClosedTicketSolutions.HoursSpent AS SolutionHoursSpent,
               ClosedTicketSolutions.Tech_ID AS SolutionTech_ID,
               ClosedTicketSolutions.EntryText
        FROM ClosedTickets
        LEFT JOIN Technicians ON ClosedTickets.Tech_ID = Technicians.Tech_ID
        LEFT JOIN Locations ON ClosedTickets.Location_ID = Locations.Location_ID
        LEFT JOIN ClosedTicketSolutions ON ClosedTickets.TicketNum = ClosedTicketSolutions.TicketNum
        WHERE (ClosedTickets.FirstName LIKE '%John%')
        ORDER BY ClosedDate DESC, ClosedTicketSolutions.EntryDate, ClosedTicketSolutions.Entry_ID

    One thing that I'm not able to do right now is search both open and closed tickets at the same time. I don't think a union would work in my case. So I'm wondering if I should store the open and closed tickets in the same table and just have a field indicating whether or not the ticket is closed. The only problem I can foresee is that we already have so many closed tickets (nearly 30,000), so the whole system might perform slowly. Would it be a bad idea to combine the open and closed tickets?
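    A hedged sketch of the combined design, with column names assumed for illustration rather than taken from the post: keep one Tickets table, mark open/closed with a nullable ClosedDate (or a Status_ID), and index that column so the common "open tickets only" queries stay cheap:

        -- Hypothetical combined schema; the column names are illustrative.
        CREATE TABLE Tickets (
            TicketNum   INT AUTO_INCREMENT PRIMARY KEY,
            FirstName   VARCHAR(50),
            Location_ID INT,
            Tech_ID     INT,
            EntryDate   DATETIME NOT NULL,
            ClosedDate  DATETIME NULL,      -- NULL = still open
            INDEX idx_closed (ClosedDate)   -- keeps "open only" scans cheap
        );

        -- "Open tickets" becomes a simple filter, and searching open and
        -- closed tickets together no longer needs a UNION:
        SELECT * FROM Tickets WHERE ClosedDate IS NULL;

    For what it's worth, 30,000 rows is small by MySQL standards; with an index on the open/closed column, the combined table is unlikely to be the bottleneck.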

    Read the article

  • SCOM 2012 DNS Forwarder Availability Monitor

    - by Massimo
    Background: I have an environment with two different AD domains, each in its own forest, each with two Windows Server 2008 R2 domain controllers acting as DNS servers. There is no trust between the domains. Each DNS server manages the main DNS zone for its AD domain, plus some other zones, including the reverse lookup zone for its IP subnets; all zones are AD-integrated, and all DNS servers which manage a zone are correctly listed as authoritative name servers for that zone. So the situation is like this (using fake names and IP addresses):

    Domain A:
      DNS domain: a.dom
      IP subnet: 192.168.1.X
      DC/DNS servers: serverA1.a.dom (192.168.1.1) and serverA2.a.dom (192.168.1.2)
      Authoritative zones: a.dom, 1.168.192.in-addr.arpa, somezone.local

    Domain B:
      DNS domain: b.dom
      IP subnet: 10.0.0.X
      DC/DNS servers: serverB1.b.dom (10.0.0.1) and serverB2.b.dom (10.0.0.2)
      Authoritative zones: b.dom, 0.0.10.in-addr.arpa, someotherzone.local

    DNS servers in domain A have conditional forwarders defined for each zone managed by DNS servers in domain B, forwarding to both of domain B's DNS servers; DNS servers in domain B have the opposite configuration. All forwarders are stored in Active Directory. Everything works perfectly, and computers in each domain can resolve forward and reverse DNS queries for both domains using their own domain's DNS servers.

    The problem: I have SCOM 2012 deployed in domain A, with the SCOM agent installed on both DCs; the management packs for Active Directory and DNS Server are installed and up to date. I have a series of alerts like the following on both domain controllers; each alert is generated for each forwarded zone and for each forwarded server:

        Forwarder someotherzone.local (10.0.0.1) cannot resolve the host name 192.168.1.1,someotherzone.local for serverA1.a.dom
        Forwarder someotherzone.local (10.0.0.2) cannot resolve the host name 192.168.1.1,someotherzone.local for serverA1.a.dom
        Forwarder someotherzone.local (10.0.0.1) cannot resolve the host name 192.168.1.2,someotherzone.local for serverA2.a.dom
        Forwarder someotherzone.local (10.0.0.2) cannot resolve the host name 192.168.1.2,someotherzone.local for serverA2.a.dom
        Forwarder 0.0.10.in-addr.arpa (10.0.0.1) cannot resolve the host name 192.168.1.1,0.0.10.in-addr.arpa for serverA1.a.dom
        Forwarder 0.0.10.in-addr.arpa (10.0.0.2) cannot resolve the host name 192.168.1.1,0.0.10.in-addr.arpa for serverA1.a.dom
        Forwarder 0.0.10.in-addr.arpa (10.0.0.1) cannot resolve the host name 192.168.1.2,0.0.10.in-addr.arpa for serverA2.a.dom
        Forwarder 0.0.10.in-addr.arpa (10.0.0.2) cannot resolve the host name 192.168.1.2,0.0.10.in-addr.arpa for serverA2.a.dom

    The only exception is the main AD DNS zone managed by domain B's DNS servers (b.dom): for that conditional forwarder, no alert is generated and the forwarder availability monitor is green. Ok, what does this mean? What are those monitors trying to tell me? What are they checking? What's actually wrong? And why is there no error for the "b.dom" zone, which is configured in exactly the same way as the other ones, both as a zone on domain B's DNS servers and as a forwarder on domain A's DNS servers?
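    A hedged diagnostic aside (not from the original question): one way to approximate what the monitor presumably checks is to query each forwarder target directly from the monitored DC and compare the failing zones against b.dom, for example:

        REM Run on serverA1.a.dom; each query goes straight to one of domain B's DNS servers
        nslookup -type=SOA someotherzone.local 10.0.0.1
        nslookup -type=SOA someotherzone.local 10.0.0.2
        nslookup -type=SOA b.dom 10.0.0.1

    If the SOA queries succeed for every zone, the DNS configuration itself is probably fine, and the difference lies in whatever name the monitor constructs for its test (note the odd "address,zone" concatenation in the alert text).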

    Read the article

  • VS 2010 SP1 (Beta) and IIS Express

    - by ScottGu
    Last month we released the VS 2010 Service Pack 1 (SP1) Beta. You can learn more about the VS 2010 SP1 Beta from Jason Zander's two blog posts about it, and from Scott Hanselman's blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here.

    IIS Express

    Earlier this summer I blogged about IIS Express. IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios. We think it combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into VS today with the full power of IIS. Specifically:

      • It's lightweight and easy to install (less than a 5 MB download and a quick install)
      • It does not require an administrator account to run/debug applications from Visual Studio
      • It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules
      • It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
      • It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
      • It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps, which makes it really easy to launch and run for development scenarios.
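    As a quick illustration (a hedged aside, not part of the original post), IIS Express can be started against any folder straight from the command line. The /path and /port switches are standard iisexpress.exe options; the folder and port here are made up:

        REM Serve the site in C:\MySite on port 8080
        "%ProgramFiles%\IIS Express\iisexpress.exe" /path:C:\MySite /port:8080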
    Visual Studio 2010 SP1 adds support for IIS Express – and you can start to take advantage of this with last month's VS 2010 SP1 Beta release.

    Downloading and Installing IIS Express

    IIS Express isn't included as part of the VS 2010 SP1 Beta. Instead it is a separate ~4 MB download which you can download and install using this link (it uses WebPI to install it). Once IIS Express is installed, VS 2010 SP1 will enable some additional IIS Express commands and dialog options that allow you to easily use it.

    Enabling IIS Express for Existing Projects

    Visual Studio today defaults to using the built-in ASP.NET Development Server (aka Cassini) when running ASP.NET projects. Converting your existing projects to use IIS Express is really easy. You can do this by opening up the project properties dialog of an existing project, clicking the "Web" tab within it, and selecting the "Use IIS Express" checkbox. Or, even simpler, just right-click on your existing project and select the "Use IIS Express…" menu command. Now when you run or debug your project, you'll see that IIS Express starts up and runs automatically as your web server. You can optionally right-click on the IIS Express icon within your system tray to see/browse all of the sites and applications running on it. Note that if you ever want to revert back to using the ASP.NET Development Server, you can do this by right-clicking the project again and then selecting the "Use Visual Studio Development Server" option (or go into the project properties, click the Web tab, and uncheck IIS Express). This will revert back to the ASP.NET Development Server the next time you run the project.

    IIS Express Properties

    Visual Studio 2010 SP1 exposes several new IIS Express configuration options that you couldn't previously set with the ASP.NET Development Server. Some of these are exposed via the property grid of your project (select the project node in the solution explorer and then change them via the property window). For example, enabling something like SSL support (which is not possible with the ASP.NET Development Server) can now be done simply by changing the "SSL Enabled" property to "True". Once this is done, IIS Express will expose both an HTTP and an HTTPS endpoint for the project that we can use.

    SSL Self-Signed Certs

    IIS Express ships with a self-signed SSL cert that it installs as part of setup – which removes the need for you to install your own certificate to use SSL during development. Once you change the above drop-down to enable SSL, you'll be able to browse to your site with the appropriate https:// URL prefix and it will connect via SSL. One caveat with self-signed certificates, though, is that browsers (like IE) will go out of their way to warn you that they aren't to be trusted. You can mark the certificate as trusted to avoid seeing dialogs like this – or just keep the certificate un-trusted and press the "continue" button when the browser warns you not to trust your local web server.

    Additional IIS Settings

    IIS Express uses its own per-user ApplicationHost.config file to configure default server behavior. Because it is per-user, it can be configured by developers who do not have admin credentials – unlike the full IIS. You can customize all IIS features and settings via it if you want ultimate server customization (for example: to use your own certificates for SSL instead of self-signed ones). We recommend storing all app-specific settings for IIS and ASP.NET within the web.config file which is part of your project – since that makes deploying apps easier (the settings get copied with the application content). IIS (since IIS 7) no longer uses the metabase, and instead uses the same web.config configuration files that ASP.NET has always supported – which makes xcopy/FTP-based deployment much easier.

    Making IIS Express your Default Web Server

    Above we looked at how to convert existing sites that use the ASP.NET Development Server to instead use IIS Express. You can configure Visual Studio to use IIS Express as the default web server for all new projects by clicking the Tools->Options menu command and opening up the Projects and Solutions->Web Projects node within the Options dialog. Clicking the "Use IIS Express for new file-based web sites and projects" checkbox will cause Visual Studio to use it for all new web sites and projects.

    Summary

    We think IIS Express makes it even easier to build, run and test web applications. It works with all versions of ASP.NET and supports all ASP.NET application types (including, obviously, both ASP.NET Web Forms and ASP.NET MVC applications). Because IIS Express is based on the IIS 7.5 codebase, you have a full web-server feature set that you can use. This means you can build and run your applications just like they'll work on a real production web server. In addition to supporting ASP.NET, IIS Express also supports Classic ASP and other file types and extensions supported by IIS – which also makes it ideal for sites that combine a variety of different technologies. Best of all, you do not need to change any code to take advantage of it. As you can see above, updating existing Visual Studio web projects to use it is trivial. You can begin to take advantage of IIS Express today using the VS 2010 SP1 Beta.

    Hope this helps,

    Scott

    Read the article

  • SQL SERVER – Developer Training Resources and Summary Roundup

    - by pinaldave
    It is always a pleasure for any author when other renowned authors in the industry write about you. Earlier I wrote a five-part blog series on Developer Training, and I have received a phenomenal response to it: plenty of comments, questions and feedback. I thought it would be nice to sum up the whole series as well as answer a few of the questions received.

    Quick Recap

    Developer Training - Importance and Significance - Part 1
    In this part we discussed the importance of training in the real world. The most important and valuable resource of any company is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees; I consider it the most important task.

    Developer Training – Employee Morals and Ethics – Part 2
    In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training them. Quite often training expenses are the real issue for both the employee and the employer. There are companies that pay 100% of the expenses, and there are employees who opt for training at their own expense during their personal time. Training is often looked at as vacation by employees and employers alike, and we need to change this mind-set. One of the ways is to report what you learned back to your manager and implement the newly learned knowledge in day-to-day work.

    Developer Training – Difficult Questions and Alternative Perspective - Part 3
    This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization. The manager often feels pressure to accommodate every single employee for training even though his training budget is limited. It is indeed the responsibility of the developer to get maximum advantage from the training. Training helps organizations immediately, but it stays as a part of an employee's knowledge forever.

    Developer Training – Various Options for Developer Training – Part 4
    In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was short and that I should have explored each training subject in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training. The common element between both methods is the learning material. Learning material can be of any format – videos, books, paper notes or just a plain blackboard. Instructor-led training is a very effective mode, but it is not possible every single time. Over the course of a developer's career, one has to learn lots of new technology, and it is almost impossible to have a quality trainer available on that subject at that time. Books are the most effective and proven method; however, it always helps if someone explains the concepts of the book with a demonstration. In recent times I have started to believe in online training, which leads to a hybrid experience. Online training takes the best part of books and the best part of instructor-led training and delivers effective training in a matter of hours.

    Developer Training – A Conclusive Summary - Part 5
    In this part, I shared what I was continuously thinking about developer training. There is no better teacher than oneself. There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt & overcome" are essential lessons of life. One cannot stop learning or resist change. In the IT industry, the "ego of knowing it all" and the "resistance to change" are the most challenging issues. Once someone overcomes them, life is much easier. I believe that proper, appropriate, high-quality training can help to address these burning issues.

    Opinion of Friends

    I invited a few of my friends to express their opinions about developer training; here they are, listed in the order of the blog posts' publishing dates.

    Nakul Vachhrajani - Developer Trainings-Importance, Benefits, Tips and follow-up
    Nakul sums up many of the concepts which are complementary to my blog posts, addressing the burning question of developer training from different angles. I am personally very impressed by his following statement: "Being skilled does not mean having just a stack of certifications, but it also means having an understanding about the internals of the products that you are working on – and using that knowledge to improve the efficiency & productivity at the workplace in turn resulting in better products, better consulting abilities and a happier self." Nakul also suggests the online training options of Pluralsight.

    Vinod Kumar - Training–a necessity or bonus
    Vinod Kumar comes up with an excellent follow-up on developer training. Vinod is known for his inspirational writing about SQL Server. He starts with a story of a student who is extremely eager to learn the wisdom of life from a monk, but the monk does not accept him as a disciple for a long time. The conversation between student and monk is indeed the essence of all learning. We all want to learn quickly and be successful, but the most important thing in life is to have the right attitude towards learning, and more so towards life. The blog post ends with a very important thought about how to avoid the famous excuse – "I don't have enough time."

    Ritesh Shah - Training – useful or useless?
    Ritesh brings up a very important concept related to training. In his meticulous style, he explains why training is an important and lifelong process. Training must not stop at any age but should continue forever. The moment training stops, progress stops along with it.

    Paras Doshi - Professional Development Resource
    Paras is known for his to-the-point writing, and he has summarized the five-part series very precisely into a digest post. If you are in a rush and have no time to read my five-part series, I suggest you read his blog post.

    Training Resources

    I am often asked what the best resources for learning new technology are. This is the most difficult question EVER. There are plenty of good training resources available, but when it comes to training our needs are different, our learning preferences are different, and we all have opinions. Additionally, we are all located in different geographic locations worldwide, and there is no way one solution will fit all. However, let me list a few of the training resources which I have built so far; you can consume them if you find them relevant to your needs.

    SQL Server Books
      • SQL Server Interview Questions and Answers
      • SQL Wait Stats
      • SQL Programming Joes 2 Pros

    SQL Server Video Tutorials
      • SQL Server Questions and Answers
      • SQL Server Performance: Indexing Basics
      • SQL Server Performance: Introduction to Query Tuning

    SQL in Sixty Seconds
      • Series of Sixty Seconds Learning Videos on YouTube

    Trust me, the worldwide web is very big, and there is plenty of high-quality learning material available – trainer-led as well as online. I suggest you explore various options and make the best choice for yourself. Remember, training is your personal journey and it should never stop. Are you ready?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. What makes software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she is certain to want something - but then is not happy when it's delivered. Nevertheless, requirements exist. And developers will only be paid if they deliver value. So we had better focus on doing that. Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement.

      You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to?
      You decide to use WPF as the GUI technology? What's the customer's requirement?
      You decide in favor of a layered architecture? What's the customer's requirement?
      You decide to put code in three classes instead of just one? What's the customer's requirement behind that?
      You decide to use MongoDB over MySql? What's the customer's requirement behind that?
      etc.

    I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling, or by relying on "established practices". That's OK in general and most of the time - but still… I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs, and it's easier to check whether a decision falls in their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think. Any requirement pertains either to functionality, to non-functional aspects, or to sustainability. For short, I call those aspects:

      Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers; an auction website should enable you to set up an auction anytime or to find auctions to bid for.
      Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head; an auction website should accept bids from millions of users.
      Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer) you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable. With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted and improved over an unforeseeable number of years so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise. That's because of the nature of changeability: it's different from performance or scalability. Also, a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability… that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with a requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish whose concern it is. "Flexibility" to me means that the software as it is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means that the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85  | Next Page >