Search Results

Search found 360 results on 15 pages for 'addressing'.

Page 11/15 | < Previous Page | 7 8 9 10 11 12 13 14 15  | Next Page >

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: Cloud computing and mobile application development will be the hottest trends of 2011. I expect Windows Azure to be very hot in 2011, with a lot of cloud computing adoption happening on the Windows Azure platform. Web application scalability will be a big challenge for architects next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure: Windows Azure will be one of the hottest technologies of 2011, and adoption of the cloud and Windows Azure will get a lot of attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. There is no need to invest upfront in expensive infrastructure: you pay only for what you use, scale up when you need capacity and pull it back when you don't, and Microsoft handles the patches and maintenance, all in a secure environment with over 99.9% uptime. Silverlight 5: Silverlight is becoming a common technology across a variety of development platforms; you can develop Silverlight applications for the web, the desktop and Windows Phone. The Silverlight 5 beta will be available in the first quarter of next year with many new capabilities, and Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011. Windows Phone 7 Development Tools: Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of the year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post, and MSDN documentation is available from here. EF Code First: I am a big fan of Entity Framework's Code First approach and hope it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications; I expect DDD fans will love it. Entity Framework 4 now supports three approaches, and these will attract different developer audiences. ASP.NET MVC 3: ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack next year, and ASP.NET developers will increasingly move from WebForms development to the ASP.NET MVC Framework. The new Razor view engine is great and will increase the adoption of ASP.NET MVC 3; Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery, with better maintainability, clean generated HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year you can expect more articles on my blog about ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB: NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database on the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien; it is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ.
    I have written a few articles on RavenDB, and you can read them here. Managed Extensibility Framework (MEF): Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution to the runtime extensibility problem. I hope .NET developers will adopt MEF more widely next year in their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post MEF or Managed Extensibility Framework – Creating a Zoo and Animals
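    To make the extensibility idea concrete, here is a minimal MEF sketch in the spirit of that zoo-and-animals post (my own illustration, not the post's code; the part and class names are invented):

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;          // [Export], [ImportMany], ComposeParts
    using System.ComponentModel.Composition.Hosting;  // CompositionContainer, AssemblyCatalog

    public interface IAnimal { string Speak(); }

    [Export(typeof(IAnimal))]
    public class Dog : IAnimal { public string Speak() { return "Woof"; } }

    [Export(typeof(IAnimal))]
    public class Cat : IAnimal { public string Speak() { return "Meow"; } }

    public class Zoo
    {
        // MEF fills this with every IAnimal export it can discover.
        [ImportMany]
        public IEnumerable<IAnimal> Animals { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // AssemblyCatalog scans this assembly; a DirectoryCatalog could instead pick up
            // plug-in DLLs dropped into a folder without recompiling the host.
            using (var container = new CompositionContainer(new AssemblyCatalog(typeof(Program).Assembly)))
            {
                var zoo = new Zoo();
                container.ComposeParts(zoo);   // satisfies the [ImportMany]
                foreach (var animal in zoo.Animals)
                    Console.WriteLine(animal.Speak());
            }
        }
    }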

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation John Courtright, Structural Integrity Engineering There is “heightened” FAA attention to technical issues related to IFE and Wi-Fi Systems Installations The Aging Aircraft Safety Rule – EWIS & Damage Tolerance Analysis The Challenge: Maximize Flight Safety While Minimizing Costs Issue Papers & Testing, Testing, Testing The role of Airworthiness Directives (ADs) in the design of many IFE systems and all antenna systems. Goal is safety AND cost-effective maintenance intervals and inspection techniques The STC Process Briefly Stated Type Certifications (TC) Supplemental Type Certifications (STC) The STC Process Project Specific Certification Plan (PSCP) Managed by FAA Aircraft Certification Office (ACO) Type of Project (Electrical/Mechanical Systems or Structural) Specific Type of Aircraft Being Modified Schedule Design & Installation Location What does the STC Plan (PSCP) Cover? System Description – What does the system do? System qualification – Are the components qualified? Certification requirements – What FARs are applicable? Installation detail – What is being modified? Prototype installation – What is new? Functional Hazard Assessment (FHA) – Is it safe? EZAP-EWIS Requirements – Any aging aircraft issues? Certification Data – How is compliance achieved? Delegation and FAA involvement – Who is doing the work? Proposed certification schedule – When is the installation? Certification documentation – What the FAA expects to see Cabin Systems Certification Concerns In addition to meeting the requirements of DO-160, Cabin System Certification needs to address issues related to: Power management: Generally, IFE and Wi-Fi Systems are classified as “Non-Essential Equipment” from a certification viewpoint. Connected to “non-essential” power buses Must be able to shed IFE & Wi-Fi Systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160) FAA is more relaxed with testing Wi-Fi. It used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50. Aging aircraft concerns – electrical and structural Issue papers addressing technical concerns involving: “Structural Certification Criteria for Large Antenna Installations” Antenna “Vibration/Buffeting Compliance Criteria” DO-160: Environmental Test Procedures DO-160 – “Environmental Conditions and Test Procedures for Airborne Equipment”, issued by RTCA Provides guidance to equipment manufacturers as to testing requirements Temperature: –40C to +55C Vibration and Shock Contaminant susceptibility – fluids and dust Electro-magnetic Interference Cabin systems are generally classified as “non-essential” Swissair 111 crashed (in part) due to non-standard wiring practices. EWIS Design Implications Installation design must take EWIS Requirements into account. This generally means: Aircraft surveys are needed to identify proper wire routing Ensure existing wiring diagrams are correct Identify primary/secondary/tertiary bus locations Verify proper separation of wire bundles exists Required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition Enhanced Zonal Analysis Procedure (EZAP) performed EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) EZAP is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.
Certification Considerations for Wi-Fi Systems Electrical – All existing DO 160 testing required Issue papers required Onboard EMI testing – any interference with aircraft systems when multiple wi-fi users are logged on? Vibration/Buffeting compliance criteria – what is the effect of the antenna on aircraft flight characteristics? Structural certification criteria – what are the stress loads on the aircraft at the antenna location and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required Goal – minimize maintenance inspection intervals

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I described the factors that lead to compliance complexity in financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance. First, let's look at the business implications of increasing complexity of process compliance for financial institutions: increased time and cost of compliance due to duplication of effort in conforming to regulatory requirements, driven by process changes from evolving regulatory mandates, shifting business priorities or internal/external audit requirements; and delays in audit reporting due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process. Next, let's consider some approaches to managing the operational risk of business processes. Financial institutions looking to reduce the operational risk of their processes generally have two choices: rip and replace existing applications with new off-the-shelf applications, or extend the capabilities of existing applications by modeling their data and process interactions with other applications or user channels, outside of the application boundary, using BPM. The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, pre-built compliance of any packaged or custom-built application should not be mistaken for a one-shot fix for future compliance needs, because business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, where vulnerabilities to unknown and accidental faults arise that potentially result in errors and lead to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries, in other words span multiple applications, constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with “KYC” policies void even when the “KYC” checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process “gaps” for financial services processes including “Customer On-boarding”, “Quote-to-Contract”, “Deposit/Loan Origination”, “Trade Exceptions”, “Interest Claim Tracking”, etc.
If you are faced with a similar challenge and need any guidance on the same feel free to drop me a note.

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that currently ACS doesn’t support JWT encryption, if you want encrypted tokens you need to go SAML). In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt: <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm: "{"typ":"JWT","alg":"HS256"}" The payload contains the same data as in the SWT, but JSON rather than querystring format: {"aud":"http://localhost/x.y.z" "iss":"https://adfstest-bhw.accesscontrol.windows.net/" "nbf":1346398795 "exp":1346404795 "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z" "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows" "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]" "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"} The signature is in the third part of the token. Unlike SWT which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header). 
How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it’s very similar to SWT extraction. I've wrapped the basic SWT and JWT in a ClaimInspector.aspx page on gitHub here: SWT and JWT claim inspector. You can drop it into any ASP.Net site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
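    As a rough sketch of the string-only approach described above (assuming the base-64 value has already been pulled out of the BinarySecurityToken element; signature verification is deliberately omitted here):

    using System;
    using System.Text;

    static class JwtInspector
    {
        // JWT parts are base64url encoded with the padding stripped, so restore it first.
        static byte[] DecodeNoPadding(string part)
        {
            string s = part.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)
            {
                case 2: s += "=="; break;
                case 3: s += "=";  break;
            }
            return Convert.FromBase64String(s);
        }

        public static void DumpClaims(string base64TokenFromAcs)
        {
            // The BinarySecurityToken content is plain base-64 of the whole JWT string.
            string jwt = Encoding.UTF8.GetString(Convert.FromBase64String(base64TokenFromAcs));

            // header.payload.signature - the first two parts are the JSON shown above.
            string[] parts = jwt.Split('.');
            Console.WriteLine(Encoding.UTF8.GetString(DecodeNoPadding(parts[0]))); // header
            Console.WriteLine(Encoding.UTF8.GetString(DecodeNoPadding(parts[1]))); // claim set
            // parts[2] is the signature - check it against the trusted key before using any claims.
        }
    }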

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are: • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes. • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations. • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results. • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process. • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis–related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility. • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value. The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist and 86 percent of companies are likely to make a significant investment during the next five years to address these issues.
    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified: • Improving data integrity • Optimizing processes • Integrating the extended financial close process By addressing these issues and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations. To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92 To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • WCF tcp.net client/server connection failing "Stream Security is required"

    - by Tom W.
    I am trying to test a simple WCF net.tcp client/server app. The WCF service is being hosted on Windows 7 IIS. I have enabled net.tcp in IIS. I granted liberal security privileges to the service app by configuring an app pool with admin rights and set the IIS service application to run in that context. I enabled tracing on the service app to troubleshoot. Whenever I run a simple method call against the service from the WCF client app, I get the following exception: "Stream Security is required at http://www.w3.org/2005/08/addressing/anonymous, but no security context was negotiated. This is likely caused by the remote endpoint missing a StreamSecurityBindingElement from its binding." Here is my client configuration:

    <bindings>
      <netTcpBinding>
        <binding name="InsecureTcp">
          <security mode="None" />
        </binding>
      </netTcpBinding>
    </bindings>

    Here is my service configuration:

    <bindings>
      <netTcpBinding>
        <binding name="InsecureTcp">
          <security mode="None" />
        </binding>
      </netTcpBinding>
    </bindings>
    <services>
      <service name="OrderService" behaviorConfiguration="debugServiceBehavior">
        <endpoint address=""
                  binding="netTcpBinding"
                  bindingConfiguration="InsecureTcp"
                  contract="ProtoBufWcfService.IOrder" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="debugServiceBehavior">
          <serviceDebug includeExceptionDetailInFaults="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
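    (For what it's worth, a self-hosted sketch with the same contract and SecurityMode.None built in code on both ends can help isolate whether the binding itself or the IIS/WAS activation is at fault. This is an illustration only, not a diagnosis; the IOrder contract and OrderService class names are taken from the configuration above and assumed to be in scope:)

    using System;
    using System.ServiceModel;

    class TcpNoSecuritySmokeTest
    {
        static void Main()
        {
            // One binding instance, no transport or message security, used by both ends.
            var binding = new NetTcpBinding(SecurityMode.None);

            var host = new ServiceHost(typeof(OrderService), new Uri("net.tcp://localhost:9000/"));
            host.AddServiceEndpoint(typeof(IOrder), binding, "OrderService");
            host.Open();

            var factory = new ChannelFactory<IOrder>(binding,
                new EndpointAddress("net.tcp://localhost:9000/OrderService"));
            IOrder proxy = factory.CreateChannel();
            // ... call an operation on the proxy here ...

            ((IClientChannel)proxy).Close();
            factory.Close();
            host.Close();
        }
    }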

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

  • WPF DataGrid ComboBox Column: Propagating Header Combobox throughout column...

    - by LostKaleb
    Hey there! Here's my question: I've got a DataGrid in WPF and its first column is a DataGridComboBoxColumn. What I'd like to do is have a header for that column that also contains a combobox: altering the header will propagate the change throughout the column. I can get this done visually, but when I submit the data, the list that is bound to the DataGrid does not carry the new combobox values. <dg:DataGridComboBoxColumn SelectedItemBinding="{Binding BIBStatus}" ItemsSource="{Binding Source={StaticResource typeStatus}}" Width="0.20*"> <dg:DataGridComboBoxColumn.HeaderTemplate> <DataTemplate> <StackPanel> <TextBlock Text="Built-In Bridge"/> <ComboBox SelectedItem="{Binding BIBStatus}" SelectionChanged="ComboBox_SelectionChanged" ItemsSource="{Binding Source={StaticResource typeStatus}}"/> </StackPanel> </DataTemplate> </dg:DataGridComboBoxColumn.HeaderTemplate> </dg:DataGridComboBoxColumn> In the ComboBox_SelectionChanged handler I have the following code: DataGridColumn comboCol = this.gridResults.Columns[0]; for (int i = 0; i < this.gridResults.Items.Count; i++) { ComboBox myCmBox = (comboCol.GetCellContent(this.gridResults.Items[i]) as ComboBox); myCmBox.SelectedValue = ((ComboBox)sender).SelectedValue; } When I submit the data, I submit the list that is the DataContext of the DataGrid; if I change the values of the first column a row at a time, i.e. clicking the cell with the combobox in each row, the values are propagated to the list in the DataContext; however, if I change the values of the first column via the header, they are not. Can anyone tell me what I'm missing? I'm guessing it's the way I affect each row, but I've tried SelectedValue, SelectedItem and SelectedIndex... all to no avail. Thanks in advance...
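    (For comparison, a sketch of pushing the header's selection into the bound data items rather than into each cell's ComboBox. It assumes the grid's ItemsSource is a collection of row objects exposing the BIBStatus property bound above, here called RowItem purely for illustration, with BIBStatus being a string that raises PropertyChanged:)

    using System.Linq;
    using System.Windows.Controls;

    public partial class ResultsView
    {
        private void ComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            var newStatus = ((ComboBox)sender).SelectedItem as string;

            // Update the data the grid is bound to; the cells refresh themselves through
            // SelectedItemBinding, and the list you submit then carries the new value.
            foreach (var row in gridResults.Items.OfType<RowItem>())
            {
                row.BIBStatus = newStatus;
            }
        }
    }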

    Read the article

  • Common vulnerabilities for WinForms applications

    - by David Stratton
    I'm not sure if this is on-topic or not here, but it's so specific to .NET WinForms that I believe it makes more sense here than at the Security stackexchange site. (Also, it's related strictly to secure coding, and I think it's as on-topic as any question asking about common website vulnerabiitles that I see all over the site.) For years, our team has been doing threat modeling on Website projects. Part of our template includes the OWASP Top 10 plus other well-known vulnerabilities, so that when we're doing threat modeling, we always make sure that we have a documented process to addressing each of those common vulnerabilities. Example: SQL Injection (Owasp A-1) Standard Practice Use Stored Parameterized Procedures where feasible for access to data where possible Use Parameterized Queries if Stored Procedures are not feasible. (Using a 3rd party DB that we can't modify) Escape single quotes only when the above options are not feasible Database permissions must be designed with least-privilege principle By default, users/groups have no access While developing, document the access needed to each object (Table/View/Stored Procedure) and the business need for access. [snip] At any rate, we used the OWASP Top 10 as the starting point for commonly known vulnerabilities specific to websites. (Finally to the question) On rare occasions, we develop WinForms or Windows Service applications when a web app doesn't meet the needs. I'm wondering if there is an equivalent list of commonly known security vulnerabilities for WinForms apps. Off the top of my head, I can think of a few.... SQL Injection is still a concern Buffer Overflow is normally prevented by the CLR, but is more possible if using non-managed code mixed in with managed code .NET code can be decompiled, so storing sensitive info in code, as opposed to encrypted in the app.config... Is there such a list, or even several versions of such a list, from which we can borrow to create our own? If so, where can I find it? I haven't been able to find it, but if there is one, it would be a great help to us, and also other WinForms developers.
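    To make the first practice concrete, a minimal parameterized-query sketch (the connection string, table and column names are invented for the example):

    using System.Data;
    using System.Data.SqlClient;

    public static class OrderQueries
    {
        public static DataTable FindOrders(string connectionString, string customerName)
        {
            // The value travels as a typed parameter, never as concatenated SQL text.
            const string sql = "SELECT OrderId, Total FROM Orders WHERE CustomerName = @name";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = customerName;

                var table = new DataTable();
                new SqlDataAdapter(command).Fill(table);   // Fill opens and closes the connection
                return table;
            }
        }
    }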

    Read the article

  • Error creating Rails DB using rake db:create

    - by Simon
    Hi, I'm attempting to get my first "hello world" Rails example going using the Rails getting started guide on my OSX 10.6.3 box. When I go to execute the first rake db:create command (I'm using mysql) I get: simon@/Users/simon/source/rails/blog/config: rake db:create (in /Users/simon/source/rails/blog) Couldn't create database for {"reconnect"=>false, "encoding"=>"utf8", "username"=>"root", "adapter"=>"mysql", "database"=>"blog_development", "pool"=>5, "password"=>nil, "socket"=>"/opt/local/var/run/mysql5/mysqld.sock"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation) I found plenty of stackoverflow questions addressing this problem with the following advice: Verify that user and password are correct (I'm running with no password for root on my dev box) Verify that the socket is correct - I can cat the socket, so I assume it's correct Verify that the user can create a DB (As you can see, root can connect and create this DB no problem) simon@/Users/simon/source/rails/blog/config: mysql -uroot -hlocalhost Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 16 Server version: 5.1.45 Source distribution Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> create database blog_development; Query OK, 1 row affected (0.00 sec) Any idea on what might be going on here?

    Read the article

  • problem with addressfilter in WCF

    - by Zé Carlos
    I've built my own WCF channel with all the necessary stuff (encoders, bindings, etc.) to use with ServiceHost. I just want to build the "channel stack", making no customizations at the "Service Model" level. To accomplish this, my encoder returns proper ServiceModel.Messages with an XML infoset just like any other channel does. Let's assume the following service implementation:

    [ServiceContract(Namespace = "http://MyNS")]
    public interface IService1
    {
        [OperationContract(IsOneWay = true)]
        void dummy();
    }

    public class Service1 : IService1
    {
        public void dummy()
        {
            Console.WriteLine("In Service1:dummy()");
        }
    }

    I used this service through other bindings and traced the following ServiceModel.Message contents (SOAP format):

    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing">
      <s:Header>
        <a:Action s:mustUnderstand="1">http://MyNS/IService1/dummy</a:Action>
        <a:To s:mustUnderstand="1">amqp://localhost</a:To>
      </s:Header>
      <s:Body>
        <dummy xmlns="http://MyNS"></dummy>
      </s:Body>
    </s:Envelope>

    Then (just to debug) I changed my encoder to always return this message. When I use my custom channel, the WCF runtime replies with a fault message telling me: "The message with To '' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree." I read that the default EndpointDispatcher.AddressFilter simply looks at the "To" header and delivers the message to the corresponding service. This works with other bindings, so why doesn't it work with my custom channel too? Is there any way I can check what the default AddressFilter is doing? Thanks
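    For reference, the "To"-header matching being described can be relaxed on the service side with AddressFilterMode (shown here on the Service1 class from above; whether that is the right fix for a custom channel stack is a separate question):

    using System;
    using System.ServiceModel;

    // AddressFilterMode.Any tells the EndpointDispatcher not to match the WS-Addressing
    // "To" header against the endpoint address - the check that raises the
    // "AddressFilter mismatch" fault quoted above.
    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
    public class Service1 : IService1
    {
        public void dummy()
        {
            Console.WriteLine("In Service1:dummy()");
        }
    }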

    Read the article

  • WCF Message Debugging on WebHttpBehavior

    - by Programming Hero
    I've created a custom binding in WCF for a custom MessageEncoder to allow messages to be written as XML using a wider range of encodings than WCF supports out of the box. The encoder appears to be working and I am able to send and receive messages, but I want to verify that the XML message being written is exactly as required by the service I am trying to consume. I've turned on message logging for WCF using the diagnostic trace listeners to output the messages sent and received over the wire to a log file. Unfortunately, for calls using my encoder, the message is displayed as ... stream ... EDIT: I don't think it's anything to do with my custom encoding. I have experimented with my custom binding a little, switching to using the built-in text encoding and http transport. I still don't get a message body logged in the message trace. EDIT2: Having done further investigation, the issue looks to be related not to the custom binding, but to the custom behaviour. I'm applying the <webHttp/> behaviour. Once this is specified (along with manual addressing), the tracing behaviour shows up. Is this a known issue with WebHttpBehavior? Am I still barking up the wrong tree?

    Read the article

  • Universal Authentication to Google Data API?

    - by viatropos
    Hey, I want to be able to have say 10 admin users store all their documents on google docs for a domain ('http://docs.google.com/a/domain.com'), and have everyone else be able to view them through 'domain.com/documents'. I'm just not certain how the whole authentication thing works in that case. Should I use OAuth? Or could I just use ClientLogin for say the root/global admin, and anytime someone goes to the site, they login as that? That works for personal docs, but it doesn't seem to be working for Google Apps. I would like it so the user has no idea they're accessing google docs, so I don't want them to have to say "Yes, Authenticate this App with Google", as seen in this Doclist Manager App. The app is basically: Admin stores a bunch of forms and documents User uses form and views documents the admin has posted ... so there's no need to access the user's Google Docs. But it seems like AuthSub and OAuth are addressing that instead... Thanks for the tips.

    Read the article

  • Error generating Interface with svcutil.exe

    - by capdragon
    Can someone help me generate the Interface (c#) file for the following OGC schema? Schema files: Download Schema Files Link I need to create web services for the Ordering wsdl in the schema zip file above. I've been at it for days now with no luck generating the interface. I've tried: svcutil.exe thewsdl.wsdl /language:c# /out:ITheInterface.cs svcutil Order.wsdl /out:IOrder.cs svcutil Order.wsdl Order.xsd ..\ws-addressing\ws-addr.xsd /out:IOrder.cs svcutil Order.wsdl Order.xsd ws-addr.xsd /out:IOrder.cs and i get the following error: Microsoft (R) Service Model Metadata Tool [Microsoft (R) Windows (R) Communication Foundation, Version 4.0.30319.1] Copyright (c) Microsoft Corporation. All rights reserved. Error: Cannot read ws-addr.xsd. Cannot load file D:\Documents\DEV\SARPilot\Docs\eoschema\schema\OrderSchema\ws-addr.xsd as an Assembly. Check the FusionLogs f or more Information. Could not load file or assembly 'file:///D:\Documents\DEV\SARPilot\Docs\eoschema\schema\OrderSchema\ws-addr.xsd' or one of its dependencies. The module was expected to contain an assembly manifest.

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it. Goals: Create a 'road-warrior' capable star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers. Enable CIFS access to a cloud-server based installation of Alfresco. Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future. Given: All servers will live in the public internet cloud (Rackspace Cloud Servers). The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start). Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops. Based on my research thus far, to accomplish this, I'll need: OpenVPN with bridging enabled to create a star-shaped "virtual" LAN http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html A Road Warrior Network Configuration, as described in this Shorewall article (lower down the page) http://www.shorewall.net/OPENVPN.html Configure bridge addressing (probably DHCP) http://openvpn.net/index.php/open-source/faq.html#bridge-addressing Configure CIFS / Samba to accept the VPN IP address http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel Set up client software, with keys configured for access (potentially through an OpenVPN-AS client portal) http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html

    Read the article

  • VS2008 adding SQL Server Database (SQL Server 2008 Mgmt Studio) not working

    - by Kahn
    I'm trying to practice using the ASP.Net MVC at home, but I ran into an impossible problem. I cannot open a connection to SQL Server 2008, I get this error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. ..." I've googled around for numerous responses, none of them either working or addressing this issue. I'm running Vista 32bit, my SQL Server 2008 Mgmt Studio is also 32bit, I have SP1 installed both on VS2008 Professional, as well as the SQL Server. I changed the machine.config connectionStrings from ./SQLExpress to my SQL Server 2008 name. Now if I connect manually through web.config, in an asp:datasource or code-behind, everything works fine. But for some reason trying to add a DB Connection directly like this always gets the error. This is pretty fatal, since I can't rightly do much unless I can use LINQ to SQL with my MVC test project, and this is the only way I know how. Worked fine in school and work, but not at home. Installing SQL Server Express 2005, as some have suggested, is not an option. Obviously it HAS to work with SQL Server 2008. Thanks in advance.

    Read the article

  • Capture *all* display-characters in JavaScript?

    - by Jean-Charles
    I was given an unusual request recently that I'm having the most difficult time addressing that involves capturing all display-characters when typed into a text box. The set up is as follows: I have a text box that has a maxlength of 10 characters. When the user attempts to type more than 10 characters, I need to notify the user that they're typing beyond the character count limit. The simplest solution would be to specify a maxlength of 11, test the length on every keyup, and truncate back down to 10 characters, but this solution seems a bit kludgy. What I'd prefer to do is capture the character before keyup and, depending on whether or not it is a display-character, present the notification to the user and prevent the default action. A white-list would be challenging since we handle a lot of international data. I've played around with every combination of keydown, keypress, and keyup, reading event.keyCode, event.charCode, and event.which, but I can't find a single combination that works across all browsers. The best I could manage is the following, which works properly in >=IE6, Chrome5 and FF3.6, but fails in Opera. NOTE: The following code utilizes jQuery.

    $(function(){
        $('#textbox').keypress(function(e){
            var $this = $(this);
            var key = ('undefined'==typeof e.which?e.keyCode:e.which);
            if ($this.val().length==($this.attr('maxlength')||10)) {
                switch(key){
                    case 13: //return
                    case 9:  //tab
                    case 27: //escape
                    case 8:  //backspace
                    case 0:  //other non-alphanumeric
                        break;
                    default:
                        alert('no - '+e.charCode+' - '+e.which+' - '+e.keyCode);
                        return false;
                };
            }
        });
    });

    I'll grant that what I'm doing is likely over-engineering the solution, but now that I'm invested in it, I'd like to know of a solution. Thanks for your help!

    Read the article

  • Advice on improving programming skills, learning capabilities?

    - by anonymous-coward1234
    Hi all, After 2.5 years of professional Java programming, I still have problems that make my job difficult and, more importantly - more times than I would like to admit - not enjoyable. I would like to ask for advice from more experienced people on ways that would help me overcome them. These are the problems I have: I do not absorb new knowledge easily. Even when I understand something, after a couple of days I easily forget even basic stuff. Other co-workers, even with the same working experience, put things easily into "context" when reading about new technologies, and are able to compare them in "real time" with similar technologies they have already used. I always try to address all the issues in whatever I am doing in one go, which results in me trying to resolve too many problems at the same time and losing control completely. I find it difficult to make up my mind about which single problem I should address first, and even when I do, I find myself throwing away code that I wrote because I started addressing the wrong issue first. As far as architecture and data modeling are concerned, I have difficulty making decisions on what objects must be created, with what hierarchy, interfaces, abstraction, etc. I imagine that - to a certain degree - these things come with experience. But after 2.5 years of Java programming, I would expect myself to have come much farther than I have, both in terms of absorption and experience. Is there a way to improve my learning speed? Any books, methods, advice is welcome.

    Read the article

  • Why Android for enterprise applications?

    - by mcabral
    Recently one of our clients has been considering the possibility of picking up an old WinMobile 5.0 project. Several features are to be added, to the point that it will be a major version update. The client is worried about the mobile market, and thinks there's a chance all the effort put into this development will have to be thrown away in a couple of years due to the dynamics of the mobile market and the deprecation of mobile devices. So the client is not sure whether he should continue with Windows Mobile (changing from WM 5.0 to 6.x) or start from scratch with another technology. For our part, we have been studying the mobile market, looking for clues as to which will be the winning horse. The safe move seems to be to continue with WM, just because rewriting an entire application from scratch involves more risks and delays. On the other hand, WM seems to be losing market share and the ghost of an exit on their part is growing stronger every day. But what can be said about Android? Everyone is talking about it and it is growing at full speed, but what advantages will it bring to the table? Why should we start a fresh application on this technology? So the question remains the same: is Android mature enough for an enterprise application? Would you recommend it to one of your clients? Would you port/rewrite a WM application to Android? What's the trade-off? EDIT: Addressing the comments. The app is entirely built with C# and the Compact Framework. The app is for logistics/management.

    Read the article

  • A standard set of questions to ask an interviewer?

    - by Rob Wells
    We have had many questions for interviewers to ask interviewees. But none addressing information flow in the other direction, interviewee to interviewer. Just an indirect question about "deal breakers" and one about "finding dream jobs". What I'm after is when you're interviewing at a company do you have a set of questions that you like to ask to help get a feel for the company and the work environment? I have a series of questions that I like to ask that range from the development environment to testing techniques to how the team get on together. Anything else you'd like to ask? Edit: I moved my original list of interviewer questions to my answer below. I've also gone through the other answers and added the ones thought were useful in to that answer. The answer is community wiki so feel free to add anything useful. N.B. This is my first cut of categories. Feel free to modify/add/etc. the categories. Or to recategorise the questions themselves.

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may seem silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together had a reason to do so. What exactly, you ask? I'll tell you right away; I use C as an example: 1: Stack local scope memory allocation: So, typical local memory allocation uses the stack. Just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly need to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is further translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b and so on? And access them just by mov 0x00000000,#10? Because you won't actually access memory blocks 0x00000000 and 0x00000004 but the ones your OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster. 2: Variable allocation duplication: When you run an application, the loader loads its code into RAM. When you create a variable, or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and the actual number in memory. So, there are 2 entries of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not just, when declaring the variable, work out at which memory block it will live, and then, when it is used, just insert that memory location?

    Read the article

  • How to perform Rails model validation checks within model but outside of filters using ledermann-rails-settings and extensions

    - by user1277160
    Background I'm using ledermann-rails-settings (https://github.com/ledermann/rails-settings) on a Rails 2/3 project to extend virtually the model with certain attributes that don't necessarily need to be placed into the DB in a wide table and it's working out swimmingly for our needs. An additional reason I chose this Gem is because of the post How to create a form for the rails-settings plugin which ties ledermann-rails-settings more closely to the model for the purpose of clean form_for usage for administrator GUI support. It's a perfect solution for addressing form_for support although... Something that I'm running into now though is properly validating the dynamic getters/setters before being passed to the ledermann-rails-settings module. At the moment they are saved immediately, regardless if the model validation has actually fired - I can see through script/console that validation errors are being raised. Example For instance I would like to validate that the attribute :foo is within the range of 0..100 for decimal usage (or even a regex). I've found that with the previous post that I can use standard Rails validators (surprise, surprise) but I want to halt on actually saving any values until those are addressed - ensure that the user of the GUI has given 61.43 as a numerical value. The following code has been borrowed from the quoted post. class User < ActiveRecord::Base has_settings validates_inclusion_of :foo, :in => 0..100 def self.settings_attr_accessor(*args) >>SOME SORT OF UNLESS MODEL.VALID? CHECK HERE args.each do |method_name| eval " def #{method_name} self.settings.send(:#{method_name}) end def #{method_name}=(value) self.settings.send(:#{method_name}=, value) end " end >>END UNLESS end settings_attr_accessor :foo end Anyone have any thoughts here on pulling the state of the model at this point outside of having to put this into a before filter? The goal here is to be able to use the standard validations and avoid rolling custom validation checks for each new settings_attr_accessor that is added. Thanks!

    Read the article

  • php error reporting - having trouble matching local & web server settings

    - by Andrew Heath
    I'm trying to add a custom error handler to my site, but in doing so have discovered that my webhost's PHP error reporting settings and those of my localhost (default XAMPP) vary considerably. While I thought I was programming to E_STRICT like a good little boy, adding the error handler to my webhost revealed craploads of Runtime Notices. Example: Runtime notice strtotime() [function.strtotime]: It is not safe to rely on the system's timezone settings. Please use the date.timezone setting, the TZ environment variable or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead In /home/... Clearly this isn't a red-alert, showstopping error. But what bothers me is that it doesn't show up on my localhost. I'd certainly like to improve my code by addressing these sorts of issues if I could see them! I've looked through both php.ini files, and my webhost's setting is error_reporting = E_ALL & ~E_NOTICE whereas mine was error_reporting = E_STRICT, which I had thought was better. However, changing mine to match and rebooting the server doesn't seem to have accomplished anything. Could someone please point me in the right direction?

    Read the article

  • Being pressured to GOTO the dark-side

    - by Dan McG
    We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features into existing code that is already infected with spaghetti code. Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with due to code dating back to or before 1984 being riddled with GOTOs that would make many Pastafarians believe it was inspired by the Flying Spaghetti Monster itself. Unfortunately the language this is written in doesn't have any ready-made refactoring tools, so it makes it harder to push the 'refactor to increase productivity later' angle, because short-term wins are the only wins paid attention to here... Has anyone else experienced this issue whereby everybody agrees that we cannot keep adding new GOTOs to jump 2000 lines to a random section, but analysts continually insist on doing it just this one time and management approves it? tldr; How can one go about addressing the issue of developers being pressured (forced) to continually add GOTO statements (by add, I mean add to jump to random sections many lines away) because it 'gets that feature in quicker'? I'm beginning to fear we may lose valuable developers to the raptors over this...

    Read the article
