Search Results

Search found 72651 results on 2907 pages for 'application end'.


  • A tiny Utility to recycle an IIS Application Pool

    - by Rick Strahl
    In the last few weeks I've annoyingly been having problems with an area on my Web site. It's basically ancient articles that are using ASP classic pages, and for reasons unknown ASP classic locks up on these pages frequently. It's not an individual page, but ALL ASP classic pages lock up. Ah yes, gotta love old tech gone bad. It's not super critical since the content is really old, but still a hassle since it's linked content that still gets quite a bit of traffic. When it happens all ASP classic in that AppPool dies. I've been having a hard time tracking this one down - I suspect an errant COM object. I have a Web Monitor running on the server that's checking for failures, and while the monitor can detect the failures when the timeouts occur, I didn't have a good way to just restart that particular application pool. I started putzing around with PowerShell, but - as so often seems the case - I can never get the PowerShell syntax right - I just don't use it enough and have to dig out cheat sheets etc. In any case, after about 20 minutes of that I decided to just create a small .NET Console Application that does the trick instead, and in a few minutes I had this:

    using System;
    using System.Collections.Generic;
    using System.Text;
    using System.DirectoryServices;

    namespace RecycleApplicationPool
    {
        class Program
        {
            static void Main(string[] args)
            {
                string appPoolName = "DefaultAppPool";
                string machineName = "LOCALHOST";

                if (args.Length > 0)
                    appPoolName = args[0];
                if (args.Length > 1)
                    machineName = args[1];

                string error = null;
                DirectoryEntry root = null;
                try
                {
                    Console.WriteLine("Restarting Application Pool " + appPoolName + " on " + machineName + "...");
                    root = new DirectoryEntry("IIS://" + machineName + "/W3SVC/AppPools/" + appPoolName);
                    Console.WriteLine(root.InvokeGet("Name"));
                    root.Invoke("Recycle");
                    Console.WriteLine("Application Pool recycling complete...");
                }
                catch (Exception ex)
                {
                    error = "Error: Unable to access AppPool: " + ex.Message;
                }

                if (!string.IsNullOrEmpty(error))
                {
                    Console.WriteLine(error);
                    return;
                }
            }
        }
    }

    To run it you basically provide the name of the ApplicationPool and optionally a machine name if it's not on the local box:

    RecycleApplicationPool.exe "WestWindArticles"

    And off it goes. What's nice about AppPool recycling versus doing a full IISRESET is that it only affects the AppPool, and more importantly AppPool recycles happen in a staggered fashion - the existing instance isn't shut down immediately until requests finish, while a new instance is fired up to handle new requests. So, now I can easily plug this executable into my West Wind Web Monitor as an action to take when the site is not responding or timing out, which is a big improvement over hanging for an unspecified amount of time. I'm posting this fairly trivial bit of code just in case somebody (maybe myself a few months down the road) is searching for ApplicationPool recycling code. It's clearly trivial, but I've written batch files for this a bunch of times before, and actually having a small utility around without having to worry whether PowerShell is installed and configured right is actually an improvement. Next time I think about using PowerShell, remind me that it's just easier to just build a small .NET Console app, 'k?
    :-) Resources: Download Executable and VS Project. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, .NET, Windows.
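
    Incidentally, if you would rather avoid the old IIS ADSI provider, the same recycle can be done through the IIS 7 management API. The following is just a minimal sketch (not part of the utility above), assuming a reference to the Microsoft.Web.Administration assembly that ships with IIS 7 and later:

    using System;
    using Microsoft.Web.Administration;

    class RecycleViaServerManager
    {
        static void Main(string[] args)
        {
            // Pool name is a placeholder; pass your own as the first argument.
            string appPoolName = args.Length > 0 ? args[0] : "DefaultAppPool";

            using (ServerManager manager = new ServerManager())
            {
                ApplicationPool pool = manager.ApplicationPools[appPoolName];
                if (pool == null)
                {
                    Console.WriteLine("Application Pool not found: " + appPoolName);
                    return;
                }
                // Performs the same staggered recycle as Invoke("Recycle") above.
                pool.Recycle();
            }
        }
    }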

    Read the article

  • Netretail's online retail operation benefits from personal contact

    - by christopher.jones
    Hot on oracle.com is a snapshot of Netretail Holding B.V., profiling their use of PHP and Oracle technology, such as the Oracle RAC cluster database, to become a leading online retailer across Central and Eastern Europe. We've also just refreshed our key PHP Scalability and High Availability whitepaper, which talks about connection pooling (DRCP) and Fast Application Notification (FAN). We brought it up to date for 11gR2 and PHP 5.3. It now includes the new 11gR2 V$CPOOL_CONN_INFO view, the new columns for DBA_CPOOL_INFO, information about LOGOFF triggers, and information about the support for Client Result Caching with DRCP. Back to Netretail. Two of their secrets to success are keeping technically up to date, and networking. That is, networking in the business sense. I had the pleasure of meeting Michal Táborský (@whizz), the Chief System Architect, when he was in California for a Velocity conference. Michal took time to visit Oracle HQ and talk with our developers about his then-current architecture and future needs. I also met his manager at last year's Oracle OpenWorld conference. Having built up a relationship with us, Netretail now has access to Oracle Development staff. While this will never bypass Oracle Support (which has tools, systems, etc. that are needed and useful for resolving issues), it makes communication easier for some classes of questions. It helps discussions that will let us improve Oracle products, and make Netretail stronger. I like this. And there's no reason why you can't talk with us too. You know where to email me.

    Read the article

  • Why do I have a gnomekeyring.IOError when doing "quickly share"?

    - by Agmenor
    When I want to push my app to Launchpad by doing quickly share --verbose, I get the following GNOME Keyring error:

    Get Launchpad Settings
    Traceback (most recent call last):
      File "/usr/share/quickly/templates/ubuntu-application/share.py", line 101, in <module>
        launchpad = launchpadaccess.initialize_lpi()
      File "/usr/lib/python2.7/dist-packages/quickly/launchpadaccess.py", line 91, in initialize_lpi
        allow_access_levels=["WRITE_PRIVATE"])
      File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 539, in login_with
        credential_save_failed, version)
      File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 342, in _authorize_token_and_login
        authorization_engine.unique_consumer_id)
      File "/usr/lib/python2.7/dist-packages/launchpadlib/credentials.py", line 282, in load
        return self.do_load(unique_key)
      File "/usr/lib/python2.7/dist-packages/launchpadlib/credentials.py", line 336, in do_load
        'launchpadlib', unique_key)
      File "/usr/lib/python2.7/dist-packages/keyring/core.py", line 34, in get_password
        return _keyring_backend.get_password(service_name, username)
      File "/usr/lib/python2.7/dist-packages/keyring/backend.py", line 154, in get_password
        items = gnomekeyring.find_network_password_sync(username, service)
    gnomekeyring.IOError
    ERROR: share command failed
    Aborting

    This used to work, so I already have SSH and GPG configured. This is probably part of the explanation: I get this error when I am connected to this machine through an SSH tunnel with X forwarding, but I don't get it when I have physical access to the computer. Could you please give me some indication of what to do?

    Read the article

  • Building Single Page Apps on the Microsoft Stack

    - by Stephen.Walther
    Thank you everyone who came to my talk last night on Building Single Page Apps on the Microsoft Stack. I've attached the slides and code samples below. Here's a quick summary of the talk. I argued that Single Page Apps are better than traditional server-side apps because:

    Single Page Apps are Stateful – In a traditional server-side app, whenever you navigate to a new page, all of your previous state is lost. It is like rebooting your computer whenever you perform any action.
    In a Single Page App, Your Presentation Layer is Not Miles Away – In a traditional server-side app, because everything happens on the server, your presentation layer is separated from the user by space and time. In a Single Page App, the presentation layer is in the browser and not the server (which is the right place for a presentation layer).
    A Single Page App Respects the Web – It is easier to take advantage of HTML5 and related standards when building a Single Page App.

    Next, I recommended using the following four technologies when building a web application:

    Knockout – This is how you create your presentation layer.
    ASP.NET Web API – This is how you expose JSON data from your web server and perform server-side validation.
    HTML5 – This is how you implement client-side validation.
    Sammy – This is how you implement client-side routing and create a Single Page App with multiple virtual pages.

    There are code samples in the download (look in the Samples folder) which demonstrate how all of these technologies work when building Single Page Apps. (A small Web API sketch follows below.) Powerpoint Sample Code
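
    By way of illustration, here is a minimal, hypothetical ASP.NET Web API controller of the kind the talk recommends; it is not taken from the attached samples, and the Movie type is made up:

    using System.Collections.Generic;
    using System.Web.Http;

    // A made-up model class for illustration only.
    public class Movie
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    // GET api/movies returns this list serialized as JSON by default,
    // ready to be loaded into a Knockout view model on the client.
    public class MoviesController : ApiController
    {
        private static readonly List<Movie> _movies = new List<Movie>
        {
            new Movie { Id = 1, Title = "Star Wars" },
            new Movie { Id = 2, Title = "The Hobbit" }
        };

        public IEnumerable<Movie> GetMovies()
        {
            return _movies;
        }
    }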

    Read the article

  • Is QtQuick.Controls available on Ubuntu 13.10

    - by javascript is future
    I was looking to do UI development in QML, and I really want it to look native. I found QtQuick.Controls (http://qt-project.org/doc/qt-5.1/qtquickcontrols/qtquickcontrols-index.html), but when I try to make a simple application, it tells me that QtQuick.Controls isn't installed.

    main.qml:

    import QtQuick 2.1
    import QtQuick.Controls 1.0

    Rectangle {
        height: 200
        width: 200
    }

    terminal:

    $ qmlscene main.qml
    file:///tmp/main.qml:2 module "QtQuick.Controls" is not installed

    Also, I downloaded the source from https://qt.gitorious.org/qt/qtquickcontrols/source/stable and ran qmake && make, but this returned the following output:

    cd src/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/src.pro -o Makefile ) && make -f Makefile
    make[1]: Entering directory '/tmp/qtquickcontrols/src'
    cd controls/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/controls/controls.pro -o Makefile ) && make -f Makefile
    make[2]: Entering directory '/tmp/qtquickcontrols/src/controls'
    g++ -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -O2 -fvisibility=hidden -fvisibility-inlines-hidden -std=c++0x -fno-exceptions -Wall -W -D_REENTRANT -fPIC -DQT_NO_XKB -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_PLUGIN -DQT_QUICK_LIB -DQT_QML_LIB -DQT_WIDGETS_LIB -DQT_NETWORK_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/share/qt5/mkspecs/linux-g++ -I. -I/usr/include/qt5 -I/usr/include/qt5/QtQuick -I/usr/include/qt5/QtQml -I/usr/include/qt5/QtWidgets -I/usr/include/qt5/QtNetwork -I/usr/include/qt5/QtGui -I/usr/include/qt5/QtGui/5.1.1 -I/usr/include/qt5/QtGui/5.1.1/QtGui -I/usr/include/qt5/QtCore -I/usr/include/qt5/QtCore/5.1.1 -I/usr/include/qt5/QtCore/5.1.1/QtCore -I.moc/release-shared -o .obj/release-shared/qquickaction.o qquickaction.cpp
    qquickaction.cpp:49:39: fatal error: private/qguiapplication_p.h: No such file or directory
    #include <private/qguiapplication_p.h>
    ^

    Is there some PPA I could use, or do I have to wait for Trusty to get out before I can use native controls from Qt? Regards

    Read the article

  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to retrieve SharePoint data and display it in a Windows Forms Application. 1. Open Visual Studio 2010 and create a new Project. 2. In the project template select Windows Forms Application. 3. In order to communicate with SharePoint from a Windows Forms Application we need to add the two SharePoint Client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI. 4. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.) 5. Open Form1 in design view and from the Toolbox menu add a Button, TextBox, Label and DataGridView to the form. 6. Next double click on the Load Button; this will open the code view of the form. Add a using statement to reference the SharePoint Client Library, then create two methods, one to load the site title and one to load the lists. See below:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Security;
    using System.Windows.Forms;
    using SP = Microsoft.SharePoint.Client;

    namespace ClientObjectModel
    {
        public partial class Form1 : Form
        {
            // url of the Sharepoint site
            const string _context = "theurlofthesharepointsite";

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void getsitetitle()
            {
                SP.ClientContext context = new SP.ClientContext(_context);
                SP.Web _site = context.Web;
                context.Load(_site);
                context.ExecuteQuery();
                txttitle.Text = _site.Title;
                context.Dispose();
            }

            private void loadlist()
            {
                using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                {
                    SP.Web _web = _clientcontext.Web;
                    SP.ListCollection _lists = _clientcontext.Web.Lists;
                    _clientcontext.Load(_lists);
                    _clientcontext.ExecuteQuery();

                    DataTable dt = new DataTable();
                    DataColumn column;
                    DataRow row;

                    column = new DataColumn();
                    column.DataType = Type.GetType("System.String");
                    column.ColumnName = "List Title";
                    dt.Columns.Add(column);

                    foreach (SP.List listitem in _lists)
                    {
                        row = dt.NewRow();
                        row["List Title"] = listitem.Title;
                        dt.Rows.Add(row);
                    }

                    dataGridView1.DataSource = dt;
                }
            }

            private void cmdload_Click(object sender, EventArgs e)
            {
                getsitetitle();
                loadlist();
            }
        }
    }

    7. That's it. Hit F5 to run the application, then click the Load Button. Your screen should look like the one below. Hope this helps.

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service Oriented Architectural design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which Web services should be built, and in what order. SOA Magazine published an article that identified ten common methods for identifying and defining services (Hubbers, Ligthart, & Terlouw, 2007).

    SOA's Ten Common Methods for Service Identification and Definition:
    Business Process Decomposition
    Business Functions
    Business Entity Objects
    Ownership and Responsibility
    Goal-Driven
    Component-Based
    Existing Supply (Bottom-Up)
    Front-Office Application Usage Analysis
    Infrastructure
    Non-Functional Requirements

    Each of these methods has its own pros and cons regarding its use within the design process. I personally feel that during a design process, multiple methodologies should be used in order to accurately define a design for a system or enterprise system. Personally, I like to create a custom cocktail derived from combining these methodologies in order to ensure that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and routinely use them in my designs.

    Works Cited
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can also start their journey of cloud enabling their mainframe applications.

    Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost.
    Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration.
    Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability.
    Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud enabling their mainframe applications.

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Using Quickly for text-heavy app

    - by Kevin
    I am trying to create a small app that displays documentation. When it is run, the application window will display a main menu with buttons labeled 'Document 1', 'Document 2', etc. If a user clicks on one of those buttons, the text from the corresponding document will be displayed in the window. Very basic. The text documents range in length from 1000 to 5000 words, and they need basic formatting (bold, italic, maybe one or two font choices). My question is this: what is the best way to store and display long blocks of formatted text, using Quickly? There seem to be a few options: (1) I could load the text blocks into long Python strings, (2) I could load the text from text files, or (3) I could somehow copy and paste the formatted text into Glade. In the first two options, I'm not sure how I would format the text (add italic and bold, for instance) once it was loaded. I have experience with PHP/MySQL/HTML/CSS/Javascript, but I'm new to Python. Any help would be appreciated.

    Read the article

  • What are some practical uses of the "new" modifier in C# with respect to hiding?

    - by Joel Etherton
    A co-worker and I were looking at the behavior of the new keyword in C# as it applies to the concept of hiding. From the documentation: Use the new modifier to explicitly hide a member inherited from a base class. To hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier. We've read the documentation, and we understand what it basically does and how it does it. What we couldn't really get a handle on is why you would need to do it in the first place. The modifier has been there since 2003, and we've both been working with .Net for longer than that, and it's never come up. When would this behavior be necessary in a practical sense (e.g., as applied to a business case)? Is this a feature that has outlived its usefulness, or is what it does simply uncommon enough in what we do (specifically, we do web forms and MVC applications and some small factor WinForms and WPF)? In trying this keyword out and playing with it, we found some behaviors that it allows that seem a little hazardous if misused. This sounds a little open-ended, but we're looking for a specific use case that can be applied to a business application that finds this particular tool useful.
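
    For reference, here is a minimal illustration of the hiding behavior in question; the types and names are invented for this example:

    using System;

    class BaseLogger
    {
        public void Log(string message)
        {
            Console.WriteLine("base: " + message);
        }
    }

    class FileLogger : BaseLogger
    {
        // 'new' hides BaseLogger.Log; it does NOT override it.
        public new void Log(string message)
        {
            Console.WriteLine("file: " + message);
        }
    }

    class Program
    {
        static void Main()
        {
            FileLogger logger = new FileLogger();
            logger.Log("hello");               // prints "file: hello"
            ((BaseLogger)logger).Log("hello"); // prints "base: hello"
        }
    }

    The call through the base-class reference silently picks the hidden member rather than the derived one, which is exactly the kind of behavior the question describes as hazardous if misused.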

    Read the article

  • Today's Links (6/27/2011)

    - by Bob Rhubart
    2011 Entrepreneurs of the Year, Northern California Region – Drake Martinet reports on the new batch of entrepreneurs joining the ranks of Oracle CEO Larry Ellison, Yahoo CEO Carol Bartz and eBay co-founder Pierre Omidyar as the Northern California Region winners of Ernst & Young's Entrepreneurs of the Year awards.
    Technical Article: Caching Strategies for Oracle Service Bus 11g – William Markito Oliveira illustrates how the right caching strategy can make a big difference in application performance.
    Kscope 11 - Day 1 and 2 – Oracle ACE Director Markus Eisele checks in from Long Beach.
    Kaleidoscope 2011: Sunday's Symposium – And so does Oracle ACE Director Marco Gralike.
    Yet another GlassFish 3.1.1 promoted build | The Aquarium – "This version was carefully designed to be highly compatible with the previous 3.x versions," says Alexis, "thus leaving you with little reasons not to upgrade as soon as it comes out this summer."
    Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now! – "The NoSQL databases are not intended to be a replacement for the mainstream RDBMS," says Arun Gupta.
    I have a performance problem | Alan Hargreaves – Good (and entertaining) advice from an Australian Solaris and Network Domain TSC* Principal Field Technologist.

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring.

    Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation facility. In addition to the files located on the server, files are also stored locally on development machines, and again backed up using version control software.

    Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failing server to get back online.

    The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Also, availability requirements need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Business Forces: SOA Adoption

    The only constant in today's business environment is change. Businesses that continuously foresee change and adapt quickly will gain market share and increased growth. In our ever-growing global business environment, change is driven by data: collecting, maintaining, verifying and distributing it. Companies today are made and broken over data. Would anyone still use Google if they did not have one of the most accurate search indexes on the internet? No, because their value is in their data and the quality of their data. Due to the increasing focus on data, companies have been adopting new methodologies for gaining more control over their data while attempting to reduce the costs of maintaining it. In addition, companies are also trying to reduce the time it takes to analyze data in regards to various market forces, to foresee changes prior to them actually occurring.

    Benefits of Adopting SOA
    Services can be maintained separately from other services and applications, so that a change in one service will only affect itself and its client services or applications. The advent of services allows for system functionality to be distributed across a network or multiple networks. The costs associated with maintaining business functionality are much higher in standard application development than in SOA, because one service can be maintained and shared with other applications instead of multiple instances of business functionality being duplicated via hard coding into several applications. When multiple applications use a single service for a specific business function, all of the data being processed will be consistent in terms of quality and accuracy across the applications.

    Disadvantages of Adopting SOA
    Increased initial costs and timelines are associated with SOA, because services need to be created and applications need to be modified to call the services. In order for an SOA project to be successful, the project must obtain company and management support in order to gain the proper exposure, funding, and attention. If SOA is new to a company, they must also provide the proper training in order for the project to be designed and implemented correctly.

    References:
    Tews, R. (2007). Beyond IT: Exploring the Business Value of SOA. SOA Magazine Issue XI.

    Read the article

  • Necessary Infrastructure for large project with many components communicating through IPCs

    - by jluzwick
    I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the violating application? More specifically, which infrastructure elements should be considered necessary for this large project, and which are optional but very helpful? One such example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log that contains messages from numerous components, for messages that might allude to the culprit program along with a "trail" of what happened before the issue occurred. I'm thinking of something similar to Android's alogcat tool (see the sketch after this question). These necessary infrastructure elements should be language-agnostic. While these elements should be understood by all engineers on the team in question, which elements should be understood in great detail by the technical system engineers, and what should the individual software engineers be responsible for adding to their tools to allow for such infrastructures to take hold? Please feel free to ask for clarification if something does not make sense, as I understand this question is very broad and needs some refinement. I will refine as necessary from the answers and comments I receive. Thanks for any help!
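
    To make the logging example concrete, here is a trivial, hypothetical sketch (all names invented) of the kind of component-tagged logging that makes a merged log searchable per component:

    using System;

    // Every component writes through one sink with a component tag and a
    // UTC timestamp, so a single merged log can be filtered per component,
    // similar in spirit to browsing Android logs with alogcat.
    static class CommonLog
    {
        public static void Write(string component, string level, string message)
        {
            Console.WriteLine("{0:o} [{1}] {2}: {3}",
                DateTime.UtcNow, component, level, message);
        }
    }

    class Example
    {
        static void Main()
        {
            CommonLog.Write("PaymentService", "INFO", "request received");
            CommonLog.Write("LedgerService", "ERROR", "posting failed: timeout");
            // Filtering on the [component] tag reconstructs the trail that
            // led up to a failure across process boundaries.
        }
    }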

    Read the article

  • Impact of Service Oriented Architecture (SOA) on Business and IT Operations

    The impact of Service Oriented Architecture (SOA) on business and IT operations varies from company to company. I think more and more companies are starting to view SOA as just another technology that they can incorporate in an existing or new system. One of the driving factors in using SOA is the reduction in maintenance costs and the decrease in the time needed to bring products to market. The reductions in costs and the reduced turnaround time can be directly converted into increased profitability, due to the lower expenditures needed to maintain or create new systems. My personal perspective on SOA is that it is great for what it is actually intended to do. SOA allows systems to be distributed across networks, or even the world, while ensuring enterprise processing consistency and data integrity and preventing code duplication. That being said, a lot of preparation and work goes into properly designing and implementing an SOA, especially if an enterprise wants to take full advantage of its benefits. Even though SOA has recently gotten a lot of hype about its benefits, it is not a perfect fit for all situations. At the end of the day, SOA is just another tool in my tool belt that I can pull from to create solutions that meet the business's needs. Based on current industry trends, SOA appears to be a very solid technology to use moving forward, especially as more and more companies shift towards cloud-based computing. It is important to remember that SOA is one of many technologies that can be used in creating business solutions, and I think more time will be spent in the future evaluating whether SOA is the right technology for a solution once the initial hype of SOA has calmed down.

    Read the article

  • Oracle APEX: requesting a free workspace on apex.oracle.com

    - by Yuichi.Hayashi
    apex.oracle.com is a hosted environment where you can try Oracle APEX web application development for free, without standing up your own server (for example on Amazon EC2) or installing Oracle APEX yourself. Requesting a workspace takes only a few steps:

    (1) Click Sign Up on apex.oracle.com.
    (2) Click Next.
    (3) Enter your name and email address, then click Next.
    (4) Enter a workspace name, then click Next. (Example) Workspace: OracleDirect
    (5) Enter the database schema to create and its initial size, then click Next. (Example) New schema to create: DIRECT, Initial Space Allocation (MB): 25
    (6) State the reason for your request (selecting To Try is fine), then click Next.
    (7) Enter the displayed verification code and click Submit Request.
    (8) A confirmation mail arrives shortly:
    Subject: Approved: account request for ···· ···
    Body: Your request for an account has been approved.
    Workspace: ORACLEDIRECT
    User ID: [email protected]
    Please click on the link: http://apex.oracle.com/pls/apex/f?·········· to complete the approval process and receive your credentials.
    (9) Following the link completes the approval process and returns your credentials:
    Workspace: ORACLEDIRECT
    User ID: [email protected]
    Password: ********
    Login URL: http://apex.oracle.com/pls/apex/
    (10) Go to http://apex.oracle.com/pls/apex/.
    (11) Log in with the credentials above.
    (12) You can now build Oracle APEX applications (free of charge) in your own workspace.

    APEX (Oracle Application Express) - give it a try.

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
    This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 application called Northwind Traders Human Resources. The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL Server Management Studio shows the change: I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls. The reporting relationship for Northwind Employees is shown below.

    The requirements for our application are as follows:

    Employees can see their LastName, FirstName, Title, Address and Salary.
    Employees are allowed to edit only their Address information.
    Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports.
    Employees cannot see records of non-immediate reports.
    Employees are allowed to edit only the Salary and Title information of their immediate reports.
    Employees are not allowed to edit the Address of an immediate report.
    Employees should be authenticated into the system.
    Employees by default get the "Employee" role. If a user has direct reports, they will also get assigned a "Manager" role.

    We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, into loading a page which contains a malicious request, where, without Steven's knowledge, a form on the site posts information back to the Northwind HR website using Steven's credentials. Michael could use this technique to give himself a raise :-)

    UI Notes

    The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew, who has an EmployeeID of 2 (Employees/Edit/2), she will get a "Not Authorized" error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him.

    Implementation Notes

    All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie.

    private void SetAuthenticationCookie(int employeeID, List<string> roles)
    {
        HttpCookiesSection cookieSection = (HttpCookiesSection)
            ConfigurationManager.GetSection("system.web/httpCookies");
        AuthenticationSection authenticationSection = (AuthenticationSection)
            ConfigurationManager.GetSection("system.web/authentication");

        FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket(
            1,
            employeeID.ToString(),
            DateTime.Now,
            DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes),
            false,
            string.Join("|", roles.ToArray()));

        String encryptedTicket = FormsAuthentication.Encrypt(authTicket);
        HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
        if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL)
        {
            authCookie.Secure = true;
        }
        HttpContext.Current.Response.Cookies.Add(authCookie);
    }

    We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier.
    protected void Application_AuthenticateRequest(Object sender, EventArgs e)
    {
        if (Context.User != null)
        {
            string cookieName = FormsAuthentication.FormsCookieName;
            HttpCookie authCookie = Context.Request.Cookies[cookieName];
            if (authCookie == null) return;

            FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value);
            string[] roles = authTicket.UserData.Split(new char[] { '|' });
            FormsIdentity fi = (FormsIdentity)(Context.User.Identity);
            Context.User = new System.Security.Principal.GenericPrincipal(fi, roles);
        }
    }

    We ensure that a user has permission to view a record by creating a custom attribute, AuthorizeToViewIDAttribute, that inherits from ActionFilterAttribute.

    public class AuthorizeToViewIDAttribute : ActionFilterAttribute
    {
        IEmployeeRepository employeeRepository = new EmployeeRepository();

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            if (filterContext.ActionParameters.ContainsKey("id")
                && filterContext.ActionParameters["id"] != null)
            {
                if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"]))
                {
                    return;
                }
            }
            throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it");
        }
    }

    We add the AuthorizeToViewID attribute to any action method that requires authorization.

    [HttpPost]
    [Authorize(Order = 1)]
    // To prevent CSRF
    [ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]
    // See AuthorizeToViewIDAttribute class
    [AuthorizeToViewID(Order = 3)]
    [ActionName("Edit")]
    public ActionResult Update(int id)
    {
        var employeeToEdit = employeeRepository.GetEmployee(id);
        if (employeeToEdit != null)
        {
            // Employees can edit only their address.
            // A manager can edit the title and salary of their subordinates.
            string[] whiteList = (employeeToEdit.IsSubordinate)
                ? new string[] { "Title", "Salary" }
                : new string[] { "Address" };
            if (TryUpdateModel(employeeToEdit, whiteList))
            {
                employeeRepository.Save(employeeToEdit);
                return RedirectToAction("Details", new { id = id });
            }
            else
            {
                ModelState.AddModelError("", "Please correct the following errors.");
            }
        }
        return View(employeeToEdit);
    }

    The Authorize attribute is added to ensure that only authorized users can execute that action. We use TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specifies the order in which the attributes are executed. The Edit view uses the AntiForgeryToken helper to render the hidden token:

    ...
    <% using (Html.BeginForm()) { %>
    <%= Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt) %>
    <%= Html.ValidationSummary(true, "Please correct the errors and try again.") %>
    <div class="editor-label">
        <%= Html.LabelFor(model => model.LastName) %>
    </div>
    <div class="editor-field">
    ...

    The application uses view-specific models for ease of model binding.
    public class EmployeeViewModel
    {
        public int EmployeeID;

        [Required]
        [DisplayName("Last Name")]
        public string LastName { get; set; }

        [Required]
        [DisplayName("First Name")]
        public string FirstName { get; set; }

        [Required]
        [DisplayName("Title")]
        public string Title { get; set; }

        [Required]
        [DisplayName("Address")]
        public string Address { get; set; }

        [Required]
        [DisplayName("Salary")]
        [Range(500, double.MaxValue)]
        public decimal Salary { get; set; }

        public bool IsSubordinate { get; set; }
    }

    To help with displaying readonly/editable fields, we use a helper method.

    // Simple extension method to display a TextBoxFor or DisplayFor based on the isEditable variable
    public static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper,
        Expression<Func<TModel, TProperty>> expression, bool isEditable)
    {
        if (isEditable)
        {
            return htmlHelper.TextBoxFor(expression);
        }
        else
        {
            return htmlHelper.DisplayFor(expression);
        }
    }

    The helper method is used in the view like so:

    <%= Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate) %>

    As mentioned in this post, there is a much easier way to update properties on an object.

    Download Demo Project (VS 2008, ASP.NET MVC 2 RTM): NorthwindHR.zip
    Remember to change the connectionString to point to your Northwind DB.
    Feedback and bugs are always welcome :-)

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post, and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported.

    For Oracle database customers we ship three distinct database users:

    Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only.

    Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    You may notice the words "by default" in the list above. The values supplied with the installer are the defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema.

    To manage the permissions for the users, there is a utility provided with the installation (oragensec (Oracle), db2gensec (DB2) or msqlgensec (SQL Server)) that generates the security definitions for the above users. That utility can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed, it uses the role to determine the permissions.
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the userids appropriately. This means:

    Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for the read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.

    Running oragensec against the read/write user and read-only user for the appropriate administration user (I will abbreviate that user to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for my associated users. You get the picture. When I run oragensec (once for each ADM user), I know what other users to associate with it.

    Remembering to rerun oragensec against the users if I run upgrades, service packs or database-based single fixes. This assures that the users are in synchronization with the ADM user.

    As a side note, for those who do not understand the difference between DML, DCL and DDL:

    DDL (Data Definition Language) - These are SQL statements that define the database schema and the structures within. SQL statements such as CREATE and DROP are examples of DDL SQL statements.

    DCL (Data Control Language) - These are the SQL statements that define the database-level permissions to DDL-maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL SQL statements.

    DML (Data Manipulation Language) - These are SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML SQL statements.

    Hope this has clarified the database user support. Remember, in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, which allows the database to still use the administration user for the main processing but makes the database session more traceable.

    Read the article

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features? Maybe you want to use one or two controls like the LineChart or a MediaView. Maybe you want to embed a large Scene Graph as an initial step in porting your application to FX. A hybrid Swing/FX application might just be the answer.

    Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications has not been as simple as for a "pure" JavaFX application. The existing tools focused on packaging FX applications, or Swing applications - they did not account for hybrid applications. But with JavaFX 2.2 the tools include support for this hybrid application use case.

    Solution

    In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications. You now use the same deployment approach as you would for pure JavaFX applications. Just bundle your main application jar with the fx:jar ant task and then generate html/jnlp files using fx:deploy. The only difference is setting the toolkit attribute for the fx:application tag as shown below:

    <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

    The value of ${main.class} in the example above is your application class which has a main method. It does not need to extend the JavaFX Application class.

    The resulting package provides support for the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to the packages created for a pure FX application. You will see two JNLP files generated in the case of a hybrid application - one for use from a Swing applet and another for the webstart launch. Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply; for example, a Swing application can not use JavaFX Preloaders, and code changes are necessary to support HTML splash screens.

    Why should I use the JavaFX ant tasks for packaging my Swing application?

    While using the FX packaging tools for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach. The primary justification for our packaging tools is to simplify the creation of your application artifacts, and to reduce manual errors. Plus, no one should have to write JNLP by hand. Some specific benefits include:

    Your application jar will include a launcher program. This improves your standalone launch by:
    - checking for the JavaFX runtime
    - guiding the user through any necessary installations
    - setting the system proxy for Java

    The ant tasks will generate JNLP and HTML files for your Swing app:
    - avoids learning unnecessary details about JNLP, and eliminates the error-prone hand editing of JNLP files
    - simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance; you can also embed the signing certificate details to improve the user's experience
    - allows the use of web page templates to inject the generated code directly into your actual web page instead of being forced to copy/paste the generated code snippets

    What about native packaging?

    Absolutely! The very same ant task can generate a native bundle for a Swing application with JavaFX content.
    Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg. I also used another feature in these examples: a click-through license agreement for .exe installers and OS X DMG drag installers.

    Small Caveat

    This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application. If you are trying to embed JavaFX content into an existing project (with an existing build/packaging process), then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process this should be a workable approach. In some cases the solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle. The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest. This may lead to some redundant steps in your build process. Hopefully the benefits will outweigh the problems. This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us. You can comment here on the blog or file issues to JIRA.

    Sample code

    Here is the full ant code used to package SwingInterop. You can grab the latest JavaFX samples and try it yourself:

    <target name="-post-jar">
        <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                 uri="javafx:com.sun.javafx.tools.ant"
                 classpath="${javafx.tools.ant.jar}"/>

        <!-- Mark application as Swing-based -->
        <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

        <!-- Create doubleclickable jar file with embedded launcher -->
        <fx:jar destfile="${dist.jar}">
            <fileset dir="${build.classes.dir}"/>
            <fx:application refid="swingFXApp" name="SwingInterop"/>
            <manifest>
                <attribute name="Implementation-Vendor" value="${application.vendor}"/>
                <attribute name="Implementation-Title" value="${application.title}"/>
                <attribute name="Implementation-Version" value="1.0"/>
            </manifest>
        </fx:jar>

        <!-- Sign application jar. Use new self signed certificate -->
        <delete file="${build.dir}/test.keystore"/>
        <genkey alias="TestAlias" storepass="xyz123"
                keystore="${build.dir}/test.keystore"
                dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/>
        <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123">
            <fileset file="${dist.jar}"/>
        </fx:signjar>

        <!-- Generate JNLPs, HTML and native bundles -->
        <fx:deploy width="960" height="720" includeDT="true" nativeBundles="all"
                   outdir="${basedir}/${dist.dir}" embedJNLP="true" outfile="${application.title}">
            <fx:application refId="swingFXApp"/>
            <fx:resources>
                <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/>
            </fx:resources>
            <fx:permissions/>
            <info title="Sample app: ${application.title}" vendor="${application.vendor}"/>
        </fx:deploy>
    </target>

    Read the article

  • SQL Server script commands to check if object exists and drop it

    - by deadlydog
    Over the past couple of years I’ve been keeping track of common SQL Server script commands that I use so I don’t have to constantly Google them.  Most of them are how to check if a SQL object exists before dropping it.  I thought others might find these useful to have them all in one place, so here you go:

    --===============================
    -- Create a new table and add keys and constraints
    --===============================
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo')
    BEGIN
        CREATE TABLE [dbo].[TableName]
        (
            [ColumnName1] INT NOT NULL,    -- To have a field auto-increment add IDENTITY(1,1)
            [ColumnName2] INT NULL,
            [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('')
        )

        -- Add the table's primary key
        ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED
        (
            [ColumnName1],
            [ColumnName2]
        )

        -- Add a foreign key constraint
        ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY
        (
            [ColumnName1],
            [ColumnName2]
        )
        REFERENCES [dbo].[Table2Name]
        (
            [OtherColumnName1],
            [OtherColumnName2]
        )

        -- Add indexes on columns that are often used for retrieval
        CREATE INDEX IN_ColumnNames ON [dbo].[TableName]
        (
            [ColumnName2],
            [ColumnName3]
        )

        -- Add a check constraint
        ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName] >= 0.0000))
    END

    --===============================
    -- Add a new column to an existing table
    --===============================
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
    BEGIN
        ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0)

        -- Add a description extended property to the column to specify what its purpose is.
        EXEC sys.sp_addextendedproperty @name=N'MS_Description',
            @value = N'Add column comments here, describing what this column is for.',
            @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE',
            @level1name = N'TableName', @level2type=N'COLUMN',
            @level2name = N'ColumnName'
    END

    --===============================
    -- Drop a table
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo')
    BEGIN
        DROP TABLE [dbo].[TableName]
    END

    --===============================
    -- Drop a view
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA='dbo')
    BEGIN
        DROP VIEW [dbo].[ViewName]
    END

    --===============================
    -- Drop a column
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
    BEGIN
        -- If the column has an extended property, drop it first.
        IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'TABLE',
            N'TableName', N'COLUMN', N'ColumnName'))
        BEGIN
            EXEC sys.sp_dropextendedproperty @name=N'MS_Description',
                @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE',
                @level1name = N'TableName', @level2type=N'COLUMN',
                @level2name = N'ColumnName'
        END

        ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName]
    END

    --===============================
    -- Drop Primary key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='PRIMARY KEY' AND TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name]
    END

    --===============================
    -- Drop Foreign key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='FOREIGN KEY' AND TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name]
    END

    --===============================
    -- Drop Unique key constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UNI_Name]
    END

    --===============================
    -- Drop Check constraint
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='CHECK' AND TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name')
    BEGIN
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name]
    END

    --===============================
    -- Drop a column's Default value constraint
    --===============================
    DECLARE @ConstraintName VARCHAR(100)
    SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj=c.id
        WHERE s.xtype='d' AND c.cdefault=s.id
        AND parent_obj = OBJECT_ID('TableName') AND c.name ='ColumnName')

    IF @ConstraintName IS NOT NULL
    BEGIN
        EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
    END

    --===============================
    -- Example of how to drop a dynamically named Unique constraint
    --===============================
    DECLARE @ConstraintName VARCHAR(100)
    SET @ConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
        WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 'FirstPartOfConstraintName%')

    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo'
        AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = @ConstraintName)
    BEGIN
        EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
    END

    --===============================
    -- Check for and drop a temp table
    --===============================
    IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName

    --===============================
    -- Drop a stored procedure
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='PROCEDURE' AND ROUTINE_SCHEMA='dbo'
        AND ROUTINE_NAME = 'StoredProcedureName')
    BEGIN
        DROP PROCEDURE [dbo].[StoredProcedureName]
    END

    --===============================
    -- Drop a UDF
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='FUNCTION' AND ROUTINE_SCHEMA='dbo'
        AND ROUTINE_NAME = 'UDFName')
    BEGIN
        DROP FUNCTION [dbo].[UDFName]
    END

    --===============================
    -- Drop an Index
    --===============================
    IF EXISTS (SELECT * FROM SYS.INDEXES WHERE name = 'IndexName')
    BEGIN
        DROP INDEX TableName.IndexName
    END

    --===============================
    -- Drop a Schema
    --===============================
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName')
    BEGIN
        EXEC('DROP SCHEMA SchemaName')
    END

    Read the article

  • How do I connect to SDF on a Mobile device from desktop application?

    - by pitprog
    C# WinForms .NET 3.5 to SQL CE 3.5 on a Windows Mobile 6.1 device. I'd like to make a connection from a desktop application to an SDF database on my Windows Mobile device while it's connected via ActiveSync. Visual Studio lets me create a Data Connection to my device. The connection tests OK and I can view the data in the database using Visual Studio. I then create a form and try to fill a DataGridView. When I run the program I get an error that the path to the database is not valid. How am I supposed to specify the Mobile device path in the connection string? In my App.Config, I've tried variations on the path, but none of them work:

    connectionString="Data Source=Mobile Device\Program Files\SqlCeViaActiveSync\Orders.sdf"
    connectionString="Data Source=\Mobile Device\Program Files\SqlCeViaActiveSync\Orders.sdf"
    connectionString="Data Source=Program Files\SqlCeViaActiveSync\Orders.sdf"
    connectionString="Data Source=\Program Files\SqlCeViaActiveSync\Orders.sdf"

    The full connection string section looks like this:

    <connectionStrings>
        <add name="SqlCeViaActiveSync.Properties.Settings.OrdersConnectionString"
             connectionString="Data Source=Mobile Device\Program Files\SqlCeViaActiveSync\Orders.sdf"
             providerName="Microsoft.SqlServerCe.Client.3.5" />
    </connectionStrings>

    Also, I did make a reference to Microsoft.SqlServerCe.Client, as I found a few articles that mentioned it was necessary. Can anyone point me to some recent articles/samples or let me know what I'm doing wrong? Thanks!
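    For experimenting with the paths, a minimal sketch of opening the connection through the provider named in the question's config may help isolate whether the problem is the Data Source path or the provider wiring. This assumes the Microsoft.SqlServerCe.Client.3.5 provider is registered in a <DbProviderFactories> section on the desktop machine; the class name is a placeholder.

    using System;
    using System.Configuration;   // reference System.Configuration.dll
    using System.Data.Common;

    class SdfProbe
    {
        static void Main()
        {
            // Read the question's connection string entry from App.config.
            ConnectionStringSettings cs = ConfigurationManager.ConnectionStrings[
                "SqlCeViaActiveSync.Properties.Settings.OrdersConnectionString"];

            // GetFactory resolves the provider by the invariant name given
            // in the providerName attribute ("Microsoft.SqlServerCe.Client.3.5").
            DbProviderFactory factory = DbProviderFactories.GetFactory(cs.ProviderName);
            using (DbConnection conn = factory.CreateConnection())
            {
                conn.ConnectionString = cs.ConnectionString;
                conn.Open();   // throws here if the device path in Data Source is wrong
                Console.WriteLine("Opened: " + conn.Database);
            }
        }
    }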

    Read the article

  • How can I use System.Web.Caching.Cache in a Console application?

    - by Ron Klein
    Context: .NET 3.5, C#. I'd like to have a caching mechanism in my console application. Instead of re-inventing the wheel, I'd like to use System.Web.Caching.Cache (and that's a final decision, I can't use any other caching framework, don't ask why). However, it looks like System.Web.Caching.Cache is supposed to run only in a valid HTTP context. My very simple snippet looks like this:

    using System;
    using System.Web;
    using System.Web.Caching;

    Cache c = new Cache();
    try
    {
        c.Insert("a", 123);
    }
    catch (Exception ex)
    {
        Console.WriteLine("cannot insert to cache, exception:");
        Console.WriteLine(ex);
    }

    and the result is:

    cannot insert to cache, exception:
    System.NullReferenceException: Object reference not set to an instance of an object.
       at System.Web.Caching.Cache.Insert(String key, Object value)
       at MyClass.RunSnippet()

    So obviously, I'm doing something wrong here. Any ideas? Update: +1 to most answers; getting the cache via static methods is the correct usage, namely HttpRuntime.Cache and HttpContext.Current.Cache. Thank you all!
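    For completeness, here is a minimal sketch of the fix the update describes: obtain the cache through HttpRuntime.Cache instead of constructing a Cache instance yourself. It assumes a project reference to System.Web.dll (full .NET Framework); the class name is a placeholder.

    using System;
    using System.Web;            // reference System.Web.dll
    using System.Web.Caching;

    class CacheDemo
    {
        static void Main()
        {
            // HttpRuntime.Cache hands back a fully initialized Cache,
            // even outside of an HTTP context (e.g. in a console app).
            Cache cache = HttpRuntime.Cache;
            cache.Insert("a", 123);
            Console.WriteLine(cache["a"]);   // prints 123
        }
    }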

    Read the article

  • "Warning reaching end of non-void fuction" with Multiple Sections that pull in multiple CustomCells

    - by Newbyman
    I'm getting "Reaching end of non-void function" warning, but don't have anything else to return for the compiler. How do I get around the warning?? I'm using customCells to display a table with 3 Sections. Each CustomCell is different, linked with another viewcontroller's tableview within the App, and is getting its data from its individual model. Everything works great in the Simulator and Devices, but I would like to get rid of the warning that I have. It is the only one I have, and it is pending me from uploading to App Store!! Within the - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {, I have used 3 separate If() statements-(i.e.==0,==1,==2) to control which customCells are displayed within each section throughout the tableview's cells. Each of the customCells were created in IB, pull there data from different models, and are used with other ViewController tableViews. At the end of the function, I don't have a "cell" or anything else to return, because I already specified which CustomCell to return within each of the If() statements. Because each of the CustomCells are referenced through the AppDelegate, I can not set up an empty cell at the start of the function and just set the empty cell equal to the desired CustomCell within each of the If() statements, as you can for text, labels, etc... My question is not a matter of fixing code within the If() statements, unless it is required. My Questions is in "How to remove the warning for reaching end of non-void function-(cellForRowAtIndexPath:) when I have already returned a value for every possible case: if(section == 0); if(section == 1); and if(section == 2). *Code-Reference: The actual file names were knocked down for simplicity, (section 0 refers to M's, section 1 refers to D's, and section 2 refers to B's). Here is a sample Layout of the code: //CELL FOR ROW AT INDEX PATH: -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { //Reference to the AppDelegate: MyAppDelegate *appDelegate = (MyAppDelegate *)[[UIApplication sharedApplication] delegate]; //Section 0: if(indexPath.section == 0) { static NSString *CustomMCellIdentifier = @"CustomMCellIdentifier"; MCustomCell *mCell = (MCustomCell *)[tableView dequeueReusableCellWithIdentifier:CustomMCellIdentifier]; if (mCell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"MCustomCell" owner:tableView options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[MCustomCell class]]) mCell = (MCustomCell *)oneObject; } //Grab the Data for this item: M *mM = [appDelegate.mms objectAtIndex:indexPath.row]; //Set the Cell [mCell setM:mM]; mCell.selectionStyle =UITableViewCellSelectionStyleNone; mCell.root = tableView; return mCell; } //Section 1: if(indexPath.section == 1) { static NSString *CustomDCellIdentifier = @"CustomDCellIdentifier"; DCustomCell *dCell = (DCustomCell *)[tableView dequeueReusableCellWithIdentifier:CustomDaddyCellIdentifier]; if (dCell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"DCustomCell" owner:tableView options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[DCustomCell class]]) dCell = (DCustomCell *)oneObject; } //Grab the Data for this item: D *dD = [appDelegate.dds objectAtIndex:indexPath.row]; //Set the Cell [dCell setD:dD]; //Turns the Cell's SelectionStyle Blue Highlighting off, but still permits the code to run! 
dCell.selectionStyle =UITableViewCellSelectionStyleNone; dCell.root = tableView; return dCell; } //Section 2: if(indexPath.section == 2) { static NSString *CustomBCellIdentifier = @"CustomBCellIdentifier"; BCustomCell *bCell = (BCustomCell *)[tableView dequeueReusableCellWithIdentifier:CustomBCellIdentifier]; if (bCell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"BCustomCell" owner:tableView options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[BCustomCell class]]) bCell = (BCustomCell *)oneObject; } //Grab the Data for this item: B *bB = [appDelegate.bbs objectAtIndex:indexPath.row]; //Set the Cell [bCell setB:bB]; bCell.selectionStyle =UITableViewCellSelectionStyleNone; bCell.root = tableView; return bCell; } //** Getting Warning "Control reaches end of non-void function" //Not sure what else to "return ???" all CustomCells were specified within the If() statements above for their corresponding IndexPath.Sections. } Any Suggestions ??
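    One common way to satisfy the compiler (a sketch, not from the question itself): the warning disappears once every code path returns a value, so after the three if() blocks you can return nil, optionally asserting first so an unexpected section number fails loudly in debug builds:

        //Fallback after the three if() blocks so every path returns a value.
        //NSAssert makes an unexpected section number obvious in debug builds.
        NSAssert(NO, @"Unexpected section: %ld", (long)indexPath.section);
        return nil;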

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch.

    Architecture

    One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation.

    Consuming the service

    Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

    svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

    I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
    Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as a response, which is at best unnecessary and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both conserve bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

    if (filterContext.HttpContext.Request.IsAjaxRequest())
    {
        filterContext.Result = Json(new ErrorModel(...));
        filterContext.ExceptionHandled = true;
    }

    My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

    Summary

    As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
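    As a footnote to the error-handling discussion above, here is a minimal sketch of the client-side half of that round trip: the $.ajax call with the parameters described in the post, plus the ErrorModel check in the success handler. The action name, payload, and the renderWidget/handleError functions are placeholders, not from the article.

    // Hypothetical action URL, emitted by the view into a JS string variable:
    // var getWidgetUrl = '@Url.Action("GetWidget", "Widgets")';
    $.ajax({
        url: getWidgetUrl,
        type: 'POST',
        contentType: 'application/json; charset=utf-8',
        dataType: 'json',
        data: JSON.stringify({ id: 42 }),
        success: function (result) {
            // 200 OK only means the HTTP round trip worked; the body may be
            // the ErrorModel described above rather than the requested data.
            if (!result || result.Error) {
                handleError(result);   // placeholder error handler
                return;
            }
            renderWidget(result);      // placeholder success path
        },
        error: function (xhr, status, err) {
            handleError({ Error: status });
        }
    });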

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month so I'm a bit late posting this review here. Why I chose this book: I actually know a few of the authors on this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title - to learn how to solve a problem using products from Microsoft. What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework", which presents a few elements to help you identify a need and then propose alternate solutions to solve it, and then the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I've read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well - it would make for a great library for a Systems Architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point - this book is aimed at folks who create solutions within an organization. It's not aimed at Administrators, DBAs, Developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text - the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style. I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >