Search Results

Search found 21364 results on 855 pages for 'service bus'.


  • Fiddler - A useful free tool for checking Web Services (and web site) traffic

    - by TATWORTH
    Recently I had reason to be very glad that I had Fiddler. I was able to record some web service traffic and identify a problem, as Fiddler can record both the call to a web service and the response from it. By seeing the actual data traffic I was able to resolve a problem found in testing in less time than it has taken me to write this blog entry! This tool is also useful for studying general web site traffic. Fiddler is available from http://www.fiddler2.com/fiddler2/ and there are training videos available on the same site.
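
    If a .NET client does not route its traffic through Fiddler automatically, you can point it at Fiddler explicitly. The snippet below is only a hedged sketch: it assumes Fiddler's default listener on 127.0.0.1:8888 and uses a purely hypothetical service URL.

    using System;
    using System.IO;
    using System.Net;

    class FiddlerProxyDemo
    {
        static void Main()
        {
            // Route this request through Fiddler's default local listener (127.0.0.1:8888)
            // so both the call and the response appear in the Fiddler session list.
            var request = (HttpWebRequest)WebRequest.Create(
                "http://example.com/MyService.asmx"); // hypothetical web service URL
            request.Proxy = new WebProxy("127.0.0.1", 8888);

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(response.StatusCode);
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }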

    Read the article

  • Continuous Delivery with TFS

    - by swapneel
    Is it possible to achieve Continuous Delivery using TFS for something like a Windows Service? There are a thousand posts on how to use MSDeploy with TFS for web projects, but I am trying to understand why there are no resources (blogs, articles, MSDN pages or best practices) for Continuous Delivery of a Windows service using TFS. I am not sure how to achieve the following without any working reference material, which is frustrating: archive the existing code base on the remote server (not the integration server, please!) for the service as well as for the web project; stop the services on the remote server (not the integration server); copy the new code base to the remote server; start the services.
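
    For what it is worth, the stop/copy/start portion of such a deployment can be scripted and invoked from a TFS build as a post-build step. The following is only a rough sketch using System.ServiceProcess; the service name, machine name and share paths are placeholders, and the account running it would need administrative rights on the remote server.

    using System;
    using System.IO;
    using System.ServiceProcess;

    class DeployWindowsService
    {
        static void Main()
        {
            // All names below are placeholders for illustration only.
            const string serviceName = "MyWindowsService";
            const string remoteMachine = "RemoteServer";
            const string dropFolder = @"\\buildserver\drops\latest";
            const string targetFolder = @"\\RemoteServer\c$\Services\MyWindowsService";
            const string archiveFolder = @"\\RemoteServer\c$\Archive\MyWindowsService";

            // 1. Archive the existing code base on the remote server.
            CopyFolder(targetFolder, Path.Combine(archiveFolder, DateTime.Now.ToString("yyyyMMdd_HHmmss")));

            // 2. Stop the service on the remote server.
            using (var controller = new ServiceController(serviceName, remoteMachine))
            {
                if (controller.Status != ServiceControllerStatus.Stopped)
                {
                    controller.Stop();
                    controller.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromMinutes(2));
                }

                // 3. Copy the new code base onto the remote server.
                CopyFolder(dropFolder, targetFolder);

                // 4. Start the service again.
                controller.Start();
                controller.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(2));
            }
        }

        static void CopyFolder(string source, string destination)
        {
            Directory.CreateDirectory(destination);
            foreach (var file in Directory.GetFiles(source))
                File.Copy(file, Path.Combine(destination, Path.GetFileName(file)), true);
        }
    }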

    Read the article

  • New Oracle Cloud support in OEPE

    - by gstachni
    Oracle Enterprise Pack for Eclipse 12c (12.1.1.1.1) includes updated support for development with the Java Cloud Service. Users can now do iterative development against their Java Cloud Service instance in addition to testing against local and remote WebLogic Server installations. Some details of the cloud tools are below: Templates and wizards to create projects and server configurations for Oracle Java Cloud Service Develop applications to run and deploy to the cloud, including Oracle ADF. Check cloud server runtime and deployment logs in new log analyzers Test applications before deployment with integrated whitelist scans. Whitelist tools support as-you-type validation, project build validation, and on demand scans to highlight coding violations. Errors are reported in application source, the Problems view, and a new Whitelist violations view. Access the Oracle Public Cloud administrative consoles directly from within Eclipse.

    Read the article

  • Custom .NET apps and clustering

    - by Ahmed ilyas
    So for a clustered environment - how would this work with your apps? What about your own custom .NET apps - would there be a special way to develop them? I know that you could create a simple Hello World app and cluster it, but it wouldn't be something you could see in terms of a UI, so it would effectively need to be developed as a Windows Service, or perhaps as a standard console app that runs without waiting for user input, although you wouldn't see any output from it (unless you redirect the output somewhere else). What I'm getting at is: for those who have experience developing a clustered application in .NET, how did you do it and what are the things to be aware of? For example, cloud services are fundamentally built on clustering - if there is an outage, another node takes over and service resumes as normal, and we don't really see much of that downtime.
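
    For reference, a bare-bones Windows Service skeleton (a sketch only, with placeholder names and no cluster-specific logic) looks something like the code below. In a Windows failover cluster the same binary is typically installed on every node and the cluster decides which node runs it; the code itself just needs to start and stop cleanly and write its output somewhere observable.

    using System.ServiceProcess;

    public class WorkerService : ServiceBase
    {
        private System.Timers.Timer timer;

        public WorkerService()
        {
            ServiceName = "WorkerService"; // hypothetical service name
        }

        protected override void OnStart(string[] args)
        {
            // Kick off the background work when the service (or its cluster node) starts.
            timer = new System.Timers.Timer(5000);
            timer.Elapsed += (s, e) => DoWork();
            timer.Start();
        }

        protected override void OnStop()
        {
            // Shut down cleanly so another node can take over.
            if (timer != null) timer.Stop();
        }

        private void DoWork()
        {
            // A service has no UI, so write progress to the event log, a file or a database.
            System.Diagnostics.EventLog.WriteEntry(ServiceName, "Doing work");
        }

        public static void Main()
        {
            ServiceBase.Run(new WorkerService());
        }
    }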

    Read the article

  • Disable automatic starting of sshd?

    - by b.long
    Simple question here; what's the correct way to stop the sshd service from starting when the OS boots ? I'm not sure if this answer is correct, so I'm hoping some guru(s) can help me out! What I'd like is a configuration that (after boot) allows me to start the service using sudo service ssh start when necessary. Version info: me@home:~$ ssh -V OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 me@home:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.1 LTS Release: 12.04 Codename: precise

    Read the article

  • Moving between sites using SAML

    - by System Down
    I'm tasked with developing an SSO system, and was guided towards using the SAML spec. After some research I think I understand the interaction between a Service Provider and an ID Provider and how a user's identity is confirmed. But what happens when I redirect the user to another Service Provider? How do I ascertain the user's identity there? Do I send his SAML assertion tokens along with the redirect request? Or does the second Service Provider need to contact the ID Provider all over again?

    Read the article

  • How to remove geoclue-master?

    - by dunderhead
    Looking in System Monitor I saw a process called geoclue-master. As far as I can tell, this reports my precise location to applications and the internet. This computer is sitting in the same office all the time so I have no need whatsoever for such a thing, and would rather have the memory and processor cycles that this unnecessary service uses up - however small - free for things that I do need and want. I could not find instructions for disabling and removing this service so I renamed the file, but then the calendar disappeared from the notification area. Is there a way to remove this service but leave the calendar? I'm using 11.10.

    Read the article

  • Layering Design Pattern in Java clean code style

    - by zeraDev
    As a Java developer, I am trying to follow the clean code rules, but in my team we are facing a concrete problem. We have a business layer offering a service called "createObject". This service performs many operations, each of which can fail, e.g. parentObjectDontExist, objectAlreadyExist, dontHaveAuthorizationToCreate, operationFailed... and we want the UI using this service to display a different message depending on which error occurred. In old Java development we would have created a checked exception type for each case and thrown it from createObject. But as Clean Code says, it is forbidden to use exceptions for business logic AND checked exceptions are evil... Fine, but I don't know how to solve this problem and I don't want to use return codes. How do you handle it? Thanks for sharing your experience.
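
    One commonly suggested alternative to checked exceptions and bare return codes is to have the service return a small result object that names the outcome, and let the UI map each outcome to its own message. The sketch below is illustrative only, shown in C# for consistency with the other examples on this page; the same shape works in Java with an enum and a small class.

    using System;

    // Possible outcomes of the createObject operation (illustrative names).
    public enum CreateObjectStatus
    {
        Success,
        ParentObjectDoesNotExist,
        ObjectAlreadyExists,
        NotAuthorized,
        OperationFailed
    }

    // A small result object instead of a checked exception or a bare return code.
    public sealed class CreateObjectResult
    {
        public CreateObjectStatus Status { get; private set; }
        public string Detail { get; private set; }

        public CreateObjectResult(CreateObjectStatus status, string detail = null)
        {
            Status = status;
            Detail = detail;
        }

        public bool Succeeded { get { return Status == CreateObjectStatus.Success; } }
    }

    public class BusinessService
    {
        public CreateObjectResult CreateObject(string parentId, string name)
        {
            // Business checks go here; each failure is reported as data, not control flow.
            if (parentId == null)
                return new CreateObjectResult(CreateObjectStatus.ParentObjectDoesNotExist, parentId);

            // ... perform the creation ...
            return new CreateObjectResult(CreateObjectStatus.Success);
        }
    }

    class Ui
    {
        static void Main()
        {
            var result = new BusinessService().CreateObject(null, "invoice-42");

            // The UI decides which message to show for each outcome.
            switch (result.Status)
            {
                case CreateObjectStatus.Success:
                    Console.WriteLine("Object created."); break;
                case CreateObjectStatus.ParentObjectDoesNotExist:
                    Console.WriteLine("The parent object does not exist."); break;
                case CreateObjectStatus.ObjectAlreadyExists:
                    Console.WriteLine("An object with that name already exists."); break;
                case CreateObjectStatus.NotAuthorized:
                    Console.WriteLine("You are not authorized to create this object."); break;
                default:
                    Console.WriteLine("The operation failed."); break;
            }
        }
    }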

    Read the article

  • Is there a better way to handle data abstraction in this example?

    - by sigil
    I'm building an application that retrieves SharePoint list data via a web service, SPlists.Lists. To create an instance of the web service, I have the following class: class SharepointServiceCreator { public SPlists.Lists createService() { SPlists.Lists listsService = new SPlists.Lists(); listsService.Url = "http://wss/sites/SPLists/_vti_bin/lists.asmx"; listsService.Credentials = System.Net.CredentialCache.DefaultCredentials; return listsService; } } I'm concerned that this isn't good OOP abstraction, though, because in order to create this service elsewhere in my application, I would need the following code: class someClass { public void someMethod() { SharepointServiceCreator s = new SharepointServiceCreator(); SPlists.Lists listService = s.createService(); } } Having to declare the instance of listService in someMethod as type SPlists.Lists seems wrong, because it means that someClass needs to know about how SharepointServiceCreator is implemented. Or is this ok?
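
    One way to reduce that coupling is to hide the generated proxy behind an interface that the rest of the application depends on. The sketch below assumes the generated SPlists.Lists proxy exposes the standard Lists.asmx GetListItems method; IListService and its members are illustrative names, not part of your existing code.

    // Sketch only: callers depend on the abstraction, never on SPlists.Lists directly.
    public interface IListService
    {
        System.Xml.XmlNode GetListItems(string listName);
    }

    public class SharepointListService : IListService
    {
        private readonly SPlists.Lists listsService;

        public SharepointListService()
        {
            listsService = new SPlists.Lists();
            listsService.Url = "http://wss/sites/SPLists/_vti_bin/lists.asmx";
            listsService.Credentials = System.Net.CredentialCache.DefaultCredentials;
        }

        public System.Xml.XmlNode GetListItems(string listName)
        {
            // Delegate to the generated proxy; consumers never see the proxy type.
            return listsService.GetListItems(listName, null, null, null, null, null, null);
        }
    }

    class SomeClass
    {
        private readonly IListService listService;

        // The dependency arrives as an abstraction, e.g. via constructor injection.
        public SomeClass(IListService listService)
        {
            this.listService = listService;
        }

        public void SomeMethod()
        {
            var items = listService.GetListItems("Tasks");
        }
    }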

    Read the article

  • Network doesn't work after upgrading to 13.10

    - by Inkbug
    I upgraded from 13.04 to 13.10, and a number of things stopped working. Most importantly, I don't have internet access anymore. When I go to the network settings, I receive the following error: The system network services are not compatible with this version. The only settings that show up in the network settings are the proxy settings. I tried sudo service network-manager start and sudo service network-manager restart, but it told me the service was already running. In addition, the menu bar is missing all of the indicators (such as keyboard layout, sound, network, clock, and the gear icon). Is this related or a separate issue?

    Read the article

  • Do I own the copyright of a program I made in my own time?

    - by Dave Mess
    I created a software package that aids electrical engineers with common calculations used on site (substations to be specific). I created the package in my own time, without being asked and without guidance. The package is now widely used within my company and I intend to distribute it nationally. Do I own the copyright? My contract does state that all work produced is theirs, but this was outside of work and outside of my 'scope of work'. My company is mainly a civil and construction company and had no influence in the creation of the program. From comments: This is the paragraph "During the course of your service, you will disclose to the company all information, formulae, processes, inventions or improvements which you have learned, discovered or evolved during the course of your service or in connection with the business of the company and will sign any necessary documents to enable the company to obtain patent protection whether still in the company service or not. " They are taking it seriously and have got their lawyers.

    Read the article

  • Which is a safe site to buy cheap WildStar gold?

    - by user50866
    Wsoplat.com is a professional online shop which provides the cheapest WildStar gold, the fastest delivery, and the best 24/7 online service for players. Secure guarantee: cheap WildStar gold offered by wsoplat.com is reliably sourced, safe and honored. Lowest price: we are constantly trying to offer the lowest prices on WildStar gold for our loyal customers. Convenient shopping procedure and secure delivery method: we are very experienced in this business, and every order is processed both smoothly and efficiently. Friendly service and instant delivery: Wsoplat's delivery department works 24/7/365, and we have professional and friendly customer service operators who can help you buy WildStar gold, accounts, items and more. 5% discount code to buy WildStar gold at wsoplat: WSOPLAT http://www.wsoplat.com/

    Read the article

  • Search and Replace in MVC

    - by danip
    What would be a good MVC/OOP/GRASP/SOLID structure for search/replace functionality? Methods: search/searchNext/replace/replaceAll. I'm interested only in the PHP architecture and how a professional developer would implement this in their OWN FRAMEWORK. What names would you use for the classes? What subfolders would you use in your MODEL folder? How would you connect the MODELS/CONTROLLER? This is just an architecture question to better understand the principles of good OOP in practice. My current implementation is very simplistic, using a service model: /controller/SearchReplaceController.php /models/services/SearchReplaceService.php The problem with this is that I know I'm breaking SRP in the service, but I found this somehow acceptable. Also, creating a service does not feel like the best solution for this.

    Read the article

  • Should Business Interfaces be part of the Model layer?

    - by Mik378
    In a service-oriented enterprise application, isn't it an antipattern to mix Service APIs (containing the interfaces that external users depend on) with Model objects (entities, custom exception objects, etc.)? In my view, Services should only depend on the Model layer but never be mixed with it. In fact, my colleague told me that it doesn't make sense to separate them, since the client needs both (model and service interfaces). But I notice that every time a client asks for a change, like adding a new method to some interface (meaning a new service), the Model layer also has to be delivered... Thus, a client who is not interested in this addition is forced to be concerned by this update of the Model... and in a large enterprise application, this kind of delivery is known to be very risky... What is the best practice? Separate services (interfaces only) from model objects, or mix them?

    Read the article

  • Will Google follow HTML refresh?

    - by yasar11732
    I want to move my current Tumblr blog to a static HTML blog. Currently I am using a custom domain, and I am planning on doing the move once Google has picked up the domain name change. I am considering two options: buying a hosting service, or using GitHub Pages. Buying a hosting service would probably mean I am going to pay for lots of things that I don't need, like PHP, MySQL, an e-mail service etc. On the other hand, if I use GitHub Pages, I can't use an .htaccess file to make 301 redirects. I want to change my URL structure and this is important to me. I was wondering, if I use: <meta http-equiv=refresh content="0; url=http://example.com/newurl" /> would Google treat it as a 301 redirect, so that I won't lose my search engine value?

    Read the article

  • WebLogic Server and Oracle RAC with Active GridLink for RAC | WebLogic Channel

    - by ???02
    (The original Japanese text of this entry is mis-encoded and only fragments can be recovered.) The recoverable fragments describe an interview with Frances Zhao (Oracle Application Server) about Active GridLink for RAC, the WebLogic Suite 11g feature for connecting Oracle WebLogic Server to Oracle Real Application Clusters (RAC). Topics mentioned include its use of the Oracle Notification Service, SCAN (Single Client Access Name) introduced with Oracle Database 11g R2, integration with Oracle Data Guard and Oracle Exalogic Elastic Cloud, and a comparison with the older WebLogic Server Multi Data Sources approach.

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service; it contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information. Contents Introduction Create IRM WebLogic Domain Starting the Admin Server and initial configuration Introduction In the previous article the database was prepared, the WebLogic Application Server installed and the files required for an IRM server installed. But we don't actually have a configured system yet. We now need to create a WebLogic Domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server. This is quite a big topic and a separate article has been dedicated to this area. In these articles I also use the hostname irm.company.internal to reference the IRM server and later on use the hostname irm.company.com in reference to the public-facing service. Create IRM WebLogic Domain The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and we will run the domain configuration wizard. [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin [oracle@irm bin]$ ./config.sh The first thing the wizard will ask is whether you wish to create a new or extend an existing domain. This guide is creating a standalone system so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF" as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults. Now the domain will have a single user initially and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev reasons, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers and where the deployments reside. In this guide I am leaving all of these at their defaults so do not check any of the boxes. However I will, later on this blog, be detailing how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install. 
Note that from this dialog you can scroll the left window and see there are going to be two servers created from the defaults. The AdminServer which is where you modify settings for the WebLogic Server and also hosts the Oracle Enterprise Manager for IRM which allows to monitor the IRM service performance and also make service related settings (which we shortly do below) and the IRM_server1 which hosts the actual IRM services themselves. So go right ahead and hit create, the process is pretty quick and usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down and the URL is usually; http://irm.company.internal:7001 Starting the Admin Server and initial configuration First thing to do is to start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows, in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following. cd /oracle/middleware/user_projects/domains/irm_domain/ ./startWebLogic.sh Wait for the server to start, you are looking for the following line to be reported in the console window. <BEA-00360><Server started in RUNNING mode> First step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server wide configuration. However finding where to access the pages with these settings can be a bit of a challenge. After logging in on the left you'll see a tree containing elements of the Enterprise Manager farm Farm_irm_domain. Open up Content Management, then Information Rights Management and finally select the IRM node. On the right then select the IRM menu item, navigate to the Administration section and now we have four options, for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths or you need to comply to a regulation or government mandate, AES192 is a good start. You can change this later on without worry. The most important setting here we need to make is the Server URL. In this blog article I go over why this URL is so important, basically every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, then nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content onto 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation nor even production environments and my experience is that 3 is a better number. Next step is to create the keystore for the IRM server. 
When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server either in a Java Key Store or a HSM. These keys need to be generated and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little easier. [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/ [oracle@irmsrv bin]$ ./setWLSEnv.sh CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar: PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin Your environment has been set. [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks Enter keystore password: Re-enter new password: What is your first and last name? [Unknown]: Simon Thorpe What is the name of your organizational unit? [Unknown]: Oracle What is the name of your organization? [Unknown]: Oracle What is the name of your City or Locality? [Unknown]: San Francisco What is the name of your State or Province? [Unknown]: CA What is the two-letter country code for this unit? [Unknown]: US Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes Enter key password for (RETURN if same as keystore password): At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is this folder would be backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the keys will be wrapped with these keys, lose these keys, and you can't get access to any secured content, pretty important. Now we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Going back to the General Settings page in Enterprise Manager scroll down to Keystore Settings. Leave the type as JKS but change the location to; /oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks and hit Apply. The final step with regards to the key store is we need to tell the server what the password is for the Java Key Store so that it can be opened and the keys accessed. Once more fire up a console window and run these commands (again i've greyed out the clutter to see the commands easier). You will see dummy passed into the commands, this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used. 
[oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/ [oracle@irmsrv bin]$ ./wlst.sh ... lots of settings fly by... Welcome to WebLogic Server Administration Scripting Shell Type help() for help on available commands wls:/offline>connect('weblogic','password','t3://irmsrv.us.oracle.com:7001') Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'. Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead. wls:/irm_domain/serverConfig>createCred("IRM","keystore:irm.jks","dummy","password") Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)wls:/irm_domain/serverConfig>createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password") Already in Domain Runtime Tree wls:/irm_domain/serverConfig> At last we are now ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1 and we need to start this, use the following commands in a new console window. cd /oracle/middleware/user_projects/domains/irm_domain/bin/ ./startManagedWebLogic.sh IRM_server1 This will start up the server in the console, unlike the Admin server, you need to provide the username and password for the service to start. Enter in your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file, read more about this in the WebLogic Server documentation. Once running, wait until you see the line; <Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode> At this point we can now login to the Oracle IRM Management Website at the URL. http://irm.company.internal:1600/irm_rights/ The server is just configured for HTTP at the moment, no SSL involved. Just want to ensure we can get a working system up and running. You should now see a login like the image on the right and you can now login using your weblogic username and password. The next article in this guide goes over adding SSL and now testing your server by actually adding a few users, sealing some content and opening this content as a user.

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line of business applications.  Getting things set up correctly is critical to being to being able to take full advantage of the RIA services plumbing and when developers struggle with the setup they tend to shy away from the solution as a whole.  I’m a big proponent of RIA services and wanted to take the opportunity to share some of my experiences in setting up these types of projects.  In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow) and the information shared in this post was promised during that presentation. One other thing I want to mention before diving in is the existence of a number of other great posts on this subject.  I’ve learned a lot from many of them and wanted to call out a few of them.  The purpose of my post is to point out some of the gotchas that people get caught up on in the process but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject: http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx Technologies I don’t intend for this post to turn into a full WCF RIA Services tutorial but I did want to point out what technologies we will be using: Visual Studio.NET 2010 Silverlight 4.0 WCF RIA Services for Visual Studio 2010 Entity Framework 4.0 I also wanted to point out that the screenshots came from my personal development box which has a number of additional plug-ins and frameworks loaded so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express.  The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#…sorry to you VB folks but the concepts are 100% identical. Setting up anew RIA Services Project This section will provide a step-by-step walkthrough of setting up a new RIA services project using a shared DLL for server side code and a simple Entity Framework model for data access.  All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace.  This would be modified to match your companies standards. First, open Visual Studio and open the new project window via File->New->Project.  In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select “Silverlight Application” as your project type.  Verify your solution name and location are set appropriately.  Note that the project name we specified in the example below ends with .Client.  This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that.  Click Ok to continue. During the creation on a new Silverlight 4 project you will be prompted with the following dialog to create a new web ASP.NET web project to host your Silverlight content.  As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the “Enable WCF RIA Services” option is checked and click OK.  
Obviously, there are some other options here which have an effect on your solution and you are welcome to look around.  For our example we are going to leave the ASP.NET Web Application Project selected.  If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project these options are available as well.  Also, whichever web project type you select, the name can be modified here as well.  Note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix. At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows: Now we want to add our WCF RIA Services projects to this same solution.  To do so, right-click on the Solution node in the solution explorer and select Add->New Project.  In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below.  Make sure your project name is set appropriately as well.  For the sample below, we will name the project “ArchitectNow.RIAServices.Server.Entities”.   The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below).  Click OK to continue. Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution.  The first will be an project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension.  The full solution (with all 4 projects) is shown in the image below.  The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects.  It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server.   The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework).  This is our server side code and business logic and the RIA Services plumbing will maintain a link between this project and the front end.  Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end. At this point, we want to do a little cleanup of the projects in our solution and we will do so by deleting the “Class1.cs” class from both the .Entities project and the .Entities.Web project.  (Has anyone ever intentionally named a class “Class1”?) Next, we need to configure a few references to make RIA Services work.  THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library).  Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).  
To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select “Add Reference” from the resulting context menu.  You will want to make sure these references are added as “Project” references to simplify your future debugging.  To reiterate the reference direction using the project names we have utilized in this example thus far:  .Client references .Entities and .Client.Web references .Entities.Web.  If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library and the ASP.NET host project must reference the server-side class library. Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web).  We will do this by right-clicking on this project (ArchitectNow.Server.Entities.Web in the above diagram) and selecting Add->New Item.  In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram.  For now we will call this simply SampleDataModel.edmx and click OK. It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data and any data access technology is supported (as long as the server side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post).  We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine.  The following diagram shows a simple EF Data Model with two tables that I reverse engineered from a local data store.   If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access. At this point, once you have an EF data model generated as an EDMX into your .Entities.Web project YOU MUST BUILD YOUR SOLUTION.  I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful.  Obviously, if you have any build errors, these must be addressed at this point. At this point we will add a RIA Services Domain Service to our .Entities.Web project (our server side code).  We will need to right-click on the .Entities.Web project and select Add->New Item.  In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below).  Next, click "Add”. After clicking “Add” to include the Domain Service Class in the selected project, you will be presented with the following dialog.  In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service.  If the “Available DataContext/ObjectContext classes” dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX.  I would also recommend verifying that the “Generate associated classes for metadata” option is selected.  Once you have selected the appropriate options, click “OK”. 
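
When you click OK, the wizard generates the domain service for you. For readers who have not seen one before, the sketch below shows roughly what the generated service and its metadata "buddy" class look like; the entity and context names (Customer, SampleDataModelEntities) are placeholders and the code the wizard actually generates will differ in detail.

    using System.Linq;
    using System.ComponentModel.DataAnnotations;
    using System.ServiceModel.DomainServices.EntityFramework;
    using System.ServiceModel.DomainServices.Hosting;

    // Roughly what the wizard places in SampleService.cs (simplified sketch).
    [EnableClientAccess()]
    public class SampleService : LinqToEntitiesDomainService<SampleDataModelEntities>
    {
        public IQueryable<Customer> GetCustomers()
        {
            return this.ObjectContext.Customers;
        }

        public void InsertCustomer(Customer customer)
        {
            this.ObjectContext.Customers.AddObject(customer);
        }

        public void UpdateCustomer(Customer currentCustomer)
        {
            this.ObjectContext.Customers.AttachAsModified(currentCustomer, this.ChangeSet.GetOriginal(currentCustomer));
        }

        public void DeleteCustomer(Customer customer)
        {
            this.ObjectContext.Customers.Attach(customer);
            this.ObjectContext.Customers.DeleteObject(customer);
        }
    }

    // Roughly what SampleService.metadata.cs contains: validation attributes applied
    // through a "buddy" metadata class rather than on the generated entity itself.
    [MetadataType(typeof(Customer.CustomerMetadata))]
    public partial class Customer
    {
        internal sealed class CustomerMetadata
        {
            [Required]
            [StringLength(50)]
            public string Name { get; set; }
        }
    }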
Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following: Note that in the solution you now have a SampleDataModel.edmx which represents your EF data mapping to your database and a SampleService.cs which will contain a large amount of generated RIA Services code which RIA Services utilizes to access this data from the Silverlight front-end.  You will put all your server side data access code and logic into the SampleService.cs class.  The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes. FINAL AND KEY CONFIGURATION STEP!  One key step that causes significant headache to developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Context file within that project.  While we didn’t point it out at the time you can see it in the image above.  This connection string will be required for the EF data model to successfully locate it’s data.  Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.   Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA services calls through the ASP.NET host project (i.e. .Client.Web).  This host project has a reference to the .Entities.Web project which actually contains the code so all will pass through correctly EXCEPT the fact that the host project will utilize it’s own Web.Config for any configuration settings.  For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project.  I know this is a bit tedious and I wish there were a simpler solution but it is required for our RIA Services Domain Service to be made available to the front end Silverlight project.  Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config.  Unfortunately, the <system.webServer> section will exist in both and the contents of this section will need to be manually merged.  Fortunately, this is a step that needs to be taken only once per solution.  As you add additional data structures and Domain Services methods to the server no additional changes will be necessary to the Web.Config. Next Steps At this point, we have walked through the basic setup of a simple RIA services solution.  Unfortunately, there is still a lot to know about RIA services and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven’t even made a single RIA services call).  I plan on posting a few more introductory posts over the next few weeks to take us to this step.  If you have any questions on the content in this post feel free to reach out to me via this Blog and I’ll gladly point you in (hopefully) the right direction. Resources Prior to closing out this post, I wanted to share a number or resources to help you get started with RIA services.  While I plan on posting more on the subject, I didn’t invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.   
The books and online resources below will go a long way to making you extremely productive with RIA services in the shortest time possible.  The only thing required of you is the dedication to take advantage of the resources available. Books Pro Business Applications with Silverlight 4 http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2 Silverlight 4 in Action http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1 Pro Silverlight for the Enterprise (Books for Professionals by Professionals) http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3 Web Content RIA Services http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes http://silverlight.net/riaservices/ http://www.silverlight.net/learn/videos/all/net-ria-services-intro/ http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/ http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01 http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx Silverlight www.silverlight.net http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script>   You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 2

    - by shiju
    In my previous post Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1, we have discussed on how to work with ASP.NET MVC 3 and EF Code First for developing web apps. We have created generic repository and unit of work with EF Code First for our ASP.NET MVC 3 application and did basic CRUD operations against a simple domain entity. In this post, I will demonstrate on working with domain entity with deep object graph, Service Layer and View Models and will also complete the rest of the demo application. In the previous post, we have done CRUD operations against Category entity and this post will be focus on Expense entity those have an association with Category entity. You can download the source code from http://efmvc.codeplex.com . The following frameworks will be used for this step by step tutorial.    1. ASP.NET MVC 3 RTM    2. EF Code First CTP 5    3. Unity 2.0 Domain Model Category Entity public class Category   {       public int CategoryId { get; set; }       [Required(ErrorMessage = "Name Required")]       [StringLength(25, ErrorMessage = "Must be less than 25 characters")]       public string Name { get; set;}       public string Description { get; set; }       public virtual ICollection<Expense> Expenses { get; set; }   } Expense Entity public class Expense     {                public int ExpenseId { get; set; }                public string  Transaction { get; set; }         public DateTime Date { get; set; }         public double Amount { get; set; }         public int CategoryId { get; set; }         public virtual Category Category { get; set; }     } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create repository class for handling CRUD operations for Expense entity public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository     {     public ExpenseRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to Service Layer, checkout Martin Fowler's article Service Layer . According to Martin Fowler, Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight and do not put much of business logic onto it. We can use the service layer as the business logic layer and can encapsulate the rules of the application. 
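ExpenseRepository above inherits all of its data-access behaviour from the generic RepositoryBase<T> and IRepository<T> built in Part 1. For readers without Part 1 open, the contract implied by the way the service below uses the repository looks roughly like this (a sketch inferred from usage, not the exact Part 1 code):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface IRepository<T> where T : class
{
    void Add(T entity);                                        // queue an insert
    void Delete(T entity);                                     // queue a delete
    T GetById(int id);                                         // fetch a single entity by key
    IEnumerable<T> GetMany(Expression<Func<T, bool>> where);   // fetch a filtered list
}

Writes only reach the database when IUnitOfWork.Commit() is called, which is why the service methods below call unitOfWork.Commit() after Add and Delete.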
Let’s create a Service class for coordinates the transaction for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime ednDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } }   View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs for domain model and it is representing the domain of our applications. On the other hand, View Model objects are designed for our needs for views. We have an Expense domain entity that has an association with Category. While we are creating a new Expense, we have to specify that in which Category belongs with the new Expense transaction. The user interface for Expense transaction will have form fields for representing the Expense entity and a CategoryId for representing the Category. So let's create view model for representing the need for Expense transactions. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purpose of View template and contains the all validation rules. It has properties for mapping values to Expense entity and a property Category for binding values to a drop-down for list values of Category. 
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below   public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
<input type="submit" value="Search By TransactionDate" /></td>         </tr>     </table>         }   <div id="divExpenseList">             @Html.Partial("ExpenseList", Model)     </div> <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of Index view is providing Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block. In that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously by using JavaScript. The search functionality will call the Index Action method and this will return partial view ExpenseList for updating the search result. We want to update the response UI for the Ajax request onto divExpenseList element. So we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality is using a date range so we are providing two date pickers using jQuery datepicker. You need to add reference to the following JavaScript files to working with jQuery datepicker. jquery-ui.js jquery.ui.datepicker.js For theme support for datepicker, we can use a customized CSS class. In our example we have used a CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit jquery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we have used following JavaScript function to initialize the UI element to show date picker. <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script>   Source Code You can download the source code from http://efmvc.codeplex.com/ . Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices  such as Dependency Injection, Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app on later time.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
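As a rough sketch (the action, element IDs, and handler functions here are invented for illustration, not the site's actual code), such a call looks something like this:

// Emitted from the view so the route isn't hardcoded, e.g.:
// var getCustomerUrl = '@Url.Action("GetCustomer", "Customer")';
$("#lookupButton").click(function () {
    $.ajax({
        url: getCustomerUrl,
        type: "POST",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        data: JSON.stringify({ customerId: $("#customerId").val() }),
        success: function (result) {
            // result is the JavaScript object deserialized from the JsonResult
            showCustomer(result);
        },
        error: function () {
            showError("The request failed.");
        }
    });
});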
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part II

    - by mbridge
    In previous post, I have discussed on how to work with ASP.NET MVC 3 and EF Code First for developing web apps. In this post, I will demonstrate on working with domain entity with deep object graph, Service Layer and View Models and will also complete the rest of the demo application. In the previous post, we have done CRUD operations against Category entity and this post will be focus on Expense entity those have an association with Category entity. Domain Model Category Entity public class Category   {       public int CategoryId { get; set; }       [Required(ErrorMessage = "Name Required")]       [StringLength(25, ErrorMessage = "Must be less than 25 characters")]       public string Name { get; set;}       public string Description { get; set; }       public virtual ICollection<Expense> Expenses { get; set; }   } Expense Entity public class Expense     {                public int ExpenseId { get; set; }                public string  Transaction { get; set; }         public DateTime Date { get; set; }         public double Amount { get; set; }         public int CategoryId { get; set; }         public virtual Category Category { get; set; }     } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create repository class for handling CRUD operations for Expense entity public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository     {     public ExpenseRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to Service Layer, checkout Martin Fowler's article Service Layer . According to Martin Fowler, Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight and do not put much of business logic onto it. We can use the service layer as the business logic layer and can encapsulate the rules of the application. 
Let’s create a Service class for coordinates the transaction for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime ednDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } } View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs for domain model and it is representing the domain of our applications. On the other hand, View Model objects are designed for our needs for views. We have an Expense domain entity that has an association with Category. While we are creating a new Expense, we have to specify that in which Category belongs with the new Expense transaction. The user interface for Expense transaction will have form fields for representing the Expense entity and a CategoryId for representing the Category. So let's create view model for representing the need for Expense transactions. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purpose of View template and contains the all validation rules. It has properties for mapping values to Expense entity and a property Category for binding values to a drop-down for list values of Category. 
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
<input type="submit" value="Search By TransactionDate" /></td>         </tr>     </table>         }   <div id="divExpenseList">             @Html.Partial("ExpenseList", Model)     </div> <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of Index view is providing Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block. In that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously by using JavaScript. The search functionality will call the Index Action method and this will return partial view ExpenseList for updating the search result. We want to update the response UI for the Ajax request onto divExpenseList element. So we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality is using a date range so we are providing two date pickers using jQuery datepicker. You need to add reference to the following JavaScript files to working with jQuery datepicker. - jquery-ui.js - jquery.ui.datepicker.js For theme support for datepicker, we can use a customized CSS class. In our example we have used a CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit jquery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we have used following JavaScript function to initialize the UI element to show date picker. <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices  such as Dependency Injection, Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app on later time.

    Read the article

  • iSCSI timeouts under high load

    - by Antonio
    I have two servers connected via Gigabit Ethernet. One is iSCSI target, the second one is initiator. When I run mkfs.ext4 at initiator, after a while disk IO slows down critically. In the target host I can see the following in syslog: Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668c 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668c 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668d 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668d 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668e 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 1196696 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 1196696 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119669e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119669e 6 Sep 14 09:40:04 sh11 tgtd: abort_task_set(1139) found 119669f 0 Sep 14 09:40:04 sh11 tgtd: abort_cmd(1115) found 119669f 6 And load average grows to 12 or even more: # uptime 12:37:00 up 23 days, 13:25, 1 user, load average: 12.00, 7.00, 4.00 CentOS 6.3 tgtd 1.0.24 Intel Pentium 4 2.4GHz 1Gb RAM 2Tb WD Cavlar Green SATA 2.0 #lspci 00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 02) 00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 02) 00:1d.0 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82) 00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 02) 00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 02) 00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 02) 00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 02) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV200 QW [Radeon 7500] 02:01.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11) (rev 11) 02:02.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:03.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:04.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02) 02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (CNR) Ethernet Controller (rev 82) Is there a way to tune target host to avoid these timeouts?

    Read the article

  • In MSSQL Server 2005 Dev Edition, I faced index corruption

    - by tranhuyhung
    Hi all, When running stored procedures in MSSQL Server, I found it failed and the DBMS (MSSQL Server 2005 Dev Edition) notified that some indexes are corrupted. Please advice me, here below is DBCC logs: DBCC results for 'itopup_dev'. Service Broker Msg 9675, State 1: Message Types analyzed: 14. Service Broker Msg 9676, State 1: Service Contracts analyzed: 6. Service Broker Msg 9667, State 1: Services analyzed: 3. Service Broker Msg 9668, State 1: Service Queues analyzed: 3. Service Broker Msg 9669, State 1: Conversation Endpoints analyzed: 0. Service Broker Msg 9674, State 1: Conversation Groups analyzed: 0. Service Broker Msg 9670, State 1: Remote Service Bindings analyzed: 0. DBCC results for 'sys.sysrowsetcolumns'. There are 1148 rows in 14 pages for object "sys.sysrowsetcolumns". DBCC results for 'sys.sysrowsets'. There are 187 rows in 2 pages for object "sys.sysrowsets". DBCC results for 'sysallocunits'. There are 209 rows in 3 pages for object "sysallocunits". DBCC results for 'sys.sysfiles1'. There are 2 rows in 1 pages for object "sys.sysfiles1". DBCC results for 'sys.syshobtcolumns'. There are 1148 rows in 14 pages for object "sys.syshobtcolumns". DBCC results for 'sys.syshobts'. There are 187 rows in 2 pages for object "sys.syshobts". DBCC results for 'sys.sysftinds'. There are 0 rows in 0 pages for object "sys.sysftinds". DBCC results for 'sys.sysserefs'. There are 209 rows in 1 pages for object "sys.sysserefs". DBCC results for 'sys.sysowners'. There are 15 rows in 1 pages for object "sys.sysowners". DBCC results for 'sys.sysprivs'. There are 135 rows in 1 pages for object "sys.sysprivs". DBCC results for 'sys.sysschobjs'. There are 817 rows in 21 pages for object "sys.sysschobjs". DBCC results for 'sys.syscolpars'. There are 2536 rows in 71 pages for object "sys.syscolpars". DBCC results for 'sys.sysnsobjs'. There are 1 rows in 1 pages for object "sys.sysnsobjs". DBCC results for 'sys.syscerts'. There are 0 rows in 0 pages for object "sys.syscerts". DBCC results for 'sys.sysxprops'. There are 12 rows in 4 pages for object "sys.sysxprops". DBCC results for 'sys.sysscalartypes'. There are 27 rows in 1 pages for object "sys.sysscalartypes". DBCC results for 'sys.systypedsubobjs'. There are 0 rows in 0 pages for object "sys.systypedsubobjs". DBCC results for 'sys.sysidxstats'. There are 466 rows in 15 pages for object "sys.sysidxstats". DBCC results for 'sys.sysiscols'. There are 616 rows in 6 pages for object "sys.sysiscols". DBCC results for 'sys.sysbinobjs'. There are 23 rows in 1 pages for object "sys.sysbinobjs". DBCC results for 'sys.sysobjvalues'. There are 1001 rows in 376 pages for object "sys.sysobjvalues". DBCC results for 'sys.sysclsobjs'. There are 14 rows in 1 pages for object "sys.sysclsobjs". DBCC results for 'sys.sysrowsetrefs'. There are 0 rows in 0 pages for object "sys.sysrowsetrefs". DBCC results for 'sys.sysremsvcbinds'. There are 0 rows in 0 pages for object "sys.sysremsvcbinds". DBCC results for 'sys.sysxmitqueue'. There are 0 rows in 0 pages for object "sys.sysxmitqueue". DBCC results for 'sys.sysrts'. There are 1 rows in 1 pages for object "sys.sysrts". DBCC results for 'sys.sysconvgroup'. There are 0 rows in 0 pages for object "sys.sysconvgroup". DBCC results for 'sys.sysdesend'. There are 0 rows in 0 pages for object "sys.sysdesend". DBCC results for 'sys.sysdercv'. There are 0 rows in 0 pages for object "sys.sysdercv". DBCC results for 'sys.syssingleobjrefs'. There are 317 rows in 2 pages for object "sys.syssingleobjrefs". 
DBCC results for 'sys.sysmultiobjrefs'. There are 3607 rows in 37 pages for object "sys.sysmultiobjrefs". DBCC results for 'sys.sysdbfiles'. There are 2 rows in 1 pages for object "sys.sysdbfiles". DBCC results for 'sys.sysguidrefs'. There are 0 rows in 0 pages for object "sys.sysguidrefs". DBCC results for 'sys.sysqnames'. There are 91 rows in 1 pages for object "sys.sysqnames". DBCC results for 'sys.sysxmlcomponent'. There are 93 rows in 1 pages for object "sys.sysxmlcomponent". DBCC results for 'sys.sysxmlfacet'. There are 97 rows in 1 pages for object "sys.sysxmlfacet". DBCC results for 'sys.sysxmlplacement'. There are 17 rows in 1 pages for object "sys.sysxmlplacement". DBCC results for 'sys.sysobjkeycrypts'. There are 0 rows in 0 pages for object "sys.sysobjkeycrypts". DBCC results for 'sys.sysasymkeys'. There are 0 rows in 0 pages for object "sys.sysasymkeys". DBCC results for 'sys.syssqlguides'. There are 0 rows in 0 pages for object "sys.syssqlguides". DBCC results for 'sys.sysbinsubobjs'. There are 0 rows in 0 pages for object "sys.sysbinsubobjs". DBCC results for 'TBL_BONUS_TEMPLATES'. There are 0 rows in 0 pages for object "TBL_BONUS_TEMPLATES". DBCC results for 'TBL_ROLE_PAGE_GROUP'. There are 18 rows in 1 pages for object "TBL_ROLE_PAGE_GROUP". DBCC results for 'TBL_BONUS_LEVELS'. There are 0 rows in 0 pages for object "TBL_BONUS_LEVELS". DBCC results for 'TBL_SUPERADMIN'. There are 1 rows in 1 pages for object "TBL_SUPERADMIN". DBCC results for 'TBL_ADMIN_ROLES'. There are 11 rows in 1 pages for object "TBL_ADMIN_ROLES". DBCC results for 'TBL_ADMIN_USER_ROLE'. There are 42 rows in 1 pages for object "TBL_ADMIN_USER_ROLE". DBCC results for 'TBL_BONUS_CALCULATION_HISTORIES'. There are 0 rows in 0 pages for object "TBL_BONUS_CALCULATION_HISTORIES". DBCC results for 'TBL_MERCHANT_MOBILES'. There are 0 rows in 0 pages for object "TBL_MERCHANT_MOBILES". DBCC results for 'TBL_ARCHIVE_EXPORTED_SOFTPINS'. There are 16030918 rows in 35344 pages for object "TBL_ARCHIVE_EXPORTED_SOFTPINS". DBCC results for 'TBL_ARCHIVE_LOGS'. There are 280 rows in 2 pages for object "TBL_ARCHIVE_LOGS". DBCC results for 'TBL_ADMIN_USERS'. There are 29 rows in 1 pages for object "TBL_ADMIN_USERS". DBCC results for 'TBL_SYSTEM_ALERT_GROUPS'. There are 4 rows in 1 pages for object "TBL_SYSTEM_ALERT_GROUPS". DBCC results for 'TBL_EXPORTED_TRANSACTIONS'. There are 7848 rows in 89 pages for object "TBL_EXPORTED_TRANSACTIONS". DBCC results for 'TBL_SYSTEM_ALERTS'. There are 968 rows in 9 pages for object "TBL_SYSTEM_ALERTS". DBCC results for 'TBL_SYSTEM_ALERT_GROUP_MEMBERS'. There are 1 rows in 1 pages for object "TBL_SYSTEM_ALERT_GROUP_MEMBERS". DBCC results for 'TBL_ESTIMATED_TIME'. There are 11 rows in 1 pages for object "TBL_ESTIMATED_TIME". DBCC results for 'TBL_SYSTEM_ALERT_MEMBERS'. There are 0 rows in 1 pages for object "TBL_SYSTEM_ALERT_MEMBERS". DBCC results for 'TBL_COMMISSIONS'. There are 10031 rows in 106 pages for object "TBL_COMMISSIONS". DBCC results for 'TBL_CATEGORIES'. There are 3 rows in 1 pages for object "TBL_CATEGORIES". DBCC results for 'TBL_SERVICE_PROVIDERS'. There are 11 rows in 1 pages for object "TBL_SERVICE_PROVIDERS". DBCC results for 'TBL_CATEGORY_SERVICE_PROVIDER'. There are 11 rows in 1 pages for object "TBL_CATEGORY_SERVICE_PROVIDER". DBCC results for 'TBL_PRODUCTS'. There are 73 rows in 6 pages for object "TBL_PRODUCTS". DBCC results for 'TBL_MERCHANT_KEYS'. There are 291 rows in 30 pages for object "TBL_MERCHANT_KEYS". DBCC results for 'TBL_POS_UNLOCK_KEYS'. 
There are 0 rows in 0 pages for object "TBL_POS_UNLOCK_KEYS". DBCC results for 'TBL_POS'. There are 0 rows in 0 pages for object "TBL_POS". DBCC results for 'TBL_IMPORT_BATCHES'. There are 3285 rows in 84 pages for object "TBL_IMPORT_BATCHES". DBCC results for 'TBL_IMPORT_KEYS'. There are 2 rows in 1 pages for object "TBL_IMPORT_KEYS". DBCC results for 'TBL_PRODUCT_COMMISSION_TEMPLATES'. There are 634 rows in 4 pages for object "TBL_PRODUCT_COMMISSION_TEMPLATES". DBCC results for 'TBL_POS_SETTLE_TRANSACTIONS'. There are 0 rows in 0 pages for object "TBL_POS_SETTLE_TRANSACTIONS". DBCC results for 'TBL_CHANGE_KEY_SOFTPINS'. There are 0 rows in 1 pages for object "TBL_CHANGE_KEY_SOFTPINS". DBCC results for 'TBL_POS_RETURN_TRANSACTIONS'. There are 0 rows in 0 pages for object "TBL_POS_RETURN_TRANSACTIONS". DBCC results for 'TBL_POS_SOFTPINS'. There are 0 rows in 0 pages for object "TBL_POS_SOFTPINS". DBCC results for 'TBL_POS_MENUS'. There are 0 rows in 0 pages for object "TBL_POS_MENUS". DBCC results for 'TBL_COMMISSION_TEMPLATES'. There are 23 rows in 1 pages for object "TBL_COMMISSION_TEMPLATES". DBCC results for 'TBL_DOWNLOAD_TRANSACTIONS'. There are 170820 rows in 1789 pages for object "TBL_DOWNLOAD_TRANSACTIONS". DBCC results for 'TBL_IMPORT_TEMP_SOFTPINS'. There are 0 rows in 1 pages for object "TBL_IMPORT_TEMP_SOFTPINS". DBCC results for 'TBL_REGIONS'. There are 2 rows in 1 pages for object "TBL_REGIONS". DBCC results for 'TBL_SOFTPINS'. There are 9723677 rows in 126611 pages for object "TBL_SOFTPINS". DBCC results for 'sysdiagrams'. There are 0 rows in 0 pages for object "sysdiagrams". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTIONS'. There are 9302 rows in 53 pages for object "TBL_SYNCHRONIZE_TRANSACTIONS". DBCC results for 'TBL_SALEMEN'. There are 32 rows in 1 pages for object "TBL_SALEMEN". DBCC results for 'TBL_RESERVATION_SOFTPINS'. There are 131431 rows in 1629 pages for object "TBL_RESERVATION_SOFTPINS". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTION_ITEMS'. There are 5345 rows in 16 pages for object "TBL_SYNCHRONIZE_TRANSACTION_ITEMS". DBCC results for 'TBL_ACCOUNTS'. There are 1 rows in 1 pages for object "TBL_ACCOUNTS". DBCC results for 'TBL_SYNCHRONIZE_TRANSACTION_SOFTPIN'. There are 821988 rows in 2744 pages for object "TBL_SYNCHRONIZE_TRANSACTION_SOFTPIN". *DBCC results for 'TBL_EXPORTED_SOFTPINS'. Msg 8928, Level 16, State 1, Line 1 Object ID 1716917188, index ID 1, partition ID 72057594046119936, alloc unit ID 72057594050838528 (type In-row data): Page (1:677314) could not be processed. See other errors for details. Msg 8939, Level 16, State 7, Line 1 Table error: Object ID 1716917188, index ID 1, partition ID 72057594046119936, alloc unit ID 72057594050838528 (type In-row data), page (1:677314). Test (m_freeData = PAGEHEADSIZE && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 15428 and 7240. There are 2267937 rows in 6133 pages for object "TBL_EXPORTED_SOFTPINS". CHECKDB found 0 allocation errors and 2 consistency errors in table 'TBL_EXPORTED_SOFTPINS' (object ID 1716917188).* DBCC results for 'TBL_DOWNLOAD_SOFTPINS'. There are 7029404 rows in 17999 pages for object "TBL_DOWNLOAD_SOFTPINS". DBCC results for 'TBL_MERCHANT_BALANCE_CREDIT_PAID'. There are 0 rows in 0 pages for object "TBL_MERCHANT_BALANCE_CREDIT_PAID". DBCC results for 'TBL_ARCHIVE_SOFTPINS'. There are 44015040 rows in 683692 pages for object "TBL_ARCHIVE_SOFTPINS". DBCC results for 'TBL_ACCOUNT_BALANCE_LOGS'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_BALANCE_LOGS". 
DBCC results for 'TBL_BLOCK_BATCHES'. There are 23 rows in 1 pages for object "TBL_BLOCK_BATCHES". DBCC results for 'TBL_BLOCK_BATCH_SOFTPIN'. There are 396 rows in 1 pages for object "TBL_BLOCK_BATCH_SOFTPIN". DBCC results for 'TBL_MERCHANTS'. There are 290 rows in 22 pages for object "TBL_MERCHANTS". DBCC results for 'TBL_DOWNLOAD_TRANSACTION_ITEMS'. There are 189296 rows in 1241 pages for object "TBL_DOWNLOAD_TRANSACTION_ITEMS". DBCC results for 'TBL_BLOCK_BATCH_CONDITIONS'. There are 23 rows in 1 pages for object "TBL_BLOCK_BATCH_CONDITIONS". DBCC results for 'TBL_SP_ADVERTISEMENTS'. There are 6 rows in 1 pages for object "TBL_SP_ADVERTISEMENTS". DBCC results for 'TBL_SERVER_KEYS'. There are 1 rows in 1 pages for object "TBL_SERVER_KEYS". DBCC results for 'TBL_ARCHIVE_DOWNLOAD_SOFTPINS'. There are 27984122 rows in 60773 pages for object "TBL_ARCHIVE_DOWNLOAD_SOFTPINS". DBCC results for 'TBL_ACCOUNT_BALANCE_REQUESTS'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_BALANCE_REQUESTS". DBCC results for 'TBL_MERCHANT_TERMINALS'. There are 633 rows in 4 pages for object "TBL_MERCHANT_TERMINALS". DBCC results for 'TBL_SP_PREFIXES'. There are 6 rows in 1 pages for object "TBL_SP_PREFIXES". DBCC results for 'TBL_DIRECT_TOPUP_TRANSACTIONS'. There are 43 rows in 1 pages for object "TBL_DIRECT_TOPUP_TRANSACTIONS". DBCC results for 'TBL_MERCHANT_BALANCE_REQUESTS'. There are 19367 rows in 171 pages for object "TBL_MERCHANT_BALANCE_REQUESTS". DBCC results for 'TBL_ACTION_LOGS'. There are 133714 rows in 1569 pages for object "TBL_ACTION_LOGS". DBCC results for 'sys.queue_messages_1977058079'. There are 0 rows in 0 pages for object "sys.queue_messages_1977058079". DBCC results for 'sys.queue_messages_2009058193'. There are 0 rows in 0 pages for object "sys.queue_messages_2009058193". DBCC results for 'TBL_CODES'. There are 98 rows in 1 pages for object "TBL_CODES". DBCC results for 'TBL_MERCHANT_BALANCE_LOGS'. There are 183498 rows in 3178 pages for object "TBL_MERCHANT_BALANCE_LOGS". DBCC results for 'TBL_MERCHANT_CHANNEL_TEMPLATE'. There are 397 rows in 2 pages for object "TBL_MERCHANT_CHANNEL_TEMPLATE". DBCC results for 'sys.queue_messages_2041058307'. There are 0 rows in 0 pages for object "sys.queue_messages_2041058307". DBCC results for 'TBL_VNPTEPAY'. There are 0 rows in 0 pages for object "TBL_VNPTEPAY". DBCC results for 'TBL_PAGE_GROUPS'. There are 10 rows in 1 pages for object "TBL_PAGE_GROUPS". DBCC results for 'TBL_PAGE_GROUP_PAGE'. There are 513 rows in 2 pages for object "TBL_PAGE_GROUP_PAGE". DBCC results for 'TBL_ACCOUNT_CHANNEL_TEMPLATE'. There are 0 rows in 0 pages for object "TBL_ACCOUNT_CHANNEL_TEMPLATE". DBCC results for 'TBL_PAGES'. There are 148 rows in 3 pages for object "TBL_PAGES". CHECKDB found 0 allocation errors and 2 consistency errors in database 'itopup_dev'. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (itopup_dev). DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Read the article

  • Self-powered USB hub and power supply adapter ampere capacity

    - by galacticninja
    I am looking for a power supply adapter for my USB hub so that it can support at least two bus-powered external hard drives. The hub's rating is 5 volts, 2 amperes. I would like to know if it would be OK to buy a power supply adapter rated at less than 2A. I've been looking at power supply adapters, and the ones that do supply 2A are more expensive (more expensive than the USB hub itself) than those that supply less than 2A. Will a power supply adapter rated at less than 2A (~1-1.5A) reliably power two external hard drives? The external hard drives are both bus-powered Western Digital My Passport Essentials 250 GB. The OS is Windows XP SP3.

    Read the article

< Previous Page | 224 225 226 227 228 229 230 231 232 233 234 235  | Next Page >