Search Results


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott W. Ambler and Pramod J. Sadalage; Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
    First of all I need to add some data into my solitary table – this can be done a number of ways, but I'll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row. Then click on "Set Primary Key": NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking "save and close": We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select "Commit Changes to Source Control.."): The display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I'm going to use here is a very simple one - are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS select "SQL Test" from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt: -- Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON Declare @Output VarChar(max) Set @Output = '' SELECT @Output = @Output + Email + Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE '%_@__%.__%' If @Output > '' Begin Set @Output = Char(13) + Char(10) + @Output EXEC tSQLt.Fail @Output End END; Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
    Security is a very intriguing topic. You will find it everywhere and you need to implement it everywhere. Yes, you need. Unfortunately, one can easily forget it while implementing the last mile. The Last Mile In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful to speed up database connection creation and to split the load. Another very common practice is to use a specific user, often called a technical user, to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find or already have more roles in your application). While these user roles are pretty well used within your application, once the flow of execution enters the database everything is gone. Each and every user just has one role and is the same database user. Issues? What Issues? As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations or, more importantly, current and upcoming regulations and laws require that your application must be able to apply different security measures on a per-user-role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or which user has accessed data that he/she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc. or end user applications like BI Publisher, Oracle Forms and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate. A big oversight for your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that he/she is not expected to do from the design of your application. Counter Measures In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help with regard to a connection pool user, because this one is still not the real user. Oracle Proxy Authentication Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised, the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools. CREATE USER app_user IDENTIFIED BY abcd1234; GRANT CREATE SESSION TO app_user; But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play.
    You define database roles that hold the grants for your identified user groups. Once you have these groups, you grant access through the proxy user with the application role to the specific user. CREATE ROLE app_role_a; GRANT app_role_a TO hr; ALTER USER hr GRANT CONNECT THROUGH app_user WITH ROLE app_role_a; Now, hr has permission to connect to the database through the proxy user. Through the role you can restrict hr's rights to those needed by the application only. If hr connects to the database directly, all assigned roles and permissions apply. Testing the Setup To test the setup you can use SQL*Plus and connect to your database: $ sqlplus app_user[hr]/abcd1234 Java Persistence API The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mix in some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations you only specify what cannot be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name, otherwise it will fail. package demo.proxy.ejb; import java.io.Serializable; import java.sql.Timestamp; import java.util.List; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.NamedQueries; import javax.persistence.NamedQuery; import javax.persistence.OneToMany; import javax.persistence.Table; @Entity @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") }) @Table(name = "EMPLOYEES") public class Employee implements Serializable { @Column(name="COMMISSION_PCT") private Double commissionPct; @Column(name="DEPARTMENT_ID") private Long departmentId; @Column(nullable = false, unique = true, length = 25) private String email; @Id @Column(name="EMPLOYEE_ID", nullable = false) private Long employeeId; @Column(name="FIRST_NAME", length = 20) private String firstName; @Column(name="HIRE_DATE", nullable = false) private Timestamp hireDate; @Column(name="JOB_ID", nullable = false, length = 10) private String jobId; @Column(name="LAST_NAME", nullable = false, length = 25) private String lastName; @Column(name="PHONE_NUMBER", length = 20) private String phoneNumber; private Double salary; @ManyToOne @JoinColumn(name = "MANAGER_ID") private Employee employee; @OneToMany(mappedBy = "employee") private List employeeList; public Employee() { } public Employee(Double commissionPct, Long departmentId, String email, Long employeeId, String firstName, Timestamp hireDate, String jobId, String lastName, Employee employee, String phoneNumber, Double salary) { this.commissionPct = commissionPct; this.departmentId = departmentId; this.email = email; this.employeeId = employeeId; this.firstName = firstName; this.hireDate = hireDate; this.jobId = jobId; this.lastName = lastName; this.employee = employee; this.phoneNumber = phoneNumber; this.salary = salary; } public Double getCommissionPct() { return commissionPct; } public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; } public Long getDepartmentId() { return departmentId; } public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; } public String getEmail() {
return email; } public void setEmail(String email) { this.email = email; } public Long getEmployeeId() { return employeeId; } public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public Timestamp getHireDate() { return hireDate; } public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; } public String getJobId() { return jobId; } public void setJobId(String jobId) { this.jobId = jobId; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public Double getSalary() { return salary; } public void setSalary(Double salary) { this.salary = salary; } public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public List getEmployeeList() { return employeeList; } public void setEmployeeList(List employeeList) { this.employeeList = employeeList; } public Employee addEmployee(Employee employee) { getEmployeeList().add(employee); employee.setEmployee(this); return employee; } public Employee removeEmployee(Employee employee) { getEmployeeList().remove(employee); employee.setEmployee(null); return employee; } } JPA could be used in standalone applications and Java EE containers. In both worlds you normally create a Facade to retrieve or store the values of the Entities to or from the database. The Facade does this via an EntityManager which will be injected by the Java EE container. Here is sample Facade Session Bean for a Java EE container. 
    package demo.proxy.ejb; import java.util.HashMap; import java.util.List; import javax.ejb.Local; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.persistence.Query; import javax.interceptor.AroundInvoke; import javax.interceptor.InvocationContext; import oracle.jdbc.driver.OracleConnection; import org.eclipse.persistence.config.EntityManagerProperties; import org.eclipse.persistence.internal.jpa.EntityManagerImpl; @Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade") @Remote @Local public class DataFacadeBean implements DataFacade, DataFacadeLocal { @PersistenceContext(unitName = "TestEJB") private EntityManager em; private String username; public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) { // setSessionUser(); Query query = em.createQuery(jpqlStmt); if (firstResult > 0) { query = query.setFirstResult(firstResult); } if (maxResults > 0) { query = query.setMaxResults(maxResults); } return query.getResultList(); } public Employee persistEmployee(Employee employee) { // setSessionUser(); em.persist(employee); return employee; } public Employee mergeEmployee(Employee employee) { // setSessionUser(); return em.merge(employee); } public void removeEmployee(Employee employee) { // setSessionUser(); employee = em.find(Employee.class, employee.getEmployeeId()); em.remove(employee); } /** select o from Employee o */ public List getEmployeeFindAll() { Query q = em.createNamedQuery("Employee.findAll"); return q.getResultList(); } Putting Both Together To use Proxy Authentication with JPA and within a Java EE container you have to take care of the additional requirements: Use an OCI JDBC driver; Provide the user name that connects through the proxy user. Use an OCI JDBC driver To use the OCI JDBC driver you need to set up your JDBC data source file to use the correct JDBC URL. hr jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE))) oracle.jdbc.OracleDriver user app_user 62C32F70E98297522AD97E15439FAC0E SQL SELECT 1 FROM DUAL jdbc/hrDS Application Additionally you need to make sure that the version of the shared libraries of the OCI driver match the version of the JDBC driver in your Java EE container or Java application and are within your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instant Client software works perfectly. Provide the user name that connects through the proxy user This part needs some modification of your application software and session facade. Session Facade Changes In the Session Facade we must ensure that every call that goes through the EntityManager is prepared correctly and uniquely assigned to this session. The second is really important, as the EntityManager works with a connection pool and cannot guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the Session Facade we provide a method to set the username of the user that connects through the proxy user. This method needs to be called by the Facade client before doing anything else. public void setUsername(String name) { username = name; } Next we provide a means to instruct the TopLink EntityManager Delegate to use Oracle Proxy Authentication. (I love small helper methods to hide the nitty-gritty details and avoid repeating myself.)
    private void setSessionUser() { setSessionUser(username); } private void setSessionUser(String user) { if (user != null && !user.isEmpty()) { EntityManagerImpl emDelegate = ((EntityManagerImpl)em.getDelegate()); emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME); emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user); emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always"); } } The final step is to use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether the Facade methods will be called or not. If so, we set the user for proxy authentication and the normal method flow continues. @AroundInvoke public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception { if (invocationCtx.getTarget() instanceof DataFacadeBean) { setSessionUser(); } return invocationCtx.proceed(); } Benefits Using Oracle Proxy Authentication has a number of additional benefits apart from implementing the role model of your application: Fine-grained access control for temporary users of the account, without compromising the original password. Enabling database auditing and logging. Better identification of performance bottlenecks. References Effective Oracle Database 10g Security by Design, David Knox; TopLink Developer's Guide, Chapter 98

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like the way Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach. Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here. Step One: Generate an EF Model from your existing database The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..."   Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant).   Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.   Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck the "Save entity connection settings in Web.config" option (since we won't be using them within the application), and click Next.   Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish.   And there's your model. If you want, you can make additional changes here before going on to generate the code.   Step Two: Add the DbContext Generator Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
    Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.   As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!).   Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed.   The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there. Step Three: Modify and use your POCO entity classes Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet for each entity. Think of DbSet as a simple List, but with Entity Framework features turned on.   //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Data.Entity; public partial class Entities : DbContext { public Entities() : base("name=Entities") { } public DbSet<Album> Albums { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } public DbSet<Order> Orders { get; set; } } } It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc. using System.Data.Entity; namespace MvcMusicStore.Models { public class MusicStoreEntities : DbContext { public DbSet<Album> Albums { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Order> Orders { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } } } Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just needed to remove the header and clean up the namespace. //------------------------------------------------------------------------------ // // This code was generated from a template.
    // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Cart { // Primitive properties public int RecordId { get; set; } public string CartId { get; set; } public int AlbumId { get; set; } public int Count { get; set; } public System.DateTime DateCreated { get; set; } // Navigation properties public virtual Album Album { get; set; } } } I did a bit more customization on the Album class. Here's what was generated: //------------------------------------------------------------------------------ // // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Album { public Album() { this.Carts = new HashSet<Cart>(); this.OrderDetails = new HashSet<OrderDetail>(); } // Primitive properties public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } // Navigation properties public virtual Artist Artist { get; set; } public virtual Genre Genre { get; set; } public virtual ICollection<Cart> Carts { get; set; } public virtual ICollection<OrderDetail> OrderDetails { get; set; } } } I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using code first and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { public class Album { public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } It's my class, not Entity Framework's, so I'm free to do what I want with it.
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { [Bind(Exclude = "AlbumId")] public class Album { [ScaffoldColumn(false)] public int AlbumId { get; set; } [DisplayName("Genre")] public int GenreId { get; set; } [DisplayName("Artist")] public int ArtistId { get; set; } [Required(ErrorMessage = "An Album Title is required")] [StringLength(160)] public string Title { get; set; } [Required(ErrorMessage = "Price is required")] [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")] public decimal Price { get; set; } [DisplayName("Album Art URL")] [StringLength(1024)] public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
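    As a quick usage sketch (not part of the original walkthrough), here is how the cleaned-up context and Album class above might be queried; the Genre.Name property is assumed from the Music Store sample schema.

      using System;
      using System.Linq;
      using MvcMusicStore.Models;

      class ModelSmokeTest
      {
          static void Main()
          {
              // DbContext looks for a connection string named after the context
              // class (MusicStoreEntities) unless told otherwise.
              using (var db = new MusicStoreEntities())
              {
                  var cheapest = db.Albums
                                   .OrderBy(a => a.Price)
                                   .Take(5)
                                   .Select(a => new { a.Title, Genre = a.Genre.Name })
                                   .ToList();

                  foreach (var album in cheapest)
                      Console.WriteLine("{0} ({1})", album.Title, album.Genre);
              }
          }
      }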

    Read the article

  • Is LINQ to SQL deprecated?

    - by Mayo
    Back in late 2008 there was a lot of debate about the future of LINQ to SQL. Many suggested that Microsoft's investments in the Entity Framework in .NET 4.0 were a sign that LINQ to SQL had no future. I figured I'd wait before making my own decision since folks were not in agreement. Fast-forward 18 months: I've got vendors providing solutions that rely on LINQ to SQL, and I have personally given it a try and really enjoyed working with it. I figured it was here to stay. But I'm reading a new book (C# 4.0 How-To by Ben Watson) and in chapter 21 (LINQ), he suggests that it "has been more or less deprecated by Microsoft" and suggests using LINQ to Entity Framework. My question to you is whether or not LINQ to SQL is officially deprecated and/or if authoritative entities (Microsoft, Scott Gu, etc.) officially suggest using LINQ to Entities instead of LINQ to SQL.

    Read the article

  • Custom app_offline.htm file during publish

    - by Charlino
    When I publish my ASP.NET MVC application it generates an app_offline.htm file to take the site offline while it updates the website, and then deletes the file once the publish is successful. This is cool and I really like the idea, but I want to create my own custom app_offline.htm file that the publish action is aware of, and put it somewhere where it doesn't affect my development site - i.e. it doesn't sit in the root of my development site rendering it offline all the time. TIA, Charles EDIT: From the comments on Scott Gu's post about app_offline.htm, it seems that customization of the app_offline.htm file wasn't possible with VS 2005 - has this changed with VS 2008 and now VS 2010?

    Read the article

  • Config transformations and “TransformXml task failed” error message

    - by Troy Hunt
    I’ve just enabled config transformations on a .NET 3.5 project in VS2010 RC after watching Scott Hanselman’s video on web deployment. Unfortunately every time I go to publish I now get the following error: The "TransformXml" task failed unexpectedly. System.UriFormatException: Invalid URI: The URI is empty. at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) at System.Uri..ctor(String uriString) at Microsoft.Web.Publishing.Tasks.TransformXml.Execute() at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) If I take a brand new VS2010 web application which already has the config transformations by default I don’t have a problem so I suspect my issue is project related. Has anyone come across this before or have any ideas on a fix?

    Read the article

  • Best Non-Microsoft/.NET Development Blogs and Podcasts

    - by Mark Lubin
    I really don't want to ruffle anyone's feathers, and I really enjoy blogs like Joel, Coding Horror, Scott Hanselman, etc. However, what are some good blogs/podcasts that focus on other platforms besides Microsoft-based ones, or maybe even no particular platform at all? In particular I am interested in areas like Python, Ruby, Java, C, but any area besides Microsoft fits the bill. There may be other questions similar to this, but I think this one is unique since we might get some answers we wouldn't see otherwise; that is, everyone knows Coding Horror is great. Please try to refrain from religious debates; the content of the blog is more important than the platform they are talking about. I just want to get an idea of what is out there and worth the time.

    Read the article

  • Urlrewriting.net pages are not causing postbacks

    - by Nick
    I'm using webforms with UrlRewriting.Net to rewrite pages, e.g. http://www.example.com/stuff.aspx?c=30 becomes http://www.example.com/stuff/30-this-stuff.aspx. It works insofar as the correct content is loading; however, none of the postbacks are working (mostly buttons on the page). If I set up a breakpoint on Page_Load, I see that IsPostBack is always false. Any ideas on how to fix this? Right now I'm just on Visual Studio 2008. EDIT: I have since switched to UrlRewriter.Net, which worked after a few tweaks (see Scott Gu's article). Besides here, I have posted my original problem to the developer's forum: if I ever get an answer, I'll post it here (unless someone else posts it here first).
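    A hedged sketch of one commonly suggested workaround (not confirmed against UrlRewriting.Net specifically): make the rendered form post back to the public, rewritten URL instead of the internal .aspx path, so the POST travels through the same rewrite the GET did. On .NET 3.5 SP1 and later the form action can be set directly; earlier versions usually need the form control adapter approach described in Scott Gu's URL rewriting article. The page class name below is illustrative.

      using System;
      using System.Web.UI;

      public partial class Stuff : Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // Request.RawUrl is the URL exactly as the browser requested it,
              // i.e. the friendly /stuff/30-this-stuff.aspx form, before rewriting.
              // Pointing the form there keeps view state and postbacks intact.
              Form.Action = Request.RawUrl;
          }
      }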

    Read the article

  • Using GIT Smart HTTP via IIS

    - by Andrew Matthews
    I recently read Scott Chacon's post "Smart HTTP Transport", and I was hoping that it might have become possible via IIS (windows 7) since that post was written. I haven't been able to find anything showing how it can be done, and Apache is not an option in my IIS 7 based environment. So, I'm at a loss (git daemon was foiled for me by a combination of AVG anti-virus and AD). I want to provide LDAP authenticated read/write access for selected users. So this question seems not to be relevant. Do you know of a way to provide access to GIT via IIS?

    Read the article

  • SQL Server Gender Swap Query

    - by Sanju
    I have a table with the following columns: ID, Name, Designation, Salary, Contact, Address, Gender. Accidentally, for every male employee I have entered 'Female' as the gender, and similarly for every female employee I have entered 'Male'. For example: 0023 Scott Developer 15000 4569865 Cliff_Eddington,USA Female. In the above line the gender should be Male instead of Female; all males have had their gender recorded as Female, and likewise all females as Male. Is there any single query through which I can change all the rows whose Gender is Male to Female, and all the rows whose Gender is Female to Male?
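    A hedged sketch of one way to do this in a single statement (the table name dbo.Employee and the exact column values are assumptions based on the question): a CASE expression flips both values in one pass, so no row is swapped twice. The C# below simply sends that one UPDATE through ADO.NET; the SQL string is the part that matters.

      using System;
      using System.Data.SqlClient;

      class GenderSwap
      {
          static void Main()
          {
              // Connection string is a placeholder - point it at the real database.
              const string connectionString = @"Server=.;Database=Hr;Integrated Security=true";

              // One pass over the table: Male -> Female, Female -> Male,
              // anything else (NULL, other values) is left untouched.
              const string sql = @"
                  UPDATE dbo.Employee
                  SET Gender = CASE Gender
                                   WHEN 'Male'   THEN 'Female'
                                   WHEN 'Female' THEN 'Male'
                                   ELSE Gender
                               END;";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  connection.Open();
                  int rows = command.ExecuteNonQuery();
                  Console.WriteLine("{0} rows updated.", rows);
              }
          }
      }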

    Read the article

  • Best Practices - Data Annotations vs OnChanging in Entity Framework 4

    - by jptacek
    I was wondering what the general recommendation is for Entity Framework in terms of data validation. I am relatively new to EF, but it appears there are two main approaches to data validation. The first is to create a partial class for the model, and then perform data validations and update a rule violation collection of some sort. This is outlined at http://msdn.microsoft.com/en-us/library/cc716747.aspx The other is to use data annotations and then have the annotations perform data validation. Scott Guthrie explains this on his blog at http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx. I was wondering what the benefits are of one over the other. It seems the data annotations would be the preferred mechanism, especially as you move to RIA Services, but I want to ensure I am not missing something. Of course, nothing precludes using both of them together. Thanks John
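    A hedged sketch of how the annotation approach is usually applied to generated EF 4 classes (the Customer entity and its properties are invented for illustration): the attributes go on a "buddy" metadata class attached through MetadataTypeAttribute, so regenerating the model does not wipe them out, and MVC 2 picks them up for validation.

      using System.ComponentModel.DataAnnotations;

      // The generated half of Customer lives in the EF designer code-behind;
      // this partial class only exists to attach the metadata type.
      [MetadataType(typeof(CustomerMetadata))]
      public partial class Customer
      {
      }

      internal class CustomerMetadata
      {
          // Property names and types must mirror the real entity exactly.
          [Required(ErrorMessage = "Name is required")]
          [StringLength(50)]
          public string Name { get; set; }

          [RegularExpression(@"[^@\s]+@[^@\s]+\.[^@\s]+",
              ErrorMessage = "Enter a valid e-mail address")]
          public string Email { get; set; }
      }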

    Read the article

  • What initial modelling/design activities on Agile Projects do you do??

    - by dalton
    When developing an application using agile techniques, what, if any, initial modelling/architecture activities do you do, and how do you capture that knowledge? I'm not after a bullet list about XP, Scrum, Crystal, DSDM, etc., as I'm familiar with the methodologies. But what do you do above and beyond the guidance given by these? I find I work best by thinking the system through first, but I also like the benefits of timeboxing, story cards, pairing, and TDD. The closest thing I've seen so far is Scott Ambler's Initial Architecture Modelling, but I was wondering what alternatives are used out there?

    Read the article

  • ASP.NET MVC Validation - localisation of the error string

    - by gmang
    I followed the technique ASP.NET MVC 2: Model Validation from Scott Gu (http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx). However I am building a localised web site. How can I localize the error string? I tried replacing the following: [RegularExpression(@"\d{4}",ErrorMessage="Must be a 4 digit year")] public Nullable YearOfWork { get; set; } with the following: [RegularExpression(@"\d{4}",ErrorMessage=Resources.SharedStrings.search_error1)] public Nullable YearOfWork { get; set; } but I get a compilation error: An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type. Please help!
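    For reference, a hedged sketch of the usual way around this compiler error: attribute arguments must be compile-time constants, so the attribute is pointed at the resource class and entry by name through ErrorMessageResourceType and ErrorMessageResourceName instead of ErrorMessage. The resource names below are the ones from the question, the model class name and int? type are assumptions, and the generated resource class normally needs to be public.

      using System.ComponentModel.DataAnnotations;

      public class WorkSearchModel
      {
          // The framework resolves the message at validation time from the
          // current culture's resources, so the string no longer needs to be
          // a constant.
          [RegularExpression(@"\d{4}",
              ErrorMessageResourceType = typeof(Resources.SharedStrings),
              ErrorMessageResourceName = "search_error1")]
          public int? YearOfWork { get; set; }
      }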

    Read the article

  • Where can I find the Visual Studio 2010 RTM release notes?

    - by Lernkurve
    Question Where can I find a list of changes introduced to Visual Studio 2010 Ultimate RTM that were added since Visual Studio 2010 Ultimate RC? In fact, I'm only interested in changes related to MS Test Manager 2010 and Coded UI Tests. Where I have looked so far I have searched the Internet, looked for a readme.txt in the installation folder, looked into the Visual Studio help (F1) and browsed the "What's new in Visual Studio 2010" section on MSDN. No luck. Found Scott Guthrie's blog post Visual Studio 2010 and .NET 4 Released, but that's not exactly what I am looking for. It's not a changelog since VS2010RC. I suppose there is no such file because they made too many changes to document and hand out to end users. But if there was, I'd be glad if someone could point me to it. Thanks.

    Read the article

  • WCF Self-hosted service, client clean-up on service stop

    - by Sentax
    Hi everyone, I'm curious to know how I would go about setting up my service to stop cleanly on the server the service will be installed on. For example, when I have many clients connecting and doing operations every minute and I want to shut down the service for maintenance, how can I do this in the "OnStop" event of the service so that the main service host denies any new client connections and lets the current connections finish before it actually shuts down its services to the clients? This will ensure data isn't corrupted on the server as the service shuts down. Right now I'm not set up as a singleton because I need scalability in the service. So I would have to somehow get my service host to do this independently of knowing how many instances are created of the service class. I hope I've explained myself well enough; if not, let me know and I'll try to explain better. Thanks, Scott
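    A hedged sketch of the OnStop idea (the service, contract, binding and timeout values are illustrative): ServiceHost.Close() already does most of what is described - it stops accepting new connections and waits, up to the supplied timeout, for in-flight operations to complete - while Abort() is the hard drop used only if the graceful close fails.

      using System;
      using System.ServiceModel;
      using System.ServiceProcess;

      [ServiceContract]
      public interface ICalculator
      {
          [OperationContract]
          int Add(int a, int b);
      }

      public class CalculatorService : ICalculator
      {
          public int Add(int a, int b) { return a + b; }
      }

      public class CalculatorWindowsService : ServiceBase
      {
          private ServiceHost _host;

          protected override void OnStart(string[] args)
          {
              _host = new ServiceHost(typeof(CalculatorService),
                                      new Uri("net.tcp://localhost:8731/calc"));
              _host.AddServiceEndpoint(typeof(ICalculator), new NetTcpBinding(), "");
              _host.Open();
          }

          protected override void OnStop()
          {
              if (_host == null) return;
              try
              {
                  RequestAdditionalTime(120000);            // tell the SCM we need extra time
                  _host.Close(TimeSpan.FromMinutes(2));     // graceful: drain existing calls
              }
              catch (Exception)
              {
                  _host.Abort();                            // give up and drop what is left
              }
              finally
              {
                  _host = null;
              }
          }
      }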

    Read the article

  • Authentication on odata service

    - by Toad
    I want to add some authentication to my OData service. Depending on the calling user I want to filter rows and/or remove columns. I read in Scott Hanselman's fine blog post on OData (http://www.hanselman.com/blog/CreatingAnODataAPIForStackOverflowIncludingXMLAndJSONIn30Minutes.aspx) that it is possible to intercept the incoming queries. If this works I could add some extra filtering. How would this intercepting and altering of queries work exactly? I cannot find any examples of where and how to do this. (I'm using Entity Framework and WCF Data Services, just like Scott's example blog.)
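    The interception mechanism being referred to is, most likely, WCF Data Services query interceptors: a method marked [QueryInterceptor("SetName")] returns a predicate that the runtime composes into every query against that entity set. A hedged sketch follows - the entity set, the OwnerUserName property and the use of HttpContext for the caller's identity are all assumptions - and note that interceptors filter rows; hiding individual columns generally means reshaping or projecting the exposed model instead.

      using System;
      using System.Data.Services;
      using System.Data.Services.Common;
      using System.Linq.Expressions;
      using System.Web;

      // StoreEntities and Order are assumed to come from the Entity Framework
      // model the service already exposes.
      public class StoreDataService : DataService<StoreEntities>
      {
          public static void InitializeService(DataServiceConfiguration config)
          {
              config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
              config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
          }

          [QueryInterceptor("Orders")]
          public Expression<Func<Order, bool>> OnQueryOrders()
          {
              // Runs for every request against /Orders; only rows matching the
              // returned predicate are ever sent to the client.
              string user = HttpContext.Current.User.Identity.Name;
              return o => o.OwnerUserName == user;
          }
      }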

    Read the article

  • Simple Linq Dynamic Query question

    - by Dr. Zim
    In LINQ Dynamic Query, Scott Guthrie shows an example LINQ query: var query = db.Customers. Where("City == @0 and Orders.Count >= @1", "London", 10). OrderBy("CompanyName"). Select("new( CompanyName as Name, Phone)"); Notice the projection new( CompanyName as Name, Phone). If I have a class like this: public class CompanyContact { public string Name {get;set;} public string Phone {get;set;} } How could I essentially "cast" his result using the CompanyContact data type without doing a foreach on each record and dumping it into a different data structure? To my knowledge the only .Select available is the Dynamic Query version, which only takes a string and a parameter list.
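    One approach, sketched below under the assumption that the Dynamic LINQ sample's generic overloads are in use and that the data context is ScottGu's Northwind example: keep the string-based Where and OrderBy, which still return IQueryable<Customer>, and do the final projection with the ordinary strongly typed Select, so the query materializes straight into CompanyContact objects with no per-row copying afterwards.

      using System.Collections.Generic;
      using System.Linq;
      using System.Linq.Dynamic;   // Dynamic.cs from the VS 2008 LINQ samples

      public static class CompanyContactQueries
      {
          public static List<CompanyContact> LondonContacts(NorthwindDataContext db)
          {
              return db.Customers
                       .Where("City == @0 and Orders.Count >= @1", "London", 10)
                       .OrderBy("CompanyName")
                       .Select(c => new CompanyContact
                       {
                           Name = c.CompanyName,   // mapping done in the projection,
                           Phone = c.Phone         // so no anonymous type is involved
                       })
                       .ToList();
          }
      }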

    Read the article


  • Lock Free Queue -- Single Producer, Multiple Consumers

    - by Shirish
    Hello, I am looking for a method to implement a lock-free queue data structure that supports a single producer and multiple consumers. I have looked at the classic method by Maged Michael and Michael Scott (1996), but their version uses linked lists. I would like an implementation that makes use of a bounded circular buffer. Something that uses atomic variables? On a side note, I am not sure why these classic methods are designed for linked lists that require a lot of dynamic memory management. In a multi-threaded program, all memory management routines are serialized. Aren't we defeating the benefits of lock-free methods by using them in conjunction with dynamic data structures? I am trying to code this in C/C++ using the pthread library on an Intel 64-bit architecture. Thank you, Shirish

    Read the article

  • Using jQuery for client side validation in MVC2 RTM

    - by tigermain
    Scott Gu's tutorial on Model validation gets us all set up with the MS client side validation using the following scripts: <script src="../../Scripts/jquery.validate.min.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script> However I've seen various posts allowing us to utilise jQuery instead with the following code: <script src="https://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js" type="text/javascript"></script> <script src="https://ajax.microsoft.com/ajax/jQuery.Validate/1.6/jQuery.Validate.min.js" type="text/javascript"></script> <script src="<%= Url.Content("~/scripts/MicrosoftMvcJQueryValidation.js") %>" type="text/javascript"></script> However, MicrosoftMvcJQueryValidation.js does not ship with the solution, and from what I read it should be part of the Futures pack, which is no longer available on CodePlex. I managed to find a version alongside jQuery 1.3.2 but it does not work. What is the solution going forward?

    Read the article

  • No_data_found exception is propagating to outer block also?

    - by Vineet
    In my code I am entering a salary which is not available in the employees table, and then inserting a duplicate employee_id into the primary key column of the employees table in the exception block where I am handling the no_data_found exception, but I do not know why the 'no data found' exception appears at the end as well. Output coming: Enter some other sal ORA-01400: cannot insert NULL into ("SCOTT"."EMPLOYEES"."LAST_NAME") ORA-01403: no data found --This should not come according to logic This is the code: DECLARE v_sal number:=&p_sal; v_num number; BEGIN BEGIN select salary INTO v_num from employees where salary=v_sal; EXCEPTION WHEN no_data_found THEN DBMS_OUTPUT.PUT_LINE('Enter some other sal'); INSERT INTO employees (employee_id) values (100); END; EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE(sqlerrm); END;

    Read the article

  • Trying to drop all tables from my schema with no rows?

    - by Vineet
    I am trying to drop all tables in my schema with no rows, but when I execute this code I get an error. This is the code: create or replace procedure tester IS v_count NUMBER; CURSOR emp_cur IS select table_name from user_tables; BEGIN FOR emp_rec_cur IN emp_cur LOOP EXECUTE IMMEDIATE 'select count(*) from '|| emp_rec_cur.table_name INTO v_count ; IF v_count =0 THEN EXECUTE IMMEDIATE 'DROP TABLE '|| emp_rec_cur.table_name; END IF; END LOOP; END tester; ERROR at line 1: ORA-29913: error in executing ODCIEXTTABLEOPEN callout ORA-29400: data cartridge error KUP-00554: error encountered while parsing access parameters KUP-01005: syntax error: found "identifier": expecting one of: "badfile, byteordermark, characterset, data, delimited, discardfile, exit, fields, fixed, load, logfile, nodiscardfile, nobadfile, nologfile, date_cache, processing, readsize, string, skip, variable" KUP-01008: the bad identifier was: DELIMETED KUP-01007: at line 1 column 9 ORA-06512: at "SYS.ORACLE_LOADER", line 14 ORA-06512: at line 1 ORA-06512: at "SCOTT.TESTER", line 9 ORA-06512: at line 1

    Read the article

  • Domain Driven Design And The Entity Framework

    - by Hossein
    Hi, I'm new to DDD and I want to use Entity Framework v4.0 (shipped with .NET 4.0) in my new project. Since I have little time to learn DDD and Entity Framework, which books are good for me to read first? I'm going to first read Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans and next Pro Entity Framework 4.0 (Apress, Scott Klein)... The problem here is that Eric Evans's book is too abstract; I want to know if Pro Entity Framework 4.0 is complete enough that I can just skip the first book! In the end, recommend me some good books on DDD and Entity Framework. Thanks

    Read the article

  • IE7 li ul bug on dropdown menu

    - by Berns
    hoping one of you guys can help me please. I have a basic list menu with two dropdowns. This all works fine on all browsers except IE6 and IE7. Please take a look at my markup. <nav> <ul id="topNav" ><li id="topNavFirst"><a href="../about/about.php" id="aboutNav">About Us</a></li ><li id="topNavSecond"><a href="../people/our-people.php" id="peopleNav">Our People</a ><ul id="subList1"><li><a href="../people/mike-hadfield.php">Mike Hadfield</a></li ><li><a href="../people/karen-sampson.php">Karen Sampson</a></li ><li><a href="../people/milhana-farook.php">Milhana Farook</a></li ><li><a href="../people/kim-crook.php">Kim Crook</a></li ><li><a href="../people/amanda-lynch.php">Amanda Lynch</a></li ><li><a href="../people/gideon-scott.php">Gideon Scott</a></li ><li><a href="../people/paul-fuller.php">Paul Fuller</a></li ><li><a href="../people/peter-chaplain.php">Peter Chaplain</a></li ><li><a href="../people/laura-hutley.php">Laura Hutley</a></li ></ul ></li ><li id="topNavThird"><a href="../services/our-services.php" id="servicesNav">Our Services</a ><ul id="subList2"><li><a href="../services/company-and-commercial.php">Company &amp; Commercial</a></li ><li><a href="../services/employment.php">Employment</a></li ><li><a href="../services/civil-litigation.php">Civil Litigation</a></li ><li><a href="../services/debt-recovery.php">Debt Recovery</a></li ><li><a href="../services/conveyancing.php">Conveyancing</a></li ><li><a href="../services/commercial-property.php">Commerical Property</a></li ><li><a href="../services/wills-and-probate.php">Wills &amp; Probate</a></li ><li><a href="../services/family.php">Matrimonial &amp; Family</a></li ></ul ></li ><li><a href="../news/news.php" id="newsNav">News</a></li ><li><a href="../careers/careers.php" id="careersNav">Careers</a></li ><li><a href="../contact/contact.php" id="contactNav">Contact</a></li ></ul><!-- /topNav --> </nav>? and the css a {text-decoration:none;} #topNav { float:right; height:30px; margin:0; font-size:12px; } #topNav li { display:inline; float:left; list-style:none; color:#666; border-left: 1px solid #666; padding: 0 3px 0 3px; position:relative; } #topNav ul a { white-space:nowrap; } #topNav li a:hover { border-bottom:2px solid #369; } #topNavSecond a:hover { border-bottom:2px solid transparent !important; } #topNavFirst { border-left: 1px solid transparent !important; } /*****OUR-PEOPLE DROPDOWN*****/ #topNav ul{ background:#fff; border:1px solid #666; border-top:0px solid transparent; border-bottom:2px solid #666; list-style:none; position:absolute; left:-9999px; width:100px; text-align:left; padding:5px 0 5px 0px; margin:0 0 0 -4px; z-index:10; -webkit-box-shadow: 1px 1px 1px #666; -moz-box-shadow: 1px 1px 1px #666; box-shadow: 1px 1px 1px #666; vertical-align: bottom; } #topNav ul li{ display:block; border-left:0px; margin-bottom: 0px; padding:0; vertical-align: bottom; } #topNav ul a{ padding:0 0 0 5px; } #topNav li:hover ul{ left:auto; } #topNav li:hover a { color:#369; } #topNav li:hover ul a{ text-decoration:none; color:#666; } #topNav li:hover ul li a:hover{ color:#fff;; width:100%; border-bottom:0px solid transparent !important; } #topNav ul li:hover { background:#369; display: block; } #topNav ul li a { display: block; padding:0 0 0 4px; } /************/ /*****OUR-SERVICES DROPDOWN*****/ #topNavThird a:hover { border-bottom:2px solid transparent !important; } #topNavThird ul{ /*background:#fff url(images/service-ul-bg.png) no-repeat;*/ width:135px !important; /*margin-left:120px !important;*/ }? 
here it is working perfectly http://jsfiddle.net/BcWd9/ here is a screen shot of how it looks in IE7. hadfield.andymcnallydesign.co.uk/images/ie7-error.jpg as you can see the ul is appearing to the right of the li and not the left and it is overlaying the top list. I've tried removing white space, but no luck. Any ideas? If one of you can help it would be much appreciated.

    Read the article

  • Throwing Exception in CTOR and Smart Pointers

    - by David Relihan
    Is it OK to have the following code in my constructor to load an XML document into a member variable - throwing to the caller if there are any problems: MSXML2::IXMLDOMDocumentPtr m_docPtr; //member Configuration() { try { HRESULT hr = m_docPtr.CreateInstance(__uuidof(MSXML2::DOMDocument40)); if ( SUCCEEDED(hr)) { m_docPtr->loadXML(CreateXML()); } else { //throw exception to caller } } catch(...) { //throw exception to caller } } Based on Scott Meyers' RAII implementations in More Effective C++ I believe I am alright in just allowing exceptions to be thrown from the CTOR as I am using a smart pointer (IXMLDOMDocumentPtr). Let me know what you think....

    Read the article
