Search Results

Search found 486 results on 20 pages for 'lee atkinson'.

Page 2/20 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem like overkill. This is where SQL Connect comes in: an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution, then choosing your existing development database, and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and the Visual Studio project in sync is as easy as clicking a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion of SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website.

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder, with filenames prefixed numerically to ensure they are run in the intended order. Occasionally there is an accompanying document or a batch file that ensures the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and make a change directly to the development database, rather than save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked with working out what changes have been made, and hand-crafting the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the form of migration scripts. In future posts I will describe the command-line syntax needed to leverage this feature as part of an automated build process such as continuous integration.
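    To make the column-split case concrete, a hand-written migration script might look like the sketch below. This is my own illustration rather than an excerpt from the post, and the table and column names are hypothetical; the point is that the UPDATE carries the developer intent that a schema diff alone cannot infer.

        -- Hypothetical migration script: split Customers.FullName into FirstName and LastName
        ALTER TABLE Customers ADD FirstName NVARCHAR(50) NULL, LastName NVARCHAR(50) NULL;
        GO
        -- Copy the existing data; this is the step a pure schema comparison cannot generate
        UPDATE Customers
        SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
            LastName  = LTRIM(SUBSTRING(FullName, CHARINDEX(' ', FullName + ' ') + 1, LEN(FullName)));
        GO
        ALTER TABLE Customers DROP COLUMN FullName;
        GO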

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. In a perfect world it would be, and this is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file (.vmdf and .vldf) that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirement, other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
         MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
         NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE mydatabase WITH RECOVERY
    Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the file extensions and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, my production database is 8 GB, yet the .vldf and .vmdf files created by the virtual restore represent the only additional storage used for the new database. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst-case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were identified at this stage, the testers would simply not 'waste their time' touching the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following: 1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version. 2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch. 3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version. I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.

    Read the article

  • Unable to specify abstract classes in TPH hierarchy in Entity Framework 4

    - by Lee Atkinson
    Hi, I have a TPH hierarchy along the lines of: A-B-C-D A-B-C-E A-F-G-H A-F-G-I I have A as abstract, and all the other classes are concrete with a single discriminator column. This works fine, but I want C and G to be abstract also. If I do that, and remove their discriminators from the mapping, I get error 3034 'Two entities with different keys are mapped to the same row'. I cannot see how this statement can be correct, so I assume it's a bug of some kind. Is it possible to do the above? Lee

    Read the article

  • What I Expect From Myself This Year

    - by Lee Brandt
    I am making it a point not to call them resolutions, because the word has become an institution and is beginning to have no meaning. That's why I end up not keeping my resolutions, I think. So in the spirit of holding myself to my own commitments, I will make a plan and some realistic goals. 1.) Lose weight. Everyone has this on their list, but I am going to be conservative and specific. I currently weigh 393 lbs. (yeah, I know). So I plan to lose 10 lbs per month; that's 1 lb. every three days, which shouldn't be difficult if I stick to my diet and exercise plan. - How do I do this?     - Diet: vegetarian. Since I already know I have high blood pressure and borderline high cholesterol, a meat-free diet is in order. I was vegan for a little over 2 years in 2006-2008, so I think I can handle vegetarian.     - Exercise: at least 3 times a week (preferably every day) for 30 minutes. It has to be something that gets my heart rate up or makes my muscles burn. I can walk for cardio to start, plus mild calisthenics (girly push-ups, crunches, etc.).         - Move: I spend all my time behind the computer. I have recently started to use a slight variation of the Pomodoro Technique (my Pomodoros are 50 minutes instead of 25). During my 10 minutes every hour to answer emails, chats, etc., I will take a few minutes to stretch. 2.) Get my wife pregnant. We've been talking about it for years. Now that she is done with graduate school and I have a great job, now's the time. We'll most likely be the oldest parents in the PTA, but I don't care. 3.) Blog more. Another favorite among bloggers, but I do have about six drafts for blog posts started. The topics are there; all I need to do is flesh out the posts. This can be the first hour of any computer time I have after work. As soon as I am done exercising, I'll shower and post. 4.) Speak less. Most people want to speak more. I want to concentrate on the places that I enjoy and that can really use the speakers (like local code camps), rather than trying to be some national speaker. I love speaking at conferences, but I need to spend some more time at home if we're going to get pregnant. 5.) Read more. I got a Kindle for Christmas and I am loving it so far. I have almost finished Treasure Island, and am getting ready to pick my next book. I will probably read a lot of classics, for 2 reasons: (1) they teach deep lessons and (2) most are free for the Kindle. 6.) Find my religion. I was raised Southern Baptist, but I want to find my own way. I've been wanting to go to the local Unitarian Church, so I will make a point to go before the end of March. I also want to add a few religious books to my reading list. My boss bought me a copy of Lee Strobel's The Case for Christ: A Journalist's Personal Investigation of the Evidence for Jesus, and I have a copy of Bruce Feiler's Abraham: A Journey to the Heart of Three Faiths. I will start there. Seems like a lot now that I spell it out like this. But these are only starters. I am forty years old. I cannot keep living like I am twenty anymore. So here we go, 2011.

    Read the article

  • What's slowing for loops/assignment vs. C?

    - by Lee
    I have a collection of PHP scripts that are extremely CPU intensive, juggling millions of calculations across hundreds of simultaneous users. I'm trying to find a way to speed up the internals of PHP variable assignment and looping compared with C. Although PHP is obviously loosely typed, is there any way/extension to specifically assign a type (assign, not cast, which seems even more expensive) in a C-style fashion? Here's what I mean. This is some dummy code in C: #include <stdio.h> int main() { unsigned long add=0; for(unsigned long x=0;x<100000000;x++) { add = x*59328409238; } printf("x is %ld\n",add); } Pretty self-explanatory -- it loops 100 million times, multiplies each iteration by an arbitrary number of some 59 billion, assigns it to a variable and prints it out. On my Macbook, compiling and running it produced: lees-macbook-pro:Desktop lee$ time ./test2 x is 5932840864471590762 real 0m0.266s user 0m0.253s sys 0m0.002s Pretty darn fast! A similar script in PHP 5.3 CLI... <?php for($i=0;$i<100000000;$i++){ $a=$i*59328409238; } echo $a."\n"; ?> ... produced: lees-macbook-pro:Desktop lee$ time /Applications/XAMPP/xamppfiles/bin/php test3.php 5.93284086447E+18 real 0m22.837s user 0m22.110s sys 0m0.078s Over 22 seconds vs 0.2! I realize PHP is doing a heck of a lot more behind the scenes than this simple C program -- but is there any way to make the PHP internals behave more 'natively' on primitive types and loops?

    Read the article

  • Why would cat6 connectors not work with cat5e patch cable?

    - by Lee Tickett
    I had a naff batch of cat5 connectors (the latching mechanism didn't work) so decided to order in some cat6 connectors in preparation for the inevitable upgrade. My existing reel of cable for making patch cables is cat5e UTP stranded. I made up a few cables and tested them - none of them worked. I recrimped and still nothing. When I check them with a multimeter, not all pins are connected. This reel has always worked with the previous cat5 connectors, so I tested the cat6 connectors on a reel of solid cat5e cable and they work fine. Any ideas what I might be doing wrong, or what might be at fault (cable or connectors), and how I can diagnose it? Thanks Lee

    Read the article

  • Named pipe blocking with user nobody

    - by dnagirl
    I have 2 short scripts. The first, an awk script, processes a large file and prints to a named pipe 'myfifo.dat'. The second, a Perl script, runs a LOAD DATA LOCAL INFILE 'myfifo.dat'... command. Both of these scripts work when run locally like so: lee.awk big.file & lee.pl However, when I call these scripts from a PHP webpage, the named pipe blocks: $awk="/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} &"; $sql="/path/to/lee.pl"; if(!exec($awk,$return,$err)) throw new ZException(print_r($err,true)); //blocks here if(!exec($sql,$return,$err)) throw new ZException(print_r($err,true)); If I modify the awk and Perl scripts so that they write and read to a normal file, everything works fine from PHP. The permissions on the fifo and the normal file are 666 (for testing purposes). These operations run much more quickly through a named pipe, so I'd prefer to use one. Any ideas how to unblock it? ps. In case you're wondering why I'm going to all this aggravation, see this SO question.

    Read the article

  • How do I insert a word into a string in Perl?

    - by Nano HE
    #!C:\Perl\bin\perl.exe use strict; use warnings; use Data::Dumper; my $fh = \*DATA; while(my $line = <$fh>) { $line =~ s/ ^/male /x ; print $line ; } __DATA__ 1 0104 Mike Lee 2:01:48 This outputs: male 1 0104 Mike Lee 2:01:48 Then I tried to insert 'male' after the race number (0104), so I replaced the substitution with the following, but it failed: $line =~ s/ ^\d+\s+\d+\s+ /male /x ; Actually, the output I want is: 1 0104 male Mike Lee 2:01:48 Thank you.

    Read the article

  • MySQL: get data from 2 tables (need help)

    - by quangtruong1985
    Assume that I have 2 tables: members and orders (MySQL).
    Members:
    id | name
    1  | Lee
    2  | brad
    Orders:
    id | member_id | status (1: paid, 2: unpaid) | total
    1  | 1         | 1                           | 1000000
    2  | 1         | 1                           | 1500000
    3  | 1         | 2                           | 1300000
    4  | 2         | 1                           | 3000000
    5  | 2         | 2                           | 3500000
    6  | 2         | 2                           | 3300000
    I have a SQL query: SELECT m.name, COUNT(o.id) as number_of_order, SUM(o.total) as total2 FROM orders o LEFT JOIN members m ON o.member_id=m.id GROUP BY o.member_id which gives me this:
    name | number_of_order | total2
    Lee  | 3               | 3800000
    brad | 3               | 9800000
    All that I want is something like this:
    name | number_of_order | total2  | Paid Unpaid | Paid    Unpaid
    Lee  | 3               | 3800000 | 2    1      | 2500000 1300000
    brad | 3               | 9800000 | 1    2      | 3000000 6800000
    How can I make a query that gives me that result? Thanks for your time!
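    A common way to get the extra paid/unpaid columns is conditional aggregation. This is my own sketch rather than anything from the original thread, using the table and column names above:

        SELECT m.name,
               COUNT(o.id) AS number_of_order,
               SUM(o.total) AS total2,
               SUM(o.status = 1) AS paid_orders,
               SUM(o.status = 2) AS unpaid_orders,
               SUM(CASE WHEN o.status = 1 THEN o.total ELSE 0 END) AS paid_total,
               SUM(CASE WHEN o.status = 2 THEN o.total ELSE 0 END) AS unpaid_total
        FROM orders o
        LEFT JOIN members m ON o.member_id = m.id
        GROUP BY o.member_id, m.name;

    In MySQL a boolean expression evaluates to 1 or 0, so SUM(o.status = 1) counts the paid orders; the CASE expressions split the totals the same way.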

    Read the article

  • Windows 7 - can't uninstall application as admin

    - by James Atkinson
    I'm trying to uninstall Microsoft Virtual PC 2007 from my Windows 7 machine. Logged in as an administrator, when I attempt to uninstall I get a message box stating "The system administrator has set policies to prevent this installation." Excuse me? No I haven't. Immediately afterwards, another message box displays: "You do not have sufficient access to uninstall Microsoft Virtual PC 2007. Please contact your system administrator." I didn't do anything unusual when installing or setting up the software. Just normal home computer use. Any ideas on how to remove this software?

    Read the article

  • Windows Upgrade vs Full Install

    - by James Atkinson
    I'm in the process of purchasing a netbook for use while traveling. The included OS is XP; however, I would like to upgrade(?) to Windows 7. My question: does a Windows upgrade have the same physical footprint and performance as a full install? Does an upgrade leave behind unused files/resources that were originally included in XP? If so, are there ways to reduce this? I'm trying to reduce as much OS bloat as possible. Please let me know if my question is unclear. Thanks. Related to http://superuser.com/questions/60646/is-a-clean-install-really-better-than-an-upgrade; however, that question doesn't address the "leftovers" issue.

    Read the article

  • public key always asking for password and passphrase

    - by Andrew Atkinson
    I am trying to SSH from a NAS to a webserver using a public key. The NAS user is 'root' and the webserver user is 'backup'. I have all permissions set correctly, and when I debug the SSH connection I get (the last little bit of the debug): debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering DSA public key: /root/.ssh/id_dsa.pub debug1: Server accepts key: pkalg ssh-dss blen 433 debug1: key_parse_private_pem: PEM_read_PrivateKey failed debug1: read PEM private key done: type <unknown> Enter passphrase for key '/root/.ssh/id_dsa.pub': I am using the command: ssh -v -i /root/.ssh/id_dsa.pub [email protected] The fact that it is asking for a passphrase is surely a good sign, but I do not want it to prompt for this or for a password (which comes afterwards if I press 'return' at the passphrase prompt).

    Read the article

  • McAfee Virus Scan and Oracle RAC

    - by Lee Gathercole
    Hi, we're experiencing a strange problem with Oracle RAC and McAfee anti-virus. As part of the installation of the Oracle RAC we disabled anti-virus as directed. We have had our RAC running fine, but when we came to re-enable the AV and reboot, we got the BSOD: Abnormal Program Termination (BugCheck, STOP: 0x00000035 (0x8E984678, 0x00000000, 0x00000000, 0x00000000 NO_MORE_IRP_STACK_LOCATIONS Following the standard process of raising this problem with Microsoft, they identified the problem and also a fix. Microsoft talk about too many file filter drivers being present and pushing the DFS upper limit beyond the default size. Upping this value, as per MSDN, has no impact. We're able to recover from this BSOD by disabling AV. We don't have the problem if we run the AV service manually whilst the system is up. However, if we make the service automatic, the machine fails to boot. Tech details: 2-node Oracle 10g cluster; 2 x Windows 2003 SP2, 16GB RAM, quad-core 3GHz processor; SAN-attached storage; McAfee VirusScan Enterprise 8.5.0i, scan engine 5300.2777, DAT version 5536.0000. Thanks Lee

    Read the article

  • Unit Test to Run .sql script to SQL Create Database

    - by Lee Englestone
    Can anyone point me in the direction of how I could get an NUnit test to run a .sql file to create/set up a database? I know about the TestFixtureSetUp and TestFixtureTearDown attributes/methods in NUnit, so I KNOW how to call methods before and after all of the unit tests, or before and after each one. I'm just unsure of how to load and execute the contents of a .sql file against a SQL Server 2005 database programmatically. Any examples? This is part of our TDD/CI: we want to create the database before, and tear it down after, executing the unit tests. Cheers, -- Lee
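    For illustration only - this is not from the original question - the setup script might look like the sketch below, with database and table names that are purely hypothetical. The fixture would read the file, split it on the GO batch separators (GO is understood by client tools such as sqlcmd and SSMS, not by the database engine itself), and execute each batch in turn, with a matching teardown script that drops the database afterwards.

        -- Hypothetical Setup.sql executed from the TestFixtureSetUp method
        IF DB_ID('UnitTestDb') IS NOT NULL
            DROP DATABASE UnitTestDb;
        GO
        CREATE DATABASE UnitTestDb;
        GO
        USE UnitTestDb;
        GO
        CREATE TABLE dbo.Customers
        (
            Id   INT IDENTITY(1,1) PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL
        );
        GO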

    Read the article

  • VB.net 'cross-referencing' tables.

    - by Lee
    Hello, I have a DataGridView that is being filled with data from a table. Inside this table is a column called 'group' that holds the ID of an individual group in another table. What I would like to do is, when the DataGridView is filled, instead of showing the ID contained in 'group', have it display the name of the group. Is there some type of VB.net 'magic' that can do this, or do I need to cross-reference the data myself? Here is a breakdown of what the 2 tables look like: table1: id, group (this holds the value of the id column in table2), weight, last_update; table2: id, description (this is what I would like to be displayed in the DGV). Take care, Lee. BTW - I am using Visual Studio Express.
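    As a sketch of the cross-referencing query (my own illustration, not from the original post), the grid could be filled from a join so the group description is selected in place of its id; note that 'group' usually needs escaping because it is a reserved word:

        -- Hypothetical query to bind the DataGridView to, replacing the group id with its description
        SELECT t1.id,
               t2.description AS group_name,
               t1.weight,
               t1.last_update
        FROM table1 t1
        LEFT JOIN table2 t2 ON t2.id = t1.[group];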

    Read the article

  • Continuous Integration with Oracle Products

    - by Lee Gathercole
    Hi, I'm currently working on a data warehouse project using an Oracle database, Oracle Data Integrator, Oracle Warehouse Builder and some Jython thrown in for good measure, all of which is held within TFS. My background is .NET, and prior to this project I was seeing a lot of promise in CI. I'm not suggesting that the testing element of CI is feasible in this instance, but I would like to implement a stable deployment strategy. What I'm trying to understand is whether or not I can build some NAnt scripts that will allow me to deploy ODI\OWB\Oracle DB code to any given environment at any point. Has anyone tried this before? Are there more appropriate tools out there that lend themselves better to this sort of toolset? Am I just a crazy horse to be even contemplating this? Any views would be greatly appreciated. Thanks Lee

    Read the article

  • Where Are the Release Versions of ASP.Net AJAX 4.0 Templating Files?

    - by Lee Richardson
    I'm trying to get the production version of ASP.Net AJAX 4.0 Templating working and can't find the JavaScript files. With the beta version I needed to reference MicrosoftAjaxTemplates.js, MicrosoftAjaxAdoNet.js, and MicrosoftAjaxDataContext.js. I can get everything to work with the beta CDN versions (e.g. http://ajax.microsoft.com/ajax/beta/0911/MicrosoftAjaxTemplates.js). But for the life of me I can't find (1) the release CDN versions of these files or (2) where to download the whole release ASP.Net AJAX 4.0 JavaScript package. The files certainly are not listed on the ASP.Net AJAX 4.0 CDN at http://www.asp.net/ajaxlibrary/CDNAjax4.ashx. Maybe the files have been renamed? Theoretically this should be a ridiculously easy question and I'm a little embarrassed to even ask it on Stack Overflow, but I've had no luck finding an answer on my own. Any help would be much appreciated. Thanks, - Lee

    Read the article

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example: class LinqProvider<T> //where T is a L2S entity class { DataContext context; public virtual IEnumerable<T> GetAll() { return context.GetTable<T>(); } public virtual T Single(Func<T, bool> condition) { return context.GetTable<T>().SingleOrDefault(condition); } } From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows then filtering out all but one? Is it possible in this context? Any help appreciated, Lee

    Read the article

  • Is Windows 7 Home Premium edition sufficient for software development?

    - by Lee Englestone
    Is Windows 7 Home Premium sufficient for software development? Development would be in Visual Studio 2010. I'm on a budget, so would rather purchase 'Home Premium' than 'Professional' or 'Ultimate'. The Microsoft site says there is next to nothing, functionality-wise, between them that developers would miss. Can anyone confirm or deny this? BTW, does it come with a version of IIS? I realize that this is not a technical question, but it is important to me and I'm sure other developers wonder the same thing. Cheers, -- Lee

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >