Search Results

Search found 36632 results on 1466 pages for 'sql tool'.


  • Only a few places left for the SQL Social evening on 16th March

    - by simonsabin
    We've got over 50 people registered for the SQLSocial event on 16th March with Itzik Ben-Gan, Greg Low, Davide Mauri and Bill Vaughn. I need to finalise numbers early next week, so if you want to come along please register asap, otherwise I can't promise that we'll have space for you. To register, use the form here: http://sqlsocial.com/events.aspx. I look forward to hearing from you.

    Read the article

  • The PASS Elections Review Committee Needs Your Feedback

    - by andyleonard
    Introduction: PASS has had an ERC (Elections Review Committee) forum running for a few months now. There's been surprisingly little feedback, though lots of reads. Here's what it looks like tonight: That's 1,662 views and 37 replies by my count. Not very many replies... Jump In! Now's the time to let PASS know what you think about the current elections process. The ERC members are good people who are trying to make things better. If you have something to add - as simple as "love it!" or "hate it!"...(read more)

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week.  What each project had in common was the need to validate at least one SQL connection.  Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files.  Create a new file and name it anything as long as it has the .udl extension. Open the file and choose a provider, then click Next >> or navigate to the Connection tab to provide connection information.  Once you provide server and login credentials, the database list will populate.  At this point, you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy.  The All tab is beneficial if you want to build a valid connection string to include in your own applications.  If you save the file and then open it in Notepad, you'll find that same connection string: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp I hope this tip helps save you some time.  How do you test connections if you don't have SSMS installed?

    Read the article

  • Is there such a tool for testing?

    - by kjack
    Say one has a structural codebase where lots of the code is in GUI control events and has no tests. Such code, to my knowledge, is not suitable for unit testing. Is there a tool that can test each routine automatically, replacing references to code elements external to the routine (be they functions, variables or GUI controls) with appropriate mocks(?), and record the results in a database for later comparison after code changes? So the testing program would have the duty of writing, running and reporting tests with minimal intervention?

    Read the article

  • Learn Who Started that Trace with the Default Trace

    - by Jonathan Kehayias
    This is not Extended Event related, but it came from a question on Twitter about how to tell who created a server-side trace, and from what machine, and there is no way to explain this in 140 characters, so here's a blog post.  This information is tracked in the Default Trace and can be found by querying for EventClass 175, which is the trace_event_id of the Audit Server Alter Trace event in sys.trace_events: select trace_event_id, name from sys.trace_events where name like '%trace%' To query...(read more)
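    The post is truncated here, but a minimal sketch of the pattern it describes (reading the default trace file with fn_trace_gettable and filtering on EventClass 175) might look like the query below; the column list is illustrative rather than taken from the original article.

        -- Locate the default trace file, then look for Audit Server Alter Trace events (EventClass 175).
        -- LoginName, HostName and ApplicationName show who issued the trace change and from where.
        DECLARE @path nvarchar(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT StartTime, LoginName, HostName, ApplicationName, TextData
        FROM   fn_trace_gettable(@path, DEFAULT)
        WHERE  EventClass = 175
        ORDER BY StartTime;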

    Read the article

  • Survey: How much data do you work with?

    - by James Luetkehoelter
    Andy isn't the only one that can ask a survey question. This is something I'm really curious about, because many of the answers or recommendations or rants in blogs are not universally applicable to every database - small databases must sometimes be treated differently, and uber databases are just a pain (and fun at the same time). So, how would you classify most of the databases you work with: 1) Up to 50GB 2) 50-500GB 3) 500GB - 2TB 4) DEAR GOD THAT'S TOO MUCH INFORMATION!...(read more)

    Read the article

  • Subsonic 3, SimpleRepository, SQL Server: How to find rows with a null field?

    - by desautelsj
    How can I use Subsonic's Find<T> method to search for rows with a field containing the "null" value? For the sake of the discussion, let's assume I have a C# class called "Visit" which contains a nullable DateTime field called "SynchronizedOn", and also let's assume that the Subsonic migration has created the corresponding "Visits" table and the "SynchronizedOn" field. If I were to write the SQL query myself, I would write something like: SELECT * FROM Visits WHERE SynchronizedOn IS NULL When I use the following code: var visits = myRepository.Find<Visit>(x => x.SynchronizedOn == null); Subsonic turns it into the following SQL query: SELECT * FROM Visits WHERE SynchronizedOn == null which never returns any rows. I tried the following code but it throws an error: visits = repository.Find<Visit>(x => x.SynchronizedOn.HasValue); I was able to use the following syntax: var query = from v in repository.All<Visit>() where v.SynchronizedOn == null orderby v.CreatedOn select v; visits = query.ToList<Visit>(); but it's not as nice and short as using the Find<T> method. Does anyone know how I can specify the "SynchronizedOn IS NULL" condition in the Find<T> method?
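    As an aside for anyone debugging this, the generated predicate fails because of standard SQL NULL-comparison semantics rather than anything Subsonic-specific; a throwaway script (the temp table here is hypothetical, not from the question) shows the difference:

        -- Under default ANSI_NULLS behaviour, a comparison with NULL never evaluates to true;
        -- only IS NULL matches rows where the column has no value.
        CREATE TABLE #Visits (Id int, SynchronizedOn datetime NULL);
        INSERT INTO #Visits (Id, SynchronizedOn) VALUES (1, NULL), (2, GETDATE());

        SELECT * FROM #Visits WHERE SynchronizedOn = NULL;   -- returns no rows
        SELECT * FROM #Visits WHERE SynchronizedOn IS NULL;  -- returns row 1

        DROP TABLE #Visits;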

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though – it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it. Let's take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component: Let's take a look inside the editor of the Merge Join: We are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon). We are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence it is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output. Take a look: Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning: Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed. The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary, expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet  P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!

    Read the article

  • T-SQL Tuesday #13: Clarifying Requirements

    - by Alexander Kuznetsov
    When we transform initial ideas into clear requirements for databases, we typically have to make the following choices: Frequent maintenance vs doing it once. As we are clarifying the requirements, we need to determine whether we want to continue spending considerable time maintaining the system, or if we want to finish it up and move on to other tasks. Race car maintenance vs installing electric wiring is my favorite analogy for this kind of choice. In some cases we need to squeeze every last bit...(read more)

    Read the article

  • Looking Back at PASS Summit 2013 - Location

    - by RickHeiges
    Now that it has been a few weeks since the Summit, I wanted to look back at the location "experiment". Convention Center - It seemed to work well for the conference. There were quite a few areas nearby where you could sit down and get some work done or have a discussion. For the larger welcome reception the first night, I really liked the different areas. If you wanted to enjoy the Quiz Bowl, the ballroom area was set up nicely with big screens so that everyone could see and hear. The area right...(read more)

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT

If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you've managed to work through the first three of these stages – source control, then basic and advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable.

Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.

It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

For now, in this post, we'll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

You and Your Team

It's a cliché in the DevOps community that "It's not all about processes and tools, really it's all about a culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: overall development costs reduced by 40% (2008 to present); the number of programs under development increased by 140%; development costs per program down 78%; firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%).

But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

I've spoken to many customers who have implemented database CI who describe their deployment process as "The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay, isn't it?" isn't likely to go down well.

There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up to speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment.

Environments

Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "What we want, to do continuous delivery properly" and "What we're currently stuck with". Some of these differences are:

- What we want: each developer with their own dedicated database environment. What we've got: a single shared "development" environment, used by everyone at once.
- What we want: an Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: in fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!); whether you have a full suite of unit tests running is a different question…
- What we want: a separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
- What we want: a proper pre-production (or "staging") box that matches production as closely as possible. What we've got: hopefully a pre-production box of some sort. But does it match production closely!?
- What we want: a production environment reproducible from source control. What we've got: a production box which has drifted significantly from anything in source control.

The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application. There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

- Dedicated development databases,
- An Integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action],
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
- Pre-production – the environment you use to test the production release process,
- Production.

* A note on the use of the word "automatic" – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

The Deployment Process

As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"

1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod,
2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
5. If all working, run the script on production.*

* This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you're interested in testing early versions.

There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend. At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?"

NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?" Repeat for earlier environments (QA and so on).

Rollback and Recovery

If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:

- Immediately restore from backup,
- Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!),
- Have fallback environments – for example, using a blue-green deployment pattern.

Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.

Development Practices

This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

Actions: For your business, work out how far down the path you want to go, amending your database development patterns towards "best practice". It's a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from, and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.

    Read the article

  • iphone com.apple.product-type.tool

    - by John Smith
    Hello, I am having trouble compiling a static library for the iPhone. It worked in the past. It keeps saying "target specifies 'com.apple.product-type.tool' but there is no such product for the 'iphoneos' platform." I have rechecked the static compilation flag.

    Read the article

  • URL- & coordinate-based screen capturing tool

    - by Enrico Stahn
    Hello, I'm searching for a screen capturing tool which captures areas of a website/web application based on the URL. The very best for me would be a Firefox/IE addon with an API accessible via JavaScript. Example: URL, Coordinates, Filename
    http://foo.com/project/show/33; rectangle:10,10,50,50; myapp-area1.jpg
    http://foo.com/project/show/33; rectangle:100,100,150,150; myapp-area2.jpg

    Read the article

  • Follow-up Answers for my Australia Classes

    - by Kalen Delaney
    I was out of the country for the last two weeks of March, delivering classes in Brisbane and Sydney, which were organized by WardyIT. It was a great visit and there were 24 terrific students! As is sometimes (perhaps often?) the case, there were questions posed that I couldn’t answer during class, so here are a couple of follow-up answers. 1. I brought up the fact that SQLS 2012 generates a warning message when there are ‘too many’ Virtual Log Files (VLFs) in a database. (It turns out the message...(read more)
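    As a side note for anyone wanting to check their own systems, a quick and generic way to see the VLF count that the SQL Server 2012 warning is based on is the undocumented DBCC LOGINFO command, which returns one row per VLF; this is just an illustration, not a script from the class.

        -- Run in the context of the database you want to inspect.
        -- Each row returned represents one Virtual Log File (VLF) in the transaction log;
        -- a very high row count is what triggers the 'too many VLFs' warning in SQL Server 2012.
        USE AdventureWorks2012;  -- hypothetical database name, substitute your own
        DBCC LOGINFO;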

    Read the article

  • New DMF for SQL Server 2008 sys.dm_fts_parser to parse a string

    Many times we want to split a string into an array and get a list of each word separately. The sys.dm_fts_parser function will help us in these cases. Moreover, this function will also differentiate the noise words from the exact-match words. sys.dm_fts_parser can also be very powerful for debugging purposes. It can help you check how the word breaker and stemmer work for a given input for Full-Text Search.
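    The tip's excerpt stops before showing a call, but a typical invocation looks roughly like the sketch below; 1033 is the English (US) LCID, and the trailing 0, 0 are example values for the stoplist id and accent-sensitivity arguments (Full-Text Search must be installed, and the function requires sysadmin membership).

        -- Breaks the input string into tokens using the English word breaker and stemmer.
        -- special_term flags noise (stop) words versus exact matches; display_term is the parsed token.
        SELECT occurrence, special_term, display_term
        FROM   sys.dm_fts_parser(N'"The quick brown fox jumps over the lazy dog"', 1033, 0, 0);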

    Read the article

  • My Lightning Talk in MP3 format

    - by Rob Farley
    Download it now via http://bit.ly/RFCollation  Lots of people tell me they wish they’d heard my Lightning Talk from the PASS Summit. This was the one that was five minutes, in which I explained Collation using examples comparing US English, UK English and Australian English. At the end, I showed my Arsenal thongs. You can see a picture of them below. There was a visual joke involving the name Arsenal too... After the recordings became available, I asked the PASS legal people, and they said I could do what I liked with my own five-minute set, so long as I didn’t sell it. So I made an MP3. I’ve uploaded it to the LobsterPot Solutions web server, and provided an easy link via http://bit.ly/RFCollation. It’s a link straight to the MP3, and you’re welcome to download it, put it on your iPod, whatever you like. And also feel free to write comments here, to let me know what you think.

    Read the article

  • When I add a database table to a DBML file via LINQ to SQL, I get a slew of compiler errors.

    - by Zian Choy
    Whenever I add a certain table to a DBML file via LINQ to SQL, I get 102 errors in my VB NET project. Some of the errors: Error 1 Attribute 'TableAttribute' cannot be applied multiple times. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 74 2 EMS Reality Check Error 2 'emptyChangingEventArgs' is already declared as 'Private Shared emptyChangingEventArgs As System.ComponentModel.PropertyChangingEventArgs' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 78 17 EMS Reality Check Error 3 '_GroupID' is already declared as 'Private _GroupID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 80 10 EMS Reality Check Error 4 '_ID' is already declared as 'Private _ID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 82 10 EMS Reality Check Any suggestions for getting the table to work with LINQ to SQL will be welcomed. The table's properties: Group ID ID (Primary Key) Contact Title UseGroupAddress InternationalFormat Address1 Address2 City State ZipCode Country Phone Fax EMailAddress Notes DateAdded AddedBy DateChanged ChangedBy Active ExternalReference ChangeCounter PhoneLabel FaxLabel

    Read the article

  • Semi-blocking Transformations in SQL Server Integration Services SSIS

    In an SSIS data flow, there are multiple types of transformations. On one hand you have synchronous and asynchronous transformations, but on the other hand you have non-blocking, semi-blocking and fully-blocking components. In this tip, Koen Verbeeck takes a closer look at the performance impact of semi-blocking transformations in SSIS.

    Read the article

  • Another Way to Learn SQL Server

    - by RickHeiges
    Since 2004, I have been on the Advisory Board for several continuing education certificate programs for the University of Washington. You might know some of the other Advisory Board Members - check it out. The Advisory Board meets very infrequently and is asked for "advice" (not direction) on various aspects of the program. Generally speaking, courses that are taught for a degree are non-platform specific. Continuing Education courses and certificate programs are more product focused. As you can...(read more)

    Read the article

  • Database Activity Monitoring Part 2 - SQL Injection Attacks

    If you think through the web sites you visit on a daily basis, the chances are that you will need to log in to verify who you are. In most cases your username would be stored in a relational database along with all the other registered users on that web site. Hopefully your password will be encrypted and not stored in plain text.
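    The excerpt ends before any examples, but the classic illustration of the attack and the usual T-SQL mitigation (passing user input as a parameter via sp_executesql instead of concatenating it into dynamic SQL) looks roughly like this; the Users table and column names are invented for the sketch.

        -- Hypothetical Users table, for illustration only.
        DECLARE @input nvarchar(100) = N'alice''; DROP TABLE Users; --';

        -- Vulnerable: the input is concatenated straight into the statement,
        -- so the injected text is executed as SQL.
        EXEC (N'SELECT * FROM Users WHERE UserName = ''' + @input + N'''');

        -- Safer: sp_executesql passes the value as a typed parameter,
        -- so the input is treated purely as data and never as SQL.
        EXEC sys.sp_executesql
             N'SELECT * FROM Users WHERE UserName = @UserName',
             N'@UserName nvarchar(100)',
             @UserName = @input;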

    Read the article
