Search Results

Search found 1104 results on 45 pages for 'grant trevor'.

Page 29 of 45

  • SQL Server schema-owner permissions

    - by Andrew Bullock
    If I do CREATE SCHEMA [test] AUTHORIZATION [testuser], testuser doesn't seem to have any permissions on the schema. Is this correct? I thought that, as the principal that owns the schema, you had full control over it. What permission do I need to grant testuser so that it has full control over the test schema only? Edit: by "full control" I mean the ability to CRUD tables, views, sprocs etc. Thanks
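
    A hedged sketch of the grants that usually close this gap (the names come from the question; treat the exact permission set as an assumption to verify): owning the schema already confers CONTROL over everything inside it, but creating new objects also requires database-level CREATE rights, which schema ownership does not include.

        -- testuser already has CONTROL within [test] as its owner.
        -- What is typically missing are these database-level rights,
        -- which let it create new objects (they land in [test] when
        -- testuser's default schema is [test] or the name is qualified):
        GRANT CREATE TABLE TO [testuser];
        GRANT CREATE VIEW TO [testuser];
        GRANT CREATE PROCEDURE TO [testuser];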

    Read the article

  • Error adding reference in .NET 3.5

    - by d daly
    Hi, I'm trying to add a reference to a DLL I downloaded, which I want to use for some SFTP work. As soon as I add it I get "could not load file or assembly... failed to grant minimum permission requests". Is this to do with my own account permissions? Thanks, DD

    Read the article

  • Necessary rights to be able to add a column with ALTER TABLE ADD column_name

    - by Sorin Comanescu
    Hi, could somebody point out the necessary rights to do something like ALTER TABLE myTable ADD myColumn int NOT NULL CONSTRAINT [Constraint_name] DEFAULT ((0))? I assumed GRANT ALTER ON myTable TO [user] was enough, but I'm getting the error message "The UPDATE permission was denied on the object 'myTable', database 'x', schema 'dbo'." Could UPDATE rights be needed because of the DEFAULT constraint? Thanks.
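
    A hedged reading of the error, sketched in T-SQL (the table and user names are from the question): because the new column is NOT NULL with a DEFAULT, the engine has to write the default value into every existing row, so it appears to check UPDATE on the table in addition to ALTER.

        -- ALTER covers the metadata change...
        GRANT ALTER ON OBJECT::dbo.myTable TO [user];
        -- ...and UPDATE covers populating the new NOT NULL column
        -- with its default in the existing rows.
        GRANT UPDATE ON OBJECT::dbo.myTable TO [user];

        ALTER TABLE dbo.myTable
            ADD myColumn int NOT NULL CONSTRAINT [Constraint_name] DEFAULT ((0));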

    Read the article

  • Service MSFTESQL not found.

    - by hrishi
    While installing MS-CRM I got SQL Server errors: "Service MSFTESQL not found. The specified service does not exist as an installed service." But I can see the service is running automatically, and the help file says "verify that you have local administrator permissions for the computer on which SQL Server is running, and if necessary grant the needed permissions." How do I achieve this?

    Read the article

  • Problems writing a query to join two tables

    - by Psyche
    Hello, I'm working on a script whose purpose is to grant site users access to different sections of the site menu. For this I have created two tables, "menu" and "rights":

    menu
    - id
    - section_name

    rights
    - id
    - menu_id (references column id from menu table)
    - user_id (references column id from users table)

    How can a query be written to get all menu sections and mark the ones where a given user has access? I'm using PHP and Postgres. Thank you.
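
    A hedged sketch of one way to write it in Postgres, using the two tables above ($1 stands for the user id the PHP script passes in; the has_access alias is invented):

        -- Every menu section, with has_access true where a matching
        -- rights row exists for this user.
        SELECT m.id,
               m.section_name,
               (r.id IS NOT NULL) AS has_access
        FROM menu m
        LEFT JOIN rights r
               ON r.menu_id = m.id
              AND r.user_id = $1
        ORDER BY m.id;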

    Read the article

  • File paths on Silverlight applications

    - by jose
    I have two Silverlight applications. One produces XML files (models) that the other reads. The XML files are uploaded to a specific (absolute, for now) folder. So I need a solution to grant both applications access to a shared Models folder. Right now I'm using absolute paths and developing on the ASP.NET Dev server. What would be the best way to accomplish this, with an IIS deployment in mind? Regards,

    Read the article

  • Rails multi-level model security

    - by rballz
    I need to do the following in Rails to mirror a desktop application: a User and an Office 'own' a record; if you don't own the record at the user or office level, you're kicked into the public realm. The user gets read/write/delete on the model record, the office gets read/write/delete on the model record, and other/public gets read/write/delete on the model record. E.g. UserA owns a model record with read/write/delete, OfficeA owns a model with read/write, and other/public gets read. I was wondering if a plugin/gem exists to grant this functionality?

    Read the article

  • How to hide a folder in SSRS Report Builder?

    - by tnafoo
    When I click File > Open in Report Builder, I can see a list of folders under the Report Server Home root folder. But I don't want end users to see any of the folders under root unless I grant them access. I tried hiding the folders and removing permissions on them, but they are still visible in the root folder.

    Read the article

  • SQL Server 2008 permissions and encryption

    - by paranjai
    I have encrypted columns in some of the tables in SQL Server 2008. As the db owner, I have access to encrypt and decrypt the data using the symmetric key and certificate. But some other users currently have only datareader and datawriter rights, and when they execute any SP referring to the logic which uses the key and certificate, they get an error along the lines of "the user does not have permission on the certificate". What rights / exact permissions should I grant them to solve this problem?
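
    A hedged sketch of the grants that usually unblock this (the key, certificate and user names are placeholders, not from the post): the callers need to be able to open the symmetric key, which in turn means using the private key of the certificate that protects it.

        -- Let the user reference and open the symmetric key...
        GRANT VIEW DEFINITION ON SYMMETRIC KEY::MySymKey TO [SomeUser];
        -- ...and decrypt it via the certificate's private key.
        GRANT CONTROL ON CERTIFICATE::MyCert TO [SomeUser];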

    Read the article

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
    Hi, multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability: when one IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Neither claim is completely true: when the traffic is HTTP, most HTML browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up (read here chapter 3.1 and here); and when multiple data centers are involved, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino

    Edit: Of course each data center has a local load balancer with hot spare. It's OK to sacrifice session affinity for an instant fail-over. AFAIK the only way for a DNS to suggest one data center instead of another is to reply with just the IP (or IPs) associated to that data center. If the data center becomes unreachable, then all those IPs are also unreachable. This means that, even if smart HTML browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So, "smart DNS" cannot assure instant fail-over. Conversely, a DNS round-robin permits it: when one data center fails, the smart HTML browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers.

    Edit 2: Some people suggest TCP Anycast as a definitive solution. In this paper (chapter 6) it is explained that Anycast fail-over is related to BGP convergence. For this reason, Anycast fail-over can take anywhere from 15 minutes down to 20 seconds to complete. 20 seconds is possible on networks where the topology was optimized for this. Probably only CDN operators can grant such fast fail-overs.

    Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double check) and: the only CDN using TCP Anycast seems to be CacheFly; other operators like CDN Networks and BitGravity use CacheFly. It seems that their edges cannot be used as reverse proxies, therefore they cannot be used to grant instant failover. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records, and from traceroutes it seems that the returned IPs are in the same data center. So, I'm puzzled as to how they can offer a 100% SLA when one data center goes down.

    Read the article

  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
    I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to the next, MySQL didn't let me log in with my root password anymore (Access denied for user 'root'@'localhost' (using password: YES)). So I tried two ways to reset the password.

    No. 1: I typed

        shell> /etc/init.d/mysqld stop

    to stop MySQL, then restarted it skipping the grant tables:

        shell> mysqld_safe --skip-grant-tables

    So I was able to log in as root and change the password using:

        mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root';
        mysql> FLUSH PRIVILEGES;

    I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). That didn't work either. Actually, it made things worse, because since that day, every time I try to start MySQL, it doesn't even ask me for my password; instead I get:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Well, I looked up what that means and found that my mysqld.sock is missing. I tried to create it using touch, but MySQL can't start with that socket. Now I'm trying to reinstall MySQL, but every time I type

        shell> apt-get --purge remove mysql-server mysql-common mysql-client

    in that or any other order, or each of the three alone, I get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
         libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
         mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
         mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
         psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable
                        Depends: mysql-server but it is not going to be installed
         psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable
         psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable
                   Depends: plesk-base (>= 11.0.9) but it is not installable
                   Depends: mysql-server but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I said to myself, "let's just remove those files with dependencies, too" (the psa stuff, since Plesk is virtual and can't be uninstalled)... Guess what happened:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
         libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2)
         mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
         mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Of course I tried apt-get -f install too - many times, even. What am I doing wrong? No matter which other packages I include in apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? Hope there's someone out there who can help me! Cheers!

    EDIT: After trying

        apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn

    I got:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
         libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
         libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed
         php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
         ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So I tried to remove all of these too and got:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package mysql-client is not installed, so not removed
        Package mysql-server is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And actually I think removing that file too solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me encouraged to just go on removing :D
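
    A hedged aside for anyone retracing these steps (not part of the original post): under --skip-grant-tables it's easy to reset a different root row than the one you actually log in with, so it's worth checking the Host values first. A minimal sketch, assuming the MySQL 5.5-era schema where the Password column still exists:

        -- Run inside mysql while the server is started with
        -- --skip-grant-tables: list the root rows first...
        SELECT User, Host, Password FROM mysql.user WHERE User = 'root';
        -- ...then target the exact row you connect as:
        UPDATE mysql.user SET Password = PASSWORD('MyNewPassword')
        WHERE User = 'root' AND Host = 'localhost';
        FLUSH PRIVILEGES;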

    Read the article

  • SQL Server Split() Function

    - by HighAltitudeCoder
    Ever wanted a dbo.Split() function, but not had the time to debug it completely? Let me guess - you are probably working on a stored procedure with 50 or more parameters; two or three of them are parameters of differing types, while the other 47 or so are all of the same type (id1, id2, id3, id4, id5...). Worse, you've found several other similar stored procedures with the ONLY DIFFERENCE being the number of like parameters taped to the end of the parameter list. If this is the situation you find yourself in now, you may be wondering, "Why am I working with three different copies of what is basically the same stored procedure, and why am I having to maintain changes in three different places? Can't I have one stored procedure that accomplishes the job of all three?" My answer to you: YES! Here is the Split() function I've created.

        /******************************************************************************
                                          Split.sql
        ******************************************************************************/
        /******************************************************************************
          Split a delimited string into sub-components and return them as a table.

          Parameter 1: Input string which is to be split into parts.
          Parameter 2: Delimiter which determines the split points in input string.
          Works with space or spaces as delimiter. Split() is apostrophe-safe.

          SYNTAX:
          SELECT * FROM Split('Dvorak,Debussy,Chopin,Holst', ',')
          SELECT * FROM Split('Denver|Seattle|San Diego|New York', '|')
          SELECT * FROM Split('Denver is the super-awesomest city of them all.', ' ')
        ******************************************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE xtype = 'TF'
              AND name = 'Split'
              )
        BEGIN
              DROP FUNCTION Split
        END
        GO

        CREATE FUNCTION Split (
              @InputString      VARCHAR(8000),
              @Delimiter        VARCHAR(50)
        )
        RETURNS @Items TABLE (
              Item              VARCHAR(8000)
        )
        AS
        BEGIN
              -- A space delimiter is normalized to a comma first
              IF @Delimiter = ' '
              BEGIN
                    SET @Delimiter = ','
                    SET @InputString = REPLACE(@InputString, ' ', @Delimiter)
              END

              IF (@Delimiter IS NULL OR @Delimiter = '')
                    SET @Delimiter = ','

              --INSERT INTO @Items VALUES (@Delimiter)   -- Diagnostic
              --INSERT INTO @Items VALUES (@InputString) -- Diagnostic

              DECLARE @Item       VARCHAR(8000)
              DECLARE @ItemList   VARCHAR(8000)
              DECLARE @DelimIndex INT

              SET @ItemList = @InputString
              SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
              WHILE (@DelimIndex != 0)
              BEGIN
                    SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
                    INSERT INTO @Items VALUES (@Item)

                    -- Trim the item just inserted (and its delimiter) off the list
                    SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex)
                    SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
              END -- End WHILE

              IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString
              BEGIN
                    SET @Item = @ItemList
                    INSERT INTO @Items VALUES (@Item)
              END

              -- No delimiters were encountered in @InputString, so just return @InputString
              ELSE INSERT INTO @Items VALUES (@InputString)

              RETURN

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON Split TO UserRole1
        --GRANT SELECT ON Split TO UserRole2
        --GO

    The syntax is basically as follows:

        SELECT <fields>
        FROM Table1
        JOIN Table2 ON ...
        JOIN Table3 ON ...
        WHERE LOGICAL CONDITION A
        AND LOGICAL CONDITION B
        AND LOGICAL CONDITION C
        AND Table2.Id IN (SELECT Item FROM Split(@IdList, ','))

    @IdList is a parameter passed into the stored procedure, and the comma (',') is the delimiter you have chosen to split the parameter list on. You can also use it like this:

        SELECT <fields>
        FROM Table1
        JOIN Table2 ON ...
        JOIN Table3 ON ...
        WHERE LOGICAL CONDITION A
        AND LOGICAL CONDITION B
        AND LOGICAL CONDITION C
        GROUP BY <fields>
        HAVING COUNT(*) = (SELECT COUNT(*) FROM Split(@IdList, ','))

    Similarly, it can be used with aggregate functions at run time:

        SELECT (SELECT MIN(Item) FROM Split(@IdList, ',')) AS MinItem, <fields>
        FROM Table1
        JOIN Table2 ON ...
        JOIN Table3 ON ...
        WHERE LOGICAL CONDITION A
        AND LOGICAL CONDITION B
        AND LOGICAL CONDITION C
        GROUP BY <fields>

    Now that I've (hopefully effectively) explained the benefits of using this function and implementing it in one or more of your database objects, let me warn you of a caveat that you are likely to encounter. You may have a team member who waits until the right moment to ask you a pointed question: "Doesn't this function just do the same thing as using the IN function? Why didn't you just use that instead? In other words, why bother with this function?" What's happening is that one or more team members has failed to understand the reason for implementing this kind of function in the first place. (Note: this is THE MOST IMPORTANT ASPECT OF THIS POST.) Allow me to outline a few pros to implementing this function, so you may effectively parry this question. Touche.

    1) Code consolidation. You don't have to maintain what is basically the same code and logic, but with varying numbers of the same parameter, in several SQL objects. I'm not going to go into the cons related to using this function, because the aforementioned team member is probably more than adept at pointing these out. Remember, the real positive contribution is that you are decreasing the likelihood that your team fails to update all (x) duplicate copies of what is basically the same stored procedure, and so on... This is the classic downside to duplicate code. It is a virus, and you should kill it. You might be better off rejecting your team member's question and responding with your own: "Would you rather maintain the same logic in multiple different stored procedures, and hope that the team doesn't forget to always update all of them at the same time?". In his head, he might be thinking "yes, I would like to maintain several different copies of the same stored procedure", although you probably will not get such a direct response.

    2) Added flexibility - you can use the Split function elsewhere, and for splitting your data in different ways. Plus, you can use any kind of delimiter you wish. How can you know today the ways in which you might want to examine your data tomorrow? Segue to my next point.

    3) Because the function takes a delimiter parameter, you can split the data in any number of ways. This greatly increases the utility of the function and enables your team to work with the data in a variety of different ways in the future. You can split on a single char, symbol, word, or group of words. You can split on spaces. (The list goes on... test it out.)

    Finally, you can dynamically define the behavior of a stored procedure (or other SQL object) at run time, through the use of this function. Rather than have several objects that accomplish almost the same thing, why not have only one instead?
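
    To make the payoff concrete, here is a hedged sketch of the consolidated procedure this post argues for (the procedure and table names are invented for illustration):

        -- One procedure replaces the id1..id47 variants by accepting
        -- a single delimited list of ids.
        CREATE PROCEDURE GetOrdersByIds
              @IdList VARCHAR(8000)   -- e.g. '101,205,309'
        AS
        BEGIN
              SELECT o.*
              FROM Orders o
              WHERE o.Id IN (SELECT Item FROM Split(@IdList, ','))
        END
        GO

        -- Callers pass however many ids they need:
        EXEC GetOrdersByIds @IdList = '101,205,309'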

    Read the article

  • F1 Pit Pragmatics

    - by mikef
    "I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life. but I actually hate the computers themselves." - Scott Merrill, 'I hate computers: confessions of a Sysadmin' If Scott's goal was to polarize opinion and trigger raging arguments over the 'real reasons why computers suck', then he certainly succeeded. Impassioned vitriol sits side-by-side with rational debate. Yet Scott's fundamental point is absolutely on the money - Computers are a means to an end. The IT industry is finally starting to put weight behind the notion that good User Experience is an absolutely crucial goal, a cause championed by the likes of Microsoft's Bill Buxton, and which Apple's increasingly ubiquitous touch screen interface exemplifies. However, that doesn't change the fact that, occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. You'll find a perfect example of this Faustian bargain in Trevor Clarke's fascinating look into the (diabolical) IT infrastructure of modern F1 racing - high performance, high availability. high everything. To paraphrase, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, a connection back to the operations room in the team's home base - the list goes on. All of this while the servers are exposed "to carbon dust, oil, vibration, rain, heat, [and] variable power". Now, this is admittedly an extreme context where there's no real choice but to use complex systems where ease-of-use is, at best, a secondary concern. The flip-side is seen in small-scale personal computing such as that seen in Apple's iDevices, which are incredibly intuitive but limited in their scope. In terms of what kinds of systems they prefer to use, I suspect that most SysAdmins find themselves somewhere along this axis of Power vs. Usability, and which end of this axis you resonate with also hints at where you think the IT industry should focus its energy. Do you see yourself in the F1 pit, making split-second decisions, wrestling with information flows and reticent hardware to bend them to your will? If so, I imagine you feel that computers are subtle tools which need to be tuned and honed, using the advanced knowledge possessed only by responsible SysAdmins (If you have an iPhone, I suspect it's jail-broken). If the machines throw enigmatic errors, it's the price of flexibility and raw power. Alternatively, would you prefer to have your role more accessible, with users empowered by knowledge, spreading the load of managing IT environments? In that case, then you want hardware and software to have User Experience as their primary focus, and are of the "means to an end" school of thought (you're probably also fed up with users not listening to you when you try and help). At its heart, the dichotomy is between raw power (which might be difficult to use) and ease-of-use (which might have some limitations, but you can be up and running immediately). Of course, the ultimate goal is a fusion of flexibility, power and usability all in one system. It's achievable in specific software environments, and Red Gate considers it a target worth aiming for, but in other cases it's a goal right up there with cold fusion. 
I think it'll be a long time before we see it become ubiquitous. In the meantime, are you Power-Hungry or a Champion of Usability? Cheers, Michael Francis Simple Talk SysAdmin Editor

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together. The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.

    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects for full no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals.

    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, silo'd structures with limited integration; and gaps in automation.

    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.

    Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise-wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.

    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.
Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.

    Read the article

  • That’s a wrap! Almost, there’s still one last chance to attend a SQL in the City event in 2012

    - by Red and the Community
    The communities team are back from the SQL in the City multi-city US tour, and we are delighted to have met so many happy SQL Server professionals and Red Gate customers. We set out to run a series of back-to-back events in order to meet, talk to and delight as many SQL Server and Red Gate enthusiasts as possible in 5 different cities in 11 days. We did it! The attendees had a good time too, and 99% of them would attend another SQL in the City event in 2013 - so it seems we left an impression. There were a range of topics on the event agenda, from 'The Whys & Hows of Continuous Integration' and 'Database Maintenance Essentials' to 'Red Gate tools - The Complete Lifecycle', 'Automated Deployment: Application And Database Releases Without The Headache', 'The Ten Commandments of SQL Server Monitoring' and many more. Videos and slides from the events will be posted to the event website in November, after our last event of 2012.

    SQL in the City Seattle - November 5. Join us for free and hear from some of the very best names in the SQL Server world. SQL Server MVPs such as Steve Jones, Grant Fritchey, Brent Ozar, Gail Shaw and more will be presenting at the Bell Harbor conference center for one day only. We're even taking on board some of the recent attendee suggestions for how we can improve the events (feedback from the 65% of attendees who came to our US tour events); first off, we're extending the drinks celebration in the evening! Rather than just a 30-minute drink and run, attendees will have up to 2 hours to enjoy free drinks, relax and network in a fantastic environment amongst some really smart, like-minded professionals. If you're interested in expanding your SQL Server knowledge or would like to learn more about Red Gate tools, get yourself registered for the last SQL in the City event of 2012. It's free, fun and we're very friendly! I look forward to seeing you in Seattle on Monday November 5. Cheers, Annabel.

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!

    General database continuous integration:
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)

    Setting up continuous integration with Red Gate tools:
    · Continuous integration for databases using Red Gate tools - A technical overview (white paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)

    Other:
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL; Pearson Education)

    Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • sharepoint 2007 access denied when accessing user profiles via ssp

    - by user22215
    Guys, I have a really strange problem in regards to SharePoint My Sites today. I went into user profiles and properties in order to set up a property, and all of a sudden I got access denied. First off, I know that I'm logged in with the correct account. After the access denied, I clicked on personalization services and permissions, and I then got "An unhandled exception occurred in the user interface. Exception Information: Cannot complete this action." I'm not seeing anything in the server application logs either. So, have any of you seen this before? Is there some way to grant a user account the Manage Profiles permission using stsadm? BTW, all other functions of the SSP are working fine, so my question is: if the user profiles and My Sites portion of an SSP tanks, how do you repair that portion of the SSP? BTW, the user accounts that I'm using are site collection owners, and they also have full control at the web application level. I actually ran across this interesting post, but it does not really help my problem: http://blog.tylerholmes.com/2008/09/access-denied-for-site-collection.html

    Read the article
