Search Results

Search found 31931 results on 1278 pages for 'sql statement'.

  • Visual Studio 2008 (C#) with SQL Compact Edition database error: 26

    - by Tommy
    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) I've created a SQL Compact database, included it in my application, and can connect to it fine from other database editors, but within my application I'm trying: using (SqlConnection con = new SqlConnection(Properties.Settings.Default.DatabaseConnection)) { con.Open(); } The connection string is Data Source=|DataDirectory|\Database.sdf. I'm stumped; any insight?

    Read the article

  • I can't use a database in SQL Azure!

    - by Nahid
    Hi, I am trying to use a database in SQL Azure. I have installed SQL Server 2008. I can log in to SQL Azure and can use the master database, but I can't use any other database. It shows the error: "USE statement is not supported to switch between databases. Use a new connection to connect to a different Database." How can I use the other databases?
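
    Since SQL Azure rejects USE, the usual fix is to open a new connection that names the target database directly. A minimal sketch, with server, database, and login as placeholders; in Management Studio the equivalent is setting "Connect to database" on the Options > Connection Properties tab before connecting:

        -- SQL Azure has no USE; point the connection itself at the database instead,
        -- e.g. in an ADO.NET connection string (all names below are placeholders):
        -- Server=tcp:yourserver.database.windows.net;Database=YourOtherDb;User ID=yourlogin@yourserver;Password=...;Encrypt=True;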

    Read the article

  • CakePHP model useTable with SQL Views

    - by Chris
    I'm in the process of converting our CakePHP-built website from Pervasive to SQL Server 2005. After a lot of hassle, the setup I've gotten to work uses the ADODB driver with 'connect' set to odbc_mssql. This connects to our database and builds the SQL queries just fine. However, here's the rub: one of our models was associated with a SQL view in Pervasive. I ported over the view, but with this setup it appears that CakePHP can't find the view in SQL Server. I couldn't find much after some Google searches. Has anyone else run into a problem like this? Is there a solution or workaround, or is there some redesign in my future?
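
    One hedged diagnostic, assuming the view name below stands in for the real one: confirm the ported view exists in a schema the CakePHP login can actually see, since the model's useTable only helps if the driver can list the object.

        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM INFORMATION_SCHEMA.VIEWS
        WHERE TABLE_NAME = 'my_view';

    If the view lives outside dbo, or the login lacks SELECT permission on it, SQL Server's metadata visibility rules will typically hide it from the driver's table listing.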

    Read the article

  • Determine Last Modification Datetime for an Azure Table

    - by embeddedprogrammer
    I am developing an application which may be hosted on a Microsoft SQL Server or on Azure SQL, depending upon the end user's wishes. My whole system works fine except for some WCF functions that determine the last modification time of tables using the following technique: SELECT OBJECT_NAME(OBJECT_ID) as tableName, last_user_update as lastUpdate FROM mydb.sys.dm_db_index_usage_stats This query fails in Azure. Is there any analogous way to get table last-modification dates from Azure's SQL?
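
    A hedged sketch: one likely culprit is the three-part name (mydb.sys....), which Azure SQL rejects. Connecting to the database itself and querying the DMV with no database prefix may work where the service exposes sys.dm_db_index_usage_stats:

        SELECT OBJECT_NAME(object_id) AS tableName,
               last_user_update AS lastUpdate
        FROM sys.dm_db_index_usage_stats   -- no cross-database prefix in Azure
        WHERE database_id = DB_ID();

    Note that this DMV tracks activity since the last restart rather than a durable modification timestamp, so treat the result as approximate on either platform.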

    Read the article

  • Recommended book for Sql Server query optimisation

    - by Patrick Honorez
    Even though I have passed a certification exam on SQL Server design and implementation, I have no clue how to trace, debug, or optimise performance in SQL Server. Now the database I built is really business-critical, and getting big, so it is time for me to dig into optimisation, especially regarding when and where to add indexes. Can you recommend a good book on this subject? (Smaller is better :) Just in case: I am using SQL Server 2008. Thanks

    Read the article

  • SQL: Using a CASE statement to update 1000 rows at once, how?

    - by SoLoGHoST
    Ok, I would like to use a CASE statement for this, but I am lost. Basically, I need to update a ton of rows, but just the "position" column. I need to update all "position" values from 0 to count(position) for each id_layout_position value, per id_layout value. Here's what I have for a regular update, but I don't want to throw this into a foreach loop, as it would take forever. I'm using SMF (Simple Machines Forums), so it might look a little different, but the idea is the same, and CASE statements are supported:

        $smcFunc['db_query']('', '
            UPDATE {db_prefix}dp_positions
            SET position = {int:position}
            WHERE id_layout_position = {int:id_layout_position}
                AND id_layout = {int:id_layout}',
            array(
                'position' => $position++,
                'id_layout_position' => (int) $id_layout_position,
                'id_layout' => (int) $id_layout,
            )
        );

    Anyways, I need to apply some sort of CASE to this so that it can auto-increment the value by 1 for each row it finds and update each row to the next value. I know I'm doing this wrong, even in this query, but I'm totally lost when it comes to CASEs. Here's an example of a CASE being used within SMF, so you can see it and hopefully relate:

        $conditions = '';
        foreach ($postgroups as $id => $min_posts)
        {
            $conditions .= '
                WHEN posts >= ' . $min_posts . (!empty($lastMin) ? ' AND posts <= ' . $lastMin : '') . '
                    THEN ' . $id;
            $lastMin = $min_posts;
        }

        // A big fat CASE WHEN... END is faster than a zillion UPDATEs ;).
        $smcFunc['db_query']('', '
            UPDATE {db_prefix}members
            SET id_post_group = CASE ' . $conditions . '
                ELSE 0
            END' .
            ($parameter1 != null ? '
            WHERE ' . (is_array($parameter1) ? 'id_member IN ({array_int:members})' : 'id_member = {int:members}') : ''),
            array(
                'members' => $parameter1,
            )
        );

    Before I do the update, I actually have a SELECT that loads everything I need into arrays, like so:

        $disabled_sections = array();
        $positions = array();
        while ($row = $smcFunc['db_fetch_assoc']($request))
        {
            if (!isset($disabled_sections[$row['id_group']][$row['id_layout']]))
                $disabled_sections[$row['id_group']][$row['id_layout']] = array(
                    'info' => $module_info[$name],
                    'id_layout_position' => $row['id_layout_position']
                );

            // Increment the positions...
            if (!is_null($row['position']))
            {
                if (!isset($positions[$row['id_layout']][$row['id_layout_position']]))
                    $positions[$row['id_layout']][$row['id_layout_position']] = 1;
                else
                    $positions[$row['id_layout']][$row['id_layout_position']]++;
            }
            else
                $positions[$row['id_layout']][$row['id_layout_position']] = 0;
        }

    So here is my question: how do I use a CASE statement in the first code example so that I can update all of the rows in the position column, from 0 to the total number of rows found that share an id_layout value and an id_layout_position value, and continue this for all the different id_layout values in that table? Can I use the arrays above somehow? I'm sure I'll have to use the id_layout and id_layout_position values for this, right? But how? Thanks!
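
    One way to do this without CASE at all, sketched here under the assumption that this SMF install is backed by SQL Server 2005 or later (the table name is taken from the {db_prefix}dp_positions placeholder above; the syntax does not carry over to MySQL): ROW_NUMBER over an updatable CTE renumbers every group from 0 in a single statement.

        WITH numbered AS
        (
            SELECT position,
                   ROW_NUMBER() OVER (PARTITION BY id_layout, id_layout_position
                                      ORDER BY position) - 1 AS new_position
            FROM dp_positions
        )
        UPDATE numbered
        SET position = new_position;   -- each (id_layout, id_layout_position) group restarts at 0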

    Read the article

  • Running SQL scripts against an attached database?

    - by Will
    I've got an MDF attached to an instance of SQL Server 2008 Express, and I need to run some SQL scripts against it to generate tables, indexes, etc., but I can't figure out how to make this work. If I load the scripts in Visual Studio, it only allows me to connect to a server and run them against a database; I can't choose a different provider (Microsoft SQL Server Database File), so I can't select my MDF. That leaves me the only option of running the script as individual queries, but that won't work, as it appears it doesn't support T-SQL CREATE statements. How can I run my SQL script against an attached database?
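
    One workaround, offered as a hedged sketch rather than the canonical answer: once the MDF is attached, it is addressable by its database name, so the sqlcmd utility that ships with Express can run the whole script file in one shot (instance, database, and file names below are placeholders):

        sqlcmd -S .\SQLEXPRESS -d MyAttachedDb -E -i create_schema.sql

    -E uses Windows authentication, and -i points at the script file, GO batch separators and all.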

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    There are some common misconceptions about returning the last inserted identity value from a table. Depending on the requirement, we can use the @@IDENTITY, SCOPE_IDENTITY, or IDENT_CURRENT function, but it can be a real mess if anybody uses one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY, and IDENT_CURRENT are similar in that they all return values that were inserted into an identity column. Back in SQL Server 7 we used @@IDENTITY to return the last inserted identity value, because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT; now we have all three. So let's check out which one is responsible for what.

    IDENT_CURRENT returns the last identity value inserted into a particular table. It does not depend on the connection or on the scope of the insert statement. IDENT_CURRENT takes a table name as a parameter. Here is the syntax to get the last inserted identity value for a particular table using IDENT_CURRENT:

        SELECT IDENT_CURRENT('Employee')

    Both @@IDENTITY and SCOPE_IDENTITY return the last identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns a value inserted only within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions:

        SELECT @@IDENTITY
        SELECT SCOPE_IDENTITY()

    Now let's look at the following example. Suppose I have two tables called Employee and EmployeeLog:

        CREATE TABLE Employee
        (
            EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL,
            EmpName VARCHAR(100) NOT NULL,
            EmpSal FLOAT NOT NULL,
            DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())
        )

        CREATE TABLE EmployeeLog
        (
            EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL,
            EmpName VARCHAR(100) NOT NULL,
            EmpSal FLOAT NOT NULL,
            DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE())
        )

    I have an insert trigger defined on the Employee table which inserts a new record into EmployeeLog whenever a record is inserted into Employee. Suppose I insert a new record into the Employee table using the following statement:

        INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', '1')

    The trigger fires automatically and inserts a record into EmployeeLog. Here the scope of the insert statement and the scope of the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog, because @@IDENTITY is not limited to a particular scope. If I want the Employee table's identity value, I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It's safe and helps ensure bug-free code.
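
    To make the scope difference concrete, here is a minimal check, assuming the insert trigger described above is in place and these are fresh tables with one row in each:

        INSERT INTO Employee (EmpName, EmpSal) VALUES ('Arefin', 1)

        SELECT SCOPE_IDENTITY()             -- Employee's identity: the insert in this scope
        SELECT @@IDENTITY                   -- EmployeeLog's identity: the trigger ran last in this session
        SELECT IDENT_CURRENT('Employee')    -- Employee's last identity, regardless of session or scope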

    Read the article

  • SQL installation scripts for WebCenter Content 11g

    - by KJH
    As part of the installation of WebCenter Content 11g (UCM or URM), one of the main steps is to run the Repository Creation Utility (RCU) to establish the database schema and tables. This is pretty helpful because it runs all the scripts you need without you having to set anything up manually in the database. In UCM 10g and earlier, the installation itself would establish the database tables if you wanted it to; otherwise, the SQL scripts were available to be run independently ahead of time, which was helpful for DBAs who wanted to understand what was being done to the database for the application. In 11g, all of that is masked by RCU: you don't get to see the scripts at all as it establishes the tables. But if you comb through the RCU directories, you can track them down. They are in the /rcuHome/rcu/integration/contentserver11/sql/ directories, and to understand the order in which they are run, you can open the /rcuHome/rcu/integration/contentserver11/contentserver11.xml file. The order in which they are run is:

        contentserverrole.sql
        contentserveruser.sql
        intradoc.sql
        workflow.sql
        formats.sql
        users.sql
        default.sql
        contentprocedures.sql

    If you are installing WebCenter Records (URM), it will run some additional scripts between formats.sql and users.sql:

        MetadataSet.sql
        UIEnhancements.sql
        RecordsManagement.sql
        RecordsManagement_default.sql
        ClassifiedEnhancements.sql
        ClassifiedEnhancements_default.sql

    In addition to being available within the RCU install directories, the scripts are also available from within the Content Server UI. If you go to Administration -> DataStoreDesign SQL Generation, this page allows you to download these various SQL scripts. From here, you can select your particular database type and which components to include. Several components make changes dynamically to the database when they are enabled, so these scripts give you a way to inspect what is being run during that startup time. Once selected, click Generate and you can now either view or download the scripts from the Actions menu. DISCLAIMER: Installations are ONLY supported when done with the Repository Creation Utility. These scripts are for reference only and are not supported to be run manually.

    Read the article

  • Oracle SQL Developer webcast (OTN Japan)

    - by user788995
    Published: 2012/01/23. A Japanese-language OTN webcast on Oracle SQL Developer, the GUI tool for SQL and PL/SQL development: basic SQL Developer operations, SQL and PL/SQL editing and debugging, unit testing with SQL Developer's Unit Test feature, and administration tasks for DBAs. Session materials: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/111227_E-9_SQL.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/111227_E-9_SQL.mp4 http://www.oracle.com/technetwork/jp/ondemand/db-technique/e-9-sql-developer-1484606-ja.pdf

    Read the article

  • How to create a SELECT statement that produces a "merged" dataset from two tables (Oracle DBMS)?

    - by Roman Kagan
    Here's my original question: merging two data sets. Unfortunately I omitted some intricacies that I'd like to elaborate on here. I have two tables, events_source_1 and events_source_2, and I have to produce a result set from those tables (that I'd be able to insert into a third table, but that's irrelevant). events_source_1 contains historic event data, and I have to get the most recent event; for that I'm doing the following:

        select event_type, b, c, max(event_date), null next_event_date
        from events_source_1
        group by event_type, b, c

    events_source_2 contains the future event data, and I have to do the following:

        select event_type, b, c, null event_date, next_event_date
        from events_source_2
        where b > sysdate

    How do I add an outer join to fill the void, i.e. when the same event_type, b, c combination is found in events_source_2, next_event_date should be filled with the first date found? I'd greatly appreciate your help in advance.
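
    A hedged sketch of one way to stitch the two queries together with a full outer join, assuming MIN stands in for "the first date found" and keeping the asker's b > sysdate filter as written:

        SELECT COALESCE(h.event_type, f.event_type) AS event_type,
               COALESCE(h.b, f.b) AS b,
               COALESCE(h.c, f.c) AS c,
               h.event_date,
               f.next_event_date
        FROM (SELECT event_type, b, c, MAX(event_date) AS event_date
              FROM events_source_1
              GROUP BY event_type, b, c) h
        FULL OUTER JOIN
             (SELECT event_type, b, c, MIN(next_event_date) AS next_event_date
              FROM events_source_2
              WHERE b > sysdate
              GROUP BY event_type, b, c) f
          ON h.event_type = f.event_type AND h.b = f.b AND h.c = f.c

    Rows found only in history keep a null next_event_date, rows found only in the future feed keep a null event_date, and matches carry both.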

    Read the article

  • My first SQL Saturday

    - by Paul Nielsen
    I’m leaving soon for an exciting journey with a thrilling destination: my first SQL Saturday. So I decided to do it right, and I’m taking the Amtrak Acela Express from Boston to New York. I love New York! If you’re headed to SQL Saturday #39, and you love database design, I invite you to come to my session on Temporal Database Designs – how to design a table so it can be queried as of any previous point in time. The proof-of-concept code is posted at http://temporalsql.codeplex.com/ . See you there....(read more)

    Read the article

  • T-SQL Tuesday #005: Reporting

    - by Adam Machanic
    This month's T-SQL Tuesday is hosted by Aaron Nelson of SQLVariations . Aaron has picked a really fantastic topic: Reporting . Reporting is a lot more than just SSRS. Whether or not you realize it, you deal with all sorts of reports every day. Server up-time reports. Application activity reports. And even DMVs, which as Aaron points out are simply reports about what's going on inside of SQL Server. This month's topic can be twisted any number of ways, so have fun and be creative! I'm really looking...(read more)

    Read the article

  • Open the SQL Server Error Log with PowerShell

    - by BuckWoody
    Using the Server Management Objects (SMO) library, you don’t even need to have the SQL Server 2008 PowerShell Provider to read the SQL Server Error Logs – in fact, you can use regular old everyday PowerShell. Keep in mind you will need the SMO libraries – which can be installed separately or by installing the Client Tools from the SQL Server install media. You could search for errors, store a result as a variable, or act on the returned values in some other way. Replace the Machine Name with your server and Instance Name with your instance, but leave the quotes, to make this work on your system:

        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
        $machineName = "UNIVAC"
        $instanceName = "Production"
        $sqlServer = new-object ("Microsoft.SqlServer.Management.Smo.Server") "$machineName\$instanceName"
        $sqlServer.ReadErrorLog()

    Want to search for something specific, like the word “Error”? Replace the last line with this:

        $sqlServer.ReadErrorLog() | where {$_.Text -like "Error*"}

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • New licensing for SQL Server 2012 and #BISM #Tabular usage

    - by Marco Russo (SQLBI)
    Last week Microsoft announced a new licensing schema for SQL Server 2012. If you are interested in an extensive discussion of the new licensing scheme, Denny Cherry wrote a great blog post about that. I’d like to comment about the new BI Edition license. Teo Lachev already commented about the numbers and I agree with him. I generally like the new licensing mode of SQL 2012. It maintains a very low-entry barrier for SSRS/SSAS/SSIS (Standard Edition). It has a reasonable licensing schema for 20-50...(read more)

    Read the article

  • T-SQL Tuesday #4: I/O, You Know

    - by Kalen Delaney
    It's time for the fourth T-SQL Tuesday , managed this time by Mike Walsh . I almost missed this deadline completely, since I didn't see the announcement at all. I wrote to Adam to ask if there even was an event this month, since I wasn't able to get into my own blog site ( www.SQLBlog.com ) for a week, and he pointed me to Mike's site. I'm wondering if it's this hit and miss for everyone. There is no single location where those people interested in T-SQL Tuesday can find out about it. Do you just...(read more)

    Read the article

  • SQL Trace challenge: a simple requirement

    - by Linchi Shea
    SQL Trace (or SQL Profiler) is no doubt an excellent tool, but its filtering capability is rather primitive and very poorly documented. Here is a request that is simple and seems rather reasonable. Create a trace to filter for the following:

        1. All the update/delete statements, and
        2. All the select/insert statements whose CPU column value is greater than 1000 or whose Duration value is greater than 1000

    Now, I'm having a tough time creating a trace to meet this simple requirement. Perhaps,...(read more)
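
    Not a trace-definition answer, but one hedged fallback: capture broadly, then apply the compound filter in T-SQL over the trace file with fn_trace_gettable (the file path below is a placeholder; note Duration is reported in microseconds from SQL Server 2005 on):

        SELECT t.stmt, t.CPU, t.Duration
        FROM (SELECT CAST(TextData AS nvarchar(max)) AS stmt, CPU, Duration
              FROM fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)) t
        WHERE t.stmt LIKE 'UPDATE%'
           OR t.stmt LIKE 'DELETE%'
           OR ((t.stmt LIKE 'SELECT%' OR t.stmt LIKE 'INSERT%')
               AND (t.CPU > 1000 OR t.Duration > 1000));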

    Read the article

  • New SSIS features and enhancements in Denali – a webinar on 28th June in association with Pragmatic Works

    - by jamiet
    Tomorrow I shall be presenting a webinar entitled “New SSIS features and enhancements in Denali”. The webinar is being hosted by Pragmatic Works and you can sign up for it at Pragmatic Works webinars. The webinar will start at 19:30 BST and you can check the time for your timezone at this link: http://www.timeanddate.com/worldclock/fixedtime.html?msg=New+SSIS+features+and+enhancements+in+Denali&iso=20110628T1830 The webinar was arranged a few months ago, and at that time we were hoping that the next Community Technology Preview (CTP) of SQL Server Denali would be available for public consumption; unfortunately it transpires that that is not yet the case, and hence I will be presenting the new features of CTP1, which was released at the start of this year. If you’re not yet familiar with the new features of SSIS coming in the next release of SQL Server, please do come and join the webinar. @Jamiet

    Read the article

  • SQL Saturday #39 in NYC

    - by roman
    This weekend I will be speaking at the NYC SQL Saturday. The whole event was supposed to be BI-focused, but now the schedule shows a lot of non-BI stuff as well. I will be presenting SQL Server 2008 Reporting Services Programming, one of my favorite topics to present on. It seems that the event is fully booked. I'll be coming down on my bike, taking scenic roads through MA and CT, so I will not make it to the speaker dinner. But the forecast looks good, so I am pretty psyched to finally venture out...(read more)

    Read the article

  • Microsoft Assessment and Planning (MAP) Toolkit 5.0 Beta

    - by Lara Rubbelke
    Do you know where SQL Server is installed - everywhere it is installed? Do you really know where SQL Server is installed? Are you looking for a tool that will help you discover any rogue instances so you can better manage these instances? The Beta 2 for the Microsoft Assessment and Planning (MAP) Toolkit 5.0 is now open. Join the beta review program and help influence the development of the toolkit. To participate, register for the MAP Toolkit 5.0 Beta 2 at Microsoft Connect. The MAP Toolkit 5.0...(read more)

    Read the article

  • SQL Azure and Trust Services

    - by BuckWoody
    Microsoft is working on a new Windows Azure service called “Trust Services”. Trust Services takes a certificate you upload and uses it to encrypt and decrypt sensitive data in the cloud. Of course, like any security service, there’s a bit more to it than that. I’ll give you a quick overview of how you can use this product to protect data you send to SQL Azure. The primary issue with storing data in the cloud is that you are in an environment that isn’t under your control – in fact, that’s the benefit of being in a distributed computing environment in the first place. On premises you’re able to encrypt data you don’t want anyone else to see, using various methods such as passwords (not very strong) or certificates (stronger). When you use a certificate, it’s vital that you create (or procure) and protect it yourself. When you store data remotely, regardless of IaaS, PaaS or SaaS, you don’t own the machines where the data lives. That means if you use a certificate from the cloud vendor to encrypt the data, you have to trust that the data won’t be accessed by the vendor. In some cases having a signed agreement with the vendor that they won’t access your data is sufficient; in other cases that doesn’t meet the requirements your system has for security. With the new Trust Services service, the basic process is that you use a portal to create a Trust Server using policies and other controls. You place an X.509 certificate you create or procure in that server. Using the Software Development Kit (SDK), the developer has access to an Application Layer Encryption Framework to set fields of data they want to encrypt. From there, the data can be stored in SQL Azure as a standard field – only it is encrypted before it ever arrives. The portion of the client software that decrypts the data uses the same service, so the authenticated user sees the data if they are allowed to do so. The data remains encrypted “at rest”. You can learn more about this product and check it out in the SQL Azure labs at Microsoft Codename "Trust Services"

    Read the article
