Search Results

Search found 582 results on 24 pages for 'integrity'.


  • Does SQL Server Compact not support keywords 'Foreign Key References' in Create Table or am I making a mistake?

    - by johnmortal
    I keep getting errors using SQL Server Compact. I have seen that it is possible to create table constraints in Compact Edition, as shown here. And according to the documents found here, it provides "Full referential integrity with cascading deletes and updates". So am I really not allowed to do the following, or am I making a mistake? SQL Server Compact Edition keeps complaining that the constraint is not valid, though the same statements work fine on Express Edition.

        CREATE TABLE [A] (AKey int NOT NULL PRIMARY KEY);
        CREATE TABLE [B] (AKey int NOT NULL FOREIGN KEY REFERENCES A(AKey));
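
    One workaround worth trying (a hedged sketch; the named, table-level constraint form is the point, and behavior may vary by Compact version) is to declare the foreign key as a separate constraint rather than inline on the column:

        CREATE TABLE [A] (AKey int NOT NULL PRIMARY KEY);
        CREATE TABLE [B] (
            AKey int NOT NULL,
            CONSTRAINT FK_B_A FOREIGN KEY (AKey) REFERENCES A(AKey)
        );

    An equivalent ALTER TABLE [B] ADD CONSTRAINT statement after both tables exist is another form to try if Compact is rejecting only the inline syntax.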

    Read the article

  • INSERT SELECT Statement and Rollback SQL

    - by Juan Perez
    I'm working on a query that uses an INSERT SELECT statement in MS SQL Server 2008:

        INSERT INTO TABLE1 (col1, col2)
        SELECT col1, col2 FROM TABLE2

    Right now the execution of this query is inside a transaction. Pseudocode:

        try { begin transaction; query; commit; }
        catch { rollback; }

    If TABLE2 has around 40 million rows, and there is an error in the middle of the INSERT into TABLE1, will the INSERT SELECT statement roll back by itself, or do I need to use a transaction to preserve data integrity? Is a transaction necessary, or does SQL Server itself use a transaction for this type of statement?
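
    For what it's worth, each individual statement in SQL Server is atomic on its own: if the INSERT fails partway through, its changes are rolled back automatically, so no partial rows remain. An explicit transaction only becomes necessary when several statements must succeed or fail together. A minimal T-SQL sketch of the pattern described above:

        BEGIN TRY
            BEGIN TRANSACTION;
            INSERT INTO TABLE1 (col1, col2)
            SELECT col1, col2 FROM TABLE2;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;
            -- Re-raise so the caller still sees the original failure.
            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END CATCH;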

    Read the article

  • App Engine: how would you... snapshotting entities

    - by Andrew B.
    Let's say you have two kinds, Message and Contact, related by a db.ListProperty of keys on Message. A user creates a message, adds some contacts as recipients, and emails the message. Later, the user deletes one of the contact entities that was a recipient of the message. Our application should delete the appropriate Contact entity, but we want to preserve the original recipient list for the message that was sent, for the user's records. In essence, we want a snapshot of the message entity at the time it was sent. If we naively delete the Contact entity, the message's recipient list is left holding an invalid key and the snapshot loses integrity; if we keep the entity, we haven't honored the user's deletion. How would you handle this situation, either in controller logic or model changes?

        class User(db.Model):
            email = db.EmailProperty(required=True)

        class Contact(db.Model):
            email = db.EmailProperty(required=True)
            user = db.ReferenceProperty(User, collection_name='contacts')

        class Message(db.Model):
            recipients = db.ListProperty(db.Key)  # contacts
            sender = db.ReferenceProperty(User, collection_name='messages')
            body = db.TextProperty()
            is_emailed = db.BooleanProperty(default=False)
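
    One common answer is to denormalize: freeze the recipient emails onto the Message at send time, so deleting a Contact later cannot damage the snapshot. A hedged sketch (recipient_emails and send_message are illustrative names, not part of the original models):

        class Message(db.Model):
            recipients = db.ListProperty(db.Key)        # live Contact references
            recipient_emails = db.StringListProperty()  # snapshot frozen at send time
            sender = db.ReferenceProperty(User, collection_name='messages')
            body = db.TextProperty()
            is_emailed = db.BooleanProperty(default=False)

        def send_message(message):
            # Copy the contact emails before emailing; the snapshot then
            # survives any later deletion of the Contact entities.
            contacts = db.get(message.recipients)
            message.recipient_emails = [c.email for c in contacts if c is not None]
            message.is_emailed = True
            message.put()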

    Read the article

  • WebService Security

    - by LauzPT
    Hello, I'm developing a project which consists of a webservice and a client application. It's a fairly simple scenario. The webservice is connected to a database server, and the client consumes the webservice in order to get information retrieved from the database. The requirements are: 1. The client application can only display data after a previous authentication; 2. All the data transferred between the webservice and clients must be confidential; 3. Data integrity shouldn’t be compromised. I'm wondering what is the best way to achieve these requirements. The first thing I thought about was sending the server a digital signature containing a client certificate, to be stored in the server and used as a comparison for authentication. But I have investigated a little about webservice security, and I'm no longer certain that this is the best option. Can anyone give me an opinion about this? TIA

    Read the article

  • JSF - database character encoding

    - by wheelie
    Hi there, I have a Java web application using GlassFish 3, JSF 2.0 (Facelets) and JPA (EclipseLink). The problem I'm facing is that if I save entities to the database with the update() method, String data loses integrity: '?' is shown instead of some characters. The server, pages and database are all configured to use UTF-8. After I post form data, the next page shows the data correctly. Furthermore, in debug it "seems" that the String property of the current entity stores the correct value too. I don't know if the NetBeans debugger can be trusted, though; it might be decoding correctly even though the stored value is incorrect. Any help would be appreciated, thanks in advance! Daniel
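
    One frequent culprit (an assumption here, since the database engine isn't named) is the JDBC driver negotiating a non-UTF-8 connection even when the server and pages are UTF-8. With MySQL Connector/J, for example, the encoding can be forced on the persistence unit's JDBC URL; the host and database name below are placeholders, and with a container-managed data source the same parameters belong on the connection pool's URL instead:

        <property name="javax.persistence.jdbc.url"
                  value="jdbc:mysql://localhost:3306/mydb?useUnicode=true&amp;characterEncoding=UTF-8"/>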

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files? Edit: I can't assume that MySQL will have permission to write to files.
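
    Since server-side SELECT ... INTO OUTFILE (and therefore mysqldump --tab) is ruled out by the file-permission constraint, one hedged sketch is to loop over the tables client-side, writing one schema file and one data file per table; database and file names are illustrative:

        #!/bin/sh
        # For a consistent snapshot across tables, hold
        #   FLUSH TABLES WITH READ LOCK;
        # open in a second session for the duration of the loop.
        for t in $(mysql -N -B -e 'SHOW TABLES' mydb); do
            mysqldump --no-data        mydb "$t" > "mydb.$t.schema.sql"
            mysqldump --no-create-info mydb "$t" > "mydb.$t.data.sql"
        done

    Restoring is the reverse: feed the .schema.sql files through the mysql client first, then the .data.sql files, temporarily disabling foreign key checks if the tables reference each other.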

    Read the article

  • Which .NET exception to throw for invalid database state?

    - by jslatts
    I am writing some data access code and I want to check for potentially "invalid" data states in the database. For instance, I am returning a widget out of the database and I only expect one. If I get two, I want to throw an exception. Even though referential integrity should prevent this from occurring, I do not want to depend on the DBAs never changing the schema. I would like to use System.IO.InvalidDataException, except that I am not dealing with a file stream, so it would be misleading. I ended up going with a generic ApplicationException. Anyone have a better idea?
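
    A small custom exception type keeps the intent explicit and avoids overloading ApplicationException. A hedged C# sketch (the type name is illustrative, and widgets/id stand in for the caller's locals):

        using System;

        [Serializable]
        public class InvalidDatabaseStateException : Exception
        {
            public InvalidDatabaseStateException(string message)
                : base(message) { }

            public InvalidDatabaseStateException(string message, Exception inner)
                : base(message, inner) { }
        }

        // Usage in the data access code:
        if (widgets.Count > 1)
            throw new InvalidDatabaseStateException(string.Format(
                "Expected one widget with id {0}, found {1}.", id, widgets.Count));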

    Read the article

  • History tables pros, cons and gotchas - using triggers, sproc or at application level.

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column to store what action was performed, e.g. Update, Delete or Insert. So far I can think of three different places where you can do the history table work: 1. Triggers on the main table for update, insert and delete (database). 2. Stored procedures (database). 3. Application layer (application). My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the triggers approach is that integrity is always maintained no matter what program is implemented on top of the database.
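
    For reference, the trigger variant usually looks something like this T-SQL sketch (table and column names are assumed; the deleted pseudo-table carries the pre-update values):

        CREATE TRIGGER trg_Widget_History ON Widget
        AFTER UPDATE
        AS
        BEGIN
            INSERT INTO WidgetHistory (WidgetId, Name, ModifiedDate, Action)
            SELECT d.WidgetId, d.Name, GETDATE(), 'Update'
            FROM deleted AS d;
        END;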

    Read the article

  • Should I Split Tables Relevant to X Module Into Different DB? Mysql

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so if/when one gets too large for one server I can more easily migrate some user data to different mysql servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?
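
    One point worth knowing: on a single MySQL server, separate databases are essentially separate schemas, so cross-database joins with qualified names cost the same as same-database joins (a sketch with assumed names below). It is only once the databases live on different servers that joins stop working and have to be replaced by FEDERATED tables or application-level stitching.

        SELECT u.name, o.total
        FROM users_db.users AS u
        JOIN orders_db.orders AS o ON o.user_id = u.id;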

    Read the article

  • What are the repercussions of not checking existing data when adding a foreign key?

    - by scottm
    I've inherited a database that doesn't exactly strive for data integrity. I am trying to add some foreign keys to change that, but there is data in some tables that doesn't fit the constraints. Most likely the data won't be used again, so I want to know what problems I might face by leaving it there. The other option I see is to move it into some kind of table without referential constraints, just for historical purposes. So, what are the repercussions of not checking existing data? If I create a foreign key constraint on a table and don't check existing data, will the constraint still be enforced for all new data inserted into the table?
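
    In SQL Server, for example, this is exactly what WITH NOCHECK does: existing rows are skipped, all future inserts and updates are enforced, but the constraint is flagged as not trusted, which can keep the query optimizer from using it to simplify plans. A sketch with assumed table names:

        -- Add the key without validating the rows already in the table.
        ALTER TABLE ChildTable WITH NOCHECK
            ADD CONSTRAINT FK_Child_Parent
            FOREIGN KEY (ParentId) REFERENCES ParentTable (ParentId);

        -- New data is enforced, but the constraint is marked untrusted:
        SELECT name, is_not_trusted
        FROM sys.foreign_keys
        WHERE name = 'FK_Child_Parent';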

    Read the article

  • Restricting deletion with NHibernate

    - by FrontSvin
    I'm using NHibernate (Fluent) to access an old third-party database with a bunch of tables that are not related in any explicit way. That is, child tables do have parentID columns containing the primary key of the parent table, but there are no foreign key relations ensuring these relationships. Ideally I would like to add some foreign keys, but I cannot touch the database schema. My application works fine, but I would really like to impose a referential integrity rule that would prohibit deletion of parent objects if they have children, i.e. something similar to ON DELETE RESTRICT, but maintained by NHibernate. Any ideas on how to approach this would be appreciated. Should I look into the OnDelete() method on the IInterceptor interface, or are there other ways to solve this? Of course any solution will come with a performance penalty, but I can live with that.
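
    The interceptor route can work. A hedged sketch deriving from NHibernate's EmptyInterceptor (Parent and Children are illustrative types; note that touching the children collection here triggers a lazy load, which is the performance penalty mentioned above):

        using System;
        using NHibernate;
        using NHibernate.Type;

        public class RestrictDeleteInterceptor : EmptyInterceptor
        {
            public override void OnDelete(object entity, object id, object[] state,
                                          string[] propertyNames, IType[] types)
            {
                var parent = entity as Parent;
                if (parent != null && parent.Children.Count > 0)
                    throw new InvalidOperationException(
                        "Cannot delete a parent row that still has children.");
            }
        }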

    Read the article

  • Extracting Data Daily from MySQL to a Local MySQL DB

    - by Sunny Juneja
    I'm doing some experiments locally that require some data from a production MySQL DB that I only have read access to. The schemas are nearly identical, with the exception of the omission of one column. My goal is to write a script that I can run every day that extracts the previous day's data and imports it into my local table. The part I'm most confused about is how to download the data. I've seen names like mysqldump tossed around, but that seems like a way to replicate the entire database. I would love to avoid using PHP, seeing as I have no experience with it. I've been creating CSVs, but I'm worried about data integrity (what if a field contains a comma or a \n?) as well as the size of the CSV (there are several hundred thousand rows per day).
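
    mysqldump can be scoped well below a whole database: with --where it dumps only the matching rows of one table, and piping the output into the local server sidesteps CSV quoting entirely. A hedged sketch (host names, credentials, and the created_at column are assumptions; because the local schema omits one column, you may need to load into a staging table first and INSERT ... SELECT from there):

        mysqldump -h prod.example.com -u readonly -p \
            --single-transaction --no-create-info \
            --where="created_at >= CURDATE() - INTERVAL 1 DAY AND created_at < CURDATE()" \
            proddb widgets | mysql -u localuser -p localdb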

    Read the article

  • addslashes and addcslashes

    - by theband
    I was looking at addslashes and addcslashes on php.net today, but did not get what the difference between them is, or which characters each one escapes.

        <?php
        $originaltext = 'This text does NOT contain \\n a new-line!';
        $encoded = addcslashes($originaltext, '\\');
        $decoded = stripcslashes($encoded);
        // $decoded now contains a copy of $originaltext with perfect integrity
        echo $decoded; // Display the sentence with its literal \n intact
        ?>

    If I comment out the $decoded line and echo $encoded, I get the same value as in the original string. Can anyone clearly explain the difference between, and the use of, these two?
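
    In short: addslashes always escapes the same fixed set (single quote, double quote, backslash and NUL), while addcslashes escapes exactly the characters you list in its second argument, using C-style sequences for control characters. A tiny illustration:

        <?php
        echo addslashes("It's a \"test\"");      // It\'s a \"test\"
        echo addcslashes("Line1\nLine2", "\n");  // Line1\nLine2 (newline becomes \n)
        ?>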

    Read the article

  • ASP.MVC ModelBinding Behaviour

    - by OldBoy
    This one has me stumped, despite the numerous posts on here. The scenario is a basic MVC(2) web application with simple CRUD operations. Whenever the edit form is submitted and UpdateModel() is called, an exception is thrown: System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException was unhandled by user code. This occurs against a DropDownList value which is a foreign key on the entity table. However, there is another DropDownList on the form, representing another foreign key, which does not throw the error (unsurprisingly). Changing the property values manually inside the Edit action:

        Recipe recipe = repository.GetRecipe(int.Parse(formValues["recipeid"]));
        recipe.CategoryId = Convert.ToInt32(formValues["CategoryId"].ToString());
        recipe.Page = int.Parse(formValues["Page"].ToString());
        recipe.PublicationId = Convert.ToInt32(formValues["PublicationId"].ToString());

    allows the CategoryId and Page properties to be updated, and then the error is thrown on the PublicationId. All of the referential integrity is checked and is the same in the db and the dbml. Any light shed on this would be most welcome.
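
    ForeignKeyReferenceAlreadyHasValueException is LINQ to SQL's way of saying that the association's EntityRef has already been loaded, after which the raw foreign-key column can no longer be assigned directly. The usual fix is to assign the related entity instead; a hedged sketch with assumed member names:

        // Instead of: recipe.PublicationId = ...;  (throws once the reference is loaded)
        int newPublicationId = Convert.ToInt32(formValues["PublicationId"]);
        recipe.Publication = dataContext.Publications
                                        .Single(p => p.PublicationId == newPublicationId);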

    Read the article

  • MySQL Full-text Search Workaround for innoDB tables

    - by Rob
    I'm designing an internal web application that uses MySQL as its backend database. The integrity of the data is crucial, so I am using the InnoDB engine for its foreign key constraint features. I want to do a full-text search on one type of record, and that is not supported natively with InnoDB tables. I'm not willing to move to MyISAM tables due to their lack of foreign key support and due to the fact that their locking is per table, not per row. Would it be bad practice to create a mirrored table of the records I need to search using the MyISAM engine and use that for the full-text search? This way I'm just searching a copy of the data, and if anything happens to that data it's not as big of a deal because it can always be re-created. Or is this an awkward way of doing this that should be avoided? Thanks.
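
    This mirrored-table pattern was fairly common before MySQL 5.6 added InnoDB FULLTEXT support. A hedged sketch with assumed names, shadowing just the searchable columns and keeping them current from an insert trigger (matching AFTER UPDATE and AFTER DELETE triggers would complete it):

        CREATE TABLE records_search (
            record_id INT NOT NULL PRIMARY KEY,
            body      TEXT,
            FULLTEXT KEY ft_body (body)
        ) ENGINE=MyISAM;

        CREATE TRIGGER records_ai AFTER INSERT ON records
        FOR EACH ROW
            INSERT INTO records_search (record_id, body) VALUES (NEW.id, NEW.body);

        SELECT record_id FROM records_search
        WHERE MATCH(body) AGAINST ('search term');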

    Read the article

  • We failed trying database-per-customer installation. Plan to recover?

    - by Fedyashev Nikita
    There is a web application which has been in production for 3 years or so now. Historically, for various reasons, a decision was made to use a database-per-customer installation. Now we have come across the fact that deployments are very slow. Should we consider moving all the databases back into a single one to reduce environment complexity? Or is that too risky an idea? The problem I see now is that it's very hard to merge these databases while preserving referential integrity (primary keys of the different databases' tables obviously cannot be differentiated). The databases are not that big, so we don't get much benefit of reduced load from having multiple databases.
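
    If a merge is ever attempted, the key-collision problem is usually handled by tagging every row with a customer id and re-keying on the way in; a hedged SQL sketch with illustrative names and offsets:

        -- Give customer 42 a disjoint key range while copying into the merged schema.
        SET @offset := 42 * 1000000;
        INSERT INTO merged.orders (id, customer_id, order_date)
        SELECT o.id + @offset, 42, o.order_date
        FROM customer42.orders AS o;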

    Read the article

  • Is it possible to use os.walk over SSH?

    - by LeoB
    I'm new to Python so forgive me if this is basic; I've searched but can't find an answer. I'm trying to convert a Perl script into Python (3.x) which connects to a remote server and copies the files in a given directory to the local machine. Integrity of the transfer is paramount and there are several steps built in to ensure a complete and accurate transfer. The first step is to get a complete listing of the files to be passed to rsync. The Perl script has the following lines to accomplish this:

        @dir_list = `ssh user@host 'find $remote_dir -type f -exec /bin/dirname {} \\;'`;
        @file_list = `ssh user@host 'find $remote_dir -type f -exec /bin/basename {} \\;'`;

    The two lists are then joined to create $full_list. Rather than open two separate ssh instances I'd like to open one and use os.walk to get the information using:

        for remdirname, remdirnames, remfilesnames in os.walk(remotedir):
            for remfilename in remfilesnames:
                remfulllist.append(os.path.join(remdirname, remfilename))

    Thank you for any help you can provide.
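
    os.walk only walks a local filesystem, so it cannot be pointed at a remote host by itself; the usual options are mounting the remote directory (e.g. sshfs), walking it over SFTP with a library such as paramiko, or shelling out once, as the Perl version does. A hedged sketch of the single-ssh approach (user@host and the remote path are placeholders; subprocess.run with these arguments needs Python 3.7+):

        import subprocess

        result = subprocess.run(
            ["ssh", "user@host", "find", "/remote/dir", "-type", "f"],
            capture_output=True, text=True, check=True)
        remote_files = result.stdout.splitlines()  # full paths, ready for rsync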

    Read the article

  • Mercurial adding new file gives no match found error

    - by Ivo
    I have a strange problem with updating Mercurial. Every time I add a file to my repository and then update another copy of the repository (for example, within a CI process), the error "no match found" occurs. When I remove the whole folder and clone it again, there are no problems and the newly added file(s) are there. Updating and removing don't cause problems. When I run a verify, the following is shown:

        data/test.txt.i@54: missing revlog!
        54: empty or missing test.txt
        test.txt@54: b80de5d13875 in manifests not found
        3 integrity errors encountered!
        (first damaged changeset appears to be 54)

    Any idea what could be causing this?

    Read the article

  • Normalize database or not? Read only MyISAM table, performance is the main priority (MySQL)

    - by hello
    I'm importing data into a future database that will have one static MyISAM table (it will only be read from). I chose MyISAM because, as far as I understand, it's faster for my requirements (I'm not very experienced with MySQL / SQL at all). That table will have various columns such as ID, Name, Gender, Phone, Status... and Country, City, Street columns. Now the question is: should I create tables (e.g. Country: Country_ID, Country_Name) for the last 3 columns and refer to them in the main table by ID (normalize), or just store them as VARCHAR in the main table (having duplicates, obviously)? My primary concern is speed; since the table won't be written to, data integrity is not a priority. The only actions will be selecting a specific row or searching for rows that match certain criteria. Would searching by the Country, City and/or Street columns (and possibly other columns in the same search) be faster if I simply use VARCHAR?
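
    For a read-only table, the denormalized VARCHAR layout is usually fine as long as the search columns are indexed. A composite index in the most common search order (a sketch, table name assumed) lets Country-only, Country+City, and Country+City+Street lookups all reuse the same index:

        ALTER TABLE people
            ADD INDEX idx_location (Country, City, Street);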

    Read the article

  • When open-sourcing a live Rails app, is it dangerous to leave the session key secret in source control?

    - by rspeicher
    I've got a Rails app that's been running live for some time, and I'm planning to open source it in the near future. I'm wondering how dangerous it is to leave the session key store secret in source control while the app is live. If it's dangerous, how do people usually handle this problem? I'd guess that it's easiest to just move the string to a text file that's ignored by the SCM, and read it in later. Just for clarity, I'm talking about this:

        # Your secret key for verifying cookie session data integrity.
        # If you change this key, all old sessions will become invalid!
        # Make sure the secret is at least 30 characters and all random,
        # no regular words or you'll be exposed to dictionary attacks.
        ActionController::Base.session = {
          :key    => '_application_session',
          :secret => '(long, unique string)'
        }

    And while we're on the subject, is there anything else in a default Rails app that should be protected when open sourcing a live app?
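
    The SCM-ignored-file approach suggested above might look like this in Rails 2.x terms (config/session_secret is a hypothetical path holding nothing but the random string):

        ActionController::Base.session = {
          :key    => '_application_session',
          :secret => File.read("#{RAILS_ROOT}/config/session_secret").strip
        }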

    Read the article

  • Delete rows out of table that is innerjoined and unioned with 2 others

    - by jonathan
    We have 3 tables (table1, table2, table3), and I need to delete all the rows from table1 that have the same ID in table2 OR table3. To see a list of all of these rows I have this code:

        (
            SELECT table2.ID, table2.name_first, table2.name_last, table2.Collected
            FROM table2
            INNER JOIN table1 ON table1.ID = table2.ID
            WHERE table2.Collected = 'Y'
        )
        UNION
        (
            SELECT table3.ID, table3.name_first, table3.name_last, table3.Collected
            FROM table3
            INNER JOIN table1 ON table1.ID = table3.ID
            WHERE table3.Collected = 'Y'
        )

    I get back about 200 rows. How do I delete them from table1? I don't have a way to test if my query will work, so I'm nervous about modifying something I found online and potentially deleting data (we do have backups, but I'd rather not test out their integrity). TIA!
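
    A portable way to express the delete is with IN subqueries; wrapping it in a transaction (syntax varies slightly by engine) and comparing the reported row count against the ~200 rows from the union gives a safety net before anything is committed. A hedged sketch:

        BEGIN;
        DELETE FROM table1
        WHERE ID IN (SELECT ID FROM table2 WHERE Collected = 'Y')
           OR ID IN (SELECT ID FROM table3 WHERE Collected = 'Y');
        -- Check the affected row count; COMMIT if it matches, ROLLBACK otherwise.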

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully encrypted and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management.
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like it did before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account: Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above.
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company-wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft) Today I will be talking about this subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate it, you must attend my session. MDS is a very interesting concept, and I will cover it in 10 short but very interesting slides. I will make sure that within the very first 20 minutes you understand the following topics: Introduction to Master Data Management; What is Master Data and its Challenges; MDM Challenges and Advantages; Microsoft Master Data Services; Benefits and Key Features; Uses of MDS; Capabilities; Key Features of MDS. The slide deck will be followed by a demo of around 30 minutes telling the story of entities, hierarchies, versions, security, consolidation and collections. I will tell this story keeping business rules at the center. We take one business rule, which starts as a simple validation rule, and make it much more complex yet very useful to the product. I will also demonstrate a few real-life scenarios of MDS and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details: Session: Master Data Services in Microsoft SQL Server 2008 R2 Date: April 12, 2010 Time: 2:30pm-3:30pm SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the MDS Master Data hub, a vital component, helps ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How To Personalize the Windows Command Prompt

    - by Matthew Guay
    Command line interfaces can be downright boring, and always seem to miss out on the fresh coats of paint liberally applied to the rest of Windows.  Here’s how to add a splash of color to Command Prompt and make it unique. By default, Windows Command Prompt is white text on a black background. It gets the job done, but maybe you want to add some color to it. To get an overview of what we can do with the color command, let’s enter: color /? So, to get the color you want, enter color, then the option for the background color followed by the font color.  For example, let’s make an old-fashioned green on black look by entering: color 02 There are a bunch of different combinations you can do, like this black background with red text. color 04 You can’t mess it up too much.  The color command won’t let you set both the font and the background to the same color, which would make it unreadable.  Also, if you want to get back to the default settings, just enter: color Now we’re back to plain-old black and white. Personalize Command Prompt Without Commands If you’d prefer to change the color without entering commands, just click on the Command Prompt icon in the top left corner of the window and select Properties. Select the Colors tab, and then choose the color you want for the screen text and background.  You can also enter your own RGB color combination if you want. Here we entered the RGB values to get a purple background color like Ubuntu 10.04. Back in the Properties dialog, you can also change your Command Prompt font from the font tab.  Choose any font you want, as long as the one you want is one of the three listed here. Customizations you make via the Properties dialog are saved and will be used any time you open Command Prompt, but any customizations you make with the color command are only for that session. Conclusion Whether you want to make your command prompt bright enough to cause a sunburn or old-style enough to scare a mainframe operator, with these settings, you can make Command Prompt a bit more unique.

    Read the article

  • SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification

    - by pinaldave
    Here is the conversation I had right after I posted my earlier blog post about Download Microsoft SQL Server 2012 RTM Now. Rajesh: So SQL Server is available for me to download? Pinal: Yes, sure. Check the link here. Rajesh: It is a trial. Do you know when it will be available for everybody? Pinal: I think you mean General Availability (GA), which is on April 1st, 2012. Rajesh: I want to have a head start on the SQL Server 2012 examinations and I want to know about every single exam. Exam 70-461: Querying Microsoft SQL Server 2012 This exam is intended for SQL Server database administrators, implementers, system engineers, and developers with two or more years of experience who are seeking to prove their skills and knowledge in writing queries. Exam 70-462: Administering Microsoft SQL Server 2012 Databases This exam is intended for Database Professionals who perform installation, maintenance, and configuration tasks as their primary areas of responsibility. They will often set up database systems and are responsible for making sure those systems operate efficiently. Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012 The primary audience for this exam is Extract Transform Load (ETL) and Data Warehouse Developers. They are most likely to focus on hands-on work creating business intelligence (BI) solutions, including data cleansing, ETL, and Data Warehouse implementation. Exam 70-464: Developing Microsoft SQL Server 2012 Databases This exam is intended for database professionals who build and implement databases across an organization while ensuring high levels of data availability. They perform tasks including creating database files, creating data types and tables, planning, creating, and optimizing indexes, implementing data integrity, implementing views, stored procedures, and functions, and managing transactions and locks. Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012 This exam is intended for database professionals who design and build database solutions in an organization. They are responsible for the creation of plans and designs for database structure, storage, objects, and servers. Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012 The primary audience for this exam is BI Developers. They are most likely to focus on hands-on work creating the BI solution, including implementing multi-dimensional data models, implementing and maintaining OLAP cubes, and creating information displays used in business decision making. Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012 The primary audience for this exam is the BI Architect. BI Architects are responsible for the overall design of the BI infrastructure, including how it relates to other data systems in use. Looking at Rajesh’s passion, I am motivated too! I may want to start attempting the exams in the near future. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article
