Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.

Page 623/4078 | < Previous Page | 619 620 621 622 623 624 625 626 627 628 629 630  | Next Page >

  • Deployment Options for AutoVue 20.0 Users

    - by celine.beck
    AutoVue release 20.0 boasts a brand new architecture. As part of this product rearchitecture, AutoVue can now be deployed either as a desktop deployment to serve the needs of individual users in their personal productivity, or in a Client/Server deployment for those who require connections to enterprise applications / back-end systems. The most common question that we hear from our customers about this new architecture is the following: "Is AutoVue Desktop Version still part of release 20.0 and if so, what is the difference between AutoVue Desktop Version and the Desktop deployment of AutoVue release 20.0?" A detailed answer to these questions is provided in a comprehensive article entitled Understanding Deployment Options for AutoVue 19.3 Desktop Version users upgrading to AutoVue 20.0 (note 1058254.1), which was posted on My Oracle Support.

    Is AutoVue Desktop Version still part of AutoVue 20.0? Yes, AutoVue Desktop Version 20.0 is still available to customers and partners, as a maintenance release of AutoVue 19.3. As such, it will not contain any of the new capabilities featured in AutoVue release 20.0. All format enhancements and new format support have been added to release 20.0 Desktop Version, though.

    What is the difference between AutoVue Desktop Version 20.0 and the Desktop deployment of AutoVue release 20.0? The AutoVue 20.0 Desktop deployment works like the AutoVue Desktop Version: it is installed as a standalone product on each user's machine and runs a local instance of AutoVue. The AutoVue 20.0 Desktop deployment includes all new features, formats and performance enhancements included in release 20.0 (walkthrough capability, improved compare, ...).

    What deployment options are available to AutoVue 19.3 Desktop Version customers? AutoVue Desktop Version users can evolve at their own pace to the new AutoVue platform. With release 20.0, customers can opt to: Option 1: Stay on AutoVue Desktop Version 20.0; Option 2: Migrate to AutoVue and select the desktop deployment method; Option 3: Migrate to AutoVue and select the Client/Server deployment method.

    What is the Client/Server deployment of AutoVue 20.0? The Client/Server deployment has AutoVue installed on a server, to which local client machines connect to access and view documents. The AutoVue 20.0 Client/Server deployment allows users to leverage the new online/offline capabilities in release 20.0 and easily switch between online and offline modes of operation. With the Client/Server deployment, customers also get a complete, open and standards-based set of integration tools that allows them to tie AutoVue to any enterprise application, providing users with a consistent view of data and business objects and expanding workflow automation to document-based processes.

    Related articles: AutoVue Release 20.0 Now Available, New Walkthrough Capability in AutoVue 20.0, Watch the AutoVue 20.0 Release Webcast, April 27 at 12pm EST

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN's SSIS forum the other day that went something like this: "I'm loading data into a table from a flat file but I want to be able to store the name of that file as well. Is there a way of doing that?" I don't want to come across as disrespecting those who took the time to reply, but there were a few answers along the lines of "loop over the files using a For Each, store the file name in a variable yadda yadda yadda" when in fact there is a much, much simpler way of accomplishing this; it just happens to be a little hidden away, as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which, for some reason, isn't exposed through the Flat File Source editor; it is however exposed via the Advanced Properties. You'll see in the screenshot above that I have set FileNameColumnName="Filename" (it doesn't matter what name you use; any non-empty string will work). What this will do is create a new column in our dataflow called "Filename" that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager, as it allows you to differentiate between data from each of the files, as you can see in the following screenshot. So there you have it, the FileNameColumnName property; a little known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet
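    Once the rows have landed in a table, the new column makes it easy to see which file each row came from. A minimal T-SQL sketch (the dbo.StagingImport table name is my own assumption, not from the post; the [Filename] column is the one produced by the FileNameColumnName property):

        -- Count how many rows each source file contributed to the staging table.
        SELECT [Filename],
               COUNT(*) AS RowsLoaded
        FROM   dbo.StagingImport
        GROUP  BY [Filename]
        ORDER  BY RowsLoaded DESC;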

    Read the article

  • Event 4098, 0x80070533 Logon failure: account currently disabled?

    - by Windos
    Having started to upgrade our PCs to Windows 7, we have noticed that we are getting group policy warnings in Event Viewer such as: "The user 'Word.qat' preference item in the 'a_Office2007_Users {A084A37B-6D4C-41C0-8AF7-B891B87FC53B}' Group Policy object did not apply because it failed with error code '0x80070533 Logon failure: account currently disabled.' This error was suppressed." Fifteen of these warnings appear every two hours on every Windows 7 PC; most of them relate to core Office applications and two are for plug-ins to our document management system. These warnings aren't affecting the users, but it would be nice to track down their source before we roll out Windows 7 to the rest of the organisation. Any ideas as to where the logon issue could be coming from? (All users are connecting to the domain, proxy, etc. fine.)

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you'll pick a subscription to put it under. This is a billing container; underneath that, you'll deploy a Hosted Service. That holds the Web and Worker Roles that you'll deploy for your applications. Alongside that, you use the Storage Account to create storage for the application. (In some cases, you might choose to use only storage or only Roles - the info here applies anyway.) As you are setting up your environment, you're asked to pick a "region" where your application will run. If you choose a Region, you'll be asked where to put the Roles. You're given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives. You also get this selection for Storage Accounts. When you make new storage, it's a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user. One of the selections for the location is "Anywhere U.S.". This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups. An Affinity Group is simply a name you can use to tie together resources. You can do this in two places - when you're creating the Hosted Service (shown above) and on its own tree item on the left, called "Affinity Groups". When you select either of those actions, you're presented with a dialog box that allows you to specify a name, and then the Region that the name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps with keeping the performance high. Official Documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

  • virtual directory make file copy operation extremely slow on UNC Path (IIS 7.5 bug?)

    - by user144737
    When I create a website/virtual directory pointing to a UNC path, it makes file copy operations on that UNC path extremely slow: about 6 seconds for a file copy (~13 MB) on the UNC path without any virtual directory/website pointing to it, but over 1 minute for the same files (~13 MB) on the same UNC path with a virtual directory/website pointing to it. All file copy operations run on the web server side. Our setup is as follows: Web server - Windows Server 2008 R2 Standard / IIS 7.5; File server - Windows Server 2003 Standard. I have tested this case on 3 servers (Windows Server 2008 R2 Standard / IIS 7.5) and got the same result. I also tested it on 2 Windows Server 2003 / IIS 6 machines, and there it does not slow down the file copy. Is this an IIS 7.5 bug? Is there any patch/hotfix to solve this? Thank you. Gordon

    Read the article

  • Enforce SSIS naming conventions using BI-xPress

    - by jamiet
    A long, long, long time ago (in 2006 in fact) I published a blog post entitled Suggested Best Practises and naming conventions in which I suggested a bunch of acronyms that folks could use to prefix object names in their SSIS packages, thus allowing easier identification of those objects in log records; here is a sample of some of those suggestions. If you have adopted these naming conventions (and I am led to believe that a bunch of people have) then you might like to know that you can now check for adherence to these conventions using a tool called BI-xPress from Pragmatic Works. BI-xPress includes a feature called the Best Practices Analyzer that scans your packages and assesses them according to rules that you specify. In addition, Pragmatic Works have made available a collection of these rules that adheres to the naming conventions I specified in 2006. You can download this collection; however, I recommend you first read the accompanying article that demonstrates the capabilities of the Best Practices Analyzer. Pretty cool stuff. @Jamiet

    Read the article

  • Setup MSSQL replication with peer to peer topology: problem setting up Conflict Detection

    - by Roel
    Hi, I'm setting up a SQL replication strategy, using MSSQL 2008 with peer-to-peer publications (2 servers, each one subscribing to the other). I followed this HOWTO from MSDN, and the setup seems to be working fine: add a record to one table on server A, and a query on server B shows the new record. So far, so good. So far I only have one table, 'Templates':

        Id PK (calculated field)
        NodeId int default 1/2 (Server A = 1, Server B = 2)
        LocalId int autoid
        Name nvarchar(100)

    Now I would like to enable 'Conflict detection', which should be enabled by default. But every time I try to save the 'Conflict Detection' feature in the Publication Properties I get the following error:

        Cannot save Peer conflict detection properties.
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Program Location:
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand)
           at Microsoft.SqlServer.Replication.ReplicationObject.ExecCommand(String commandIn)
           at Microsoft.SqlServer.Replication.TransPublication.SetPeerConflictDetection(Boolean enablePeerConflictDetection, Int32 peerOriginatorID)
           at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveP2PConflictDetection()
           at Microsoft.SqlServer.Management.UI.PubPropSubscriptionOptions.SaveProperties(ExecutionMode& executionResult)

        Column name 'Id' does not exist in the target table or view.
        Changed database context to 'TestDB'. (.Net SqlClient Data Provider)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.2531&EvtSrc=MSSQLServer&EvtID=1911&LinkId=20476
        Server Name: SERVER_A
        Error Number: 1911
        Severity: 16
        State: 1
        Line Number: 2
        Program Location:
           at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
           at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
           at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)

    Now, I googled the hell out of this error, and nothing shows up. I also can't seem to find out what the exact target table of the error "Column name 'Id' does not exist..." is. Has anyone ever done this successfully? Am I missing something? Having this setup without conflict detection feels pretty useless...

    EDIT: OK, so after some more research and setting up with different databases etc., I found out that the calculated 'Id' column of the Templates table is the culprit. I don't know why, but the replication doesn't seem to allow calculated columns (which are also the primary key). It works now without the 'Id' column, using NodeId and LocalId as a combined PK. So now the question is: why isn't a calculated column allowed as the PK for replication with conflict detection?
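    For reference, a rough sketch of the revised table the asker ends up with (column names follow the post; exact data types, defaults, constraint names and the identity handling are assumptions on my part):

        CREATE TABLE dbo.Templates
        (
            -- 1 on server A, 2 on server B, per the post
            NodeId  INT NOT NULL CONSTRAINT DF_Templates_NodeId DEFAULT (1),
            -- the "autoid" column; identities in replicated tables are commonly marked NOT FOR REPLICATION
            LocalId INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
            Name    NVARCHAR(100) NULL,
            -- the computed Id column is gone; NodeId + LocalId form the combined PK
            CONSTRAINT PK_Templates PRIMARY KEY CLUSTERED (NodeId, LocalId)
        );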

    Read the article

  • Getting It Right The First Time

    - by andyleonard
    Introduction This post is the seventeenth part of a ramble-rant about the software business. The current posts in this series are: Goodwill, Negative and Positive Visions, Quests, Missions Right, Wrong, and Style Follow Me Balance, Part 1 Balance, Part 2 Definition of a Great Team The 15-Minute Meeting Metaproblems: Drama The Right Question Software is Organic, Part 1 Metaproblem: Terror I Don't Work On My Car A Turning Point Human Doings Everything Changes This post is about getting software right...(read more)

    Read the article

  • MOSS 2007 SP2 - Error while provisioning SSP.

    - by Tim
    This is a brand new installation. I am trying to provision the first SSP for MOSS and I keep getting the following error: "Provisioning failed: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)" The error also keeps appearing in the Application event log as event ID 7888. Google searches tell me it's a connection issue between the SharePoint server and the SQL Server; however, this is a production SQL Server which hosts several databases that do not experience any problems. These databases also include the Central Admin and Web Application databases for SharePoint, which are all working fine. Any guidance would be greatly appreciated. Thanks in advance.

    Read the article

  • Speaking at PASS 2012… Exciting and Scary… As usual…

    - by drsql
    Edit: As I reread this, I felt I should clarify: "as usual" refers mostly to the "Scary" part. I have a lot of stage fright that I have to work through. And it is always exciting to be picked. I have been selected this year to do two sessions at the PASS Summit 2012, and they are both going to be interesting. Pre-Con: Relational Database Design Workshop - Abstract; Triggers: Born Evil or Misunderstood? - Abstract. The pre-con session entitled Relational Database Design Workshop will be (at least) the...(read more)

    Read the article

  • SSRS 2008 & MOSS 2007 Alternate Access Mapping Problem

    - by Mauro
    I have a MOSS server with SSRS 2008 Enterprise Edition configured in SharePoint integrated mode. It all works well as http://servername:88/ on the default host header. MOSS also works fine using the external host name on the intranet AAM field (http://site.domain.com/); however, SSRS fails on the same URL with the message: "An unexpected error occurred while connecting to the report server. Verify that the report server is available and configured for SharePoint integrated mode." I think the issue is further complicated by our Windows 2008 infrastructure, in which we've never been able to get SPNs working for Kerberos. SQL Server, however, is on the same machine, so I don't think it is a Kerberos double-hop issue. Extra info: MOSS/SSRS are on a VM running Windows 2003 R2; the VM is hosted on Win2008 Hyper-V; the DC is on Windows 2008 SBS.

    Read the article

  • Database Change Management - Setup for Initial Create Scripts, Subsequent Migration Scripts

    - by Martin Aatmaa
    I've got a database change management workflow in place. It's based on SQL scripts (so, it's not a managed code-based solution). The basic setup looks like this:

        Initial/
            Generate Initial Schema.sql
            Generate Initial Required Data.sql
            Generate Initial Test Data.sql
        Migration/
            0001_MigrationScriptForChangeOne.sql
            0002_MigrationScriptForChangeTwo.sql
            ...

    The process to spin up a database is to then run all the Initial scripts, and then run the sequential Migration scripts. A tool takes care of the versioning requirements, etc. My question is: in this kind of setup, is it useful to also maintain this:

        Current/
            Stored Procedures/
                dbo.MyStoredProcedureCreateScript.sql
                ...
            Tables/
                dbo.MyTableCreateScript.sql
                ...
            ...

    By "this" I mean a directory of scripts (separated by object type) that represents the create scripts for spinning up the current/latest version of the database. For some reason, I really like the idea, but I can't concretely justify its need. Am I missing something? The advantages would be: for dev and source control, we would have the same object-per-file setup that we're used to; for deployment, we can spin up a new DB instance to the latest version either by running Initial + Migration, or by running the scripts from Current/; for dev, we do not need a DB instance running in order to do development - we can do "offline" development on the Current/ folder. The disadvantage would be: for each change, we need to update the scripts in the Current/ folder, as well as create a migration script (in the Migration/ folder). Thanks in advance for any input!
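    As an illustration of the Migration/ half of that layout, here is a hypothetical skeleton for one numbered script. The SchemaVersion table, its columns and the ALTER statement are my own invention for the sketch, not part of the asker's setup (which uses a separate tool for versioning):

        -- 0002_MigrationScriptForChangeTwo.sql (hypothetical sketch)
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE ScriptNumber = 2)
        BEGIN
            -- the actual change carried by this migration
            ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;

            -- record that the script has been applied so re-running it is harmless
            INSERT INTO dbo.SchemaVersion (ScriptNumber, AppliedOn)
            VALUES (2, SYSUTCDATETIME());
        END;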

    Read the article

  • Windows 7 Office 2007 GPO install

    - by Scott
    I have a GPO that works fine for installing Office 2007 Pro Plus on Vista and XP, but when it installs Office on Windows 7 somehow the Office key does not get entered via the customized MSP and needs to be entered manually. Has anyone else run into this? Any suggestions for a fix? It defeats the purpose of a remote unattended install if I then have to run around entering the key. Edit: I am sorry, I should have specified that I also have the config.xml file customized already. I have it set to display level none, completion no, suppress modal yes, accept EULA yes, with the key entered, the company name, and the username variable (%USERNAME%).

    Read the article

  • Book Review: Pro SQL Server 2008 Relational Database Design and Implementation

    - by Alexander Kuznetsov
    Investing in proper database design is a very efficient way to cut maintenance costs. If we expect a system to last, we need to make sure it has a good solid foundation - high quality database design. Surely we can and sometimes do cut corners and save on database design to get things done faster. Unfortunately, such cutting corners frequently comes back and bites us: we may end up spending a lot of time solving issues caused by poor design. So, solid understanding of relational database design is...(read more)

    Read the article

  • postfix relaying all mail through office365 problems

    - by amrith
    This is a rather long question with a long list of things tried and travails, so please bear with me. The summary is this: I am able to relay email from Ubuntu through Office365 using postfix; the configuration works. It only works as one of the users; more specifically, the user who authenticates against Office365 is the only valid "from". More details follow.

    I have a machine in Amazon's cloud on which I run a bunch of jobs, and I would like to have statuses mailed over to me. I use Office365 at work, so I want to relay mail through Office365. I'm most familiar with postfix, so I used that as the MTA. The configuration is Ubuntu 12.04 LTS; I've installed postfix and mail-utils. For this example, let me say my company is "company.com" and the machine in question (through an elastic IP and a DNS entry) is called "plaything.company.com". hostname is set to "plaything.company.com", and so is /etc/mailname. On plaything, I have the following users registered: alpha, bravo, and charlie. I have the following configuration files:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        mydestination = plaything.company.com, localhost.company.com, , localhost
        myhostname = plaything.company.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = [smtp.office365.com]:587
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    As the machine is called plaything.company.com, I went through the exercise of registering all the appropriate DNS entries to make Office365 recognize that I own plaything.company.com, which allowed me to create a user called [email protected] in Office365. In Office365, I set up [email protected] as having another email address of [email protected]. Then I made the following sender_canonical:

        [email protected] [email protected]

    I created a sasl_passwd file that reads (let's just say that the password for [email protected] is 1234...456):

        smtp.office365.com [email protected]:123456password123456

    With all this set up, log in as alpha and send:

        mail [email protected]
        Cc:
        Subject: test
        test

    and the whole thing works wonderfully. Email gets sent off by postfix, TLS works like a champ, it authenticates as daemon@..., and [email protected] in Office365 gets an email message.

    The issue comes up when logged in to the machine as bravo. The sender is [email protected] and Office365 says:

        status=bounced (host smtp.office365.com[132.245.12.25] said: 550 5.7.1
        Client does not have permissions to send as this sender (in reply to end of DATA command))

    This is because I'm trying to send mail as bravo@... and authenticating with Office365 as daemon@.... The reason it works with alpha@... is that in Office365 I set up [email protected] as having another email address of [email protected].

    In Postfix Relay to Office365, Miles Erickson answers the question thusly:

    1. Don't send mail to Office365 as a user from your Office365-hosted e-mail domain. Use a subdomain instead, e.g. [email protected] instead of [email protected]. It wouldn't hurt to set up an SPF record for services.mydomain.com or whatever you decide to use.
    2. Don't authenticate against mail.messaging.microsoft.com as an Office365 user. Just connect on port 25 and deliver the mail to your domain as any foreign SMTP agent would do.

    OK, I've done #1; I have those records in DNS, but for the most part they are not relevant once Office365 recognizes that I own the domain. Here are those records:

        CNAME records:
        - msoid.plaything.company.com
        - autodiscover.plaything.company.com
        MX record:
        - plaything.company.com (plaything-company-com.mail.protection.outlook.com)
        TXT record:
        - plaything.company.com (v=spf1 include:spf.protection.outlook.com -all)

    I've tried #2, but no matter what I do, Office365 just drops the connection with "not authenticated". Even a simple telnet to port 25 and an attempt to send doesn't work:

        250 BY2PR01CA007.outlook.office365.com Hello [54.221.245.236]
        530 5.7.1 Client was not authenticated
        Connection closed by foreign host.

    Is there someone out there who has this kind of configuration working, where multiple users on a Linux machine are able to relay mail through Office365 using postfix? There has to be someone out there doing this who can tell me what is wrong with my setup ...

    Read the article

  • Introducing SQLPeople - the Blog Series!

    - by andyleonard
    Introduction The first 50.5 weeks of 2010 have been interesting, to say the least. My experiences in 2010 can best be summed up in a single word: educational. I've learned a lot this year! One important thread wove its way through my 2010 experiences... Relationships Are Everything How we interact defines community. Relationships define community. Our community is more than the sum of our members. Trust and respect are the capital of community. And just like money, this capital can be invested, exchanged,...(read more)

    Read the article

  • The penultimate audit trigger framework

    - by Piotr Rodak
    So, it’s time to see what I came up with after some time of playing with COLUMNS_UPDATED() and bitmasks. The first part of this miniseries describes the mechanics of the encoding which columns are updated within DML operation. The task I was faced with was to prepare an audit framework that will be fairly easy to use. The audited tables were to be the ones directly modified by user applications, not the ones heavily used by batch or ETL processes. The framework consists of several tables and procedures...(read more)
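    The bitmask mechanics the series builds on come down to a few lines of trigger code. A minimal sketch (the dbo.Customer and dbo.AuditLog names are hypothetical; the byte/bit arithmetic follows the documented COLUMNS_UPDATED() layout, where column n maps to bit (n - 1) % 8 of byte (n - 1) / 8 + 1):

        CREATE TRIGGER trg_Customer_Audit
        ON dbo.Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @ColumnOrdinal INT = 3;  -- e.g. the third column of dbo.Customer

            -- Pick the byte that holds this column's bit, then test that bit.
            IF CONVERT(INT,
                       SUBSTRING(COLUMNS_UPDATED(), ((@ColumnOrdinal - 1) / 8) + 1, 1))
               & POWER(2, (@ColumnOrdinal - 1) % 8) <> 0
            BEGIN
                INSERT INTO dbo.AuditLog (TableName, ColumnOrdinal, ModifiedAt)
                VALUES ('dbo.Customer', @ColumnOrdinal, SYSDATETIME());
            END
        END;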

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. of the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters, which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let's use this function with our AdventureWorks2012 database to better illustrate the information it provides.

        SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL)

    As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions, so I will not go into detail about them at this time. The next column in the result set is index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is index_level, which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORGs or full REBUILDs of a given index. The fragment_count column represents the number of fragments in a leaf level, while avg_fragment_size_in_pages represents the average number of pages in a fragment. The page_count column tells you how many pages are in a particular index level.

    From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a 'mode' in my query, and as a result it used the 'LIMITED' mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the 'DETAILED' mode and you will see we now have results for these columns.

        SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')

    From the remaining columns, you see we get even more detailed information, such as how many records are in a particular index level (record_count). We have a column for ghost_record_count, which represents the number of records that have been marked for deletion but have not yet been physically removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and no longer fit on their original page and thus have to be moved; a forwarded record is left in the original location with a pointer to the new location. The last column in the result set is compressed_page_count, which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, depending on the mode you select, it can be a very resource intensive function, so be careful with how you use it.

    For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA
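    As a practical illustration, avg_fragmentation_in_percent is often folded into a maintenance query like the one below. This is a sketch only: the 5%/30% thresholds and the page_count filter reflect commonly quoted guidance rather than anything from this post.

        -- Suggest REORGANIZE or REBUILD per index in the current database, based on fragmentation.
        SELECT OBJECT_NAME(ps.object_id)        AS TableName,
               i.name                           AS IndexName,
               ps.avg_fragmentation_in_percent,
               ps.page_count,
               CASE
                   WHEN ps.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
                   WHEN ps.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
                   ELSE 'NONE'
               END AS SuggestedAction
        FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        JOIN   sys.indexes AS i
               ON i.object_id = ps.object_id
              AND i.index_id  = ps.index_id
        WHERE  ps.index_id > 0          -- skip heaps
          AND  ps.page_count > 100;     -- ignore tiny indexes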

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed a new application and deployed it onto an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, a mirror of the SAN is being taken, usually constantly, at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already. I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question: is it better to rely on SQL Server's built-in database backup tools (DB in full-recovery mode with full/differential and transaction log backups) or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks
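    If the outcome of that discussion is to keep tempdb off the mirrored LUN altogether, relocating its files is a small, well-documented operation. The sketch below assumes the default logical file names (tempdev/templog) and an invented T:\ path; check sys.master_files for the real names, and note the move only takes effect after the SQL Server service restarts.

        -- Point the tempdb data and log files at a volume that is not mirrored/backed up.
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');

        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');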

    Read the article

  • Network issues with DNS not being found

    - by Anriëtte Combrink
    Hi there. This is exactly how our network looks: a single server with a network router. Everything is set up, but I cannot connect our Macs to this server under Login Options - Join... . Our server's name is Toolbox and I have tried Toolbox.local, Toolbox.private, and prepending the afp:// protocol to the name, but nothing - our Macs just don't want to connect this way. Our router has DHCP and gives out all the IP addresses naturally; would I have to add Toolbox.local to the DNS on the router and link it via a static internal IP to the server? Our Macs keep giving the following error while trying to join the Network Account Server: "Unable to add server. Could not resolve the address (2200)." What am I doing wrong?

    Read the article

  • Geek City: SQL Server 2014 In-Memory OLTP (“Hekaton”) Whitepaper for CTP2

    - by Kalen Delaney
    Last week at the PASS Summit in Charlotte, NC, the update of my whitepaper for CTP2 was released. The manager supervising the paper at Microsoft told me that David DeWitt himself said some very nice things about the technical quality of the paper, which was one of the most ego enhancing compliments I have ever gotten! Unfortunately, Dr. DeWitt said those things at his “After-the-keynote” session, not in the keynote that was recorded, so I only have my manager’s word for it. But I’ll take what I can...(read more)

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger's been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au

    Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I'm getting ahead of myself... let me explain the concept.

    A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that's stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that's available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc.), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us "see patterns which are otherwise hidden".

    This browsing mechanism is very suited to a number of different methods, because it's just that - browsing. It's not searching; it's more akin to window-shopping than doing an internet search. When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer. It's interesting to see a pattern in this data: the Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn't obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this.

    The more observant amongst you will have noticed some difference between the collection that Michael is demonstrating in the picture above and the screenshots I've shown. That's because it's been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I'd met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I'm not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more.
    He's used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the 'histogram' view). I see PivotViewer as being a significant step in data visualisation - so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn't always gel with the information you're trying to extract. Roger's even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how 'zoomed in' the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles "on the fly", based on images stored in a database, and Roger has produced a collection which uses images from flickr and lets you move from one search term to another. Pulling the images down from flickr.com isn't particularly ideal from a performance aspect, and flickr doesn't store images in a small enough format to really lend itself to this use, but you might agree that it's an interesting concept which compares nicely to using Maps. I'm looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There's a thorough guide at getpivot.com, which is how we got into it. For some examples of what we've done, have a look at http://pivot.lobsterpot.com.au. I'd like to see PivotViewer really catch on as a data visualisation tool.

    Read the article

  • Rackspace Cloud Server DNS Add SPF Records

    - by user625435
    I've set up my new LAMP server on Rackspace Cloud and the basic A, C and MX DNS setup is no problem. I need to add an SPF record for a project I am migrating over to this new server that allows emails from a 3rd party server, and I can't seem to figure out how to do this. There doesn't seem to be an option to add a TXT record in my Rackspace Cloud Server interface. I installed BIND DNS on my Apache server, but I am not sure how to get that to be seen, etc.

    Read the article

  • It seems another season of previews is upon us

    - by Enrique Lima
    Originally posted on: http://geekswithblogs.net/enriquelima/archive/2013/06/26/it-seems-another-season-of-previews-is-upon-us.aspx

    The past couple of weeks have been packed with teasers and updates. But here they go:
        Visual Studio Update 3: http://www.microsoft.com/en-us/download/confirmation.aspx?id=39305
        Visual Studio 2013 and TFS 2013 Preview: http://www.microsoft.com/visualstudio/eng/2013-downloads
        SQL Server 2014 CTP1: http://technet.microsoft.com/en-us/evalcenter/dn205290.aspx
        Windows Server 2012 R2 Preview: http://technet.microsoft.com/en-us/evalcenter/dn205286.aspx
        Windows 8.1: http://preview.windows.com

    Read the article

  • Storage subsystem borking after server restart (all on a Parallel SCSI bus)

    - by Dat Chu
    I have a server (with a SCSI HBA) connected to two Promise VTrak M310p RAID enclosures on the same bus. Everything works fine until I have to restart my server. Once restarted, the server can no longer communicate with the enclosures: lots of read errors and bus resets. I have to turn off both enclosures, then turn off the server, then turn on the enclosures, then turn on the server for things to work. I don't believe this is the normal behavior; what could I be missing?

    Read the article
