Search Results

Search found 37457 results on 1499 pages for 'sql 2008 r2'.


  • Loop through non-integer rows using SQL

    - by Jesse
    I know how to accomplish my task with .NET, but I wanted to do this just in SQL. I need to loop through all of the rows where the primary key is somewhat arbitrary. It can be a number or a series of letters, and probably any number of unusual things. I know I could do something like this...
        DECLARE @numRows INT
        SET @numRows = (SELECT COUNT(pkField) FROM myTable)
        DECLARE @I INT
        SET @I = 1
        WHILE (@I <= @numRows)
        BEGIN
            --Do what I need to here
            SET @I = @I + 1
        END
    ...if my rows were indexed in a contiguous fashion, but I don't know enough about SQL to do that if they're not. I keep coming across the use of "cursors," but I come across just as much reading about avoiding cursors. I found this SO solution, but I'm not sure whether that's what I need. I appreciate any ideas.
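    One way to keep that WHILE pattern without relying on contiguous integer keys is to number the rows yourself. The sketch below is only an assumption-laden illustration (myTable and pkField stand in for the real table and key column), not a definitive answer:
        -- Minimal sketch: number the rows into a temp table, then loop on the synthetic key.
        SELECT ROW_NUMBER() OVER (ORDER BY pkField) AS RowNum, pkField
        INTO #Work
        FROM myTable;

        DECLARE @I INT, @numRows INT;
        DECLARE @pk SQL_VARIANT;   -- holds a numeric or character key equally well
        SELECT @numRows = COUNT(*) FROM #Work;
        SET @I = 1;

        WHILE (@I <= @numRows)
        BEGIN
            SELECT @pk = pkField FROM #Work WHERE RowNum = @I;
            -- do the per-row work here, using @pk
            SET @I = @I + 1;
        END

        DROP TABLE #Work;
    A plain cursor over myTable is the other obvious route; for a one-off administrative loop the difference between the two is usually small.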

    Read the article

  • How to list column headers of a SQL Server table using sp_help perhaps?

    - by Hamish Grubijan
    Hi, I have a few tables with 70-80 columns in them. I would like to populate them with somewhat random data, unless key violations etc. prevent me from doing so. The first step would be simply to get the list of all headers. There seem to be two ways: A) Run select * from table_of_interest; in MSFT SQL Server Management Studio 2008. Now, right-click the result and click "Copy With Headers". However, I get zero rows back, and when I try to copy nothing + headers, I get:
        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        Value cannot be null.
        Parameter name: data (System.Windows.Forms)
        ------------------------------
        BUTTONS: OK
        ------------------------------
    This looks like a bug ... anyhow ... there is another way. B) I can run sp_help table_of_interest;. However, I end up getting too much back. I get 7 different tables back but I am only interested in the second one. The columns of the second table are: Column_name | Type | Computed | Length | Prec | Scale | Nullable | TrimTrailingBlanks | FixedLenNullInSource | Collation. I might be interested in just Column_name and Type, but maybe other columns too. So ... since sp_help probably runs a bunch of queries ... how do I get under the hood? How can I run the second query AND filter down the number of columns that I am interested in? Many Thanks!
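    If the goal is just the header list, the catalog views that sp_help reads from can be queried directly; here is a minimal sketch (the table name is a placeholder):
        -- Column names and types straight from the catalog
        SELECT c.name AS Column_name,
               t.name AS [Type],
               c.max_length,
               c.is_nullable
        FROM sys.columns c
        JOIN sys.types t ON t.user_type_id = c.user_type_id
        WHERE c.object_id = OBJECT_ID('dbo.table_of_interest')
        ORDER BY c.column_id;

        -- Or the portable flavour:
        SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'table_of_interest'
        ORDER BY ORDINAL_POSITION;
    Either result set can be filtered to exactly the columns of interest, which is the part sp_help doesn't let you do.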

    Read the article

  • Best pattern for storing (product) attributes in SQL Server

    - by EdH
    We are starting a new project where we need to store products and many product attributes in a database. The technology stack is MS SQL 2008 and Entity Framework 4.0 / LINQ for data access. The products (and Products table) are pretty straightforward (a SKU, manufacturer, price, etc.). However, there are also many attributes to store with each product (think industrial widgets). These may range from color to certification(s) to pipe size. Every product may have different attributes, and some may have multiples of the same attribute (e.g. Certifications). The current proposal is that we will basically have a name/value pair table with a FK back to the product ID in each row. An example of the attributes table may look like this:
        ProdID  AttributeName   AttributeValue
        123     Color           Blue
        123     FittingSize     1.25
        123     Certification   AS1111
        123     Certification   EE2212
        123     Certification   FM.3
        456     Pipe            11
        678     Color           Red
        999     Certification   AE1111
        ...
    Note: the attribute name would likely come from a lookup table or enum. So the main question here is: Is this the best pattern for doing something like this? How will the performance be? Queries will be based on a JOIN of the product and attributes tables, and generally need many WHEREs to filter on specific attributes - the most common search will be to find a product based on a set of known/desired attributes. If anyone has any suggestions or a better pattern for this type of data, please let me know. Thanks! -Ed
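    For what it's worth, a rough sketch of that name/value layout and the usual "match a set of attributes" query looks like this (all names are illustrative, and this is not a claim that it is the best pattern):
        -- Hypothetical EAV table
        CREATE TABLE dbo.ProductAttributes (
            ProdID         INT          NOT NULL,   -- FK to the Products table
            AttributeName  VARCHAR(50)  NOT NULL,   -- ideally an FK to a lookup table
            AttributeValue VARCHAR(100) NOT NULL
        );

        -- Products that are Blue AND have FittingSize 1.25:
        -- count the matched name/value pairs per product.
        SELECT pa.ProdID
        FROM dbo.ProductAttributes pa
        WHERE (pa.AttributeName = 'Color'       AND pa.AttributeValue = 'Blue')
           OR (pa.AttributeName = 'FittingSize' AND pa.AttributeValue = '1.25')
        GROUP BY pa.ProdID
        HAVING COUNT(DISTINCT pa.AttributeName) = 2;  -- both criteria matched
    The HAVING trick avoids one self-join per attribute searched, which is where this pattern usually starts to hurt.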

    Read the article

  • SQL Cache Dependency not working with Stored Procedure

    - by pjacko
    Hello, I can't get SqlCacheDependency to work with a simple stored proc (SQL Server 2008):
        create proc dbo.spGetPeteTest
        as
        set ANSI_NULLS ON
        set ANSI_PADDING ON
        set ANSI_WARNINGS ON
        set CONCAT_NULL_YIELDS_NULL ON
        set QUOTED_IDENTIFIER ON
        set NUMERIC_ROUNDABORT OFF
        set ARITHABORT ON
        select Id, Artist, Album from dbo.PeteTest
    And here's my ASP.NET code (3.5 framework):
        -- global.asax
        protected void Application_Start(object sender, EventArgs e)
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
            System.Data.SqlClient.SqlDependency.Start(connectionString);
        }

        -- Code-Behind
        private DataTable GetAlbums()
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["UnigoConnection"].ConnectionString;
            DataTable dtAlbums = new DataTable();
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                // Works using select statement, but NOT SP with same text
                //SqlCommand command = new SqlCommand(
                //    "select Id, Artist, Album from dbo.PeteTest", connection);
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.CommandType = CommandType.StoredProcedure;
                command.CommandText = "dbo.spGetPeteTest";
                System.Web.Caching.SqlCacheDependency new_dependency = new System.Web.Caching.SqlCacheDependency(command);
                SqlDataAdapter DA1 = new SqlDataAdapter();
                DA1.SelectCommand = command;
                DataSet DS1 = new DataSet();
                DA1.Fill(DS1);
                dtAlbums = DS1.Tables[0];
                Cache.Insert("Albums", dtAlbums, new_dependency);
            }
            return dtAlbums;
        }
    Anyone have any luck with getting this to work with SPs? Thanks!
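    Not an authoritative fix, but a common suspect with query notifications is having SET statements inside the procedure body itself. Purely as an assumption to test, a pared-down version of the proc that relies on the session settings in force when it is created would look like this:
        -- Hedged sketch: ANSI_NULLS and QUOTED_IDENTIFIER are saved with the procedure,
        -- so set them on the creating session and keep the body down to the
        -- notification-friendly SELECT (two-part name, explicit column list).
        SET ANSI_NULLS ON;
        SET QUOTED_IDENTIFIER ON;
        GO
        CREATE PROC dbo.spGetPeteTest
        AS
            SELECT Id, Artist, Album
            FROM dbo.PeteTest;
        GO
    Since the plain SELECT already works (per the commented-out code above), comparing it against a stripped-down proc like this at least narrows the problem to the procedure body.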

    Read the article

  • MS SQL: Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database, I want to look at each table. But, when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:
        SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
    How do I fix this? Here's the entire code:
        SET NOCOUNT ON
        DECLARE @table_count INT
        DECLARE @db_cursor VARCHAR(100)
        DECLARE database_cursor CURSOR FOR SELECT name FROM sys.databases where name<>N'master'
        OPEN database_cursor
        FETCH NEXT FROM database_cursor INTO @db_cursor
        WHILE @@Fetch_status = 0
        BEGIN
            PRINT @db_cursor
            SET @table_count = 0
            DECLARE @table_cursor VARCHAR(100)
            DECLARE tbl_cursor CURSOR FOR SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
            OPEN tbl_cursor
            FETCH NEXT FROM tbl_cursor INTO @table_cursor
            WHILE @@Fetch_status = 0
            BEGIN
                DECLARE @table_cmd NVARCHAR(255)
                SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor + ') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
                --PRINT @table_cmd --debug
                EXEC sp_executesql @table_cmd
                SET @table_count = @table_count + 1
                FETCH NEXT FROM tbl_cursor INTO @table_cursor
            END
            CLOSE tbl_cursor
            DEALLOCATE tbl_cursor
            PRINT @db_cursor + N' Total Tables : ' + CAST( @table_count as varchar(2) )
            PRINT N'' -- print another blank line
            SET @table_count = 0
            FETCH NEXT FROM database_cursor INTO @db_cursor
        END
        CLOSE database_cursor
        DEALLOCATE database_cursor
        SET NOCOUNT OFF
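    For what it's worth, the inner SELECT always runs in whatever database the connection is currently using, so it keeps returning master's tables no matter which name the outer cursor holds. One hedged way around that (a sketch only, reusing the variables already in the script) is to fetch each database's table list through dynamic SQL with a three-part name:
        -- Sketch: table list for the database currently named in @db_cursor
        DECLARE @tables TABLE (table_name SYSNAME);
        DECLARE @list_cmd NVARCHAR(500);
        SET @list_cmd = N'SELECT table_name FROM ' + QUOTENAME(@db_cursor) +
                        N'.information_schema.tables WHERE table_type = ''BASE TABLE''';
        INSERT INTO @tables EXEC sp_executesql @list_cmd;
        -- then declare tbl_cursor over @tables instead of information_schema.tables,
        -- and qualify the IF NOT EXISTS probe the same way, e.g. (schema assumed to be dbo):
        -- N'SELECT TOP(1) * FROM ' + QUOTENAME(@db_cursor) + N'.dbo.' + QUOTENAME(@table_cursor)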

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background: My group has 4 SQL Server databases: Production, UAT, Test, Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production.
    The Problem: The entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.
    The Question: People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is to be able to run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.
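    As a rough illustration of the diff idea (not a change-script generator, and DevDb/TestDb are placeholder names), comparing the column metadata of two databases on the same instance can at least surface what has drifted:
        -- Hedged sketch of a "poor man's schema diff" between two databases.
        SELECT COALESCE(d.TABLE_NAME, t.TABLE_NAME)   AS table_name,
               COALESCE(d.COLUMN_NAME, t.COLUMN_NAME) AS column_name,
               d.DATA_TYPE AS dev_type,
               t.DATA_TYPE AS test_type
        FROM DevDb.INFORMATION_SCHEMA.COLUMNS d
        FULL OUTER JOIN TestDb.INFORMATION_SCHEMA.COLUMNS t
               ON  t.TABLE_SCHEMA = d.TABLE_SCHEMA
               AND t.TABLE_NAME   = d.TABLE_NAME
               AND t.COLUMN_NAME  = d.COLUMN_NAME
        WHERE d.COLUMN_NAME IS NULL            -- only in Test
           OR t.COLUMN_NAME IS NULL            -- only in Dev
           OR t.DATA_TYPE <> d.DATA_TYPE;      -- same column, different type
    Turning differences like these into a promotable change script is the part dedicated schema-compare tooling automates, but a query like this is a cheap first check.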

    Read the article

  • Not able to add .mdf file to App_Data

    - by vinod R
    Hi, I am not able to add an .mdf file to App_Data (VS 2008 Web Developer). If I right-click App_Data, try to add a new item, select the SQL Server database item, and click OK, I get the error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the url: http://www.microsoft.com/Sqlserver/2005/" But I have installed SQL Server Express 2005 and it still gives the same error (I installed SQL Server after installing VS 2008). Please help me.

    Read the article

  • How would I duplicate the Rank function in a Sql Server Compact Edition SELECT statement?

    - by AMissico
    It doesn't look like SQL Server Compact Edition supports the RANK() function. (See Functions (SQL Server Compact Edition) at http://msdn.microsoft.com/en-us/library/ms174077(SQL.90).aspx). How would I duplicate the RANK() function in a SQL Server Compact Edition SELECT statement? (Please use Northwind.sdf for any sample select statements, as it is the only one I can open with SQL Server 2005 Management Studio.)
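    One approach that generally works without RANK() is to compute the rank with a correlated subquery. This is only a sketch, and the Products / [Unit Price] / [Product Name] names are assumptions about the Northwind.sdf schema rather than verified column names:
        -- Emulate RANK() OVER (ORDER BY [Unit Price] DESC) by counting rows with a higher price.
        SELECT p.[Product Name],
               p.[Unit Price],
               (SELECT COUNT(*) + 1
                  FROM Products p2
                 WHERE p2.[Unit Price] > p.[Unit Price]) AS PriceRank
        FROM Products p
        ORDER BY PriceRank;
    Ties receive the same number and the following rank is skipped, which matches RANK() rather than DENSE_RANK().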

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary.
    Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model.
    Step Two: Import your Data Dictionary. This is a fancy way of saying, ‘suck objects out of the database into my model’. This will open a wizard to connect, select your schema(s), objects, etc. Once they’re in your model, you’re ready to cook with gas. I’m using HR (Human Resources) for this example. You should end up with something that looks like this. Our favorite HR model. Now we’re ready to generate the views!
    Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don’t want all my tables included, and I want to change the naming standard. Decide if you want to change the default generated view names. By default the views will be created as ‘V_TABLE_NAME.’ If you don’t like the ‘V_’ you can enter your own. You also can reference the object and model name with variables as shown in the screenshot above. I’m going to go with something a little more personal. The views are the little green boxes in the diagram. Can’t find your views? They should be grouped together in your diagram. Don’t forget to use the Navigator to easily find and navigate to those model diagram objects!
    Step Four: Generate the DDL. Ok, let’s use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter. You might have existing views in your model that you don’t want to include, right? Once you click ‘OK’ the DDL will be generated.
        -- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
        -- at: 2013-11-04 10:26:39 EST
        -- site: Oracle Database 11g
        -- type: Oracle Database 11g

        CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID, COUNTRY_NAME, REGION_ID ) AS
        SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID FROM HR.COUNTRIES;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID ) AS
        SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID FROM HR.EMPLOYEES;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY ) AS
        SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY FROM HR.JOBS;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID ) AS
        SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID FROM HR.JOB_HISTORY;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID ) AS
        SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID FROM HR.LOCATIONS;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID, REGION_NAME ) AS
        SELECT REGION_ID, REGION_NAME FROM HR.REGIONS;

        -- Oracle SQL Developer Data Modeler Summary Report:
        --
        -- CREATE TABLE 0
        -- CREATE INDEX 0
        -- ALTER TABLE 0
        -- CREATE VIEW 6
        -- CREATE PACKAGE 0
        -- CREATE PACKAGE BODY 0
        -- CREATE PROCEDURE 0
        -- CREATE FUNCTION 0
        -- CREATE TRIGGER 0
        -- ALTER TRIGGER 0
        -- CREATE COLLECTION TYPE 0
        -- CREATE STRUCTURED TYPE 0
        -- CREATE STRUCTURED TYPE BODY 0
        -- CREATE CLUSTER 0
        -- CREATE CONTEXT 0
        -- CREATE DATABASE 0
        -- CREATE DIMENSION 0
        -- CREATE DIRECTORY 0
        -- CREATE DISK GROUP 0
        -- CREATE ROLE 0
        -- CREATE ROLLBACK SEGMENT 0
        -- CREATE SEQUENCE 0
        -- CREATE MATERIALIZED VIEW 0
        -- CREATE SYNONYM 0
        -- CREATE TABLESPACE 0
        -- CREATE USER 0
        --
        -- DROP TABLESPACE 0
        -- DROP DATABASE 0
        --
        -- REDACTION POLICY 0
        --
        -- ERRORS 0
        -- WARNINGS 0
    You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
    Whenever you want to know something about SQL Server’s configuration, whether that’s the Instance itself or a database, you have a few options. If you want to know “dynamic” data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx  But if you’re looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these is system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx Yet another option is to use a system function, which you access with a SELECT statement and which also brings back a discrete value that you can use in a test or to place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx  By the way, many of these constructs simply query from tables in the master or msdb databases for the Instance or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the meta-data dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx  Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, they both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that’s OK – just research what each is used for and how it’s intended to be used, and you’ll be able to select (pun intended) the right choice.
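    As a quick, hedged illustration of those routes (the object names in the examples are arbitrary):
        -- System variable: a discrete value you can reuse in another query
        SELECT @@VERSION;

        -- System function: also a discrete value
        SELECT SERVERPROPERTY('Edition'), SERVERPROPERTY('ProductVersion');

        -- Stored procedure: pre-formatted result sets
        EXEC sp_helpdb 'master';

        -- System/catalog views: query them like any other table
        SELECT name, recovery_model_desc FROM sys.databases;
    The first two slot straight into variables or other queries; the stored procedure output is the one that takes extra work to capture into a table.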

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn’t? Right? Well, at least one person does because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this but for those that haven’t here is a nifty little script that will do it for you and it uses our favourite jack-of-all tools … Powershell!! Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]): There are two packages in there called “20110111 Chaining Expression components” & “Package”, I want to export those two packages into a folder structure that mirrors that in [msdb]. Here is the Powershell script that will do that:
        Param($SQLInstance = "localhost")

        #####Add all the SQL goodies (including Invoke-Sqlcmd)#####
        add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
        add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue
        cls

        $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "WITH cte AS (
            SELECT cast(foldername as varchar(max)) as folderpath, folderid
            FROM msdb..sysssispackagefolders
            WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
            UNION ALL
            SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
            FROM msdb..sysssispackagefolders f
            INNER JOIN cte c ON c.folderid = f.parentfolderid
        )
        SELECT c.folderpath, p.name, CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
        FROM cte c
        INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
        WHERE c.folderpath NOT LIKE 'Data Collector%'"

        Foreach ($pkg in $Packages)
        {
            $pkgName = $Pkg.name
            $folderPath = $Pkg.folderpath
            $fullfolderPath = "c:\temp\$folderPath\"
            if(!(test-path -path $fullfolderPath))
            {
                mkdir $fullfolderPath | Out-Null
            }
            $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
        }
    To run it, simply change the “localhost” parameter to the server you want to connect to, either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly). Here’s the folder structure that it created for me: Notice how it is a mirror of the folder structure in [msdb]. Hope this is useful! @Jamiet UPDATE: This post prompted Chad Miller to write a post describing his Powershell add-in that utilises an SSIS API to do exporting of packages. Go take a read here: http://sev17.com/2011/02/importing-and-exporting-ssis-packages-using-powershell/

    Read the article

  • Is there a way to add AD LDS users to an AD Domain Group or allow them domain security rights?

    - by Tom
    I have a web application in which our outside customers need access to run transactions (stored procs on SQL Server) on our domain. We have looked into LDS to keep these users separate from our domain. The problem we are having is allowing the LDS users the AD security rights to access these stored procs. For administration purposes we would like to use an AD group for each transaction (stored proc) that has access to execute it. Is there a way to add LDS users to this AD group, or allow them the security rights to do this? We have set up LDS and can authenticate an AD user through it to run these transactions. LDS is running on Server 08 R2. AD is also Server 08 R2. Thanks.

    Read the article

  • SCCM for mobile device management returns 404 on /devicemgmt/server.resource

    - by Dan
    We have a new Windows Server 2008 R2 machine onto which we have installed SCCM SP2 followed by the R2 package. We have enabled a mobile device management point and enabled distribution points to support mobile devices as per http://technet.microsoft.com/en-us/library/bb680634.aspx We have also installed the Mobile Device Management Client on to a Windows Mobile 6.1 device. The client on the device fails to connect to the server. Our investigation so far has led us to the URL /devicemgmt/server.resource. However, looking in IIS on the server shows no such URL (in fact nothing apart from the aspnet_client directory) and visiting the URL with a browser returns 404. WebDav is enabled on the Default Web Site in IIS. BITS is installed on the server. Can anyone confirm whether enabling mobile device management will add visible directories to IIS and if so why it might be failing in our case?

    Read the article

  • Domain controller failed to restore using windows backup tools

    - by Peilin
    One issue: a Windows Server 2008 R2 domain controller with full daily backups (the only domain controller in this company) is out of service due to a hardware issue. I tried the two methods below to recover to the newly purchased server, but both fail. 1) First method: boot from the Windows 2008 R2 CD and carry out a recovery from backup. Everything is OK, but after reboot it comes up with a blue screen and restarts again. 2) Second method: a) Install the OS on the new server. b) Reboot the server into DSRM. c) Use the Windows backup tools to restore the system state only. After reboot, the blue screen error comes up and it restarts again. I know this may be caused by the different hardware, but how can it be resolved? Or can we restore only the AD services rather than the whole system state? Any suggestions?

    Read the article

  • Restoring VMs in Hyper-V from a (somewhat) failed VSS backup...

    - by DeliriumTremens
    I attempted to do a backup of our virtualization server when moving from Hyper-V 2008 to R2. I changed the registry settings to register Hyper-V VSS with Windows Server Backup, and sent the backup on its way while I went on to other things. Apparently the VSS portion didn't back up the VMs like I had hoped, and after updating Windows Server and Hyper-V to R2, I was unable to restore the VMs correctly. The backup completed and I have all of the files from before the update backed up, but is there any way to restore the VMs from the backup? The VHDs all seem to have old modified dates, and when bringing the machines up from these VHDs they have very old settings. I have found a bunch of AVHD files (named according to GUID), but I'm not sure if I can create a VM with the old VHD and merge the snapshots with it.

    Read the article

  • Citrix has issues resolving network shares

    - by George
    We are having this weird issue with our Citrix (version 4.5) server (sitting on Windows 2003 R2), where a couple of users have issues resolving a single shared network drive. We use a logon script to map all shared drives. The weird part is that of 3 shared drives, users can access 2, but the 3rd one goes to the old server (even though the logon script points to the new server). And that issue is limited to a few users. I had them log off and log back in, to no success. It happens just in Citrix. The file server that is being accessed is Windows 2008 R2. Like I said, we use a logon script to map the network drive. I understand I might be a little confusing; I will gladly reword the post.

    Read the article

  • Identity R2 - Experts Podcast Series

    - by Tanu Sood
    To follow up on the Identity Management R2 launch, a series of podcasts were recorded with subject matter experts from customer organizations, our partners and Oracle’s PM team to discuss key trends, R2 capabilities, implementation best practices and more. Below is a roll-up of the podcast series that is available on Fusion Middleware radio. R2 Podcasts:   ·         Designing the Next-Generation Identity Platform Vadim Lander, Oracle Highlights: Common architecture model, integration, interoperability and the driving factors behind R2 innovation IT Departments are shifting their Identity Management strategy to be able to support mobile, cloud and social applications. Oracle has anticipated this shift and has built a product roadmap to take advantage of this focus. Join Vadim as he discusses the design strategy behind the latest 11gR2 release and talks about how IDM services have to evolve to meet this new challenge.   ·         BETA Customer Perspective on R2 Ravi Meduri, Kaiser Permanente Highlights: R2 scalability and high availability In this podcast Ravi discusses the new features in 11gR2 that he is most interested in, including High Availability options for Access Management, multi-datacenter architecture, and what it was like working with the Oracle product team during the BETA program.   ·         Partner Perspective on R2 Rex Thexton, PricewaterhouseCoopers Highlights: Usability Enhancements for Users and Administrators A lot of new usability features went into the 11gR2 release making this the most business friendly IDM release to date. In this podcast Rex Thexton, Managing Director from PwC, talks about some of the new UI changes for both end users and administrators, and also about the new connector creation framework.   Access Request Updates in R2 Marc Boroditsky, Oracle Highlights: Access request User Interface innovations A lot of changes have been made to the Access Request user interface in the latest version of Oracle Identity Manager 11gR2. A real focus has been put on making the request process more business user friendly, and a lot of new customization capability has been added for the IT administrators. Hear Marc discuss the updated UI, and explain how administrators will be able to customize OIM to meet their company's requirements   ·         Oracle Optimized System for Oracle Unified Directory (OOS4OUD) Nick Kloski, Oracle Highlights: New Optimized System configuration for Unified Directory One of the new features in 11gR2 is the availability of an Optimized System configuration for Oracle Unified Directory. Oracle engineers installed the OUD software onto off the shelf hardware and then created a performance tuned configuration. Join us as we talk to Nick Kloski, Infrastructure Solutions Manager, all about the testing process and the resulting performance metrics.   Privileged Account Management Mark Wilcox, Oracle Highlights: Oracle Privileged Account Manager key capabilities, use cases The new release of Oracle Identity Management 11g R2 includes the capability to manage privileged accounts. Privileged accounts, if compromised, create a risk for fraud in the enterprise and as a result controlling access to privileged accounts is critical. Hear what Mark Wilcox, Principal Product Manager of Oracle Privileged Account Manager has to say about the capabilities of the offering in this podcast.   
·         Browser-based User Interface (UI) Customization Clayton Donley, Oracle Highlights: Benefits of Durable UI Configuration framework Business users need user interfaces that are not only friendly but also easily customizable. However the downside of any customization project is the cost and complexity involved in developing, testing, deploying and managing custom code. In this podcast, we examine how a new capability in Oracle Identity Management around browser based UI customization can reduce costs and complexity of customization while simplifying self service integration with corporate portal strategies.   ·         Simplifying Mobile and Social Sign-On Dan Killmer, Oracle Highlights: Secure mobile sign-on and consumption of social identities with Oracle Access Management The proliferation of mobile devices has spurred a new trend where employees tend to bring their own mobile devices to work and access corporate applications the same way they would access from a desktop or laptop. In this podcast, we examine how Oracle's latest innovation in Identity Management around Mobile and Social Sign On can simplify security and access management challenges posed by the widespread adoption of mobile devices in the enterprise. ·         Enabling Your Business with IDM R2 Scott Bonnell, Oracle Highlights: Self service, mobile access, personalization Gone are the days when Identity Management was just about stopping unauthorized users in their tracks. Identity Management if done right, can also enable your business. Join Scott Bonnell as he discusses how the IDM 11gR2 release enables the enterprise by providing self service, personalization and mobile access to corporate resources.

    Read the article

  • An error occurred synchronizing Windows with time.windows.com

    - by Killrawr
    Okay, so I've tried stopping/registering the w32time service on this Windows Server 2008 Enterprise computer:
        C:\Users\Administrator>net stop w32time
        The Windows Time service is stopping.
        The Windows Time service was stopped successfully.

        C:\Users\Administrator>w32tm /unregister
        The following error occurred: Access is denied. (0x80070005)

        C:\Users\Administrator>w32tm /unregister
        W32Time successfully unregistered.

        C:\Users\Administrator>w32tm /register
        W32Time successfully registered.

        C:\Users\Administrator>net start w32time
        The Windows Time service is starting.
        The Windows Time service was started successfully.
    (Source: http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/9bdfc2cc-4775-4435-8868-57d214e1e3ba/) And I still get the error from the Date and Time, Internet Time tab (after also following the steps there). I've even tried the Atomic Time Clock Worldtimeserver and I get the error "The following error occurred: The specified module could not be found. (0x8007007E)". I've also disabled the Windows Firewall, which might have been blocking the synchronization. I've done a file scan with sfc /scannow, which came back with no errors:
        C:\Users\Administrator>sfc /scannow
        Beginning system scan. This process will take some time.
        Beginning verification phase of system scan.
        Verification 100% complete.
        Windows Resource Protection did not find any integrity violations.
    But I'm not having much luck. Is there any way to possibly solve this? Or are the time.windows.com servers unsupported because the software is from 2008? (I really don't know :/) My ping result to time.windows.com:
        C:\Users\Administrator>ping time.windows.com
        Pinging time.microsoft.akadns.net [65.55.21.22] with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for 65.55.21.22:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
    And tracert result:
        C:\Users\Administrator>tracert time.windows.com
        Tracing route to time.microsoft.akadns.net [65.55.21.24] over a maximum of 30 hops:
          1     1 ms    <1 ms    <1 ms  192.168.1.1
          2    32 ms    31 ms    32 ms  be2-100.bras1wtc.wlg.vf.net.nz [203.109.129.113]
          3    31 ms    32 ms    31 ms  be5-100.ppnzwtc01.wlg.vf.net.nz.129.109.203.in-addr.arpa [203.109.129.114]
          4    31 ms    31 ms    31 ms  gi0-2-0-3.ppnzwtc01.wlg.vf.net.nz.180.109.203.in-addr.arpa [203.109.180.210]
          5    31 ms    31 ms    30 ms  gi0-2-0-3.ppnzwtc02.wlg.vf.net.nz [203.109.180.209]
          6   167 ms   166 ms   166 ms  ip-141.199.31.114.VOCUS.net.au [114.31.199.141]
          7   175 ms   175 ms   175 ms  microsoft.com.any2ix.coresite.com [206.223.143.143]
          8   177 ms   180 ms   176 ms  xe-7-0-2-0.by2-96c-1a.ntwk.msn.net [207.46.42.176]
          9   205 ms   205 ms   204 ms  xe-10-0-2-0.co1-96c-1b.ntwk.msn.net [207.46.45.31]
         10     *        *        *     Request timed out.
         11     *        *        *     Request timed out.
         12     *        *        *     Request timed out.
         13     *        *        *     Request timed out.
         14     *        *        *     Request timed out.
         15     *        *        *     Request timed out.
         16  ^C
    And nslookup:
        C:\Users\Administrator>nslookup time.windows.com
        Server: UnKnown
        Address: 192.168.1.1
        Non-authoritative answer:
        Name: time.microsoft.akadns.net
        Address: 65.55.21.22
        Aliases: time.windows.com

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular, the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Philips’ line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this is a collation error, one that good comedians can get mileage from.
    Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation.
    I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding.
    In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users having permission to approve items and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun...
    Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
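    To put a database-shaped example next to that (a small, hypothetical repro rather than anything from the talk), here is the kind of complaint SQL Server raises when the two “Englishes” meet, and the usual COLLATE escape hatch:
        -- Two columns that look the same but carry different collations.
        CREATE TABLE #a (name VARCHAR(20) COLLATE SQL_Latin1_General_CP1_CI_AS);
        CREATE TABLE #b (name VARCHAR(20) COLLATE Latin1_General_CS_AS);

        INSERT #a VALUES ('Rob');
        INSERT #b VALUES ('rob');

        -- This join fails with "Cannot resolve the collation conflict ... in the equal to operation":
        -- SELECT * FROM #a a JOIN #b b ON a.name = b.name;

        -- Being explicit about which collation wins resolves it (and, being case-insensitive,
        -- this one decides that 'Rob' and 'rob' are the same):
        SELECT *
        FROM #a a
        JOIN #b b ON a.name = b.name COLLATE SQL_Latin1_General_CP1_CI_AS;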

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end as I've only recently moved to an SQL Server backend. I'm wondering how much of those queries I should save as stored procedures/views on SQL Server. About the system The number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easy. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous? Why my question isn't a duplicate It seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There's two ways that people could have answered this: I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) means you should move most to the server anyway. (OR no it doesn't outweigh the benefit, keep them in Access) Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. Ok sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.

    Read the article
