Search Results

Search found 90811 results on 3633 pages for 'hyper v server 2012 r2'.

Page 100/3633 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • Dreamweaver CS5 Test server works but cannot connect to host server through files window

    - by Toni
    I've been managing this site for a long time and update coupons on it approximately every 60 days. For some reason, I'm now having problems: I opened DW CS5 today and made the changes necessary to update coupons. I was able to connect to the host server with no problem, but most of my coupon images were not showing up. DW tells me I have 70 broken links, which can't be the case because I've reviewed them. Some links work and are the same as the broken links other than the file name. Unable to figure it out, I thought maybe restarting my Mac would help. However, upon logging back into DW, I am now unable to connect to the host server. I get an FTP error notice that the file doesn't exist or there is a permissions problem. The funny thing is, I can connect successfully if I test the connection through the Site Management window. I have connected to my host server through FileZilla and can see all the files there; unfortunately, I still can't get the web pages to display the coupons. Has anyone else had this issue and if so, what is the solution? I feel like this is probably a simple fix, but I cannot for the life of me determine what it is! If anyone knows a solution, I'd really appreciate the help! -Toni

    Read the article

  • (0xC03A0014) Failed to add device 'Microsoft Virtual Hard Disk'

    - by maniargaurav
    We had a Windows Server 2008 SP2 machine. It crashed due to a motherboard problem. After we got the new motherboard we installed Windows Server 2008 R2. Now when we try to attach the old VHD file we get the following error: Failed to add device 'Microsoft Virtual Hard Disk'. Cannot open attachment 'D:\Test\test.vhd'. Error: 'A virtual disk support provider for the specified file was not found.' 'TestVM': Cannot open attachment 'D:\Test\test.vhd'. Error: 'A virtual disk support provider for the specified file was not found.' (0xC03A0014). (Virtual machine ID 5626AAB2-C21C-48FF-8B70-40671CBC573B)

    Read the article

  • Fonts don't render in Chrome or IE on Windows Server 2008

    - by Martin Carlsson
    When I visit, for example, http://www.bolagsverket.se from a user account on this Windows 2008 Server, Chrome displays the site but all the text is gone. When I try in IE it's even worse; it doesn't even load the page, I just end up with this: http://dl.pixelstore.se/image/0y1f0y0w1J39 The fonts used for this site are (from CSS): font-family:Frutiger,Frutiger Linotype,Univers,DejaVu Sans Condensed,Liberation Sans,Nimbus Sans L,Geneva,Helvetica Neue,Helvetica,Arial,Tahoma,sans-serif If I edit the CSS through Chrome Developer Tools and erase all the fonts listed before Arial, it suddenly works. The strange thing is that everything works fine from the administrator account; it's just (all) the user accounts that don't work. My guess is that Chrome/IE is asking for the fonts but somehow they are restricted in the user account. Instead of just ignoring the fonts they can't find, they try to render them anyway. Any clue?

    Read the article

  • Cannot connect to a SQL Server 2008 named instance hosted in an Azure virtual machine

    - by emardini
    When I try to connect to a named instance of SQL Server hosted in an Azure VM I get this message: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1) The problem is that the SQL Browser service is not working properly: when I start it, it closes after a few seconds and the event log says "There are no instances of SQL Server or SQL Server Analysis Services." But I do have a named instance, and I can connect locally to it. I've re-installed SQL Browser and the instance, but it does not work. The host is an Azure virtual machine running Windows Server 2008 Datacenter. Please help. Thank you

    Read the article

  • Windows Server Backup on 2008 R2 - what is generating all the change data?

    - by bobjandal
    We have a small, relatively idle Windows Server 2008 R2 installation that does basic file sharing and Exchange for about 10 not very active users. When running a Windows Server Backup, the daily incremental data is about 20GB. This is not coming from users' shared files, nor from changes in their mailbox sizes. The total size of the installation is 249GB, which is mostly old files. Where is all this data coming from, and how can I reduce it? Using online backup of the VHD file produced by the backup is taking a while because of this daily change. Is there some way I can at least see what files are changing and contributing to this data? Options I can think of but am not sure about: 1) pagefile churning - although the backup does not include the pagefile, perhaps the changed blocks left behind are included? 2) logs or something? but the installation size stays the same every day 3) should I zero free space using sdelete before backing up perhaps?

    Read the article

  • I have a server running Windows 2008 R2 Core and it needs to host either SVN or Git

    - by Jason Adams
    The server allocated as the source repository for our cross-platform (both Mac & PC) projects is running Win2008R2 Core. We're really happy with its stability and we aren't interested in moving over to non-Core. We need to get either SVN or Git installed on the aforementioned box in the fewest possible steps. We know the advantages/disadvantages of both systems. That being said, we don't care which one we use; we're just looking for the path of least resistance to setting up a repository on a machine running R2 Core.

    Read the article

  • Introducing SSIS Reporting Pack for SQL Server code-named Denali

    - by jamiet
    In recent blog posts I have introduced the new SSIS Catalog that is forthcoming in SQL Server code-named Denali: What's new in SSIS in Denali; Introduction to SSIS Projects in Denali; Parameters in SSIS In Denali; SSIS Server, Catalogs, Environments and Environment Variables in SSIS in Denali. The SSIS Catalog is responsible for executing SSIS packages and also for capturing the metadata from those executions. However, at the time of writing there is no mechanism provided to view, analyse and drill into that metadata, and that is the reason that I am, in this blog post, introducing a suite of SSIS Catalog reports called the SSIS Reporting Pack, which you can download from my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. In this first release the SSIS Reporting Pack includes five reports: Catalog – a high-level summary of all activity in the Catalog; Folders – a summary of activity in each Catalog Folder; Folder – project-level activity for a single Folder; Executions – a visualisation of all executions per Folder/Project/Package/Environment or a subset thereof; Execution – information about an individual execution. Here is a screenshot of the Executions report: Notice that the SSIS Reporting Pack provides a visual overview of all executions in the Catalog. Each execution is represented as a bar on the bar chart, the success or otherwise of each execution is indicated by the colour of the bar and the execution time is indicated by the bar height. I have recorded a video that gives an overview of the SSIS Reporting Pack, which I have embedded below. If you are having any trouble viewing the video go see it at http://vimeo.com/17617974. I must stress that this is a very early version of the SSIS Reporting Pack and I am expecting it to change a lot over the coming year. I am very keen to get some feedback about this, specifically: let me know if anything does not work as you expect, and give me your feature requests. The easiest way to get hold of me for now is within the comments section of this blog post. That’s all for now. I hope the SSIS Reporting Pack proves useful and I look forward to hearing your feedback. Lastly, that download link again: http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. @jamiet

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked. It is interesting that in our daily life all of us often need the same kind of information at the same time. Here are examples of such questions: How many user-created tables are there in the database? How many non-clustered indexes does each of the tables in the database have? Is a table a heap or does it have a clustered index on it? How many rows does each of the tables in the database contain? I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video. SELECT [schema_name] = s.name, table_name = o.name, MAX(i1.type_desc) ClusteredIndexorHeap, COUNT(i.TYPE) NoOfNonClusteredIndex, p.rows FROM sys.indexes i INNER JOIN sys.objects o ON i.[object_id] = o.[object_id] INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id] LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1) LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1) WHERE o.TYPE IN ('U') AND i.TYPE = 2 GROUP BY s.name, o.name, p.rows ORDER BY schema_name, table_name Related Tips in SQL in Sixty Seconds: Find Row Count in Table – Find Largest Table in Database Find Row Count in Table – Find Largest Table in Database – T-SQL Identify Numbers of Non Clustered Index on Tables for Entire Database Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats Index Levels and Delete Operations – Page Level Observation What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
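
    For readability, here is the same script from the post laid out as a block; it is a straight re-indentation of the query in the paragraph above, with the logic unchanged.

        -- Returns schema name, table name, clustered index / heap status,
        -- non-clustered index count and row count (from sys.partitions) for user tables.
        SELECT  [schema_name] = s.name,
                table_name    = o.name,
                MAX(i1.type_desc) ClusteredIndexorHeap,
                COUNT(i.TYPE)     NoOfNonClusteredIndex,
                p.rows
        FROM    sys.indexes i
                INNER JOIN sys.objects    o  ON i.[object_id] = o.[object_id]
                INNER JOIN sys.schemas    s  ON o.[schema_id] = s.[schema_id]
                LEFT JOIN  sys.partitions p  ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0, 1)
                LEFT JOIN  sys.indexes    i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0, 1)
        WHERE   o.TYPE IN ('U')
                AND i.TYPE = 2
        GROUP BY s.name, o.name, p.rows
        ORDER BY schema_name, table_name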

    Read the article

  • DNS to \\Server\ wrong - \\Server.company.local\ works fine

    - by JimmyClif
    I had a little network glitch and since then one of my servers shows up wrong at some workstations when typing in \\server\. Example: On workstationA I go to Explorer and and type \\server\ and it brings me to our copier at 192.168.2.101. \\server.company.local\ gets me to the right place at 192.168.2.252. Ping with server pings 192.168.2.252 - same correct result with ping server.company.com nslookup also shows correct result with both. reverse lookup by ip is correct also. I flush the DNS on the workstation and the error still occurs. reboot same result. At that point I give up and start remapping the shares to \\server.company.local\share just to get the user back working... DNS Server has correct entries for that server. Can access the server via \\server\ on dns server, all looks fine. Eventually the workstation figures it out by itself and \\server\ works again but my life wouldn't be as stressful if I had a clue what happened or how to fix it myself. Thanks for your time looking and answering.

    Read the article

  • Cannot Install/Start MySQL Server

    - by Peezy Bro
    Okay, I decided to migrate from MySQL Server 5.5.37 to Percona Server 5.6. I ended up removing MySQL Server by the following: sudo apt-get --purge remove mysql-server mysql-server-5.5 mysql-server-core-5.5 mysql-client mysql-client-core-5.5 mysql-common sudo apt-get autoremove sudo apt-get autoclean rm -rf /var/lib/mysql rm -rf /etc/mysql Now here is my problem, when I try to install MySQL Server 5.6 it goes through its process and when it asks me for a password, it comes up with Cannot set MySQL "root" password. After it "installs" MySQL wont start up and I get permission denied?. Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded. brandon@brandon-DB:~$ sudo apt-get install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5 Suggested packages: libmldbm-perl libnet-daemon-perl libplrpc-perl libsql-statement-perl tinyca mailx The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5 0 upgraded, 10 newly installed, 0 to remove and 35 not upgraded. Need to get 0 B/8,955 kB of archives. After this operation, 96.3 MB of additional disk space will be used. Do you want to continue? [Y/n] y Preconfiguring packages ... Selecting previously unselected package mysql-common. (Reading database ... 167760 files and directories currently installed.) Preparing to unpack .../mysql-common_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libmysqlclient18:amd64. Preparing to unpack .../libmysqlclient18_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libdbi-perl. Preparing to unpack .../libdbi-perl_1.630-1_amd64.deb ... Unpacking libdbi-perl (1.630-1) ... Selecting previously unselected package libdbd-mysql-perl. Preparing to unpack .../libdbd-mysql-perl_4.025-1_amd64.deb ... Unpacking libdbd-mysql-perl (4.025-1) ... Selecting previously unselected package libterm-readkey-perl. Preparing to unpack .../libterm-readkey-perl_2.31-1_amd64.deb ... Unpacking libterm-readkey-perl (2.31-1) ... Selecting previously unselected package mysql-client-core-5.5. Preparing to unpack .../mysql-client-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-client-5.5. Preparing to unpack .../mysql-client-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-core-5.5. Preparing to unpack .../mysql-server-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-5.5. (Reading database ... 168116 files and directories currently installed.) Preparing to unpack .../mysql-server-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... 
Unpacking mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server. Preparing to unpack .../mysql-server_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-server (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for ureadahead (0.100.0-16) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Setting up libdbi-perl (1.630-1) ... Setting up libdbd-mysql-perl (4.025-1) ... Setting up libterm-readkey-perl (2.31-1) ... Setting up mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing package mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin (2.19-0ubuntu6) ... No apport report written because the error message indicates its a followup error from a previous failure. Processing triggers for ureadahead (0.100.0-16) ... Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have all my database/tables dumped and on a seperate HDD. This is also a Dev Machine and not my main Production Machine. I also backed up the MySQL_Config and MySQL_Data.

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
    A DBA wears multiple hats and in fact does more than what the eye can see. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world: even if they seem cool with all the scripting, automation and maintenance work running round the clock to keep the business working almost 365 days, 24×7, their worth shows on the day the system / HDD crashes and you have an important delivery to make. On that day, the backup tasks and maintenance jobs that have been done come in handy and are no longer as trivial as many consider them to be. Important questions like "When was the last backup taken?", "How much time did the last backup take?" and "What type of backup was taken last?" are tricky ones, and this report lands answers to them in a jiffy. The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information and provides details about the size, the time taken and also the file location for those operations. Here is how this report can be launched. Once we launch this report, we can see 4 major sections, as listed below: Average Time Taken For Backup Operations; Successful Backup Operations; Backup Operation Errors; Successful Restore Operations. Let us look at each section next. Average Time Taken For Backup Operations: Information shown in the "Average Time Taken For Backup Operations" section is taken from the backupset table in the msdb database. Here is the query and the expanded version of that particular section. USE msdb; SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1, 1 AS l2, 1 AS l3, t1.TYPE AS [type], (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration FROM backupset t1 INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name) WHERE t3.name = N'AdventureWorks2014' GROUP BY t1.TYPE ORDER BY t1.TYPE On my small database the time taken for the differential backup was less than a minute, hence a value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows. Successful Backup Operations: Here is the expanded version of this section. This information is derived from various backup tracking tables in the msdb database. Here is a simplified version of the query which can be used separately as well. SELECT * FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name) LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id) WHERE (t1.name = N'AdventureWorks2014') ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name; The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking on (+) in the "Device type" column. That has shown me the path of the physical backup file. Personally, looking at this section, the Backup Size, Device Type and Backup Name are critical and worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it. Backup Operation Errors: This section of the report gets data from the default trace. You might wonder how.
One of the events tracked by the default trace is "ErrorLog". This means that whatever message is written to the errorlog gets written to the default trace file as well. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic. No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled. Successful Restore Operations: This section may not be very useful on a production server (do you perform restores of databases there?) but it might be useful in development and log shipping secondary environments, where we might be interested to see restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I restored the same backups which were taken to populate the earlier sections. Here is the simplified version of the query used to populate this output. USE msdb; SELECT * FROM restorehistory t1 LEFT OUTER JOIN restorefile t2 ON ( t1.restore_history_id = t2.restore_history_id) LEFT OUTER JOIN backupset t3 ON ( t1.backup_set_id = t3.backup_set_id) WHERE t1.destination_database_name = N'AdventureWorks2014' ORDER BY restore_date DESC, t1.restore_history_id, t2.destination_phys_name Have you ever looked at the backup strategy of your key databases? Are they in sync and is there scope for improvement? Then this is the report to analyze after a week or month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
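
    For reference, here is the average-backup-duration query from the "Average Time Taken For Backup Operations" section above, re-indented from the text with the logic unchanged.

        -- Average backup duration, in minutes, per backup type, from msdb's backupset table.
        -- N'AdventureWorks2014' is the database selected in the report; swap in your own.
        USE msdb;
        SELECT  (ROW_NUMBER() OVER (ORDER BY t1.TYPE)) % 2 AS l1,
                1 AS l2,
                1 AS l3,
                t1.TYPE AS [type],
                (AVG(DATEDIFF(ss, backup_start_date, backup_finish_date))) / 60.0 AS AverageBackupDuration
        FROM    backupset t1
                INNER JOIN sys.databases t3 ON (t1.database_name = t3.name)
        WHERE   t3.name = N'AdventureWorks2014'
        GROUP BY t1.TYPE
        ORDER BY t1.TYPE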

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on the SafePeak, which is available for free download. Today, I’d like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. Safepeak’s software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I’d like to look more closely at some of these options, as some of these capabilities could help you address lagging database and performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual “Basic Caching” vs. SafePeak’s “Dynamic Caching”. Basic Caching Basic Caching (or the stale and static cache) is an ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified due to DML commands (update/insert/delete), the cache will still hold the same obsolete query data. Meaning that with the Basic Caching is really static / stale cache.  As you can tell, this approach has its limitations. Dynamic Caching Dynamic Caching (or the non-stale cache) is an ability to put the results from a query into cache while maintaining the cache transaction awareness looking for possible data modifications. The modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands complex cases of stored procedures with multiple levels of internal stored procedures logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) become the actual data updates commands. Now that we have a basic understanding of the differences between “basic” and “dynamic” caching, let’s dive in deeper. SafePeak: A comprehensive and versatile caching platform SafePeak comes with a wide range of caching options. Some of SafePeak’s caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and Data managers to reach excellent performance acceleration and application scalability for  a wide range of business cases and applications. Automated caching of SQL Queries: Fully/semi-automated caching of all “read” SQL queries, containing any types of data, including Blobs, XMLs, Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries, categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures; Automated caching of Stored Procedures: Fully or semi-automated caching of all read” stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. 
All procedures are analyzed in advance by SafePeak’s  Metadata-Learning process, their SQL schemas are parsed – resulting with a full understanding of the underlying code, objects dependencies (tables, views, functions, sub-procedures) enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for cache real-time invalidation; Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs); Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response time value even in most complicated use-cases; Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as “read and deterministic” are automatically activated for caching; Semi-Automated Caching: SQL Patterns categorized as “Read and Non deterministic” are patterns of SQL queries and stored procedures that contain reference to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator and in usually most of them are activated manually for caching (point and click activation); Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of “write” commands (a DML or a stored procedure) to one of relevant tables. A default setting; Semi Dynamic Caching: A manual cache configuration option enabling reducing the sensitivity of specific SQL Patterns to “write” commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archive order details), or when the application sensitivity to fresh data is not critical and can be stale for short period of time (gaining better performance and reduced load); Scheduled Cache Eviction: A manual cache configuration option enabling scheduling SQL Pattern cache eviction based on certain time(s) during a day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time sensitive. Example: “select customers that today is their birthday”, an SQL with getdate() function, which can and should be cached, but the data stays relevant only until 00:00 (midnight); Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to too complex dynamic SQL or unfamiliar syntax), are signed as “Dynamic Objects” with highest transaction safety settings (such as: Full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness, provides easy configuration interface, allowing you to improve cache hits and reduce cache global evictions. Usually this is the first configuration in a deployment; Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, in case a stored procedure SP1 has an “insert” into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a “logging or instrumentation” table left by developers. 
By overriding the settings a user can allow caching of the problematic stored procedure; Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedure (with lists of parameters) for periodically automated pre-fetching and caching. An advanced tool allowing you to handle more rare but very performance sensitive queries pre-fetch them into cache allowing high performance for users’ data access; Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL Analytics and Performance Monitoring. Reduce troubleshooting from days to minutes with database objects and SQL Patterns heat-map. The performance driven configuration helps you to focus on the most important settings that bring you the highest performance gains. Use of SafePeak SQL Analytics allows continuous performance monitoring and analysis, easy identification of bottlenecks of both real-time and historical data; Cloud Ready: Available for instant deployment on Amazon Web Services (AWS). As you can see, there are many options to configure SafePeak’s SQL Server database and application acceleration caching technology to best fit a lot of situations. If you’re not familiar with their technology, they offer free-trial software you can download that comes with a free “help session” to help get you started. You can access the free trial here. Also, SafePeak is available to use on Amazon Cloud. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Use ROLL UP Clause instead of COMPUTE BY

    - by pinaldave
    Note: This upgrade test was performed on a development server using bits of SQL Server 2012 RC0 (which was publicly available at the time). However, SQL Server RTM (GA on April 1) is expected to behave similarly. I recently observed an upgrade from SQL Server 2005 to SQL Server 2012 with the compatibility level set to SQL Server 2012 (110). After upgrading the system and testing the various modules of the application, we quickly observed that a few of the reports were not working; they were throwing errors. When I looked carefully I noticed they were using the COMPUTE BY clause, which is deprecated in SQL Server 2012. The COMPUTE BY clause is replaced by the ROLLUP clause in SQL Server 2012. However, there is no direct replacement of the code; users have to re-write quite a few things when using ROLLUP instead of COMPUTE BY. The primary reason is how each of them returns results: the original COMPUTE BY code produced many result sets, whereas ROLLUP returns a single one. Here is an example of similar code using ROLLUP and COMPUTE BY. I personally find ROLLUP much easier than COMPUTE BY as it returns all the results in a single result set, unlike the other one. Here is the quick code which I wrote to demonstrate the said behavior. CREATE TABLE tblPopulation ( Country VARCHAR(100), [State] VARCHAR(100), City VARCHAR(100), [Population (in Millions)] INT ) GO INSERT INTO tblPopulation VALUES('India', 'Delhi','East Delhi',9 ) INSERT INTO tblPopulation VALUES('India', 'Delhi','South Delhi',8 ) INSERT INTO tblPopulation VALUES('India', 'Delhi','North Delhi',5.5) INSERT INTO tblPopulation VALUES('India', 'Delhi','West Delhi',7.5) INSERT INTO tblPopulation VALUES('India', 'Karnataka','Bangalore',9.5) INSERT INTO tblPopulation VALUES('India', 'Karnataka','Belur',2.5) INSERT INTO tblPopulation VALUES('India', 'Karnataka','Manipal',1.5) INSERT INTO tblPopulation VALUES('India', 'Maharastra','Mumbai',30) INSERT INTO tblPopulation VALUES('India', 'Maharastra','Pune',20) INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nagpur',11 ) INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nashik',6.5) GO SELECT Country,[State],City, SUM ([Population (in Millions)]) AS [Population (in Millions)] FROM tblPopulation GROUP BY Country,[State],City WITH ROLLUP GO SELECT Country,[State],City, [Population (in Millions)] FROM tblPopulation ORDER BY Country,[State],City COMPUTE SUM([Population (in Millions)]) BY Country,[State]--,City GO After writing this blog post I continue to feel that there should be some better way to do the same task. Is there any easier way to replace COMPUTE BY? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
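
    Pulled out of the paragraph above for side-by-side comparison, these are the two demonstration queries from the post, re-indented but otherwise unchanged.

        -- ROLLUP: returns the detail rows plus subtotal rows in a single result set;
        -- this is the form that works after the upgrade to SQL Server 2012.
        SELECT  Country, [State], City,
                SUM([Population (in Millions)]) AS [Population (in Millions)]
        FROM    tblPopulation
        GROUP BY Country, [State], City WITH ROLLUP
        GO

        -- COMPUTE BY: returns multiple result sets; this is the deprecated form that
        -- the post found throwing errors after the upgrade to SQL Server 2012.
        SELECT  Country, [State], City, [Population (in Millions)]
        FROM    tblPopulation
        ORDER BY Country, [State], City
        COMPUTE SUM([Population (in Millions)]) BY Country, [State] --,City
        GO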

    Read the article

  • BPA scan did not complete for one or more servers

    - by Hossein Aarabi
    In Windows Server 2012 RTM, I am trying to run the BPA. It fails, saying "BPA scan did not complete for one or more servers". Try #1: Try #2: So, I decided to enable Turn on Script Execution (with Allow all scripts) in the Local Group Policy Editor; now I get a very nice exception message :) Clicking on the Ignore button, BPA logs the following error message: Try #3: So, I decided to go ahead and set the execution policy for all the scopes in PowerShell to unrestricted. Again no luck. What is going on?

    Read the article

  • Firewall predefined rule property cannot be modified

    - by Sami-L
    Using a standalone Windows Server 2012 Standard edition (no Active Directory), I tried to set up a simple Remote Desktop connection with a custom port number, but could not modify the port number in the firewall inbound rule. When I open the inbound rule's properties I get the following message: "This is a predefined rule and some of its properties cannot be modified". I have tried to set it up like this: New rule - predefined drop-down list - Remote Desktop - check mark rules - Allow the connection, but I still get "This is a predefined rule and some of its properties cannot be modified". Thank you in advance.

    Read the article

  • Remote Desktop event ID 20499. No noticeable issues

    - by Marc05
    I get a warning event with ID 20499 for TerminalServices-RemoteConnectionManager. The message is: Remote Desktop Services has taken too long to load the user configuration from server \server.domain.home for user administrator. Yet, I don't see any issues (I'm guessing because that user is local to the machine). Why am I getting this warning? I'm on Windows Server 2012 R2, connecting from Windows 8.1.

    Read the article

  • How to install ADFS 3.0 in standalone mode?

    - by user18044
    I've installed Windows Server 2012 R2 and enabled the ADFS (3.0?) feature. After installation, it asks to configure ADFS, but this step requires a user account that is a domain administrator, as it wants to create certificate containers and SPN records. In ADFS 2.0, you could install in standalone mode, which required only local admin rights and stored everything in the Windows Internal Database. Is this still possible with the latest version? If so, how do I configure ADFS in standalone mode?

    Read the article

  • ARR troubleshooting 502.3 / WinHttp tracing on Server 2012

    - by nachojammers
    I have the following scenario: 3 windows server 2012 virtual servers, all with IIS 8: 1 server with Application Request Routing 3 2 servers with the web applications that the ARR server routes to I am getting intermittent 502 3 12002 errors. Following this guide http://www.iis.net/learn/extensions/troubleshooting-application-request-routing/troubleshooting-502-errors-in-arr I have identified that I need to trace using netsh the WinHttp/WebIO providers to get to the real error code that is mapped to the 12002 error code. I run the trace as the article suggests: netsh trace start scenario=internetclient capture=yes persistent=no level=verbose tracefile=c:\temp\net.etl When analysing the output of the netsh traces, I don't get the level of information that the article suggests I should. Specifically I only get the following types of entry in the trace viewed using netmon: WINHTTP_MicrosoftWindowsWinHttp:Stopping WorkItem Thread Action... WINHTTP_MicrosoftWindowsWinHttp:Starting WorkItem Thread Action... WINHTTP_MicrosoftWindowsWinHttp:Queue Overlapped IO Thread Action... I certainly don't get anything detailed enough that would help me understand why am getting any timeouts. Is there any reason why Server 2012 wouldn't trace the WinHttp API to the level I need? Thanks

    Read the article

  • Fix: Connections to SQL Server 2005 on Windows Vista suddenly stop working

    - by NTulip
    On my Vista machine at work, applications and SQL Server Management Studio work fine connecting to SQL Server 2005. Sometimes they are OK for weeks at a time, sometimes for hours, and then they stop connecting. I've tried everything to get it to work, including the installation of SP2 and running the user provisioning tool, without any luck. The only way to fix it was to restart. The Error: Connections are refused with the standard error message: Cannot connect to SERVER_NAME\INSTANCE_NAME ------------------------------ ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476 The Fix: Stop and restart the SQL Server Browser, SQL Server Integration Services, and SQL Server Active Directory Helper services. Works like a charm.

    Read the article

  • EFI vs MBR - Installing Windows Server 2008 R2 or 2012 on 8TB

    - by Riaan de Lange
    I'm having some difficulty installing Windows Server 2008 R2 and Windows Server 2012 on an Intel server platform. The server specs are as follows: Intel Grizzly Pass Server System - R2308GZ4GC 2x Intel Xeon 2620 - 2.0 GHZ - BX80621E52620 132 GB of Memory REG-DIMM - TS1GKR72V6H 4x Seagate Constellation ES 2TB 3.5" 7200rpm 6GB/S - ST32000645NS Intel Big Laurel 4CH 6G SAS RAID 512MB - RS2BL040 In the Intel RAID Controller setup, I have configured the HDDs in RAID-0 for testing purposes (they will ultimately be configured in RAID-5). So the total amount of HDD space I can use is roughly 7.6 TB. When I install the server OSes, they don't seem to go beyond 2 TB (1.76 TB). I have read up on EFI and UEFI boot, and this seems to work in 2012, but I could not install any drivers for the motherboard... So I also tried EFI for 2008 R2, and this worked while installing the OS; it did not, however, work with the Windows Boot Manager option in the BIOS. It kept freezing once it tried to load the partition. My idea was to allocate the complete 8 TB for the OS and load a few VMs on there. I have now started with a new approach where I'll have a 256 GB OS partition and a secondary 7.5 TB data partition. Oh, and I also did a diskpart convert-to-GPT on the disk while installing 2008 R2, after which the whole disk, 7.6 TB, was accessible. Can anyone please clarify whether EFI/UEFI is meant for larger boot volumes, bigger than 2 TB? Would the ideal situation be to run my OS on a 256 GB SSD and attach the 8 TB of drives as a normal data disk to the OS? Am I correct in saying that if I wanted to boot from an 8 TB partition, I would need to force the BIOS to boot from EFI? The limit for MBR is 2 TB as far as I know. *FYI: The motherboard is EFI-ready

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd chose "Remove node from a SQL Server failover cluster" and remove the single, failed, node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage

    Read the article

  • Cannot connect to windows server by name over vpn connection

    - by ErocM
    I have a rented dedicated windows server on a public ip that is acting as a SQL Server and VPN server. I need to connect to this server via computer name to get replication in place. I cannot use an ip address due to this issue: So, due to this, we are going the VPN route. That is my primary issue: After I am connected to this server's vpn, I can connect to SQL Server using the ip address but I cannot connect by the computer's name as you can see below... Right now, there is no hardware firewall on it since I had it removed to test this issue. I am running Windows 2008 Enterprise Server as the VPN server. I am not sure if the route print will help any from the workstation trying to connect but here is the info: IPv4 Route Table Active Routes: Network Destination Netmask Gateway Interface Metric 10.0.0.0 255.0.0.0 10.0.0.1 10.0.0.2 21 10.0.0.2 255.255.255.255 On-link 10.0.0.2 276 Any other info needed? Thanks for the help! ========= CLARIFICATION ON A FEW THINGS #1 ========= This is the server's info: This is the workstation that is trying to connect: I connect to the server via "Control Panel\Network and Internet\Network and Sharing Center\Connect or Disconnect" You can see here that I am connected: ========= CLARIFICATION ON A FEW THINGS #2 ========= I've tried to connect directly to the Sql Server as I did above but with the computers name and I couldn't get to it. Here I am trying to net view it from the workstation and it couldn't find it:

    Read the article

  • System Center 2012 VMM UI is very slow

    - by Grant
    I've recently set up System Center 2012 on a new Server 2008 R2 server which I'm using for virtual machines. Everything seems to be working fine, and the virtual machines are nice and fast. But the Virtual Machine Manager interface is always excruciatingly slow, sometimes taking up to 15 seconds moving between screens. It's very frustrating trying to use it when a task that just involves a couple of clicks ends up taking several minutes. Pages that have a lot of form fields seem to take the longest to load, such as the page to change the hardware settings of a virtual machine. Is this just normal performance for VMM? If not, where can I look to find what is slowing it down? Nothing else on the system seems to suffer. I can load and use Hyper-V Manager with no noticeable slowness. Even programs like Event Viewer that are usually rather slow seem to load fairly fast. Only the System Center programs seem slow. The server is a Dell R710, 2x 16-core Opteron 6274 processors, 96GB RAM. The OS drive is 2x 500GB 7.2k RPM SAS drives in RAID1 (I opted for the less expensive 7.2k drives since pretty much everything is stored on the SAN). Am I just being impatient? Does anyone else use VMM 2012 and find it slow?

    Read the article

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment probl

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application. Our set-up is an ASP.NET/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We've recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us), and whereas we had none of these issues with the previous release, the added complexity that the new upgrade brings to the product has caused a lot of issues. We are also running SQL Server 2000 (build 8.00.2039, or SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via a TCP/IP connection. Primarily, the exceptions being thrown are: System.IndexOutOfRangeException: Cannot find table 0. System.ArgumentException: Column 'password' does not belong to table Table. [This exception occurs in the login script, even though there is clearly a password column available] System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. [This one is occurring very regularly] System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable. System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting. System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate. We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut is telling me it's ASP.NET - SQL Server 2000 connection issues. We've pretty much ruled out code-level Data Access Layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's a specific production server environment issue.

    Read the article
