Search Results

Search found 102772 results on 4111 pages for 'sql server 2008'.

  • Update SQL table from values in Excel

    - by user175084
    I am using SQL Server Developer or SQL Server Express. How do I get the values from an Excel sheet and update a column of my database with them? I have this query and I'm running it, but I get an error: SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\books.xls', 'SELECT * FROM [Sheet1$]'). The error is: OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" returned message "Could not find installable ISAM.". Thanks.
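    The "Could not find installable ISAM" error usually means the Jet Excel driver is missing or the connection string is malformed; on a 64-bit instance the 32-bit Jet provider is not available at all. A minimal sketch of an alternative using the ACE provider, assuming the Access Database Engine redistributable is installed and that ad hoc distributed queries may be enabled on this instance (table and column names in the UPDATE are hypothetical):

        -- One-time setup: allow OPENROWSET/OPENDATASOURCE ad hoc queries (requires sysadmin).
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

        -- Read the sheet through the ACE OLE DB provider instead of Jet.
        -- HDR=YES treats the first row as column names.
        SELECT *
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'Excel 12.0;Database=C:\books.xls;HDR=YES',
                        'SELECT * FROM [Sheet1$]');

        -- The result set can then drive an UPDATE, e.g. joined to the target table:
        UPDATE t
        SET    t.SomeColumn = x.SomeColumn
        FROM   dbo.MyTable AS t
        JOIN   OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                          'Excel 12.0;Database=C:\books.xls;HDR=YES',
                          'SELECT * FROM [Sheet1$]') AS x
               ON x.KeyColumn = t.KeyColumn;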

  • How to convert a SQL query to LINQ to SQL

    - by name1ess0ne
    I have this SQL query: SELECT News.NewsId, News.Subject, Cm.Cnt FROM dbo.News LEFT JOIN (SELECT Comments.OwnerId, COUNT(Comments.OwnerId) AS Cnt FROM Comments WHERE Comments.CommentType = 'News' GROUP BY Comments.OwnerId) Cm ON Cm.OwnerId = News.NewsId. How can I convert this to a LINQ to SQL query?

  • Dump Microsoft SQL Server database to an SQL script

    - by Matt Sheppard
    Is there any way to export a Microsoft SQL Server database to an SQL script? I'm looking for something that behaves similarly to mysqldump: taking a database name and producing a single script that will recreate all the tables, stored procedures, reinsert all the data, etc. I've seen http://vyaskn.tripod.com/code.htm#inserts, but ideally I want something that recreates everything (not just the data) and works in a single step to produce the final script.

  • Converting SQL Query to LINQ 2 SQL expression

    - by Shyju
    How can I rewrite the SQL query below as its equivalent LINQ to SQL expression (in both C# and VB.NET)? SELECT t1.itemnmbr, t1.locncode, t1.bin, t2.Total FROM IV00200 t1 (NOLOCK) INNER JOIN IV00112 t2 (NOLOCK) ON t1.itemnmbr = t2.itemnmbr AND t1.bin = t2.bin AND t1.bin = 'MU7I336A80'

  • How to use T-SQL MERGE in this case?

    - by abatishchev
    I'm new to the T-SQL MERGE statement. I found a place in my SQL logic where I can use it and want to test it, but I can't figure out exactly how I should use it: IF (EXISTS (SELECT 1 FROM commissions_history WHERE request = @requestID)) UPDATE commissions_history SET amount = @amount WHERE request = @requestID ELSE INSERT INTO commissions_history (amount) VALUES (@amount). Please suggest the proper usage. Thanks!
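    A minimal sketch of the equivalent MERGE, assuming commissions_history has a request column alongside amount (the INSERT in the original appears to omit it):

        -- Upsert a single row keyed on request (MERGE requires a terminating semicolon).
        MERGE commissions_history AS target
        USING (SELECT @requestID AS request, @amount AS amount) AS source
            ON target.request = source.request
        WHEN MATCHED THEN
            UPDATE SET amount = source.amount
        WHEN NOT MATCHED THEN
            INSERT (request, amount) VALUES (source.request, source.amount);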

  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
            Security ID:        NULL SID
            Account Name:       -
            Account Domain:     -
            Logon ID:           0x0
        Logon Type:             3
        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       username
            Account Domain:     hostname
        Failure Information:
            Failure Reason:     Unknown user name or bad password.
            Status:             0xc000006d
            Sub Status:         0xc0000064
        Process Information:
            Caller Process ID:  0x0
            Caller Process Name: -
        Network Information:
            Workstation Name:   JESSE-PC
            Source Network Address: -
            Source Port:        -
        Detailed Authentication Information:
            Logon Process:      NtLmSsp
            Authentication Package: NTLM
            Transited Services: -
            Package Name (NTLM only): -
            Key Length:         0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate, and connecting from home I get a warning about that, but connecting from work I just get the logon error.

  • 500 Internal Server Error after changing .NET Framework Version to 4.0 in IIS7

    - by René
    I just changed the .NET Framework version of the application pools in IIS7 Manager, following these instructions. Now when I try to re-upload my ASP.NET page, it shows me a 500 - Internal Server Error. I have tried uploading it built for .NET 2.0 (x86, x64, AnyCPU) and 4.0 (x86, x64, AnyCPU), and everything gives the same error. This is all the detail the error gives me: "There is a problem with the resource you are looking for, and it cannot be displayed." When keeping the .NET version at 2.0 on the server, it works just fine. Also, when uploading "index.htm", it works fine as well; it just shows the HTML page. This is on Windows Server 2008 R2, by the way.

    EDIT: I have finally found out how to get the error details. Here they are:

        Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list.
        Most likely causes:
            • Managed handler is used; however, ASP.NET is not installed or is not installed completely.
            • There is a typographical error in the configuration for the handler module list.
        Things you can try:
            • Install ASP.NET if you are using managed handler.
            • Ensure that the handler module's name is specified correctly. Module names are case-sensitive and use the format modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule".

    I am sure that I have installed ASP.NET completely. Please help me, -René

  • How to enable telnet to port 3306 during master-to-master replication on MySQL Server

    - by Mainio
    I am trying to set up master-to-master replication in Windows Server 2008. I can successfully replicate all the databases from Master 1 to Master 2, but I am unable to replicate changes made on Master 2 back to Master 1. Later on I found that I can telnet from Master 2 to Master 1 on port 3306, but I am not able to telnet from Master 1 to Master 2. When I check netstat on both masters I get the results below. I can't publish the public IPs, so I have written "Master 1" and "Master 2" in place of the respective addresses.

    Master 1

        C:\Users\XXXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address    State
        TCP    Master 1:3306      Master 2:61566     ESTABLISHED
        TCP    Master 1:3389      My remote:56053    ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60675     ESTABLISHED
        TCP    127.0.0.1:3306     Master 1:60712     ESTABLISHED
        TCP    127.0.0.1:60675    Master 1:3306      ESTABLISHED
        TCP    127.0.0.1:60712    Master 1:3306      ESTABLISHED

    Master 2

        C:\Users\XXXX>netstat
        Active Connections
        Proto  Local Address      Foreign Address    State
        TCP    Master 2:3389      My remote:56124    ESTABLISHED
        TCP    Master 2:61566     Master 1:3306      ESTABLISHED
        TCP    Master 2:61574     bil-sc-cm02:http   ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61562     ESTABLISHED
        TCP    127.0.0.1:3306     Master 2:61563     ESTABLISHED
        TCP    127.0.0.1:61562    Master 2:3306      ESTABLISHED
        TCP    127.0.0.1:61563    Master 2:3306      ESTABLISHED
        TCP    127.0.0.1:61573    Master 2:3306      TIME_WAIT

    This all suggests that on Master 2, port 3306 is not accepting remote connections. How can I fix this? Any suggestion would be much appreciated. Thank you. Regards, Udhyan
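    A few quick checks, as a sketch (run in the mysql command-line client; exact variable names can vary by MySQL version):

        -- On Master 2: confirm the server really listens on 3306 and that binary
        -- logging is on (it must be for this side to act as a master).
        SHOW VARIABLES LIKE 'port';
        SHOW MASTER STATUS;
        -- On Master 1: check the replication thread state and last error for the
        -- Master 2 -> Master 1 direction.
        SHOW SLAVE STATUS;

    If those look right, the usual suspects on Windows are a "bind-address = 127.0.0.1" or "skip-networking" line in my.ini on Master 2, or the Windows Firewall blocking inbound TCP 3306 on that machine.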

  • Server cost for smartphone app with web service

    - by FrankieA
    Hello, I am working on a smartphone application that will require a backend web service, but I am absolutely clueless about how much it will cost.

    The web service will handle:
    - login of users
    - cataloging of our user base
    - holding minimal profile information for users (the only binary data is a display picture, which will be < 20k each)
    - performing some very minor calculation/algorithm before returning results
    - all of the above will be communicated to the server from a smartphone (iPhone/BlackBerry/Android)

    Bandwidth requirements:
    - We want to handle up to 10k users throughout the day.
    - I predict 10k users * 50 HTTP requests a day = 500,000 requests a day * 30 = 15 million requests a month.

    Space requirements:
    - Data will be in a SQL database.
    - I predict 1 MB/user * 10k = 10 GB + overhead. In other words, space is not a big issue.

    Software requirements (unless someone knows an alternative):
    - Windows Server 2008 + IIS
    - Microsoft SQL Server

    Note: this is 100% new to me, so please hit me with all you've got. Do I need Windows Server, or are there alternatives? Is it better to get multiple cheap servers to distribute load? Will Amazon S3 work for me? How about Windows Azure? Thank you!!

  • Windows Server 2008R2 Virtual Lab Activation strategies?

    - by William Hilsum
    I have an ESXi server that I use for testing; however, I often need to create additional Windows Server virtual machines. Typically, if I do not need a VM for more than 30 days, I simply do not activate it. However, I have been doing a lot of HA/DRS testing recently and I have had a few servers up for more than this time. I have an MSDN account with Microsoft and have already received extra keys for Windows Server 2008 R2. I am doing nothing illegal, and I am sure if I asked they would issue more - but I do not want to tempt fate! I have three different "activated" Windows snapshots I can get to at any time. If I try to clone these machines, I get the usual "did you copy or move the VM" message. If I choose copy, as far as I can see, it changes the BIOS ID and NIC MACs, which is enough to disable activation. If I choose move, it keeps the activation fine (obviously, I know to change the NIC MAC - I believe I can leave the BIOS ID without problems). However, either of these options keeps the same SID for the computer and user accounts. After the activation period has expired, as far as I can see, all that happens is that optional updates do not work - the normal updates seem to work fine. Based on this, as you can easily get into Windows when not activated without any sort of workaround, I was wondering if it is OK just to leave a machine unactivated? (However, I would obviously prefer it to be activated!) Alternatively, how dangerous is it to run multiple machines in a non-domain environment with the same SID? I am just interested to know if anyone can recommend a strategy for me. I have only found one solution that deals with bypassing activation, and I am not interested in doing anything remotely dodgy... at a stretch, I am happy to rearm (I have never needed to keep a server past 100 days), but I would rather have a proper strategy in place.

  • Connecting to server from remote machine

    - by Jannat Arora
    I wish to connect my machine to a server in another city. To do so I am using the following command: mstsc -v ip_address_of_server. I get this message:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled.
        2) The remote computer is turned off.
        3) The remote computer is not available on the network.
        Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.

    As per previous posts I need to turn off my client computer's firewall, which I have, but it still gives me the same message. Can someone please help me out with how I can resolve this? I am really new to networking. Also, when I ping the server (ping ip_address_of_server) I get the following response: "Reply from ip_address_of_server: destination host unreachable". I also tried rdesktop on Ubuntu and it still wasn't able to connect. I know there are other people who are able to connect their machines to this server remotely, so I guess it's only not working for me. Also, when I accessed the same machine through the LAN I was able to do so.

  • Gerrit SSH key setup on Windows server

    - by hotpotato
    I am attempting to configure Google's Gerrit code review web app on a Windows Server 2008 virtual machine on our internal network. We are using Apache Tomcat (6.0.36) to host the web app; we have deployed gerrit.war to Tomcat's webapps folder and set up the context.xml, web.xml, etc. for the web app correctly, I believe. However, when I start up Tomcat using $CATALINA_HOME/bin/startup.bat, I get the following message in the Tomcat logs:

        Dec 07, 2012 1:03:54 PM org.apache.catalina.core.StandardContext listenerStart
        SEVERE: Exception sending context initialized event to listener instance of class com.google.gerrit.httpd.WebAppInitializer
        com.google.inject.CreationException: Guice creation errors:
        1) No SSH keys under C:\Gerrit\config\etc
           while locating com.google.gerrit.sshd.HostKeyProvider
           at com.google.gerrit.sshd.SshModule.configure(SshModule.java:90)

    I have created an is_rsa.pub SSH key and placed it in the specified directory, to no avail. I have been googling this for about a week now and can't seem to find any information about the file or format it is expecting... documentation on setting Gerrit up on Windows seems hard to come by! Can anyone provide useful information about how to correctly configure a host SSH key in this context?

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers that are 40 miles apart. Server specification (the same for principal, mirror and witness): SQL Server 2008 Standard Edition, 64-bit. The configuration is high safety with automatic failover. Initially we tested our .NET application (a web application) on both the principal and the mirror and made sure that the login is not orphaned. Things generally run fine, but sometimes on the mirror server I see failed login attempts:

        Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x]
        Error: 18456, Severity: 14, State: 38.

    This error appears 3-4 times a day, but not more than that. My question to the experts is: if the principal is alive, why does the application try to connect to the mirror? The default timeout for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and after 30 seconds, even if the principal is alive, assumes it is dead and therefore tries to open a connection to the mirror, where it fails? Please help me with this problem.
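    A quick check to run on the server that logs the failures, as a sketch: a mirror database stays in a restoring state, so any connection that lands there and explicitly names that database fails with error 18456, state 38. If the role below shows MIRROR at those times, something (a stale connection string, a Failover Partner retry, or a monitoring job) is pointing clients at the mirror rather than the principal.

        -- Show the current mirroring role and state of every mirrored database on this instance.
        SELECT DB_NAME(database_id)  AS database_name,
               mirroring_role_desc,
               mirroring_state_desc
        FROM   sys.database_mirroring
        WHERE  mirroring_guid IS NOT NULL;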

  • Hiding subfolders from users with Windows Server security

    - by Frans
    Using Windows Server 2008. I would like to allow all users to map to a common network drive and be able to browse it, but I only want them to be able to see the subfolders they actually have access rights to. Is this doable?

    Example: I have a share with two folders on it, \\domain\share\FolderA and \\domain\share\FolderB, and three different security groups. I would like to map a network drive for all three to \\domain\share. However, group1 should only be able to see FolderA, group2 should only see FolderB, and group3 should see both. I am not just talking about denying access to the actual folder, which is easy enough; I don't want the user to even be able to see that the folder exists. In other words, when group1 logs in and does "dir N:\" they should see N:\FolderA; when group2 logs in, they should see N:\FolderB; and when group3 logs in they should see both N:\FolderA and N:\FolderB.

    My half-baked solution: if I completely block access to the root then I can't map a drive to it. I can give everyone the traverse right, which then allows the user to map a drive. However, if a member of group1 or group2 tries to go to N:\ they get an access denied error. If they go to N:\FolderA (for group1) then it works. So that sort of works, but it would be nicer if the user could actually browse to N:\ and only see the subfolders they have access to. I am pretty sure I have seen this done but not sure how to do it myself. Any advice would be greatly appreciated.

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up continuous integration in our office. Being a puny little developer, I am facing this supposedly infamous problem: "Source control operation failed: svn: OPTIONS of 'https://trunkURL': Server certificate verification failed: issuer is not trusted". So I tried the following solution: run the CC.NET service (the server runs as a Windows service) under a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user by running svn log/list on the repo. It doesn't help :(. I am getting the following in my artifact/log files (and the dashboard):

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed: issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks

  • Windows Server 2008 R2 file share file locking, OS X clients

    - by Keith Loughnane
    I've spent the last two weeks banging my head against this wall, and I think I'm starting to understand the problem. I manage a design company; they have 5 Macs (OS X 10.5/10.6/10.7) connected over SMB to a Windows 2008 R2 file server, and another machine functions as a domain controller (that might not matter). All the Macs can connect OK - no issues finding the server or logging in - and for the most part things are fine. The problem is files locking up. I thought it was a permissions issue at first, but it seems to be file locking. The users open a file (.ind, .pdf, etc.); the file opens, the software reads it and closes it. That's fine, but the folder above the folder locks: it can't be moved and it can't be renamed. E.g.:

        /Working/Project01/Imagefiles/image.pdf
        /Finished/

    The user opens image.pdf, closes it, and wants to move the whole Project01 folder into Finished. It shows a username/password dialogue and then does nothing - no error, it just does nothing. Trying to rename gives a dialogue that says you don't have permission. It looks like it's looking for permission locally, which is why I spent about a week looking at that. Eventually I found that Finder on the Macs seems to be keeping the folders open. I can work around it by killing Finder, remounting the shared drive or closing the file through the server manager, but this just proves the theory; it's not a solution. Has anyone dealt with this problem?

  • Slow RDP after server joins domain

    - by Chris Grove
    We're having RDP issues with Amazon cloud servers that we recently joined to an Active Directory domain. The setup is: A local office network A virtual private cloud in Amazon An IPSec tunnel between the two networks A number of Windows 2008 R2 servers on both networks An AD domain (call it abc.net), with one domain controller in each network. The domain controllers are both new, fresh installs. Before we had the domain set up we had local accounts for the cloud computers which were used for RDP access. Our idea was to get all of the servers on to the domain so we could use domain logins instead of per-server local logins. Before the cloud servers were in the domain, RDP (from the office network or through a VPN to the cloud) worked great. After we joined the cloud servers to the domain, RDP from the office became very slow - a few minutes to log in, long frequent pauses when the interface is unresponsive, generally just a slow and frustrating experience. This is a problem regardless of whether a domain or local login is used for RDP. Oddly, when outside of the office network and connecting to the cloud directly with the VPN, RDP is still very responsive. Any idea why RDP from office to cloud is suddenly very slow after the cloud servers join the domain? What can I look at in our configuration to address this? Any help is greatly appreciated.

  • How can I tell why I have access to a file share on Windows Server

    - by Joel
    I have a file share on a Windows 2008 R2 server in an AD domain (call it \\SECURESERVER\STUFF) and I am not sure if I have the share and folder permissions set up right. I noticed the problem when I set up a new server (WORKGROUP\FOREIGNSERVER) that was not joined to the domain and tried to copy some files off of \\SECURESERVER\STUFF. I was surprised to find that when I tried to access the files, it did not prompt me for a username and password and proceeded to give me full access to the files. That worried me, so I tried the same thing on some workstations that were not in the domain and they did NOT have the same behavior (they did prompt for a username/password, as desired/expected). So I think there is something peculiar about FOREIGNSERVER. I am logging into it with a local admin account, but my domain and SECURESERVER should know nothing of this server. I've carefully gone through the share and folder permissions on the share, but I can't find the reason that FOREIGNSERVER has access. How can I find out why FOREIGNSERVER has access to SECURESERVER?

  • Windows Server wbadmin recover with commas

    - by dlp
    I want to recover files with commas in their names from the command line, like so:

        wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwrite:Overwrite -quiet "-Items:C:\Path\To\File, With Comma.txt,C:\Path\To\File 2, With Comma.txt"

    So there are two files:

        C:\Path\To\File, With Comma.txt
        C:\Path\To\File 2, With Comma.txt

    The problem is that wbadmin assumes commas separate the files, so it sees 4 files specified instead of 2. I've tried putting a \ in front of the commas that are part of the file names, like so:

        wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwrite:Overwrite -quiet "-Items:C:\Path\To\File\, With Comma.txt,C:\Path\To\File 2\, With Comma.txt"

    but it doesn't work; it just says there's a syntax error. The documentation on Technet doesn't seem to mention anything that'll help either. The OS is Windows Server 2008 R2. A clarifying comment: I've changed the file names to be different from the actual names to be less revealing, but I also see I dumbed it down too much. The comma can occur either in the file name itself, like C:\Path\To\File, With Comma.txt, or in the path to the file, like C:\Path, To\Other\File.txt.

  • Migrating a Windows Server to Ubuntu Server to provide Samba, AFP and Roaming Profiles

    - by Dan
    I'm replacing our old Windows XP Pro office server with an HP MicroServer running Ubuntu Server 12.04 LTS. I'm not a Linux expert, but I can find my way around a terminal prompt; I'm a Mac user by choice. The office uses a mix of Windows XP Pro machines and OS X Lion laptops. I included Samba during installation, and I'm planning on using Netatalk for the AFP and Bonjour sharing. I'd quite like to have Samba make the server appear in 'My Network Places' on the Windows machines the way Bonjour makes it appear in Finder on the Macs, if this is possible. I want to get to a point where a user logging into Windows gets connected to the Ubuntu server (do they need an Ubuntu user account?), which gets them their shares and their Windows user profile (though a standard profile across users would do). The upshot is to get centralised control of user accounts (e.g. if a person leaves, killing their account on the server stops their Windows logon and their ability to access Samba shares) and to ensure files aren't stored on the individual machines, for backup/security purposes. I want to make this as simple as possible, so I don't want loads of stuff I don't need. I just can't figure out:

    What I need at the server end:
    - will Samba be enough (already installed as part of the initial installation), or will I need to cock around with LDAP (and how does this interact with Samba)?
    - for someone of moderate Linux competence like me, is there a package that offers easy admin of user accounts, e.g. a GUI like phpLDAPadmin (if LDAP is necessary)?

    How to configure the XP machines:
    - do I need to have the XP machines set up as a domain controller (I've no idea, really)?
    - roaming profiles look to offer the feature of putting the user's files on the server rather than the machine itself, along with a profile that follows the user from machine to machine.

    Syncing Mac users' home folders with the server: this is less of a concern because I can set up Time Machine if it comes to it, but I'd appreciate any recommendations on what approach I should take to having the Mac home folders synced to the server.

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft's Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines: one high-end and rather beefy "heavy lifter" to contain the majority of the VMs, and another "backup" (a repurposed desktop) to hold the essential "secondary" (or failover) AD-DC, SQL and FS VMs. The main reason I find the failover cluster at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole: if one machine (physical or virtual) goes down, you might lose some pings and the connections get reset, but the network applications (the Microsoft RMS connection to the backend SQL) can still connect to a viable DB without having to mess around with the settings at all.

    My first question is in terms of SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a failover cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs, but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most of it assumes a number (2) of physical machines to hold not only the actual SQL VMs but also the quorum and witness stores. I will not be able to muster more than two physical machines, so I will have to be satisfied with a VM cluster that does not exceed two VMs (one on each physical machine).

    Another issue involves MSDTC, the Distributed Transaction Coordinator. When attempting to install the SQL failover cluster (I never completed it for this reason) it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but those instructions don't align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions I have found for SQL Server failover cluster installation suggest that a third "network device" - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won't be getting it. Most of my storage exists on the "heavy lifter" that was designed for all of the "primary" VMs. If that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, an SQL server and a file server, so it will handle the "secondary" failover versions of those VMs (clustered or not).

    My final question involves file servers. If I cluster file servers between two VMs (one on my "heavy lifter" and another on my "backup"), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn't replicate data between the two - that is left to the services that serve up that data. I am unsure how I can ensure that file server data between two clustered file server VMs is properly mirrored. Remember, I only have two devices to be used here: my primary machine and a backup secondary. There is no chance of me obtaining a SAN or any other type of network-attached storage, so what exists on the machines must act as the storage. Thanks in advance for any suggestions.
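    On the mirroring option specifically, a minimal sketch of high-safety database mirroring between two standalone instances (names and paths here are hypothetical, and it assumes the mirroring endpoints, logins and a witness have already been configured):

        -- On the mirror instance: restore a full backup (and any log backups) WITH NORECOVERY.
        RESTORE DATABASE SalesDb FROM DISK = N'D:\Backups\SalesDb.bak' WITH NORECOVERY;

        -- Still on the mirror: point it at the principal's mirroring endpoint.
        ALTER DATABASE SalesDb SET PARTNER = N'TCP://heavy-lifter.example.local:5022';

        -- On the principal: point it at the mirror's endpoint to start the session.
        ALTER DATABASE SalesDb SET PARTNER = N'TCP://backup-box.example.local:5022';

    With a witness configured this gives automatic failover at the database level, and clients can use the Failover Partner keyword in their connection strings, which sidesteps the shared-storage (SAN) requirement of a SQL Server failover cluster instance.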

  • Visual Studio 2010 is asking to convert RDLC created on VS2008 to RDLC 2008 format?

    - by Junior Mayhé
    I created my project in Visual Studio 2008, as well as its RDLC files. But now, when I open the solution in Visual Studio 2010 and try to open an RDLC file, it shows me a warning. That's a little funny: the report was created in VS2008 and VS2010 is asking to convert it to the 2008 format. Perhaps there was a problem with my VS2008 installation that created RDLC files using some ancient format (2005??!). The problem is, when you confirm with the OK button, make some design adjustments and run the app, it throws an error on 'Main Report':

        ex.InnerException {"The definition of the report 'Main Report' is invalid."} [Microsoft.Reporting.DefinitionInvalidException]
        Data: {System.Collections.ListDictionaryInternal}
        HelpLink: null
        InnerException: {"The report definition is not valid. Details: The report definition has an invalid target namespace 'http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition' which cannot be upgraded."}
        Message: "The definition of the report 'Main Report' is invalid."
        Source: "Microsoft.ReportViewer.Common"
        StackTrace:
           at Microsoft.Reporting.ReportCompiler.CompileReport(CatalogItemContext context, Byte[] reportDefinition, Boolean generateExpressionHostWithRefusedPermissions, ReportSnapshotBase& snapshot)
           at Microsoft.Reporting.StandalonePreviewStore.StoredReport.CompileReport()
           at Microsoft.Reporting.StandalonePreviewStore.StoredReport.get_Snapshot()
           at Microsoft.Reporting.StandalonePreviewStore.GetCompiledReport(CatalogItemContext context, Boolean rebuild, ReportSnapshotBase& snapshot)
           at Microsoft.Reporting.LocalService.GetCompiledReport(CatalogItemContext itemContext, Boolean rebuild, ReportSnapshotBase& snapshot)
           at Microsoft.Reporting.LocalService.CompileReport(CatalogItemContext itemContext, Boolean rebuild)
           at Microsoft.Reporting.WinForms.LocalReport.CompileReport()
        TargetSite: {Microsoft.ReportingServices.ReportProcessing.PublishingResult CompileReport(Microsoft.ReportingServices.Diagnostics.CatalogItemContext, Byte[], Boolean, Microsoft.ReportingServices.Library.ReportSnapshotBase ByRef)}

  • Entire Table is pushed to the next page when rendering an SSRS 2005 Report (as .pdf) in SSRS 2008

    - by Pwninstein
    I have an SSRS 2005 report that I'm rendering in SSRS 2008 as a .pdf. The report contains (among other things) a table that's very simple: header row, details, no footer, no aggregation, no grouping, KeepTogether = false, PageBreakAtStart = false, PageBreakAtEnd = false, RepeatHeaderOnNewPage = true. I resized the table to be much narrower than the body of the report just to be sure it wasn't extending beyond the bounds of the report and pushing everything down. But no matter what I try, if some of the detail rows in that table would need to be pushed to the next page, then the ENTIRE TABLE is pushed to the next page, not just the extra rows. So my question is: is there a workaround for this problem, is this a known issue, or is it even possible to get this 2005 report to render properly in 2008? NOTE: this is related to a question that I previously asked here, and is based on this MSDN forum post started by a coworker. This question is not the same as my previous question, as I'd like to see things work properly with a 2005 report. If it's not possible, that would be good to know, as it would indicate that we need to upgrade one of our servers to SQL 2008. Thanks!
