Search Results

Search found 87891 results on 3516 pages for 'server migration'.


  • Can I use @table variable in SQL Server Report Builder?

    - by edosoft
    Using MS SQL 2008 Reporting Services: I'm trying to write a report that displays some correlated data, so I thought I would use a @table variable, like so:

        DECLARE @Results TABLE (Number int, Name nvarchar(250), Total1 money, Total2 money)

        INSERT INTO @Results (Number, Name, Total1)
        SELECT number, name, SUM(total)
        FROM table1
        GROUP BY number, name

        UPDATE r
        SET Total2 = s.total
        FROM @Results r
        JOIN (SELECT number, SUM(total) AS total FROM table2 GROUP BY number) s
            ON s.number = r.Number

        SELECT * FROM @Results

    However, Report Builder keeps asking me to enter a value for the variable @Results. Is this at all possible?
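
    One workaround worth sketching here (an editorial addition, not from the question; the objects are the hypothetical table1/table2 from above): Report Builder typically treats any @name it finds in ad-hoc dataset query text as a report parameter, so moving the table-variable logic into a stored procedure and using the procedure as the dataset avoids the prompt.

        -- Hedged sketch: wrap the logic in a stored procedure so the dataset
        -- query text never contains the @Results table variable.
        CREATE PROCEDURE dbo.GetCorrelatedTotals
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @Results TABLE (Number int, Name nvarchar(250), Total1 money, Total2 money);

            INSERT INTO @Results (Number, Name, Total1)
            SELECT number, name, SUM(total)
            FROM table1
            GROUP BY number, name;

            UPDATE r
            SET Total2 = s.total
            FROM @Results r
            JOIN (SELECT number, SUM(total) AS total FROM table2 GROUP BY number) s
                ON s.number = r.Number;

            SELECT Number, Name, Total1, Total2 FROM @Results;
        END

    The report dataset would then simply be: EXEC dbo.GetCorrelatedTotals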

    Read the article

  • Word Document Turns to Read-Only

    - by Psycho Bob
    I am running into an issue with a user whose Word document is somehow turning itself into read-only. The user is using Word 2003 and is accessing a document that is in a Server 2008 share. The document starts out as a normal, editable document (the user has Full Control permissions), and the user is able to save and do the 'normal' things you would do to a document. However, after a couple of saves, the document turns read-only (according to the title bar) even though the read-only attribute is not checked in the document's properties. Here is some additional information about the situation:

    * The user has approximately 5-8 Word documents open at a time
    * The user saves the document frequently (sometimes as often as once per minute)
    * Once the document is closed, it will open as a normal document if reopened
    * When the document does turn read-only, the user does a "Save As" and saves it as FILENAME #, where # is an increment of how many times this has happened (some documents are up to their 30th iteration)

    I understand that there is probably some room for user education here, and that they could just be copying the read-only document to a new one, or closing and reopening the read-only document and copying all the information back. However, I would like to get to the root cause of the problem and try to stop it from happening in the first place.

    UPDATE: Apparently the reinstall did not fix the issue. I researched the issue a bit more and found that disabling the background save may take care of it, but I haven't had a chance to try it yet. Does anyone else have any other ideas?

    Read the article

  • Is it safe to convert varchar and char into nvarchar and nchar in SQL Server?

    - by Svish
    We currently have a number of columns in the database which are of type varchar. The application that uses them is in C#, and it uses Linq2Sql for the communication (or whatever to call it). We would like to support Unicode characters, which means we would have to convert the varchar columns into nvarchar. Is this a safe operation? Is it just a matter of changing the column type and updating the dbml file, or is there more that needs to be done? Any changes in the C# code? Do I need to somehow convert the text that already exists in the database manually, or is that handled for me?
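
    As a hedged sketch (the table and column names below are hypothetical, not from the question), the database-side change is typically a plain column alteration; SQL Server converts the existing single-byte text to Unicode in place, so no manual data conversion is needed, though note that nvarchar uses two bytes per character:

        -- Hedged sketch with a hypothetical table/column, not the poster's schema.
        ALTER TABLE dbo.Customers
            ALTER COLUMN CustomerName nvarchar(100) NOT NULL;

    After the type change, updating the dbml so Linq2Sql maps the column as a Unicode string covers the application side in most cases.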

    Read the article

  • Win2008: Boot from mirrored dynamic disk fails!

    - by Daniel Marschall
    Hello. I am using Windows Server 2008 R2 Datacenter. I have two 1.5 TB S-ATA2 hard disks installed and I want to set up a software RAID (I do know the disadvantages of soft RAID vs. hardware RAID). I have the following partitions on Disk 0:

    (1) Microsoft Reserved, 100 MB (dynamic), created during setup
    (2) System partition, 100 GB (dynamic)
    (3) Data partition, 1.2 TB (dynamic)

    I have already mirrored these contents to Disk 1. Its contents are:

    (1) System partition mirror, 100 GB (dynamic)
    (2) Data partition mirror, 1.2 TB (dynamic)
    (3) Unused, 100 MB (dynamic) -- left over from the "MSR" of Disk 0, created during setup

    Since the data and system partitions are mirrored, I expect my system to keep working if Disk 0 fails. But it doesn't. If I force booting from Disk 0, it works (I get the two-entry boot loader screen). If I force booting from Disk 1 (F8 for BBS), nothing happens: I get a blank black screen with a blinking caret. I have already made Disk 1 / Partition 1 active with diskpart, but it still does not boot from this drive. Please help. Both disks use the "MBR" partition style. They look identical, except for the missing "MSR" partition at the beginning of the disk (which seems not to be relevant to booting). Regards, Daniel Marschall

    Read the article

  • Folder redirection GPO doesn't seem to be working

    - by user57999
    I've been trying to set up roaming profiles and folder redirection, but have hit a bit of a snag with the latter. This is exactly what I've done so far (I have OU permissions and GPO permissions over my division's OU):

    1. Created a group called Roaming-Users in the OU 'Groups'
    2. Added a single user (testuser) to the group
    3. Using the Group Policy Management tool (via RSAT on Windows 7), right-clicked on the Groups OU and selected 'Create a GPO in this domain, and Link it here'
    4. Added my 'Roaming-Users' group to the Security Filtering section of the policy
    5. Added the Folder Redirection option, specifically for Documents. It is set to redirect to \\myserver\Homes$\%USERNAME%\Documents (Homes$ exists and is sharing-enabled)
    6. Right-clicked on the policy under the Groups OU and checked Enforced
    7. Logged into a machine as testuser successfully
    8. Created a simple text file, saved some gibberish, logged off
    9. Remoted into the server with Homes$ on it and noticed that the directory Homes$\testuser was created, but was empty. No text file to be found.

    From what I've read, I did everything I ought to, but I can't quite figure out the issue. I had no errors when I logged off about syncing issues (offline files is enabled) or anything, so I can only imagine my file should have ended up on the share. Any ideas?

    EDIT: Using gpresult /R, I confirmed the user is in fact part of the Roaming-Users group, but does not have the policy applied, if that helps.

    EDIT 2: Apparently you can't apply GPOs to groups, so I applied it to users and used the same security filter to limit it to my test user. Nothing happens as far as redirection goes, but I now have the following error in the event log: "Folder redirection policy application has been delayed until the next logon because the group policy logon optimization is in effect"

    Read the article

  • Team Foundation Server vs. SVN and other source control systems

    - by micha12
    We are currently looking for a version control system to use in our projects. Up to now we have been using VSS, but nowadays more powerful source control systems exist, like TFS, SVN, etc. We are planning to migrate our projects to Visual Studio 2010, so the first idea that comes to mind is to start using TFS 2010. I have never worked with SVN or other version control systems. My question is: how good is TFS compared to other source control systems? Is it a good idea to use it, or should we rather use SVN (or some other system)? Thank you.

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, this question may sound silly, and perhaps a bit insane, but is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries; however, it's being executed with VM-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, but I suspect that's because we supply it with the TFS deployment's URI and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this VM is to join it to the domain or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.

    Read the article

  • Is SQL Server the best DB for storing and comparing images in a database for a small ecommerce application?

    - by iecut
    I have been trying to create a small e-commerce web-based application using the MS .NET Framework. The application will let users store the image of a product that they want to sell or purchase; they will then have the option to upload the image of a particular product and compare that image with the similar images in the database. So my two main concerns are:

    * Is MS SQL a good option to store and compare the images?
    * Is there any other database that can do the same work with less complexity and that is also easy to integrate with the MS .NET Framework?
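
    A hedged note and sketch (editorial, not from the question; the schema below is hypothetical): SQL Server itself only stores the image bytes, typically in a varbinary(max) column (or FILESTREAM for large files); any "compare similar images" logic would run in the .NET application, possibly against a precomputed hash or feature value stored next to the image.

        -- Hedged sketch: hypothetical product-image table. The database stores
        -- the raw bytes plus an application-computed hash; similarity ranking
        -- itself would be done in C#, not in SQL Server.
        CREATE TABLE dbo.ProductImage
        (
            ProductImageID int IDENTITY(1,1) PRIMARY KEY,
            ProductID      int            NOT NULL,
            ImageData      varbinary(max) NOT NULL,
            ImageHash      binary(32)     NULL   -- e.g. SHA-256 computed by the application
        );

        -- Exact-duplicate lookup by a precomputed hash.
        DECLARE @CandidateHash binary(32) = 0x00;
        SELECT ProductImageID, ProductID
        FROM dbo.ProductImage
        WHERE ImageHash = @CandidateHash;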

    Read the article

  • SQL Server: How to call a UDF, if available?

    - by Ian Boyd
    Most systems will have a user-defined function (UDF) available. Some will not. I want to use the UDF if it's there:

        SELECT Users.*, dbo.UserGroupMembershipNames(Users.UserID) AS MemberOfGroupNames
        FROM Users

    Otherwise, fall back to the acceptable alternative:

        SELECT Users.*, (SELECT TOP 1 thing FROM Something WHERE Something.ID = Users.UserID) AS MemberOfGroupNames
        FROM Users

    How can I do this? My first attempt, using the obvious solution, of course failed, for reasons beyond me:

        SELECT Users.*,
            CASE WHEN (OBJECT_ID('dbo.UserGroupMembershipNames') IS NOT NULL)
                 THEN dbo.UserGroupMembershipNames(Users.UserID)
                 ELSE (SELECT TOP 1 thing FROM Something WHERE Something.ID = Users.UserID)
            END AS MemberOfGroupNames
        FROM Users
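
    An editorial aside on why the CASE attempt fails, with a hedged workaround (same hypothetical objects as in the question): SQL Server binds every object referenced in a statement when it compiles the whole statement, so the missing UDF is an error even though that CASE branch would never execute. Choosing the statement text at run time avoids compiling the UDF reference on systems that lack it:

        -- Hedged sketch: build the query as dynamic SQL so the UDF reference
        -- is only compiled when the function actually exists.
        DECLARE @sql nvarchar(max);

        IF OBJECT_ID('dbo.UserGroupMembershipNames') IS NOT NULL
            SET @sql = N'SELECT Users.*, dbo.UserGroupMembershipNames(Users.UserID) AS MemberOfGroupNames FROM Users;';
        ELSE
            SET @sql = N'SELECT Users.*, (SELECT TOP 1 thing FROM Something WHERE Something.ID = Users.UserID) AS MemberOfGroupNames FROM Users;';

        EXEC sys.sp_executesql @sql;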

    Read the article

  • How do I record streams in chunks on Flash Media Server?

    - by Vasil
    I want to record a stream which is published with Flash Live Encoder to FMS 3.5, but split the recording into files of a predefined length. For example, if a stream 'webcam' is published, I want to record it in chunks of 10 minutes: 'webcam1.flv', 'webcam2.flv', ... From what I can tell, there's no facility to work with timers. The only solution I could think of was using stream.record() with a time limit parameter, but that seems like a hack because it triggers NetStream.Record.DiskQuotaExceeded on the stream when the recording should stop and start recording another chunk. Has anyone done something similar?

    Read the article

  • WinHttpCertCfg not importing certificate

    - by Ramon Zarazua
    I need to set up a deployment script that imports an SSL certificate that my service uses. I have tried importing with WinHttpCertCfg and with CertMgr, to no avail. Here are the command-line arguments I have tried with both:

        winhttpcertcfg.exe -i <certname>.pfx -c LOCAL_MACHINE\My -p <password> -a <user service runs as>

    and

        CertMgr.exe -add -all -s -r localMachine -c <cert name> My

    From what I have investigated, it seems that CertMgr does not allow you to import certificates with a password, so I'd rather get winhttpcertcfg working. When I run them I get the following output:

        WinHttpCertCfg:
        Microsoft (R) WinHTTP Certificate Configuration Tool
        Copyright (C) Microsoft Corporation 2001.

        CertMgr:
        CertMgr Succeeded

    However, when I look at the local machine certificates in MMC, try to load the certificate from my service, list it with winhttpcertcfg, or even look at the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates, it is not found. I have tried all of the following:

    * Installing the cert manually (through the CertMgr.msc dialogs) -- this works
    * The user installing is running as administrator
    * The user installing has full access on the certificate
    * The tools do print an error when something is wrong (e.g. a wrong password)
    * Trying it on multiple machines (all of them Server 2008 R2)

    At this point I am officially out of ideas. Thank you.

    Read the article

  • SQL Server 2008 BULK INSERT causes more reads than writes. Why?

    - by sh1ng
    I have a huge table (a few billion rows) with a clustered index and two non-clustered indexes. A BULK INSERT operation produces 112000 reads and only 383 writes (duration 19948 ms). That is very confusing to me. Why do the reads exceed the writes? How can I reduce them?

    UPDATE: the query is

        insert bulk DenormalizedPrice4 ([DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt, [DP_OperatorPriceID] BigInt, [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date, [DP_CurrencyID] SmallInt, [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int, [DP_FirstBuildingID] Int, [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int, [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int, [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt, [DP_ChildrenCount] TinyInt, [DP_AdultsCount] TinyInt, [DP_TariffID] Int, [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime, [DP_DateDenormalized] SmallDateTime, [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int) with (CHECK_CONSTRAINTS)

    No triggers and no foreign keys. Clustered index on DP_ID, plus two non-unique indexes (with FILLFACTOR = 90%). And one more thing: the DB is stored on RAID 50 with a 256K stripe size.
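
    A hedged sketch of one common mitigation (editorial, not from the original post; the index names are hypothetical): with a clustered index and two non-clustered indexes, every incoming row forces reads to locate the right pages in all three structures, which is where a large read count can come from. Disabling the non-clustered indexes for the load and rebuilding them afterwards trades those per-row reads for a single rebuild pass:

        -- Hypothetical index names on the table from the question.
        ALTER INDEX IX_DenormalizedPrice4_A ON dbo.DenormalizedPrice4 DISABLE;
        ALTER INDEX IX_DenormalizedPrice4_B ON dbo.DenormalizedPrice4 DISABLE;

        -- ... run the bulk load here ...

        ALTER INDEX IX_DenormalizedPrice4_A ON dbo.DenormalizedPrice4 REBUILD;
        ALTER INDEX IX_DenormalizedPrice4_B ON dbo.DenormalizedPrice4 REBUILD;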

    Read the article

  • What really happens when we do a BEGIN TRAN in SQL Server 2005?

    - by Misnomer
    Hi all, I came across this issue, or maybe something I didn't realize: I did a BEGIN TRAN with some code inside it and never ran a COMMIT or ROLLBACK, as I forgot about it. That caused many of the database queries, even a simple SELECT TOP 1000, to just sit there loading. It has probably put some locks on the tables, I guess, since it did not let me query them, but I just wanted to know what exactly happened and what practices should be followed here?
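
    A hedged sketch of how this is usually diagnosed (an editorial addition): the open transaction holds its locks until COMMIT or ROLLBACK, so other sessions block as soon as they touch the locked rows or tables. From the offending session you can check for and finish the forgotten transaction; from any other connection, DBCC OPENTRAN points at the oldest active transaction in the database.

        -- In the session that ran BEGIN TRAN: is a transaction still open?
        SELECT @@TRANCOUNT AS OpenTransactions;

        -- From another connection: show the oldest active transaction.
        DBCC OPENTRAN;

        -- Finish the forgotten transaction one way or the other to release the locks.
        -- ROLLBACK TRANSACTION;   -- undo the work
        -- COMMIT TRANSACTION;     -- or keep it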

    Read the article

  • XCOPY access denied error on My Documents folder

    - by Ryan M.
    Here's the situation. We have a file server set up at \\fileserver\ that has a folder for every user at \\fileserver\users\first.last. I'm running an xcopy command to back up the My Documents folder from their computer to their personal folder. The command I'm running is:

        xcopy "C:\Users\%username%\My Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /O /Y /I

    I've been silently running this script at login without the users knowing, just so I can get it to work before telling them what it does. After I discovered it wasn't working, I manually ran the batch script that executes the xcopy command on one of their computers and got an access denied error. I then logged into a test account on my own computer and got the same error. I checked all the permissions for the share and security, and they're set to how I want them. I can manually browse to that folder and create new files. I can drag and drop items into the \\fileserver\users\first.last location and it works great. So I tried something else to find the source of the access denied problem: I ran an xcopy command to copy the My Documents folder to a different location on the same machine, and I still got the access denied error! So xcopy seems to be denied access when it tries to copy the My Documents folder. Any suggestions on how I can get this working? Anyone know the reason behind the access denied error?

    Read the article

  • Debian, 2 NICs, load-balancing or aggregating with the same gateway

    - by pouney
    Hi, I have one server with two NICs connected to one switch, both using the same gateway. Behind the switch we have the internet.

        |Debian| - eth0 - switch - internet
                 - eth1 - (same switch)

    I don't understand how to load-balance between eth0 and eth1; the inbound/outbound traffic always uses eth1. This is the config:

        # The primary network interface
        allow-hotplug eth0
        auto eth0
        iface eth0 inet static
            address 192.168.248.82
            netmask 255.255.255.240
            network 192.168.248.80
            broadcast 192.168.248.95
            gateway 192.168.248.81

        allow-hotplug eth1
        auto eth1
        iface eth1 inet static
            address 192.168.248.83
            netmask 255.255.255.240
            network 192.168.248.80
            broadcast 192.168.248.95
            gateway 192.168.248.81

    Kernel IP routing table:

        Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
        192.168.248.80   0.0.0.0          255.255.255.240  U      0       0    0    eth1
        192.168.248.80   0.0.0.0          255.255.255.240  U      0       0    0    eth0
        0.0.0.0          192.168.248.81   0.0.0.0          UG     0       0    0    eth1
        0.0.0.0          192.168.248.81   0.0.0.0          UG     0       0    0    eth0

    The IPs aren't real; they're just for the example. Does anybody have an idea of the correct routing to use eth0 for 192.168.248.82 and eth1 for 192.168.248.83? I have many examples for multiple gateways, but here the gateway is the same. Thanks all. Regards

    Read the article

  • How do I install php 5.3 on CentOS?

    - by fivelitresofsoda
    Hi, I have to install PHP 5.3 on my CentOS server. If I do yum install php, the base repo installs 5.1.6, which is too old for the apps I need to install. So I've been trying to use the IUS repository, following the official instructions from IUS:

        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-2.ius.el5.noarch.rpm
        [root@linuxbox ~]# wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        [root@linuxbox ~]# rpm -Uvh ius-release*.rpm epel-release*.rpm

    OK. Now I simply do yum install php53, etc. for everything I need... but I get this error:

        Running rpm_check_debug
        Running Transaction Test
        Finished Transaction Test
        Transaction Check Error:
          file /usr/bin/php from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/bin/php-cgi from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /usr/share/man/man1/php.1.gz from install of php53u-cli-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-cli-5.1.6-27.el5_5.3.x86_64
          file /etc/php.ini from install of php53u-common-5.3.4-3.ius.el5.x86_64 conflicts with file from package php-common-5.1.6-27.el5_5.3.x86_64

        Error Summary
        -------------

    I have no idea how to solve this. I think I have to remove the conflicting base packages, but as a Linux noob I don't know how to do that. Please help. Thank you.

    Read the article

  • Disable RAID and use JBOD on an IBM x3400 M2 server

    - by BanKtsu
    Hi, I just want to disable the default RAID on my IBM System x3400 M2 server (7837-24X). I have 3 SAS disk drives and I want to make them a JBOD ("Just a Bunch Of Disks"), because I want to install CentOS on drive 0 and use the other two as cache disks for a Squid server. I disabled the RAID in the BIOS:

        System Settings / Adapters and UEFI drivers / LSI Logic Fusion MPT SAS Driver - PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)
        LSI Logic MPT Setup Utility
        RAID Properties / Delete Array

    Later I booted the CentOS live CD and installed the OS on drive 0, with the other two mounted like this:

        LVM Volume Groups
          vg_proxyserver   139508
            lv_root        51200    /      ext4
            lv_home        84276    /home  ext4
            lv_swap        4032
        Hard Drives
          sdb (/dev/sdb)   free     140011
          sdc (/dev/sdc)   free     140011
          sdd (/dev/sdd)
            sdd1           500      /boot  ext4
            sdd2           139512   vg_proxyserver  physical volume (LVM)

    But when I restart the server it gives me this error and the OS does not start:

        Boot failed Hard Disk 0
        UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15130,0x0))
        ........PXE-E18: Server response timeout.
        UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15132,0x0))
        ........PXE-E18: Server response timeout.

    Is IBM forcing me to use RAID? Why?

    Read the article

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with an .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry, it's in Dutch, but I think you get my point) I will use 2 words: 'zieken' and 'ziekenhuis'. As you can see, the word 'zieken' is contained in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results. When I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at least the results which I get for 'ziekenhuis'? Here's the query I use:

        SELECT DISTINCT d.DocID 'Id', d.Titel,
            (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') AS Pic,
            'belDoc' AS DocType
        FROM docs d
        JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID
        JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID
        WHERE hb.InstellingID = @instellingId
          AND ( FREETEXT(d.Doel, @searchstring)
             OR FREETEXT(d.Toepassingsgebied, @searchstring)
             OR FREETEXT(d.HtmlDocument, @searchstring)
             OR FREETEXT(d.extraTabblad, @searchstring) )
          AND d.StatusID NOT IN (1, 5)
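
    An editorial aside with a hedged example (the column name is taken from the query above): SQL Server full-text search indexes whole words as produced by the word breaker, so 'zieken' is not matched as a substring inside 'ziekenhuis', and FREETEXT applies stemming rather than substring matching. If prefix matching is acceptable, CONTAINS with a wildcard term matches every indexed word that starts with the prefix:

        -- Hedged sketch: "zieken*" matches 'zieken', 'ziekenhuis', 'ziekenhuizen', etc.,
        -- because the indexed words are compared against the prefix.
        SELECT d.DocID
        FROM docs d
        WHERE CONTAINS(d.HtmlDocument, '"zieken*"');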

    Read the article

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know, with what I wrote in the question I shouldn't be surprised. But my situation is that I am slowly working through an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I see that it's a fairly quick "rewrite these 8 queries as one query with JOINs" situation. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases and, surprise, surprise, they use different collations. The fact of the matter is we use all "normal" Latin characters that English-speaking people would consider the entire alphabet, and this whole thing will be out of use in a few months, so a band-aid is all I need. Long story short, I need some way to cast to a single collation so I can compare two fields from two databases. The exact error is:

        Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
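
    A hedged sketch of the usual band-aid (an editorial addition; the database, table, and column names are hypothetical): put a COLLATE clause on one or both sides of the comparison so the join is evaluated under a single collation, for example the current database's default:

        -- Hedged sketch with hypothetical tables: force both join columns to
        -- one collation so the "cannot resolve collation conflict" error goes away.
        SELECT s.SaleID, c.CustomerName
        FROM PosDb.dbo.Sales AS s
        JOIN OtherDb.dbo.Customers AS c
            ON  s.CustomerCode COLLATE DATABASE_DEFAULT
              = c.CustomerCode COLLATE DATABASE_DEFAULT;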

    Read the article

  • Any good GUI tool to easily create stored procs for SQL Server 2008?

    - by Munish Goyal
    The templates for stored procs in SSMS do not auto-populate all input columns, so again there is manual work involved. I am looking for something like right-clicking on a table and choosing "create stored proc", which then lets you pick a template, populates the parameters from the table, and gives you check-boxes in the GUI (like the table designer, where you can easily add or remove a column). Some support for change management when a table is altered would also be helpful. Currently we write all stored procs by hand, and I think we should be able to save time and labor with automation. Any suggestions on other free 3rd-party tools?
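
    As an editorial aside (not a tool recommendation; the table name is hypothetical), the boilerplate such tools automate, generating a parameter list from a table's columns, can also be produced from the catalog views, which is sometimes enough as a stopgap:

        -- Hedged sketch: emit one "@Column type" fragment per column of a
        -- hypothetical table, as raw material for a stored-proc template.
        SELECT ', @' + c.name + ' ' + TYPE_NAME(c.user_type_id) AS ParamFragment
        FROM sys.columns AS c
        WHERE c.object_id = OBJECT_ID('dbo.Customer')
        ORDER BY c.column_id;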

    Read the article

  • What SQL Server isolation level should I choose to prevent concurrent reads?

    - by Brian Bolton
    I have the following transaction:

    1. SQL inserts 1 new record into a table called tbl_document
    2. SQL deletes all records matching a criterion in another table called tbl_attachment
    3. SQL inserts multiple records into tbl_attachment

    Until this transaction finishes, I don't want other users to be aware of (1) the new record in tbl_document, (2) the deleted records in tbl_attachment, or (3) the modified records in tbl_attachment. Would Read Committed isolation be the correct isolation level?
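
    A hedged sketch of the writer side (an editorial addition; the column names are hypothetical): under the default READ COMMITTED level, other sessions do not see uncommitted changes anyway; they either block on the writer's locks or, with read-committed snapshot enabled, read the last committed version. The isolation level is set by the readers, so the main requirement on the writer is that all three statements share one transaction:

        -- Hedged sketch: the changes only become visible to READ COMMITTED
        -- readers once the transaction commits.
        DECLARE @DocumentID int = 1;   -- hypothetical key

        BEGIN TRANSACTION;

            INSERT INTO tbl_document (Title)                    -- hypothetical column
            VALUES (N'New document');

            DELETE FROM tbl_attachment
            WHERE DocumentID = @DocumentID;                     -- hypothetical criterion

            INSERT INTO tbl_attachment (DocumentID, FileName)   -- hypothetical columns
            VALUES (@DocumentID, N'a.pdf'), (@DocumentID, N'b.pdf');

        COMMIT TRANSACTION;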

    Read the article
