Search Results

Search found 89127 results on 3566 pages for 'backup server'.

  • sql server 2008 express one row write problem

    - by bojanskr
    Hi everyone, I have the most bizarre problem (at least it is bizarre to me) with MS SQL Server Express 2008. On the development machine I use MS SQL Server 2008 Enterprise. I get some data from a WCF service and write that data to the db (as simple as it can be). I should point out, however, that the writing is done in a separate thread. But anyway, no problems during development: all the data is there. Then I set everything up (connection strings .\SQLEXPRESS, other settings), build in Release and copy that to a test machine that has MS SQL Server Express installed (because my application is a client application and it should work with Express). I run the program, the program retrieves the data from the service, and when I look at the database I'm in for a big surprise: there's only one row written (the first row received from the WCF service). I would really appreciate any help; I'm in a deadlock here. Thanks in advance. Bojan

  • How to integrate SQL Server Express with VS C# Express

    - by paul
    I have just installed VS C# Express 2008, which includes SQL Server Express 2008. It all went OK and I can see VS C# and SQL Server in the list of installed products. When I start VS C# it looks fine, but in the DB Explorer / Data Connection context menu the option 'Create new SQL Server Database' is disabled. I have uninstalled all VS products and reinstalled, but the problem remains. Do I need to do anything else? Can anyone help? Thanks

  • SQL Server Compact 'Data Directory' macro in Connection String - more info needed

    - by codeulike
    So, as described on this msdn page, when you define a connection string for SQL Server Compact 3.5, you can use the Data Directory macro. Quoting that page: "Data Directory Support: SQL Server Compact 3.5 now supports the Data Directory macro. This means that if you add the string |DataDirectory| (enclosed in pipe symbols) to a file path, it will resolve to the path of the database. For example, consider the connection string: "Data Source=c:\program files\MyApp\Mydb.sdf". When using Data Directory, you can instead use the following connection string: "Data Source=|DataDirectory|\Mydb.sdf". For more information, see How to: Deploy a SQL Server Compact 3.5 Database with an Application." However, the 'for more information' link on msdn doesn't actually give any more information. So my question is: how does the |DataDirectory| macro translate at run time? For WinForms apps, it seems to just give the location of the executable. Or is it more complicated than that?

  • How to synchronize SQL Server 2008 database with SQL Server 2005 database?

    - by James McFarland
    I am using VS 2008 Team Suite and SQL Server 2008 in my development environment. I am deploying to a shared-host website with shared-host SQL Server 2005. I want to push changes from my development environment to my production host. I tried using Data | Schema Compare..., and it reports that it does not support SQL Server 2008. What do people use for this (besides Red-Gate tools; I use those at my day job, and they rock, but this is a volunteer thing for my son's school)? I am looking for something very inexpensive, if not free.

  • SQL Server Agent 2005 job runs but no output

    - by alimack
    Essentially I have a job which runs in BIDS and as a standalone package, but while it runs under the SQL Server Agent it doesn't complete properly (no error messages, though). The job steps are: 1) delete all rows from the table; 2) use a Foreach Loop to fill up the table from Excel spreadsheets; 3) clean up the table. I've tried this MS page (steps 1 & 2) and didn't see any need to start changing from server-side security. Also SQLServerCentral.com for this page; no resolution there either. How can I get error logging or a fix? Note: I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming. I have logged in as the proxy account I'm running this under, and the job runs standalone, but it complains that the Excel tables are empty?

  • Export view data programmatically in Access/SQL Server

    - by andy
    We have an Access application front-end connected to a SQL Server 2000 database. We would like to be able to programmatically export the results of some views to whatever format we can (ideally Excel, but CSV / tab-delimited is fine). Up until now we've just hit F11, opened up the view, and hit File > Save As, but we're starting to get results with more than 16,000 rows, which can't be exported that way. I'd like some sort of server-side stored procedure we can trigger that will do this. I'm aware of the sp_makewebtask procedure that does this; however, it requires administrative rights on the server, and for obvious reasons we can't give that to everyone. Any ideas?
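
    A server-side sketch of that idea using bcp via xp_cmdshell (a minimal sketch; all object names, paths and the server name are hypothetical, and xp_cmdshell carries its own permission requirements - on SQL 2000, non-sysadmin callers run it under the Agent proxy account - so it isn't a free pass around sp_makewebtask's problem):

        -- Wrapper procedure that a job or permitted caller could execute
        CREATE PROCEDURE dbo.usp_ExportMyView
        AS
        BEGIN
            -- -c = character mode, -t, = comma delimiter, -T = trusted connection
            EXEC master..xp_cmdshell
                'bcp "SELECT * FROM MyDb.dbo.vMyView" queryout "C:\Exports\MyView.csv" -c -t, -S MYSERVER -T'
        END
        GO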

  • Migrating from Physical SQL (SQL2000) To VMWare machine (SQL2008) - Transferring Large DB

    - by alex
    We're in the middle of migrating from a Windows & SQL 2000 box to a virtualised Windows & SQL 2k8 box. The VMware box is on a different site, with better hardware, connectivity etc... The old (current) physical machine is still in constant use. I've taken a backup of the DB on this machine, which is 21 GB. Transferring this to our virtual machine took around 7+ hours, which isn't ideal when we do the "actual" switchover. My question is: how should I handle the migration better? Could I set up our current machine to do log shipping to the VM to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?
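
    The log-shipping idea can also be done by hand as a minimal-downtime sequence; a sketch with hypothetical database name and paths (worth verifying first on a test restore that the 2008 instance accepts your 2000 backups):

        -- On the old SQL 2000 box: one full backup, then periodic log backups
        BACKUP DATABASE BigDb TO DISK = 'D:\Backups\BigDb_full.bak' WITH INIT
        BACKUP LOG BigDb TO DISK = 'D:\Backups\BigDb_log_001.trn'

        -- On the 2008 VM: restore the full backup but stay in the restoring state,
        -- then apply each copied log backup the same way
        RESTORE DATABASE BigDb FROM DISK = 'E:\Staging\BigDb_full.bak' WITH NORECOVERY
        RESTORE LOG BigDb FROM DISK = 'E:\Staging\BigDb_log_001.trn' WITH NORECOVERY

        -- At switchover: final log backup on the old box, copy it over, then recover
        -- (the 2000 -> 2008 upgrade of the database happens at this point)
        RESTORE LOG BigDb FROM DISK = 'E:\Staging\BigDb_log_final.trn' WITH RECOVERY

    After the one slow full-backup copy, only the small log backups cross the wire, so the downtime window shrinks to the final log backup, copy and recovery.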

  • Sql server execute permission; failure to apply permissions

    - by WestDiscGolf
    I've just migrated from SQL 2000 to SQL 2008 and I have started getting an execute permission issue on a stored proc which uses sp_OACreate. The rest of the system works fine with the db login which has been set up and added to the database. I've tried:

        USE master
        GO
        GRANT EXEC ON sp_OACreate TO [dbuser]
        GO

    But this fails with the following error:

        Msg 15151, Level 16, State 1, Line 1
        Cannot find the user 'dbuser', because it does not exist or you do not have permission.

    I'm logged into the server as sa with full permissions. I can execute a similar sql statement and apply the permissions to a server role, but not to a login/user. How do I apply the changes to the specific user/login? I can apply the permissions to the public role and that resolves my issue; however, that seems like a security hole to me which I don't really want on the live server.
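
    A likely cause, with a sketch of the usual fix (assuming the login really is named dbuser as above): GRANT in master looks for a database user in master, not a server login, so the login needs a mapped user there first:

        USE master
        GO
        -- Map the existing server login to a user in master
        -- (grants nothing by itself)
        CREATE USER [dbuser] FOR LOGIN [dbuser]
        GO
        -- Now the grant can resolve the user
        GRANT EXECUTE ON sys.sp_OACreate TO [dbuser]
        GO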

  • Implementing a server side push for a small number of clients

    - by Helper Method
    For a web application I am working on I have the following requirements: clients need to be able to log in via a web browser; after logging in, they will be able to change configurations (normal request/response) and to receive alarms sent by the server (a server-side push). Now, the question is how to implement the alarms. I first thought of using some long-polling approach (Comet), but as the number of clients will definitely be limited to 5-10, I'm now thinking of going with a simpler approach. What options do I have? Would it be okay to just let the clients poll the server? Important aspects are: alarms should be delivered in (nearly) real time, and alarms must not get lost (a lost alarm could cause harm to real people).

  • SQL Server column level security

    - by user46372
    I think I need some pointers on security in SQL Server. I'm trying to restrict some of our end users from getting access to certain columns (i.e. SSN) on a table. I thought I could just use column-level security to restrict access to those columns. That successfully prevented users from accessing the table directly, but I was surprised that they could still get to those columns through a view that accessed the table. I followed the tips here: http://www.mssqltips.com/sqlservertip/2124/filtering-sql-server-columns-using-column-level-permissions/ Those were very helpful, but when I created a view at the end, the intern was able to access that column by default. I've read that views are the best way to accomplish this, but I really don't want to go through and change all of the views and the legacy front-end application. I would rather just restrict it once on the table, so that if a view tries to access that column it would just fail. Is that possible, or am I misunderstanding how security works in SQL Server?
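
    The behaviour described is consistent with ownership chaining: when a view has the same owner as its base table, SQL Server checks permissions on the view only and never reaches the table's column DENY. A sketch of both ends (object and role names hypothetical):

        -- Column-level DENY on the base table blocks direct SELECTs...
        DENY SELECT ON dbo.Employee (SSN) TO EndUserRole

        -- ...but a same-owner view bypasses it via ownership chaining,
        -- so each exposing view needs its own column DENY
        DENY SELECT ON dbo.vEmployee (SSN) TO EndUserRole

    The alternative is to break the chain by giving the views a different owner than the table, which forces the table-level check at the cost of managing cross-owner permissions.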

  • Hardware for a home server running Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2

    - by David Hayes
    Hi, I'm planning to build a server to do the following: act as a file server (videos, pictures, music); run Squeezebox Server; run the Zune software to allow wireless syncing to Windows Phone 7. I'd also like to aim for: low power usage (I'd settle for less than the 90-100 watts I'm using at the moment); flexibility (I might want to add a web server or SharePoint or...); something I can learn/test on (work is mainly a Windows shop but I do have Linux experience too, and I'd like to take a look at App-V (application virtualization)); a cost of less than $1000. Quiet would be nice but not essential (it'll be in the basement). I'm thinking of getting a TechNet subscription to get access to Windows Server 2008 R2 at a reasonable price ($199). So my plan was this: get a bunch of 2TB Caviar Green drives to RAID up (RAID 1 or 6 probably); get a quad-core CPU (Intel i5/i7 probably); install a hypervisor; install W2k8 R2 Storage Server for a NAS; install Windows 7 Pro to run Zune/Squeezebox; install any other machines I want to play with. Questions: Can anyone see any issues with this or have any better ideas? Do you think I'd need an i7 over an i5? Is 4 cores enough/too much? Can anyone suggest a nice, reasonably priced case that will hold 6-8 drives and stay cool? Should I wait for Sandy Bridge parts?

  • Windows Server 2008 Task Scheduler: task started (Task=100) but did not complete (Task=102) when the result code is 2

    - by MacGyver
    Can someone give me a use case for setting up a Windows Server 2008 Task Scheduler task (we'll call this "test") that completes (action completed, Task=201) with an error (result code=2)? Below is the event-trigger code for another task (called "notification") that sends out an email based on the event history of the "test" task. I've got use cases for tasks that open a program successfully and for when the scheduler fails to find the program. I'm just trying to think of how I can test a scenario where it finds the program, but something fails with warnings or errors.

        /* Failed - task started but had errors (result code of 2) */
        <QueryList>
          <Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
            <Select Path="Microsoft-Windows-TaskScheduler/Operational">
              *[ System [ Provider[@Name='Microsoft-Windows-TaskScheduler']
                  and (Level=0 or Level=1 or Level=2 or Level=3 or Level=4 or Level=5)
                  and (Task = 201) ] ]
              and *[ EventData [ Data [ @Name='TaskName' ]='\Tasks\test' ] ]
              and *[ EventData [ Data [ @Name='ResultCode' ]='2' ] ]
            </Select>
          </Query>
        </QueryList>

  • Restoring an Ubuntu Server using ZFS RAIDZ for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP ProLiant MicroServer with Ubuntu Server and ZFS (OS on 1 ext4 disk, data on 3 RAID-Z disks). As Joel Spolsky and Jeff Atwood say with regard to backup, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against. Q: How do I configure Ubuntu Server to recognise a pre-existing RAID-Z array? Clearly if one of the data disks dies, that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore, that's also an easy scenario. But if the OS dies and I can't restore, then I need to recreate an Ubuntu server. How do I get this to recognise my RAID-Z array? Is the necessary configuration information stored within and across the RAID-Z array, simply needing to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case, how do I recreate it)?

  • Yes, you can benefit from both data and backup compression

    - by AaronBertrand
    Earlier today, MSSQLTips posted a backup compression tip by Thomas LaRock (blog | twitter). In that article, Tom states: "If you are already compressing data then you will not see much benefit from backup compression." I don't want to argue with a rock star, and I will concede that he may be right in some scenarios. Nonetheless, I tweeted that "it depends"; Thomas then asked for "an example where you have data comp and you also see a large benefit from backup comp?" My initial reaction came about... (read more)
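
    For anyone who wants to test the claim against their own data, a quick sketch (database name and paths hypothetical; WITH COMPRESSION requires Enterprise Edition on SQL Server 2008, or Standard and up from 2008 R2 onwards):

        -- Back up the same database twice and compare the file sizes
        BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_plain.bak' WITH INIT
        BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_comp.bak' WITH INIT, COMPRESSION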

  • Create Device to Receive SMS and Parse the Text (SMS Gateway)

    - by Chris Okyen
    I want to use a server as a device that runs a script to parse an SMS text in the following way: I. The person types in a specific and special cell phone number (similar to Facebook's 32556 number used to post on your wall). II. The user types a text message. III. The user sends the text message. IV. The message is sent to some kind of device (the server) or SMS gateway, which receives it. V. The device the message was sent to then parses the text message. I understand that these three questions mix programming and server stuff and could reside here or at DBA.SE. How would I get such a cell phone number (described in step I) for messages to be sent to the device? How do I create the device that would then receive them? Finally, how do I parse the text message?

  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn't have an easy way to take a table backup, so I often use bcp (the Bulk Copy Program) to accomplish the same goal. I've mentioned this before, and someone told me that when they tried it they couldn't restore the table - ah, the dangers of telling people half the information! I should have mentioned that you need to have a "format file" ready if the table does not exist at the destination. In my case I already had the table; in this person's case they did not. The format file can be used to rebuild the table structure before the data is bcp'd in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx There's another way to back up a table, and that's to create a filegroup and place the table there. Then you can take a filegroup backup to back up a single table. Of course, there are other methods of moving a single table's data in and out, including SQL Server Integration Services and even the older Data Transformation Services, or simply using the SQLCMD or PowerShell utilities to run a query and save the output to a file. In fact, these days I'm using a PowerShell script to build INSERT statements from that query. That could also easily be modified to create the table structure (or modify one if needed).
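
    A sketch of that filegroup approach (database, table and path names all hypothetical):

        -- One-off setup: give the table its own filegroup
        ALTER DATABASE MyDb ADD FILEGROUP FG_Lookup
        ALTER DATABASE MyDb ADD FILE
            (NAME = 'MyDb_Lookup', FILENAME = 'D:\Data\MyDb_Lookup.ndf')
            TO FILEGROUP FG_Lookup

        -- Move the table by rebuilding its clustered index onto the new filegroup
        CREATE UNIQUE CLUSTERED INDEX PK_Lookup
            ON dbo.Lookup (LookupID)
            WITH (DROP_EXISTING = ON)
            ON FG_Lookup

        -- From then on, this backs up just that table's filegroup
        BACKUP DATABASE MyDb FILEGROUP = 'FG_Lookup'
            TO DISK = 'D:\Backups\MyDb_FG_Lookup.bak'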

  • Web Hosting Backup/Disaster Recovery Plan - Which Company?

    - by Harry Muscle
    I've been asked to look after consolidating all of our various company websites onto one host and also to provide a disaster recovery plan in case the chosen host goes down/out of business/etc. We're most likely going to go with HostGator as our chosen host; however, I'm not sure who to pick as our backup host. HostGator uses cPanel and has the functionality to provide regular full (i.e. including configuration) backups of all the sites we host. Ideally I'm looking for a solution where we can provide these backups to another company and within a short period of time they restore all the sites onto their servers and we're back up and running. The whole disaster recovery process has to be fairly straightforward from the point of view of what we need to do, in case I am unavailable to assist and no one else overly technical is available (i.e. take these backup files, send them to this company, and ask them to do this). Any suggestions on which company would be a good choice for this backup solution would be highly appreciated. Thanks, Harry

  • Why isn't 'Low Fragmentation Heap' LFH enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions of how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute? It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)

    (Really, the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why have Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both is the same (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to judge whether it does help us. But aside from this I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

  • Tools to manage sql 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and ongoing management of those 20 DBs in this new mirrored environment easily. There are many steps in setting each DB up and I want to automate as much as possible. Edit: Here are the steps I've been doing manually: Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server. Then sync those users/passwords onto the other SQL 2008 server with the same SIDs so that when we do the db backup and restore they match up. Take a backup of each SQL 2000 db. Copy it to server A. Restore the backup on server A. Back up from server A, copy to server B, restore there. Run the mirroring "configure security" wizard. Start mirroring. I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
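
    A per-database sketch of the backup/restore/partner steps that lends itself to a loop over the 20 names (database name, paths and ports hypothetical; the mirroring endpoints are assumed to already exist, and Microsoft's sp_help_revlogin script is the usual way to copy logins across with their original SIDs):

        -- On server A (principal-to-be):
        BACKUP DATABASE AppDb TO DISK = 'D:\Backups\AppDb.bak' WITH INIT
        BACKUP LOG AppDb TO DISK = 'D:\Backups\AppDb.trn'

        -- On server B (mirror), after copying both files:
        RESTORE DATABASE AppDb FROM DISK = 'E:\Staging\AppDb.bak' WITH NORECOVERY
        RESTORE LOG AppDb FROM DISK = 'E:\Staging\AppDb.trn' WITH NORECOVERY

        -- Join the partners: run on the mirror first, then on the principal
        ALTER DATABASE AppDb SET PARTNER = 'TCP://serverA.example.local:5022'
        -- (and then on server A:)
        ALTER DATABASE AppDb SET PARTNER = 'TCP://serverB.example.local:5022'

    Wrapped in a loop over the database names, this covers everything except the login sync and the initial 2000-to-A restore.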

  • How do I log back into a Windows Server 2003 guest OS after Hyper-V integration services installs and breaks my domain logins?

    - by Warren P
    After installing Hyper-V Integration Services, I appear to have a problem logging in to my Windows Server 2003 virtual machine. Incorrect passwords and logins give the usual error message, but a correct login/password gives me this message: "Windows cannot connect to the domain, either because the domain controller is down or otherwise unavailable, or because your computer account was not found. Please try again later. If this message continues to appear, contact your system administrator for assistance." Nothing pleases me more than Microsoft telling me (the ersatz system administrator) to contact my system administrator for help, when I suspect that I'm hooped. The virtual machine has a valid network connection, but has decided to invalidate all my previous logins on this account, so I can't log in and remotely fix anything, and I can't remotely connect to it from outside either. This appears to be a catch-22. Unfortunately I don't know any non-domain local logins for this virtual machine, so I suspect I am basically hooped, or that I need Ophcrack. Is there any alternative to Ophcrack? Second and related question: I used Disk2VHD to do the conversion, and I could log in fine several times, until after the Hyper-V Integration Services were installed; then suddenly this happened and I can't log in now. Was there something I did wrong? I can't get networking working inside the VM before I install Integration Services, yet at the very moment Integration Services was being installed, I got locked out like this. I probably should always know the local login of any machine I'm upgrading so I don't get stuck like this in the future... great. Now I am reminded again of this.

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. In SQL Server 2000, the database is about 120 GB. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB: some hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less thanks to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell if their log was full or empty, and there are no replies. So I am guessing it is not a problem; does anyone know for sure?
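
    If the size itself is the worry, the file can be shrunk back once the log is confirmed to be mostly empty; a sketch in SQL 2000 syntax (database name, logical log file name, path and target size all hypothetical):

        -- See how full each database's log actually is
        DBCC SQLPERF (LOGSPACE)

        -- Back up the log (clears the inactive portion), then shrink the file
        BACKUP LOG BigDb TO DISK = 'D:\Backups\BigDb_log.trn'
        DBCC SHRINKFILE (BigDb_log, 2048)   -- target size in MB

    One related caveat: a log that grew in many small increments can end up with a very large number of virtual log files (VLFs), which is a documented cause of slow recovery and restores, so regrowing it in a few large steps after the shrink can help beyond just reclaiming disk space.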

  • TFS Backup Plan Wizard Tool

    - by Enrique Lima
    With the release of the September 2010 TFS 2010 Power Tools came an addition to the Team Foundation Server Administration Console: the Team Foundation Backups tree item. The tool is used to create backup plans, and to work with it you run through a wizard, just like you would when configuring TFS or any of its extensions. The areas covered by the tool include: backup to a network backup path, with retention configuration; under Advanced Options, the file extensions to be used for the full and transactional backups; and the capability to include external databases, meaning the reporting and SharePoint databases, as part of the plan. There are further options, as you can see, including being able to define a Task Scheduler account, set alerts for notifications on execution of the plans, and lastly configure the schedule for the plan execution. All in all a very good tool and a great way to safeguard the investment you've made.

  • Backup and Recovery in Exadata environments

    - by Javier Puerta
    As with any infrastructure, every Engineered Systems customer needs a backup & recovery solution for data protection. See a detailed presentation and learn about the challenges of backup & recovery and the key benefits of the ZFS Storage Appliance as a backup device for Exadata and SPARC SuperCluster. (You need to be a registered member of the Exadata Partner Community to access the link above; otherwise you will get an error. You can register here.)
