Search Results

Search found 3710 results on 149 pages for 'databases'.

Page 87/149 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • What is the most reliable way to copy Access front end files to client PCs?

    - by Funky Si
    I have several in-house databases which have Access 2003 front ends, either ADP or ADE files. I need to copy these from my server to every client machine. In the past I have used rollout scripts to copy the files to the All Users desktop folder. I have since adapted this to also copy the files to the Public desktop folder, since we now have Windows 7 client machines as well as XP. The problem is that some of the time these scripts don't work on Windows 7. Is there a better way of copying these files to a mix of Windows 7 and XP clients, or are rollout scripts the best way?
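
    One possible variation on the rollout script is a small Python deployment script that tries both desktop locations and copies to whichever exists on the client; this is a minimal sketch only, and the server share and file names below are hypothetical.

        # Minimal sketch: copy Access front-end files to whichever "all users"
        # desktop folder exists (Windows XP vs. Windows 7). The source share
        # and file names are hypothetical examples.
        import os
        import shutil

        SOURCE_DIR = r"\\myserver\frontends"          # hypothetical server share
        FILES = ["Sales.ade", "Stock.adp"]            # hypothetical front ends

        CANDIDATE_DESKTOPS = [
            r"C:\Users\Public\Desktop",                        # Windows 7
            r"C:\Documents and Settings\All Users\Desktop",    # Windows XP
        ]

        def deploy():
            # Pick the first all-users desktop folder that exists on this client.
            targets = [d for d in CANDIDATE_DESKTOPS if os.path.isdir(d)]
            if not targets:
                raise RuntimeError("No all-users desktop folder found")
            for name in FILES:
                src = os.path.join(SOURCE_DIR, name)
                shutil.copy2(src, os.path.join(targets[0], name))

        if __name__ == "__main__":
            deploy()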

    Read the article

  • Could SQL Server 2008 replication be used with NLB to allow unlimited scaling of reporting servers?

    - by John Keranos
    We are currently using transactional replication in SQL Server 2008 to keep a secondary reporting server synchronized with a primary database server. This has been working well and keeps some of the load off the primary server. Would it be possible to scale this solution out to multiple reporting servers? We're expecting an increased load of read-only queries, and it would be nice to be able to add reporting servers as needed. The general idea was the following: each reporting server would use a "pull" subscription to get the data from the primary database publication. These reporting databases could be a couple of minutes behind the primary server without it being an issue. The reporting servers would be NLB'd together. All read-only queries would be directed to the NLB, which should spread the load across the servers.

    Read the article

  • WMI Notification and database mirroring

    - by user22215
    Hi all, I'm having a problem configuring a WMI alert that I would like to use with database mirroring. I'm running Windows Server 2008 Enterprise x64 with SQL Server 2008 Enterprise x64; SQL Server has SP1 installed. Basically I click on Alerts, select WMI, and then type the statement below:
        SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE DatabaseName = 'testmove' AND State = 8
    I have also made sure that Service Broker is enabled for msdb and all the mirrored databases; however, I still can't get this to work: the alert never fires. I'm testing just the alert functionality; I have not even added the agent job yet. I tested this by right-clicking on my mirrored database and forcing it to fail over. Any help with this problem would be much appreciated.
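
    For reference, the same alert can also be created in T-SQL with msdb.dbo.sp_add_alert rather than through the GUI; the sketch below drives that procedure from Python via pyodbc. The server name and WMI namespace are assumptions (a default instance is assumed), and this only scripts the alert creation, it is not a verified fix for the alert not firing.

        # Minimal sketch: create a WMI-based alert for mirroring state changes
        # via msdb.dbo.sp_add_alert. Connection string and instance name are
        # hypothetical; the namespace below assumes a default instance.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=msdb;Trusted_Connection=yes",
            autocommit=True,
        )

        wmi_namespace = r"\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER"
        wmi_query = ("SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE "
                     "WHERE DatabaseName = 'testmove' AND State = 8")

        conn.execute(
            "EXEC msdb.dbo.sp_add_alert @name = ?, @wmi_namespace = ?, @wmi_query = ?",
            ("Mirroring failover - testmove", wmi_namespace, wmi_query),
        )
        conn.close()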

    Read the article

  • Which SQL Server edition?

    - by StaringSkyward
    We need a new install of Windows Server and SQL Server to replicate a couple of databases to a geographically separate location from an existing application (over a site-to-site VPN). The source database is SQL Server 2005. However, this is a temporary solution, since the client is aiming to implement a different system entirely, so we are looking for the minimum specification of both Windows Server and SQL Server that will do this. We are finding the SQL Server features per edition and the licensing a little difficult to understand, hence the question. Am I correct in thinking that we can replicate data using transactional replication from SQL Server 2005 to 2008 Web Edition, and that we can install SQL Server Web Edition on Windows Server 2008 Web Edition as well? Thanks.

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, as well as wanting to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Server Management Studio, and neither do I want to have to create an Access database with links for each current database or for ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • SQL 2008: I have a user in a db which has no login on the server. How is this possible?

    - by Boppity Bop
    I am talking about Windows authentication. I don't have server admin rights, but a DB admin sent me a screenshot showing that my user is not among the server's logins. Also, there is only one Windows group, called 'admin - databases', which I am 100% sure my guy cannot be part of. BUT... his username is in the users of my db. How can a user appear in a database without having a login on the server? P.S. The log prints: Login failed for user 'xxxx'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors.
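
    As a side note, one way to check whether that database user is simply orphaned (its SID no longer matches any server login) is to compare sys.database_principals against sys.server_principals; a minimal pyodbc sketch follows, with a hypothetical server and database name.

        # Minimal sketch: list database users whose SID has no matching server
        # login (orphaned users). Server and database names are hypothetical.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDb;Trusted_Connection=yes"
        )

        sql = """
        SELECT dp.name AS db_user
        FROM sys.database_principals AS dp
        LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
        WHERE sp.sid IS NULL
          AND dp.type IN ('S', 'U', 'G')   -- SQL user, Windows user, Windows group
          AND dp.principal_id > 4          -- skip dbo, guest, INFORMATION_SCHEMA, sys
        """

        for (user,) in conn.execute(sql):
            print(user)
        conn.close()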

    Read the article

  • In-place SQL 2008 upgrade vs. side-by-side?

    - by Jim
    I have a SQL 2005 Standard Edition server with 5 databases in production; 4 databases are used by web-based apps and the 5th by a desktop application. My question is: should I perform an in-place upgrade, or go side-by-side by creating a SQL 2008 instance on the same box? The machine is a VM on VMware, and I'm planning on taking a snapshot before the upgrade and having a 'blackout' window during the upgrade so that I can roll back to the snapshot if things go really badly. Any previous experience and advice is appreciated.

    Read the article

  • PostgreSQL, update existing rows with pg_restore

    - by woky
    Hello. I sometimes need to sync two PostgreSQL databases (some tables from the development db to the production db). So I came up with this script: [...] pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \ pg_restore -a -U user2 -d dbname2 [...] The problem is that this works only for newly added rows. When I edit a non-PK column I get a constraint error and the row isn't updated. For each dumped row I need to check whether it exists in the destination database (by PK) and, if so, delete it before the INSERT/COPY. Thanks for your advice. (Previously posted on stackoverflow.com, but IMHO this is a better place for this question.)
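
    One way to handle the existing-row problem, sketched below with psycopg2, is to pull each table's rows from the source database, delete any rows with the same primary key in the destination, and re-insert them in a single transaction. The connection details and table names are hypothetical, and the sketch assumes a single-column primary key named id.

        # Minimal sketch: sync selected tables by deleting destination rows that
        # share a primary key with the source row, then re-inserting them.
        # Connection strings, table names, and the 'id' PK column are assumptions.
        import psycopg2

        TABLES = ["table1", "table2"]

        src = psycopg2.connect("dbname=dbname1 user=user1")
        dst = psycopg2.connect("dbname=dbname2 user=user2")

        try:
            with src.cursor() as s, dst.cursor() as d:
                for table in TABLES:
                    s.execute(f"SELECT * FROM {table}")
                    cols = [c[0] for c in s.description]
                    placeholders = ", ".join(["%s"] * len(cols))
                    for row in s.fetchall():
                        pk = row[cols.index("id")]
                        d.execute(f"DELETE FROM {table} WHERE id = %s", (pk,))
                        d.execute(
                            f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})",
                            row,
                        )
            dst.commit()   # everything lands in one transaction on the destination
        finally:
            src.close()
            dst.close()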

    Read the article

  • SQL database testing: How to capture state of my database for rollback.

    - by Rising Star
    I have a SQL Server (MS SQL 2005) in my development environment. I have a suite of unit tests for some .NET code that connects to the database and performs some operations. If the code under test works correctly, then the database should be in the same (or a similar) state to how it was before the tests. However, I would like to be able to roll back the database to its state from before the tests run. One way of doing this would be to programmatically use transactions to roll back each test operation, but this is difficult and cumbersome to program and could easily lead to errors in the test code. I would like to be able to run my tests confidently, knowing that if they destroy my tables I can quickly restore them. What is a good way to save a snapshot of one of my databases with its tables so that I can easily restore the database to its state from before the test?
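
    One candidate for this kind of quick rollback point is a database snapshot, created before the test run and reverted to afterwards (note that database snapshots require Enterprise or Developer edition, which may or may not match this setup). Below is a minimal pyodbc sketch; the database name, logical file name, and snapshot path are all hypothetical.

        # Minimal sketch: create a database snapshot before the tests and revert
        # to it afterwards. Database name, logical file name, and snapshot file
        # path are hypothetical; snapshots need Enterprise/Developer edition.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=DEVSERVER;DATABASE=master;Trusted_Connection=yes",
            autocommit=True,   # CREATE/RESTORE DATABASE cannot run inside a transaction
        )

        def create_snapshot():
            conn.execute("""
                CREATE DATABASE TestDb_snap
                ON (NAME = TestDb, FILENAME = 'C:\\Snapshots\\TestDb_snap.ss')
                AS SNAPSHOT OF TestDb
            """)

        def revert_to_snapshot():
            # Requires exclusive access to TestDb and that this is its only snapshot.
            conn.execute(
                "RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_snap'"
            )

        create_snapshot()
        # ... run the test suite here ...
        revert_to_snapshot()
        conn.close()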

    Read the article

  • How much does SQL Server Web Edition cost? + other related questions

    - by Goma
    Hello. I visited Microsoft's pricing page for SQL Server, but it was not that clear to me. I want to know the exact cost of SQL Server Web Edition. Furthermore, I would like to know how I can get it if I am on VPS hosting: should I install it myself, or will the host install it for me? And finally, is there a web host that provides SQL Server Web Edition so that I pay them directly as part of the hosting package?

    Read the article

  • Tools for tracking disk usage

    - by Carey
    I manage a number of Linux fileservers. These all run applications written anywhere from 0 to 10 years ago. As sometimes happens, a machine will come close to, or run out of, disk space. Reasons include applications not rotating log files, a machine with 500GB of disk producing 150GB of new files every month that were not written to tape, databases gradually increasing in size, people doing silly things... generally a bit of chaos. Anyway, when a machine unexpectedly goes from 50% to 100% full in a couple of hours, I figure out what broke (lots of "du") and delete files or contact someone. I can also look at Cacti graphs to figure out what the machine's normal disk usage is (e.g. for /home). Does anyone know of any tools that will give finer-grained information on historical usage than a Cacti/RRD graph? Something like "/home/abc/xyz increased 50GB in the last day".
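
    As a stopgap while looking for a proper tool, a small script can record per-directory sizes once a day and report the biggest growers since the previous run; here is a minimal Python sketch, with the watched paths and history file location as assumptions.

        # Minimal sketch: record per-directory sizes daily (via GNU du) and print
        # the largest growth since the previous run. Watched paths and the
        # history file location are hypothetical choices.
        import json
        import os
        import subprocess
        import time

        WATCHED = ["/home", "/var/log"]          # hypothetical paths
        HISTORY = "/var/tmp/du-history.json"     # hypothetical location

        def sizes():
            out = {}
            for base in WATCHED:
                # du -k --max-depth=2: size in KiB for each directory two levels deep
                du = subprocess.run(
                    ["du", "-k", "--max-depth=2", base],
                    capture_output=True, text=True, check=False,
                )
                for line in du.stdout.splitlines():
                    kib, path = line.split("\t", 1)
                    out[path] = int(kib)
            return out

        current = sizes()
        previous = {}
        if os.path.exists(HISTORY):
            with open(HISTORY) as f:
                previous = json.load(f).get("sizes", {})

        growth = sorted(
            ((current[p] - previous.get(p, 0), p) for p in current), reverse=True
        )
        for delta_kib, path in growth[:20]:
            print(f"{delta_kib / 1024 / 1024:8.1f} GiB growth  {path}")

        with open(HISTORY, "w") as f:
            json.dump({"time": time.time(), "sizes": current}, f)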

    Read the article

  • Export Import error 'SSIS Data Flow Task could not be created' ... registering DTSPipeline.dll, cannot create task "STOCK:PipelineTask"

    - by Moin Zaman
    I'm about to throw in the towel on this one. I'm running SQL Server 2008 Enterprise on Windows 7 x64 and can't get past this issue. When I try to Import/Export Data from databases through SQL Server Management Studio I get the following error: TITLE: SQL Server Import and Export Wizard ------------------------------ The SSIS Data Flow Task could not be created. Verify that DTSPipeline.dll is available and registered. The wizard cannot continue and it will terminate. ------------------------------ ADDITIONAL INFORMATION: Cannot create a task with the name "STOCK:PipelineTask". Verify that the name is correct. ({0194F10C-9860-4A4F-AF8B-DE7EFD89859F}) I have tried many solutions found via Google, but none of them have worked. A side issue that may be related: when I try to create an Integration Services project in Business Intelligence Development Studio I get a 'project creation failed' error.

    Read the article

  • mysqld stopped working... can't restart... need help?

    - by grant tailor
    I was just checking some things and noticed mysqld is not running in the Parallels Power Panel control panel... but my websites on the server, which use MySQL databases, were all working fine, which is really strange. So I tried to restart mysqld but got errors and it wouldn't restart, and now all my websites are offline, saying there is an error connecting to the database. I logged in as root and tried /etc/init.d/mysqld start and got this error: ERROR! Manager of pid-file quit without updating file. What do I do next? What do I do? Please help!

    Read the article

  • MySQL - complete server migration (Ubuntu) [closed]

    - by Mr A
    Possible Duplicate: How to copy and move mysql database; Dump all databases with SSH access. I'm setting up a new dev machine, and I have the old one sitting right next to me. I'd like to do an exact copy of all MySQL structures and data from the old machine to the new one. Nothing fancy needs to happen (it's a dev machine): no replication, and I don't care about downtime, etc. Is there a super simple way to do this? For example, I have SSH on the old server; can I just use Nautilus, do a "connect to server", transfer a folder over, replace the corresponding folder with it, and be done? It's the same version of MySQL on both sides, the same version of Ubuntu, and the same in most respects.
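
    For a one-off dev copy, one common route is a full mysqldump piped straight from the old box into the new server's mysql client rather than copying the data directory by hand; the Python sketch below just drives that shell pipeline, and the host alias and credentials are hypothetical.

        # Minimal sketch: pull a full dump from the old dev box over SSH and load
        # it straight into the local MySQL server. Run this on the *new* machine.
        # Host alias and credentials are hypothetical (fine only for a dev box).
        import subprocess

        OLD_HOST = "olddev"          # hypothetical SSH alias for the old machine
        MYSQL_USER = "root"
        MYSQL_PASSWORD = "secret"    # hypothetical

        pipeline = (
            f"ssh {OLD_HOST} "
            f"'mysqldump --all-databases --routines -u{MYSQL_USER} -p{MYSQL_PASSWORD}' "
            f"| mysql -u{MYSQL_USER} -p{MYSQL_PASSWORD}"
        )
        subprocess.run(pipeline, shell=True, check=True)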

    Read the article

  • Multiple SQL Standard Instances on 4 Processor/32-core Server

    - by Theowood
    We have a large 4-processor/32-core server with 192GB of memory available in the data center and over twenty small SQL Standard databases to consolidate. They are a mix of SQL 2012 and 2008 R2 for third-party apps. Is there any issue with simply installing two instances of SQL Standard on the server, one for 2012 and one for 2008 R2? Each instance would use up to 64GB out of the 192GB and 16 cores. If we did this with Enterprise, the licensing would cost a fortune, and the Enterprise features are not needed.

    Read the article

  • How to include error messages in backup reports for SQL Server 2008 R2?

    - by avs099
    Right now I have daily (differential) and weekly (full) backups set up on my SQL Server 2008 R2 as SQL Server Agent jobs, with email notifications if a job fails. I do get emails like this: JOB RUN: 'Daily backup.Diff backup' was run on 4/11/2012 at 3:00:00 AM DURATION: 0 hours, 0 minutes, 28 seconds STATUS: Failed MESSAGES: The job failed. The Job was invoked by Schedule 9 (Daily backup.Diff backup). The last step to run was step 1 (Diff backup). But that often happens because we delete or create databases, and the diff backup fails. The only way for me to see the actual reason is to go to the Log Viewer under Maintenance Plans logs. Is it possible to include the "Error Message" field from the logs in the notification emails? And, more generally, is it possible to change the notification email templates somehow? Thank you.
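
    The detailed step messages do end up in msdb (dbo.sysjobhistory), so one workaround is a small script, run after the backup job, that pulls the message text for failed steps and mails it; the sketch below assumes a hypothetical server name, job name, addresses, and SMTP relay.

        # Minimal sketch: fetch the error text of failed steps of a backup job
        # from msdb.dbo.sysjobhistory and email it. Server, job name, addresses,
        # and SMTP relay are hypothetical.
        import pyodbc
        import smtplib
        from email.message import EmailMessage

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=msdb;Trusted_Connection=yes"
        )
        rows = conn.execute(
            """
            SELECT TOP 10 h.step_name, h.message
            FROM msdb.dbo.sysjobhistory AS h
            JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
            WHERE j.name = ? AND h.run_status = 0     -- 0 = failed
            ORDER BY h.instance_id DESC
            """,
            ("Daily backup",),
        ).fetchall()
        conn.close()

        if rows:
            body = "\n\n".join(f"{step}:\n{message}" for step, message in rows)
            msg = EmailMessage()
            msg["Subject"] = "Daily backup failures"
            msg["From"] = "sql@example.com"
            msg["To"] = "dba@example.com"
            msg.set_content(body)
            with smtplib.SMTP("smtp.example.com") as smtp:
                smtp.send_message(msg)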

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "persistent disks": https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions. I have read that FreeBSD can take instant disk snapshots, so if our databases are put into a consistent state (block all writes and flush buffers to disk), I would assume I could take snapshots every hour without a service interruption of more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, so as to save on how much disk space is actually used? If a rollback needed to be done, how long does it typically take? Is a rollback also instantaneous? Thanks!

    Read the article

  • Recommendations for SSD for server and database use?

    - by Tony_Henrich
    SSDs are a new technology and they are constantly improving. A lot of the posts here were written in 2009, when SSDs were less mature and not as fast. What was recommended back then is probably out of date today because of better options. The SSD is to hold SQL Server databases. Size is probably 128GB. The database is used with a CMS and web server, so web pages need to get their data and render as fast as possible. Which modern SSD is recommended for such a use? Is there an SSD better than the Intel X25-E/M in terms of performance/cost? (I am also evaluating the cost of RAM + UPS (semi-persistent) vs. an SSD for the same number of gigabytes. No RAID is involved.)

    Read the article

  • Problems with the backup

    - by marcodv
    I have a script which runs around 4 o'clock in the morning to back up all the MySQL databases and the config files for 250 Linux VMs. The problem is that it takes ages to complete, and more than 50% of these VMs need more than 8 hours to finish. More or less all the VMs have the same configuration: the same amount of RAM, the same amount of disk space, the same number of CPUs, and Debian 6.0.5. I am saving these backups on Amazon S3, because it is the cheapest solution I've found. Now my question is: does anyone have solutions or suggestions for this? On one blog I've read that the ionice and nice combination could be a good workaround. Any thoughts?
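
    To act on the ionice/nice idea, each backup command can simply be launched through both wrappers so the dump runs at the lowest I/O and CPU priority; a minimal sketch follows, with hypothetical database credentials and output path.

        # Minimal sketch: run mysqldump under ionice (best-effort class, lowest
        # priority) and nice (+19) so the nightly backup competes less with the
        # VM's normal workload. Credentials and output path are hypothetical.
        import subprocess

        cmd = [
            "ionice", "-c2", "-n7",        # best-effort I/O class, lowest priority
            "nice", "-n", "19",            # lowest CPU priority
            "mysqldump", "--all-databases", "--single-transaction",
            "-uroot", "-psecret",
        ]
        with open("/var/backups/all-databases.sql", "wb") as out:
            subprocess.run(cmd, stdout=out, check=True)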

    Read the article

  • MacPorts Apache, PHP, DB: how do I test on another device?

    - by brokenindexfinger
    My supervisor suggests using MacPorts to install and manage different versions of Apache and PHP, as well as both MySQL and PostgreSQL databases. The idea is that we need to test our platform on different versions of each. So far I've just been using the default Apache installation on OS X Lion, and the default PostgreSQL installation. My question is this: once I turn Web Sharing off and proceed with a custom Apache 2 setup based in /opt/local/, how do I broadcast my machine's IP to other devices for testing? With Web Sharing, I can get my machine's IP and use that to test with an iPad and iPhone. Will that still be the case, and if so, how do I do it?

    Read the article

  • Experience with Intel X25-M 160GB and Oracle

    - by derobert
    We're considering building an Oracle database with 12 Intel X25-M G2 160GB drives in software RAID10. It'd be running Linux. The database gets some very heavy write activity during the early-morning data load; other than that, it is mostly read-only (and the read load is fairly minimal). We're currently running on 11 150GB VelociRaptors (also Linux software RAID10), and are hoping the X25-M will speed up the data load. We currently have redo on different disks than the rest of the data. I'm wondering a few things: Any experience with using X25-M drives for databases? The X25-E is unfortunately beyond our budget. Would it hurt to separate redo off to some magnetic (non-SSD) drives, say 2 (RAID1) or 4 (RAID10) Seagate Constellations?

    Read the article

  • How do I know if my disks are being hit with too many IO reads or writes, or both?

    - by Mark F
    Hi all. I know a bit about disk I/O and the bottlenecks related to it, especially where databases are concerned. But how do I really know what the maximum I/O numbers will be for my disks? What metric might be available to me for working out, roughly but as a good approximation, how much I/O capacity I have left? I've seen it before where things are bubbling along nicely and then all of a sudden everything grinds to a halt, and it ends up being an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but didn't give the answer I'm after: http://serverfault.com/questions/61510/linux-how-can-i-see-whats-waiting-for-disk-io. So is my best bet just looking at 'CPU IO wait'? There must be a more proactive method for this. Best, M
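
    One relatively simple step beyond CPU iowait is to sample the kernel's per-disk counters over an interval and compare throughput and busy time against what the disks are known to sustain; below is a rough Python sketch using psutil, where the device name and the "benchmarked ceiling" figure are assumptions you would fill in from your own testing (e.g. with fio or dd).

        # Minimal sketch: sample per-disk I/O counters over an interval and report
        # throughput and approximate utilisation. The device name and ceiling
        # (from your own benchmarks) are assumptions. busy_time is Linux-only.
        import time
        import psutil

        DISK = "sda"                 # hypothetical device
        BENCHED_MAX_MBPS = 120.0     # hypothetical ceiling measured beforehand

        INTERVAL = 5.0
        before = psutil.disk_io_counters(perdisk=True)[DISK]
        time.sleep(INTERVAL)
        after = psutil.disk_io_counters(perdisk=True)[DISK]

        read_mb = (after.read_bytes - before.read_bytes) / 1e6
        write_mb = (after.write_bytes - before.write_bytes) / 1e6
        # busy_time is milliseconds the device spent doing I/O during the interval
        busy_pct = (after.busy_time - before.busy_time) / (INTERVAL * 1000) * 100

        print(f"read:  {read_mb / INTERVAL:6.1f} MB/s")
        print(f"write: {write_mb / INTERVAL:6.1f} MB/s")
        print(f"busy:  {busy_pct:5.1f}%  "
              f"({(read_mb + write_mb) / INTERVAL / BENCHED_MAX_MBPS * 100:.0f}% of benchmarked ceiling)")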

    Read the article

  • Apache httpd + FreeTDS hangs until restarted

    - by Jordan Reiter
    Every so often, requests to a Linux server (say, linux.example.org) where the web app (Django) pulls in data from a SQL Server database via FreeTDS will hang. Requests on other servers pointing to the database still work, as do requests on linux.example.org that use local MySQL databases. Only this server plus FreeTDS appears to be affected. Restarting httpd makes the database connections work correctly again. What could cause this problem? Using: CentOS 5.9, FreeTDS 0.91, Apache httpd 2.2.3.
    /etc/odbc.ini:
        [DSN]
        Description = SQL Server 2005
        Driver = FreeTDS
        ;Database = dbname
        Servername = SERVERNAME
        ;TDS_Version = 8.0
    /etc/freetds.conf:
        [SERVERNAME]
        driver = /usr/lib64/libtdsodbc.so
        host = db.example.org
        port = 1433
        tds version = 8.0
        client charset = UTF-8

    Read the article

  • How can I simulate production servers at home with Linux VMs [closed]

    - by user31
    I am thinking of making a small simulation of how big companies run their systems, in my home environment, to get a feel for it. I have a server with 8GB RAM and a quad-core processor. I am thinking of the following setup, if that's possible; because I have not worked with bigger companies, I want to know how I can do it. I am thinking of creating 5 virtual machines:
    VM1 will be the database server and will have all the databases, like MySQL, PostgreSQL, SQLite, MongoDB and Oracle.
    VM2 will be the web server and will have Apache and Tomcat installed.
    VM3 will be the file server, where I will have all the web sites' files.
    VM4 I am thinking of as the main box where I can install Python, PHP and Java/J2EE sites, but I'm not sure.
    VM5 will have Server 2008 for C#/.NET applications.
    My main idea is to be able to host sites in PHP, Python, and Java/J2EE with Spring. Is my setup OK, or am I missing a few things? Please guide me to the correct setup so that I can learn.

    Read the article

  • What processes would make the selling of a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive. In my personal collection is an increasing number of relatively new drives, only put on the shelf due to upgrades; in the past I have never sold hard drives with used machines, for fear of having the encrypted password databases that were stored on them compromised, but as their numbers increase I find myself more tempted to do so (due to the $$$ I know they're worth on the used market). What tools exist, then, to make the recovery of data from said drives difficult enough that selling them could be justified? Another way of saying this: what tools/methods exist for making attempts at recovering any data previously stored on a given drive impractical? I assume that it is always possible to recover data from a drive that is in working order. I also assume there are some methods for preventing recovery of data, given a program called DBAN and one particular feature in Mac OS X that deals with permanently deleting data from a disk.

    Read the article
