Search Results

Search found 30636 results on 1226 pages for 'database versioning'.

  • Windows 2008 Server on VMWare (hardware)

    - by Bill
    I want to set up a single server to run a few virtual servers for our datacenter. I do not have a lot of money to spend, so I am trying to get the most bang for the buck. My budget is around $2,000. So I was thinking about building the following as the VMware physical server: Intel Core i7 950 (LGA1366, 4 cores, 8 threads); Gigabyte GA-X58-USB3 LGA 1366 X58 ATX Intel motherboard; 24 GB of Viper II Series, Sector 7 Edition, Extreme Performance DDR3-1600 (PC3-12800) CL9 triple-channel memory; VelociRaptor 300GB 10,000 RPM SATA 3.0Gb/s 3.5" internal hard drive. I am planning on running the newest version of VMware ESXi (64-bit). On this host I am planning on running several virtual servers: Windows 2008 Server R2 w/ IIS (several custom-built ASP.NET apps); Windows 2008 Server R2 w/ MS SQL 2008 database server; a Linux web server w/ several WordPress blogs (XAMPP?); Windows 2008 Server R2 w/ IIS (DEV ENVIRONMENT); Windows 2008 Server R2 w/ MS SQL 2008 database server (DEV ENVIRONMENT). In your opinion, will this hardware be sufficient to run the above load with room for possibly 2-3 more virtual machines (probably lightweight web servers)?

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, and we also want to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Management Studio, and neither do I want to have to create an Access database with links for each current database or ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • Drupal 7 doesn't detect MySQL on CentOS, but WordPress 3 does?

    - by jyaworski
    Hey guys. I'm running CentOS 5.5 here with Apache2, PHP5, and MySQL 5. My WordPress install on the same system runs perfectly, but the Drupal 7 install script only detects SQLite. The mysql module is enabled in php.ini, so that isn't the problem. Do you think it could be something with Drupal 7, or with my PHP install? I tested it on localhost (I'm essentially running ArchLinux with Apache) and it installs just fine. I don't see a difference between my local php.ini and my server php.ini. I get this when accessing install.php on the server: "SQLite. The type of database your Drupal data will be stored in. Your PHP configuration only supports a single database type, so it has been automatically selected." Edit: the mysql PDO module is installed already.

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
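
    If the flat-file route is taken, the standard way to make each write survive a power cut is to write the packet to a temporary file, fsync it, atomically rename it into place, and fsync the directory so the rename itself is durable. Below is a minimal sketch of that pattern; the spool directory and file-naming scheme are illustrative assumptions, not part of the original setup, and the same sequence is expressible from Java 6 (FileDescriptor.sync() plus a rename on the same filesystem):

        import os
        import time
        import uuid

        SPOOL_DIR = "/var/spool/buffer"   # hypothetical spool directory

        def store_packet(payload: bytes) -> str:
            """Durably store one packet: temp file + fsync + atomic rename + dir fsync."""
            os.makedirs(SPOOL_DIR, exist_ok=True)
            name = "%d-%s.pkt" % (int(time.time() * 1000), uuid.uuid4().hex)
            tmp_path = os.path.join(SPOOL_DIR, "." + name + ".tmp")
            final_path = os.path.join(SPOOL_DIR, name)
            with open(tmp_path, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())            # data is on disk before the rename
            os.rename(tmp_path, final_path)     # atomic on POSIX filesystems
            dir_fd = os.open(SPOOL_DIR, os.O_DIRECTORY)
            try:
                os.fsync(dir_fd)                # make the new directory entry durable
            finally:
                os.close(dir_fd)
            return final_path

    Grouping the files into per-hour or per-day subdirectories keeps directory sizes manageable; an alternative to hand-rolled files is an embedded store designed to tolerate power loss, such as SQLite with PRAGMA synchronous=FULL, which journals every write.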

    Read the article

  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to create a single "public subnet" or have the wizard create a "public subnet" and a "private subnet". Initially, the public and private subnet option seemed good for security reasons, allowing web servers to be put in the public subnet and database servers to go in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon Elastic IP with the EC2 instance. So it seems that with just a single public subnet configuration, one could simply opt not to associate an Elastic IP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?

    Read the article

  • LAMP Stack Version Help -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available for each single part of a platform stack, such as MySQL. You can find out the latest version, the stable version, and sometimes even the percentage of people adopting it by version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example, "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult when you throw higher-level platforms into the mix, such as Drupal, Joomla, etc. I do consider "wizard"-like installers, such as the Bitnami installers, to be beneficial. However, I always wonder if those solutions cater to the least common denominator -- trying to be all things to everyone -- and as such, I think I'd be better off installing things on my own. Such solutions also seem slow to adopt new versions, slower than necessary, I suspect. Is there a website or tool that consolidates versioning data in order to help a webmaster choose which versions to deploy or which upgrades to install, in consideration of all the other parts of the stack?

    Read the article

  • Clean out a large MediaWiki text table

    - by Bart van Heukelom
    I just discovered that an old MediaWiki of mine was infested with spam, and the database table named "text" (which contains the page content) is 3 GB. I've deleted all the spam pages manually, but the table is still the same size. I wonder how it got to 3 GB anyway; there wasn't that much spam (about a hundred medium-sized pages). How can I get rid of this mess? If you want to inspect the wiki, it's over here. The database is MySQL 5.0.75.
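
    Two general points of MediaWiki/MySQL behaviour are relevant here, stated as background rather than a diagnosis of this particular wiki: deleting pages through the UI does not remove their revision text from the text table (the rows are kept so deletions can be undone; MediaWiki ships maintenance scripts such as deleteArchivedRevisions.php to purge them), and MySQL does not return freed space to the filesystem until the table is rebuilt. A minimal sketch of that rebuild step, assuming the PyMySQL driver and placeholder credentials:

        import pymysql  # assumes the PyMySQL driver is installed

        # Placeholder credentials; use the values from the wiki's LocalSettings.php.
        conn = pymysql.connect(host="localhost", user="wikiuser",
                               password="secret", database="wikidb")
        try:
            with conn.cursor() as cur:
                # Rebuilds the table and gives unused space back to the OS;
                # on a 3 GB table this can take a while and locks the table.
                cur.execute("OPTIMIZE TABLE text")
                print(cur.fetchall())
        finally:
            conn.close()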

    Read the article

  • I am trying to set up phpMyAdmin to use with a remote MySQL database on Scientific Linux release 6.2

    - by techsjs2012
    I am trying to set up phpMyAdmin to use with a remote MySQL database on Scientific Linux release 6.2. If I use the mysql command line to connect to the remote database it works great, but if I use phpMyAdmin I get "#2002 Cannot log in to the MySQL server". I have found that if I do a setenforce 0, it will work from phpMyAdmin to my remote database, but once I reboot or set SELinux enforcement back to 1 it stops working again. I know setenforce 0 is not the right thing to do, but can someone please give me detailed steps on how to get this working the right way? Thanks. I am new to Scientific Linux and have been having some issues.

    Read the article

  • Does TFS 2010 lock a project collection when it's being cloned?

    - by Hirvox
    We're planning to migrate a project collection currently hosted on TFS 2010 to TFS 2012. We want to keep the current installation running while resolving any issues that might arise, so we need to copy the current project collection to the new server. However, TFS doesn't allow us to attach a restored database backup directly. The database first must be detached from the original TFS installation. We can get around that limitation by cloning the project collection and detaching the clone, but we're not sure whether that would also impact the original project collection. Does TFS lock the original project collection while it's being cloned?

    Read the article

  • "The requested operation could not be completed due to a file system limitation" 3202

    - by user46529
    I back up a SQL Server database and it fails:

        BACKUP DATABASE dd TO DISK = '\\backupServer\backups\dd.bak'
        WITH COMPRESSION, CHECKSUM, NOFORMAT, INIT,
             BlockSize = 65536, BufferCount = 2200, MaxTransferSize = 4194304

    The backup size is 3 TB and I have 6 TB of free space on the backup server. I am using the backup parameters from the SQLCAT whitepaper. Everything works OK when I back up to a local HDD, but it always fails when I back up to the network share, after about 6 hours. I can't find out why. Thank you. Update: yes, the backup over the network is fastest and saves me 3 TB of local disk space :) Thanks for pointing to the memory issue. I left 4 GB to the OS and it worked!
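
    On the memory point: SQL Server allocates roughly BufferCount x MaxTransferSize bytes of buffer space for a backup, so the non-default settings above are not free. A quick back-of-the-envelope check (plain arithmetic, no SQL Server required):

        # Approximate memory consumed by the backup buffers with the settings
        # used in the question (names mirror the T-SQL options).
        buffer_count = 2200
        max_transfer_size = 4 * 1024 * 1024            # 4 MiB per buffer

        total_bytes = buffer_count * max_transfer_size
        print(round(total_bytes / 1024**3, 1), "GiB")  # ~8.6 GiB of buffer space

    That sizing helps explain why giving the OS and the backup more headroom made a difference.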

    Read the article

  • Can you convert an address to a zip code in a spreadsheet?

    - by moe37x3
    Given a column of street addresses with city and state but no zip in a spreadsheet, I'd like to put a formula in a second column that yields the ZIP code. Do you know a way to do this? I'm dealing with US addresses, but answers pertaining to other countries are interesting, too. UPDATE: I guess I'm mostly hoping that there's a way to do this in Google Spreadsheets. I realize that you need to access a vast ZIP code database to do this, but it seems to me that such a database is already inside Google Maps. If I put an address in there without ZIP code, I get back an address with ZIP code. If Maps can do that lookup, maybe there's a way to make it happen in Spreadsheets, too.

    Read the article

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me. Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process. My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that using the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script so that I can update them (with the script or sed) to replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
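
    A note grounded in Perforce's trigger documentation, with a sketch below: a change-content trigger does not get a filesystem path to the staged files; instead, the content of the in-flight changelist is read back through the server using the @=changelist revision specifier (p4 files @=N, p4 print //depot/file@=N). Rewriting the staged content from inside the trigger is generally not supported, which is why the usual way to stamp a change number into files is keyword expansion (a +k/ktext filetype with $Change$) or a follow-up step. The sketch below only locates the placeholder; the script name, trigger table entry and behaviour are illustrative assumptions:

        #!/usr/bin/env python
        # Hypothetical change-content trigger sketch. A matching trigger table
        # entry might look like (names are placeholders):
        #   stamp-docs change-content //depot/docs/... "python stamp.py %changelist%"
        import subprocess
        import sys

        PLACEHOLDER = b"#####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####"

        def staged_files(change):
            # 'p4 files @=N' lists the files in the in-flight changelist N
            out = subprocess.check_output(["p4", "files", "@=" + change])
            return [line.split(b"#")[0] for line in out.splitlines()]

        def staged_content(depot_path, change):
            # 'p4 print -q path@=N' streams the content as staged for this submit
            spec = "%s@=%s" % (depot_path.decode(), change)
            return subprocess.check_output(["p4", "print", "-q", spec])

        if __name__ == "__main__":
            change = sys.argv[1]
            for path in staged_files(change):
                if path.endswith(b".txt") and PLACEHOLDER in staged_content(path, change):
                    sys.stderr.write("placeholder found in %s\n" % path.decode())
            sys.exit(0)   # a non-zero exit here would reject the submit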

    Read the article

  • MS Access 2007 end user access

    - by LtDan
    I need some good advice. I have used Access for many years and I use SharePoint, but never the two combined. My newly created Access db needs to be shared with many users across the organization. The back end is SQL, and the old way to distribute the database would be placing the db on a shared drive, connecting the PCs' ODBC connections to the SQL db, and then they would open the database and have at it. This has become the old way. What is the best (and simplest) way to allow the end users to utilize a front end for data entry/editing, reporting, etc.? Can I create a link through SharePoint so the users just open it from there? Your advice is greatly appreciated.

    Read the article

  • mysql server, open 'dead' connections

    - by Jeff
    My basic question is: what kind of impact does this have on the server? Let's say, for example, there is an older program in my company that opens connections to a MySQL database server at a high rate (basically everything users do with the application opens a server connection). However, this application was not designed to dispose of the connections after they were created. A lot of the time the connections remain open but are never used again: open 'dead' connections, I guess you could say. They just remain connected until the server times them out, or until an admin goes in and removes the sleeping connections manually. I'm guessing this could be responsible for the occasional "unable to connect" errors etc. that we receive from other systems that try to access the MySQL database (connection limit reached)? Could this slow down the server as well? Curious what exactly all this could cause. Thanks!
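
    Each idle connection holds a slot against max_connections (which is exactly what produces "too many connections" errors for other clients) plus its per-thread buffers, until wait_timeout (8 hours by default) closes it. A minimal sketch for measuring the problem, assuming the PyMySQL driver and placeholder admin credentials:

        import pymysql                    # assumes the PyMySQL driver
        import pymysql.cursors

        # Placeholder admin credentials.
        conn = pymysql.connect(host="dbserver", user="admin", password="secret")
        try:
            with conn.cursor(pymysql.cursors.DictCursor) as cur:
                cur.execute("SHOW PROCESSLIST")
                sleepers = [r for r in cur.fetchall() if r["Command"] == "Sleep"]
                for r in sorted(sleepers, key=lambda r: r["Time"], reverse=True):
                    print(r["Id"], r["User"], r["Host"], "idle for", r["Time"], "s")
                print(len(sleepers), "idle connections")
        finally:
            conn.close()

    Lowering wait_timeout (and interactive_timeout) on the server, or fixing the application to close or pool its connections, are the usual remedies.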

    Read the article

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, and add them to versioning, then commit the changes. I then clone the repository to my local hard drive. I then make a change in my local repository, commit it, then push back to the shared filesystem repository. The shared repo graph I get in TortoiseHG looks strange (to me). This is the shared repo: This is the local repo: On the shared repo, the working directory always shows up on the top, then the graph goes 'down' to rev. 0 then back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository, I don't get that dangling working directory at the top of the list - everything is in one branch. I also noticed that on my local repository, I can back out the tip revision with no problem. On the shared filesystem repository, I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?

    Read the article

  • Connect to my virtualbox mysql server

    - by WebweaverD
    I wonder if someone here could help me; this is my setup: I am on a Windows 7 machine running an Ubuntu VirtualBox VM as my local web server and database server (MySQL). I have just got hold of a copy of Komodo, which I am running on my Windows machine and which I would like to hook up to my database. The fields it needs are hostname, port, socket, username and password. I know the username/password but am unsure what to put for the other fields. The Ubuntu VM has an IP of 192.168.0.10, which is in my hosts file as http://swishprint.dev. I hope I have asked this in the right place; any help much appreciated.
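
    As a general pointer rather than a diagnosis of this particular VM: the hostname is the guest's IP (192.168.0.10), MySQL's default port is 3306, and the socket field only applies to local Unix-socket connections, so it can be left empty for TCP. MySQL on Ubuntu binds to 127.0.0.1 by default (bind-address in my.cnf) and the account needs a remote-host grant, so both may need adjusting. A quick connection test, assuming the PyMySQL driver and a placeholder account:

        import pymysql   # assumes the PyMySQL driver; the account is a placeholder

        conn = pymysql.connect(
            host="192.168.0.10",   # the Ubuntu guest's IP from the question
            port=3306,             # MySQL's default port
            user="webdev",         # hypothetical account with a remote-host grant
            password="secret",
        )
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            print(cur.fetchone())
        conn.close()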

    Read the article

  • Ultimate way to use Picasa in a home network

    - by luisfarzati
    I've been trying a lot of approaches but still haven't found any effective solution. I have gigs of photos on a network drive (an Iomega Home Media Network Drive, plugged into my wifi router). I'd like to do two things: 1) do a Picasa import of all the photos on the drive, making Picasa organize all the files into a year/month folder structure physically; ideally, the import target directory would be the same network drive, otherwise I will have to move all the imported files from my local computer back to the drive myself. 2) Share the Picasa database over the network by uploading it to the network drive, and have me and other members of the family point our Picasas at the network database and see the photos as well as make changes (tag faces, create logical albums, etc.) to it. Is there ANY possibility of accomplishing this? Or should I be looking for another photo management app, and in that case do you know of one? Thank you!

    Read the article

  • type mismatch errors querying data from spreadsheet

    - by user2984933
    In Excel 2010 I am trying to query data in another spreadsheet. The data range in the source sheet/file is named DATABASE. The Date field in the database is formatted as a short date, and when I query the date without criteria I get a different format in the results: European-style dates (YYYY-MM-DD) with a time component. When I use criteria and a specific date in the date field of the criteria grid, using the English format MM-DD-YYYY, I get results. When I set parameters that look at cells in the destination file for the date, I get a type mismatch EVEN THOUGH THE CELLS ARE short-date formatted. This worked perfectly in my 2003 version of Excel. Now I am running Windows 7 64-bit and Office 2010 Pro. Why does the query throw a mismatch with cell references for the parameters but accept hard-coded dates in any date format? (MSQRY32.EXE)

    Read the article

  • Why does yum index get corrupted?

    - by TomOnTime
    Occasionally yum's cache gets corrupted and we see errors like this:

        error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
        error: cannot open Packages index using db3 - (-30974)
        error: cannot open Packages database in /var/lib/rpm

    The workaround is rm -f /var/lib/rpm/__db* and then the next "yum" command regenerates the data. My question is: what is likely to be causing this? Is there some common task that ignores locks or has some other problem that causes this? We have hundreds of CentOS machines and there is no pattern to which ones see this problem. It could be a "one in a million" issue, which at large scale is seen often. NOTE: I realize this is a very "open ended" question, but if an answer finds the cause, I will go back and turn the question into something more canonical that directly relates to the specific issue.

    Read the article

  • Tunneling through SSH for 1521 port access?

    - by A T
    I am developing locally on my computer, using my own Apache server with PHP configured. My database, however, is remotely located on an Oracle 11g database server. We were also given a separate remote server for hosting our .html and .php files; however, only FTP access has been provided there. Development is far too slow waiting for the FTP connection to push, so I decided to develop locally but still use the remote DB server. Unfortunately that gives me an error. I am not sure how, or where, to integrate tunnelling. Do I add something to the oci_connect HOST in my PHP file, or do I encapsulate my whole environment over SSH?
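
    The usual approach is SSH local port forwarding: an SSH session to any host that can reach the database listener (e.g. ssh -L 1521:dbhost:1521 user@gateway) makes the remote 1521 appear on localhost, and oci_connect is then pointed at localhost:1521 instead of the remote host; nothing else in the environment has to change. A minimal sketch of the same thing from Python using the third-party sshtunnel package; every hostname, path and account below is a placeholder:

        # Assumes the third-party 'sshtunnel' package (pip install sshtunnel).
        from sshtunnel import SSHTunnelForwarder

        tunnel = SSHTunnelForwarder(
            ("gateway.example.edu", 22),                          # any SSH host that can reach the DB
            ssh_username="myuser",
            ssh_pkey="/home/myuser/.ssh/id_rsa",
            remote_bind_address=("oracle-db.example.edu", 1521),  # the 11g listener
            local_bind_address=("127.0.0.1", 1521),
        )
        tunnel.start()
        print("Oracle listener now reachable on 127.0.0.1:%d" % tunnel.local_bind_port)
        # On the PHP side, point the connection at the tunnel instead of the remote host,
        # e.g. oci_connect($user, $pass, "//127.0.0.1:1521/ORCL")  # service name is a placeholder
        # Keep the tunnel open while developing, then call tunnel.stop().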

    Read the article

  • Windows, Apache and MSSQL Authentication

    - by user1114330
    I have a create-database script written in Perl. I remember it working just fine on another machine. A couple of years later, using a Vista machine, I am trying to use it again and it keeps failing. The main difference is that now I am using Apache instead of IIS. In the script the IUSR account is granted permissions, as it needs to write to the database as part of another program. IIS has been uninstalled on this machine but the IUSR account still exists. NT AUTHORITY\IUSR is also seen in the logins drop-down in MSSQL (2012). The machine is running Vista Home Edition. However, when running the script I get errors saying that NT AUTHORITY\IUSR cannot be found. I also tried COMPUTERNAME\IUSR just for the heck of it, and of course it was not found. I tried IUSR alone as well, and for some reason the user isn't being "found". Any ideas?

    Read the article

  • Transaction log is full and does not free up space

    - by titanium
    Hi, I have a database in SQL Server 2005 whose transaction log becomes full. It is using snapshot replication. I noticed the transaction log is not freeing up space, so I created an additional transaction log file. Three days have passed and the first transaction log is still full. I performed a full database backup and a transaction log backup, then tried to shrink the transaction log, but the shrink failed. Can anyone advise why shrinking the transaction log is failing? Any other recommendations on how to resolve the problem?
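
    SQL Server records why it cannot reuse log space in sys.databases (the log_reuse_wait_desc column), which is usually the quickest way to see whether replication, a missing log backup, or an open transaction is pinning the log. A small sketch of checking it, assuming the pyodbc driver and a placeholder server name:

        import pyodbc   # assumes a SQL Server ODBC driver; the server name is a placeholder

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=sql2005host;DATABASE=master;Trusted_Connection=yes"
        )
        cur = conn.cursor()
        # log_reuse_wait_desc explains why the log cannot be truncated:
        # LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, and so on.
        for name, wait_desc in cur.execute(
            "SELECT name, log_reuse_wait_desc FROM sys.databases"
        ):
            print(name, wait_desc)
        conn.close()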

    Read the article

  • Remove MySQL ibdata1 without dumping and restoring existing proper databases

    - by Halfgaar
    My MySQL server contains two databases of 100+ GB each. One was created with innodb_file_per_table and one wasn't. The one that wasn't has been dumped and is ready to be reloaded. However, the ibdata1 file is still huge and I don't have enough free space. The normal advice in this situation is to dump and remove each database, stop MySQL, remove ibdata1 and the transaction logs, and then reload the databases. My specific question is: can I leave the databases that were created with innodb_file_per_table alone? Or will they be destroyed when I remove ibdata1, even though all their files are separate? I can't afford to take this database offline to dump and reload it, and because it's already properly made with separate files per table, that would feel pretty useless.
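
    One general caution before touching ibdata1: even with innodb_file_per_table, InnoDB keeps its internal data dictionary (and the undo space) in the system tablespace, so per-table .ibd files are not fully self-contained; check the documentation for your server version before removing it. To see which tables actually have their own tablespace files, a quick filesystem-level check on the DB host (the datadir path is an assumption; use the value from my.cnf):

        # Which InnoDB tables have their own .ibd tablespace files?
        import os
        from collections import defaultdict

        DATADIR = "/var/lib/mysql"   # typical Debian layout; confirm against my.cnf

        per_table = defaultdict(list)
        for db in os.listdir(DATADIR):
            dbdir = os.path.join(DATADIR, db)
            if not os.path.isdir(dbdir):
                continue
            for f in os.listdir(dbdir):
                if f.endswith(".ibd"):
                    per_table[db].append(f[:-4])

        for db, tables in sorted(per_table.items()):
            print("%s: %d tables with their own .ibd files" % (db, len(tables)))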

    Read the article

  • How can I speed up a MySQL restore from a dump file?

    - by Dave Forgac
    I am restoring a 30GB database from a mysqldump file to an empty database on a new server. When running the SQL from the dump file, the restore starts very quickly and then starts to get slower and slower. Individual inserts are now taking 15+ seconds. The tables are MyISAM. The server has no other active connections. SHOW PROCESSLIST; only shows the insert from the restore (and the show processlist itself). Does anyone have any ideas what could be causing the dramatic slowdown? Are there any MySQL variables that I can change to speed the restore while it is progressing?
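
    For a MyISAM-only restore, the usual suspects for a load that starts fast and then crawls are the index buffers: key_buffer_size caches MyISAM index blocks and bulk_insert_buffer_size batches bulk inserts, and if they are small, index maintenance becomes increasingly disk-bound as the tables grow. A hedged sketch of raising them before re-running the restore; the sizes are illustrative, the credentials are placeholders, and the server is assumed to have spare RAM:

        import pymysql   # assumes the PyMySQL driver

        conn = pymysql.connect(host="localhost", user="root", password="secret")
        with conn.cursor() as cur:
            # Larger MyISAM index cache and bulk-insert buffer for the duration
            # of the restore (dynamic globals; requires the SUPER privilege).
            cur.execute("SET GLOBAL key_buffer_size = %d" % (1024 ** 3))          # 1 GiB
            cur.execute("SET GLOBAL bulk_insert_buffer_size = %d" % (256 * 1024 ** 2))
        conn.close()
        # Then re-run the restore, e.g.:  mysql dbname < dump.sql

    It is also worth confirming that the dump contains the ALTER TABLE ... DISABLE KEYS / ENABLE KEYS statements mysqldump normally emits, since rebuilding the indexes in bulk after the data is loaded is much faster than maintaining them row by row.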

    Read the article
