Search Results

Search found 5747 results on 230 pages for 'backup'.

Page 185/230 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • Are there any layout comparison / vssetting sharing places?

    - by Wil
    Well, I reinstalled Visual Studio 2008 and did not have a backup of my vssettings file. I did not think it was that important as I had barely customised it; however, it just doesn't feel right! The general windows feel correct, but when I switch between views (source code, Windows Forms, web editor), all the toolbars get muddled up. In the past few years I have seen so many "post your desktop" type threads, and I could swear there would be a "post your IDE" equivalent, but after looking on Google and several other programming sites I just can't find one! I don't want this turning into a "post your IDE" thread unless others want that, but can anyone point me to a site where they have done this, or even better, are there any vssettings-sharing places where you can download settings made by others?

    Read the article

  • Old solutions being recalled in Visual Studio

    - by user1437135
    I have an odd scenario that I can't figure out and would appreciate any advice. I'm running Visual Studio 2010 Pro. I have a web application solution with 6 projects. On one occasion I opened some files from a number of backup solutions to look at some historic code. I viewed them and closed the files. I did this with my current project open, and I may have rebuilt the solution with them open, but I'm not sure. I recently did a Find across the whole solution and noticed that the files from the backups are referenced as being part of the solution. How do I remove them?

    Read the article

  • Are there any e-commerce websites that use NoSQL databases?

    - by Saif Bechan
    I have read a lot lately about 'NoSQL' databases such as CouchDB, MongoDB etc. Most of the websites I have seen using them are mainly text-based websites such as The New York Times and SourceForge. I was wondering if you could apply this to websites where payment is a huge issue. I am thinking of the following issues: How well can you secure the data? Do these systems provide an easy backup/restore mechanism? How are transactions handled (commit/rollback)? I have read the following articles that cover some aspects: Can I do transactions and locks in CouchDB? Pros/Cons of document based database vs relational database. In those posts the aspect of transactions is covered; however, the questions of security and backups are not. Can someone shed some light on this subject? And if possible, does anyone know of some e-commerce websites that have successfully implemented a document-based database?
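
    On the transactions point specifically: newer versions of MongoDB (4.0 and later, running as a replica set) support multi-document transactions with commit/rollback semantics. A minimal sketch in Python with pymongo, assuming a hypothetical shop database with orders and inventory collections:

        from pymongo import MongoClient

        # assumption: a local replica-set deployment named rs0
        client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
        db = client.shop

        with client.start_session() as session:
            # commits on success, aborts automatically if an exception is raised
            with session.start_transaction():
                db.orders.insert_one({"sku": "ABC", "qty": 1}, session=session)
                db.inventory.update_one({"sku": "ABC"}, {"$inc": {"qty": -1}}, session=session)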

    Read the article

  • How do I use Mercurial?

    - by Derek
    I'm assuming Mercurial is for keeping an up-to-date website while archiving the old versions? Easy to test things and such? My question is, how exactly should I get started, and can somebody give me a crash course in using Mercurial with the following technologies: Notepad++ for coding, FTP, PHP/MySQL, jQuery and other JS libraries? I use Windows and would like to keep things fairly simple. I'm developing one website currently and want some kind of version control system in place. Or should I just stick to my current edit-file-in-Notepad++-and-upload-via-FTP method and make a backup copy of everything every once in a while? Any thoughts? EDIT: I'm following http://bugtracker.gttools.com/public/wiki/bluehost/Mercurial right now in order to try and 'install' it.
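
    As a rough orientation, not tied to the asker's setup: the everyday Mercurial cycle is init, add, commit and (optionally) push. The sketch below drives those commands from Python purely for illustration; the repository path and commit message are made up.

        import subprocess

        REPO = r"C:\sites\mysite"   # hypothetical working copy

        def hg(*args):
            subprocess.check_call(["hg"] + list(args), cwd=REPO)

        hg("init")                              # one-time: turn the folder into a repository
        hg("add")                               # start tracking any new files
        hg("commit", "-m", "edited homepage")   # record a snapshot locally
        # hg("push")                            # optional: send history to another clone/server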

    Read the article

  • Restoring Sharepoint content database

    - by jude
    Hi, my WSS_Content database got corrupted and my PC was infected by a virus. I had no backup of my WSS_Content database, so I copied the corrupt database to a separate disk, formatted, and reinstalled SharePoint with SQL Server 2005 as before (I'm using SharePoint 2007). I used the Sytools Sharepoint Recovery tool, which I found on the net and which helped me restore my corrupt WSS_Content database. Now I want to set this restored database as "the" content database for my newly installed SharePoint. I tried the steps in this link: http://www.stationcomputing.com/scblogspace/Lists/Posts/Post.aspx?ID=40 but I get stuck at step 3. Can anybody help me? I am really in a big mess and would appreciate any help. Thanks, Jude Aloysius

    Read the article

  • Git is ignoring .git directories in subdirectories

    - by Danny
    I'm using git as a backup tool and 'roaming profile' for my $HOME directory between laptop and desktop. My problem is that under $HOME I have a Development directory with multiple git projects I'm working on. Git will not let me add the subdirectories' .git folders, so to commit to these projects I have to push the changes into my $HOME git repo, pull on the laptop (where they were created and the .git dir exists) and commit there. I've read about submodules, but that's not really what I want. I just want the child .git folders to be treated like any old directory so I can move them around and back them up. Has anyone done this, or have an idea how I could?
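
    Submodules (or git subtree) are the supported route for nesting repositories; short of that, one blunt workaround is to temporarily rename the inner .git directories so the outer repository sees them as ordinary folders. A rough sketch, assuming the nested projects all live under ~/Development:

        import os

        DEV_DIR = os.path.expanduser("~/Development")   # assumption: nested repos live here

        def toggle_nested_git(hide=True):
            """Rename nested .git dirs so the outer $HOME repo treats them as plain folders."""
            src, dst = (".git", ".git.hidden") if hide else (".git.hidden", ".git")
            for root, dirs, _files in os.walk(DEV_DIR):
                if src in dirs:
                    os.rename(os.path.join(root, src), os.path.join(root, dst))
                    dirs.remove(src)   # don't descend into the repo internals just renamed

        # toggle_nested_git(hide=True)    # before committing/pushing the $HOME repo
        # toggle_nested_git(hide=False)   # afterwards, so the inner projects work again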

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Is it possible to have a duplicate of the file/db and write a logic switch to use the backup file/db if the main file is in use? Which language is best for each type of structure: Perl for the flat file and PHP for the db? Additional info: say I want to find all the cities that have the pattern "cis" in their names; which is better/faster, using regex or string functions? Please recommend a strategy. TIA
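
    One way to ground the comparison is a quick measurement: 50,000 short lines fit comfortably in memory, so a process can load the flat file once and answer each query with an in-memory scan. A small sketch in Python (the file name and pattern are made up); a literal substring test is also shown because it is cheaper than a regex when the pattern is plain text:

        import re

        with open("cities.txt") as fh:               # hypothetical 50,000-line data file
            LINES = fh.read().splitlines()           # loaded once, reused for every query

        def search_regex(pattern):
            rx = re.compile(pattern, re.IGNORECASE)
            return [line for line in LINES if rx.search(line)]

        def search_literal(text):
            text = text.lower()
            return [line for line in LINES if text in line.lower()]

        print(len(search_regex(r"cis")), len(search_literal("cis")))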

    Read the article

  • Migrating from Physical SQL (SQL2000) To VMWare machine (SQL2008) - Transferring Large DB

    - by alex
    We're in the middle of migrating from a Windows & SQL Server 2000 box to a virtualised Windows & SQL Server 2008 box. The VMware box is on a different site, with better hardware, connectivity etc. The old (current) physical machine is still in constant use. I've taken a backup of the DB on this machine, which is 21 GB; transferring this to our virtual machine took around 7+ hours, which isn't ideal when we do the "actual" switchover. My question is: how should I handle the migration better? Could I set up our current machine to do log shipping to the VM to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?

    Read the article

  • Choosing proper database for a few users application

    - by tomo
    Requirements: tiny WinForms client app (C# 4.0, WinForms or WPF); a few users working simultaneously; no database service at all - the whole engine as *.DLLs inside the client apps; database available as a shared folder on one computer; at least simple concurrency checks; compatible with NHibernate or Entity Framework / .NET 4.0; backup as simple as copying files from the shared folder - assuming no running clients at the moment; no stored procedures/triggers required; data size - a few tables and a few thousand rows after 2 years. Nice to have: user access rights; encrypted data. I'm trying to choose between MS Access, SQLite and SQL Server Compact Edition. Can you recommend which one would be best for these requirements?

    Read the article

  • Is git suitable for one developer without a server

    - by Shawn Mclean
    I am a single developer without another computer to back up my projects on. I'm looking into source control and I came across git, but all the setup tutorials are targeted at an external server. I used to use SourceGear Vault, but seeing that git is getting a lot of attention, I might as well familiarize myself with it. I do not always have internet access. Is git suitable for me? Can I be pointed in the right direction to set it up? Visual Studio 2008. Windows 7.

    Read the article

  • Best Asp.net Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertising and also give you very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share your experience with your past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirements: ASP.NET 3.5 or 4.0 supported; URL rewriter support; GZip support (dynamic, through code); initial setup support (if required); SQL Server 2005 or 2008; access to the SQL Server DB using SQL Management Studio; an environment supporting backup and restore of the DB on my own, without involving the tech support team; full-text search support; FTP support; I should be able to send at least 500 emails daily; 99.9% uptime (no matter that all web hosts say they have 99.9% uptime, it's not always true); an alert email sent when they do any maintenance or during downtime; a reasonable hosting price. In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

    Read the article

  • How to import and export only the data of a whole database in Access 2007

    - by DiegoMaK
    Hi, I have two identical databases with the same structure: database a on computer a and database b on computer b. The data of database a (a.accdb) and database b (b.accdb) are different, so in database a I have, for example, ID: 1, 2, 3 and in database b I have ID: 4, 5, 6. I need to merge the data of these databases into one database (a or b, doesn't matter) so the final database looks like ID: 1, 2, 3, 4, 5, 6. I'm searching for an easy way to do this, because I have many tables and doing it with union queries is tedious. I was looking, for example, for a backup option for data only, without the schema, as in PostgreSQL or many other RDBMSs, but I don't see this option in Access 2007. PS: only some tables could have duplicate values (I guess the PK doesn't allow a duplicate value to be copied and all other values will be copied fine); if I'm wrong please correct me. Thanks for your help.
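
    One possible shortcut, sketched below under assumptions: Access/Jet SQL can read another database file directly via the IN clause, so an INSERT INTO ... SELECT per table copies the data without touching the schema. The paths and table names are made up, the ACE ODBC driver is assumed to be installed, and rows that violate a primary key will be rejected:

        import pyodbc

        # assumption: the Access ODBC driver is installed and these file paths exist
        conn = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\a.accdb"
        )
        tables = ["Customers", "Orders"]   # hypothetical: list every table you want to merge
        for t in tables:
            # the IN clause tells Access to read the source table from the other .accdb file
            conn.execute("INSERT INTO [%s] SELECT * FROM [%s] IN 'C:\\data\\b.accdb'" % (t, t))
        conn.commit()
        conn.close()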

    Read the article

  • Best way to auto-restore a database every hour

    - by aron
    I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL Server 2008 database and restore it from the original. Red Gate Software has some awesome tools for this; however, they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I could have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web edition. I understand that the Red Gate tooling is technically better because I can set it to analyze the DB and only update the records that were altered, while the approach above is a sledgehammer.
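
    Copying the .mdf directly only works while the database is detached or the SQL Server service is stopped, so a hedged alternative sketch: keep one baseline .bak and have the scheduled task restore it each hour. The instance name, database name and file path below are assumptions; the script just shells out to sqlcmd.

        import subprocess

        TSQL = (
            "ALTER DATABASE DemoDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; "
            "RESTORE DATABASE DemoDb FROM DISK = N'C:\\backups\\demo_baseline.bak' WITH REPLACE; "
            "ALTER DATABASE DemoDb SET MULTI_USER;"
        )

        # -E uses Windows authentication; point Task Scheduler at this script hourly
        subprocess.check_call(["sqlcmd", "-S", r".\SQLEXPRESS", "-E", "-Q", TSQL])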

    Read the article

  • What mail storage should I choose for our web application: IMAP, key-value store, RDBMS, ...?

    - by tvrtko
    I have to store e-mail messages for use with our application. I have "metadata" for all messages inside a relational database, but I don't feel comfortable keeping the message content (gigabytes and terabytes of email data) inside a database. I'm currently using IMAP as the storage, but I have my doubts whether I chose correctly. First of all there is the problem of UIDVALIDITY and how to keep a permanent reference to a message inside IMAP. Second, I'm not sure if this is the most robust solution in terms of backup/restore strategies, corruption of the store, replication... The positive side is that I can query IMAP using the headers, because the data is mostly indexed. I don't know if key-value stores are a better approach (Cassandra, Tokyo Cabinet, Redis). How do they handle storing 1 KB versus 50 MB of data? How do they prevent corruption, and when corruption or device failure happens, how can I repair the store?

    Read the article

  • ValueError: too many values to unpack in a tuple

    - by falosi
    Please shed some light on why I am getting a "too many values to unpack" ValueError in my for loop. I have tried debugging it.

        naislist = [('CONTROL FILE', '0', '0', '0'), ('REDO LOG', '0', '0', '0'),
                    ('ARCHIVED LOG', '.69', '.59', '3'), ('BACKUP PIECE', '46.54', '0', '192'),
                    ('IMAGE COPY', '0', '0', '0'), ('FLASHBACK LOG', '10.15', '6.31', '82'),
                    ('FOREIGN ARCHIVED LOG', '0', '0', '0')]
        print "size of naislist is ", len(naislist)
        heading = ('MAIN MENU', 'LEVELS', 'LEVEL2', 'LEVEL3')
        rearrange = dict(zip((0, 1, 2, 3), (len(str(x)) for x in heading)))
        for tu, x in naislist:
            rearrange.update((i, max(rearrange[i], len(str(el)))) for i, el in enumerate(tu))
            rearrange[4] = max(rearrange[4], len(str(x)))
        forkit = '|'.join('%%-%ss' % rearrange[i] for i in xrange(0, 4))
        print '\n'.join((forkit % heading,
                         '-|-'.join(rearrange[i] * '-' for i in xrange(4)),
                         '\n'.join(forkit % (a, b, c, d) for (a, b, c), d in naislist)))
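
    For reference, the unpacking fails because each element of naislist is a flat 4-tuple, while "for tu, x in naislist" (and the final "for (a, b, c), d in naislist") expects a pair. A minimal sketch of one way to split each row explicitly; note that rearrange only has keys 0-3, so rearrange[4] would be the next error to address:

        for row in naislist:
            tu, x = row[:3], row[3]    # first three columns, then the trailing value
            # ... update rearrange with tu and x as before ...

        # and build the table body from the flat rows directly:
        # '\n'.join(forkit % row for row in naislist)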

    Read the article

  • Need help optimizing a NHibernate criteria query that uses Restrictions.In(..)

    - by Chris F
    I'm trying to figure out if there's a way I can do the following strictly using Criteria and DetachedCriteria, via a subquery or some other way that is more optimal. NameGuidDto is nothing more than a lightweight object that has string and Guid properties.

        public IList<NameGuidDto> GetByManager(Employee manager)
        {
            // First, grab all of the Customers where the employee is a backup manager.
            // Access customers that are primarily managed via manager.ManagedCustomers.
            // I need this list to pass to Restrictions.In(..) below, but can I do it better?
            Guid[] customerIds = new Guid[manager.BackedCustomers.Count];
            int count = 0;
            foreach (Customer customer in manager.BackedCustomers)
            {
                customerIds[count++] = customer.Id;
            }

            ICriteria criteria = Session.CreateCriteria(typeof(Customer))
                .Add(Restrictions.Disjunction()
                    .Add(Restrictions.Eq("Manager", manager))
                    .Add(Restrictions.In("Id", customerIds)))
                .SetProjection(Projections.ProjectionList()
                    .Add(Projections.Property("Name"), "Name")
                    .Add(Projections.Property("Id"), "Guid"));

            // Transform results to NameGuidDto
            criteria.SetResultTransformer(Transformers.AliasToBean(typeof(NameGuidDto)));
            return criteria.List<NameGuidDto>();
        }

    Read the article

  • xcodeproj merge fails when adding new group

    - by user1473113
    I'm currently using Xcode with Git, and I'm experiencing some trouble when merging my xcodeproj. Developer 1 creates a new group in the Xcode file tree, then commits and pushes. Developer 2, on another computer, does the same with another group name, commits and pulls (with merge). Developer 2's xcodeproj then becomes unreadable by Xcode. But when I create a new file or just drag and drop files from Finder into the repository, the merge succeeds. Has anyone experienced this kind of trouble? I'm using the following in .gitattributes:

        *.pbxproj -crlf -diff merge=union
        # Better to treat them as binary files.
        *.pbxuser -crlf -diff -merge
        *.xib -crlf -diff -merge

    and in my .gitignore:

        # Mac OS X
        *.DS_Store
        *~

        # Xcode
        *.mode1v3
        *.mode2v3
        *.perspectivev3
        *.xcuserstate
        project.xcworkspace/
        xcuserdata/
        *.xcodeproj/*
        !*.xcodeproj/project.pbxproj
        !*.xcodeproj/*.pbxuser

        # Generated files
        *.o
        *.pyc
        *.hi

        # Python modules
        MANIFEST
        dist/
        build/

        # Backup files
        *~.nib
        \#*#
        .#*

    Read the article

  • How to use multiple databases in a PHP web application?

    - by Harish
    I am making a PHP web application in which I am using MySQL as the database server. I want to make a backup of some tables from one database into another database (which has those tables in it). I created two different connections, but the table was not updated.

        $dbcon1 = mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die(mysql_error());
        $dbase1 = mysql_select_db(TEMP_DB_NAME, $dbcon1) or die(mysql_error());
        $query1 = mysql_query("SELECT * FROM emp", $dbcon1);

        // Note: the 4th argument (new_link = true) forces a genuinely separate
        // connection; otherwise mysql_connect() reuses $dbcon1 and the second
        // mysql_select_db() switches the database for both connections.
        $dbcon2 = mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD, true) or die(mysql_error());
        $dbase2 = mysql_select_db(TEMP_DB_NAME2, $dbcon2) or die(mysql_error());

        while ($row = mysql_fetch_array($query1, MYSQL_NUM)) {
            mysql_query("INSERT INTO backup_emp VALUES (null, '$row[1]', '$row[2]')", $dbcon2);
        }
        mysql_close($dbcon2);

    The code above takes the data from emp in the first database and inserts it into the backup_emp table of the other database. My original version was not working properly; is there any other way of doing this? Please help.

    Read the article

  • how to get current date and time in command line

    - by Ieyasu Sawada
    I am using mysqldump to back up a MySQL database. Now I just need to use the current date and time as the file name for the generated SQL file. How do I do that if my current command looks like this:

        mysqldump -u root -p --add-drop-table --create-options --password= onstor >c:\sql.sql

    I also found this code on this site, but I do not know how to incorporate it into my current command:

        @echo off
        For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
        For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
        echo %mydate%_%mytime%

    Please help, thanks :)
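
    A hedged alternative sketch: build the timestamped file name in Python and shell out to mysqldump (the script can then be run from Task Scheduler). The connection options mirror the command in the question, mysqldump is assumed to be on the PATH, and -p is dropped because an explicit --password= is already given:

        import subprocess
        from datetime import datetime

        stamp = datetime.now().strftime("%Y-%m-%d_%H%M")        # e.g. 2010-05-12_0930
        outfile = r"c:\sql_%s.sql" % stamp

        with open(outfile, "w") as out:
            subprocess.check_call(
                ["mysqldump", "-u", "root", "--add-drop-table", "--create-options",
                 "--password=", "onstor"],
                stdout=out,
            )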

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza. The size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what are the issues I should focus on arguing about if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Known problems with filemtime() on Windows - files getting touched arbitrarily?

    - by Pekka
    Is there a known issue that leads to the file modification times of cache files on Windows XP SP3 getting arbitrarily updated, without any actual change? Is there some service on a standard Windows XP install - backup, sync, versioning, a virus scanner - known to touch files? They all have a .txt extension. If there isn't, forget it - then I'm getting something wrong in my cache routines, and I'll debug my way through. Background: I'm building a simple caching wrapper around a slow web site on a Windows server. I am comparing the filemtime() timestamp to some columns in the database to determine whether a cached file is stale. I'm having problems using this method because the modification time of the cache files seems to get updated in between operations without me doing anything. This results in stale files being displayed. I'm the only user on the machine. The operating system is Windows XP; the web server is XAMPP Apache 2 with PHP 5.2.
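
    One way to narrow it down is to watch the cache directory and log every mtime change while the application is idle; if entries appear without a page request, something external is touching the files. A small polling sketch, shown in Python for brevity (the same loop is easy with PHP's filemtime()); the cache path is an assumption:

        import glob, os, time

        CACHE_DIR = r"C:\xampp\htdocs\cache"      # hypothetical cache folder
        seen = {}

        while True:
            for path in glob.glob(os.path.join(CACHE_DIR, "*.txt")):
                mtime = os.path.getmtime(path)
                if path in seen and seen[path] != mtime:
                    print("%s touched at %s" % (path, time.ctime(mtime)))
                seen[path] = mtime
            time.sleep(5)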

    Read the article

  • What about the SQL transaction log?

    - by Michel
    Hi, I always thought that the SQL transaction log keeps track of all the transactions done in the database, so it could help recover the database file in case of an unexpected power-down or something like that. So then, in normal usage, when the data is committed and written to disk, the log would be cleared, because all the data is nice and safe in the mdf file. Seeing the ldf file grow, and after reading a bit, I understand that this is not the case and it will keep growing until you shrink the log. Only at that point are all the committed transactions cleared and the log file shrunk. I found some stored procedures that should do this, but I also found the advice that you first have to back up the database. That last step doesn't make sense to me, so can anyone tell me if that is correct and, if so, why that is?

    Read the article

  • Accidental deletion of classes from Xcode 3.2.5

    - by Alok Srivastava
    My Classes folder was accidentally deleted, along with its references, from the Xcode project. I tried to recover it from the Trash but it was not there. However, I use SVN for backup, but after checking out the whole project, when I try to run it I get an error:

        2012-06-05 09:46:59.651 Lisnx[527:207] Unknown class LisnxAppDelegate in Interface Builder file.
        2012-06-05 09:46:59.652 Lisnx[527:207] Unknown class LisnxViewController in Interface Builder file.
        2012-06-05 09:46:59.656 Lisnx[527:207] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[ setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key viewController.'

    Read the article

  • Get the newest file from directory structure year/month/date/time

    - by Radek
    I store backups of databases in a directory structure year/month/day/time/backup_name; an example would be:

        basics_mini/2012/11/05/012232/RATIONAL.0.db2inst1.NODE0000.20110505004037.001
        basics_mini/2012/11/06/012251/RATIONAL.0.db2inst1.NODE0000.20110505003930.001

    Note that the timestamp in the backup file name cannot be used: before the automated testing starts, the server time is set to 5.5.2011. So the question is how I can get the latest file if I pass the "base directory" (basics_mini) to some function that I am going to write. My thought is to list the base directory and sort it to get the latest year, then do the same for month, day and time. I wonder if there is any "easier" solution to that in PHP.
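
    Because the year/month/day/time components are zero-padded numbers, a plain string sort of the directory paths already orders them chronologically, so one glob plus max() is enough. A sketch of that idea, shown in Python for brevity (PHP's glob() and sort()/rsort() work the same way); the base directory name comes from the question:

        import glob, os

        def newest_backup(base_dir):
            # one wildcard level each for year/month/day/time
            dirs = glob.glob(os.path.join(base_dir, "*", "*", "*", "*"))
            if not dirs:
                return None
            newest_dir = max(dirs)                   # string order == chronological order here
            files = sorted(os.listdir(newest_dir))
            return os.path.join(newest_dir, files[0]) if files else None

        print(newest_backup("basics_mini"))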

    Read the article
