Search Results

Search found 34699 results on 1388 pages for 'database backup'.

  • KISS: Simple C# application which communicates with a RESTful web service.

    - by Workshop Alex
    Following the KISS principle, I suddenly realised the following: in .NET, you can use the Entity Framework to wrap a database. This model can be exposed as a web service through WCF, and that web service would have a very standardized definition. A client application could therefore be written to consume any such RESTful web service. I don't want to re-invent the wheel, and it wouldn't surprise me if someone has already done this, so my question is simple: has anyone already created a simple (desktop, not web) client application that can consume a RESTful service based on the Entity Framework and that allows the user to read and write data directly through this service? Otherwise, I'll just have to "invent" this myself. :-)

    The problem is, the database layer and RESTful service are already finished. The RESTful service will only stay in the project during its development phase, since we can use the database-layer assembly directly from the web applications that are built around it. When the web application is deployed, the RESTful services are simply left out of the deployment. But the database has a lot of data to manage, spread over nearly 50 tables. When developing against a local database, we have direct access to the database, so I wouldn't need this tool there. Once deployed, the web application is the only way to access the data, so I could not use this tool either. But we also have a test phase where the database is stored on another system outside the local domain, and this database is not available to developers. Only administrators have direct access to it, making tests a bit more complex. Through the RESTful service, however, I can still access the data directly. Thus, when some test goes wrong, I can repair the data through this connection, or just create a copy of the data for tests on my local system.

    There's plenty of other functionality, and it's even possible to open the URL of a table service straight in Excel or XMLSpy to see the contents. But when I want to write something back, I have to write special code to do just that. A generic tool that would allow me to access and modify the data would be easier. Since it's a generic setup around ADO.NET Data Services, this should be reasonably easy too. So I could build it myself, but I hoped someone else had already done something similar. It appears that no such tool exists yet...
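
    Since the service in question is an ADO.NET Data Services endpoint, each entity set is exposed as an Atom/XML feed over plain HTTP, which is what makes a generic client feasible. A minimal sketch of the reading half (the service URL and the entity set name "Items" are illustrative assumptions, not from the project described above):

        // Generic read against an ADO.NET Data Services feed (C#).
        using System;
        using System.Net;
        using System.Xml.Linq;

        class FeedReader
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // A GET on an entity set URL returns an Atom feed of its rows
                    string atom = client.DownloadString(
                        "http://localhost/MyService.svc/Items");
                    XNamespace a = "http://www.w3.org/2005/Atom";
                    foreach (XElement entry in XDocument.Parse(atom).Descendants(a + "entry"))
                        Console.WriteLine(entry.Element(a + "id").Value);
                }
            }
        }

    Writing back works the same way in principle: a POST of an Atom entry to the entity set URL inserts a row, which is the part a generic tool would have to generate per table.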

  • Program shortcuts disappearing in Windows Mobile 2003, any way to get them back?

    - by Carlisle White
    I have a WM2003 device with some programs installed on it and a full backup created and saved to an SD card. If the device runs out of charge for some time (or the battery is removed), everything is reset back to defaults, so the custom programs and configs are gone. When this happens, I restore the full backup to put everything back to normal again. But I've recently installed TomTom Navigator 7 and, for some reason, its shortcut in the "Programs" section is not saved when creating a full backup (with the provided eBackup app), and the installation doesn't create a shortcut on the main screen (as version 6 used to do). Is there any way to make this shortcut persistent? Is there any way to create custom shortcuts in the Programs section or on the main screen (preferably)? Thank you very much for your help, anything is welcome.

  • crontab not executing all lines

    - by kiasecto
    I have a sudo crontab like this to sync time:

        # m h dom mon dow command
        0 6 * * * ntpdate 10.3.3.3 >> /var/mylog/ntp.log
        0 7 * * * /var/mylog/backup.sh >> /var/mylog/backup.log

    The problem I am having is that the first line (ntpdate) never seems to execute. If I run it manually with sudo, that line works. cron does run backup.sh at 7, but it never runs the ntp sync at 6. The syslog doesn't seem to show anything. The system is Ubuntu 10.04 LTS.

  • asus eee pc 1018p os installation

    - by cornerback84
    My friend has an Asus Eee PC 1018P. It has no CD/DVD drive (nor does he have a USB CD/DVD drive). The OS wasn't working fine, so we decided to restore the system using the provided OS backup on the hard drive. But midway through the installation it was interrupted and the computer was restarted (not a hardware or software issue; we did it ourselves). Now we cannot run the backup. We also tried to install Windows 7 through USB, but as soon as we start to install the OS, it says that some device driver is missing. It turns out that the OS needs a USB device driver to continue. The machine has USB 3.0, so maybe that's why the OS needs the driver. I tried disabling 3.0 and enabling 2.0, but then it does not boot from the USB drive for some reason. We are stuck: the backup doesn't run, and when booting from USB it says it needs a device driver. Does anyone have any idea what we should do?

  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump for the backup:

        mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
        mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server setting, and for the client side I've set it by running this command:

        mysql -u -p --max_allowed_packet=1G

    And I have verified that the client and server sides have the same value. This checks the client-side value, according to this forum posting http://forums.mysql.com/read.php?35,75794,261640:

        mysql> SELECT @@MAX_ALLOWED_PACKET;
        +----------------------+
        | @@MAX_ALLOWED_PACKET |
        +----------------------+
        |           1073741824 |
        +----------------------+
        1 row in set (0.00 sec)

    And this checks the server value setting:

        mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
        | max_allowed_packet | 1073741824 |

    I have run out of ideas, and have tried searching within Experts Exchange and googling for solutions, but so far none has worked. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html Anyone please advise, thank you.

  • Bulk inserts into sqlite db on the iphone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary with arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the runloop blocks for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation?

        NSString* statement;
        statement = @"BEGIN EXCLUSIVE TRANSACTION";
        sqlite3_stmt *beginStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }
        if (sqlite3_step(beginStatement) != SQLITE_DONE) {
            sqlite3_finalize(beginStatement);
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }

        NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970];

        statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)";
        sqlite3_stmt *compiledStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) {
            for (int i = 0; i < [items count]; i++) {
                NSMutableDictionary* item = [items objectAtIndex:i];
                NSString* tag = [item objectForKey:@"id"];
                NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash];
                NSInteger timestamp = [[item objectForKey:@"updated"] intValue];
                NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item];

                sqlite3_bind_int(compiledStatement, 1, hash);
                sqlite3_bind_text(compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_int(compiledStatement, 4, timestamp);
                sqlite3_bind_blob(compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT);

                while (YES) {
                    NSInteger result = sqlite3_step(compiledStatement);
                    if (result == SQLITE_DONE) {
                        break;
                    } else if (result != SQLITE_BUSY) {
                        printf("db error: %s\n", sqlite3_errmsg(database));
                        break;
                    }
                }
                sqlite3_reset(compiledStatement);
            }
        }

        timestampB = [[NSDate date] timeIntervalSince1970] - timestampB;
        NSLog(@"Insert Time Taken: %f", timestampB);

        // COMMIT
        statement = @"COMMIT TRANSACTION";
        sqlite3_stmt *commitStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }
        if (sqlite3_step(commitStatement) != SQLITE_DONE) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }

        sqlite3_finalize(beginStatement);
        sqlite3_finalize(compiledStatement);
        sqlite3_finalize(commitStatement);

  • SQL Maintenance Cleanup Task 'Success' But not deleting files

    - by Seph
    I have a maintenance plan set up for the databases on a server (SQL Server 2008). Part of the backup is a Maintenance Cleanup Task. The task that 'succeeds' is set up as:

    - Delete backup files
    - Correct folder (same path as the backup task)
    - File extension: bak (NOT .bak)
    - Delete files older than: 20 hour(s)

    I have other similar cleanup tasks in the same maintenance plan which work fine. This plan has worked fine in the past. I just noticed that last night it reported 'success' and the rest of the plan continued, yet the file from 2 days ago still remains. I have checked similar questions, such as this question, and that is not the case here, as my maintenance task worked fine two days ago and for the past several weeks.

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32GB RAM, 2 Xeon quad cores) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do:

        cp /home/joe/BigFile /home/joe/BigFileCopy

    where BigFile is 20G, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it only happens when doing read/write to the same array.

  • Remove Duplicates from List of HashMap Entries

    - by HonorGod
    I have a List<HashMap<String,Object>> which represents a database, where each list record is a database row. I have 10 columns in my database. There are several rows where the values of 2 particular columns are equal. I need to remove the duplicates from the list after the list is updated with all the rows from the database. What is the efficient way? FYI - I am not able to do a DISTINCT while querying the database, because the GroupName is added to the Map at a later stage, after the database is loaded. And since the Id column is not a primary key, once you add GroupName to the Map you will have duplicates based on the Id + GroupName combination! Hope my question makes sense. Let me know if we need more clarification.
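
    The usual single-pass approach is to build a composite key from the two columns and keep only the first row seen for each key. A minimal sketch of that idea (written in C# rather than Java; the column names Id and GroupName come from the question, the separator choice is an assumption):

        using System.Collections.Generic;

        static class RowDedup
        {
            // Keep the first row seen for each (Id, GroupName) combination.
            public static List<Dictionary<string, object>> Dedupe(
                List<Dictionary<string, object>> rows)
            {
                var seen = new HashSet<string>();
                var unique = new List<Dictionary<string, object>>();
                foreach (var row in rows)
                {
                    // "|" as separator assumes neither value contains it
                    string key = row["Id"] + "|" + row["GroupName"];
                    if (seen.Add(key))   // Add returns false for an already-seen key
                        unique.Add(row);
                }
                return unique;
            }
        }

    The same shape works in Java with a HashSet<String>; the whole pass is O(n) instead of the O(n²) of comparing every pair of rows.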

  • Prevent Win7 boot loader from taking over the WinXP boot loader

    - by Chris
    My setup: one physical hard drive (500GB, divided equally into 2 partitions):

    - Windows XP partition (current OS)
    - Empty partition where I will be installing Windows 7

    My question is: how do I prevent the Windows 7 boot loader from taking over my Windows XP boot loader when installing the new OS? The reason I am asking is that I already have a ghosted backup of my WinXP partition, and if I ever need to restore my XP partition using that backup, would it not overwrite the Windows 7 boot loader that was placed in the XP partition with the one from the backup, thus making Windows 7 unable to boot? Also, what would happen if I decided to delete the Windows XP partition altogether somewhere down the road, and along with it the Win7 boot loader that was placed there; wouldn't that cause the system not to boot at all? To avoid these issues, I simply want to make sure that BOTH the Win7 and WinXP boot loaders are available on their respective partitions and do not interfere with each other in any way. Is this possible? Thx, Chris

  • Using multiple wifi connections simultaneously on Windows

    - by Salman A
    My office PC has one wireless network card, and there are three available WiFi connections: primary, backup, and backup of a backup (grin). Is it possible for me to use all three simultaneously? If this results in an increase in bandwidth, that's well and good, but the primary reason is that every now and then one of the networks fails, and I have to switch back and forth between the available networks by disconnecting, viewing available networks, and connecting to the next one, hoping it's running. Do I need more than one network card, or some software, e.g. a proxy?

  • How do I make more available space on a Time Machine hard disk?

    - by Daryl Spitzer
    I upgraded my MacBook Pro to Snow Leopard and made some other changes that have caused my next Time Machine backup to be quite large. Prior to the upgrade, my backup drive had filled up, so Time Machine was deleting old backups to make room for new ones. When Time Machine started the first backup after the upgrade, it displayed a message that it was freeing up space, but it wasn't able to free up enough. (The disk has 320 GB capacity.) How can I free up more space on the disk (without reformatting or deleting all the existing backups)? I don't want to recklessly delete files and take the risk of confusing Time Machine.

  • ESXI ftpput fails Syntax problem

    - by Datapimp23
    I'm trying to ftpput my virtual machine dirs to our NAS, which doesn't support NFS, only FTP and Samba. So I'm in the ESXi console and enter the following command:

        ftpput ipaddress /vmfs/volumes/4a1157e1-be81171a-1b39-001d09080124/VMNAME /Backup

    /Backup is a public share on the NAS; I can access it through any FTP client. After I enter the command, I get the following:

        ftpput: can't open 'Backup': No such file or directory

    I'm kind of in the dark here. Any suggestions?

  • sql locking on silverlight app

    - by immuner
    Hi, I am not sure if this is the correct term, but this is what I'd like to do. I have an application that uses an MSSQL database. This application can operate in 3 modes:

    - mode 1) the user does not alter, but only reads, the database
    - mode 2) the user can add rows (one at a time) to a table in the database
    - mode 3) the user can alter several tables in the database (one person at a time)

    Question 1) How can I ensure that when a user is in mode 3, the database will "lock", so that all logged-in users operating in mode 2 or mode 3 are unable to change the database until he finishes? (One possible approach is sketched below.)

    Question 2) How can I ensure that while there are several users in mode 2, there will be no conflict while they all update the table? My guess here is that before adding a new row, you make a server query for the table's current unique keys and then add the new entry. Will this be safe enough, though? Thanks
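
    For question 1, SQL Server has application locks, which give exactly this kind of coarse, cross-connection "edit lock" independent of any table. A minimal sketch of the idea (sp_getapplock is a real SQL Server procedure; the lock name and the use of "Shared"/"Exclusive" for mode-2/mode-3 users are assumptions about how you'd wire it up):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class EditLock
        {
            // Mode 3 asks for "Exclusive"; mode 2 writers ask for "Shared".
            // A session holding Exclusive blocks all Shared requests, and vice versa.
            public static bool TryAcquire(SqlConnection conn, string lockMode)
            {
                using (var cmd = new SqlCommand("sp_getapplock", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Resource", "AppWideEditLock"); // illustrative name
                    cmd.Parameters.AddWithValue("@LockMode", lockMode);
                    cmd.Parameters.AddWithValue("@LockOwner", "Session"); // held until released or disconnect
                    cmd.Parameters.AddWithValue("@LockTimeout", 0);       // fail immediately if blocked
                    var rc = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
                    rc.Direction = ParameterDirection.ReturnValue;
                    cmd.ExecuteNonQuery();
                    return (int)rc.Value >= 0;  // 0 or 1 = granted, negative = refused
                }
            }
        }

    Releasing is the mirror call to sp_releaseapplock with the same resource name and owner. Note that with a Silverlight client this code would live in the service tier, since Silverlight cannot open SqlConnections directly.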

  • Bacula Director and Storage in LAN

    - by B14D3
    I have two networks, a LAN and a DMZ. Machines in the DMZ are accessible from the internet (only over HTTP). In the LAN I have servers that see all LAN and all DMZ machines, but machines in the DMZ don't see any LAN servers. Machines in the LAN have access only to the LAN and the DMZ; they have no direct access to the internet and no access from the internet.

        DMZ <------ LAN
        DMZ ---X--> LAN

    I'm planning to set up Bacula as the major backup system. My plan is to install the Bacula Director and Storage daemon on the same server in the LAN, for safety reasons. So my question is: will this configuration work? Is it possible for the Bacula Director and Storage daemon installed on a server in the LAN to back up servers that are in my DMZ? Or, in this network configuration, should Bacula be in the DMZ? (If so, will I still be able to back up the LAN servers with it?)

  • Time capsule on windows 7

    - by Kiva
    Hi guys, I have a Time Capsule to back up my MacBook Pro, and everything works fine with it. Now, my girlfriend has a PC on Windows 7. She wants to back up her PC to the Time Capsule with Cobian Backup. But her PC doesn't see the Time Capsule, so it's impossible to connect to it. The Time Capsule is connected to my ADSL box over WiFi, and the Mac and the PC are connected to the box over WiFi as well. Why doesn't Windows see the TC? I installed "Bonjour" on the PC but nothing worked. Thanks for your help.

  • Use a folder of xml files as data source for nhibernate

    - by Bart Van Eyndhoven
    I'm going to start writing NUnit tests for a few classes in my project. A certain number of these classes use data gathered through NHibernate from a SQL Server 2008 database. The part of the program I'm about to test is very specific (and complicated). Therefore I have made a folder of XML files. Combined, the XML files reproduce the database structure: each XML file corresponds to a table in the database, and the data in the XML files is consistent with the database. Is there a way to use this folder of XML files as a data source for NHibernate? I mean: can I use NHibernate to gather my test data (which I have specifically chosen) instead of data from the database? In this way, I could usefully test this component without corrupting the (test) database for future tests.
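
    NHibernate itself only talks to ADO.NET databases, not to XML files directly, so the usual workaround is to load the XML into something a test database can be seeded from. A minimal sketch of the loading half (the folder path is hypothetical; DataSet.ReadXml infers each table's schema from its file); the rows could then be inserted into an in-memory SQLite database that NHibernate is pointed at for the tests:

        using System;
        using System.Data;
        using System.IO;

        class XmlTestData
        {
            // Load one DataTable per XML file into a combined DataSet.
            static DataSet LoadFolder(string folder)
            {
                var all = new DataSet("TestData");
                foreach (string file in Directory.GetFiles(folder, "*.xml"))
                {
                    var one = new DataSet();
                    one.ReadXml(file);   // infers table name and columns
                    all.Merge(one);
                }
                return all;
            }

            static void Main()
            {
                DataSet ds = LoadFolder(@"C:\TestData\Tables");  // hypothetical path
                foreach (DataTable t in ds.Tables)
                    Console.WriteLine("{0}: {1} rows", t.TableName, t.Rows.Count);
            }
        }

    The attraction of the in-memory route is that each test run starts from exactly the chosen XML data, so nothing can corrupt the shared test database.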

  • Mocking a concrete class : templates and avoiding conditional compilation

    - by AshirusNW
    I'm trying to test a concrete object with this sort of structure:

        class Database {
         public:
          Database(Server server) : server_(server) {}

          int Query(const char* expression) {
            server_.Connect();
            return server_.ExecuteQuery();
          }

         private:
          Server server_;
        };

    i.e. it has no virtual functions, let alone a well-defined interface. I want a fake database which calls mock services for testing. Even worse, I want the same code to be built against either the real version or the fake, so that the same testing code can both:

    - test the real Database implementation - for integration tests
    - test the fake implementation, which calls mock services

    To solve this, I'm using a templated fake, like this:

        #ifndef INTEGRATION_TESTS
        class FakeDatabase {
         public:
          FakeDatabase() : realDb_(mockServer_) {}

          int Query(const char* expression) {
            MOCK_EXPECT_CALL(mockServer_, Query, 3);
            return realDb_.Query();
          }

         private:
          // in non-INTEGRATION_TESTS builds, Server is a mock Server with
          // extra testing methods that allow mocking
          Server mockServer_;
          Database realDb_;
        };
        #endif

        template <class T>
        class TestDatabaseContainer {
         public:
          int Query(const char* expression) {
            int result = database_.Query(expression);
            std::cout << "LOG: " << result << std::endl;
            return result;
          }
         private:
          T database_;
        };

    Edit: Note the fake Database must call the real Database (but with a mock Server). Now, to switch between them, I'm planning the following test framework:

        class DatabaseTests {
         public:
        #ifdef INTEGRATION_TESTS
          typedef TestDatabaseContainer<Database> TestDatabase;
        #else
          typedef TestDatabaseContainer<FakeDatabase> TestDatabase;
        #endif
          TestDatabase& GetDb() { return _testDatabase; }
         private:
          TestDatabase _testDatabase;
        };

        class QueryTestCase : public DatabaseTests {
         public:
          void TestStep1() {
            ASSERT(GetDb().Query(static_cast<const char *>("")) == 3);
            return;
          }
        };

    I'm not a big fan of that compile-time switching between the real and the fake. So, my question is: is there a better way of switching between Database and FakeDatabase? For instance, is it possible to do it at runtime in a clean fashion? I like to avoid #ifdefs. Also, if anyone has a better way of making a fake class that mimics a concrete class, I'd appreciate it. I don't want to have templated code all over the actual test code (the QueryTestCase class). Feel free to critique the code style itself, too. You can see a compiled version of this code on codepad.

  • Extending mysqli and using multiple classes

    - by Mikk
    Hi, I'm new to PHP OOP stuff. I'm trying to create a database class and call other classes from it. Am I doing it the right way?

    The database class:

        class database extends mysqli
        {
            private $classes = array();

            public function __construct()
            {
                parent::__construct('localhost', 'root', 'password', 'database');
                if (mysqli_connect_error()) {
                    $this->error(mysqli_connect_errno(), mysqli_connect_error());
                }
            }

            public function __call($class, $args)
            {
                if (!isset($this->classes[$class])) {
                    $class = 'db_'.$class;
                    $this->classes[$class] = new $class();
                }
                return $this->classes[$class];
            }

            private function error($eNo, $eMsg)
            {
                die('MySQL error: ('.$eNo.': '.$eMsg);
            }
        }

    The db_users class:

        class db_users extends database
        {
            public function test()
            {
                echo 'foo';
            }
        }

    And how I'm using it:

        $db = new database();
        $db->users()->test();

    Is this the right way, or should it be done another way? Thank you.

  • Connecting to a 3rd party database in Joomla!?

    - by Michael
    I need to connect to another database in Joomla! that's on another server. This is for a plugin, and I need to pull some data from a table. Now, what I don't want is to use this database to run Joomla!; I already have Joomla! installed and running on its own database on its server. I want to connect to another database (ON TOP of the current one) to pull some data, then disconnect from that 3rd party database, all while keeping the original Joomla! database connection intact.

  • Upgrading from MySQL Server to MariaDB

    - by Korrupzion
    I've heard that MariaDB has better performance than MySQL Server. I'm running software that makes intensive use of MySQL, which is why I want to try upgrading to MariaDB. Please share your experiences with this conversion, along with any instructions or tips. Also, which files should I take care of when making a backup of MySQL Server, so that if something goes wrong with MariaDB I can roll back to MySQL without issues? I would use the following, but I'm not sure if it's enough to get a full backup of the MySQL Server configs and databases:

        mysqldump --all-databases

    plus a backup of /etc/mysql.

    My environment (Debian Lenny):

        uname -a
        Linux charizard 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64 GNU/Linux

    MySQL Server version: 5.0.51a-24+lenny4; MySQL client: 5.0.51a. Statistics:

        Threads: 25  Questions: 14690861  Slow queries: 9  Opens: 21428
        Flush tables: 1  Open tables: 128  Queries per second avg: 162.666
        Uptime: 1 day 1 hour 5 min 13 sec

    Thanks! PS: Rate my english :D

  • In SQL Server merge replication, how does reinitializing work?

    - by Craig Shearer
    I have set up a pull subscription to a merge publication in SQL Server. I use parameterized row filters on some tables. This works fine with the initial synchronization: just the rows matching the filter arrive in the replicated (client) database. However, at some later point I'd like to be able to synchronize the replicated database again from the server and have new rows that match the parameterized row filters appear in the client database. The documentation seems to indicate that I can call Reinitialize() to do this. However, when I try this and Synchronize again, I get an error saying that the script 'snapshot.pre' cannot be applied to the database. I've inspected the script and can see why: it's trying to drop some functions that are used by the tables in the database. It would appear that for Reinitialize() to work, it requires the database to be blank. Am I misunderstanding something here? Is there a way to make this work?

  • Deployment of a .NET application making use of SQL Server 2008

    - by Victor John Saliba
    I have searched the internet thoroughly for this type of issue; there were responses, but I haven't found a concrete solution yet. I have an application which makes use of SQL Server 2008 R2, and thus it makes connections to a database file which I have set up. The application executes successfully, makes connections with the database, and retrieves/inserts/updates data to and from the database. However, when I come to create a deployment project, i.e. a setup project, I fail to transfer my database files to other computers and make database connections there. I have checked the SQL Server 2008 prerequisite in the publish settings of the application and have also included the database files. Can anyone suggest the best way to do this type of setup? Thanks
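
    Since the application uses a database file rather than a pre-installed database, one common pattern is to ship the .mdf with the application and attach it at connect time. A minimal sketch of the idea (it assumes SQL Server Express is installed on the target machine, e.g. via the prerequisite mentioned above; the file name is hypothetical):

        using System;
        using System.Data.SqlClient;

        class DbSmokeTest
        {
            static void Main()
            {
                // |DataDirectory| resolves at runtime to the application's data folder,
                // so the same connection string works on every machine the setup targets.
                const string cs =
                    @"Data Source=.\SQLEXPRESS;" +
                    @"AttachDbFilename=|DataDirectory|\MyAppData.mdf;" +
                    @"Integrated Security=True;User Instance=True";

                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();   // throws if the instance or the .mdf is missing
                    Console.WriteLine("Connected to database: " + conn.Database);
                }
            }
        }

    The setup project then only has to copy MyAppData.mdf next to the executable; no database registration step is needed on the target computer.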

  • Problem with testing a Windows service

    - by prateeksaluja20
    I want to make a Windows service that will access my database. My database is SQL Server 2005. I am working on a website, and my database lives on our server. I need to access the database every second and update the records. For that purpose I need to make a Windows service that will be installed on our server and perform the task. I have been accessing the database from my local machine and then running the service, but the problem is that I'm not sure how I can test this service. I tried installing it on my local machine. It installed, and then I ran the service, but it did not perform the task; I think the service is not able to connect to the database. There is no problem in the service or its installer. The only issue is how to test my Windows service.
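
    A common way to make a service like this testable is to keep the actual work in an ordinary method and run the service class as a console program when it is launched interactively, so the same logic can be stepped through in a debugger before it is installed. A minimal sketch (type and method names are illustrative, not from the question):

        using System;
        using System.ServiceProcess;
        using System.Threading;

        public class PollingService : ServiceBase
        {
            private Timer _timer;

            protected override void OnStart(string[] args)
            {
                // Fire DoWork once per second, matching the "every second" requirement
                _timer = new Timer(_ => DoWork(), null, 0, 1000);
            }

            protected override void OnStop()
            {
                _timer.Dispose();
            }

            public void DoWork()
            {
                // Open the SqlConnection, update the records, and log any failure
                // so a connectivity problem is visible instead of silent.
            }

            static void Main()
            {
                var service = new PollingService();
                if (Environment.UserInteractive)
                {
                    // Launched from a console: run the work directly for testing
                    service.DoWork();
                    Console.WriteLine("Press Enter to exit.");
                    Console.ReadLine();
                }
                else
                {
                    ServiceBase.Run(service);   // launched by the Service Control Manager
                }
            }
        }

    Running it this way also surfaces connection problems immediately: a service often runs under a machine account that lacks the SQL Server login your interactive account has, which matches the "installed but did nothing" symptom.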
