Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • How to read, edit and write xls files, and then export to SQL Server

    - by tuanvt
    I have an Excel file that contains a list of contacts (about 10k of them) that I need to push into my SQL Server database. I am writing a .NET Windows application in Visual Studio 2008 to read the file, generate a random password for each contact, and then push this information into my SQL Server database. Handling Excel files was easy with Office 2003, but my computer now has Office 2007 on it and things seem to have changed. I am digging into Microsoft.Office.Interop.Excel, but it seems to be a lot more complicated than before.
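
    One alternative to the interop assemblies is to read the sheet through OLE DB, which avoids automating Excel entirely. A minimal sketch, assuming the ACE provider is installed and using made-up file, sheet, and column names:

        using System;
        using System.Data.OleDb;

        class ExcelReader
        {
            static void Main()
            {
                // The ACE provider reads 2007-format workbooks without Excel running.
                const string connStr =
                    "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\contacts.xlsx;" +
                    "Extended Properties=\"Excel 12.0 Xml;HDR=YES\"";

                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Each row can be given its random password here and
                            // inserted into SQL Server with a parameterized SqlCommand.
                            Console.WriteLine(reader["Email"]);
                        }
                    }
                }
            }
        }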

    Read the article

  • Running a ProgressDialog in MonoAndroid

    - by user1791926
    I am trying to show a ProgressDialog while loading items into a SQLite database on the first run of my application. I get an error because the application runs the rest of its code before all the data has been loaded into the database. How do I make sure the work behind the ProgressDialog is completed before the rest of the program runs?

        LocalDatabase DB = new LocalDatabase();
        var dbpd = ProgressDialog.Show(this, "Loading Database", "Please wait Loading Data", true);
        ThreadPool.QueueUserWorkItem((s) =>
        {
            DB.createDB();
            RunOnUiThread(() => databaseLoaded());
        });
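
    One way to sequence this is to treat databaseLoaded as the continuation point: dismiss the dialog there and only then run the code that currently races the load. A minimal sketch, assuming dbpd is promoted to a field and ContinueStartup is a hypothetical stand-in for that code:

        void databaseLoaded()
        {
            // Runs on the UI thread only after createDB() has finished.
            dbpd.Dismiss();
            ContinueStartup(); // hypothetical: whatever previously ran too early
        }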

    Read the article

  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there and with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool that would be particularly helpful in providing a whole new methodology, or that has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set. The requirements that I'm trying to meet are:

    - The database schema and look-up table data are versioned
    - DML scripts for data fixes to larger tables are versioned
    - A server can be promoted from version N to version N + X, where X may not always be 1
    - Code isn't duplicated within the version control system; for example, if I add a column to a table I don't want to have to make sure that the change is in both a create script and an alter script
    - The system needs to support multiple clients who are at various versions of the application (trying to get them all to within 1 or 2 releases, but not there yet)

    Some organizations keep incremental change scripts in their version control, and to get from version N to N + 3 you have to run the scripts for N to N+1, then N+1 to N+2, then N+2 to N+3. Some of these scripts can be repetitive (for example, a column is added but later altered to change the data type). We're trying to avoid that repetitiveness, since some of the client DBs can be very large and these changes might take longer than necessary.

    Some organizations simply keep a full database build script at each version level, then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, and then in a later version that column name is changed.

    Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!
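
    Whatever methodology wins out, the promotion requirement usually rests on a schema-version table that the upgrade tooling can inspect; a minimal sketch, with assumed names:

        CREATE TABLE dbo.SchemaVersion (
            VersionNumber int NOT NULL PRIMARY KEY,
            AppliedOn datetime NOT NULL DEFAULT (GETDATE())
        );

        -- The upgrade tool reads the current version and applies only the
        -- scripts numbered above it, in order, to go from N to N + X.
        SELECT MAX(VersionNumber) FROM dbo.SchemaVersion;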

    Read the article

  • How to deploy updates to .NET website in cluster

    - by royappa
    We are operating a corporate web application on a load-balanced cluster that consists of two identical IIS servers talking to a single MSSQL database. To deploy updates I use this primitive process:

    1) Make a copy of the entire site folder (wwwroot\inetpub\whatever) on each IIS box
    2) Download the updated, compiled files onto each IIS box from our development area
    3) Shut down IIS on both web servers
    4) Copy the new and updated files into the wwwroot folder (overwriting any files with the same names)
    5) Restart IIS on both machines

    When there are database changes involved there are a few other steps. The whole process is fairly quick, but it is ugly and fraught with danger, so it has to be done with full concentration. I would like to just push one button to make it all happen, and I want a one-click rollback in case there is a problem (that's the reason I make the copy in step 1). I am looking for tools to manage and improve this process. If it also helped us maintain a changelog, that would be nice. Thanks.

    Read the article

  • What is the best way to make a game timer in ActionScript 3?

    - by Nuthman
    I have built an online game system that depends on a timer that records how long it took a player to complete a challenge. It needs to be accurate to the millisecond. The time is stored in a SQL database. The problem is that when I use the Timer class, some players end up with scores in the database of less than a second, which is impossible, as most challenges would take at least 11 seconds to complete even in a perfect situation. What I have found is that if a player has too many browser windows open and/or a slow computer, the Flash game slows down, actually affecting the timer speed itself. The timer is 'spinning' on screen, so you can physically see the numbers slowing down. It is frustrating that I cannot just open a second thread or do something to allow Flash to keep accurate time regardless of whatever else is going on in the program. Any ideas?
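
    One approach that sidesteps frame-rate throttling is to keep the on-screen Timer purely cosmetic and compute the recorded score from wall-clock readings instead; a sketch, assuming a challengeStart field set when the challenge begins:

        import flash.utils.getTimer;

        var challengeStart:int;

        function startChallenge():void {
            challengeStart = getTimer(); // milliseconds since the VM started, not tied to frames
        }

        function finishChallenge():void {
            var elapsedMs:int = getTimer() - challengeStart;
            // Store elapsedMs in the database: the visible timer may lag when the
            // player's machine is overloaded, but this difference reflects real time.
        }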

    Read the article

  • Stop Entity Framework from updating edmx model with a column that isn't needed

    - by TMan
    I have rowguid columns in all my tables to help with change tracking. I don't want/need these columns in my edmx or my entities. However, I do still need to make changes to other things sometimes, so every time I use "update model from database" in the edmx, it re-adds the rowguid columns from all my tables and I have to delete each one manually. Is there a way to keep this from happening? Is there a way I can edit the T4 template to ignore that 'rowguid' column? (Database-first Entity Framework.)

    Read the article

  • Export XML with only one MySQL request?

    - by mere-teresa
    I want to export some data from 7 tables in a MySQL database in XML format and then import it into another database, with an update-or-insert rule for the data. I already have a SQL query retrieving all the data, with JOINs on my 7 tables. But when I try to put the data in XML format I reach a limit: my PHP loop can catch each row, but I would like to benefit from the hierarchical structure of XML, and all I have are rows with the same data repeated. Is it better to query once and construct the XML tree in PHP, or to query each time I want access to a lower level?
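
    A common middle ground is a single joined query ordered by the parent key, with PHP opening a new parent node only when that key changes; a rough sketch, with invented column names standing in for two of the seven tables:

        <?php
        $xml = new SimpleXMLElement('<artists/>');
        $lastArtistId = null;
        $artistNode = null;

        foreach ($rows as $row) { // the single JOINed result set, ORDER BY artist_id
            if ($row['artist_id'] !== $lastArtistId) {
                $artistNode = $xml->addChild('artist');
                $artistNode->addAttribute('id', $row['artist_id']);
                $artistNode->addChild('name', $row['artist_name']);
                $lastArtistId = $row['artist_id'];
            }
            // Child rows hang off the current parent instead of repeating its data.
            $artistNode->addChild('album', $row['album_title']);
        }
        echo $xml->asXML();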

    Read the article

  • Allow users to pull temporary data then delete table?

    - by JM4
    I don't know the best way to title this question, but I am trying to accomplish the following goal: when a client logs into their profile, they are presented with a link to download data from an existing database in CSV format. The process works; however, I would like this data to be 'fresh' each time they click the link, so my plan was that once a user has clicked the link and downloaded the CSV file, the database table would erase all of its data and start fresh (be empty) until the next set of data populates it. My existing CSV creation code:

        <?php
        $host  = 'localhost';
        $user  = 'username';
        $pass  = 'password';
        $db    = 'database';
        $table = 'tablename';
        $file  = 'export';

        $link = mysql_connect($host, $user, $pass) or die("Can not connect." . mysql_error());
        mysql_select_db($db) or die("Can not connect.");

        $csv_output = "";
        $result = mysql_query("SHOW COLUMNS FROM " . $table);
        $i = 0;
        if (mysql_num_rows($result) > 0) {
            while ($row = mysql_fetch_assoc($result)) {
                $csv_output .= $row['Field'] . ", ";
                $i++;
            }
        }
        $csv_output .= "\n";

        $values = mysql_query("SELECT * FROM " . $table);
        while ($rowr = mysql_fetch_row($values)) {
            for ($j = 0; $j < $i; $j++) {
                $csv_output .= '"' . $rowr[$j] . '",';
            }
            $csv_output .= "\n";
        }

        $filename = $file . "_" . date("Y-m-d", time());
        header("Content-type: application/vnd.ms-excel");
        header("Content-disposition: attachment; filename=" . $filename . ".csv");
        print $csv_output;
        exit;
        ?>

    Any ideas?
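
    For the 'start fresh' part, one option is to clear the table right after the CSV string has been assembled but before it is sent, so the next download only contains rows added since this click; a sketch:

        // After $csv_output has been fully built:
        mysql_query("TRUNCATE TABLE " . $table) or die(mysql_error());
        // TRUNCATE empties the table instantly; if foreign keys block it,
        // "DELETE FROM " . $table works as a slower substitute.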

    Read the article

  • Messaging in a local network with .NET

    - by Richard
    Hi all, I need to implement some form of communication mechanism in my application to send notifications/messages from one application instance to all the others. This is a normal scenario where someone adds an item or deletes an item and you want to notify the other users that this has happened. The application runs on the client and connects to a database on the local network, so it's not as though all clients access a server instance of the application. From what I know, I could use message queues or some form of database polling where I have a table that stores all the messages (not ideal). The issue is I need to implement this very quickly, so sadly I can't go very complex and need the quickest, easiest solution. Thanks for the help!
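
    Given the no-server constraint, one quick option is a UDP broadcast on the LAN: every instance listens on an agreed port and broadcasts a small message whenever something changes. A minimal sketch (the port number is an arbitrary assumption):

        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        static class ChangeNotifier
        {
            const int Port = 17500; // arbitrary assumed port

            // Broadcast a change notification to every instance on the subnet.
            public static void Notify(string change)
            {
                using (var client = new UdpClient())
                {
                    byte[] msg = Encoding.UTF8.GetBytes(change);
                    client.EnableBroadcast = true;
                    client.Send(msg, msg.Length, new IPEndPoint(IPAddress.Broadcast, Port));
                }
            }

            // Blocks until a notification arrives; run on a background thread.
            public static string WaitForNotification()
            {
                using (var listener = new UdpClient(Port))
                {
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] data = listener.Receive(ref remote);
                    return Encoding.UTF8.GetString(data);
                }
            }
        }

    Broadcasts don't cross routers, which is fine for a single office subnet; MSMQ is the heavier option if delivery guarantees are needed.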

    Read the article

  • Intersect() and Except() are too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData and data from the local DB into a List<DataModel> named localData. I then use LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I then use LINQ to create a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    I then use LINQ to do the opposite of the above to add the new data to the local database:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e = string.Equals(x.Key1, y.Key1)
                    && string.Equals(x.Key2, y.Key2)
                    && (!string.Equals(x.Value1, y.Value1)
                        || !string.Equals(x.Value2, y.Value2)
                        || x.Value3 != y.Value3);
                return e;
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }
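
    The likely culprit is GetHashCode returning 0: Intersect and Except bucket elements by hash code, so a constant hash puts all ~70k records into one bucket and degrades the set operations into quadratic pairwise Equals calls. A sketch of a key-based hash that both comparers can share, since both Equals implementations require the keys to match:

        public int GetHashCode(DataModel obj)
        {
            unchecked
            {
                int hash = 17;
                hash = hash * 31 + (obj.Key1 != null ? obj.Key1.GetHashCode() : 0);
                hash = hash * 31 + (obj.Key2 != null ? obj.Key2.GetHashCode() : 0);
                return hash;
            }
        }

    With a real hash each lookup is close to O(1), and operations like these typically drop from minutes to seconds.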

    Read the article

  • GitHub: keep file but don't track changes

    - by Mike
    I have a CodeIgniter framework that's using GitHub. Within this application I have several files that I will want to have in the repo but not track any changes on. For example: when I deploy a new installation of this framework to a new client, I want the following files to be downloaded (they have default values, 'CHANGEME') and I just have to make the changes specific to this client, i.e. database credentials, email address info, custom CSS styling.

        // the production config files: i want the files, but they need to be updated to specific client needs
        application/config/production/config.php
        application/config/production/database.php
        application/config/production/tank_auth.php

        // index page, defines the environment (production|development)
        /index.php

        // all of the css/js cache (keep the folder but not the contents)
        /assets/cache/*

        // production user-based styling (color, fonts etc), needs to be updated specific to client needs
        /assets/frontend/css/user/frontend-user.css

    Currently, if I run git clone git@github.com:user123/myRepo.git httpdocs and then edit the files above, all is great... until I release a hotfix or patch and run git pull. All of my changes are then overwritten.
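
    One git-native option is to commit the CHANGEME defaults and then mark each per-client copy with skip-worktree, which tells git to pretend the tracked file is unmodified; a sketch:

        # After cloning and editing the client-specific values:
        git update-index --skip-worktree application/config/production/config.php
        git update-index --skip-worktree application/config/production/database.php
        git update-index --skip-worktree application/config/production/tank_auth.php

        # To deliberately take an upstream change to one of them later:
        git update-index --no-skip-worktree application/config/production/config.php

    A pull that modifies one of these files upstream will still stop and warn rather than silently merging; many projects instead commit a config.php.example template and add the real file to .gitignore.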

    Read the article

  • How do you display a phpMyAdmin table in a web browser/web page?

    - by user1390754
    Just a simple question; I've been searching around and for some reason I cannot find an answer to this. After creating a database/table in phpMyAdmin using XAMPP, what command do I need to put into my web page (PHP) to show the table I made? I know the first step involves connecting to the database, and I think I've done that properly. This is the code I found from somewhere about connecting (W3Schools tutorials, I believe):

        $con = mysql_connect("localhost", "xxxxxx", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
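
    A minimal continuation is to select the rows and echo them as an HTML table; a sketch, where the database and table names are placeholders:

        mysql_select_db("my_database", $con);            // assumed database name
        $result = mysql_query("SELECT * FROM my_table"); // assumed table name

        echo "<table border='1'>";
        while ($row = mysql_fetch_assoc($result)) {
            echo "<tr>";
            foreach ($row as $value) {
                echo "<td>" . htmlspecialchars($value) . "</td>";
            }
            echo "</tr>";
        }
        echo "</table>";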

    Read the article

  • Error comparing hash to hashed MySQL password (output values are equal)

    - by Charlie
    I'm trying to compare a hashed password value in a MySQL database with the hashed value of a password entered through a login form. However, when I compare the two values it says they aren't equal. I removed the salt to simplify, and then tested what the outputs were and got the same values.

        $password1 = $_POST['password'];
        $hash = hash('sha256', $password1);

        // ...connect to database, etc...

        $query = "SELECT * FROM users WHERE username = '$username1'";
        $result = mysql_query($query);
        $userData = mysql_fetch_array($result);

        if ($hash != $userData['password']) // incorrect password
        {
            echo $hash . "|" . $userData['password'];
            die();
        }

        // ...other code...

    Sample output:

        7816ee6a140526f02289471d87a7c4f9602d55c38303a0ba62dcd747a1f50361| 7816ee6a140526f02289471d87a7c4f9602d55c38303a0ba62dcd747a1f50361

    Any thoughts?
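
    When identical-looking hashes compare as unequal, the usual suspects are invisible characters (a trailing newline or space in the stored value) or a CHAR column padded to a different length; a quick diagnostic sketch:

        // Expose invisible differences by comparing lengths and trimmed values.
        var_dump(strlen($hash), strlen($userData['password']));
        if (rtrim($userData['password']) === $hash) {
            echo "stored value has trailing whitespace";
        }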

    Read the article

  • Limiting the rate of emails using Python

    - by Ali
    I have a Python script which reads email addresses from a database for a particular date (for example, today) and sends out an email message to them one by one. It reads data from MySQL using the MySQLdb module, stores all results in a dictionary, and sends out emails using:

        rows = cursor.fetchall()  # all email addresses that are supposed to go out on today's date
        for row in rows:
            # send email

    However, my hosting service only lets me send out 500 emails per hour. How can I limit my script to make sure only 500 emails are sent in an hour, then check the database whether more emails are left for today, and send them in the next hour? The script is activated using a cron job.
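
    Since cron already provides the hourly cadence, one sketch is to have each hourly run claim at most 500 unsent rows and flag them once sent; the table and column names here are assumptions:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mail")
        cursor = conn.cursor()

        # Claim up to 500 of today's addresses that have not been sent yet.
        cursor.execute(
            "SELECT id, email FROM recipients "
            "WHERE send_date = CURDATE() AND sent = 0 LIMIT 500"
        )
        for row_id, email in cursor.fetchall():
            send_email(email)  # hypothetical: the existing sending routine
            cursor.execute("UPDATE recipients SET sent = 1 WHERE id = %s", (row_id,))
            conn.commit()

    A crontab entry such as 0 * * * * then drains the day's backlog 500 at a time, and a run that finds no rows simply sends nothing.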

    Read the article

  • How to avoid resending data on refresh in PHP

    - by Priyanka
    Hello. I have a page, index.php, with a link to add_users.php. In add_users.php I accept user information and come back to index.php, where the information arrives through a POST action and gets inserted into the database. When I refresh the page or hit the back button, the resend box appears. I went through many solutions where they asked me to create a third page. I tried doing that as follows: after inserting the values into the database, I redirected the page with header('Location: http://mysite.com/thankyou.php'), and in thankyou.php I again redirected the page to index.php. But I get the warning "Cannot modify header information - headers already sent by...". Please provide me a better solution. Thank you in advance.
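
    The pattern being attempted here is Post/Redirect/Get, and a single redirect is enough; it works as long as header() is called before any output at all (no echo, no HTML, not even whitespace before <?php) and is followed by exit. A sketch, where insert_user stands in for the existing insert code:

        <?php
        // index.php: handle the POST before rendering anything.
        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            insert_user($_POST); // hypothetical: the existing insert routine
            header('Location: http://mysite.com/index.php'); // come back as a plain GET
            exit; // stop so no further output is sent
        }
        // ...normal page rendering below; refreshing now repeats only the GET...

    The "headers already sent" warning means some output was emitted before the header() call, so moving the redirect above all output fixes it.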

    Read the article

  • Authenticating model - best practices

    - by zerkms
    I come to ASP.NET from PHP, which is why I'm asking: the two handle requests in totally different ways. I have an existing table with user credentials: id, login, password (SHA hashed), email, phone, room. I have built a custom membership provider so it can handle my own database authentication schema. And now I'm confused, because User.Identity.Name contains only the user's login, not the complete object (I'm using LINQ to SQL to communicate with the database, and I need its User object to work with). In PHP applications I would just store the user object in some static member of an Auth class (or some other class), but here in ASP.NET MVC I cannot do this, because a static member is shared across all requests and is permanent, rather than living only within the current request (as it would in PHP). So my question is: how and where should I retrieve and store the LINQ to SQL user object to work with it within the current, and only the current, request? (After the request is processed successfully, I expect it to be disposed from memory and created again on the next request.) Or am I going about this totally the wrong way?
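
    A per-request store that matches that PHP habit is HttpContext.Items, a dictionary created and thrown away with each request; a minimal sketch, assuming User is the LINQ to SQL entity and LoadUserByLogin is a hypothetical lookup:

        public static class CurrentUser
        {
            public static User Get()
            {
                var ctx = System.Web.HttpContext.Current;
                var user = ctx.Items["CurrentUser"] as User;
                if (user == null && ctx.User.Identity.IsAuthenticated)
                {
                    // Loaded at most once per request, keyed by the login that
                    // forms authentication put into User.Identity.Name.
                    user = LoadUserByLogin(ctx.User.Identity.Name); // hypothetical LINQ to SQL query
                    ctx.Items["CurrentUser"] = user;
                }
                return user;
            }
        }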

    Read the article

  • Trouble Starting MySQL Community Server on Windows 7

    - by CodeAngel
    I have installed NetBeans 7 on my Windows 7 PC. In addition, MySQL Community Server 5.6.12 is installed with the MSI installer on the same PC, and the MySQL server is integrated with the NetBeans IDE. However, it is not possible to start or stop the MySQL server from the command prompt or from the NetBeans IDE; I am only able to start or stop the server from the Windows 7 Services tool. It is also difficult to run SQL queries from the NetBeans IDE, even though it shows a connection to the MySQL server. I have added the my.ini file to the installed directory of the MySQL server, that is, C:\Program Files\MySQL\MySQL Server 5.6. Below is the my.ini file:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        port = 3306
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Any suggestion is welcomed.
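
    For reference, when MySQL is installed as a Windows service (the MSI default), the command-prompt equivalents of the Services tool are net start and net stop run from an elevated prompt; the service name below is an assumption that can be checked in services.msc:

        net start MySQL56
        net stop MySQL56

        :: Or run the server in the foreground to surface startup errors:
        "C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqld" --console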

    Read the article

  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the undescriptive title; it's the best I could think of for the moment. Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods from within it, even if its calling class is shut down. The singleton class is simple: it starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                // Instantiate the instance class if necessary, and return it
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (/* more rows in the import file */)
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get realtime updates on the number of rows it has done so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it in a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, the problem now is that I cannot get updates on the current count: if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance in the currently executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and at any time be able to find out how many rows it's processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
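
    Since a singleton cannot span processes, the loader process has to publish its progress somewhere another process can read: the database itself, a file, or an IPC channel such as remoting. A minimal file-based sketch (the path is an assumption):

        // Inside the loader process, in ProcessFile():
        currentCount++;
        if (currentCount % 1000 == 0) // avoid touching the disk on every row
            System.IO.File.WriteAllText(@"C:\loader\progress.txt", currentCount.ToString());

        // Any other process can then read the latest count at will:
        int count = int.Parse(System.IO.File.ReadAllText(@"C:\loader\progress.txt"));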

    Read the article

  • Dynamically changing databases in SQL Server 2000

    - by spuppett
    At work we have a number of databases that we need to run the same operations on. I would like to write one SP that would loop over the operations and set the database at the beginning of the loop (example to follow). I've tried sp_executesql('USE ' + @db_id), but that only sets the DB for the scope of that call. I don't really want to loop with hard-coded database names, because we need to do similar things in many different places and it's tough to remember where things need to change if we add another DB. Any thoughts? Example:

        DECLARE zdb_loop CURSOR FAST_FORWARD FOR
            SELECT DISTINCT db_id FROM DBS ORDER BY db_id

        OPEN zdb_loop
        FETCH NEXT FROM zdb_loop INTO @db_id

        WHILE @@FETCH_STATUS = 0
        BEGIN
            USE @db_id
            -- Do stuff against 3 or 4 different DBs
            FETCH NEXT FROM zdb_loop INTO @db_id
        END

        CLOSE zdb_loop
        DEALLOCATE zdb_loop
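
    Because the USE issued through sp_executesql expires with that call, one workaround is to put the USE and the work into the same dynamic batch, or to skip USE entirely with three-part names; a sketch of the loop body (dbo.SomeTable is an assumed target):

        DECLARE @sql nvarchar(4000)

        -- Option 1: USE and the work share one dynamic batch, so the context holds.
        SET @sql = N'USE ' + QUOTENAME(@db_id) + N'; UPDATE dbo.SomeTable SET Flag = 1'
        EXEC (@sql)

        -- Option 2: fully qualify with the database name instead of USE.
        SET @sql = N'UPDATE ' + QUOTENAME(@db_id) + N'.dbo.SomeTable SET Flag = 1'
        EXEC (@sql)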

    Read the article

  • How to avoid storing a repeated field of a Symfony form?

    - by user454760
    Hello everybody, I am working with Symfony 1.4 and Doctrine. I have a model A with an email field. The form for A displays an input in which the user should insert the email correctly. But as everybody knows, sometimes they don't. To fix this I have inserted an extra field in the model (and in the form), called repeat_email, to prevent misspellings. Then, in the validation process, after validating all the fields, I use a global validator to compare the data of the two fields. This works, but I don't want to have the email stored twice in the database (I don't want the repeat_email column). Is there any mechanism to use the field in the validation process but not store it in the database? Thanks,
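
    One approach is to drop repeat_email from the model and declare it purely in the form class, so it takes part in validation but has no column behind it; a sketch of configure() in the Doctrine form, assuming the model field is named email:

        $this->widgetSchema['repeat_email']    = new sfWidgetFormInputText();
        $this->validatorSchema['repeat_email'] = new sfValidatorEmail();

        $this->mergePostValidator(new sfValidatorSchemaCompare(
            'email', sfValidatorSchemaCompare::EQUAL, 'repeat_email',
            array(), array('invalid' => 'The two email addresses must match.')
        ));

    If your sfFormDoctrine version still complains about the unknown field when saving, unsetting $values['repeat_email'] in an overridden doUpdateObject() has the same effect.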

    Read the article

  • VS 2008 "Choose Data Source" wizard

    - by ELM
    Good day, I'm using Visual Studio Professional 2008 SP1. When I create a connection via the designer, the "Choose Data Source" dialog only lists the following data sources:

    - Microsoft SQL Server Compact 3.5
    - Microsoft SQL Server Database File

    When I create a connection in the Server Explorer, the list is complete, with Microsoft SQL Server Compact 3.5, Microsoft SQL Server Database File, Microsoft SQL Server Compact, ODBC, etc. Please help me out; I need to use SQL Server Compact. I have posted the same problem on the following thread with some screenshots: http://social.msdn.microsoft.com/Forums/en/vssetup/thread/906845c3-69e9-431a-ad07-7da2de684d33

    Read the article

  • PHP: How to get the days of the week?

    - by fwaokda
    I want to store items in my database with a DATE value for a certain day, but I don't know how to get the current week's Monday, or Tuesday, etc. Here is my current database setup:

        menuentry
            id           int(10)  PK
            menu_item_id int(10)  FK
            day_of_week  date
            message      varchar(255)

    I have a class set up that holds all the info, and then I was going to do something like this:

        foreach ($menuEntryArray as $item) {
            if ($item->getDate() == [DONT KNOW WHAT TO PUT HERE]) {
                // code to display menu_item information
            }
        }

    So I'm just unsure what to put in place of [DONT KNOW WHAT TO PUT HERE] to check whether the date falls on this week's Monday, or Tuesday, etc. The foreach above runs for each day of the week, so the output will look like this:

        Monday
            Item 1
            Item 2
            Item 3
        Tuesday
            Item 1
        Wednesday
            Item 1
            Item 2
        ...

    Thanks!
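
    One way to fill that placeholder is strtotime's relative formats, which can produce the date of each weekday in the current week (this assumes PHP 5.3+ for the 'this week' syntax):

        // Date (Y-m-d) of this week's Monday, Tuesday, ...
        $monday  = date('Y-m-d', strtotime('monday this week'));
        $tuesday = date('Y-m-d', strtotime('tuesday this week'));

        foreach ($menuEntryArray as $item) {
            if ($item->getDate() == $monday) {
                // code to display menu_item information
            }
        }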

    Read the article

  • Android-USB Connectivity

    - by neoHacker
    Hi all, I'm in the middle of an Android app that needs to check whether the device is connected to another system or a pen drive through USB, and if it is connected I need to send a copy of my database file through the USB port. This is for backing up my database. I have no idea how to prompt for USB connections. I searched the net, but no results! Can anyone please help? I'm stuck here on my project. Thanks in advance.
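
    On devices of this era an app cannot push files out of the USB port itself; the usual workaround is to copy the SQLite file to external storage, which a connected PC can then reach over USB mass storage. A sketch of a method inside an Activity (java.io and android.os.Environment imports assumed, database file name assumed, WRITE_EXTERNAL_STORAGE permission in place):

        private void backupDatabase() throws IOException {
            File src = getDatabasePath("app.db"); // assumed database file name
            File dst = new File(Environment.getExternalStorageDirectory(), "app-backup.db");

            InputStream in = new FileInputStream(src);
            OutputStream out = new FileOutputStream(dst);
            byte[] buf = new byte[4096];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            in.close();
            out.close();
        }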

    Read the article

  • Efficient mirroring of directories using hard links [closed]

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only the MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, discnumber if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
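
    A rough alternative that makes the link decision explicit, instead of paying for rsync's --link-dest scans, is a shell loop over the laptop's MP3s: hard-link when the identical relative path already exists under music/, copy otherwise. A sketch, with the laptop library path assumed and ntfs-3g providing hard-link support:

        cd /media/ntfspocket
        find /home/user/laptop-mp3s -name '*.mp3' | while read -r f; do
            rel="${f#/home/user/laptop-mp3s/}"
            mkdir -p "mp3s/$(dirname "$rel")"
            if [ -e "music/$rel" ]; then
                ln -f "music/$rel" "mp3s/$rel"   # exists in music/: hard link it
            elif [ ! -e "mp3s/$rel" ]; then
                cp "$f" "mp3s/$rel"              # converted-from-FLAC MP3: copy it
            fi
        done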

    Read the article
