Search Results

Search found 31356 results on 1255 pages for 'database backups'.

Page 634/1255 | < Previous Page | 630 631 632 633 634 635 636 637 638 639 640 641  | Next Page >

  • Entity framework Update fails when object is linked to a missing child

    - by McKay
    I'm having trouble updating an object's child when the object has a reference to a nonexistent child record. For example, tables Car and CarColor have a relationship:

        Car.CarColorId -> CarColor.CarColorId

    If I load the car with its color record like this:

        var result = from x in database.Car.Include("CarColor")
                     where x.CarId == 5
                     select x;

    I'll get back the Car object and its Color object. Now suppose that some time ago a CarColor was deleted, but the Car record in question still contains the CarColorId value. So when I run the query, the Color object is null because the CarColor record didn't exist. My problem here is that when I attach another Color object that does exist, I get a store update/insert error when saving:

        Car.Color = newColor;
        Database.SaveChanges();

    It's like the context is trying to delete the nonexistent color. How can I get around this?

    Read the article

  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the nondescript title; it's the best I could think of for the moment. Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods from within it, even if its calling class is shut down. The singleton class is simple: it starts a thread that loads the file into the database, while exposing methods to report on the current status. In a nutshell it's a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                // Instantiate the instance class if necessary, and return it
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (/* more rows in the import file */)
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file, while at any time you can get real-time updates on the number of rows it has processed so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it in a process. This fixes one problem: now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem is that I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance in the currently executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and at any time find out how many rows it has processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
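    One cross-process pattern the asker could borrow, sketched here in Python (the status-file name and function names are illustrative, not from the question): the long-running worker periodically persists its row count somewhere any other process can read, such as a small status file or a database row, so a monitoring process polls that instead of an in-memory singleton.

        import json
        import time

        STATUS_FILE = "bulkload_status.json"  # hypothetical shared location

        def process_file(rows):
            # Worker process: persist progress after each row (or each batch,
            # to keep the overhead down) so any other process can read it.
            count = 0
            for row in rows:
                # ... insert the row into the database ...
                count += 1
                with open(STATUS_FILE, "w") as f:
                    json.dump({"count": count, "updated": time.time()}, f)

        def get_current_count():
            # Monitoring process: works even after the launcher has exited.
            try:
                with open(STATUS_FILE) as f:
                    return json.load(f)["count"]
            except (IOError, ValueError):
                return 0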

    Read the article

  • How to deploy updates to .NET website in cluster

    - by royappa
    We are operating a corporate web application on a load-balanced cluster that consists of two identical IIS servers talking to a single MSSQL database. To deploy updates I am using this primitive process:

        1) Make a copy of the entire site folder (inetpub\wwwroot\whatever) on each IIS box
        2) Download the updated, compiled files onto each IIS box from our development area
        3) Shut down IIS on both web servers
        4) Copy the new and updated files into the wwwroot folder (overwriting any identically named files)
        5) Restart IIS on both machines

    When there are database changes involved there are a few other steps. The whole process is fairly quick, but it is ugly and fraught with danger, so it has to be done with full concentration. I would like to just push one button to make it all happen, and I want a one-click rollback in case there is a problem (that's the reason I make the copy in step #1). I am looking for tools to manage and improve this process. If it also helped us maintain a changelog, that would be nice. Thanks.
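    The five steps above are mechanical enough to script. A rough Python sketch of one node's deploy, under stated assumptions (paths are placeholders, and iisreset is used for simplicity; a real setup would more likely use WebDeploy or app_offline.htm rather than stopping IIS outright):

        import shutil
        import subprocess
        import time

        SITE = r"C:\inetpub\wwwroot\whatever"   # placeholder path
        STAGING = r"C:\deploy\incoming"         # placeholder path

        def deploy():
            backup = SITE + time.strftime("_backup_%Y%m%d%H%M%S")
            shutil.copytree(SITE, backup)                 # step 1: rollback copy
            subprocess.check_call(["iisreset", "/stop"])  # step 3
            try:
                # step 4: overlay new/updated files onto the live folder
                # (dirs_exist_ok requires Python 3.8+)
                shutil.copytree(STAGING, SITE, dirs_exist_ok=True)
            finally:
                subprocess.check_call(["iisreset", "/start"])  # step 5
            return backup  # keep this path around for one-click rollback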

    Read the article

  • Android-USB Connectivity

    - by neoHacker
    Hi all, I'm in the middle of an Android app that needs to check whether the device is connected to another system or a pen drive through USB, and if it is connected, I need to send a copy of my database file through the USB port. This is for backing up my database. I have no idea how to prompt for USB connections. I searched the net, but no results! Can anyone please help? I'm stuck here in my project. Thanks in advance.

    Read the article

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to a standalone desktop application written in Java. The database has grown fairly large, with several tables of hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and SQLite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm SQLite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here), but as my priorities differ somewhat from most applications, given the large data migration, I thought it justified a new question.

    Read the article

  • PHP MySQL Insert Help

    - by user364333
    Hey, I am trying to make a page that inserts some strings into a MySQL table, but it just doesn't seem to be working for me. Here is the code I am using at the moment:

        <?php
        mysql_connect($address, $username, $password);
        @mysql_select_db($database) or die("Unable to select database");
        $query = "insert INTO user (movieid, moviename)('" . $id . "','" . $name . "') or die(mysql_error())";
        mysql_query($query);
        mysql_close();
        ?>

    Where am I going wrong?
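    Two slips are visible in the query string above: the VALUES keyword is missing, and the or die(mysql_error()) call has ended up inside the SQL text instead of after mysql_query(). For reference, the same insert written as a parameterized statement, sketched in Python with MySQLdb (the PHP fix is analogous: a corrected string passed to mysql_query($query) or die(mysql_error())):

        import MySQLdb

        def add_movie(conn, movie_id, movie_name):
            # Parameterized insert: the driver quotes the values, and the
            # error handling lives outside the SQL text.
            cur = conn.cursor()
            cur.execute("INSERT INTO user (movieid, moviename) VALUES (%s, %s)",
                        (movie_id, movie_name))
            conn.commit()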

    Read the article

  • Efficient mirroring of directories using hard links [closed]

    - by zoqaeski
    I'm backing up my music collection onto a number of NTFS-formatted external hard drives; however, as I store my main collection in FLAC and keep my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only the MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, disc number if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
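    Since the hierarchies are identical, the search rsync performs for link candidates can be skipped entirely by walking the laptop tree and probing the corresponding path under "music" directly. A rough Python sketch of that idea (the source path is a placeholder, error handling is omitted, and hard links require both destination trees to sit on the same filesystem):

        import os
        import shutil

        SRC = "/home/user/music-mp3"        # laptop library (placeholder path)
        MUSIC = "/media/ntfspocket/music"   # master tree, any format
        MP3S = "/media/ntfspocket/mp3s"     # mirror being built

        for dirpath, dirnames, filenames in os.walk(SRC):
            rel = os.path.relpath(dirpath, SRC)
            for name in filenames:
                if not name.lower().endswith(".mp3"):
                    continue
                dest = os.path.join(MP3S, rel, name)
                if os.path.exists(dest):
                    continue  # same behaviour as --ignore-existing
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                twin = os.path.join(MUSIC, rel, name)
                if os.path.exists(twin):
                    os.link(twin, dest)   # file exists in "music": hard link
                else:
                    shutil.copy2(os.path.join(dirpath, name), dest)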

    Read the article

  • Accessing TextView from another class

    - by Jenny
    I've got my main startup class loading main.xml, but I'm trying to figure out how to access the TextView from another class, which is loading information from a database. I would like to publish that information to the TextView. So far I've not found any helpful examples on Google. EDIT: This is the class that is doing the database work:

        import android.widget.TextView;
        import android.view.View;

        public class DBWork {
            private View view;
            // ...
            TextView tv = (TextView) view.findViewById(R.id.TextView01);
            tv.setText("TEXT ME");
        }

    Yet every time I do that I get a NullPointerException.

    Read the article

  • GitHub + keep file but don't track changes

    - by Mike
    I have a CodeIgniter framework that's using GitHub. Within this application I have several files that I want in the repo but without tracking any changes on them. Example: I deploy a new installation of this framework to a new client, and I want the following files to be downloaded (they have default values, 'CHANGEME') so that I just have to make changes specific to this client, i.e. database credentials, email address info, custom CSS styling.

        // the production config files: i want the files, but they need to be updated to specific client needs
        application/config/production/config.php
        application/config/production/database.php
        application/config/production/tank_auth.php

        // index page, defines the environment (production|development)
        /index.php

        // all of the css/js cache (keep the folder but not the contents)
        /assets/cache/*

        // production user-based styling (color, fonts etc), needs to be updated to specific client needs
        /assets/frontend/css/user/frontend-user.css

    Currently, if I run git clone [email protected]:user123/myRepo.git httpdocs and then edit the files above, all is great... until I release a hotfix or patch and run git pull. All of my changes are then overwritten.

    Read the article

  • limiting the rate of emails using python

    - by Ali
    I have a Python script which reads email addresses from a database for a particular date (for example, today) and sends out an email message to them one by one. It reads data from MySQL using the MySQLdb module, stores all results in a dictionary, and sends out emails using:

        rows = cursor.fetchall()  # all email addresses that are supposed to go out on today's date
        for row in rows:
            # send email

    However, my hosting service only lets me send out 500 emails per hour. How can I limit my script so that only 500 emails are sent in an hour, and then have it check the database for whether more emails are left for today and send those in the next hour? The script is activated using a cron job.
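    Since cron already fires the script, one simple approach is to let each hourly run claim at most 500 unsent rows and mark them as sent, so the state lives in the database rather than in the script. A sketch along those lines (the table and column names, such as recipients and sent, are assumptions, not from the question):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="pass", db="mailer")
        cursor = conn.cursor()

        # Claim up to 500 addresses for this hourly run.
        cursor.execute(
            "SELECT id, email FROM recipients "
            "WHERE send_date = CURDATE() AND sent = 0 LIMIT 500")
        batch = cursor.fetchall()

        for row_id, email in batch:
            # send_email(email)  # the existing sending code goes here
            cursor.execute("UPDATE recipients SET sent = 1 WHERE id = %s",
                           (row_id,))
        conn.commit()
        # cron entry (hourly): 0 * * * * /usr/bin/python send_batch.py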

    Read the article

  • Exporting MS SQL Schema and Data

    - by stringo0
    I'm used to MySQL and phpMyAdmin. I had to switch over to MSSQL for an ASP.NET project, and I'm having tons of trouble. I'm using the Express edition of SQL Server 2008, with SQL Server Management Studio. The following are two questions I've been struggling with for a while:

        1) How do I export the DB schema for the database -- the table structure, etc.?
        2) How do I export all the data in the database?

    Ideally I'd like to have a .sql file that can be run wherever I need the schema or data duplicated, for example on a co-worker's computer for a shared project, or online when the project is being hosted. Thanks!

    Read the article

  • What pattern is this? (PHP)

    - by user151841
    I have several classes that are basically interfaces to database rows. Since the class assumes that a row already exists (__construct expects a field value), there is a public static function that allows creation of the row and returns an instance of the class. Here's an example (without the actual database inserts):

        class selfStarter {
            public $type;

            public function __construct( $type ) {
                $this->type = $type;
            }

            public static function create( $type ) {
                if ( ! empty($type) ) {
                    $starter = & new selfStarter($type);
                    return $starter;
                }
            }
        }

        $obj1 = selfStarter::create( "apple" );
        $obj2 = & new selfStarter( "banana" );

    What is this pattern called?

    Read the article

  • Unique identification string in php

    - by NardCake
    Currently my friend and I are developing a website. For what we call 'projects' we just have a basic auto-increment id in the database, used to navigate to projects such as oururl.com/viewproject?id=1. But we started thinking: if we have a lot of posted projects, that's going to be a LONG URL. So we need to somehow randomly generate an alphanumeric string about 6 characters long. We want the chance of the string being duplicated to be extremely low, and of course we will query the database before assigning an identifier. Thanks for any help, it means a lot!
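    For scale: with 62 alphanumeric characters, a 6-character string gives 62^6, roughly 56.8 billion combinations, so collisions stay rare until the table holds millions of rows. A sketch of the generate-and-check loop in Python (the question's stack is PHP, where the same idea applies; the uniqueness check against the database is stubbed out as a callable):

        import secrets
        import string

        ALPHABET = string.ascii_letters + string.digits  # 62 characters

        def new_project_id(exists_in_db, length=6):
            # exists_in_db: callable that queries the projects table
            while True:
                candidate = "".join(secrets.choice(ALPHABET)
                                    for _ in range(length))
                if not exists_in_db(candidate):
                    return candidate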

    Read the article

  • How to use GPS data like the double value returned by getLatitude()?

    - by Dan
    I have been searching quite a bit for an answer, but maybe I'm just not using the correct terminology. I am creating an app that will access a database to return a list of other users who are within a certain distance of the user's location. I've never worked with this type of data, and I don't really know what the values mean. I'd like to do all the calculations on the back end with either MySQL or PHP. Currently, I am storing the latitude and longitude as doubles within the database. I can access them and store them, but I have no idea how I might be able to sort them based on distance. Perhaps I should be using a different type, or some technique that is common in this area. TIA.
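    The doubles returned by getLatitude()/getLongitude() are degrees, and the standard way to turn two such pairs into a distance is the haversine formula, which both MySQL and PHP can express. A Python sketch of the math for reference (a mean Earth radius of 6371 km is assumed):

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            # Inputs are degrees, as returned by getLatitude()/getLongitude().
            r = 6371.0  # mean Earth radius in kilometres
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlam = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2) ** 2
                 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
            return 2 * r * math.asin(math.sqrt(a))

        # e.g. sort candidate users by haversine_km(my_lat, my_lon, u_lat, u_lon)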

    Read the article

  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData, and also imports data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB, like this:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I am then using LINQ to create a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database, like this:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    I am then using LINQ to do the opposite of the above, to add the new data to the local database, like this:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection imports about 70k records, and each of the three LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e = string.Equals(x.Key1, y.Key1)
                    && string.Equals(x.Key2, y.Key2)
                    && (!string.Equals(x.Value1, y.Value1)
                        || !string.Equals(x.Value2, y.Value2)
                        || x.Value3 != y.Value3);
                return e;
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj) { return 0; }
        }
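    Worth noting: the GetHashCode implementations above return 0 for every object, which puts all items in one hash bucket and degrades the set operations to pairwise Equals calls, roughly 70k x 70k comparisons. The underlying fix, hashing on the key fields so lookups are O(1), is sketched here in Python with a dict keyed on the two key fields; the C# equivalent is returning a real hash from GetHashCode:

        # rows are dicts with keys: key1, key2, value1, value2, value3
        def diff(local_rows, remote_rows):
            local = {(r["key1"], r["key2"]): r for r in local_rows}
            remote = {(r["key1"], r["key2"]): r for r in remote_rows}

            new_data = [remote[k] for k in remote if k not in local]     # insert
            old_data = [local[k] for k in local if k not in remote]      # delete
            outdated = [remote[k] for k in local.keys() & remote.keys()  # update
                        if any(local[k][f] != remote[k][f]
                               for f in ("value1", "value2", "value3"))]
            return new_data, old_data, outdated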

    Read the article

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) .sql files smaller than 2MB, but the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1), since I cannot cat(1) the files back together on the remote server. Or is there another solution I have missed?

    Edit: The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of SQL code correspond roughly to 40MB.
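    Instead of estimating lines per megabyte, a small script can split at line boundaries once a byte budget is hit; with --extended-insert=FALSE every INSERT sits on one line, so each chunk stays independently importable. A sketch (the 40 MB default matches the bzip2 factor-of-20 estimate above; splitting mid-statement must be avoided, which single-line INSERTs guarantee):

        def split_dump(path, max_bytes=40 * 1024 * 1024):
            part, size, out = 0, 0, None
            with open(path, "rb") as src:
                for line in src:
                    # start a new chunk before this line would overflow the budget
                    if out is None or size + len(line) > max_bytes:
                        if out is not None:
                            out.close()
                        part += 1
                        out = open("%s.part%03d" % (path, part), "wb")
                        size = 0
                    out.write(line)
                    size += len(line)
            if out is not None:
                out.close()

        split_dump("dump.sql")  # then bzip2 each dump.sql.partNNN before upload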

    Read the article

  • form data posted using linq-to-sql not showing until I refresh the page

    - by PeteShack
    I have an ASP.NET MVC app with a form. When you submit the form, it adds records to the SQL database with LINQ to SQL. After adding the records, the controller displays the form again and should show those new values on the form. But when it displays the form, the values are blank until you refresh the page. While tracing through the code, I can see the records being added to the database when they are submitted, but the view doesn't display them unless I refresh. The view is not the problem; it just displays the view model, which is missing the new records immediately after the post. I know this is kind of vague, but I wasn't sure which parts of the code to include here. Could this have something to do with the data context life cycle? Basically, there is a data context created when the form is posted, then a different data context is created in the method that displays the form. Any suggestions on what might be causing this?

    Read the article

  • PHP: How to get the days of the week?

    - by fwaokda
    I want to store items in my database with a DATE value for a certain day, but I don't know how to get the current week's Monday, or Tuesday, etc. Here is my current database setup:

        menuentry
            id            int(10)      PK
            menu_item_id  int(10)      FK
            day_of_week   date
            message       varchar(255)

    I have a class set up that holds all the info, and I was going to do something like this:

        foreach ( $menuEntryArray as $item ) {
            if ( $item->getDate() == [DONT KNOW WHAT TO PUT HERE] ) {
                // code to display menu_item information
            }
        }

    I'm just unsure what to put in "[DONT KNOW WHAT TO PUT HERE]" to check whether the date is this week's Monday, or Tuesday, etc. The foreach above runs for each day of the week, so the output looks like this:

        Monday
            Item 1
            Item 2
            Item 3
        Tuesday
            Item 1
        Wednesday
            Item 1
            Item 2
        ...

    Thanks!
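    The missing piece is anchoring on the current week's Monday and offsetting from it. The arithmetic is shown here as a Python sketch (in PHP, strtotime('monday this week') or date('N') should give the same anchor; worth verifying against the PHP docs):

        import datetime

        today = datetime.date.today()
        # weekday() counts Monday as 0, so this steps back to Monday.
        monday = today - datetime.timedelta(days=today.weekday())

        # DATE values for Monday..Sunday of the current week, keyed by name
        week = {}
        for offset in range(7):
            day = monday + datetime.timedelta(days=offset)
            week[day.strftime("%A")] = day

        # e.g. compare a stored day_of_week value against week["Monday"]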

    Read the article

  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL, and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there, and with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology, or which has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set.

    The requirements that I'm trying to meet are:

        - The database schema and look-up table data are versioned
        - DML scripts for data fixes to larger tables are versioned
        - A server can be promoted from version N to version N + X, where X may not always be 1
        - Code isn't duplicated within the version control system; for example, if I add a column to a table, I don't want to have to make sure that the change is in both a create script and an alter script
        - The system needs to support multiple clients who are at various versions of the application (we're trying to get them all to within 1 or 2 releases, but we're not there yet)

    Some organizations keep incremental change scripts in their version control, and to get from version N to N + 3 you would have to run scripts for N to N+1, then N+1 to N+2, then N+2 to N+3. Some of these scripts can be repetitive (for example, a column is added but later altered to change the data type). We're trying to avoid that repetitiveness, since some of the client DBs can be very large, so these changes might take longer than necessary.

    Some organizations will simply keep a full database build script at each version level, then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, and then in a later version that column name is changed.

    Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated though. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!
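    One common shape for the N to N+X requirement is a version table plus ordered migration scripts applied in a single pass. A minimal Python sketch of the runner side only (the script naming scheme and the schema_version table are assumptions for illustration; the dedup and DML-ordering concerns above still need policy decisions, not tooling):

        import glob
        import os
        import re

        def pending_scripts(current_version, script_dir="migrations"):
            # Scripts named like 0042_add_widget_column.sql, one per version.
            found = []
            for path in glob.glob(os.path.join(script_dir, "*.sql")):
                m = re.match(r"(\d+)_", os.path.basename(path))
                if m and int(m.group(1)) > current_version:
                    found.append((int(m.group(1)), path))
            return sorted(found)  # run in order; record each in schema_version

        # for version, path in pending_scripts(db_version): execute(path)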

    Read the article

  • MySQL: How to separate a name field in one table into firstname / lastname in two separate tables?

    - by Eileen
    I have a Drupal database where the node table is full of profiles. The field node.title is "Firstname Lastname". I want to separate the names so that node.title = "Firstname", and over in another table entirely, content_type_profile.field_lastname_value = "Lastname". The entries in the two tables can be joined on the field nid. I'd love to run a SQL command to do this, and I am fine with taking the naive approach that the first word is the first name and everything else in the field is the last name; it will mean a few manual corrections down the line, but that's much better than doing it all by hand in the first place. (I read this question, and surely the answer lies in there, but I am not that SQL-savvy and am not sure how to make it work for my database.) Thanks!
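    The naive split can be done with MySQL's SUBSTRING_INDEX, copying the remainder into the profile table before truncating the title. A sketch, driven from Python for concreteness (connection details are placeholders; run it against a backup first, and note the last-name update must come before the title update):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="drupal",
                               passwd="secret", db="drupal")
        cur = conn.cursor()

        # Everything after the first word becomes the last name.
        cur.execute("""
            UPDATE content_type_profile p
            JOIN node n ON n.nid = p.nid
            SET p.field_lastname_value =
                TRIM(SUBSTRING(n.title,
                               LENGTH(SUBSTRING_INDEX(n.title, ' ', 1)) + 1))
        """)

        # Then keep only the first word as the title.
        cur.execute("UPDATE node SET title = SUBSTRING_INDEX(title, ' ', 1)")
        conn.commit()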

    Read the article

  • loop for Cursor1.moveToPosition() in android

    - by Edward Sullen
    I want to get the data in the first column of all rows from my database and convert it to a String[]:

        List<String> item1 = new ArrayList<String>();
        // c is a Cursor which points to a database
        for (int i = 0; i <= nombre_row; i++) {
            c.moveToPosition(i);
            item1.add(c.getString(0));
        }
        String[] strarray = new String[item1.size()];
        item1.toArray(strarray);

    I've stepped through the code and found that the problem is in the for loop. Please help; thanks in advance for all answers.

    Read the article

  • How do you display a phpMyAdmin table in a web browser/web page?

    - by user1390754
    Just a simple question; I've been searching around and for some reason I cannot find an answer to this. After creating a database/table in phpMyAdmin using XAMPP, what command do I need to put into my web page (PHP) to show the table I made? I know the first step involves connecting to the database, and I think I've done that properly. This is the code I found from somewhere about connecting (a W3 tutorial, I believe):

        $con = mysql_connect("localhost", "xxxxxx", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }

    Read the article

  • How to authenticate a user using SQL

    - by Tom
    I have a DLL which is constantly connected to a SQL Server instance. The server itself is connected with "admin" permissions, but "normal" users should be able to access the server with their own username and password. The server would still be connected as "admin"; I just want to check whether the user can access the database. Is there a way to authenticate the user using a SQL query? It would of course be possible to add an encrypted "password" column to a database table for users, but I would prefer not having to do that.
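    Short of a password table, one usual trick is to let SQL Server itself do the check by briefly opening a connection with the user's credentials and seeing whether it succeeds. Sketched here in Python with pyodbc (the driver name and connection details are placeholders; the same attempt-to-connect idea carries over to a .NET SqlConnection):

        import pyodbc

        def can_authenticate(server, database, user, password):
            conn_str = ("DRIVER={ODBC Driver 17 for SQL Server};"
                        "SERVER=%s;DATABASE=%s;UID=%s;PWD=%s"
                        % (server, database, user, password))
            try:
                # If SQL Server accepts the login, the credentials are valid.
                pyodbc.connect(conn_str, timeout=5).close()
                return True
            except pyodbc.Error:
                return False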

    Read the article

  • How to avoid storing a repeated field of a Symfony form?

    - by user454760
    Hello everybody, I am working with Symfony 1.4 and Doctrine. I have a model A with an email field. The form of A displays an input in which the user should insert the email correctly, but as everybody knows, sometimes they don't. To fix this I have inserted an extra field in the model (and in the form), called repeat_email, to prevent misspellings. Then, in the validation process, after validating all the fields, I use a global validator to compare the data of the two fields. This works, but I don't want to have the email stored twice in the database (I don't want the repeat_email). Is there any mechanism to use it in the validation process but not store it in the database? Thanks,

    Read the article

  • How do you manually insert options into boost.Program_options?

    - by windfinder
    I have an application that uses Boost.Program_options to store and manage its configuration options. We are currently moving away from configuration files and using database-loaded configuration instead. I've written an API that reads configuration options from the database by hostname and instance name. (Cool!) However, as far as I can see, there is no way to manually insert these options into the Boost program_options map. Has anyone done this before? Any ideas? The docs from Boost seem to indicate that the only way to get values into that map is the store function, which either reads from the command line or a config file (not what I want). Basically, I'm looking for a way to manually insert the DB-read values into the map.

    Read the article
