Search Results

Search found 10719 results on 429 pages for 'temp tables'.

Page 191/429 | < Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >

  • Cannot send HTML Emails

    - by Zen Savona
    Well, I'm trying to send an HTML email using Gmail SMTP from CodeIgniter (CI), and it seems to reject my emails whenever they contain any tables. No error is given, they just do not appear in my inbox. If I send an email with light HTML and no tables, it goes through. Anyone have any insight? $config = Array( 'protocol' => 'smtp', 'smtp_host' => 'ssl://smtp.googlemail.com', 'smtp_port' => 465, 'smtp_user' => '[email protected]', 'smtp_pass' => '--------', 'mailtype' => 'html', 'charset' => 'iso-8859-1' ); $this->load->library('email', $config); $this->email->set_newline("\r\n"); $this->email->from('myEmail', 'myName'); $this->email->to($this->input->post('email')); $this->email->subject('mySubject'); $msg = $this->load->view('partials/email', '', true); $this->email->message($msg); $this->email->send();

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date, each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant, each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options: use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services; denormalize the database and use Redis on each webserver, with BookSleeve as the Redis client; use a combination of SQL Server and Redis; use SQL Server 2012 together with Hadoop; or use Hadoop without SQL Server. What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table that all queries can be performed on, with all other information reachable from the other tables by a simple key lookup.
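
    Since the data is read-only and rebuilt nightly, the accelerator-table idea can be sketched in plain SQL. Every name below (table, columns, parameters) is hypothetical and only illustrates the shape, not a recommended implementation:

      -- Wide, denormalized lookup table, rebuilt during the nightly FTP import.
      CREATE TABLE dbo.Accelerator (
          TenantId     INT            NOT NULL,   -- which customer/database the row belongs to
          LookupKey    VARCHAR(50)    NOT NULL,   -- the key the web tier searches by
          CustomerName NVARCHAR(200)  NULL,       -- pre-joined attributes copied from detail tables
          ItemCount    INT            NULL,
          TotalAmount  DECIMAL(18,2)  NULL,
          CONSTRAINT PK_Accelerator PRIMARY KEY CLUSTERED (TenantId, LookupKey)
      );

      -- Every hot query then becomes a single clustered-index seek:
      SELECT CustomerName, ItemCount, TotalAmount
      FROM   dbo.Accelerator
      WHERE  TenantId = @TenantId AND LookupKey = @Key;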

    Read the article

  • can someone help me fix my code?

    - by user267490
    Hi, I have this code I been working on but I'm having a hard time for it to work. I did one but it only works in php 5.3 and I realized my host only supports php 5.0! do I was trying to see if I could get it to work on my sever correctly, I'm just lost and tired lol <?php //Temporarily turn on error reporting @ini_set('display_errors', 1); error_reporting(E_ALL); // Set default timezone (New PHP versions complain without this!) date_default_timezone_set("GMT"); // Common set_time_limit(0); require_once('dbc.php'); require_once('sessions.php'); page_protect(); // Image settings define('IMG_FIELD_NAME', 'cons_image'); // Max upload size in bytes (for form) define ('MAX_SIZE_IN_BYTES', '512000'); // Width and height for the thumbnail define ('THUMB_WIDTH', '150'); define ('THUMB_HEIGHT', '150'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> <title>whatever</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <style type="text\css"> .validationerrorText { color:red; font-size:85%; font-weight:bold; } </style> </head> <body> <h1>Change image</h1> <?php $errors = array(); // Process form if (isset($_POST['submit'])) { // Get filename $filename = stripslashes($_FILES['cons_image']['name']); // Validation of image file upload $allowedFileTypes = array('image/gif', 'image/jpg', 'image/jpeg', 'image/png'); if ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_NO_FILE) { $errors['img_empty'] = true; } elseif (($_FILES[IMG_FIELD_NAME]['type'] != '') && (!in_array($_FILES[IMG_FIELD_NAME]['type'], $allowedFileTypes))) { $errors['img_type'] = true; } elseif (($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_INI_SIZE) || ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_FORM_SIZE) || ($_FILES[IMG_FIELD_NAME]['size'] > MAX_SIZE_IN_BYTES)) { $errors['img_size'] = true; } elseif ($_FILES[IMG_FIELD_NAME]['error'] != UPLOAD_ERR_OK) { $errors['img_error'] = true; } elseif (strlen($_FILES[IMG_FIELD_NAME]['name']) > 200) { $errors['img_nametoolong'] = true; } elseif ( (file_exists("\\uploads\\{$username}\\images\\banner\\{$filename}")) || (file_exists("\\uploads\\{$username}\\images\\banner\\thumbs\\{$filename}")) ) { $errors['img_fileexists'] = true; } if (! empty($errors)) { unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file } // Create thumbnail if (empty($errors)) { // Make directory if it doesn't exist if (!is_dir("\\uploads\\{$username}\\images\\banner\\thumbs\\")) { // Take directory and break it down into folders $dir = "uploads\\{$username}\\images\\banner\\thumbs"; $folders = explode("\\", $dir); // Create directory, adding folders as necessary as we go (ignore mkdir() errors, we'll check existance of full dir in a sec) $dirTmp = ''; foreach ($folders as $fldr) { if ($dirTmp != '') { $dirTmp .= "\\"; } $dirTmp .= $fldr; mkdir("\\".$dirTmp); //ignoring errors deliberately! } // Check again whether it exists if (!is_dir("\\uploads\\$username\\images\\banner\\thumbs\\")) { $errors['move_source'] = true; unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file } } if (empty($errors)) { // Move uploaded file to final destination if (! move_uploaded_file($_FILES[IMG_FIELD_NAME]['tmp_name'], "/uploads/$username/images/banner/$filename")) { $errors['move_source'] = true; unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file } else { // Create thumbnail in new dir if (! 
make_thumb("/uploads/$username/images/banner/$filename", "/uploads/$username/images/banner/thumbs/$filename")) { $errors['thumb'] = true; unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file } } } } // Record in database if (empty($errors)) { // Find existing record and delete existing images $sql = "SELECT `bannerORIGINAL`, `bannerTHUMB` FROM `agent_settings` WHERE (`agent_id`={$user_id}) LIMIT 1"; $result = mysql_query($sql); if (!$result) { unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file unlink("/uploads/$username/images/banner/thumbs/$filename"); //cleanup: delete thumbnail file die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>"); } $numResults = mysql_num_rows($result); if ($numResults == 1) { $row = mysql_fetch_assoc($result); // Delete old files unlink("/uploads/$username/images/banner/" . $row['bannerORIGINAL']); //delete OLD source file unlink("/uploads/$username/images/banner/thumbs/" . $row['bannerTHUMB']); //delete OLD thumbnail file } // Update/create record with new images if ($numResults == 1) { $sql = "INSERT INTO `agent_settings` (`agent_id`, `bannerORIGINAL`, `bannerTHUMB`) VALUES ({$user_id}, '/uploads/$username/images/banner/$filename', '/uploads/$username/images/banner/thumbs/$filename')"; } else { $sql = "UPDATE `agent_settings` SET `bannerORIGINAL`='/uploads/$username/images/banner/$filename', `bannerTHUMB`='/uploads/$username/images/banner/thumbs/$filename' WHERE (`agent_id`={$user_id})"; } $result = mysql_query($sql); if (!$result) { unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file unlink("/uploads/$username/images/banner/thumbs/$filename"); //cleanup: delete thumbnail file die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>"); } } // Print success message and how the thumbnail image created if (empty($errors)) { echo "<p>Thumbnail created Successfully!</p>\n"; echo "<img src=\"/uploads/$username/images/banner/thumbs/$filename\" alt=\"New image thumbnail\" />\n"; echo "<br />\n"; } } if (isset($errors['move_source'])) { echo "\t\t<div>Error: Failure occurred moving uploaded source image!</div>\n"; } if (isset($errors['thumb'])) { echo "\t\t<div>Error: Failure occurred creating thumbnail!</div>\n"; } ?> <form action="" enctype="multipart/form-data" method="post"> <input type="hidden" name="MAX_FILE_SIZE" value="<?php echo MAX_SIZE_IN_BYTES; ?>" /> <label for="<?php echo IMG_FIELD_NAME; ?>">Image:</label> <input type="file" name="<?php echo IMG_FIELD_NAME; ?>" id="<?php echo IMG_FIELD_NAME; ?>" /> <?php if (isset($errors['img_empty'])) { echo "\t\t<div class=\"validationerrorText\">Required!</div>\n"; } if (isset($errors['img_type'])) { echo "\t\t<div class=\"validationerrorText\">File type not allowed! GIF/JPEG/PNG only!</div>\n"; } if (isset($errors['img_size'])) { echo "\t\t<div class=\"validationerrorText\">File size too large! Maximum size should be " . MAX_SIZE_IN_BYTES . "bytes!</div>\n"; } if (isset($errors['img_error'])) { echo "\t\t<div class=\"validationerrorText\">File upload error occured! 
Error code: {$_FILES[IMG_FIELD_NAME]['error']}</div>\n"; } if (isset($errors['img_nametoolong'])) { echo "\t\t<div class=\"validationerrorText\">Filename too long! 200 Chars max!</div>\n"; } if (isset($errors['img_fileexists'])) { echo "\t\t<div class=\"validationerrorText\">An image file already exists with that name!</div>\n"; } ?> <br /><input type="submit" name="submit" id="image1" value="Upload image" /> </form> </body> </html> <?php ################################# # # F U N C T I O N S # ################################# /* * Function: make_thumb * * Creates the thumbnail image from the uploaded image * the resize will be done considering the width and * height defined, but without deforming the image * * @param $sourceFile Path anf filename of source image * @param $destFile Path and filename to save thumbnail as * @param $new_w the new width to use * @param $new_h the new height to use */ function make_thumb($sourceFile, $destFile, $new_w=false, $new_h=false) { if ($new_w === false) { $new_w = THUMB_WIDTH; } if ($new_h === false) { $new_h = THUMB_HEIGHT; } // Get image extension $ext = strtolower(getExtension($sourceFile)); // Copy source switch($ext) { case 'jpg': case 'jpeg': $src_img = imagecreatefromjpeg($sourceFile); break; case 'png': $src_img = imagecreatefrompng($sourceFile); break; case 'gif': $src_img = imagecreatefromgif($sourceFile); break; default: return false; } if (!$src_img) { return false; } // Get dimmensions of the source image $old_x = imageSX($src_img); $old_y = imageSY($src_img); // Calculate the new dimmensions for the thumbnail image // 1. calculate the ratio by dividing the old dimmensions with the new ones // 2. if the ratio for the width is higher, the width will remain the one define in WIDTH variable // and the height will be calculated so the image ratio will not change // 3. otherwise we will use the height ratio for the image // as a result, only one of the dimmensions will be from the fixed ones $ratio1 = $old_x / $new_w; $ratio2 = $old_y / $new_h; if ($ratio1 > $ratio2) { $thumb_w = $new_w; $thumb_h = $old_y / $ratio1; } else { $thumb_h = $new_h; $thumb_w = $old_x / $ratio2; } // Create a new image with the new dimmensions $dst_img = ImageCreateTrueColor($thumb_w, $thumb_h); // Resize the big image to the new created one imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_w, $thumb_h, $old_x, $old_y); // Output the created image to the file. Now we will have the thumbnail into the file named by $filename switch($ext) { case 'jpg': case 'jpeg': $result = imagepng($dst_img, $destFile); break; case 'png': $result = imagegif($dst_img, $destFile); break; case 'gif': $result = imagejpeg($dst_img, $destFile); break; default: //should never occur! } if (!$result) { return false; } // Destroy source and destination images imagedestroy($dst_img); imagedestroy($src_img); return true; } /* * Function: getExtension * * Returns the file extension from a given filename/path * * @param $str the filename to get the extension from */ function getExtension($str) { return pathinfo($str, PATHINFO_EXTENSION); } ?>

    Read the article

  • SQL Server 2008 trigger not working correctly with multiple inserts

    - by Rob
    I've got the following trigger: CREATE TRIGGER trFLightAndDestination ON checkin_flight AFTER INSERT,UPDATE AS BEGIN IF NOT EXISTS ( SELECT 1 FROM Flight v INNER JOIN Inserted AS i ON i.flightnumber = v.flightnumber INNER JOIN checkin_destination AS ib ON ib.airport = v.airport INNER JOIN checkin_company AS im ON im.company = v.company WHERE i.desk = ib.desk AND i.desk = im.desk ) BEGIN RAISERROR('This combination of of flight and check-in desk is not possible',16,1) ROLLBACK TRAN END END What I want the trigger to do is check the tables Flight, checkin_destination and checkin_company when a new record for checkin_flight is added. Every record of checkin_flight contains a flightnumber and a desk number where passengers need to check in for this destination. The tables checkin_destination and checkin_company contain information about companies and destinations restricted to certain check-in desks. When adding a record to checkin_flight I need information from the Flight table to get the destination and flight company for the inserted flightnumber. This information needs to be checked against the available check-in combinations for flights, destinations and companies. I'm using the trigger as stated above, but when I try to insert a wrong combination the trigger allows it. What am I missing here?
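
    One thing worth checking: with a multi-row INSERT, the IF NOT EXISTS form above only raises an error when none of the inserted rows has a valid combination, so a single valid row can mask invalid ones. A hedged sketch of a set-based rewrite of the trigger body, reusing the column names from the trigger above:

      -- Reject the statement if ANY inserted row lacks a matching combination,
      -- not only when no inserted row matches.
      IF EXISTS (
          SELECT 1
          FROM   inserted AS i
          WHERE  NOT EXISTS (
                     SELECT 1
                     FROM   Flight AS v
                     INNER JOIN checkin_destination AS ib ON ib.airport = v.airport
                     INNER JOIN checkin_company     AS im ON im.company = v.company
                     WHERE  v.flightnumber = i.flightnumber
                       AND  ib.desk = i.desk
                       AND  im.desk = i.desk
                 )
      )
      BEGIN
          RAISERROR('This combination of flight and check-in desk is not possible', 16, 1)
          ROLLBACK TRAN
      END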

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got for the question http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. As everyone there suggested, storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there is a better way to design it. Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of required custom fields is known, but which kind of attribute is mapped to each column is up to the user. Here is a hypothetical scenario: say you have a laptop table which stores 50 attribute values for every laptop record. Each laptop's attributes are created by the admin who creates the laptop. One user created a laptop product, let's say lap1, with attributes String, String, numeric, numeric, String. A second user created laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:
    
    Laptop table:
    Id | Name | field1 | field2 | field3 | field4 | field5
    1  | lap1 | lappy  | lappy  | 12     | 13     | lappy
    2  | lap2 | lappy2 | 13     | lappy2 | lapp2  | 12
    
    This example roughly simulates our requirement and our design. Now, if somebody looks up records for lap2 doing a comparison on field2, we need to apply TO_NUMBER: select * from laptop where name='lap2' and TO_NUMBER(field2) < 15. TO_NUMBER fails in some cases when the query plan decides to apply TO_NUMBER before the other filter. Questions: Is this a valid design? What are the alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases - is that a good idea? How do popular ORM tools handle custom fields or flex fields? I hope I was able to make sense of the question; sorry for such a long text.
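
    One common alternative is to keep one typed column per data type in a child table, so numeric comparisons hit a real NUMBER column and TO_NUMBER disappears. A rough Oracle sketch; all object names here are made up for illustration, not a drop-in design:

      -- One row per (laptop, attribute); the check constraint guarantees the
      -- value sits in the column matching the declared type.
      CREATE TABLE laptop_attribute (
          attr_id    NUMBER        PRIMARY KEY,
          laptop_id  NUMBER        NOT NULL,
          attr_name  VARCHAR2(50)  NOT NULL,
          attr_type  VARCHAR2(10)  NOT NULL,     -- 'STRING' or 'NUMBER'
          value_str  VARCHAR2(200),
          value_num  NUMBER,
          CONSTRAINT chk_one_type CHECK (
              (attr_type = 'STRING' AND value_num IS NULL) OR
              (attr_type = 'NUMBER' AND value_str IS NULL)
          )
      );

      -- The field2 comparison from above becomes a plain numeric predicate:
      SELECT l.*
      FROM   laptop l
      JOIN   laptop_attribute a ON a.laptop_id = l.id
      WHERE  l.name = 'lap2'
        AND  a.attr_name = 'field2'
        AND  a.value_num < 15;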

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET / Visual Studio 2008 / SQL Server 2000 (2005 in the future) using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing DB schema since they may not be a one-to-one match on column names. (There is a lookup table that provides the table's schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I explored the FileUpload .NET control, as well as some 3rd-party upload controls such as SlickUpload to accomplish the upload, but the files uploaded should be < 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user so they can match it with our DB schema. I saw CsvReader and others, but for Excel it's more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements like record matching and a preview of the import records, but I wish to understand how to do this part first. Update: I ended up using CsvReader from LumenWorks.Framework for uploading the CSV files.
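
    On the database side, one common pattern once the file is parsed is to bulk-load it into a staging table and then drive the column mapping with plain SQL; every name below is hypothetical and only sketches the idea:

      -- Staging table mirrors the uploaded file: one text column per CSV column.
      CREATE TABLE dbo.ImportStaging (
          Col1 VARCHAR(255), Col2 VARCHAR(255), Col3 VARCHAR(255)
      );

      -- After the upload fills the staging table, the user's column mapping
      -- (from the lookup table) decides which staging column feeds which target column.
      INSERT INTO dbo.Customer (Name, Email, Phone)
      SELECT Col2, Col1, Col3
      FROM   dbo.ImportStaging;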

    Read the article

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: There is a SQL Server DB with several tables that have triggers set on them (that collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I backup the DB and then restore it to another machine (Virtual Machine in this case, but it does not matter), the triggers don't work anymore. In fact, no impersonation for the user works anymore. Even a simple statement such as this exec ('select 3') as user='HistoryUser' produces an error: Cannot execute as the database principal because the principal "HistoryUser" does not exist, this type of principal cannot be impersonated, or you do not have permission. I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution) this problem remains. If I create another user without login, I can use it for impersonation just fine. That is, this works just fine: create user TestUser without login go exec ('select 3') as user='TestUser' I do not want to recreate all those triggers, so is there any way how I can make the existing HistoryUser work? Bump: Sorry, but this is kinda urgent...
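
    The question notes the database owner was already changed, but it may still be worth verifying that the owner's SID actually maps to a login on the new instance, since an orphaned owner SID after a restore produces exactly this impersonation error. A hedged diagnostic sketch (the database name is a placeholder):

      -- Run inside the restored database: does the owner SID resolve to a login here?
      SELECT d.name, d.owner_sid, sp.name AS owner_login
      FROM   sys.databases d
      LEFT JOIN sys.server_principals sp ON sp.sid = d.owner_sid
      WHERE  d.name = DB_NAME();

      -- If owner_login is NULL, re-point the owner at a login that exists on this server:
      ALTER AUTHORIZATION ON DATABASE::[YourRestoredDb] TO [sa];

      -- Quick re-test of impersonation afterwards:
      EXECUTE AS USER = 'HistoryUser'; SELECT 3; REVERT;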

    Read the article

  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a dataset after a query has completed. I have the following database relationship: Employees is related to Groups through EmployeeGroups. Employees holds all the data for an individual; I'll name the unique key UserID. Groups holds all the groups that an employee can be a part of, e.g. Super User, Admin, User, etc.; I will name the unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID). What I am trying to accomplish: after querying for all users, I want to loop through each user and record which groups that user is a part of, by adding a new string column named 'Groups' to the dataset and inserting into it the values from a second query that gets all the groups the user is a part of. Then, by use of data binding, populate a ListView with all employees and their group associations. My code is as follows; position 5 is the new column I am trying to add to the dataset. string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees"; DataSet theEmployeeSet = itsDatabase.runQuery(theQuery); DataColumn theCol = new DataColumn("Groups", typeof(string)); theEmployeeSet.Tables[0].Columns.Add(theCol); foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows) { theRow.ItemArray[5] = "1234"; } At the moment, the code will create the new column, but when I assign the data to that column nothing gets assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all.

    Read the article

  • What is a good approach to preloading data?

    - by Bob Horn
    Are there best practices out there for loading data into a database, to be used with a new installation of an application? For example, for application foo to run, it needs some basic data before it can even be started. I've used a couple options in the past: TSQL for every row that needs to be preloaded: IF NOT EXISTS (SELECT * FROM Master.Site WHERE Name = @SiteName) INSERT INTO [Master].[Site] ([EnterpriseID], [Name], [LastModifiedTime], [LastModifiedUser]) VALUES (@EnterpriseId, @SiteName, GETDATE(), @LastModifiedUser) Another option is a spreadsheet. Each tab represents a table, and data is entered into the spreadsheet as we realize we need it. Then, a program can read this spreadsheet and populate the DB. There are complicating factors, including the relationships between tables. So, it's not as simple as loading tables by themselves. For example, if we create Security.Member rows, then we want to add those members to Security.Role, we need a way of maintaining that relationship. Another factor is that not all databases will be missing this data. Some locations will already have most of the data, and others (that may be new locations around the world), will start from scratch. Any ideas are appreciated.
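
    A hedged alternative to one IF NOT EXISTS block per row is an idempotent MERGE per table, shown here against the same Master.Site columns as the snippet above (SQL Server 2008+):

      MERGE Master.Site AS target
      USING (VALUES (@EnterpriseId, @SiteName)) AS src (EnterpriseID, Name)
          ON target.Name = src.Name
      WHEN NOT MATCHED THEN
          INSERT (EnterpriseID, Name, LastModifiedTime, LastModifiedUser)
          VALUES (src.EnterpriseID, src.Name, GETDATE(), @LastModifiedUser);

    The same script can then be re-run safely against databases that already contain some of the seed data, which helps with the mix of new and existing locations described above.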

    Read the article

  • PHP driven site needs password change.

    - by Drea
    I have inherited a website and need to change the password it uses to access the database. I can see that there are two tables within the database, but neither of them has username or password info. The previous web guy moved out of the country and can't be reached. I am not up to speed enough to figure this out. I have gone through all the files to try and find the answer but can't get it. It's hosted by GoDaddy.com and I have changed the passwords there, but that didn't change this login info. www.executivehomerents.com/cpanel <- this brings up the prompt for the username & password, which I won't give out, but the page only gives you 5 choices and none of them deal with changing the password. They simply change the data in the tables. If you go to http://www.websitedatabases.com/ <- this is the company that the PHPMagic program was purchased from; they have no contact number. Here is another page that might help: http://www.executivehomerents.com/cpanel/dbinput/setup.php. I don't think I'll get an answer to this but it's worth a try... thanks.

    Read the article

  • In an Android application, should I have one content provider per table or only one for the entire application?

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far, I've built a small application with a couple screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention. I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application. Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction info?

    - by bob-the-destroyer
    Is there a Mysql statement which provides full details of any other open connection or user? For this particular case, on myisam tables specifically. Looking at Mysql's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote odbc connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. Tcp connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check to make sure there's no pending inserts on any other connection on those specific tables so it can instead wait until all data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait based on the number of pending inserts. Ideally by table, I'd like to get back using a query the timestamp when connection one began the write, total inserts left to be done, and total inserts already completed. Instead of insert counts, even knowing number of bytes written and left to write would work just fine here. Obviously since connection two is a tcp connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is by a mysql command line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task which the PHP script can check, but hopefully there's already a built-in Mysql feature I can take advantage of.
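
    MySQL does expose what other connections are currently running, which gets part of the way there; a rough starting point (requires the PROCESS privilege, and the schema name is a placeholder):

      -- Connections currently running an INSERT against the database in question,
      -- with how long the current statement has been running.
      SELECT id, user, host, db, command, time, state, info
      FROM   information_schema.PROCESSLIST
      WHERE  db = 'your_database'
        AND  info LIKE 'INSERT%';

      -- For MyISAM specifically, tables that are open and currently in use:
      SHOW OPEN TABLES FROM your_database WHERE In_use > 0;

    This shows that a long insert is still running and for how long, but not a row-level progress count, so the custom transaction-log table mentioned at the end is still the simplest way to report "inserts completed vs. inserts remaining".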

    Read the article

  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How do you design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about this, but it is about a centralized database. Since the data is supposed to grow, we'll need to partition the data into multiple shards sooner or later. So the question becomes: how do you design data storage for a partitioned tagging system? The tagging system basically has 3 tables: Item (item_id, item_content), Tag (tag_id, tag_title), TagMapping (map_id, tag_id, item_id). That works fine for finding all items for a given tag and finding all tags for a given item, if the tables are stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, if we want to partition table Tag into K databases, we can simply choose database number (tag_id % K) to store a given tag. But how do we partition table TagMapping? The TagMapping table represents the many-to-many relationship. I can only imagine having duplication - that is, the same content of TagMapping kept in two copies, one partitioned by tag_id and the other partitioned by item_id. To find items for a given tag, we use the copy partitioned by tag_id; to find tags for a given item, we use the copy partitioned by item_id. As a result, there is data redundancy, and the application level has to keep all tables consistent, which looks hard. Is there any better solution to this many-to-many partition problem?
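
    The "store the mapping twice" idea described above can be sketched like this (generic SQL; names are illustrative only):

      -- Copy 1: sharded by tag_id % K, answers "which items have this tag?"
      CREATE TABLE tag_items (
          tag_id   BIGINT NOT NULL,
          item_id  BIGINT NOT NULL,
          PRIMARY KEY (tag_id, item_id)
      );

      -- Copy 2: sharded by item_id % K, answers "which tags does this item have?"
      CREATE TABLE item_tags (
          item_id  BIGINT NOT NULL,
          tag_id   BIGINT NOT NULL,
          PRIMARY KEY (item_id, tag_id)
      );

    Every tag/untag operation writes to both copies (on possibly different shards), which is exactly where the consistency burden mentioned above comes from; the usual mitigations are an application-level queue that retries the second write, or a periodic reconciliation job.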

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I'm using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I'm using InnoDB. It's like hiccups in MySQL. What could be creating this sort of periodic problem? I don't have any cron jobs or processes running each time the X period happens. The X period could be anywhere between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours. My configuration:
    
    key_buffer_size = 256M
    max_allowed_packet = 1M
    table_cache = 1024
    table_open_cache = 1024
    sort_buffer_size = 2M
    read_buffer_size = 2M
    read_rnd_buffer_size = 4M
    myisam_sort_buffer_size = 32M
    thread_cache_size = 128
    query_cache_size = 128M
    log-slow-queries = slow.log
    long_query_time = 5
    log-queries-not-using-indexes
    # Try number of CPU's*2 for thread_concurrency
    thread_concurrency = 4
    max_connections = 512
    #innodb_data_file_path = ibdata1:10M:autoextend
    #innodb_log_group_home_dir = /usr/local/mysql/data
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM but beware of setting memory usage too high
    innodb_buffer_pool_size = 1G
    #innodb_additional_mem_pool_size = 20M
    # Set .._log_file_size to 25 % of buffer pool size
    #innodb_log_file_size = 64M
    #innodb_log_buffer_size = 8M
    #innodb_flush_log_at_trx_commit = 0
    #innodb_lock_wait_timeout = 50
    [mysqldump]
    quick
    max_allowed_packet = 16M
    [myisamchk]
    key_buffer_size = 64M
    sort_buffer_size = 64M
    read_buffer = 2M
    write_buffer = 2M
    
    There are about 200+ tables divided across 3 databases. The most heavily written one is InnoDB; the other ones are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4 GB RAM, 32-bit Ubuntu.
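
    Worth noting: innodb_log_file_size is commented out above, so the redo logs are at their small default, and a classic cause of short periodic stalls with a 1G buffer pool is InnoDB aggressively checkpointing when those small logs fill up. That is only a guess; a few low-impact checks run while a stall is happening can confirm or rule it out:

      SHOW FULL PROCESSLIST;                                      -- what every connection is doing/waiting on
      SHOW ENGINE INNODB STATUS;                                  -- log activity and checkpoint information
      SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';   -- how much dirty data is waiting to flush
      SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';            -- redo log write volume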

    Read the article

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an oracle database). I am hitting an error at around the 600 mark - "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset ok though. I welcome any better ways to process this amount of data. rootTermsTable = entKw.GetRootKeywordsByCategory(catID); for (int k = 0; k < rootTermsTable.Rows.Count; k++) { string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString(); ... } public DataTable GetKeywordsByCategory(string categoryID) { DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider); DbConnection con = provider.CreateConnection(); con.ConnectionString = connectionString; DbCommand com = provider.CreateCommand(); com.Connection = con; com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})",categoryID); com.CommandType = CommandType.Text; DataSet ds = new DataSet(); DbDataAdapter ad = provider.CreateDataAdapter(); ad.SelectCommand = com; con.Open(); ad.Fill(ds); con.Close(); DataTable dt = new DataTable(); dt = ds.Tables[0]; return dt; //return ds.Tables[0].DefaultView; }

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with optimizing the structure of the database. I'll try to explain it exactly. I am creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store the different types of values in the database? In the project I'm using Propel 1.6. The point is the ability to add values with 'int', 'varchar' and other column types while keeping searches on the table efficient. In total, I have two ideas. The first is to create a table "value", which will have columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. Each would have columns "id" and "value", where "value" has the relevant type (int, varchar, etc). I must admit I don't really believe in either of the above solutions; originally I was thinking about one table "value" where the column would be of type "text", but that solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance. EDIT: For example, we have three tables:
    
    USER [table of users]: id, name
    FIELD [table of profile fields, where the column 'type' is the type of the field, e.g. int or varchar]: id, type, name
    VALUE: id, user_id (FK user.id), field_id (FK field.id), value
    
    So each row of the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I would want the value to go into a column of the appropriate type.

    Read the article

  • Elegant code question: How to avoid creating unneeded object

    - by SeaDrive
    The root of my question is that the C# compiler is too smart. It detects a path via which an object could be undefined, so it demands that I fill it. In the code, I look at the tables in a DataSet to see if there is one that I want. If not, I create a new one. I know that dtOut will always be assigned a value, but the compiler is not happy unless it's assigned a value when declared. This is inelegant. How do I rewrite this in a more elegant way? System.Data.DataTable dtOut = new System.Data.DataTable(); . . // find table with tablename = grp // if none, create new table bool bTableFound = false; foreach (System.Data.DataTable d1 in dsOut.Tables) { string d1_name = d1.TableName; if (d1_name.Equals(grp)) { dtOut = d1; bTableFound = true; break; } } if (!bTableFound) dtOut = RptTable(grp);

    Read the article

  • Error No 150 mySQL

    - by buddhamagnet
    Hi all. This is making me sweat - I am getting error 150 when I try and create a table in mySQL. I've scoured the forums to no avail. The statement uses foreign key constraints - both tables are InnoDB, all relevant columns have the same data type and both tables have the same charset and collation. Here's the CREATE TABLE and the original CREATE TABLE statement for the table that's being referenced. Any ideas? New table: CREATE TABLE `approval` ( `rev_id` int(10) UNSIGNED NOT NULL, `rev_page` int(10) UNSIGNED NOT NULL, `user_id` int(10) UNSIGNED NOT NULL, `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (`rev_id`,`rev_page`,`user_id`), KEY `FK_approval_user` (`user_id`), CONSTRAINT `FK_approval_revision` FOREIGN KEY (`rev_id`, `rev_page`) REFERENCES `revision` (`rev_id`, `rev_page`), CONSTRAINT `FK_approval_user` FOREIGN KEY (`user_id`) REFERENCES `user` (`user_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; Referenced table: CREATE TABLE `revision` ( `rev_id` int(10) unsigned NOT NULL auto_increment, `rev_page` int(10) unsigned NOT NULL, `rev_text_id` int(10) unsigned NOT NULL, `rev_comment` tinyblob NOT NULL, `rev_user` int(10) unsigned NOT NULL default '0', `rev_user_text` varbinary(255) NOT NULL default '', `rev_timestamp` binary(14) NOT NULL default '\0\0\0\0\0\0\0\0\0\0\0\0\0\0', `rev_minor_edit` tinyint(3) unsigned NOT NULL default '0', `rev_deleted` tinyint(3) unsigned NOT NULL default '0', `rev_len` int(10) unsigned default NULL, `rev_parent_id` int(10) unsigned default NULL, PRIMARY KEY (`rev_id`), UNIQUE KEY `rev_page_id` (`rev_page`,`rev_id`), KEY `rev_timestamp` (`rev_timestamp`), KEY `page_timestamp` (`rev_page`,`rev_timestamp`), KEY `user_timestamp` (`rev_user`,`rev_timestamp`), KEY `usertext_timestamp` (`rev_user_text`,`rev_timestamp`) ) ENGINE=InnoDB AUTO_INCREMENT=4904 DEFAULT CHARSET=binary MAX_ROWS=10000000 AVG_ROW_LENGTH=1024;
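
    One InnoDB rule that matters for errno 150 here: the referenced columns must be covered by an index on the referenced table whose leading columns appear in the same order as in the FOREIGN KEY clause. revision has PRIMARY KEY (rev_id) and UNIQUE KEY (rev_page, rev_id), but nothing starting with (rev_id, rev_page), which is the order the approval constraint uses. A hedged sketch of the two ways to resolve that:

      -- Option 1: add an index in the order the constraint references.
      ALTER TABLE `revision` ADD INDEX `rev_id_page` (`rev_id`, `rev_page`);

      -- Option 2: reorder the constraint to match the existing unique key instead:
      --   FOREIGN KEY (`rev_page`, `rev_id`)
      --     REFERENCES `revision` (`rev_page`, `rev_id`)

      -- In either case, SHOW ENGINE INNODB STATUS run right after the failure
      -- prints the precise reason under "LATEST FOREIGN KEY ERROR".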

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times my normal traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places: RAM buffers, the commit log, the binary log, the actual tables, and all of the above places on my DB slave. This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network-attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or if the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or the binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like writes to reach the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so it's a write-heavy workload.
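
    InnoDB doesn't offer a true write-back mode for the tables themselves, but the standard knobs below relax exactly the commit-log and binary-log flushing described above; this is a hedged sketch of toggling them around a burst window (they trade up to roughly a second of committed transactions on a crash for far fewer fsyncs):

      -- Before the burst:
      SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write redo log per commit, fsync about once per second
      SET GLOBAL sync_binlog = 0;                     -- let the OS decide when to flush the binary log

      -- After the burst, back to full durability:
      SET GLOBAL innodb_flush_log_at_trx_commit = 1;
      SET GLOBAL sync_binlog = 1;

    Writes to the actual table pages are already deferred: InnoDB dirties pages in the buffer pool and flushes them in the background, so a large enough buffer pool covers the "keep it in RAM" part of the request on its own.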

    Read the article

  • Table index design

    - by Swoosh
    I would like to add indexes to my table. I am looking for general ideas on how to add more indexes to a table, other than the clustered PK, and I would like to know what to look for when I am doing this. So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: a massive bulk insert is adding data to this table. The table has 27 columns (so far, and counting :D ): int x 9 columns (IDs), varchar x 10 columns, bit x 2 columns, datetime x 5 columns.
    
    INT COLUMNS: all of these are INT IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent ID" (a self-reference). Join: all the "small" tables have a PK, the usual way... clustered.
    
    STRING COLUMNS: there is a Company column (string!) that is always about 5 characters long, and every user will be restricted by this one. If there are 15 different "Companies" in Task, the logged-in user would only see one, so there's always a filter on this column. Might it be a good idea to add an index to it?
    
    DATE COLUMNS: I think you don't index these... right? Or can/should they be indexed?
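
    Since every query filters on Company, a nonclustered index leading with that column is the natural first candidate; the sketch below is only illustrative (StatusID and the INCLUDE columns are hypothetical names), and date columns can be key or included columns just like any other type:

      CREATE NONCLUSTERED INDEX IX_Task_Company_Status
          ON dbo.Task (Company, StatusID)
          INCLUDE (PriorityID, CreatedDate);   -- covers the hot query's SELECT list

      -- Around the massive bulk insert it can be cheaper to rebuild than to
      -- maintain the index row by row:
      ALTER INDEX IX_Task_Company_Status ON dbo.Task DISABLE;
      -- ... bulk insert ...
      ALTER INDEX IX_Task_Company_Status ON dbo.Task REBUILD;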

    Read the article

  • How can I manage a FIFO-queue in an database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write an SQL query that selects the correct price? For example: if I have 3 items in for 75 and then 3 items in for 80, and then two out for 75, the third out should be for 75 (X) and the fourth out should be for 80 (Y). How can I write the price query for X and Y? They should use the price from the third and fourth rows. In other words, is there any way to SELECT the third row in the In table? I can not use auto_increment as the identifier for, e.g., the "third" row, because the tables will contain posts for other items too. The rows will not be deleted; they will be kept for accounting reasons. SELECT Price FROM In WHERE ...?
    
    NEW database design:
    
    In:
    +-----------+-------+
    | Supply_ID | Price |
    +-----------+-------+
    | 1         | 75    |
    | 1         | 75    |
    | 1         | 75    |
    | 2         | 80    |
    | 2         | 80    |
    +-----------+-------+
    
    Out:
    +-------------+-------+
    | Delivery_ID | Price |
    +-------------+-------+
    | 1           | 75    |
    | 1           | 75    |
    | 2           | X     | <- ?
    | 3           | Y     | <- ?
    +-------------+-------+
    
    OLD database design:
    
    In:
    +-----------+----------+-------+
    | Supply_ID | Quantity | Price |
    +-----------+----------+-------+
    | 1         | 3        | 75    |
    | 2         | 3        | 80    |
    +-----------+----------+-------+
    
    Out:
    +-------------+----------+-------+
    | Delivery_ID | Quantity | Price |
    +-------------+----------+-------+
    | 1           | 2        | 75    |
    | 2           | 1        | X     | <- ?
    | 3           | 1        | Y     | <- ?
    +-------------+----------+-------+
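
    A hedged sketch against the NEW design: give both tables a surrogate key that preserves insertion order, number the rows on each side, and let the n-th delivery take the n-th supply's price. It assumes a database with window functions (MySQL 8+, SQL Server, PostgreSQL); in_id and out_id are hypothetical auto-increment columns added purely for ordering, and with several products the item would be added to both the ORDER BY and the join.

      SELECT o.out_id, i.Price
      FROM  (SELECT out_id,
                    ROW_NUMBER() OVER (ORDER BY out_id) AS pos
             FROM   `Out`) AS o
      JOIN  (SELECT Price,
                    ROW_NUMBER() OVER (ORDER BY in_id) AS pos
             FROM   `In`) AS i
        ON  i.pos = o.pos;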

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information Let's say I have two database servers, both SQL Server 2008. One is in my LAN (ServerLocal), the other one is on a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application and I would like to keep it's data up-to-date with the data in the database ServerLocal. ServerLocal is able to communicate with ServerRemote, this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available. Current solution I thought it would be a nice solution to use replication. So I've made ServerLocal a publisher and subscriptions are pushed to the ServerRemote. This works fine, when a snapshot is transfered to ServerRemote the existing data will be purged and the ServerRemote database is once again an exact replica of the database on ServerLocal. The problem Records that exist on ServerRemote that don't exist on ServerLocal are removed. This doesn't matter for most of my tables but in some of my tables I'd like to keep the existing data (aspnet_users for instance), and update the records if necessary. What kind of replication fits my problem?

    Read the article

  • Why represent shopping carts and order invoices differently in a domain model?

    - by Todd
    I've built some shopping cart systems in the past, but I always designed them such that the final order invoice is just a shopping cart that has been marked as "purchased". All the logic for adding/removing/changing items in a cart is also the logic for the order. All data is stored in the same tables in the database. But it seems this is not the proper way to design an e-commerce site.. Can someone explain the benefit of separating the shopping cart from invoices in the domain model? It seems to me this would lead to a lot of duplicated code, an extra set of tables in the database, and make it harder to maintain in the event the system need to start accommodating more complicated orders (like specifying selected options for an item which may or may not change the price/availability/shipping time of the order). I'm assuming I just haven't seen the light, as every book and other example I see seems to separate these two seemingly similar concerns -- but I can't find any explanation as to the benefit of doing such! It's also the case in the systems that I design that changes are often made after the initial order is confirmed. It's not uncommon for items to be removed, replaced, or added afterwards (but prior to fulfillment).

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large" but that's not the point... ;) Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on-the-fly to see the specific details that they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc... They know how to do all this and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface as Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and found that the Access file bloats hugely (30 MB input file leads to about 70 MB Access file, and when I open the file, run a report and close it the file's at 120-150 MB!), the import process is slow and very manual (currently, the CSV files are created by the same plsql script that runs the main process so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or soe while Access would take anywhere from 2-5 minutes for the same data) (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • C#: Using one Data Table in order to fill 2 different comboboxes?

    - by odiseh
    Hi all. I have 2 comboboxes on a form (form1) called combobox1 and combobox2. Each comboboxes should be filled with data stored in 2 different tables in Sql server 2005: table1 and table2 I mean: combobox1 -- table1 combobox2 -- table2 I fill data table with proper data and then bind the comboboxes to it separately. My problem is: after filling 2 combos, both of them have equal data got from table2. This is my code: DataTable tb1 = new DataTable(); //Filling tb1 with data got from table1 combobox1.Items.Clear(); combobox1.DataSource = tb1; combobox1.DisplayMember = "Name"; combobox1.ValueMember = "ID"; combobox1.SelectedIndex = -1; //filling tb1 with data got from table2 combobox2.Items.Clear(); combobox2.DataSource = tb1; combobox2.DisplayMember = "Name"; combobox2.ValueMember = "ID"; combobox2.SelectedIndex = -1; What's wrong? It seems that if I get 2 different data tables (tb1 and tb2) , every thing will be all right. Any suggestions please. Thank you

    Read the article
