Search Results

Search found 34962 results on 1399 pages for 'drop database'.

  • Why is Dropbox syncing so freakishly slowly in my Linux virtual machine?

    - by Bec
    I am setting up a Linux virtual machine (Windows 7 64-bit host, Ubuntu 64-bit guest, using VirtualBox) and I just installed Dropbox and set it to sync. I've only got about 2 GB in there, so I figured it should take just an afternoon, but it's going at about 0.5 kB/second and says it will take about 60 days. I usually get about 200 kB/second in the host OS, and downloading straight from the Dropbox website through Firefox in the Ubuntu VM I get about that, but sync is really slow. Any tips?

    Read the article

  • How to get cells to default to zero or calculate additional fees, based on selection from a drop-down list

    - by User300479
    I am building a Pay Rate Calculator worksheet with a Flat/Base pay rate & numerous Overtime pay rates. I would like the "Overtime" pay rate cells to change depending on my selection from my drop-down list. My list selections are "Flat Rate" and "Compounding". 1) If I select "Flat Rate", how can I make all the "Overtime" cell rates and totals default or calculate to zero, to show the user there are no overtime rates to be applied to this job and to use the one rate to pay? 2) And if I select "Compounding", the Overtime rate cells are updated to add/include additional fees, to show the user Overtime rates apply to the job and penalties have automatically been calculated on top for them. Please explain like I'm a 2-year-old - I'm learning as I go. Many thanks :)
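
    A minimal sketch of one way to do this, assuming (hypothetically) that the drop-down selection sits in B1, the base rate in B2 and the overtime hours in C2, with a 50% overtime loading - adjust to the real layout and penalty rules:

        Overtime rate cell:   =IF($B$1="Flat Rate", 0, $B$2*1.5)
        Overtime total cell:  =IF($B$1="Flat Rate", 0, C2*$B$2*1.5)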

    Read the article

  • Why is execution of batch files different between drag & drop and the command line?

    - by Dharma Leonardi
    Ok, so I've been trying to figure this out for hours with no progress. I have created a batch file to get details of a VHD. Everything runs fine and produces the expected results when run from the command line in a command prompt. However, when I use drag and drop from file explorer (dragging a vhd file and dropping onto the batch file) the batch file runs without errors but the output (VHD.INFO) is empty. I'm stumped. Edited to only include the behaviour:

    @echo off
    cls
    setlocal enabledelayedexpansion
    set "_PATH.THIS=%~dp0"
    echo HELP | diskpart > %_PATH.THIS%OUTPUT.TMP
    TYPE %_PATH.THIS%OUTPUT.TMP
    PAUSE

    To demonstrate the different behaviour, please run the batch file from the command line once (works) and also run the batch file by double clicking in file explorer (failure in all piping commands).
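
    One hedged thing to try (an assumption about the drag & drop environment, not a confirmed diagnosis): when Explorer launches the batch file, the working directory and quoting can differ from a normal console session, and diskpart needs elevation in both cases, so writing to a known-writable location with quoted paths helps isolate the problem:

        @echo off
        setlocal
        set "_OUT=%TEMP%\VHD.INFO"
        echo HELP | diskpart > "%_OUT%"
        type "%_OUT%"
        pause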

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream).

    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let's dive into an example: let's say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow."

    Now, let's move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network. On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled.

    Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the 'GO' command, there will be three different roundtrips.
    TDS packets sent from the client – TDS (Tabular Data Stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. This is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.
    TDS packets received from server – the TDS packets sent by the server to the client during the query execution.
    Bytes sent from client – the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a procedure name + parameters, and this will minimize the network pressure.
    Bytes received from server – the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    Client processing time – the amount of time in milliseconds between the first received response packet and the last received response packet on the client.
    Wait time on server replies – the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means that the query was able to return the first row fast. A long 'client processing time' does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have only seen cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client.

    Here is another example: think about a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture.

    And finally, here are some guidelines for monitoring the network performance and improving it:
    Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.
    Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on.
    Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they 'teamed', what are the settings – full duplex and so on.
    Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server, aside from its internal workings. As further reading I would still highly recommend the Wait Stats series on this blog, and I would also recommend you have the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL
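
    For readers who want to check this on their own system, a minimal sketch of looking up the ASYNC_NETWORK_IO wait discussed above (a standard DMV query, shown only as an illustration; interpreting the numbers still depends on your workload):

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'ASYNC_NETWORK_IO';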

    Read the article

  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database. Access our Resource Kit to discover the top reasons why:
    3,000 ISVs and OEMs rely on MySQL as their embedded database
    8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings
    Leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster.
    Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

    Read the article

  • Web Seminar - The Oracle Database Appliance: How to Sell a Unique Product!

    - by swalker
    Dear partner, You are exclusively invited to join us for a webcast, dedicated to Oracle's EMEA Partners, on the Oracle Database Appliance value proposition, positioning and ecosystem – to help you capture new business and help your customers roll out their solutions fast, easily, safely and with maximum cost efficiency! Join us to learn about:
    ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable
    Feedback from early Customer Wins: What can we Learn?
    Objection Handling: Overcoming the most common customer questions
    Going beyond the Database: The ODA ECO System for applications, backup & more…
    When combined with your high-value services (e.g., migration, consolidation), the end result is a database system that you can use to grow the business in your existing accounts, or capture new business. Join us at the EMEA partner webcast hosted by Robert Van Espelo, Cloud and Virtualization Leader, EMEA Business Development, on Thursday, April 12, at 9:00am UK / 10:00am CET. The presentation will be given in English. To register for this webcast click here. We look forward to talking to you on April 12! Best regards, Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle EMEA, Hardware Sales, and Paul Leonard, EMEA Partner Marketing Manager, Oracle EMEA, Systems Marketing

    Read the article

  • How can I design my classes to include calendar events stored in a database?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At this moment I have two classes: a class "Calendar" and a class "CalendarCell". Here are the two classes' properties and method declarations.

    class Calendar {
        private $month;
        private $monthName;
        private $year;
        private $calendarCellList = array();
        private $translator;
        public function __construct($month, $year, $translator) {}
        public function getCalendarCellList() {}
        public function getMonth() {}
        public function getMonthName() {}
        public function getNextMonth() {}
        public function getNextYear() {}
        public function getPreviousMonth() {}
        public function getPreviousYear() {}
        public function getYear() {}
        private function calculateDaysPreviousMonth() {}
        private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
        private function isCurrentDay(\DateTime $dateTime) {}
        private function isDifferentMonth(\DateTime $dateTime) {}
    }

    class CalendarCell {
        private $day;
        private $month;
        private $dayNameAbbreviation;
        private $numericDayOfTheWeek;
        private $isCurrentDay;
        private $isDifferentMonth;
        private $translator;
        public function __construct(array $parameters) {}
        public function getDay() {}
        public function getMonth() {}
        public function getDayNameAbbreviation() {}
        public function isCurrentDay() {}
        public function isDifferentMonth() {}
    }

    Each calendar day can include many calendar events (such as appointments or schedules) stored in a database. My question is: what is the best way to manage these calendar events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a db (because I would have to inject at least a repository service as well) just to create and visualize a calendar... so maybe it would be better to extend CalendarCell (for instance into CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be much appreciated!
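
    A hedged sketch of one possible direction (the CalendarEventProviderInterface name and wiring below are hypothetical, not part of the original code): keep CalendarCell free of persistence and let Calendar accept an optional event provider, so the classes remain usable without a database:

        interface CalendarEventProviderInterface
        {
            /** Returns an array of CalendarEvent objects occurring on the given day. */
            public function findEventsForDay(\DateTime $day);
        }

        class Calendar
        {
            private $eventProvider;

            public function __construct($month, $year, $translator,
                                        CalendarEventProviderInterface $eventProvider = null)
            {
                $this->eventProvider = $eventProvider;
                // ... build the CalendarCell list as before ...
            }

            private function eventsFor(\DateTime $day)
            {
                // With no provider injected the calendar still renders, just without events;
                // a Doctrine repository can implement the interface for the database case.
                return $this->eventProvider ? $this->eventProvider->findEventsForDay($day) : array();
            }
        }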

    Read the article

  • PHP Code (modules) included via MySQL database, good idea?

    - by ionFish
    The main script includes "modules" which add functionality to it. Each module is set up like this:

    <?php
    //data collection stuff
    //(...) approx 80 lines of code
    //end data collection
    $var1 = 'some data';
    $var2 = 'more data';
    $var3 = 'other data';
    ?>

    Each module has the exact same variables; just the data collection is different. I was wondering if it's a reasonable idea to store the module data in MySQL like this:

    [database]
      |_ modules
         |_ name
         |_ function (the raw PHP data from above)
         |_ description
         |_ author
         |_ update-url
         |_ version
         |_ enabled

    ...and then include the PHP data from the database and execute it? Something like: a tab-navigation system at the top of the page for each module name, then inside each of those tabs the page content would work by parsing the database-stored code of the module from the function column. The purpose would be to save code space (fewer lines), allow for easy updates, and include/exclude modules based on the enabled option. This is how many other web apps work, some of my own too. But I had never thought about this so deeply. Are there any drawbacks or security risks to this?
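
    A minimal sketch of the mechanism being described (table and column names taken from the layout above; eval() of database content is shown only to make the risk concrete - anyone who can write to that table can run arbitrary PHP):

        <?php
        // Assumes a PDO connection $pdo, and that the stored code omits the opening
        // and closing PHP tags, since eval() expects plain PHP statements.
        $stmt = $pdo->query("SELECT name, `function` FROM modules WHERE enabled = 1");

        foreach ($stmt as $module) {
            echo '<div class="tab" id="' . htmlspecialchars($module['name']) . '">';
            eval($module['function']);        // runs the module's data collection code
            echo "<p>$var1 $var2 $var3</p>";  // the variables every module defines
            echo '</div>';
        }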

    Read the article

  • Does a Postgresql dump create sequences that start with - or after - the last key?

    - by bennylope
    I recently created a SQL dump of a database behind a Django project, and after cleaning the SQL up a little bit was able to restore the DB and all of the data. The problem was the sequences were all mucked up. I tried adding a new user and generated the Python error IntegrityError: duplicate key violates unique constraint. Naturally I figured my SQL dump didn't restart the sequence. But it did:

    DROP SEQUENCE "auth_user_id_seq" CASCADE;
    CREATE SEQUENCE "auth_user_id_seq" INCREMENT 1 START 446 MAXVALUE 9223372036854775807 MINVALUE 1 CACHE 1;
    ALTER TABLE "auth_user_id_seq" OWNER TO "db_user";

    I figured out that a repeated attempt at creating a user (or any new row in any table with existing data and such a sequence) allowed for successful object/row creation. That solved the pressing problem. But given that the last user ID in that table was 446 - the same start value in the sequence creation above - it looks like Postgresql was simply trying to start creating rows with that key. Does the SQL dump provide the wrong start key by 1? Or should I invoke some other command to start sequences after the given start ID? Keenly curious.
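
    A hedged note on the behaviour: CREATE SEQUENCE ... START 446 means the next nextval() returns 446 itself, so a dump that records the last used key as the START value will collide exactly once. One way to realign a sequence after a restore (a sketch using the standard setval() function, with the table and sequence names from the example above):

        -- setval(seq, n) marks n as already used, so the next nextval() returns n + 1
        SELECT setval('auth_user_id_seq', (SELECT MAX(id) FROM auth_user));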

    Read the article

  • Why is using a common-lookup table to restrict the status of an entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally have a natural flow, e.g.:
    Booked In
    Assigned to Technician
    Diagnosing Problem
    Waiting for Client Confirmation
    Repaired & Ready for Pickup
    Repaired & Couriered
    Irreparable & Ready for Pickup
    Quote Rejected
    Arguably, some of these statuses could be normalised into tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, e.g.:
    Pending
    Completed
    Shipped
    Cancelled
    Refunded
    The status titles and descriptions are in one place for editing and are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table without rebuilding my code.
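
    For reference, a minimal sketch of the lookup-table pattern being defended here (hypothetical table and column names):

        CREATE TABLE OrderStatus (
            OrderStatusId INT PRIMARY KEY,
            Title         VARCHAR(50)  NOT NULL,
            Description   VARCHAR(255) NULL
        );

        CREATE TABLE Orders (
            OrderId       INT PRIMARY KEY,
            OrderStatusId INT NOT NULL,
            FOREIGN KEY (OrderStatusId) REFERENCES OrderStatus (OrderStatusId)
        );

    Adding a new status is then a single INSERT into OrderStatus rather than a schema or code change.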

    Read the article

  • Why does this simple MySQL procedure take way too long to complete?

    - by Howard Guo
    This is a very simple MySQL stored procedure. Cursor "commission" has only 3000 records, but the procedure call takes more than 30 seconds to run. Why is that?

    DELIMITER //
    DROP PROCEDURE IF EXISTS apply_credit//
    CREATE PROCEDURE apply_credit()
    BEGIN
        DECLARE done tinyint DEFAULT 0;
        DECLARE _pk_id INT;
        DECLARE _eid, _source VARCHAR(255);
        DECLARE _lh_revenue, _acc_revenue, _project_carrier_expense, _carrier_lh,
                _carrier_acc, _gross_margin, _fsc_revenue, _revenue, _load_count DECIMAL;
        DECLARE commission CURSOR FOR
            SELECT pk_id, eid, source, lh_revenue, acc_revenue, project_carrier_expense,
                   carrier_lh, carrier_acc, gross_margin, fsc_revenue, revenue, load_count
            FROM ct_sales_commission;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
        DELETE FROM debug;
        OPEN commission;
        REPEAT
            FETCH commission INTO _pk_id, _eid, _source, _lh_revenue, _acc_revenue,
                _project_carrier_expense, _carrier_lh, _carrier_acc, _gross_margin,
                _fsc_revenue, _revenue, _load_count;
            INSERT INTO debug VALUES(concat('row ', _pk_id));
        UNTIL done = 1 END REPEAT;
        CLOSE commission;
    END//
    DELIMITER ;

    CALL apply_credit();
    SELECT * FROM debug;
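
    One hedged observation (a sketch, not a confirmed diagnosis of the 30-second runtime): row-by-row cursor fetches with one INSERT per row are usually far slower in MySQL than a single set-based statement doing the same work:

        -- Same effect as the cursor loop above, in one pass
        DELETE FROM debug;
        INSERT INTO debug
        SELECT CONCAT('row ', pk_id)
        FROM ct_sales_commission;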

    Read the article

  • How to load SQL data into several ComboBoxes easily - am I doing this correctly, or is there another way?

    - by Dominic Deepan.d
    I have ComboBoxes to fill with data for City, State and PinCode; these comboboxes are drop-down lists the user will pick from, and they load once the form opens. Here is the code:

    /// CODE TO BRING DATA FROM SQL INTO THE FORM DROP LIST
    /// To fill the states from the States table
    cn = new SqlConnection(@"Data Source=Nick-PC\SQLEXPRESS;Initial Catalog=AutoDB;Integrated Security=True");
    cmd = new SqlCommand("select * from TblState", cn);
    cn.Open();
    SqlDataReader dr;
    try
    {
        dr = cmd.ExecuteReader();
        while (dr.Read())
        {
            SelectState.Items.Add(dr["State"].ToString());
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        cn.Close();
    }

    // To fill the cities from the City table
    cn1 = new SqlConnection(@"Data Source=Nick-PC\SQLEXPRESS;Initial Catalog=AutoDB;Integrated Security=True");
    cmd1 = new SqlCommand("SELECT * FROM TblCity", cn1);
    cn1.Open();
    SqlDataReader ds;
    try
    {
        ds = cmd1.ExecuteReader();
        while (ds.Read())
        {
            SelectCity.Items.Add(ds["City"].ToString());
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        cn1.Close();
    }

    // To fill the pin codes from the City table
    cn2 = new SqlConnection(@"Data Source=Nick-PC\SQLEXPRESS;Initial Catalog=AutoDB;Integrated Security=True");
    cmd2 = new SqlCommand("SELECT (Pincode) FROM TblCity ", cn2);
    cn2.Open();
    SqlDataReader dm;
    try
    {
        dm = cmd2.ExecuteReader();
        while (dm.Read())
        {
            SelectPinCode.Items.Add(dm["Pincode"].ToString());
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
    finally
    {
        cn2.Close();
    }

    It's kind of big - I am doing the same steps for all the comboboxes - but is there a way I can merge it in a simpler way?
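
    A hedged sketch of the kind of consolidation being asked about (the connection string and table/column names come from the snippet above; the FillComboBox helper itself is hypothetical):

        private void FillComboBox(ComboBox box, string query, string column)
        {
            // using blocks guarantee the connection and reader are closed even on error
            using (var cn = new SqlConnection(@"Data Source=Nick-PC\SQLEXPRESS;Initial Catalog=AutoDB;Integrated Security=True"))
            using (var cmd = new SqlCommand(query, cn))
            {
                try
                {
                    cn.Open();
                    using (SqlDataReader dr = cmd.ExecuteReader())
                    {
                        while (dr.Read())
                            box.Items.Add(dr[column].ToString());
                    }
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
        }

        // Usage, once, when the form loads:
        // FillComboBox(SelectState,   "SELECT State FROM TblState",  "State");
        // FillComboBox(SelectCity,    "SELECT City FROM TblCity",    "City");
        // FillComboBox(SelectPinCode, "SELECT Pincode FROM TblCity", "Pincode");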

    Read the article

  • Pay online service

    - by Samuel
    Hello, I have a database where you can select articles etc.; users have an account, and it's all in MySQL and PHP (I guess you don't need that code). What I was wondering is how to write a script that allows users to pay online for the articles they selected. It doesn't need to be any code, just ideas / hints / tips / ... (that are doable in PHP or something similar). Thanks in advance!! -Samuel

    Read the article

  • "Create table if not exists" - how to check the schema, too?

    - by Joonas Pulakka
    Is there a (more or less) standard way to check not only whether a table named mytable exists, but also whether its schema is similar to what it should be? I'm experimenting with the H2 database, and CREATE TABLE IF NOT EXISTS mytable (....) statements apparently only check for the table's name. I would expect to get an exception if there's a table with the given name, but a different schema.
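
    A hedged sketch of one way to go beyond the name check (H2, like most engines, exposes INFORMATION_SCHEMA; comparing the result to the expected column list is left to the application, and note that H2 stores unquoted identifiers in upper case):

        SELECT COLUMN_NAME, DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = 'MYTABLE'
        ORDER BY ORDINAL_POSITION;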

    Read the article

  • SQLCODE -1390 connecting to DB2 64 bit client from 32 bit app

    - by Oliver Abraham
    Hi there, I've got a 32-bit application (written in C) that normally connects to a DB2 database. When I run it on a machine with a 64-bit DB2 client, I get SQLCODE -1390 from connect (Win7 64-bit, DB2 V9.7 client 64-bit). Connecting from the command line works (db2 connect to ...). With a 32-bit DB2 client on the same Win7 64-bit machine, the connect also works. Does anyone have an idea how to fix it? Best regards, Oliver

    Read the article

  • Oracle TNS names not showing when adding new connection to sqldeveloper

    - by Americus
    Hello, I'm trying to connect to an Oracle database with SQL Developer. I've installed the .NET Oracle drivers and placed the tnsnames.ora file at C:\Oracle\product\11.1.0\client_1\Network\Admin. I'm using the following format in tnsnames.ora:

    dev =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.XXX.XXX)(PORT = XXXX))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = idpdev2)
        )
      )

    In SQL Developer, when I try to create a new connection, no TNS names show up as options. Is there something I'm missing?

    Read the article

  • Many-to-many relations in RDBMS databases

    - by Industrial
    What is the best way of handling many-to-many relations in an RDBMS database like MySQL? I have tried using a pivot table to keep track of the relationships, but it leads to one of the following: normalization gets left behind, or columns that are empty or null. What approach have you taken in order to support many-to-many relationships?
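
    For reference, a minimal sketch of the usual junction-table approach (hypothetical product/category tables), which stays normalized and needs no nullable columns:

        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL
        );

        CREATE TABLE category (
            category_id INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL
        );

        CREATE TABLE product_category (
            product_id  INT NOT NULL,
            category_id INT NOT NULL,
            PRIMARY KEY (product_id, category_id),
            FOREIGN KEY (product_id)  REFERENCES product (product_id),
            FOREIGN KEY (category_id) REFERENCES category (category_id)
        );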

    Read the article

  • Star Schema vs Snowflake Schema performance

    - by Megawolt
    Hi... I'm beginning to develop a social sharing website, so I'm curious about database design schemas... In data mining the star schema is said to be the best one, but how about a social sharing website? By the nature of such websites there will be (I hope :)) many users at the same time... Which is better for performance under heavy use?
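
    To make the distinction concrete, a small sketch with hypothetical tables: in a star schema each dimension is one wide, denormalized table joined directly to the fact table, while a snowflake schema normalizes the dimension into further lookups (more joins, less redundancy):

        -- Star: one wide dimension
        CREATE TABLE dim_user (
            user_id   INT PRIMARY KEY,
            user_name VARCHAR(100),
            city      VARCHAR(100),
            country   VARCHAR(100)
        );

        -- Snowflake: the same dimension split into normalized lookups
        CREATE TABLE dim_country (country_id INT PRIMARY KEY, country VARCHAR(100));
        CREATE TABLE dim_city    (city_id INT PRIMARY KEY, city VARCHAR(100), country_id INT);
        CREATE TABLE dim_user_sf (user_id INT PRIMARY KEY, user_name VARCHAR(100), city_id INT);

        -- The fact table (one row per share) references the user dimension either way
        CREATE TABLE fact_share (
            share_id  INT PRIMARY KEY,
            user_id   INT NOT NULL,
            shared_at DATETIME NOT NULL
        );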

    Read the article

  • What’s the strategy to recover data during DAO unit test?

    - by Michael Lu
    When I test a DAO module in JUnit, an obvious problem is: how to restore the test data in the database? For instance, a record should be deleted in both test methods testA() and testB(), which means the precondition of both test methods is an existing record to delete. My strategy so far is to insert the record in the setUp() method to restore the data. Is there a better solution, or a practical approach you use in such cases? Thanks
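
    A hedged sketch of one common alternative (JUnit 4 style; the RecordDao and TestDatabase names are hypothetical): run each test inside a transaction and roll it back afterwards, so no manual clean-up or re-insertion is needed:

        import java.sql.Connection;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        public class RecordDaoTest {

            private Connection connection;
            private RecordDao dao;                            // hypothetical DAO under test

            @Before
            public void setUp() throws Exception {
                connection = TestDatabase.open();             // hypothetical helper returning a JDBC connection
                connection.setAutoCommit(false);              // start a transaction for this test
                TestDatabase.insertFixtureRecord(connection); // the record both tests will delete
                dao = new RecordDao(connection);
            }

            @After
            public void tearDown() throws Exception {
                connection.rollback();                        // undo everything the test changed
                connection.close();
            }

            @Test
            public void deleteRemovesExistingRecord() throws Exception {
                dao.delete(TestDatabase.FIXTURE_ID);
                // assertions against the DAO / database go here
            }
        }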

    Read the article

  • Using Amazon's EBS for MySQL hot backup

    - by flybywire
    What are your experiences using Amazon's EBS snapshot feature for MySQL hot backups? I have a database running a batch processing job in EC2. I back up with EBS snapshots. So far the backups look consistent. But I am afraid they "will stop being consistent as soon as I stop checking" (uncertainty principle). What are your experiences with backing up relational databases (and MySQL in particular) with EBS snapshots?

    Read the article
