
  • browse data in Android SQLite Database

    - by Justin C
    Is there a way for an Android user to browse the SQLite databases on his/her phone and view the data in them? I use the SoftTrace beta program a lot. It's great, but I can't find any way to download the data it tracks to a PC. Thanks


  • Detect block size for quota in Linux

    - by Chen Levy
    The limit placed on disk quota in Linux is counted in blocks. However, I found no reliable way to determine the block size: tutorials refer to a block size of 512 bytes, and sometimes of 1024 bytes. A post on LinuxForum.org left me confused about what a block size really means, so I tried to find its meaning in the context of quota. I found a "Determine the block size on hard disk filesystem for disk quota" tip on NixCraft that suggested the command:

        dumpe2fs /dev/sdXN | grep -i 'Block size'

    or:

        blockdev --getbsz /dev/sdXN

    But on my system those commands returned 4096, and when I checked the real quota block size on the same system, I got a block size of 1024 bytes. Is there a scriptable way to determine the quota block size on a device, short of creating a file of known size and checking its quota usage?


  • Source-controlled database data import strategies

    - by H. Abraham Chavez
    So I've gotten a project and sold the db team on source control for the db (weird, right?). Anyway, the db already exists, it is massive, and the application is very dependent on the data. The developers need up to three different flavors of the data to work against when writing SPROCs and so on. Obviously I could script out data inserts, but my question is: what tools or strategies do you use to build a db from source control and populate it with multiple large sets of data?
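
    One widely used strategy is to keep re-runnable (idempotent) seed scripts under source control, one set per data flavor. A minimal sketch, assuming MySQL syntax; the table, columns, and file name are all illustrative:

        -- seed_dev.sql: seed data kept under source control.
        -- ON DUPLICATE KEY UPDATE makes the script safe to re-run.
        INSERT INTO customer (id, name, region)
        VALUES (1, 'Test Customer A', 'NORTH'),
               (2, 'Test Customer B', 'SOUTH')
        ON DUPLICATE KEY UPDATE name = VALUES(name), region = VALUES(region);

    With one such script set per flavor (e.g. seed_dev.sql, seed_qa.sql, seed_perf.sql, hypothetical names), building an environment is: run the schema scripts, then run the chosen seed set.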


  • Data warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using a RDBMS. I have identified the key 'attributes' to be recorded as:

    - sex (true/false)
    - demographic classification (A, B, C etc.)
    - place of birth
    - date of birth
    - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data, and generally view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables.

    'Natural' (or obvious) dimensions, which have hierarchical attributes, are:

    - date dimension
    - geographical location

    However, I am struggling with how to model these fields:

    - sex (true/false)
    - demographic classification (A, B, C etc.)

    The reason I am struggling with them is that:

    - they have no obvious hierarchical attributes which would aid aggregation (AFAIA), which suggests they should be in a fact table;
    - they are mostly static or change very rarely, which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? Some examples of the type of analysis I would like to carry out on the data warehouse will hopefully clarify things. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

    - How do male and female weights compare across different demographic classifications?
    - Which demographic classification (male AND female) shows the greatest increase in weight this quarter?

    Can anyone clarify whether sex and demographic classification belong in the fact table, or whether (as I suspect) they are dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.
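
    For what it's worth, low-cardinality attributes like these are commonly modeled as small dimension tables of their own; a dimension does not need a hierarchy. A minimal star-schema sketch in generic SQL (all names are illustrative; this is a sketch of the idea, not a definitive design):

        -- Tiny dimensions work fine: the fact row just carries their keys.
        CREATE TABLE dim_date (
            date_id     INT PRIMARY KEY,   -- e.g. 20100315
            full_date   DATE NOT NULL,
            month       INT NOT NULL,
            quarter     INT NOT NULL,
            year        INT NOT NULL
        );

        CREATE TABLE dim_location (
            location_id INT PRIMARY KEY,
            city        VARCHAR(100),
            region      VARCHAR(100),
            country     VARCHAR(100)
        );

        CREATE TABLE dim_sex (
            sex_id INT PRIMARY KEY,        -- 1 = male, 2 = female
            name   VARCHAR(10) NOT NULL
        );

        CREATE TABLE dim_demographic (
            demographic_id INT PRIMARY KEY,
            code           VARCHAR(4) NOT NULL   -- 'A', 'B', 'C', ...
        );

        CREATE TABLE fact_weight (
            date_id        INT NOT NULL REFERENCES dim_date(date_id),
            location_id    INT NOT NULL REFERENCES dim_location(location_id),
            sex_id         INT NOT NULL REFERENCES dim_sex(sex_id),
            demographic_id INT NOT NULL REFERENCES dim_demographic(demographic_id),
            weight_kg      DECIMAL(6,2) NOT NULL  -- the measured fact
        );

        -- "How do male and female weights compare across demographic classes?"
        SELECT s.name, d.code, AVG(f.weight_kg)
        FROM fact_weight f
        JOIN dim_sex s         ON s.sex_id = f.sex_id
        JOIN dim_demographic d ON d.demographic_id = f.demographic_id
        GROUP BY s.name, d.code;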


  • Auto Reconnect of Database Connection

    - by wesley
    I have a DBCP connection pool in Tomcat. The problem is that when the connection is lost briefly, the application breaks, because DBCP won't try to reconnect later when a connection is available again. Can I get DBCP to reconnect automatically?


  • Database best practices

    - by user270797
    I have a table which stores comments. A comment can come either from another user or from another profile, which are separate entities in this app. My original thinking was that the table would have both user_id and profile_id fields, so that if a user submits a comment, it sets the user_id and leaves the profile_id blank. Is this right, wrong, or is there a better way?
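
    One common refinement of the two-column idea adds a constraint so that a comment always has exactly one author. A sketch in generic SQL with stub parent tables; names are illustrative (note that older MySQL versions parse but ignore CHECK constraints):

        CREATE TABLE users    (user_id    INT PRIMARY KEY);
        CREATE TABLE profiles (profile_id INT PRIMARY KEY);

        CREATE TABLE comments (
            comment_id INT PRIMARY KEY,
            user_id    INT REFERENCES users(user_id),       -- NULL if a profile commented
            profile_id INT REFERENCES profiles(profile_id), -- NULL if a user commented
            body       TEXT NOT NULL,
            -- exactly one of the two author columns must be set
            CHECK ((user_id IS NULL) <> (profile_id IS NULL))
        );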


  • MySQL database normalization question

    - by Chocho
    Here are my 3 tables:

    - table 1 stores user information; its data is unique
    - table 2 stores place categories such as Toronto, NY, London, etc., so it is also unique
    - table 3 has duplicate information: it stores all the places a user has been

    The 3 tables are linked or joined by these ids: table 1 has a "table1_id"; table 2 has a "table2_id" and "place_name"; table 3 has a "table3_id", "table1_id", and "place_name".

    I have an interface where an admin sees all users. Beside each user is an "edit" button. Clicking that edit button lets you edit a specific user in a form which has a multi-select drop-down box for "places". If an admin edits a user and adds 1 "places" for the user, I insert that information using php. If the admin then deselects that 1 "places", do I delete it or mark it as on and off? How about if the admin decides to select 2 "places" for the user: change the first "places" and add an additional "places"? Will my table just keep growing, and will I have redundant information? Thanks.
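
    A junction table keyed on both ids keeps this normalized: a deselect becomes a plain DELETE and a re-add is an INSERT, so no on/off flag is needed and no redundant rows accumulate. A sketch, assuming MySQL; the names are illustrative (your table1/table2 play the roles of users/places):

        CREATE TABLE users  (user_id  INT PRIMARY KEY);
        CREATE TABLE places (place_id INT PRIMARY KEY, place_name VARCHAR(100));

        CREATE TABLE user_place (
            user_id  INT NOT NULL REFERENCES users(user_id),
            place_id INT NOT NULL REFERENCES places(place_id),
            PRIMARY KEY (user_id, place_id)   -- duplicate rows are impossible
        );

        -- admin deselects a place for user 42:
        DELETE FROM user_place WHERE user_id = 42 AND place_id = 7;
        -- admin adds two places:
        INSERT INTO user_place (user_id, place_id) VALUES (42, 3), (42, 9);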


  • VS2008 adding SQL Server Database (SQL Server 2008 Mgmt Studio) not working

    - by Kahn
    I'm trying to practice using ASP.NET MVC at home, but I ran into an impossible problem. I cannot open a connection to SQL Server 2008; I get this error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. ..." I've googled numerous responses, none of them either working or addressing this issue. I'm running Vista 32-bit, my SQL Server 2008 Mgmt Studio is also 32-bit, and I have SP1 installed both on VS2008 Professional and on the SQL Server. I changed the machine.config connectionStrings from ./SQLExpress to my SQL Server 2008 name. Now if I connect manually through web.config, in an asp:datasource or code-behind, everything works fine. But for some reason trying to add a DB connection directly like this always gets the error. This is pretty fatal, since I can't rightly do much unless I can use LINQ to SQL with my MVC test project, and this is the only way I know how. It worked fine at school and work, but not at home. Installing SQL Server Express 2005, as some have suggested, is not an option. Obviously it HAS to work with SQL Server 2008. Thanks in advance.


  • Reverse engineer an ORM

    - by Oren Mazor
    Given a [mysql] database with a given schema, is it possible to generate an ORM interface for it? It doesn't matter if it's PHP, Python, or Perl. In other words, I have a database and I have to ask it a few questions. While I could just craft a bunch of SQL queries (okay, several dozen), this is way more interesting to think about. Is this even a valid question? I have no design background with ORMs, but I've used a few (Django's, Symfony's, etc.).
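
    Most generators work by reading the database's own catalog, so whatever ORM is chosen, the reverse-engineering step amounts to queries like this sketch against MySQL's information_schema (the schema name 'mydb' is illustrative):

        -- the column metadata an ORM generator reads to build classes
        SELECT table_name, column_name, data_type, is_nullable, column_key
        FROM information_schema.columns
        WHERE table_schema = 'mydb'
        ORDER BY table_name, ordinal_position;

        -- foreign keys, for mapping relations between the generated classes
        SELECT table_name, column_name, referenced_table_name, referenced_column_name
        FROM information_schema.key_column_usage
        WHERE table_schema = 'mydb' AND referenced_table_name IS NOT NULL;

    For what it's worth, Django ships an inspectdb command that performs exactly this kind of introspection on an existing schema.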


  • Failed to convert a wmv file to mp4 with ffmpeg

    - by Olaf Erlandsen
    I need help with this command:

        ffmpeg -y -i /input.wmv -vcodec libx264 -acodec libfaac -ac 2 -bufsize 20M -sameq -f mp4 /output.mp4

    Output:

        ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers
          built on Oct 9 2012 07:04:08 with gcc 4.4.6 (GCC) 20120305 (Red Hat 4.4.6-4)
        [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0
        Guessed Channel Layout for Input Stream #0.0 : stereo
        Input #0, asf, from '/input.wmv':
          Metadata:
            WMFSDKVersion : 11.0.5721.5275
            WMFSDKNeeded  : 0.0.0.0000
            IsVBR         : 0
          Duration: 00:01:35.10, start: 0.000000, bitrate: 496 kb/s
            Stream #0:0(spa): Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, s16, 64 kb/s
            Stream #0:1(spa): Video: wmv3 (Main) (WMV3 / 0x33564D57), yuv420p, 320x240, 425 kb/s, SAR 1:1 DAR 4:3, 29.97 tbr, 1k tbn, 1k tbc
        [libx264 @ 0x16c3000] VBV bufsize set but maxrate unspecified, ignored
        [libx264 @ 0x16c3000] using SAR=1/1
        [libx264 @ 0x16c3000] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x16c3000] profile High, level 1.3
        Output #0, mp4, to '/output.mp4':
            Stream #0:0(spa): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 30k tbn, 29.97 tbc
            Stream #0:1(spa): Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s
        Stream mapping:
          Stream #0:1 -> #0:0 (wmv3 -> libx264)
          Stream #0:0 -> #0:1 (wmav2 -> libfaac)
        Press [q] to stop, [?] for help
        [libfaac @ 0x16b3600] Que input is backward in time
        [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping
        frame=  144 fps=0.0 q=29.0 size=     207kB time=00:00:03.38 bitrate= 500.3kbits/s
        ... (per-frame progress lines trimmed) ...
        get_buffer() failed
        Error while decoding stream #0:0: Invalid argument
        frame= 2848 fps=143 q=-1.0 Lsize=    5787kB time=00:01:35.10 bitrate= 498.4kbits/s
        video:5099kB audio:581kB subtitle:0 global headers:0kB muxing overhead 1.884230%
        [libx264 @ 0x16c3000] frame I:12   Avg QP:22.64 size: 12092
        [libx264 @ 0x16c3000] frame P:1508 Avg QP:25.39 size:  2933
        [libx264 @ 0x16c3000] frame B:1328 Avg QP:30.62 size:   491
        [libx264 @ 0x16c3000] ... (detailed encoder statistics trimmed) ...
        [libx264 @ 0x16c3000] kb/s:439.47

    FFmpeg configuration:

        --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvpx --enable-libfaac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-gpl --enable-postproc --enable-nonfree

    Library versions:

        libavutil      51. 73.101 / 51. 73.101
        libavcodec     54. 59.100 / 54. 59.100
        libavformat    54. 29.104 / 54. 29.104
        libavdevice    54.  2.101 / 54.  2.101
        libavfilter     3. 17.100 /  3. 17.100
        libswscale      2.  1.101 /  2.  1.101
        libswresample   0. 15.100 /  0. 15.100
        libpostproc    52.  0.100 / 52.  0.100

    Problem #1:

        [libfaac @ 0x16b3600] Que input is backward in time
        [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping

    Problem #2:

        get_buffer() failed
        Error while decoding stream #0:0: Invalid argument


  • Getting started: set up a database for Node.js

    - by Emile Petrone
    I am new to node.js but am excited to try it out. I am using Express as a web framework and Jade as a template engine. Both were easy to get set up following this tutorial from Node Camp. However, the one problem I'm finding is that I can't find a simple tutorial for getting a DB set up. I am trying to build a basic chat application (storing sessions and messages). Does anyone know of a good tutorial? This other SO post talks about which DBs to use, but as this is very different from the Django/MySQL world I've been in, I want to make sure I understand what is going on. Thanks!


  • Database Design Composite Keys

    - by guazz
    I am going to use a contrived example: one headquarter has one-or-many contacts, and a contact can only belong to one headquarter.

        Table: Headquarter
            Column 0 = Id : Guid [PK]
            Column 1 = Name : nvarchar(100)
            Column 2 = IsAnotherAttribute : bool

        Table: ContactInformation
            Column 0 = Id : Guid [PK]
            Column 1 = HeadquarterId : Guid [FK]
            Column 2 = AddressLine1
            Column 3 = AddressLine2
            Column 4 = AddressLine3

    I would like some help setting up the primary keys and foreign keys here. How does the above look? Should I use a composite key for ContactInformation on [Column 0, Column 1]? Is it OK to use a surrogate key all of the time?
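
    Given the one-to-many rule stated above, a surrogate PK plus a plain FK is the conventional shape; a composite key on [Id, HeadquarterId] would add width without enforcing anything new. A sketch assuming SQL Server-style types, since the question uses Guid/nvarchar (names illustrative):

        CREATE TABLE Headquarter (
            Id   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
            Name NVARCHAR(100) NOT NULL,
            PRIMARY KEY (Id)
        );

        CREATE TABLE ContactInformation (
            Id            UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
            HeadquarterId UNIQUEIDENTIFIER NOT NULL
                REFERENCES Headquarter(Id),   -- each contact belongs to one HQ
            AddressLine1  NVARCHAR(100) NULL,
            AddressLine2  NVARCHAR(100) NULL,
            AddressLine3  NVARCHAR(100) NULL,
            PRIMARY KEY (Id)
        );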


  • Bus Timetable database design

    - by paddydub
    Hi, I'm trying to design a db to store the timetable for 300 different bus routes. Each route has a different number of stops and different times for Monday-Friday, Saturday and Sunday. I've represented the bus departure times for each route as follows. I'm not sure if I should have null values in the table; does this look ok?

        route  num  day     t1    t2     t3     t4     t5     t6     t7     t8     t9     t10
        117    1    Monday  9:00  9:30   10:50  12:00  14:00  18:00  19:00  null   null   null
        117    2    Monday  9:03  9:33   10:53  12:03  14:03  18:03  19:03  null   null   null
        117    3    Monday  9:06  9:36   10:56  12:06  14:06  18:06  19:06  null   null   null
        117    4    Monday  9:09  9:39   10:59  12:09  14:09  18:09  19:09  null   null   null
        ...
        117    20   Monday  9:39  10:09  11:39  12:39  14:39  18:39  19:39  null   null   null
        119    1    Monday  9:00  9:30   10:50  12:00  14:00  18:00  19:00  20:00  21:00  22:00
        119    2    Monday  9:03  9:33   10:53  12:03  14:03  18:03  19:03  20:03  21:03  22:03
        119    3    Monday  9:06  9:36   10:56  12:06  14:06  18:06  19:06  20:06  21:06  22:06
        119    4    Monday  9:09  9:39   10:59  12:09  14:09  18:09  19:09  20:09  21:09  22:09
        ...
        119    37   Monday  9:49  9:59   11:59  12:59  14:59  18:59  19:59  20:59  21:59  22:59
        139    1    Sunday  9:00  9:30   20:00  21:00  22:00  null   null   null   null   null
        139    2    Sunday  9:03  9:33   20:03  21:03  22:03  null   null   null   null   null
        139    3    Sunday  9:06  9:36   20:06  21:06  22:06  null   null   null   null   null
        139    4    Sunday  9:09  9:39   20:09  21:09  22:09  null   null   null   null   null
        ...
        139    20   Sunday  9:49  9:59   20:59  21:59  22:59  null   null   null   null   null
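
    For what it's worth, a design that avoids the null padding entirely stores one row per departure, so routes with 5 or 37 departures fit the same schema without t1..t10 columns. A sketch in generic SQL (names illustrative):

        CREATE TABLE departure (
            route_num   VARCHAR(10) NOT NULL,   -- e.g. '117'
            stop_num    INT NOT NULL,           -- stop sequence along the route
            day_type    VARCHAR(10) NOT NULL,   -- 'Monday', 'Saturday', 'Sunday', ...
            depart_time TIME NOT NULL,
            PRIMARY KEY (route_num, stop_num, day_type, depart_time)
        );

        -- route 117, stop 1, Monday: one row per departure, no NULL columns
        INSERT INTO departure VALUES ('117', 1, 'Monday', '09:00');
        INSERT INTO departure VALUES ('117', 1, 'Monday', '09:30');

        -- full Monday timetable for route 117
        SELECT stop_num, depart_time FROM departure
        WHERE route_num = '117' AND day_type = 'Monday'
        ORDER BY stop_num, depart_time;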


  • close fails on database connections (managed connection cleanup fails) in WebSphere 7 but not in WebSphere 6.1

    - by mete
    I have a simple method (used in a web application through servlets) that gets a connection from a JNDI name and issues a select statement (get connection, issue select, return result, close the connection, etc. in finally). Due to other methods in the application, the connection is set to autocommit=false. This method works normally in WebSphere 6.1 as well as in GlassFish and WebLogic. However, in WebSphere 7 I get a cleanup-failed error when I close the connection because, it says, the connection is still in a transaction. Because I was not updating anything, I did not commit or rollback the connection in this method (which may be wrong). If I add a commit before closing the connection, it works. My question is: why does it work in WebSphere 6.1 (and other containers) but not in WebSphere 7? What can be the cause of this difference?


  • Does a version control database storage engine exist?

    - by Zak
    I was just wondering if a storage engine type exists that allows you to do version control on row-level contents. For instance, if I have a simple table with ID, name, value, where ID is the PK, I could see that row 354 started as (354, "zak", "test")v1, then was updated to be (354, "zak", "this is version 2 of the value")v2, and could see a change history on the row with something like select history (value) where ID = 354. It's kind of an esoteric thing, but it would beat having to keep writing separate history tables and functions every time a change is made...
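
    No mainstream MySQL storage engine offers this out of the box; the usual substitute is exactly the history table the question wants to avoid, though a trigger at least automates the bookkeeping. A minimal sketch, assuming MySQL (names illustrative):

        CREATE TABLE thing (
            id    INT PRIMARY KEY,
            name  VARCHAR(50),
            value VARCHAR(200)
        );

        CREATE TABLE thing_history (
            hist_id    INT AUTO_INCREMENT PRIMARY KEY,  -- monotonic version order
            id         INT NOT NULL,
            name       VARCHAR(50),
            value      VARCHAR(200),
            changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );

        -- archive the old row image automatically on every update
        CREATE TRIGGER thing_versioning BEFORE UPDATE ON thing
        FOR EACH ROW
            INSERT INTO thing_history (id, name, value)
            VALUES (OLD.id, OLD.name, OLD.value);

        -- "select history (value) where ID = 354" then becomes:
        SELECT value, changed_at FROM thing_history
        WHERE id = 354 ORDER BY hist_id;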


  • Why would you want a case-sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lower- or uppercase then case-sensitive could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function to the column so that it matches the corresponding lower-case string literal, and that prevents index usage in every dbms. So I'm wondering why anyone would use such an option.
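
    The behaviour difference is easy to demonstrate. A sketch assuming MySQL, where case sensitivity is a property of the column's collation (utf8_bin compares raw bytes, so it is case-sensitive):

        CREATE TABLE names_ci (name VARCHAR(50) COLLATE utf8_general_ci);  -- case-insensitive
        CREATE TABLE names_cs (name VARCHAR(50) COLLATE utf8_bin);         -- case-sensitive

        INSERT INTO names_ci VALUES ('Zak');
        INSERT INTO names_cs VALUES ('Zak');

        SELECT COUNT(*) FROM names_ci WHERE name = 'zak';  -- 1: case is ignored
        SELECT COUNT(*) FROM names_cs WHERE name = 'zak';  -- 0: bytes must match
        -- the workaround the question describes, which defeats a plain index:
        SELECT COUNT(*) FROM names_cs WHERE LOWER(name) = 'zak';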


  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \server, and it's listening on port 1234. When \client tries to connect to \server\tables, I get Error 6420 (Discovery process failed). When \client tries to connect to \server:1234\tables, I get error 6097, bad IP address specified in the connection path. \server is pingable from \client, and I can telnet to \server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron.


  • MySQL partitioning

    - by Yang
    I just want to verify that database partitioning is implemented only at the database level: when we query a partitioned table, we still write our normal query, nothing special, and the optimization is performed automatically when the query is parsed. Is that correct? E.g. we have a table called 'address' with columns 'country_code' and 'city'. If I want to get all the addresses in New York, US, normally I would do something like this:

        select * from address where country_code = 'US' and city = 'New York'

    If the table is now partitioned by 'country_code', I know that the query will only be executed on the partition which contains country_code = 'US'. My question is: do I need to explicitly specify the partition to query in my sql statement, or do I still use the previous statement and the db server will optimize it automatically? Thanks in advance!
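
    That understanding is correct: the query is written exactly as before, and the optimizer prunes partitions from the WHERE clause. A sketch assuming MySQL 5.5+ LIST COLUMNS partitioning (schema illustrative):

        -- note: every unique key must include the partitioning column
        CREATE TABLE address (
            id           INT NOT NULL,
            country_code CHAR(2) NOT NULL,
            city         VARCHAR(100) NOT NULL,
            PRIMARY KEY (id, country_code)
        )
        PARTITION BY LIST COLUMNS (country_code) (
            PARTITION p_us    VALUES IN ('US'),
            PARTITION p_uk    VALUES IN ('GB'),
            PARTITION p_other VALUES IN ('FR', 'DE', 'JP')  -- inserts outside these lists fail
        );

        -- unchanged query; the server reads only partition p_us
        SELECT * FROM address WHERE country_code = 'US' AND city = 'New York';

        -- verify the pruning:
        EXPLAIN PARTITIONS SELECT * FROM address WHERE country_code = 'US';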


  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that when a MySQL database has a table with tens of millions of records, backing it up with

        mysqldump some_db > some_db.sql

    produces a very big INSERT INTO statement. (Is it one insert statement that handles all the records?) So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either. Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse such long lines when restoring the DB?
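
    This doesn't change how mysqldump writes its INSERT statements, but large restores are often sped up by switching off per-row checks for the duration of the load. A sketch of session settings commonly wrapped around a big import (MySQL, run from the mysql client):

        SET autocommit = 0;
        SET unique_checks = 0;
        SET foreign_key_checks = 0;

        -- load the dump produced by mysqldump
        SOURCE some_db.sql

        SET foreign_key_checks = 1;
        SET unique_checks = 1;
        COMMIT;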


  • Tag, comment, rating, etc. database design

    - by Efe Kaptan
    I want to implement modules such as comments, ratings, tags, etc. for my entities. What I thought was:

        comments_table   - comment_id, comment_text
        entity1          - entity1_id, entity1_text
        entity2          - entity2_id, entity2_text
        entity1_comments - entity1_id, comment_id
        entity2_comments - entity2_id, comment_id
        ...

    Is this approach correct?
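
    The per-entity join tables above are a sound approach and keep real foreign keys. For what it's worth, the common alternative collapses them into one polymorphic link table, at the cost of FK enforcement. A sketch in generic SQL (names illustrative):

        CREATE TABLE comments (
            comment_id   INT PRIMARY KEY,
            entity_type  VARCHAR(20) NOT NULL,  -- 'entity1', 'entity2', ...
            entity_id    INT NOT NULL,          -- id within that entity's table
            comment_text TEXT NOT NULL
        );

        -- comments on row 5 of entity1
        SELECT comment_text FROM comments
        WHERE entity_type = 'entity1' AND entity_id = 5;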


  • Facebook database design?

    - by Marin
    I have always wondered how Facebook designed the friend <-> user relation. I figure the user table is something like this:

        user_email PK
        user_id    PK
        password

    I figure the table with the user's data (sex, age, etc.) is connected via user email, I would assume. How does it connect all the friends to this user? Something like this?

        user_id
        friend_id_1
        friend_id_2
        friend_id_3
        friend_id_N

    Probably not, because the number of users is unknown and will expand.
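
    The fixed friend_id_1..N columns break down exactly because N is unbounded; the standard shape is a self-referencing junction table with one row per friendship edge. A sketch in generic SQL (names illustrative, not Facebook's actual schema):

        CREATE TABLE users (
            user_id    INT PRIMARY KEY,
            user_email VARCHAR(255) NOT NULL UNIQUE,
            password   VARCHAR(255) NOT NULL
        );

        CREATE TABLE friendship (
            user_id   INT NOT NULL REFERENCES users(user_id),
            friend_id INT NOT NULL REFERENCES users(user_id),
            PRIMARY KEY (user_id, friend_id)   -- one row per edge
        );

        -- all friends of user 42, however many there are
        SELECT friend_id FROM friendship WHERE user_id = 42;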


  • Database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products.

    - There are hundreds of types of products available {Name, Code}.
    - Thousands of sales-persons are employed to sell those products {Name, Code}.
    - They collect products from different depots {Name, Code}.
    - They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}.
    - Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose among those combinations to estimate the sales price.

    The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes), and if I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of sql queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried), on the basis of the above information. I don't need detailed design advice, just advice regarding the DailySales table. Is there any way to break this particular table up to achieve granularity?
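
    A minimal sketch of the DailySales table alone, under the assumptions above (generic SQL; the FK comments refer to the question's own tables, and all names/types are illustrative). Keeping one narrow row per sale line with a price snapshot means every report period is just a date-range rollup:

        CREATE TABLE DailySales (
            ID            BIGINT PRIMARY KEY,
            SalesDate     DATE NOT NULL,           -- index (or partition) on this
            ProductID     INT NOT NULL,            -- FK -> Product
            SalesPersonID INT NOT NULL,            -- FK -> SalesPerson
            OutletID      INT NOT NULL,            -- FK -> Outlet
            PriceID       INT NOT NULL,            -- FK -> ProductXYZPriceHistory
            Quantity      INT NOT NULL,
            UnitPrice     DECIMAL(10,2) NOT NULL,  -- snapshot of the chosen price
            Discount      DECIMAL(10,2) NOT NULL DEFAULT 0
        );

        -- quarterly sales by product; the same query shape serves
        -- daily/weekly/monthly/yearly reports, only the range changes
        SELECT ProductID, SUM(Quantity * UnitPrice - Discount) AS net_sales
        FROM DailySales
        WHERE SalesDate >= '2010-01-01' AND SalesDate < '2010-04-01'
        GROUP BY ProductID;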

