Search Results

Search found 70568 results on 2823 pages for 'ef database first'.

  • Storing n-grams in database in < n number of tables.

    - by kurige
    If I were writing a piece of software that attempted to predict what word a user was going to type next, using the two previous words the user had typed, I would create two tables. Like so:

        == 1-gram table ==
        Token | NextWord | Frequency
        ------+----------+-----------
        "I"   | "like"   | 15
        "I"   | "hate"   | 20

        == 2-gram table ==
        Token    | NextWord   | Frequency
        ---------+------------+-----------
        "I like" | "apples"   | 8
        "I like" | "tomatoes" | 12
        "I hate" | "tomatoes" | 20
        "I hate" | "apples"   | 2

    Following this example implementation, the user types "I" and the software, using the above database, predicts that the next word the user is going to type is "hate". If the user does type "hate", the software will then predict that the next word is "tomatoes". However, this implementation would require a table for each additional n-gram that I choose to take into account. If I decided to take the 5 or 6 preceding words into account when predicting the next word, I would need 5-6 tables, and an exponential increase in space per n-gram. What would be the best way to represent this in only one or two tables, with no upper limit on the number of n-grams I can support?
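
    For what it's worth, a minimal sketch of the single-table idea (MySQL-flavoured, names illustrative): store the whole context - however many preceding words - as one key column, so 2-grams, 5-grams and 6-grams all live in the same table.

        CREATE TABLE ngram (
            context   VARCHAR(255) NOT NULL,   -- the n-1 preceding words, space-separated
            next_word VARCHAR(64)  NOT NULL,
            frequency INT          NOT NULL DEFAULT 1,
            PRIMARY KEY (context, next_word)
        );

        -- Prediction: the most frequent continuation of the current context
        SELECT next_word
        FROM ngram
        WHERE context = 'I hate'
        ORDER BY frequency DESC
        LIMIT 1;

    One caveat: space still grows with every n-gram order you record; a single table removes the schema limit, not the storage cost.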

    Read the article

  • How to Script a backup for each database on an MSSQL Engine?

    - by Geo
    We need to back up 40 databases on an MS SQL Server engine. We back up each database with the following script:

        BACKUP DATABASE [dbname1] TO DISK = N'J:\SQLBACKUPS\dbname1.bak'
        WITH NOFORMAT, INIT, NAME = N'dbname1-Full Database Backup',
        SKIP, NOREWIND, NOUNLOAD, STATS = 10
        GO
        DECLARE @backupSetId AS INT
        SELECT @backupSetId = position FROM msdb..backupset
        WHERE database_name = N'dbname1'
          AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset
                               WHERE database_name = N'dbname1')
        IF @backupSetId IS NULL
        BEGIN
            RAISERROR(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
        END
        RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak'
        WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
        GO

    We would like to extend the script so that it takes each database in turn and substitutes its name into the script above. Basically, a script that will create and verify a backup of every database on an engine. I am looking for something like this:

        For each database in database-list
            sp_backup(database)   // this is the call to the script above
        End For

    Any ideas?
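
    For what it's worth, a minimal sketch of that loop in T-SQL itself - a cursor over sys.databases plus dynamic SQL. The backup path and the list of system databases to skip are assumptions to adjust; the RESTORE VERIFYONLY block can be appended to @sql the same way.

        DECLARE @name sysname, @sql nvarchar(max);
        DECLARE db_cursor CURSOR FOR
            SELECT name FROM sys.databases
            WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb');   -- skip system DBs
        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- rebuild the single-database script with the current name substituted in
            SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name)
                     + N' TO DISK = N''J:\SQLBACKUPS\' + @name + N'.bak'''
                     + N' WITH NOFORMAT, INIT, NAME = N''' + @name
                     + N'-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD, STATS = 10';
            EXEC sp_executesql @sql;
            FETCH NEXT FROM db_cursor INTO @name;
        END
        CLOSE db_cursor;
        DEALLOCATE db_cursor;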

    Read the article

  • hosting site on one web server and accessing a database on another

    - by tombull89
    I'm fairly sure this is the right place for this question, but if not please move it to the right site. I have a number of sites on a 1and1 package (yeah, I know...) and I also have a subdomain that belongs to a college tutor. I have been playing around with PHP and MySQL databases on the subdomain site and would like to know if it is possible to run a database-driven site (e.g. a blog) on one of my 1and1 sites. I could upgrade my package, but if I'm only going to gain database functionality I'm not sure I want to do it. Also, as 1and1 don't use cPanel, I'm wondering how the databases would be managed... but I'll worry about that when (if) the time comes. Cheers!

    Read the article

  • Writing datatable to database file, one record at a time

    - by Kevin
    I want to write a C# program that will read a row from a datatable (named loadDT) and update a database file (named Forecasts.mdb). My datatable looks like this (each day's value is a number representing kilowatts usage forecast):

        Hour Day1 Day2 Day3 Day4 Day5 Day6 Day7
        1    519  520  524  498  501  476  451

    My database file looks like this:

        Day Hour KWForecast
        1   1    519
        2   1    520
        3   1    524

    ... and so on. Basically, I want to be able to read one row from the datatable, and then extrapolate that out to my database file, one record at a time. Each row from the datatable will result in seven records written to the database file. Any ideas on how to go about this? I can connect to my database, the connection string works, and I can update and delete from the database. I just can't wrap my head around how to do this one record at a time.
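
    For what it's worth, a set-based sketch as an alternative to the record-at-a-time loop, assuming the datatable has first been written to a staging table named LoadStaging with the same columns (Day and Hour are bracketed because Access/Jet also uses them as function names):

        INSERT INTO Forecasts ([Day], [Hour], KWForecast)
        SELECT 1, [Hour], Day1 FROM LoadStaging
        UNION ALL SELECT 2, [Hour], Day2 FROM LoadStaging
        UNION ALL SELECT 3, [Hour], Day3 FROM LoadStaging
        UNION ALL SELECT 4, [Hour], Day4 FROM LoadStaging
        UNION ALL SELECT 5, [Hour], Day5 FROM LoadStaging
        UNION ALL SELECT 6, [Hour], Day6 FROM LoadStaging
        UNION ALL SELECT 7, [Hour], Day7 FROM LoadStaging;

    Each wide row becomes seven narrow rows in one statement.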

    Read the article

  • First Come, First Served process scheduling

    - by user253530
    I have 4 processes:

        p1 - burst 5,  priority 3
        p2 - burst 8,  priority 2
        p3 - burst 12, priority 2
        p4 - burst 6,  priority 1

    Assuming that all processes arrive at the scheduler at the same time, what are the average response time and average turnaround time? For FCFS, is it OK to have them in the order p1, p2, p3, p4 in the execution queue?
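
    A worked pass, assuming all four processes arrive at t = 0, that plain FCFS ignores the priorities, and that "response time" here means time from arrival to first CPU (which, under a non-preemptive scheduler, equals waiting time):

        C_{p1} = 5,\quad C_{p2} = 13,\quad C_{p3} = 25,\quad C_{p4} = 31

        \text{avg. turnaround} = \frac{5 + 13 + 25 + 31}{4} = \frac{74}{4} = 18.5
        \qquad
        \text{avg. response} = \frac{0 + 5 + 13 + 25}{4} = \frac{43}{4} = 10.75

    So yes - with equal arrival times, FCFS may legitimately run them in submission order p1, p2, p3, p4.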

    Read the article

  • Connect to Oracle database on a different server from PHP

    - by macha
    Hello, I have a database engine sitting on a remote server, while my webserver is local. I have worked pretty much only with client-server architectures where the server hosts both the webserver and the database engine. Now I need to connect to an Oracle database that is situated on a different server. Can anybody give me any suggestions? I believe odbc_connect might not work. Do I use the OCI8 drivers? How would I connect to my database server? Also, I would have a very high number of database calls going back and forth, so is it better to go with persistent connections, or should I still use individual database calls?

    Read the article

  • How to structure a database with questions and answers?

    - by Andreas Johannessen
    Hi, I am going to make a simple application that uses a database, and I could use some guidance on how to structure it. I am making a quiz program. What I have in mind is:

        One table with questions
        One table with the difficulty of the question
        One table with the category of the question

    However, what do I do with the answers? Have them as separate columns in the question table? That sounds like bad practice. (Also, where do I store the correct answer?) Each question will have 5 answers, where only one of them is correct.
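
    A minimal sketch of the usual shape (generic SQL, names illustrative; adjust types to your RDBMS): answers go in their own table, five rows per question, with a flag marking the correct one.

        CREATE TABLE difficulty (difficulty_id INT PRIMARY KEY, name VARCHAR(50) NOT NULL);
        CREATE TABLE category   (category_id   INT PRIMARY KEY, name VARCHAR(50) NOT NULL);

        CREATE TABLE question (
            question_id   INT PRIMARY KEY,
            question_text VARCHAR(500) NOT NULL,
            difficulty_id INT NOT NULL REFERENCES difficulty (difficulty_id),
            category_id   INT NOT NULL REFERENCES category (category_id)
        );

        CREATE TABLE answer (
            answer_id   INT PRIMARY KEY,
            question_id INT NOT NULL REFERENCES question (question_id),
            answer_text VARCHAR(500) NOT NULL,
            is_correct  SMALLINT NOT NULL DEFAULT 0   -- exactly one row per question set to 1
        );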

    Read the article

  • Get Rails to save a record to the database in a non-UTC time

    - by Shaun
    Is there a way to get Rails to save records to the database without it automagically converting the timestamp into UTC before saving? The problem is that I have a few models that pull data from a legacy database that saves everything in Mountain Time and occasionally I have to have my Rails app write to that database. The problem is that every time it does, it converts the time I give it from Mountain Time to UTC, which is 6-7 hours ahead (depending on DST)! Needless to say, this really messes with reporting on that database. If I could get around doing this, I would. Unfortunately, I can't do anything about the fact that this other database uses a different timezone, nor can I really get away from the need for this app to save to that database occasionally. If I could just get Rails to stop trying to help me, it'd be great.

    Read the article

  • SQL Server 2008 database Backup

    - by TaraWalsh
    Hi guys, today I was trying to restore a database with a backup I had made previously on another computer; however, I kept getting the following error message:

        The media loaded on "filepath" is formatted to support 2 media families, but 1 media families are expected according to the device specification.

    I didn't look into it at the time; I just figured it was a bad backup and I'd redo it when I got home. So now I'm trying to do another backup, and I'm getting the above error message for that too. I did back up to a different location at one point, but that location no longer exists. Is there a way I can get past this error and just do a fresh backup of the database? Any pointers would be much appreciated :)
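
    For context, that error usually means the original backup was striped across two files (two "media families"), so any restore has to name both, and a fresh backup to a file that once belonged to a striped set has to rewrite the media header. A hedged sketch, with file and database names assumed:

        -- Restoring a backup that was striped across two files:
        RESTORE DATABASE MyDb
        FROM DISK = N'D:\Backups\MyDb_1.bak', DISK = N'D:\Backups\MyDb_2.bak';

        -- Taking a fresh, single-file backup, overwriting the old media header:
        BACKUP DATABASE MyDb
        TO DISK = N'D:\Backups\MyDb.bak'
        WITH FORMAT, INIT;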

    Read the article

  • Visual Studio 2010 Database Project does not understand Schema Names anymore?

    - by Xenan
    I just tried to upgrade a Visual Studio 2008 Database project to VS2010, and frankly it's quite a mess: hundreds of warnings, all unresolved references. It seems to boil down to Visual Studio no longer understanding schema names (aka ownership). For example, the standard dbo schema: [$(MyDataBase)].dbo.MyTable is fine, but: [$(MyDataBase)].myschema.MyTable gives an unresolved reference. It did work in VS2008. The abbreviation for dbo, the double dot: [$(MyDataBase)]..MyTable doesn't work anymore either. In the project properties window I restored the references to the correct servers (which were lost after the conversion), but that didn't help. This seems pretty basic, but I don't have a clue how to solve it. Any help is appreciated.

    Read the article

  • Searching for online database software/cms

    - by ButterdBread
    I am searching for software or a CMS that manages and displays large online databases, as some kind of frontend to MySQL or any other database. It should be accessible through the browser and be as secure as possible (offering login). The data I'd like to store would be personal information such as name, address and birthday - I'd also need to be able to add custom fields. Also, forms and the possibility to download the data in an Excel(?) table would be great. phpMyAdmin is not an option; it should be similar to a CRM but more closely adapted to managing database tables, searching for entries and filtering data. It should be possible to have many user accounts with different rights, with each of them being able to access certain parts of the data and enter their own data. Is there something out there that gets close to what I imagine? I appreciate any help!

    Read the article

  • Joomla! 2.5 use login data from another database

    - by user1756746
    I have a hard question. I'd like the Joomla login not to use its own database for users/passwords; instead, I want it to use my own users database, with my table fields, my passwords, etc. I don't know where to start. I thought I could either point the login's database request at my DB or create a little script to automatically add my users to the Joomla database. I tried looking at components/com_users/views/login/tmpl/default_login.php, but it seems there is nothing there. Can someone help me figure out what to change? Maybe the simplest thing is to import my users into the Joomla user database - is there any plugin or something else that you know of? P.S. I use the Clarion theme built on the Gantry framework, Joomla! 2.5.6 Stable, PHP 5.2.17.

    Read the article

  • Which database should I use for best performance

    - by _simon_
    Hello, I am working in Visual Studio 2005, .NET 2.0. I need to write an application which listens on a COM port and saves incoming data to a database. Main feature: save incoming data (a series of 13-digit numbers); if a number already exists, mark it as a double. For example, there could be these records in the database:

        0000000000001 OK
        0000000000002 OK
        0000000000002 Double
        0000000000003 OK
        0000000000004 OK

    I could use an SQL database, but I don't know if it is fast enough... The database should be able to store up to 10,000,000 records and write up to 100 records per minute (so it needs to check 100 times per minute whether a record already exists). Which database should I use? Maybe the whole database would need to be in RAM. Where could I learn more about this? Thanks
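
    For scale: with an index on the 13-digit code, 100 existence checks per minute against 10 million rows is a light load for any mainstream disk-based SQL database, so no in-RAM database should be needed. A minimal sketch in SQL Server syntax, names illustrative:

        CREATE TABLE Scan (
            Code      CHAR(13)   NOT NULL,
            Status    VARCHAR(6) NOT NULL,               -- 'OK' or 'Double'
            ScannedAt DATETIME   NOT NULL DEFAULT GETDATE()
        );
        CREATE INDEX IX_Scan_Code ON Scan (Code);        -- turns the check into an index seek

        -- For each incoming number, insert and mark repeats in one statement:
        DECLARE @code CHAR(13);
        SET @code = '0000000000002';
        INSERT INTO Scan (Code, Status)
        SELECT @code,
               CASE WHEN EXISTS (SELECT 1 FROM Scan WHERE Code = @code)
                    THEN 'Double' ELSE 'OK' END;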

    Read the article

  • Copying a database into a new database including structure and data

    - by Jason
    In phpMyAdmin, under Operations, I can "Copy database to:" and select:

        Structure and data
        CREATE DATABASE before copying
        Add AUTO_INCREMENT value

    I need to be able to do that without using phpMyAdmin. I know how to create the database and user. I have a source database that's a shell I can work from, so all I really need is the "how to copy all the table structure and data" part. (I know, the harder part.) system() and exec() are not options for me, which rules out mysqldump. (I think.) How can I loop through each table and recreate its structure and data? Is it just looping through the results of SHOW TABLES and then, for each table, looping through DESCRIBE tablename? Then, is there an easy way to get the data copied?
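
    A sketch of the per-table copy in plain MySQL (run one statement per query from PHP); SHOW TABLES supplies the loop list, and CREATE TABLE ... LIKE carries over the structure and keys, so DESCRIBE isn't needed. The AUTO_INCREMENT counter re-establishes itself once the rows are inserted.

        CREATE DATABASE newdb;

        -- then, for each name returned by SHOW TABLES FROM olddb:
        CREATE TABLE newdb.mytable LIKE olddb.mytable;          -- structure, keys, column attributes
        INSERT INTO newdb.mytable SELECT * FROM olddb.mytable;  -- data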

    Read the article

  • One database or many?

    - by dsims
    I am developing a website that will manage data for multiple entities. No data is shared between entities, but several may be owned by the same customer, and a customer may want to manage all their entities from a single "dashboard". So should I have one database for everything, or keep the data separated into individual databases? Is there a best practice? What are the positives/negatives of having:

        a database for the entire site (entity has a "customerID", data has an "entityID")
        a database for each customer (data has an "entityID")
        a database for each entity (the relation of database to customer is outside the database)

    Multiple databases seem like they would perform better (fewer rows and joins) but may eventually become a maintenance nightmare.
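
    For the single-database option, the isolation is carried by keys rather than by separate schemas; a sketch with illustrative names, where one customer's dashboard spans all of their entities:

        SELECT d.*
        FROM entity_data AS d
        JOIN entity      AS e ON e.entity_id = d.entity_id
        WHERE e.customer_id = 42;   -- one customer, all of their entities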

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides a large range of interesting solutions to ensure high availability of the database. Data Guard, RAC, or even both configurations combined (as recommended by Oracle for a Maximum Availability Architecture - MAA) are the most frequently found and used solutions. But when it comes to protecting a system with an active/passive architecture with failover capabilities, people often think of other, expensive third-party cluster systems. Oracle Clusterware technology, which comes along at no extra cost with Oracle Database or Oracle Unbreakable Linux, is - in most people's minds - linked to Oracle RAC and is therefore seldom used to implement failover solutions. Oracle Clusterware 11gR2 (a part of Oracle 11gR2 Grid Infrastructure) provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and then to protect, almost any kind of application (from the simple xclock to the most complex application server). Quoting Oracle: "Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a cluster."

    In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional on my system. Unfortunately, there can always be a typo or a mistake. This blog entry does not replace a course on the Clusterware framework. I just hope it will let you see how powerful it is and that it will give you the wish to go further with it...

    Note: This entry has been revised (rev. 2) following comments from Philip Newlan.

    Prerequisites

    2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with an Enterprise Kernel.
    Shared storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    Oracle 11gR2 Database (11.2.0.1)
    Oracle 11gR2 Grid Infrastructure (11.2.0.1)

    Step 1 - Install the software

    Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA).
    Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the voting disk and OCR. Use SCAN.
    Install the Oracle Database standalone binaries on both nodes.
    Use asmca to check/mount the disk groups on both nodes.
    Use dbca to create and configure a database on the primary node. Let's name it DB11G.
    Copy the pfile and password file to the second node. Create the adump directory on the second node.

    Step 2 - Set up the resource to be protected

    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 smon). A database resource has been created for that in the cluster registry. We can observe this with the command crsctl status resource, which shows an ora.db11g.db entry.
    Let's save the definition of this resource for future use:

        mkdir -p /crs/11.2.0/HA_scripts
        chown oracle:oinstall /crs/11.2.0/HA_scripts
        crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt

    Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions; we don't need it!

        srvctl stop database -d DB11G
        srvctl remove database -d DB11G

    Instead of it, we need to create a new resource of a more general type: cluster_resource. Here are the steps to achieve this. Create an action script, /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh:

        #!/bin/bash
        export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        export ORACLE_SID=DB11G
        case $1 in
        'start')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          startup
        EOF
          RET=0
          ;;
        'stop')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown immediate
        EOF
          RET=0
          ;;
        'clean')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown abort
          ##for i in `ps -ef | grep -i $ORACLE_SID | awk '{print $2}'`; do kill -9 $i; done
        EOF
          RET=0
          ;;
        'check')
          ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
          if [ $ok = 0 ]; then
            RET=1
          else
            RET=0
          fi
          ;;
        '*')
          RET=0
          ;;
        esac
        if [ $RET -eq 0 ]; then
          exit 0
        else
          exit 1
        fi

    This script must provide, at least, methods to start, stop, clean and check the database. It is self-explanatory and contains nothing special. Just be aware that it must be runnable (+x), that it runs as the oracle user (because of the ACL property - see later) and that it needs to know about the environment. Also make sure it exists on every node of the cluster. Moreover, as of 11.2, the clean method is mandatory. It must provide the "last gasp clean up", for example a shutdown abort or a kill -9 of all the remaining processes.

        chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts

    Create a new resource file, based on the information we got from the previous myResource.txt; name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource.
        NAME=ora.db11g.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=600
        CLUSTER_DATABASE=false
        DB_UNIQUE_NAME=DB11G
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.DB11G
        SPFILE=+DTA/DB11G/spfileDB11G.ora
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=DB11G
        USR_ORA_DOMAIN=haroland
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=DB11G
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.1.0

    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt.

    Notice the NAME property, which should not have the ora. prefix.
    Notice the TYPE property, which is not ora.database.type but cluster_resource.
    Notice the definition of ACTION_SCRIPT.
    Notice HOSTING_MEMBERS, which enumerates the members of the cluster (as returned by the olsnodes command).

        NAME=DB11G.db
        TYPE=cluster_resource
        DESCRIPTION=Oracle Database resource
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        PLACEMENT=restricted
        ACTIVE_PLACEMENT=0
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=10
        DEGREE=1
        ENABLED=1
        HOSTING_MEMBERS=oelcluster01 oelcluster02
        LOGGING_LEVEL=1
        RESTART_ATTEMPTS=1
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h

    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).

        crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 - Start the resource

        crsctl start resource DB11G.db

    This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster.

    Step 4 - Test this

    We will test the setup using 2 methods.

        crsctl relocate resource DB11G.db

    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node but, this time, we can use a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.
    Conclusion

    Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy:

    Create an executable action script on every node of the cluster.
    Create a resource file.
    Create/register the resource with OCR using the resource file.
    Start the resource.

    This solution is a very interesting alternative to licensable third-party solutions.

    References

    Clusterware 11gR2 documentation
    Oracle Clusterware Resource Reference
    Clusterware for Unbreakable Linux
    Using Oracle Clusterware to Protect A Single Instance Oracle Database 11gR1 (to have an idea of complexity)
    Oracle Clusterware on OTN

    Gilles Haro
    Technical Expert - Core Technology, Oracle Consulting

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.), and each property type shares a majority of the attributes, but a few are unique to each property type. My question: the shared attributes are in excess of 250 fields, and this seems like too many fields to have in a single table. My thought is that I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to go. Just as a note, I'm using SQL Server 2012, .NET 4.5 with Entity Framework 5, C#, and data is passed to the asp.net web application via a WCF service. Thanks in advance.
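
    For reference, the EAV shape mentioned above collapses the 250+ columns into one narrow table (names illustrative); it is flexible, but, as noted, every search and reassembly then has to pivot rows back into listings:

        CREATE TABLE ListingAttribute (
            ListingId INT           NOT NULL,
            Attribute VARCHAR(64)   NOT NULL,   -- e.g. 'Price', 'SqFt'
            Value     NVARCHAR(400) NULL,       -- everything stored as text
            PRIMARY KEY (ListingId, Attribute)
        );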

    Read the article

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables; however, these are not enforced with foreign key constraints. Consider the following simplified example of the typical kinds of tables I am dealing with. There are clear relations between the User table and the City and Province tables. However, the key issues are the inconsistent data types and naming conventions:

        User:
        UserRowId [int] PK
        Name [varchar(50)]
        CityId [smallint]
        ProvinceRowId [bigint]

        City:
        CityRowId [bigint] PK
        CityDescription [varchar(100)]

        Province:
        ProvinceId [int] PK
        ProvinceDesc [varchar(50)]

    I am considering a rewrite of the application (in ASP.net MVC) that uses this data source, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase, and this is one of the stumbling blocks I have come across. What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (The reason I ask is that most explanations and tutorials work with relatively cleanly designed existing databases, or newly created ones, compared to mine. I am thus having a very hard time trying to find a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. IBatis.net) be more suitable, since we could easily modify them to work and reuse the investment already made? I have found this question on SO, which indicates to me that an ORM can be used - however, I get the impression that this is a question of mapping? Note: at the moment, the object model is not clearly defined, as it was non-existent. The existing system pretty much did almost everything in SQL, or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob and have zero experience with ORMs and MVC - so this is an awesome learning curve I am on.

    Read the article

  • Is it a good idea to use an integer column for storing US ZIP codes in a database?

    - by Yadyn
    At first glance, it would appear I have two basic choices for storing ZIP codes in a database table:

        Text (probably most common), i.e. char(5), or varchar(9) to support the +4 extension
        Numeric, i.e. a 32-bit integer

    Both would satisfy the requirements of the data, if we assume that there are no international concerns. In the past we've generally just gone the text route, but I was wondering if anyone does the opposite? From a brief comparison, it looks like the integer method has two clear advantages:

    It is, by its nature, automatically limited to numerics only (whereas without validation the text style could store letters and such, which are, to my knowledge, never valid in a ZIP code). This doesn't mean we could/would/should forgo validating user input as normal, though!
    It takes less space, being 4 bytes (which should be plenty even for 9-digit ZIP codes) instead of 5 or 9 bytes.

    Also, it seems like it wouldn't hurt display output much. It is trivial to slap a ToString() on a numeric value, use simple string manipulation to insert a hyphen or space or whatever for the +4 extension, and use string formatting to restore leading zeroes. Is there anything that would discourage using int as a datatype for US-only ZIP codes?
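
    On the display side, restoring the leading zeroes is indeed a one-liner; a SQL Server sketch, assuming a table Address with an int column Zip:

        SELECT RIGHT('00000' + CAST(Zip AS VARCHAR(5)), 5) AS Zip5
        FROM Address;   -- 501 comes back as '00501'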

    Read the article

  • Why do we need Audit Columns in Database Tables?

    - by Software Enthusiastic
    Hi, I have seen many database designs that have the following audit columns on all of their tables:

        Created By
        Created DateTime
        Updated By
        Updated DateTime

    From one perspective, I see tables from the following point of view:

    Entity tables: good candidates for audit columns.
    Reference tables: audit columns may or may not be required; in some cases last-update information is not required at all, because the record is never going to be modified.
    Reference data tables (like country names, entity states, etc.): audit columns may not be required, because this information is created only at system installation time and is never going to change.

    I have seen many designers blindly put all audit columns on all tables. Is this practice good, and if yes, what could be the reason? I just want to know, because to me it seems illogical. It is difficult for me to figure out why they design their DBs this way. I am not saying they are wrong or right, I just want to know WHY. You can also suggest an alternative auditing pattern or solution, if one is available... Thanks and regards
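
    For reference, the pattern under discussion costs little to declare up front; a sketch in SQL Server syntax, names illustrative, with the created-columns filled by defaults and the updated-columns left to the application or a trigger:

        CREATE TABLE Customer (
            CustomerId      INT IDENTITY PRIMARY KEY,
            Name            NVARCHAR(100) NOT NULL,
            CreatedBy       SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
            CreatedDateTime DATETIME NOT NULL DEFAULT GETDATE(),
            UpdatedBy       SYSNAME  NULL,   -- set on UPDATE by the app or a trigger
            UpdatedDateTime DATETIME NULL
        );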

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and its associated data. It is okay for updates and inserts to run slowly, so long as queries run quickly and are as up to date as possible. The application continues to grow in size and scope and is already starting to run slower than I would like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road. Here is the current state of the application:

        Web server/data processing: Pipeline Pilot on Windows Server + IIS
        Database: Oracle 10g, ~1TB of data, ~180 tables, with several billion-plus-row tables
        Network storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • How to filter a mysql database with user input on a website and then spit the filtered table back to the website?

    - by Luke
    I've been researching this on Google for literally 3 weeks, racking my brain and still not quite finding anything. I can't believe this is so elusive. (I'm a complete beginner, so if my terminology sounds stupid, that's why.) I have a database in MySQL/phpMyAdmin on my web host. I'm trying to create a front end that will allow a user to specify criteria for querying the database in a way that doesn't require them to know SQL - basically just combo boxes and checkboxes on a form. The form would then submit a query to the database and show the filtered tables. This is how the SQL looks in Microsoft Access:

        PARAMETERS TEXTINPUT1 Text ( 255 ), NUMBERINPUT1 IEEEDouble;
        // pops up a list of parameters for the user to input
        SELECT DISTINCT Table1.Column1, Table1.Column2, Table1.Column3
        // selects only the unique rows in these three columns
        FROM Table1
        // the table where this query is happening
        WHERE (((Table1.Column1) Like [TEXTINPUT1]) AND ((Table1.Column2)<=[NUMBERINPUT1]) AND ((Table1.Column3)>=[NUMBERINPUT1]));
        // the criteria for the filter: it compares the user-input parameters to the data
        // in the rows and only selects matches according to the LIKE,
        // less-than-or-equal and greater-than-or-equal conditions

    What I don't get: WHAT IN THE WORLD AM I SUPPOSED TO USE (that isn't totally hard)!? I've tried Google Fusion Tables - doesn't filter right with numerical data or empty cells in rows, and can't relate tables. I've tried DataTables.net - can't filter right with numerical data, and can't use SQL without a bunch of in-depth knowledge (not even sure it can with that). I've looked into using jQuery with Google Spreadsheets - doesn't work at all either. I have no idea how I'm supposed to build a front end with my database. Every place that looks promising (like Zoho Creator) is asking for money, and is far too simplified to be able to do the LIKE criteria or SELECT DISTINCT stuff.
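
    Translated to the MySQL side, the Access query above becomes an ordinary parameterized statement; a sketch where the ? placeholders are bound from the form fields through PHP's prepared-statement API (PDO or mysqli):

        SELECT DISTINCT Column1, Column2, Column3
        FROM Table1
        WHERE Column1 LIKE ?     -- TEXTINPUT1
          AND Column2 <= ?       -- NUMBERINPUT1
          AND Column3 >= ?;      -- NUMBERINPUT1

    The PHP page then loops over the result set and prints it back as an HTML table.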

    Read the article

  • How to handle Real Time Data from a database perspective?

    - by balexandre
    I have an idea in mind, but the database side still confuses me. Imagine that I want to show real-time data; using one of the latest browser technologies (WebSockets - even usable with older browsers), it is very easy to show all observers (user browsers) what everyone is doing. Remy Sharp has an example of how simple this is. But I still don't get the database part. How would I feed it? Let's imagine (using Remy's Tron game) that I want to save the path of each connected user in a database, and that a client who wants to watch with a 5-second delay sees exactly that - not just the single state from 5 seconds ago, but its continuation in time... How can I query a DB like that?

        SELECT x, y FROM run WHERE rundate >= DATEADD(second, -5, GETDATE());

    is not the recommended path, right? And pulling this every x amount of time... that is not a real data feed, correct? If someone can help me understand the database point of view, I would greatly appreciate it.
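
    For what it's worth, a sketch of the 5-second-delayed read in SQL Server syntax, assuming a single rundate timestamp column and a remembered high-water mark; each poll returns only the rows that are new since the last poll and have aged past the delay:

        DECLARE @last_seen DATETIME;
        SET @last_seen = '2010-01-01';   -- newest rundate already delivered to this client

        SELECT x, y, rundate
        FROM run
        WHERE rundate >  @last_seen
          AND rundate <= DATEADD(second, -5, GETDATE())   -- hide the most recent 5 seconds
        ORDER BY rundate;
        -- the caller stores MAX(rundate) from this result as the next @last_seen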

    Read the article
