Search Results

Search found 34465 results on 1379 pages for 'database permissions'.

Page 52/1379 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • Copying a database into a new database including structure and data

    - by Jason
    In phpMyAdmin, under Operations, I can use "Copy database to:" and select "Structure and data", "CREATE DATABASE before copying" and "Add AUTO_INCREMENT value". I need to be able to do that without using phpMyAdmin. I know how to create the database and user, and I have a source database that's a shell I can work from, so all I really need is the "copy all the table structure and data" part (I know, the harder part). system() and exec() are not options for me, which rules out mysqldump (I think). How can I loop through each table and recreate its structure and data? Is it just looping through the results of SHOW TABLES and then, for each table, looping through DESCRIBE tablename? And is there an easy way to get the data copied?
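    A minimal SQL-only sketch of the loop described above: run these two statements for each name returned by SHOW TABLES (the database names sourcedb and targetdb and the table name are placeholders):

        -- structure, including indexes and the AUTO_INCREMENT column definition
        CREATE TABLE targetdb.tablename LIKE sourcedb.tablename;
        -- data
        INSERT INTO targetdb.tablename SELECT * FROM sourcedb.tablename;

    This sidesteps DESCRIBE entirely, since CREATE TABLE ... LIKE copies the column definitions and keys for you.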

    Read the article

  • One database or many?

    - by dsims
    I am developing a website that will manage data for multiple entities. No data is shared between entities, but they may be owned by the same customer, and a customer may want to manage all their entities from a single "dashboard". So should I have one database for everything, or keep the data separated into individual databases? Is there a best practice? What are the positives/negatives of having: one database for the entire site (each entity has a "customerID", each data row has an "entityID"); a database for each customer (data has an "entityID"); or a database for each entity (the relation of database to customer lives outside the database)? Multiple databases seem like they would have better performance (fewer rows and joins) but may eventually become a maintenance nightmare.
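    For reference, the single-database option boils down to something like the following sketch (table and column names are illustrative only):

        CREATE TABLE customer (customerID INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE entity (entityID INT PRIMARY KEY, customerID INT NOT NULL,
            FOREIGN KEY (customerID) REFERENCES customer(customerID));
        CREATE TABLE entity_data (dataID INT PRIMARY KEY, entityID INT NOT NULL, payload VARCHAR(255),
            FOREIGN KEY (entityID) REFERENCES entity(entityID));

    Every query then filters on entityID (or joins up to customerID for the dashboard), which is what keeps the single-database route simple to maintain at the cost of larger tables.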

    Read the article

  • How can I add Active Directory security groups to a SharePoint site to control permissions, rather than individual user accounts

    - by user574811
    SharePoint does integrate Active Directory accounts, of course, but how about security groups? I have a few sites where I'm fairly confident access is going through existing Active Directory (AD) security groups (i.e. only an AD security group has been granted permissions through 'People and Groups'). In another situation, where I created the AD group and granted it permissions to a site, the customers were not able to access it immediately. Eventually I had to fast-track it and add the individuals to People and Groups to keep the project going, but I'm hoping not to have to maintain it that way. Are there any specific requirements for the security group in AD: universal, global, or domain local? Is there any time delay between modifying group members in AD and having that take effect in SharePoint?

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides a large range of interesting solutions to ensure high availability of the database. Data Guard, RAC, or even both together (as recommended by Oracle for a Maximum Availability Architecture - MAA) are the most frequently used solutions. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, people often think of other, expensive third-party cluster systems. Oracle Clusterware technology, which comes at no extra cost with Oracle Database or Oracle Unbreakable Linux, is - in most people's minds - linked to Oracle RAC and is therefore seldom used to implement failover solutions. Oracle Clusterware 11gR2 (part of Oracle 11gR2 Grid Infrastructure) provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and then to protect, almost any kind of application (from the simple xclock to the most complex application server). Quoting Oracle: “Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a cluster.”
    In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (I am using it as the shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional on my system; unfortunately, there can always be a typo or a mistake. This blog entry does not replace a course on the Clusterware framework. I just hope it will let you see how powerful it is and that it will give you the wish to go further with it...
    Note: This entry has been revised (rev. 2) following comments from Philip Newlan.
    Prerequisites
    - 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with an Enterprise Kernel.
    - Shared storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    - Oracle 11gR2 Database (11.2.0.1)
    - Oracle 11gR2 Grid Infrastructure (11.2.0.1)
    Step 1 - Install the software
    - Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA).
    - Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN.
    - Install the Oracle Database standalone binaries on both nodes.
    - Use asmca to check/mount the disk groups on the 2 nodes.
    - Use dbca to create and configure a database on the primary node. Let's name it DB11G. Copy the pfile and password file to the second node, and create the adump directory on the second node.
    Step 2 - Set up the resource to be protected
    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 of smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command:
        crsctl status resource
    which shows an ora.db11g.db entry.
    Let's save the definition of this resource for future use:
        mkdir -p /crs/11.2.0/HA_scripts
        chown oracle:oinstall /crs/11.2.0/HA_scripts
        crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt
    Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions; we don't need it!
        srvctl stop database -d DB11G
        srvctl remove database -d DB11G
    Instead, we need to create a new resource of a more general type: cluster_resource. Here are the steps to achieve this. Create an action script, /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh:
        #!/bin/bash
        export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        export ORACLE_SID=DB11G
        case $1 in
        'start')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          startup
        EOF
          RET=0
          ;;
        'stop')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown immediate
        EOF
          RET=0
          ;;
        'clean')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown abort
          ##for i in `ps -ef | grep -i $ORACLE_SID | awk '{print $2}' ` ;do kill -9 $i; done
        EOF
          RET=0
          ;;
        'check')
          ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
          if [ $ok = 0 ]; then
            RET=1
          else
            RET=0
          fi
          ;;
        '*')
          RET=0
          ;;
        esac
        if [ $RET -eq 0 ]; then
          exit 0
        else
          exit 1
        fi
    This script must provide, at least, methods to start, stop, clean and check the database. It is self-explanatory and contains nothing special. Just be aware that it must be executable (+x), that it runs as the oracle user (because of the ACL property - see later) and that it needs to know about the environment. Also make sure it exists on every node of the cluster. Moreover, as of 11.2, the clean method is mandatory. It must provide the “last gasp clean up”, for example a shutdown abort or a kill -9 of all the remaining processes.
        chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts
    Create a new resource file, based on the information we got from the previous myResource.txt, and name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource.
        NAME=ora.db11g.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=600
        CLUSTER_DATABASE=false
        DB_UNIQUE_NAME=DB11G
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.DB11G
        SPFILE=+DTA/DB11G/spfileDB11G.ora
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=DB11G
        USR_ORA_DOMAIN=haroland
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=DB11G
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.1.0
    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt.
    - Notice the NAME property, which should not have the ora. prefix.
    - Notice the TYPE property, which is not ora.database.type but cluster_resource.
    - Notice the definition of ACTION_SCRIPT.
    - Notice HOSTING_MEMBERS, which enumerates the members of the cluster (as returned by the olsnodes command).
        NAME=DB11G.db
        TYPE=cluster_resource
        DESCRIPTION=Oracle Database resource
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        PLACEMENT=restricted
        ACTIVE_PLACEMENT=0
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=10
        DEGREE=1
        ENABLED=1
        HOSTING_MEMBERS=oelcluster01 oelcluster02
        LOGGING_LEVEL=1
        RESTART_ATTEMPTS=1
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).
        crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt
    Step 3 - Start the resource
        crsctl start resource DB11G.db
    This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster.
    Step 4 - Test it
    We will test the setup using 2 methods.
        crsctl relocate resource DB11G.db
    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but this time we can use a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.
    Conclusion
    Once the software is installed and the standalone database created (a rather common and usual task), the steps to reach the objective are quite easy:
    - Create an executable action script on every node of the cluster.
    - Create a resource file.
    - Create/register the resource with the OCR using the resource file.
    - Start the resource.
    This solution is a very interesting alternative to licensable third-party solutions.
    References
    - Clusterware 11gR2 documentation
    - Oracle Clusterware Resource Reference
    - Clusterware for Unbreakable Linux
    - Using Oracle Clusterware to Protect A Single Instance Oracle Database 11gR1 (to get an idea of the complexity)
    - Oracle Clusterware on OTN
    Gilles Haro, Technical Expert - Core Technology, Oracle Consulting

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to custom; however, the web app still can't start/stop the app pools. I get the following error:
        System.UnauthorizedAccessException:
        Filename: redirection.config
        Error: Cannot read configuration file due to insufficient permissions
           at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
           at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
           at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection()
           at Microsoft.Web.Administration.ServerManager.get_ApplicationPools()
    Discussion here suggests setting the application pool to Local System or an administrator. This does work, but I don't want to do that for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local Administrators group, but initially this didn't work, and giving the user read/write/modify permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how I can apply them? An answer to my problem (but a different question) is here, but to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions" - does anyone know how to do this, if indeed this is the permission I require?
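    Not an answer, but a quick way to see which accounts can currently read the file named in the error (icacls ships with Windows Server 2008 and later):
        icacls %windir%\System32\inetsrv\config\redirection.config
    Comparing that listing against the custom app pool identity at least confirms whether the permission change actually landed on the file the exception complains about.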

    Read the article

  • Why is it necessary to chmod o+r parent directory to fix 403 access forbidden error with Nginx and P

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance, so RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace. The setup:
    - Hudson runs as the hudson user; Nginx runs as the www-data user; hudson and www-data are both members of the www group.
    - The root of my nginx conf points to RAILS_ROOT/public as per normal.
    - RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data).
    - RAILS_ROOT and everything in it is owned by the www group, and the group has r/w/x permissions.
    As it stood, Nginx threw 403 Permission Denied when requesting any URL, and error.log contained entries like: "public/index.html" is forbidden (13: Permission denied). These did not fix or change the error (each with a stop/start of Nginx):
        chmod 777 -R RAILS_ROOT
        chgrp www -R /var/lib/hudson
    I also tried running Nginx as root, and Passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy - in this case, chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the directory's owner group, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions?
        $ nginx -V
        nginx version: nginx/0.7.61
        built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8)
        configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=9.10
        DISTRIB_CODENAME=karmic
        DISTRIB_DESCRIPTION="Ubuntu 9.10"
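    A quick way to see which directory in the chain is actually blocking the worker user (paths taken from the question; adjust JOBNAME):
        # ownership and mode of every directory leading down to the site root
        for d in /var /var/lib /var/lib/hudson /var/lib/hudson/jobs /var/lib/hudson/jobs/JOBNAME/workspace/public; do ls -ld "$d"; done
        # try to read a file exactly as the nginx worker does
        sudo -u www-data cat /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html > /dev/null && echo readable
    When checking the output, the execute (search) bit on each directory matters as much as the read bit, since traversal is what a 403 like this usually trips over.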

    Read the article

  • apache2: Could not open configuration file /etc/apache2/apache2.conf: Permission denied

    - by AntonChanning
    I recently upgraded Ubuntu to the latest LTS edition on my work laptop, which I use as a LAMP development platform. The upgrade was from 12.04 to 14.04. Now I'm having trouble getting Apache up and running again. Here is the output from an attempt:
        antonc@antonc-laptop:/etc/apache2$ sudo service apache2 restart
         * Restarting web server apache2
         * The apache2 configtest failed.
        Output of config test was:
        apache2: Could not open configuration file /etc/apache2/apache2.conf: Permission denied
        Action 'configtest' failed.
        The Apache error log may have more information.
    Here is a listing of permissions and ownership in /etc/apache2, showing that apache2.conf is currently owned by root with permissions 644. I changed this temporarily to 777, but it made no difference, so I changed it back to 644.
        antonc@antonc-laptop:/etc/apache2$ ls -l
        total 80
        -rw-r--r-- 1 root root 7115 Jan  7  2014 apache2.conf
        ...
    What do I need to do to get Apache running again? Is the problem really with apache2.conf or with some other setting? Should the conf file be owned by a user other than root?
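    Two quick checks that sometimes explain a "Permission denied" on a file whose ls -l output looks fine (getfacl comes from the acl package, so it may need installing):
        ls -ld /etc /etc/apache2                 # the parent directories need the execute (search) bit
        getfacl /etc/apache2/apache2.conf        # extended ACLs are not visible in a plain ls -l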

    Read the article

  • SQL SERVER – Difference Between GRANT and WITH GRANT

    - by pinaldave
    This was a very interesting question asked of me recently during my session at TechMela Nepal: what is the difference between GRANT and WITH GRANT when giving permissions to a user? Let us first see the syntax for both.
    GRANT:
        USE master;
        GRANT VIEW ANY DATABASE TO username;
        GO
    WITH GRANT:
        USE master;
        GRANT VIEW ANY DATABASE TO username WITH GRANT OPTION;
        GO
    The difference between the two options is very simple. With plain GRANT, username cannot grant the same permission to other users. With the WITH GRANT option, username is able to give the permission it has received to other users. This is a very basic definition of the subject. I would like to request my readers to come up with a working script to prove this scenario. You can submit your script to me by email (pinal ‘at’ sqlauthority.com) or in the comment field. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Permissions
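    A rough T-SQL sketch of the kind of proof script the post asks for (login names and password are placeholders; run as a sysadmin on a test instance):
        USE master;
        CREATE LOGIN UserA WITH PASSWORD = 'Str0ng_Pa55word!';
        CREATE LOGIN UserB WITH PASSWORD = 'Str0ng_Pa55word!';
        GRANT VIEW ANY DATABASE TO UserA;               -- plain GRANT
        EXECUTE AS LOGIN = 'UserA';
        GRANT VIEW ANY DATABASE TO UserB;               -- fails: UserA cannot pass the permission on
        REVERT;
        GRANT VIEW ANY DATABASE TO UserA WITH GRANT OPTION;
        EXECUTE AS LOGIN = 'UserA';
        GRANT VIEW ANY DATABASE TO UserB;               -- succeeds this time
        REVERT;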

    Read the article

  • sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0

    - by 7UR7L3
    Whenever I try to do anything at all that requires my password it returns this:
        u7ur7l3@ubuntu:~$ sudo
        sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
        sudo: fatal error, unable to load plugins
        u7ur7l3@ubuntu:~$
    So I can't install anything from the Software Center / package manager or run any commands in the terminal that require my password. I can log in, but that's pretty much it. I accidentally changed the permissions of some files, then changed some more trying to fix it :/ Now I'm completely lost as to what to do. This is what happened when I tried to get sudo working again using pkexec:
        u7ur7l3@ubuntu:~$ pkexec chown root /usr/lib/sudo/sudoers.so
        Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ExecFailed: Failed to execute program /usr/lib/dbus-1.0/dbus-daemon-launch-helper: Success
        u7ur7l3@ubuntu:~$ sudo ls
        sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
        sudo: fatal error, unable to load plugins
    To change permissions I was using Root Actions as a Dolphin service/plugin, so history doesn't show me the permission changes. I just realized that sound doesn't work at all anymore; when I go into Phonon, my default settings and playback devices aren't even there. Also, I don't have the option to shut down - I can only log out or leave.
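    For reference, these are the ownership and modes the affected files normally have on a stock Ubuntu install. Restoring them needs a root shell (e.g. recovery mode), since both sudo and pkexec are broken here - and the values are worth double-checking against a healthy machine first:
        chown root:root /usr/lib/sudo/sudoers.so /usr/bin/sudo
        chmod 644 /usr/lib/sudo/sudoers.so
        chmod 4755 /usr/bin/sudo                                   # setuid root
        chown root:messagebus /usr/lib/dbus-1.0/dbus-daemon-launch-helper
        chmod 4754 /usr/lib/dbus-1.0/dbus-daemon-launch-helper     # the helper the pkexec error complains about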

    Read the article

  • System boots in console + login loop

    - by Imagicien
    I messed up my system while trying to fix permission problems for setting up a local LAMP server. I tried this solution: "How to avoid using sudo when working in /var/www?" without success. Then I saw this solution: "Permissions issue: how can Apache access files in my Home directory?" I understood that I had to change the permissions of my root directory (among others), so I executed:
        sudo chmod 710 /
    I also changed the group on / to www-data, and did the same operations on /home. I'm pretty sure one of those commands broke something, because they are the last commands I executed, and after that my system showed strange/broken behaviour: Firefox stopped showing pages; some icons got replaced by a red X icon (meaning "icon not found", I guess); and applications refused to launch (no reaction). After rebooting I got a strange graphical message about not being able to mount something, asking me if I wanted to wait, and mentioning /tmp (I forgot the exact message since I was in shock). My system now boots to a console, and when I log in, it flashes insignificant stuff* before re-asking me to log in. My important data is on Ubuntu One. I'd prefer not having to reinstall from scratch. Is there a way to regain access to my system? Thanks a lot for your help. *Looks like a terminal header with the name of the OS and other info. Doesn't seem important.
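    For comparison, these are the out-of-the-box ownership and modes of the two directories that were changed. Restoring them needs a root shell (recovery mode or a live CD), and it's worth confirming the values on another Ubuntu machine first:
        chown root:root / /home
        chmod 755 / /home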

    Read the article

  • Set default owner/user

    - by Daniel Hollands
    I'm a web developer, and have set up an old machine in the office as an Ubuntu Server for the purpose of testing websites. I've set up LAMP and created a /var/www folder, from which all my local sites are served. The issue is one of user permissions: any files that I copy into that folder (from my Windows machine via the network) automatically take on me (daniel) as their owner. The problem is that I want www-data to become the owner. I did some research and saw that it should be possible to use setuid (and setgid) to automatically make www-data the owner of all files put into /var/www, but so far I've not had any luck making it work. Can someone help please? Thank you. UPDATE: Would this do what I want it to do? "Default file permissions for php user www-data" UPDATE 2: I've kind of fixed my issue by changing my Samba settings. Using Webmin, I was able to go in and change the default settings (as seen here: http://imageshack.us/photo/my-images/521/captureon.png/)
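    A minimal sketch of the setgid approach mentioned above. One caveat worth flagging: the setgid bit on a directory controls the group of new files, not their owner, so this puts new uploads into the www-data group rather than making www-data the owner (often enough for the web server to read them):
        sudo chgrp -R www-data /var/www
        sudo chmod -R g+rwX /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;   # new files/dirs created inside inherit the www-data group
    Forcing a different owner on files arriving over the network is what Samba's force user / force group share options do, which is essentially the route taken in UPDATE 2.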

    Read the article

  • I deleted all files and folders (including hidden) from /home/username/ now in big trouble

    - by jeffery_the_wind
    I am logged into a remote Ubuntu server, and I accidentally erased the entire /home/username/ directory for the current user. The only thing left is a hidden directory called .gvfs. I don't need any of the Documents/Music/etc. Now it is not letting me cd into the /var/www/ directory, which has permissions 666 and is owned by the current user. I am afraid to disconnect from my SSH session because I don't know if I will be able to get back on. Have I permanently created a problem? Is there a way I can put the most important files back into the /home/username/ directory? Thanks! ** EDIT ** Thanks everyone for the help. I figured out that the problem with cd into /var/www/ was actually the permissions on the /var/www/ directory itself: it was set to 666, I changed it to 755 and everything was good again. It doesn't look like anything systemic was ruined by deleting the contents of the user folder.
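    If the goal is just to put the standard dotfiles back, a sketch like this (username is a placeholder) repopulates the home directory from the system skeleton; the 666-vs-755 point in the EDIT comes down to directories needing the execute (search) bit before cd can enter them:
        sudo cp -a /etc/skel/. /home/username/        # default .bashrc, .profile, etc.
        sudo chown -R username:username /home/username
        chmod 755 /var/www                            # rwxr-xr-x: the x bit is what cd needs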

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.), and each property type shares a majority of the attributes, but there are a few that are unique to that property type. My concern is that the shared attributes run to more than 250 fields, which seems like too many to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012 and .NET 4.5 with Entity Framework 5 and C#, and data is passed to the ASP.NET web application via a WCF service. Thanks in advance.
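    For concreteness, the EAV alternative being weighed would look roughly like this in SQL Server (names are illustrative only):
        CREATE TABLE ListingAttribute (
            ListingId INT           NOT NULL,
            Attribute VARCHAR(100)  NOT NULL,
            Value     NVARCHAR(MAX) NULL,
            CONSTRAINT PK_ListingAttribute PRIMARY KEY (ListingId, Attribute)
        );
    Every searchable field then becomes a self-join or pivot against this one table, which is exactly why ad-hoc queries over 250 possible attributes get painful.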

    Read the article

  • Using an ORM with a database that has no defined relationships?

    - by Ahmad
    Consider a database (MSSQL 2005) that consists of 100+ tables which have primary keys defined to a certain degree. There are 'relationships' between tables; however, these are not enforced with foreign key constraints. Consider the following simplified example of the typical kinds of tables I am dealing with. There are clear relations between the User table and the City and Province tables. However, the key issues are the inconsistent data types in the tables and the naming conventions:
        User:     UserRowId [int] PK, Name [varchar(50)], CityId [smallint], ProvinceRowId [bigint]
        City:     CityRowId [bigint] PK, CityDescription [varchar(100)]
        Province: ProvinceId [int] PK, ProvinceDesc [varchar(50)]
    I am considering a rewrite of the application (in ASP.NET MVC) that uses this data source, similar in design to the MVC Storefront. However, I am going through a proof-of-concept phase and this is one of the stumbling blocks I have come across. What are my options in terms of ORM choice that can be easily used, and why? Should I even be considering an ORM? (The reason I ask is that most explanations and tutorials work with relatively cleanly designed existing databases, or newly created ones, compared to mine. I am thus having a very hard time trying to find a way forward with this problem.) There is a huge amount of existing SQL queries; would a data mapper (e.g. IBatis.net) be more suitable, since we could easily modify them to work and reuse the investment already made? I have found this question on SO which indicates to me that an ORM can be used - however, I get the impression that this is a question of mapping? Note: at the moment, the object model is not clearly defined, as it was non-existent. The existing system pretty much did almost everything in SQL or consisted of overly complicated and numerous queries to complete functionality. I am pretty much a noob and have zero experience with ORMs and MVC - so this is an awesome learning curve I am on.

    Read the article

  • Is it a good idea to use an integer column for storing US ZIP codes in a database?

    - by Yadyn
    At first glance, it would appear I have two basic choices for storing ZIP codes in a database table: text (probably most common), i.e. char(5) or varchar(9) to support the +4 extension, or numeric, i.e. a 32-bit integer. Both would satisfy the requirements of the data, if we assume there are no international concerns. In the past we've generally just gone the text route, but I was wondering if anyone does the opposite. Just from a brief comparison, it looks like the integer method has two clear advantages: it is, by its nature, automatically limited to numerics only (whereas without validation the text style could store letters and such, which are not, to my knowledge, ever valid in a ZIP code) - this doesn't mean we could/would/should forgo validating user input as normal, though! And it takes less space, being 4 bytes (which should be plenty even for 9-digit ZIP codes) instead of 5 or 9 bytes. Also, it seems like it wouldn't hurt display output much. It is trivial to slap a ToString() on a numeric value, use simple string manipulation to insert a hyphen or space or whatever for the +4 extension, and use string formatting to restore leading zeroes. Is there anything that would discourage using int as a datatype for US-only ZIP codes?
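    The leading-zero restoration mentioned above can also live in the database; a sketch that should work in both MySQL and SQL Server 2012 or later (table and column names are made up):
        SELECT RIGHT(CONCAT('00000', zip), 5) AS zip_text FROM address;
    The padding is what keeps a ZIP code that starts with 0 from coming back as a four-digit number.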

    Read the article

  • Why do we need Audit Columns in Database Tables?

    - by Software Enthusiastic
    Hi, I have seen many database designs with the following audit columns on all the tables: Created By, Created DateTime, Updated By, Updated DateTime. From one perspective, I see tables in the following way:
    - Entity tables: good candidates for audit columns.
    - Reference tables: audit columns may or may not be required. In some cases, last-update information is not required at all because the record is never going to be modified.
    - Reference data tables (like country names, entity states, etc.): audit columns may not be required because this information is created only at system installation time and is never going to change.
    I have seen many designers blindly put all the audit columns on all tables. Is this practice good, and if yes, what could be the reason? I just want to know, because to me it seems illogical, and it is difficult for me to figure out why they design their DBs this way. I am not saying they are wrong or right; I just want to know the WHY. You can also suggest an alternative auditing pattern or solution if one is available... Thanks and regards

    Read the article

  • Umbraco directory permissions | umbPermissions Script

    - by Vizioz Limited
    It has bugged me since I first used Umbraco that if I was doing a manual installation I had to set the directory permissions. I just downloaded a backup of one of my clients' Umbraco sites and I was setting up a copy locally, and of course I had to set the directory permissions, so I thought there must be a better way! I did a bit of Googling and had a look on the Umbraco forum but I could not find a script to perform this task, then I came across SetACL on SourceForge and I set about writing my own little script. Save the following script as umbpermissions.bat in the same directory as SetACL:
        echo off
        REM Script to setup the Security Permissions for an Umbraco site
        REM This script will give your machine Network Service full rights to the appropriate directories
        REM **** Pre-requisites ****
        REM You will need to download - http://setacl.sourceforge.net/
        REM **** Usage ****
        REM You need to pass in the path for the root of your Umbraco directory
        REM E.g. umbPermissions.bat C:\inetpub\umbracoroot
        @echo umbPermissions.bat - Script to set Umbraco File and Directory Permissions
        @echo Published by Chris Houston - 29th May 2009
        @echo http://blog.vizioz.com
        SetACL.exe -on "%1\web.config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\bin" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\css" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\data" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\masterpages" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\scripts" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\umbraco" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\umbraco_client" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\usercontrols" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\xslt" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
    Feel free to comment if I missed anything!

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and its associated data. It is okay for updates and inserts to run slow, so long as queries run quickly and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road. Here is the current state of the application:
    - Web server / data processing: Pipeline Pilot on Windows Server + IIS
    - Database: Oracle 10g, ~1TB of data, ~180 tables, with several billion-plus-row tables
    - Network storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • How to filter a mysql database with user input on a website and then spit the filtered table back to the website? [migrated]

    - by Luke
    I've been researching this on Google for literally 3 weeks, racking my brain and still not quite finding anything. I can't believe this is so elusive. (I'm a complete beginner, so if my terminology sounds stupid then that's why.) I have a database in MySQL/phpMyAdmin on my web host. I'm trying to create a front end that will allow a user to specify criteria for querying the database in a way that means they don't have to know SQL - basically just combo boxes and checkboxes on a form. Then have this form 'submit' a query to the database and show the filtered tables. This is how the SQL looks in Microsoft Access:
        PARAMETERS TEXTINPUT1 Text ( 255 ), NUMBERINPUT1 IEEEDouble;
        // pops up a list of parameters for the user to input
        SELECT DISTINCT Table1.Column1, Table1.Column2, Table1.Column3, *
        // selects only the unique rows in these three columns
        FROM Table1
        // the table where this query is happening
        WHERE (((Table1.Column1) Like [TEXTINPUT1]) AND ((Table1.Column2)<=[NUMBERINPUT1]) AND ((Table1.Column3)>=[NUMBERINPUT1]));
        // the criteria for the filter: the user's input parameters are compared to the data in the rows, and only matches for the LIKE, <= and >= conditions are selected
    What I don't get: WHAT IN THE WORLD AM I SUPPOSED TO USE (that isn't totally hard)!? I've tried Google Fusion Tables - they don't filter right with numerical data or empty cells in rows, and can't relate tables. I've tried DataTables.net - it can't filter right with numerical data and can't use SQL without a bunch of in-depth knowledge, and I'm not even sure it can if you have that. I've looked into using jQuery with Google spreadsheets - that doesn't work at all either. I have no idea how I'm supposed to build a front end for my database. Every place that looks promising (like Zoho Creator) is asking for money, and is far too simplified to be able to do the LIKE criteria or SELECT DISTINCT stuff.
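    The same filter translates to MySQL almost unchanged. Written with placeholders, it is ready to be run as a prepared statement by whatever server-side language the host supports, with the three values bound from the form fields (table and column names carried over from the Access example):
        SELECT DISTINCT Column1, Column2, Column3
        FROM Table1
        WHERE Column1 LIKE ? AND Column2 <= ? AND Column3 >= ?;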

    Read the article

  • How to handle Real Time Data from a database perspective?

    - by balexandre
    I have an idea in mind, but the database part still confuses me. Imagine that I want to show real-time data: using one of the latest browser technologies (WebSockets, with fallbacks even for older browsers), it is very easy to show all observers (user browsers) what everyone is doing. Remy Sharp has an example of how simple this is. But I still don't get the database part. Let's imagine (using Remy's Tron game) that I want to save the path of each connected user in a database, and that a client who wants to watch with a 5-second delay sees not only the 5 seconds up to that moment but also the continuation in time... how can I query a DB like that?
        SELECT x, y FROM run WHERE time >= DATEADD(second, -5, rundate);
    is not the recommended path, right? And polling this every x seconds... that is not a real data feed, correct? If someone can help me understand the database point of view, I would greatly appreciate it.

    Read the article

  • What is the disadvantage of using abstract class as a database connectivity in zend framework 2 instead of service locator

    - by arslaan ejaz
    Suppose I use the database by creating an adapter with drivers, initialising it in an abstract class, extending that abstract class in each model that needs it, and then running simple query statements. Like this:
        namespace My-Model\Model\DB;

        abstract class MysqliDB {
            protected $adapter;

            public function __construct(){
                $this->adapter = new \Zend\Db\Adapter\Adapter(array(
                    'driver'   => 'Mysqli',
                    'database' => 'my-database',
                    'username' => 'root',
                    'password' => ''
                ));
            }
        }
    And then I use the abstract database class like this in my models:
        class States extends DB\MysqliDB {
            public function __construct(){
                parent::__construct();
            }

            protected $states = array();

            public function select_all_states(){
                $data = $this->adapter->query('select * from states');
                foreach ($data->execute() as $row){
                    $this->states[] = $row;
                }
                return $this->states;
            }
        }
    I am new to Zend Framework; before this I had experience working in Yii and CodeIgniter. I like the object-oriented style in Zend, so I want to use it like this, and I don't want to go through the service locator with something like this in my module:
        public function getServiceConfig(){
            return array(
                'factories' => array(
                    'addserver-mysqli' => new Model\MyAdapterFactory('addserver-mysqli'),
                    'loginDB' => function ($sm){
                        $adapter = $sm->get('addserver-mysqli');
                        return new LoginDB($adapter);
                    }
                )
            );
        }
    Am I OK with this approach?

    Read the article

  • Android SQLite database locale, locking, and version

    - by gdoten
    In some books and online I see these method calls being made after a database is created:
        db.setLocale(Locale.getDefault());
        db.setLockingEnabled(true);
        db.setVersion(DB_VERSION);
    Why is this done? As far as I can tell, after creating a new database, the system adds a table named android_metadata with one field named locale, and that table has one row with the locale field set to "en_US". Now I assume the column has that value because I am using a U.S. phone, and if I were using a phone from a different region then the locale field would be set appropriately. Can anyone confirm this? I'm guessing that the setLocale method would only be useful in the case that you install a pre-built database onto a phone and then want to change the locale to match the phone's locale. Sound right? The documentation for setLockingEnabled says it defaults to true, so there's no need to make that call, right? Lastly, what's with the call to setVersion? I can't find a table that contains this information, so I've been assuming that the database file itself stores the version number somewhere. So when I create a database, which requires you to have already specified the version number in the call to the SQLiteOpenHelper constructor, there's no point in calling setVersion. Again, perhaps this method exists for the case of installing a pre-built database to a device and then wishing to change the database's version (though I can't think of when doing this would make sense). Thanks for any insight!
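    One detail worth adding to the guess above: SQLite keeps the version number set by setVersion() in the database file header rather than in any table, which is why it never shows up in a table listing. It can be inspected from any SQLite shell with:
        PRAGMA user_version;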

    Read the article

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user uses the application they add more information to the XML files. It's basically using the XML files as a database. Since the XML files have gotten quite large over the life of the program, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to be able to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection - though eventually it would be nice if it could compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution, so that switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked and that there are already some good tutorials on the subject, but I just can't find them.

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working with a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into the repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that, when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up-to-date, but is also run locally from every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask: does this also mean that I should NOT check in the mdf or ldf files that are created from Visual Studio? Thanks for any help and additional insight. Always appreciated.

    Read the article

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where the database hangs the server when:
    - a backup is performed (it hangs on a specific table);
    - selecting * or count(1) from that specific table; or
    - viewing data that is related to the table (FKs, etc.).
    We can browse the table up to a certain point (using IBExpert), however after about 2900 records the machine just spikes and hangs. Performing a gfix -m does not work, and validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:
        Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)   {which is apparently just a warning}
        Unable to complete network request to host "MHPLZA1". Error reading data from the connection.
        INET/inet_error: read errno = 10054
        SERVER/process_packet: broken port, server exiting
        Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)   {logged eventually if we leave the backup running while it hangs}
    The setup: the table in question has about 7000 records; the Firebird version is a 2.5 Classic Server x64 install; the OS is Windows Server 2008; and this is a virtual machine (VMware) running on a massive server (does anyone have issues with VMs and Firebird?). We have the same setup running fine on other servers, although they are not virtual machines. Is there any way to pinpoint the issue and/or the cause?

    Read the article
