Search Results

Search found 34465 results on 1379 pages for 'database permissions'.

Page 120/1379 | < Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • Joining Tables Based on Foreign Keys

    - by maestrojed
    I have a table that has a lot of fields that are foreign keys referencing related tables. I am writing a script in PHP that will do the db queries. When I query this table for its data, I need to know the values associated with these keys, not the keys themselves. How do most people go about this? A 101 way to do this would be to query this table for its data, including the foreign keys, and then query the related tables to get each key's value. This could be a lot of queries (~10). Question 1: I think I could write one query with a bunch of joins. Would that be better? This approach also requires the querying script to know which table fields are foreign keys. Since I have many tables like this, but all with different fields, this makes writing nice generic functions hard. MySQL InnoDB tables allow for foreign key constraints, and I know the database has these set up correctly. Question 2: What about the idea of querying the table to identify what the constraints are, and then matching them up using whatever process I decide on from Question 1? I like this idea but never see it being used in code, which makes me think it's not a good idea for some reason. I would use something like SHOW CREATE TABLE tbl_name; to find what constraints/relationships exist for that table. Thank you for any suggestions or advice.
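
    For what it's worth, a minimal sketch of both ideas, using invented table and column names; the information_schema query is the programmatic equivalent of parsing SHOW CREATE TABLE on MySQL/InnoDB:

        -- Question 1: one query with joins instead of ~10 follow-up lookups
        -- (orders, customers, statuses and their columns are hypothetical)
        SELECT o.id,
               c.name  AS customer_name,
               s.label AS status_label
        FROM   orders o
        JOIN   customers c ON c.id = o.customer_id
        JOIN   statuses  s ON s.id = o.status_id;

        -- Question 2: discover the foreign keys of a table at runtime,
        -- instead of hard-coding them in the querying script
        SELECT column_name, referenced_table_name, referenced_column_name
        FROM   information_schema.KEY_COLUMN_USAGE
        WHERE  table_schema = DATABASE()
          AND  table_name = 'orders'
          AND  referenced_table_name IS NOT NULL;

    The second result set is exactly what a generic function needs to build the joins of the first query without knowing the table layout in advance.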

    Read the article

  • Help a CRUD programmer think about an "approval workflow"

    - by gerdemb
    I've been working on a web application that is basically a CRUD application (Create, Read, Update, Delete). Recently, I've started working on what I'm calling an "approval workflow". Basically, a request is generated for a material and then sent for approval to a manager. Depending on what is requested, different people need to approve the request, or perhaps send it back to the requester for modification. The approvers need to keep track of what to approve and what has been approved, and the requesters need to see the status of their requests. As a "CRUD" developer, I'm having a hard time wrapping my head around how to design this. What database tables should I have? How do I keep track of the state of the request? How should I notify users of actions that have happened to their requests? Is there a design pattern that could help me with this? Should I be drawing state machines in my code? I think this is a generic programming question, but if it makes any difference I'm using Django with MySQL.
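
    One common starting point is to store the request's current state as a column and keep an append-only table of approval actions for history and notifications. A minimal sketch with hypothetical table and column names (MySQL syntax assumed; Django models would mirror these tables):

        CREATE TABLE requests (
            id            INT AUTO_INCREMENT PRIMARY KEY,
            requester_id  INT NOT NULL,
            material      VARCHAR(200) NOT NULL,
            status        VARCHAR(20) NOT NULL DEFAULT 'pending',  -- pending / approved / rejected / returned
            created_at    DATETIME NOT NULL
        );

        CREATE TABLE approval_actions (
            id           INT AUTO_INCREMENT PRIMARY KEY,
            request_id   INT NOT NULL,
            approver_id  INT NOT NULL,
            action       VARCHAR(20) NOT NULL,   -- approved / rejected / sent_back
            comment      TEXT,
            acted_at     DATETIME NOT NULL,
            FOREIGN KEY (request_id) REFERENCES requests(id)
        );

        -- "What is still waiting for approval?"
        SELECT * FROM requests WHERE status = 'pending';

    The requests row answers "where is my request now?", while the approval_actions log answers "what happened to it and who did it", which is also a natural source for notifications.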

    Read the article

  • If we make a number every millisecond, how much data would we have in a day?

    - by Roger Travis
    I'm a bit confused here... I'm being offered a project where there would be an array of certain sensors, each giving off a reading every millisecond (yes, 1,000 readings a second). Each reading would be a 3 or 4 digit number, for example 818 or 1529. These readings need to be stored in a database on a server and accessed remotely. I've never worked with such big amounts of data. What do you think, how much would the readings from one sensor come to, in MBs, for a day?... 4 (digits) x 1000 x 60 x 60 x 24... = 345600000 bits... right? About 42 MB per day... doesn't seem too bad, right? Therefore a DB of, say, 1 GB would hold 23 days of info from 1 sensor, correct? I understand that MySQL & PHP probably would not be able to handle it... what would you suggest, maybe some apps? Azure? Oracle? ... Thanks!
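
    For a rough sanity check on the arithmetic: 1,000 readings per second is 86,400,000 readings per day, and a 3–4 digit value fits in a 2-byte SMALLINT, so the raw values alone come to roughly 173 MB per sensor per day, before timestamps, indexes and storage-engine overhead are added. A hypothetical table sketch (MySQL 5.6+ syntax for the millisecond timestamp):

        -- hypothetical schema for one sensor's readings
        CREATE TABLE readings (
            sensor_id SMALLINT UNSIGNED NOT NULL,
            read_at   DATETIME(3) NOT NULL,          -- millisecond precision
            value     SMALLINT UNSIGNED NOT NULL,    -- 0..65535 covers 3-4 digit readings
            PRIMARY KEY (sensor_id, read_at)
        );
        -- 1,000 rows/s * 60 * 60 * 24 = 86,400,000 rows per sensor per day;
        -- at 2 bytes per value that is ~173 MB of raw values, and several times
        -- that once the timestamp, primary key and engine overhead are included.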

    Read the article

  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (Postgresql) and maybe memcached. For instance, I want this solution to scale to 200k users each with 100 friends. Currently, there is a master activity table that stores the rendered html for the given activity (Jim added a friend, George installed an application, etc.). This master activity table keeps the source user, the html, and a timestamp. Then, there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the object in the main activity table. So, if I have 100 friends and I do 3 activities, then the join table will grow by 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching activity to show to a user takes a single (relatively) inexpensive query. The other option is to just keep the main activity table and query it by saying something like: select * from activity where source_user in (1, 2, 44, 2423, ... my friend list) This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query can get slower and slower. I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB, etc. Many thanks!
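
    For the second ('pull') option, the usual mitigations are a composite index on the source user and timestamp plus paging with LIMIT, so each request reads only one page of recent rows rather than every friend's full history. A minimal sketch in PostgreSQL, using a hypothetical created_at name for the timestamp column:

        CREATE INDEX activity_source_time_idx
            ON activity (source_user, created_at DESC);

        -- fetch one page of the feed for a given friend list
        SELECT html, created_at
        FROM   activity
        WHERE  source_user IN (1, 2, 44, 2423 /* ... friend ids ... */)
        ORDER  BY created_at DESC
        LIMIT  20;

    Caching the rendered page in memcached for a short TTL then absorbs most of the repeated reads.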

    Read the article

  • Having to insert a record, then update the same record warrants 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items and we're storing the total cost of an order (based on the sum of prices on order lines) in the orders table.

        orders: id | ref | total_cost
        lines:  id | order_id | price

    In a simple application, the order and lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ....
        -- Get ID of inserted order record
        INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:

    1. create an order
    2. create lines on an order
    3. calculate cost on order based on lines
    4. then update the record created in 1. in the orders table

    This would mean a nullable total_cost field on orders for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table. But I think it's redundant. Ideally, since everything required to calculate total costs (lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
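
    One way to avoid both the nullable total_cost column and the 1:1 order_totals table is to write the total inside the same transaction that creates the order and its lines, so the orders row is never visible without its total. A sketch in MySQL syntax with invented values; since the application already knows the line prices, it could equally compute the sum up front and insert the final total directly:

        START TRANSACTION;

        INSERT INTO orders (ref, total_cost)
        VALUES ('ABC-123', 0);                 -- placeholder, fixed up below
        SET @order_id = LAST_INSERT_ID();

        INSERT INTO lines (order_id, price)
        VALUES (@order_id, 10.00), (@order_id, 15.50);

        UPDATE orders
        SET    total_cost = (SELECT SUM(price) FROM lines WHERE order_id = @order_id)
        WHERE  id = @order_id;

        COMMIT;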

    Read the article

  • guarantee child records either in one table or another, but not both?

    - by user151841
    I have a table with two child tables. For each record in the parent table, I want one and only one record in one of the child tables -- not one in each, not none. How do I define that? Here's the backstory. Feel free to criticize this implementation, but please answer the question above, because this isn't the only time I've encountered it: I have a database that holds data pertaining to user surveys. It was originally designed with one authentication method for starting a survey. Since then, requirements have changed, and now there are two different ways someone could sign on to start a survey. Originally I captured the authentication token in a column in the survey table. Since requirements changed, there are three other bits of data that I want to capture for authentication. So for each record in the survey table, I'm either going to have one token, or a set of three. All four of these are of different types, so my thought was, instead of having four columns where either one is going to be null, or three are going to be null (or even worse, a bad mashup of either of those scenarios), I would have two child tables, one for holding the single authentication token, the other for holding the three. Problem is, I don't know offhand how to define that in DDL. I'm using MySQL, so maybe the feature that would let me do this is one that MySQL doesn't implement.
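
    Standard MySQL DDL has no declarative "exactly one child row across two tables" constraint; triggers are the usual workaround. One alternative sketch folds both authentication shapes into a single child table and uses a CHECK constraint to allow exactly one shape per row (CHECK constraints are only enforced by MySQL from 8.0.16 onwards; the column names are hypothetical, with survey(id) standing in for the parent table):

        CREATE TABLE survey_auth (
            survey_id  INT PRIMARY KEY,       -- at most one auth row per survey
            auth_token VARCHAR(64),           -- method A: the single token
            org_code   VARCHAR(32),           -- method B: the set of three values
            badge_no   INT,
            issued_on  DATE,
            FOREIGN KEY (survey_id) REFERENCES survey(id),
            CHECK (
                (auth_token IS NOT NULL AND org_code IS NULL     AND badge_no IS NULL     AND issued_on IS NULL)
             OR (auth_token IS NULL     AND org_code IS NOT NULL AND badge_no IS NOT NULL AND issued_on IS NOT NULL)
            )
        );

    The PRIMARY KEY on survey_id gives at most one auth row per survey; guaranteeing at least one still has to come from the application or a trigger, since the parent cannot declaratively require a child.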

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    Copying data from one table to another table is one of the most requested questions on forums, Facebook and Twitter. The question has come in many formats, and in many places I have seen developers using a cursor instead of this direct method. A few years ago I wrote a similar article - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments kept ending up in my mailbox: 1) the SQL Server AdventureWorks sample database does not have the table I used in the example, and 2) requests for a video tutorial of the same example. After careful thought I decided to build a new set of scripts for the example, very similar to the old ones, along with a video tutorial. There was no better place to cover this interesting small concept than our SQL in Sixty Seconds series. Let me know what you think of this video. Here is the updated script.

        -- Method 1 : INSERT INTO SELECT
        USE AdventureWorks2012
        GO
        ----Create TestTable
        CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100))
        ----INSERT INTO TestTable using SELECT
        INSERT INTO TestTable (FirstName, LastName)
        SELECT FirstName, LastName
        FROM Person.Person
        WHERE EmailPromotion = 2
        ----Verify that Data in TestTable
        SELECT FirstName, LastName
        FROM TestTable
        ----Clean Up Database
        DROP TABLE TestTable
        GO
        ---------------------------------------------------------
        -- Method 2 : SELECT INTO
        USE AdventureWorks2012
        GO
        ----Create new table and insert into table using SELECT INSERT
        SELECT FirstName, LastName
        INTO TestTable
        FROM Person.Person
        WHERE EmailPromotion = 2
        ----Verify that Data in TestTable
        SELECT FirstName, LastName
        FROM TestTable
        ----Clean Up Database
        DROP TABLE TestTable
        GO

    Related Tips in SQL in Sixty Seconds:

        SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE
        Powershell – Importing CSV File Into Database – Video
        SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet
        SQL SERVER – Import CSV File into Database Table Using SSIS
        SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
        SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard

    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise's bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives into automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt.

    As the data in your applications grows, it's the database that usually becomes the bottleneck. It's hard to scale a relational DB, and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises.

    A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are three major areas to it:

    a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-the-box features provided by some databases, e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements.

    b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system and dumps them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them.

    c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This creates multiple records for a given entity in the relational database, and data warehouses prefer having their own primary key, often known as a surrogate key.

    As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a star schema. A similar structure, with the difference that the dimension tables are normalized, is called a snowflake schema.

    This relational structure of facts and dimensions serves as the input for another analysis structure called a Cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy, e.g. Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels, their constituents as Members, and the overall structure as a Hierarchy. Measures are rolled up as per the dimensional hierarchy; these rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with cubes and the rest.

    Let's see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization that want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users that typically don't care how current the data is: mostly senior level executives who are interested in trending, mining, forecasting, strategizing, etc., and don't care about that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS is short lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers.

    Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc. Hence they are often scaled down to something called a data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse: the data warehouse then acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
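
    To make the star-schema description above concrete, here is a small illustrative sketch (table and column names are invented for the example): one fact table holding the measure, surrogate-keyed dimensions around it, and a query that slices the measure by dimension attributes.

        CREATE TABLE dim_agent (
            agent_key  INT PRIMARY KEY,        -- surrogate key
            agent_name VARCHAR(100),
            region     VARCHAR(50)
        );

        CREATE TABLE dim_date (
            date_key   INT PRIMARY KEY,        -- e.g. 20240131
            full_date  DATE,
            month_name VARCHAR(20),
            year_no    INT
        );

        CREATE TABLE fact_sales (
            agent_key    INT NOT NULL REFERENCES dim_agent(agent_key),
            date_key     INT NOT NULL REFERENCES dim_date(date_key),
            sales_amount DECIMAL(12,2) NOT NULL    -- the measure, e.g. 10,000 USD
        );

        -- slice and dice the measure by dimension attributes
        SELECT d.year_no, a.region, SUM(f.sales_amount) AS total_sales
        FROM   fact_sales f
        JOIN   dim_agent a ON a.agent_key = f.agent_key
        JOIN   dim_date  d ON d.date_key  = f.date_key
        GROUP  BY d.year_no, a.region;

    A snowflake version would simply normalize dim_agent further (for example, splitting region into its own table).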

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost to the development process. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control, although it can alternatively be run on a regular schedule. This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were identified at this stage, the testers would simply not 'waste their time' touching the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:

    1. Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2. Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3. Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.

    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
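
    As a deliberately trivial illustration of points 2 and 3, the same schema change can live in source control in two forms: a from-scratch build script and an incremental upgrade script that brings an already deployed database to the same version (both scripts are invented for the example):

        -- build.sql: creates the schema from scratch (version 1.1)
        CREATE TABLE Customer (
            CustomerId INT NOT NULL PRIMARY KEY,
            Name       NVARCHAR(100) NOT NULL,
            Email      NVARCHAR(200) NULL        -- column added in 1.1
        );

        -- upgrade_1.0_to_1.1.sql: upgrades an existing 1.0 database to 1.1
        ALTER TABLE Customer ADD Email NVARCHAR(200) NULL;

    A database CI job would run both paths against empty and previously deployed databases respectively, and fail the build if either script breaks.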

    Read the article

  • Do you test your SQL/HQL/Criteria ?

    - by 0101
    Do you test your SQL, or the SQL generated by your database framework? There are frameworks like DbUnit that allow you to create a real in-memory database and execute real SQL, but they are very hard to use (not developer-friendly, so to speak), because you first need to prepare test data (and it should not be shared between tests). P.S. I don't mean mocking the database or the framework's database methods, but tests that make you 99% sure that your SQL is working even after some hardcore refactoring.

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regards to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store.

    My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing?

    My thoughts so far (I mostly use Entity Framework and SQL Server DB Projects):

    1. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities that my module encapsulates would be disconnected from the rest of the application because they belonged to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible.

    2. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all), though. I like the idea of building on proven patterns & practices to encapsulate established, reusable functionality.

    3. I thought of abandoning Entity Framework for the module and just creating a separate DB schema for the module, with its own set of stored procedures & ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext.

    Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?

    Read the article

  • How to mount an Oracle database to new instance?

    - by Vimvq1987
    I have an instance of Oracle 10g R2 installed on Windows Server 2003. This instance was running a database which does not have any backup. The OS then went down and could not be repaired; all I have are the files from the old instance. How can I restore the database from these files to a new instance? A step-by-step guide would be much appreciated because I'm new to Oracle. Thank you very much.

    Read the article

  • Cannot Attach Database in SQL Express More Than Two Directories Deep?

    - by Dave Mackey
    I have a database in one of my Visual Studio Express projects. I want to attach it to my local SQLEXPRESS instance so I can run aspnet_regsql on it and add the membership database. When I select Attach Databases and then attempt to browse to the files (C:\Users\username\Documents\Visual Studio 2010\Projects\nameofproject), it only lets me navigate to C:\Users\username... Why? How can I fix this?

    Read the article

  • How do I convert a Mac OS Filemaker 2 database to a recent FM or Bento db, preserving the relations

    - by willc2
    I'm hoping for more than just exporting the data; I would like to preserve the relation between the databases. This is for a friend's legacy database that tracks monthly fees from a list of clients. I have the original FileMaker database file on hand, but not the machine it ran on with the old version of FileMaker 2. Recent versions won't import it, saying it's too old. A Mac-only solution would make things simpler for me.

    Read the article

  • When and how often to start connection to database in php?

    - by AndHeiberg
    When, and how often, is it good practice to open the connection to your database in PHP? I'm new to databases, and I'm wondering when I should open my database connection. I'm creating an API with an index, controllers and models. Should I open the connection in the index and then pass it to all the other files, open the connection at the top of every file and call it as a global in functions as needed, or open and close the connection in every function?

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common (BOOL)
        items:         id | item_name_id | picture | price | description | picture
        item_synonyms: id | item_name_id | name | error (BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). The description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

      • Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")
      • Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The down side would be:

      • Overhead: When inserting an item, I query item_names to see if this name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). And updating is the combination of both.
      • Weird item names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

        items: id | name | picture | price | description | picture
        (... with item_names and item_synonyms as utility tables that I could query)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
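
    A sketch of the search path described above against the first schema: resolve the query string to item_name_ids through both item_names and item_synonyms, then join back to items (MySQL syntax, with the search term inlined for clarity):

        SELECT i.*, n.name AS canonical_name
        FROM   items i
        JOIN   item_names n ON n.id = i.item_name_id
        WHERE  n.id IN (
                   SELECT id           FROM item_names    WHERE name LIKE '%x10%'
                   UNION
                   SELECT item_name_id FROM item_synonyms WHERE name LIKE '%x10%'
               );

        -- autocompletion comes straight from the normalized names, no DISTINCT needed
        SELECT name FROM item_names WHERE name LIKE 'sony%' LIMIT 10;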

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range". There is an EAV design for "Custom" fields that the clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a "ClientText" table and the data is stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly, there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag ("Locations", "Skills"), a "ClientCategoryItem" table listing the valid values for each category (e.g. "London, Paris, New York, Rome", "C#, VB, PHP, Python"), and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add.

    I am now designing a new system that is very similar to the existing system; however, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch, so I have no legacy issues to deal with. For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs. It is the tagging / categorising design that I am struggling with. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values, "CustomCategoryItems1-5", and then an additional 5 many-to-many tables, "VacancyCustomCategoryItem1-5"? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this will result in a lot of code change.

    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.
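
    For reference, the existing three-table tagging design described above written out as DDL, with column names inferred from the description (so treat them as illustrative; SQL Server syntax):

        CREATE TABLE ClientCategory (
            ClientCategoryId     INT IDENTITY PRIMARY KEY,
            ClientId             INT NOT NULL,
            Name                 VARCHAR(100) NOT NULL      -- e.g. 'Locations', 'Skills'
        );

        CREATE TABLE ClientCategoryItem (
            ClientCategoryItemId INT IDENTITY PRIMARY KEY,
            ClientCategoryId     INT NOT NULL REFERENCES ClientCategory(ClientCategoryId),
            Value                VARCHAR(100) NOT NULL      -- e.g. 'London', 'C#'
        );

        CREATE TABLE VacancyClientCategoryItem (
            VacancyId            INT NOT NULL,
            ClientCategoryItemId INT NOT NULL REFERENCES ClientCategoryItem(ClientCategoryItemId),
            PRIMARY KEY (VacancyId, ClientCategoryItemId)
        );

    A cap of five categories per client could then be enforced with a constraint or trigger on ClientCategory rather than by multiplying physical tables, which keeps the table count fixed if the limit ever changes.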

    Read the article

  • What does the information_schema database represent?

    - by Mirage
    I have one database in MySQL. But when I log into phpMyAdmin, it shows another database called information_schema. Is that database always present alongside my database? I mean to say: is there a copy of information_schema for every database present in MySQL, or is there one database called information_schema per MySQL server? If I modify this information_schema database, how will that affect my current database?
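
    For context: there is a single information_schema per MySQL server, and it is a read-only set of views describing every database on that server, so it is queried rather than modified. For example:

        -- list every database the server knows about
        SELECT SCHEMA_NAME FROM information_schema.SCHEMATA;

        -- describe the tables of your own database
        SELECT TABLE_NAME, ENGINE, TABLE_ROWS
        FROM   information_schema.TABLES
        WHERE  TABLE_SCHEMA = 'your_database_name';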

    Read the article

  • W2k8, Sybase Driver, Permissions

    - by Clustermagnet
    Trying to get a .net (32bit) app running on a Windows 2008 server. My experience in the Windows world is quite limited. Is this related to the Full/Medium trust settings? Have been Googling for quite some time. Appreciate your feedback! Seeing the following error:

        Required permissions cannot be acquired.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Security.Policy.PolicyException: Required permissions cannot be acquired.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [PolicyException: Required permissions cannot be acquired.]
           System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission) +7606467
           System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission) +57

        [FileLoadException: Could not load file or assembly 'Sybase.Data.AseClient, Version=1.155.1000.0, Culture=neutral, PublicKeyToken=26e0f1529304f4a7' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)]
           System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0
           System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43
           System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127
           System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142
           System.Reflection.Assembly.Load(String assemblyString) +28
           System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46

        [ConfigurationErrorsException: Could not load file or assembly 'Sybase.Data.AseClient, Version=1.155.1000.0, Culture=neutral, PublicKeyToken=26e0f1529304f4a7' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)]
           System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613
           System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203
           System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105
           System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178
           System.Web.Compilation.WebDirectoryBatchCompiler..ctor(VirtualDirectory vdir) +163
           System.Web.Compilation.BuildManager.BatchCompileWebDirectoryInternal(VirtualDirectory vdir, Boolean ignoreErrors) +53
           System.Web.Compilation.BuildManager.BatchCompileWebDirectory(VirtualDirectory vdir, VirtualPath virtualDir, Boolean ignoreErrors) +175
           System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath) +86
           System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +261
           System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) +101
           System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert) +126
           System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert) +62
           System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath) +33
           System.Web.UI.PageHandlerFactory.GetHandler(HttpContext context, String requestType, String virtualPath, String path) +37
           System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +307
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

        Version Information: Microsoft .NET Framework Version:2.0.50727.4959; ASP.NET Version:2.0.50727.4955

    Read the article

  • Continuing permissions issues - ASP.net, IIS 7, Server 2008 - 0x80070005 (http 500.19) error

    - by Re-Pieper
    I created an ASP.NET MVC web application and I am trying to set up IIS. The error:

        HTTP error 500.19, error code 0x80070005, cannot read configuration file due to insufficient permissions,
        config file: C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config

    If I set the app pool to use 'administrator' I have no problems and can access the site just fine. If I set it to NETWORK SERVICE (or anything else, including self-created admin or non-admin user accounts), I get the above error. Things I have tried:

      • the identity for the application pool named 'test' is 'NetworkService'
      • set full access privileges for wwwroot and all child files/folders
      • verified effective permissions: NETWORK SERVICE has full access
      • authentication on my site is set to anonymous and running under the Application Pool Identity
      • I do not have any physical path credentials set on the website
      • confirmed the website is set to run under the application pool named 'test'

    Using Process Monitor, here is a summary of what I found on the ACCESS DENIED event:

        EVENT TAB:
        Class: File System
        Operation: CreateFile
        Result: Access Denied
        Path: ..\web.config
        Desired Access: Generic Read
        Disposition: Open
        Options: Synchronous IO Non-Alert, Non-Directory File
        Attributes: N
        ShareMode: Read
        AllocationSize: n/a

        PROCESS TAB:
        ...lots of stuff that seems irrelevant
        User: NT AUTHORITY\NETWORK SERVICE

    Read the article

  • Permissions problem on mounting afp drive

    - by Ron Gejman
    I am trying to mount a network drive via AFP on an Ubuntu 10.04 server machine. After installing AFP support, I use the following command:

        sudo mount_afp afp://USER:[email protected]/directory/ /media/dir

    This seems to work and it tells me that mounting succeeded. However, when I navigate to /media/dir I get the following error:

        cd: cfs: Input/output error

    Permissions in /media are:

        d????????? ? ?    ?    ?                   dir/
        drwx------ 12 user 4.0K 2010-10-25 16:08   otherdisk/

    So there is a permissions problem here. I eventually want to mount this drive automatically using fstab. What do I need to do to make the disk accessible?

    Read the article

  • Recursive reset file permissions on Windows

    - by Peter Horvath
    There is a big, complex directory structure on a relatively big NTFS partition. Somebody managed to put very bad security privileges onto it - there are directories with randomly granted/denied permissions, etc. I have already run into permission bugs multiple times, and I have found insecure permission settings multiple times (for example, write permissions for "Everyone", or wrong owners). I don't have time to check everything by hand (it is big). But luckily, my wishes are very simple. The most common: read/write/execute on everything for me, and maybe read for Everyone. Is it possible to somehow remove all security data from a directory tree and apply my (simple) wishes to everything in it? On Unix, I would use a chown -R ..., chmod -R ... command sequence. What is its equivalent on Windows?

    Read the article

  • Unable to edit CIFS Share permissions

    - by Datapimp23
    Hi, I have an HP StorageWorks 2540i disk-to-disk backup device, which is managed via a web interface. I joined the device to our AD domain in the CIFS server configuration and then created a CIFS share called backupdata. If I try to access it, I'm prompted for a login. The permissions tab in the web interface is empty; the following message is displayed: "CIFS authentication is managed through Active Directory." However, I do not find the share in AD. I forced replication between all DCs and still do not find it. Is there another way to edit the permissions?

    Read the article
