Search Results

Search found 1112 results on 45 pages for 'constraints'.


  • CHECK/NOCHECK for SQL Compact Edition

    - by Ryan H
    I am attempting to wipe and repopulate test data on SQL CE, and I am getting an error due to existing FK constraints. In SQL Server 2005 I would typically run ALTER TABLE [tablename] CHECK/NOCHECK CONSTRAINT ALL to enable or disable all constraints. From what I could find in my searching, it seems that this might not be supported in CE. Is that true? If so, is there an alternative?
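    For reference, the SQL Server 2005 pattern being described looks like this (table name hypothetical); it is this ALTER TABLE ... CHECK/NOCHECK CONSTRAINT syntax that reportedly is not accepted by SQL CE:

        -- disable every constraint on the table, repopulate, then re-enable
        ALTER TABLE [Orders] NOCHECK CONSTRAINT ALL;
        DELETE FROM [Orders];
        -- ... reload test data ...
        ALTER TABLE [Orders] CHECK CONSTRAINT ALL;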


  • Travelling Salesman Problem Constraint Representation

    - by alex25
    Hey! I read a couple of articles and sample code about how to solve TSP with Genetic Algorithms, Ant Colony Optimization, etc. But everything I found didn't include time (window) constraints, e.g. "I have to be at customer x before 12 am", and assumed symmetry. Can somebody point me in the direction of some sample code or articles that explain how I can add constraints to TSP and how I can represent those in code? Thanks!
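    One common representation (a sketch in the soft-penalty style many GA/ACO write-ups use; all names here are invented) gives each customer a time window and folds lateness into the tour's fitness, so the GA/ACO machinery itself stays unchanged:

        # score a tour against time windows; windows[i] = (open, close) in minutes
        def tour_cost(tour, travel, windows, late_penalty=1000):
            t = 0                       # running clock along the tour
            cost = 0
            for prev, cur in zip(tour, tour[1:]):
                cost += travel[prev][cur]
                t += travel[prev][cur]
                open_, close = windows[cur]
                if t < open_:           # early: wait until the window opens
                    t = open_
                elif t > close:         # late: penalize instead of discarding
                    cost += late_penalty * (t - close)
            return cost

    Asymmetry then costs nothing extra: travel[a][b] simply need not equal travel[b][a].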


  • Why doesn't this one-to-one relationship work?

    - by eugenn
    I'm trying to create a very simple relationship between two objects. Can anybody explain to me why I can't find the Company object via the findBy method?

        class Company {
            String name
            String desc
            City city
            static constraints = {
                city(unique: true)
            }
        }

        class City {
            String name
            static constraints = {
            }
        }

        class BootStrap {
            def init = { servletContext ->
                new City(name: 'Tokyo').save()
                new City(name: 'New York').save()
                new Company(name: 'company', city: City.findByName('New York')).save()
                def c = Company.findByName('company') // Why c=null????!
            }
            def destroy = {
            }
        }
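    A generic Grails debugging sketch for a BootStrap like this (not specific to this code): GORM's save() returns null on a validation failure rather than throwing, so one way to see what is going wrong is to make it fail loudly, assuming a Grails version that supports failOnError:

        // failOnError throws on validation errors; flush writes immediately
        new Company(name: 'company',
                    city: City.findByName('New York')).save(flush: true, failOnError: true)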


  • Drop all foreign keys in a table

    - by trnTash
    I had this script which worked in SQL Server 2005:

        -- t-sql scriptlet to drop all constraints on a table
        DECLARE @database nvarchar(50)
        DECLARE @table nvarchar(50)
        set @database = 'dotnetnuke'
        set @table = 'tabs'
        DECLARE @sql nvarchar(255)
        WHILE EXISTS(select * from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
                     where constraint_catalog = @database and table_name = @table)
        BEGIN
            select @sql = 'ALTER TABLE ' + @table + ' DROP CONSTRAINT ' + CONSTRAINT_NAME
            from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
            where constraint_catalog = @database and table_name = @table
            exec sp_executesql @sql
        END

    It does not work in SQL Server 2008. How can I easily drop all foreign key constraints for a certain table? Does anyone have a better script?
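    In SQL Server 2008 the catalog view sys.foreign_keys makes it easy to target only the FKs. An untested sketch, using the same table name as the script above:

        DECLARE @sql nvarchar(max) = N'';
        SELECT @sql = @sql + N'ALTER TABLE '
                    + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + N'.'
                    + QUOTENAME(OBJECT_NAME(parent_object_id))
                    + N' DROP CONSTRAINT ' + QUOTENAME(name) + N'; '
        FROM sys.foreign_keys
        WHERE OBJECT_NAME(parent_object_id) = N'tabs';
        EXEC sp_executesql @sql;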


  • Grails many to many using a third 'join' class

    - by andy mccullough
    I read that an m:m relationship often means there is a third class that hasn't been identified yet. So I have m:m on User and Project, and I created a third domain class, ProjectMembership. The three domains are as follows (minimized for illustration purposes):

    User:

        class User {
            String name
            static hasMany = [projectMemberships: ProjectMembership]
        }

    ProjectMembership:

        class ProjectMembership {
            static constraints = {
            }
            static belongsTo = [user: User, project: Project]
        }

    Project:

        class Project {
            String name
            static hasMany = [projectMemberships: ProjectMembership]
            static constraints = {
            }
        }

    If I have the ID of the user, how can I get a list of Project objects that they are assigned to?
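    Given the join class, one way to express that query (a sketch; assumes the user can be loaded first) is to go through ProjectMembership and collect the projects:

        def user = User.get(userId)
        def projects = ProjectMembership.findAllByUser(user)*.project

    An HQL alternative avoids materializing the membership objects themselves:

        def projects = Project.executeQuery(
            'select pm.project from ProjectMembership pm where pm.user.id = :id',
            [id: userId])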


  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated.

    Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum's name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data cannot assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you.

    Let's take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database.  Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table.  Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application.  By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table's content.

    When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type.

    If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application.  While working in expressor Studio's schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns.

    The value of an alternative composite type is even more apparent when you want to insert into or update the table.  In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns, since these values are provided by the relational database management system.  And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns.  By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes, and there is no risk of unintentionally overwriting a value in a column that does not need to be updated.

    Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationalIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character, as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded.  Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition.

    Let's assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box.  The schema editor is updated accordingly.

    There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator.  Again, double-click on the MaritalStatus attribute and, in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There's no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents.

    In this posting, I've overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I'm certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs.
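    To make the update-only composite type concrete, the database work it ultimately corresponds to might look like this (a hand-written T-SQL sketch against AdventureWorks, not expressor output; the parameter and increments are illustrative):

        -- only the key, the two hour columns, and the audit column are touched
        UPDATE HumanResources.Employee
        SET    VacationHours  = VacationHours + 8,
               SickLeaveHours = SickLeaveHours + 4,
               ModifiedDate   = GETDATE()
        WHERE  EmployeeID = @EmployeeID;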
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • Joining Tables Based on Foreign Keys

    - by maestrojed
    I have a table that has a lot of fields that are foreign keys referencing related tables, and I am writing a script in PHP that will do the DB queries. When I query this table for its data, I need to know the values associated with these keys, not the keys themselves. How do most people go about this? A 101 way to do this would be to query this table for its data, including the foreign keys, and then query the related tables to get each key's value. This could be a lot of queries (~10).

    Question 1: I think I could write one query with a bunch of joins. Would that be better? This approach also requires the querying script to know which table fields are foreign keys. Since I have many tables like this, but all with different fields, this makes writing nice generic functions hard.

    MySQL InnoDB tables allow for foreign key constraints, and I know the database has these set up correctly.

    Question 2: What about the idea of querying the table, identifying what the constraints are, and then matching them up using whatever process I decide on from Question 1? I like this idea, but I never see it being used in code, which makes me think it's not a good idea for some reason. I would use something like SHOW CREATE TABLE tbl_name; to find what constraints/relationships exist for that table.

    Thank you for any suggestions or advice.
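    For scale, the single-query version of the 101 approach looks like this (a sketch with made-up table and column names); one round trip replaces the ~10, at the price of the script knowing the FK columns up front:

        SELECT o.id,
               c.name  AS customer_name,
               s.label AS status_label
        FROM   orders o
        JOIN   customers c ON c.id = o.customer_id
        JOIN   statuses  s ON s.id = o.status_id;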


  • Oracle manually add an FK constraint

    - by Oxymoron
    Alright, since a client wants to automate a certain process, which includes creating a new key structure in a LIVE database, I need to create relations between tables/columns. Now, I've found the dictionary views ALL_CONS_COLS and USER_CONSTRAINTS, which hold information about constraints. If I were to manually create constraints by inserting into these tables, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have any alternate suggestions, as this sounds VERY dirty and error-prone to begin with?

    Current modus operandi:

    1. Create a new column in each table for the PK;
    2. Generate a guid for this PK;
    3. Create a new column in each table for the FKs;
    4. Fetch the guid associated with the FK;
    ....... done so far ......
    5. Add new constraint based on the old one;
    6. Remove old constraint;
    7. Rename new columns.

    This is kind of dodgy and I'd rather change my method; any ideas would be helpful. To put it differently: the client wants to change the key structure from int to guid on a live database. What's the best way to approach this?
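    Note that the ALL_/USER_ dictionary objects are read-only views, so constraints cannot be recreated by inserting into them; the supported route is DDL. A sketch with hypothetical names:

        ALTER TABLE child_table
          ADD CONSTRAINT child_parent_fk
          FOREIGN KEY (parent_guid)
          REFERENCES parent_table (guid_pk);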


  • Accessing and inheriting Windows Message for other Windows Message in Delphi

    - by HX_unbanned
    I am using WMSysCommand messages to modify caption-bar button (Maximize/Minimize) behavior, and a recent update required the use of WMNCHitTest as well, but I do not want to split these two related messages into multiple procedures because of lengthy code. Can I access a private declaration (message) from another message? And if I can, how do I do it?

        procedure TForm1.WMNCHitTest(var Msg: TWMNCHitTest);
        begin
          SendMessage(Handle, HTCAPTION, WM_NCHitTest, 0); // or other wParam or lParam ????
        end;

        procedure TForm1.WMSysCommand;
        begin
          // if command is Maximize or receiver message of Caption Bar click
          if (Msg.CmdType = SC_MAXIMIZE or 61488) or (Msg.Result = htCaption or 2) then
          begin
            if CheckWin32Version(6, 0) then
              Constraints.MaxHeight := 507
            else
              Constraints.MaxHeight := 499;
            Constraints.MaxWidth := 0;
          end
          else if (Msg.CmdType = SC_MINIMIZE or 61472) or (Msg.Result = htCaption or 2) then // if command is Minimize
          begin
            if (EnsureRange(Width, 252, 510) >= (510 / 2)) then
              PreviewOpn.Caption := '<'
            else
              PreviewOpn.Caption := '>';
          end;
          DefaultHandler(Msg); // reset Message handler to default ( Application )
        end;

    So... do I think correctly and simply do not know the correct commands, or am I thinking total bullsh*t? Regards. Thanks for any help...


  • Java / Spring MVC 3 validation of an email address

    - by Tim
    I have a Java backend with Spring MVC, and I am using validation in this way on my domain object for an email address:

        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Pattern;
        import javax.validation.constraints.Size;
        ...
        @NotNull
        @Size(min = 1, max = 100)
        @Pattern(regexp="^([a-zA-Z0-9\\-\\.\\_]+)'+'(\\@)([a-zA-Z0-9\\-\\.]+)'+'(\\.)([a-zA-Z]{2,4})$")
        private String email;

    But with these lines of code:

        Set<ConstraintViolation<Person>> failures = validator.validate(personObject);
        ...
        Map<String, String> failureMessages = new HashMap<String, String>();
        for (ConstraintViolation<Person> failure : failures) {
            failureMessages.put(failure.getPropertyPath().toString(), failure.getMessage());
            System.out.println(failure.getPropertyPath().toString() + " - " + failure.getMessage());
        }

    I get this on the console:

        email - must match "^([a-zA-Z0-9\\-\\.\\_]+)'+'(\\@)([a-zA-Z0-9\\-\\.]+)'+'(\\.)([a-zA-Z]{2,4})$"

    but my email address is [email protected], so the regexp does not match. So I have two problems: What's wrong here? And how can I define an error message of my own, because displaying this to the user is not a good thing :-) Thank you in advance for your help and best regards.
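    On the second problem: every JSR-303 constraint annotation accepts a message attribute, so a custom text (or a resource-bundle key) can ride along with the constraint. A minimal sketch:

        @Pattern(regexp = "...", message = "Please enter a valid email address")
        private String email;

    failure.getMessage() then returns that text instead of the default "must match ...".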


  • How to use filegroups for DB split?

    - by Robin Jain
    In my project I have one DB used for everything. I want to break it into two databases: static lookup tables would be stored in one DB, and the other DB would hold tables with dynamic data. My problem is how to use a foreign key constraint between those two DBs. Can someone help me out and suggest a way to proceed, ideally with an example? I thought of using synonyms for tables and then constraints on the synonyms, but later I learned that synonyms can't be used in constraints. I need to maintain relationships among the tables from both DBs. The issue is with updates: with a new release I just want to update the lookup tables, and that is why I want to split my DB. I want to know how filegroups could be used for this.
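    For what it's worth, SQL Server FOREIGN KEY constraints cannot reference a table in another database, so if the data really is split across two DBs, the usual fallback is to approximate the FK with a trigger. A sketch (database, table and column names are all assumptions):

        CREATE TRIGGER trg_Orders_CheckStatus ON dbo.Orders
        AFTER INSERT, UPDATE AS
        BEGIN
            IF EXISTS (SELECT 1
                       FROM inserted i
                       LEFT JOIN LookupDB.dbo.Status s ON s.StatusId = i.StatusId
                       WHERE s.StatusId IS NULL)
            BEGIN
                RAISERROR('StatusId not present in LookupDB.dbo.Status', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END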


  • Dynamically sized UIWebView in a UITableViewCell with auto layout - constraint violation

    - by Orion Edwards
    I've got a UITableViewCell which contains a UIWebView. The table view cell adjusts its height depending on the web view contents. I've got it all working fine; however, when the view loads, I get a constraint violation exception in the debugger (the app continues running and functionally works fine, but I'd like to resolve this exception if possible). How I've got it set up:

    The TableView sets the cell height like this:

        -(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
            if (indexPath.section == 0) {
                [_topCell layoutIfNeeded];
                CGFloat finalHeight = [_topCell.contentView systemLayoutSizeFittingSize:UILayoutFittingCompressedSize].height;
                return finalHeight + 1;
            }

    The cell constraints are as follows:

    - Arbitrary 7px offset from the cell's contentView (top) to the webView
    - Web view has an arbitrary fixed height constraint of 62px (will expand later once content loads)
    - Arbitrary 8px offset from the webView to the cell's contentView (bottom)

    In my viewDidLoad, I tell the webView to go and load a URL, and in webViewDidFinishLoad I update the web view height constraint, like this:

        -(void)webViewDidFinishLoad:(UIWebView *)webView {
            CGSize fittingSize = [webView sizeThatFits:CGSizeZero]; // fittingSize is approx 500
            [self.tableView beginUpdates];
            // Exceptions happen on the following line setting the constant
            _topCell.webViewHeightConstraint.constant = fittingSize.height;
            [_topCell layoutSubviews];
            [self.tableView endUpdates];
        }

    The exception looks like this:

        Unable to simultaneously satisfy constraints.
        Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it. (Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints)
        (
            "<NSLayoutConstraint:0x10964b250 V:[webView(62)] (Names: webView:0x109664a00 )>",
            "<NSLayoutConstraint:0x109243d30 V:|-(7)-[webView] (Names: webView:0x109664a00, cellContent:0x1092436f0, '|':cellContent:0x1092436f0 )>",
            "<NSLayoutConstraint:0x109243f80 V:[webView]-(8)-| (Names: cellContent:0x1092436f0, webView:0x109664a00, '|':cellContent:0x1092436f0 )>",
            "<NSAutoresizingMaskLayoutConstraint:0x10967c210 h=--& v=--& V:[cellContent(78)] (Names: cellContent:0x1092436f0 )>"
        )
        Will attempt to recover by breaking constraint <NSLayoutConstraint:0x10964b250 V:[webView(62)] (Names: webView:0x109664a00 )>

    This seems a bit weird. It's implied that the constraint which sets the height of the web view is going to be broken; however, the web view does get its height correctly set, and the tableview renders perfectly well. From my guesses, it looks like the newly increased web view height constraint (it's about 500px after the web view loads) conflicts with the NSAutoresizingMaskLayoutConstraint setting the cell height to 78 (put there by Interface Builder). This makes sense; however, I don't want that cell content to have a fixed height of 78px. I want it to increase its height, and functionally it actually does this, just with these exceptions.

    I've tried setting _topCell.contentView.translatesAutoresizingMaskIntoConstraints = NO; to attempt to remove the NSAutoresizingMaskLayoutConstraint. This stops the exceptions, but then all the other layout is screwed up and the web view is about 10px high in the middle of the table view for no reason. I've also tried setting _topCell.contentView.autoresizingMask |= UIViewAutoresizingFlexibleHeight; in viewDidLoad to hopefully affect the contentView's 78px height constraint, but this has no effect. Any help would be much appreciated.


  • Scoring/analysis of Subjective testing for skills assessment

    - by ChrisBint
    I am lucky in the sense that I have been given the opportunity to be a 'Technical Troubleshooter' for our offshore development team. While I am confident and capable of dealing with most issues, I have come across something that I am not.

    Based on initial discussions with various team members, both on and offshore, a requirement for a 'repeatable, consistent' skills assessment has been identified. In my opinion, the best way to achieve this would be a combination of objective and subjective tests: the former normally being an initial online skills assessment on various subjects, for example general C#, WCF and MVC; the latter being a technical test where the candidate would need to solve various problems and (hopefully) explain the thought processes involved with the solution whilst doing so.

    Obviously, the first method is consistent, repeatable and extremely accurate. The second is always going to be subjective and based on the approach, the solution (or possibly not) and other factors. The 'scoring' of this is also going to come down to the experience and skills of the assessor, and this is where my problem lies: the person that is expected to be the assessor initially (me) has no experience, and the people that will ultimately continue this process for other candidates will never remain the same, due to project constraints and internal reasons, which changes the baseline for comparison. I am not aware of any suitable system that can be classed as consistent and repeatable for subjective tests with the two factors above, let alone if those did not exist.

    So anyway, I have to present a plan that will ultimately generate a skills/gap analysis, and it is unlikely that I will be able to use an objective method (budget constraints being the most likely reason). The only option left is the subjective method, with the issues above. Does anyone have any suggestions for an approach that may tick all the boxes?


  • Can't shrink Windows Boot NTFS disk: ERROR(5): Could not map attribute 0x80 in inode, Input/output error

    - by arcyqwerty
    Ubuntu 12.04 LTS, all updates current as of 7/3/2012

        gksudo gparted

    Shrink /dev/sda2 from 367GB to 307GB:

        GParted 0.11.0 --enable-libparted-dmraid
        Libparted 2.3

        Shrink /dev/sda2 from 367.00 GiB to 307.00 GiB  00:32:57  ( ERROR )

        calibrate /dev/sda2  00:00:00  ( SUCCESS )
            path: /dev/sda2
            start: 20,484,096
            end: 790,142,975
            size: 769,658,880 (367.00 GiB)

        check file system on /dev/sda2 for errors and (if possible) fix them  00:00:53  ( SUCCESS )
            ntfsresize -P -i -f -v /dev/sda2
            ntfsresize v2012.1.15AR.1 (libntfs-3g)
            Device name        : /dev/sda2
            NTFS volume version: 3.1
            Cluster size       : 4096 bytes
            Current volume size: 394065338880 bytes (394066 MB)
            Current device size: 394065346560 bytes (394066 MB)
            Checking for bad sectors ...
            Checking filesystem consistency ...
            Accounting clusters ...
            Space in use       : 327950 MB (83.2%)
            Collecting resizing constraints ...
            Estimating smallest shrunken size supported ...
            File feature         Last used at      By inode
            $MFT               :   389998 MB             0
            Multi-Record       :   394061 MB        386464
            $MFTMirr           :   314823 MB             1
            Compressed         :   394064 MB       1019521
            Sparse             :   330887 MB        752454
            Ordinary           :   393297 MB        706060
            You might resize at 327949758464 bytes or 327950 MB (freeing 66116 MB).
            Please make a test run using both the -n and -s options before real resizing!

        shrink file system  00:32:04  ( ERROR )

        run simulation  00:32:04  ( ERROR )
            ntfsresize -P --force --force /dev/sda2 -s 329640837119 --no-action
            ntfsresize v2012.1.15AR.1 (libntfs-3g)
            Device name        : /dev/sda2
            NTFS volume version: 3.1
            Cluster size       : 4096 bytes
            Current volume size: 394065338880 bytes (394066 MB)
            Current device size: 394065346560 bytes (394066 MB)
            New volume size    : 329640829440 bytes (329641 MB)
            Checking filesystem consistency ...
            Accounting clusters ...
            Space in use       : 327950 MB (83.2%)
            Collecting resizing constraints ...
            Needed relocations : 13300525 (54479 MB)
            Schedule chkdsk for NTFS consistency check at Windows boot time ...
            Resetting $LogFile ... (this might take a while)
            Relocating needed data ...
            Updating $BadClust file ...
            Updating $Bitmap file ...
            ERROR(5): Could not map attribute 0x80 in inode 1667593: Input/output error

    ========================================

    Windows has run chkdsk successfully (on boot) several times now.


  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly until there are no more conflicts.

        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]

    Constraints are that appointments:

    - cannot be after 15:00
    - cannot be before 9:00

    This is the naive algorithm:

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

    This results in the following resolution, which obviously breaks those constraints, and the last appointment would have to be pushed to the following day:

        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]

    Obviously this is sub-optimal: it performs three appointment moves when the conflict could have been resolved with one; if we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments that should be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should it be a breadth-first or depth-first solution search? When do I know if the solution is "good enough"?
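    On the "push backwards" idea, one cheap refinement (a sketch only; it minimises overflow past 15:00 rather than the number of moves) packs forward as the naive loop does and then slides the whole packed block into any slack before the first appointment:

        DAY_START, DAY_END = 9 * 60, 15 * 60   # minutes since midnight

        def pack_and_shift(apps):
            """apps: non-empty list of (start, end) pairs in minutes, sorted by start."""
            packed, cursor = [], DAY_START
            for start, end in apps:
                duration = end - start
                start = max(start, cursor)          # push down past the conflict
                packed.append((start, start + duration))
                cursor = start + duration
            overflow = packed[-1][1] - DAY_END
            if overflow > 0:                        # slide everything earlier,
                shift = min(overflow, packed[0][0] - DAY_START)  # bounded by 9:00
                packed = [(s - shift, e - shift) for s, e in packed]
            return packed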


  • What is a technique for 2D ray-box intersection that is suitable for old console hardware?

    - by DJCouchyCouch
    I'm working on a Sega Genesis homebrew game (it has a 7 MHz 68000 CPU). I'm looking for a way to find the intersection between a particle sprite and a background tile. Particles are represented as a point with a movement vector. Background tiles are 8 x 8 pixels, with an (X, Y) position that is always located at a multiple of 8. So, really, I need to find the intersection point for a ray-box collision; I need to find out where along the edge of the tile the ray/particle hits. I have these two hard constraints:

    - I'm working with pixel locations (integers). Floating point is too expensive. It doesn't have to be super exact, just close enough.
    - Multiplications, divisions, dot products, et cetera, are incredibly expensive and are to be avoided.

    So I'm looking for an efficient algorithm that would fit those constraints. Any ideas? I'm writing it in C, so that would work, but assembly should be good as well.
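    Since the tiles are axis-aligned and only 8x8, the general slab test collapses to checking at most two candidate edges. A sketch under those constraints (all names are mine; at most one 16x16->32 multiply and one 32/16 divide on the confirmed axis, which the 68000 supports, and everything else is compares):

        #include <stdint.h>

        /* Where does the segment from (px,py) along (dx,dy) enter the 8x8 tile
         * whose top-left corner is (tx,ty)?  Callers should still bound-check
         * the returned point against the actual step length. */
        static int ray_vs_tile(int16_t px, int16_t py, int16_t dx, int16_t dy,
                               int16_t tx, int16_t ty, int16_t *hx, int16_t *hy)
        {
            if (dx != 0) {  /* vertical edge facing the particle */
                int16_t edge = (dx > 0) ? tx : (int16_t)(tx + 8);
                int16_t y = (int16_t)(py + ((int32_t)(edge - px) * dy) / dx);
                if (y >= ty && y < ty + 8) { *hx = edge; *hy = y; return 1; }
            }
            if (dy != 0) {  /* horizontal edge facing the particle */
                int16_t edge = (dy > 0) ? ty : (int16_t)(ty + 8);
                int16_t x = (int16_t)(px + ((int32_t)(edge - py) * dx) / dy);
                if (x >= tx && x < tx + 8) { *hx = x; *hy = edge; return 1; }
            }
            return 0;  /* tile not crossed along either faced edge */
        }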


  • Automated Acceptance tests under specific constraints

    - by HH_
    This is a follow-up to my previous question, which was a bit general, so I'll be asking about a more precise situation. I want to automate acceptance testing on a web application. Briefly, this application allows the user to create contracts for subscribers, with two constraints:

    - You cannot create more than one contract for a subscriber.
    - Once a contract is created, it cannot be deleted (from the UI).

    Let's say TestCreate is a test case with tests for the normal creation of a contract. The constraints have introduced complexities into the testing process, mainly dependencies between test cases and test executions:

    - Before we run TestCreate, we need to make sure that the application is in a suitable state (the subscriber has no contract).
    - If we run TestCreate twice, the second run will fail, since the state of the application will have changed. So we need to revert back to the initial state (i.e. delete the contract), which is impossible to do from the UI.

    More generally, after each test case we should guarantee that the state is reverted back. And since, in this case, it is impossible to do from the UI, how do you handle this?

    Possible solution: I thought about taking a backup of the database in the state that I desire and, after each test case, running a script which drops the DB and restores the backup. However, I find that too heavy to do for each single test case. In addition, what if some information is stored in files? Or in multiple or inaccessible databases?

    My question: in this situation, what would an experienced tester do to write automated and maintainable tests? Thank you.

    More info: I'm trying to integrate the tests into a BDD framework, which I find to be a neat solution for test documentation and communication, but it does not solve this particular problem (it even makes it harder).
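    A lighter-weight middle ground between "restore the whole database" and "do nothing" (a sketch; the table and column names are pure assumptions about this application) is a targeted teardown that undoes only what TestCreate created, issued straight against the database since the UI forbids deletion:

        -- teardown for TestCreate
        DELETE FROM contract WHERE subscriber_id = :test_subscriber_id;

    This keeps each test case self-contained without the cost of a full drop-and-restore.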


  • How to integrate a PHP CMS with paypal so that only users who completed a payment can register and authenticate?

    - by ibiza
    I am currently using a PHP CMS - cmsmadesimple - in order to create a website where services will be sold. I intend to use PayPal 'Buy Now' buttons in order to offer a few packages that will be renewable every 1 month or every 3 months and that grant access to the secure content of the website for a given period of time. Everything is going well so far, but I am somewhat at a loss for the user registration process, as I have a few constraints I would like to enforce, and it would be nice to automate the process if possible. Here are the constraints:

    - Users should be able to register on my website and choose a password themselves
    - Only users that paid should be able to register
    - Access permissions should be disabled automatically after the service period if the package is not renewed

    And here is the process which I am thinking of:

    1. User clicks 'Buy' on my website
    2. User is redirected to PayPal and completes the payment
    3. The PayPal email used to pay should be returned to my server and somehow stored
    4. If it is a new email, the user needs to register on my website (else, if it is a returning customer, the deactivation flag for stopped payment should be removed to restore access)
    5. If a user does not renew his subscription, a deactivation flag should automatically be set on the email used, in order to lock access until the next payment

    Ideally, no human intervention is needed. What is the best way to implement all this? I am a bit at a loss. I found this article that explained a few things and even has a nice code snippet, except that I'm not sure where to plug it in. Thanks all


  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola

    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer, always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER - IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome.

    Enumerations in Relational Database

    This is a subject which is a very basic thing in relational databases, but it is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I will concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way.

    The concept

    Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status, and let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations.

    The wrong way

    This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you'll see the enumeration's description instantly when you do a simple SELECT query, and you don't have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose).

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] NVARCHAR(10)
        )

    You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 10 characters? You'll have to come and alter the table. There are other problems also, but I'll leave those for the reader to think about.

    The correct way

    Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like.

    First I will create a namespace table which tells the name of the enumeration, and I will add one row to it too. I'll write all the indexes and constraints here too.

        CREATE TABLE [CodeNamespace]
        (
            [Id] INT IDENTITY(1, 1),
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
            CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
        )
        GO
        INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
        GO

    Then I create a table that holds the actual values, and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.

        CREATE TABLE [CodeValue]
        (
            [CodeNamespaceId] INT NOT NULL,
            [Value] INT NOT NULL,
            [Description] NVARCHAR(100) NOT NULL,
            [OrderBy] INT,
            CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
            CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
        )
        GO
        -- 1 is the 'MaritalStatus' namespace
        INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
        INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
        INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
        INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
        GO

    Now there are four columns in the CodeValue table:

    - CodeNamespaceId tells which namespace the value belongs to.
    - Value tells the enumeration value which is used in the Person table (I'll show how this is done below).
    - Description tells what the value means. You can use this column, for example, in the UI's combo box.
    - OrderBy tells whether the values need to be ordered in some way when displayed in the UI.

    And here's the Person table again, now with the correct columns. I'll add one row here to show how enumerations are to be used.

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] INT
        )
        GO
        INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
        GO

    Now I said earlier that there is one problem with this. The MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer also.

    I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.

        CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
        AS
            SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
            FROM [dbo].[Person] p
            JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
            JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
        GO

        -- Select from the view
        SELECT * FROM [dbo].[Person_v]
        GO

    This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Autoscaling in a modern world… Part 2

    - by Steve Loethen
    When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand; reactions that were specified by a set of rules. Let's talk about those rules.

    Constraints. The first set of rules this application answered to were the constraints. Here is what they looked like:

        <constraintRules>
            <rule name="default" enabled="true" rank="1" description="The default constraint rule">
                <actions>
                    <range min="1" max="4" target="AutoscalingApplicationRole"/>
                </actions>
            </rule>
        </constraintRules>

    Pretty basic. We have one role, the "AutoscalingApplicationRole", and we have decided to have it live within a range of 1 to 4. This rule does not adjust anything itself; instead, it sets limits on what other rules can do. It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set. But for now, let's keep it simple. In the real world, you would probably use the minimum to set a lower-end SLA; a common value might be 2, to prevent the reactive rules from ever taking you down to one role. The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you waking up one morning to find a bill for hundreds of instances you didn't expect. So, here we have the range we want our application to live inside. This is good for our investigation and testing.

    Next, let's take a look at the reactive rules. These rules are what you use to react (hence, reactive rules) to changing demands on your application. The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization. The XML in the rules file looks like this:

        <reactiveRules>
            <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true">
                <when>
                    <any>
                        <greaterOrEqual operand="Length_05_holqueue" than="10"/>
                        <greaterOrEqual operand="CPU_05_holwebrole" than="65"/>
                    </any>
                </when>
                <actions>
                    <scale target="AutoscalingApplicationRole" by="1"/>
                </actions>
            </rule>
            <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true">
                <when>
                    <all>
                        <less operand="Length_05_holqueue" than="5"/>
                        <less operand="CPU_05_holwebrole" than="40"/>
                    </all>
                </when>
                <actions>
                    <scale target="AutoscalingApplicationRole" by="-1"/>
                </actions>
            </rule>
        </reactiveRules>
        <operands>
            <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average" />
            <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/>
        </operands>

    These rules are currently contained in a file called rules.xml that is in the root of the console application. The console app starts up, grabs the rules, and starts watching the two operands. When it detects that a rule has been satisfied, it performs the desired action (here, scale up or down by 1).

    But I want to host the autoscaler in the cloud. For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage. Look for part 3.


  • ASP.NET MVC 2 RC2 Routing - How to clear low-level values when using ActionLink to refer to a higher

    - by Gary McGill
    [NOTE: I'm using ASP.NET MVC 2 RC2.]

    I have URLs like this:

        /customer/123/order/456/item/index
        /customer/123/order/456/item/789/edit

    My routing table lists the most specific routes first, so I've got:

        // customer/123/order/456/item/789/edit
        routes.MapRoute(
            "item", // Route name
            "customer/{customerId}/order/{orderId}/item/{itemId}/{action}", // URL with parameters
            new { controller = "Items", action = "Details" }, // Parameter defaults
            new { customerId = @"\d+", orderId = @"\d+", itemId = @"\d+" } // Constraints
        );

        // customer/123/order/456/item/index
        routes.MapRoute(
            "items", // Route name
            "customer/{customerId}/order/{orderId}/item/{action}", // URL with parameters
            new { controller = "Items", action = "Index" }, // Parameter defaults
            new { customerId = @"\d+", orderId = @"\d+" } // Constraints
        );

    When I'm on the "Edit" page, I want a link back up to the "Index" page. So I use:

        ActionLink("Back to Index", "index")

    However, because there's an ambient item ID, this results in the URL:

        /Customer/123/Order/456/Item/789/Index

    ...whereas I want it to "forget" the item ID and just use:

        /Customer/123/Order/456/Item/Index

    I've tried overriding the item ID like so:

        ActionLink("Back to Index", "index", new { itemId = string.Empty })

    ...but that doesn't work. How can I persuade ActionLink to "forget" the item ID?
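    One avenue worth trying (a hedged sketch, not verified against RC2): generate the link from the named route instead of the action, supplying the route values explicitly; the "items" route has no {itemId} segment for the ambient value to attach to:

        <%= Html.RouteLink("Back to Index", "items",
                new { customerId = Model.CustomerId, orderId = Model.OrderId }) %>

    Model.CustomerId / Model.OrderId are stand-ins for wherever those values live in the view.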


  • Handling database failover for Rails applications on FreeBSD

    - by bianster
    I'm working on implementing database (PostgreSQL) failover for a Rails app that runs with Passenger/FreeBSD. Due to certain constraints regarding the server OS, it's necessary to continue using FreeBSD (as opposed to, say, Ubuntu). I'm finding it quite a challenge to handle failover within the Rails application by way of a customised database adapter, due to the fact that this application will be load-balanced between several webservers, plus the multiple Rails processes that Passenger spawns in each webserver. I previously looked at setting up Pacemaker/Corosync to manage database server failover on a common IP, but unfortunately I wasn't able to get past building the packages on FreeBSD. It does work rather well on Ubuntu 10.04, but I'm not likely to be able to use Ubuntu due to the OS constraints. I'm considering a custom witness daemon that simply pings the primary DB server; this witness daemon switches all the webservers to the standby DB server when the primary becomes uncontactable (permanently/temporarily), to avoid split-brain. Though I would really like to know if there is a way to get Pacemaker (or something similar) to do the switch on FreeBSD.


  • How to merge two different Makefiles?

    - by martijnn2008
    I have done some reading on "Merging Makefiles", and one suggestion was to leave the two Makefiles separate in different folders [1]. For me this looks counter-intuitive, because I have the following situation:

    - I have 3 source files (main.cpp, flexibility.cpp, constraints.cpp)
    - One of them (flexibility.cpp) makes use of the COIN-OR Linear Programming library (Clp)

    Installing this library on my computer creates sample Makefiles. I have adjusted one of them, and it currently makes a good working binary:

        # Copyright (C) 2006 International Business Machines and others.
        # All Rights Reserved.
        # This file is distributed under the Eclipse Public License.
        # $Id: Makefile.in 726 2006-04-17 04:16:00Z andreasw $

        ##########################################################################
        # You can modify this example makefile to fit for your own program.     #
        # Usually, you only need to change the five CHANGEME entries below.     #
        ##########################################################################

        # To compile other examples, either changed the following line, or
        # add the argument DRIVER=problem_name to make
        DRIVER = main

        # CHANGEME: This should be the name of your executable
        EXE = clp

        # CHANGEME: Here is the name of all object files corresponding to the source
        # code that you wrote in order to define the problem statement
        OBJS = $(DRIVER).o constraints.o flexibility.o

        # CHANGEME: Additional libraries
        ADDLIBS =

        # CHANGEME: Additional flags for compilation (e.g., include flags)
        ADDINCFLAGS =

        # CHANGEME: Directory to the sources for the (example) problem definition
        # files
        SRCDIR = .

        ##########################################################################
        # Usually, you don't have to change anything below.  Note that if you   #
        # change certain compiler options, you might have to recompile the      #
        # COIN package.                                                         #
        ##########################################################################

        COIN_HAS_PKGCONFIG = TRUE
        COIN_CXX_IS_CL = #TRUE
        COIN_HAS_SAMPLE = TRUE
        COIN_HAS_NETLIB = #TRUE

        # C++ Compiler command
        CXX = g++

        # C++ Compiler options
        CXXFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD

        # additional C++ Compiler options for linking
        CXXLINKFLAGS = -Wl,--rpath -Wl,/home/martijn/Downloads/COIN/coin-Clp/lib

        # C Compiler command
        CC = gcc

        # C Compiler options
        CFLAGS = -O3 -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wsequence-point -Wreturn-type -Wcast-qual -Wall -Wno-unknown-pragmas -Wno-long-long -DCLP_BUILD

        # Sample data directory
        ifeq ($(COIN_HAS_SAMPLE), TRUE)
          ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
            CXXFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\"
            CFLAGS += -DSAMPLEDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatasample`\"
          else
            CXXFLAGS += -DSAMPLEDIR=\"\"
            CFLAGS += -DSAMPLEDIR=\"\"
          endif
        endif

        # Netlib data directory
        ifeq ($(COIN_HAS_NETLIB), TRUE)
          ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
            CXXFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\"
            CFLAGS += -DNETLIBDIR=\"`PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --variable=datadir coindatanetlib`\"
          else
            CXXFLAGS += -DNETLIBDIR=\"\"
            CFLAGS += -DNETLIBDIR=\"\"
          endif
        endif

        # Include directories (we use the CYGPATH_W variables to allow compilation with Windows compilers)
        ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
          INCL = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --cflags clp`
        else
          INCL =
        endif
        INCL += $(ADDINCFLAGS)

        # Linker flags
        ifeq ($(COIN_HAS_PKGCONFIG), TRUE)
          LIBS = `PKG_CONFIG_PATH=/home/martijn/Downloads/COIN/coin-Clp/lib64/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig:/home/martijn/Downloads/COIN/coin-Clp/share/pkgconfig: pkg-config --libs clp`
        else
          ifeq ($(COIN_CXX_IS_CL), TRUE)
            LIBS = -link -libpath:`$(CYGPATH_W) /home/martijn/Downloads/COIN/coin-Clp/lib` libClp.lib
          else
            LIBS = -L/home/martijn/Downloads/COIN/coin-Clp/lib -lClp
          endif
        endif

        # The following is necessary under cygwin, if native compilers are used
        CYGPATH_W = echo

        # Here we list all possible generated objects or executables to delete them
        CLEANFILES = clp \
                main.o \
                flexibility.o \
                constraints.o

        all: $(EXE)

        .SUFFIXES: .cpp .c .o .obj

        $(EXE): $(OBJS)
                bla=;\
                for file in $(OBJS); do bla="$$bla `$(CYGPATH_W) $$file`"; done; \
                $(CXX) $(CXXLINKFLAGS) $(CXXFLAGS) -o $@ $$bla $(LIBS) $(ADDLIBS)

        clean:
                rm -rf $(CLEANFILES)

        .cpp.o:
                $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$<

        .cpp.obj:
                $(CXX) $(CXXFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi`

        .c.o:
                $(CC) $(CFLAGS) $(INCL) -c -o $@ `test -f '$<' || echo '$(SRCDIR)/'`$<

        .c.obj:
                $(CC) $(CFLAGS) $(INCL) -c -o $@ `if test -f '$<'; then $(CYGPATH_W) '$<'; else $(CYGPATH_W) '$(SRCDIR)/$<'; fi`

    The other Makefile compiles a lot of code and makes use of bison and flex. This one was also made by someone else. I am able to alter this Makefile when I want to add some code. This Makefile also makes a binary:

        CFLAGS=-Wall
        LDLIBS=-LC:/GnuWin32/lib -lfl -lm

        LSOURCES=lex.l
        YSOURCES=grammar.ypp
        CSOURCES=debug.cpp esta_plus.cpp heap.cpp main.cpp stjn.cpp timing.cpp tmsp.cpp token.cpp chaining.cpp flexibility.cpp exceptions.cpp
        HSOURCES=$(CSOURCES:.cpp=.h) includes.h
        OBJECTS=$(LSOURCES:.l=.o) $(YSOURCES:.ypp=.tab.o) $(CSOURCES:.cpp=.o)

        all: solver

        solver: CFLAGS+=-g -O0 -DDEBUG
        solver: $(OBJECTS) main.o debug.o
                g++ $(CFLAGS) -o $@ $^ $(LDLIBS)

        solver.release: CFLAGS+=-O5
        solver.release: $(OBJECTS) main.o
                g++ $(CFLAGS) -o $@ $^ $(LDLIBS)

        %.o: %.cpp
                g++ -c $(CFLAGS) -o $@ $<

        lex.cpp: lex.l grammar.tab.cpp grammar.tab.hpp
                flex -o$@ $<

        %.tab.cpp %.tab.hpp: %.ypp
                bison --verbose -d $<

        ifneq ($(LSOURCES),)
        $(LSOURCES:.l=.cpp): $(YSOURCES:.y=.tab.h)
        endif

        -include $(OBJECTS:.o=.d)

        clean:
                rm -f $(OBJECTS) $(OBJECTS:.o=.d) $(YSOURCES:.ypp=.tab.cpp) $(YSOURCES:.ypp=.tab.hpp) $(YSOURCES:.ypp=.output) $(LSOURCES:.l=.cpp) solver solver.release 2>/dev/null

        .PHONY: all clean debug release

    Both of these Makefiles are, for me, hard to understand. I don't know exactly what they do. What I want is to merge the two of them so I get only one binary; the code compiled by the second Makefile should be the result. I want to add flexibility.cpp and constraints.cpp to the second Makefile, but when I do, I get the following problem:

        flexibility.h:4:26: fatal error: ClpSimplex.hpp: No such file or directory
        #include "ClpSimplex.hpp"

    So the compiler can't find the Clp library. I also tried to copy-paste more code from the first Makefile into the second, but it still gives me that same error.

    Q: Can you please help me with merging the two Makefiles, or point out a more elegant way?

    Q: In this case, is it indeed better to merge the two Makefiles?

    I also tried to use cmake, but I gave up on that one quickly, because I don't know much about flex and bison.
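    Since the first Makefile already gets everything Clp-related from pkg-config, the piece that has to migrate is probably small. A sketch of the kind of addition the second Makefile would need (untested; the path is taken from the first Makefile):

        # Pull the Clp compile and link flags in via pkg-config, as the
        # COIN-OR sample Makefile does.
        CLP_PC_PATH := /home/martijn/Downloads/COIN/coin-Clp/lib/pkgconfig
        CLP_CFLAGS  := $(shell PKG_CONFIG_PATH=$(CLP_PC_PATH) pkg-config --cflags clp)
        CLP_LIBS    := $(shell PKG_CONFIG_PATH=$(CLP_PC_PATH) pkg-config --libs clp)

        CFLAGS += $(CLP_CFLAGS) -DCLP_BUILD
        LDLIBS += $(CLP_LIBS)

        # constraints.o then rides along via the existing %.o pattern rule
        CSOURCES += constraints.cpp

    (flexibility.cpp is already listed in CSOURCES in the second Makefile, which is why only the missing include and link flags stop the build.)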


  • GORM ID generation and belongsTo association?

    - by fabien-barbier
    I have two domains:

        class CodeSetDetail {
            String id
            String codeSummaryId

            static hasMany = [codes: CodeSummary]

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                version false
                id column: 'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession

            static belongsTo = [codeSetDetail: CodeSetDetail]

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                version false
                id column: 'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:

    code_set_detail:

    - code_set_detail_id
    - code_summary_id

    and code_summary:

    - code_summary_id
    - code_set_detail_id (should not exist)
    - code_class
    - name
    - accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and defined as the primary key in the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key in the code_summary table, and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in a table, and also map this key to another table?

