Search Results

Search found 4640 results on 186 pages for 'unique'.

  • How do I escape reserved words used as column names? MySQL/Create Table

    - by acidzombie24
    I am generating tables from classes in .NET, and one problem is that a class may have a field named key, which is a reserved MySQL keyword. How do I escape it in a CREATE TABLE statement? (Note: the other problem below is that a TEXT column must have a fixed size to be indexed/unique.) create table if not exists misc_info ( id INTEGER PRIMARY KEY AUTO_INCREMENT NOT NULL, key TEXT UNIQUE NOT NULL, value TEXT NOT NULL) ENGINE=INNODB;
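
    The usual fix is to quote identifiers with backticks and to give the unique column an explicit length (a UNIQUE index on a TEXT column needs a key prefix length, whereas VARCHAR(255) can be indexed directly). Below is a minimal sketch of a generator doing that, written in Java purely for illustration since the asker is generating the DDL from .NET; the generated SQL string is what matters, not the language.

```java
// Sketch: backtick-quote every generated identifier so reserved words such as
// `key` are accepted, and use VARCHAR with a length so the column can be UNIQUE.
public class DdlSketch {
    static String quote(String identifier) {
        return "`" + identifier.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        String ddl = "CREATE TABLE IF NOT EXISTS " + quote("misc_info") + " ("
                + quote("id") + " INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, "
                + quote("key") + " VARCHAR(255) UNIQUE NOT NULL, "
                + quote("value") + " TEXT NOT NULL) ENGINE=INNODB";
        System.out.println(ddl);
    }
}
```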

    Read the article

  • How to define a current user?

    - by ie
    Is it possible to define a current user? I found a stored procedure, 'sp_mgGetConnectedUsers'. It returns a result set whose only unique field is 'Address'. How could I associate an executing query with such an 'Address'? Please advise. Note: as far as I understand, another way to get the current user is to set a unique application ID for each connection, but I don't like that approach much.

    Read the article

  • Extract IDs from CSS

    - by nosuchip
    I have a CSS file with many entries like #id1, #id2, #id3, #id4 { ... } #id3, #id2 { ... } #id2, #id4 { ... } I want to extract the list of unique IDs using command-line tools (msys). Unique means each entry appears in the list only once. How? PS: I know how to do it using Python, but what about awk/sed/cat?

    Read the article

  • Removing [nid:n] in nodereference autocomplete

    - by AK
    Using the autocomplete field for a cck nodereference always displays the node id as a cryptic bracketed extension: Page Title [nid:23] I understand that this ensures that selections are unique in case nodes have the same title, but obviously this is a nasty thing to expose to the user. Has anyone had any success in removing these brackets, or adding a different unique identifier?

    Read the article

  • How to remove duplicate entries from a mysql db?

    - by Yegor
    I have a table with some ids + titles. I want to make the title column unique, but it has over 600k records already, some of which are duplicates (sometimes several dozen times over). How do I remove all duplicates, except one, so I can add a UNIQUE key to the title column after?
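
    One common approach is to keep a single row per title (say, the one with the lowest id), delete the rest with MySQL's multi-table DELETE, and only then add the UNIQUE key. A sketch over JDBC follows; the table name articles, the id primary-key column, and a VARCHAR title column are all assumptions, since the question doesn't give the schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DedupTitles {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; table/column names are assumptions.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password");
             Statement st = con.createStatement()) {
            // Keep the row with the smallest id for each title, delete the others.
            st.executeUpdate(
                "DELETE t1 FROM articles t1 " +
                "JOIN articles t2 ON t1.title = t2.title AND t1.id > t2.id");
            // Now that titles are unique, the constraint can be added.
            st.executeUpdate("ALTER TABLE articles ADD UNIQUE KEY uq_title (title)");
        }
    }
}
```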

    Read the article

  • Zip only public directory

    - by Nino55
    Hi guys, I have a lot of websites (100+ directories) and I want to create a single zip containing only each site's public subdirectory. My structure now is like: - Site 1 --- app --- tmp --- log --- public - Site 2 --- app --- tmp --- log --- public - ... 100+ dirs ... I need a single zip such that after unzipping it I see this structure: - Site 1 --- public - Site 2 --- public - others Any suggestion on how I can do that with the Linux zip/tar commands? Thanks so much!

    Read the article

  • PHP make all possible variants of 4char A-Z,a-z,0-9

    - by Mike
    I have to make a list of all possible permutations of 4 characters A-Z, a-z, 0-9 and combinations of all of these. How can I loop through all of the possible combinations and print them? What's it for: I need to put these in an HTML document that I can then print and hand out as random unique usernames for our university, so that students can provide feedback based on one unique ID that will be invalidated when used. I cannot change this procedure into a better one!
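
    For what it's worth, the full enumeration is 62^4 = 14,776,336 strings, which four nested loops handle easily. The question asks for PHP; the sketch below shows the same idea in Java.

```java
public class FourCharIds {
    public static void main(String[] args) {
        // Every 4-character string over A-Z, a-z, 0-9: 62^4 = 14,776,336 lines.
        char[] alphabet =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789".toCharArray();
        StringBuilder sb = new StringBuilder(4);
        for (char c1 : alphabet)
            for (char c2 : alphabet)
                for (char c3 : alphabet)
                    for (char c4 : alphabet) {
                        sb.setLength(0);
                        sb.append(c1).append(c2).append(c3).append(c4);
                        System.out.println(sb);
                    }
    }
}
```

    Shuffling the list and handing out only as many entries as there are students is probably more practical than printing all 14.8 million, but that is a side point.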

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Finding a person in the forest, or limiting the AD result in the SharePoint People Picker. There are times when we need to limit the SharePoint audience of certain farms or servers or site collections to a particular audience. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts – but we must be able to handle the “audience restrictions” as required. So here is how it’s done in a nutshell. Important note: not all of this can be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is: stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther Note the long-hyphenated property name. Now to filling in the ADQuery.   LDAP Query in a nutshell Syntax LDAP is no older than SQL, and an LDAP query is actually a query against the LDAP database. LDAP attributes are the equivalent of database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators including: = Equal <= Lower than or equal >= Greater than or equal… and memberOf – a group membership. ! Not * Wildcard Equal and memberOf are the most commonly used. Checking for absence uses the ! (not) and the * (wildcard). Example: (SN=Grant) All whose last name – SurName – is Grant. Example: (!(SN=Grant)) All except Grant. Example: (!(SN=*)) All where there is no SurName, i.e. SurName is absent (probably rappers). Example: (CN=MyGroup) Common Name is MyGroup.  Example: (GN=J*) All the Given Names that start with J (JJ, Jane, Jon, John, etc.). The cryptic SN, CN, GN, etc. are attributes; more about them later. All the queries are enclosed in parentheses (Query). Complex queries are comprised of sets joined by AND or OR conditions. AND is denoted by the ampersand (&) and OR is denoted by the vertical pipe (|). The general syntax is that of prefix (Polish) notation, where the operator precedes its operands, e.g. +ab is the sum of a and b. In an LDAP query, (&(A)(B)) will garner the objects for which both A and B are true. (&(A)(B)(C)) will garner the objects for which A, B and C are all true; there’s no limit to the number of conditions. (|(A)(B)) will garner the objects for which either A or B is true. (|(A)(B)(C)) will garner the objects for which at least one of A, B and C is true; again, there’s no limit to the number of conditions. More complex queries have both types of conditions, and the parentheses determine the order of operations. Attributes Now let’s get into the SN, CN, GN, and other attributes of the query. SN – is the SurName (last name). GN – is the Given Name (first name). CN – is the Common Name, usually GN followed by SN. OU – is an Organizational Unit such as a division, department, etc. DC – is a Domain Component in the AD forest. l – lower case ‘L’ stands for location. Jerusalem anybody? Or Katmandu. UPN – User Principal Name, usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved. Some limit the total to 8 characters. If we have many ‘jsmith’ we have to somehow distinguish them from each other. DN – is the distinguished name – a name unique to the AD forest in which it lives. 
Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have stricter requirement. Each group has to have a unique name - its CN and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different object like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person). Putting it altogether Let’s get a list of all the Johns in the SPAdmin group of the Jerusalem that local domain. (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above. |(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky) Here I added the or pipeline at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: You may see a dryer but more complete list of attributes rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes) And the third is –pn peoplepicker-serviceaccountdirectorypaths –pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monogram – peoplepicker-searchadcustomquery - is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why? C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (!Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully. 
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.   That’s all folks!

    Read the article

  • Understanding EDI 997.

    - by VishnuTiwariBlog
    Hi Guys, This is for the EDI starter. Below is the complete detail of EDI 997 segment and element details. 997 Functional Acknowledgment Transaction Layout: No. Seg ID Name Description Example M/O 010 ST Transaction Set Header To indicate the start of a transaction set and to assign a control number ST*997*382823~   M ST01   Code uniquely identifying a Transaction Set   M ST02   Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set   M 020 AK1 Functional Group Response Header To start acknowledgment of a functional group AK1*QM*2459823 M        AK101   Code identifying a group of application related transaction sets IN Invoice Information (810) SH Ship Notice/Manifest (856)     AK102   Assigned number originated and maintained by the sender     030 AK2 Transaction Set Response Header To start acknowledgment of a single transaction set AK2*856*001 M AK201   Code uniquely identifying a Transaction Set 810 Invoice 856 Ship Notice/Manifest   M AK202   Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set   M 040 AK3 Data Segment Note To report errors in a data segment and identify the location of the data segment AK3*TD3*9 O AK301 Segment ID Code Code defining the segment ID of the data segment in error (See Appendix A - Number 77)     AK302 Segment Position in Transaction Set The numerical count position of this data segment from the start of the transaction set: the transaction set header is count position 1     050 AK4 Data Element Note To report errors in a data element or composite data structure and identify the location of the data element AK4*2**2 O AK401 Position in Segment Code indicating the relative position of a simple data element, or the relative position of a composite data structure combined with the relative position of the component data element within the composite data structure, in error; the count starts with 1 for the simple data element or composite data structure immediately following the segment ID     AK402 Element Position in Segment This is used to indicate the relative position of a simple data element, or the relative position of a composite data structure with the relative position of the component within the composite data structure, in error; in the data segment the count starts with 1 for the simple data element or composite data structure immediately following the segment ID     AK403 Data Element Syntax Error Code Code indicating the error found after syntax edits of a data element 1 Mandatory Data Element Missing 2 Conditional Required Data Element Missing 3 Too Many Data Elements 4 Data Element Too Short 5 Data Element Too Long 6 Invalid Character in Data Element 7 Invalid Code Value 8 Invalid Date 9 Invalid Time 10 Exclusion Condition Violated     AK404 Copy of Bad Data Element This is a copy of the data element in error     060 AK5 AK5 Transaction Set Response Trailer To acknowledge acceptance or rejection and report errors in a transaction set AK5*A~ AK5*R*5~ M AK501 Transaction Set Acknowledgment Code Code indicating accept or reject condition based on the syntax editing of the transaction set A Accepted E Accepted But Errors Were Noted R Rejected     AK502 Transaction Set Syntax Error Code Code indicating error found based on the syntax editing of a transaction set 1 Transaction Set Not Supported 2 Transaction Set Trailer Missing 3 Transaction Set Control Number in Header and 
Trailer Do Not Match 4 Number of Included Segments Does Not Match Actual Count 5 One or More Segments in Error 6 Missing or Invalid Transaction Set Identifier 7 Missing or Invalid Transaction Set Control Number     070 AK9 Functional Group Response Trailer To acknowledge acceptance or rejection of a functional group and report the number of included transaction sets from the original trailer, the accepted sets, and the received sets in this functional group AK9*A*1*1*1~ AK9*R*1*1*0~ M AK901 Functional Group Acknowledge Code Code indicating accept or reject condition based on the syntax editing of the functional group A Accepted E Accepted, But Errors Were Noted. R Rejected     AK902 Number of Transaction Sets Included Total number of transaction sets included in the functional group or interchange (transmission) group terminated by the trailer containing this data element     AK903 Number of Received Transaction Sets Number of Transaction Sets received     AK904 Number of Accepted Transaction Sets Number of accepted Transaction Sets in a Functional Group     AK905 Functional Group Syntax Error Code Code indicating error found based on the syntax editing of the functional group header and/or trailer 1 Functional Group Not Supported 2 Functional Group Version Not Supported 3 Functional Group Trailer Missing 4 Group Control Number in the Functional Group Header and Trailer Do Not Agree 5 Number of Included Transaction Sets Does Not Match Actual Count 6 Group Control Number Violates Syntax     080 SE Transaction Set Trailer To indicate the end of the transaction set and provide the count of the transmitted segments (including the beginning (ST) and ending (SE) segments) SE*9*223~ M SE01 Number of Included Segments Total number of segments included in a transaction set including ST and SE segments     SE02 Transaction Set Control Number Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij’s overall strategies of both the business development and marketing departments. Companies looking for new ERP systems spend so much time comparing features and functions of software products but too often short change the value of their own processes.  Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster.  When asked how, the answer is often “by implementing best practices”.  But the term ‘best practices’ is frequently used to mean ‘doing things the way everyone else does them’ rather than a starting point or benchmark to build upon by adding your own value. Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies.  But not all companies are alike.  Do you ever tell your customers “We are just like our competition and have no competitive differentiation”?  Probably not.  So why should the implementation of your business processes be just like your competitor’s?  Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business.  These unique aspects—the competitive differentiators that companies use to thrive in a crowded marketplace—can and should be supported by the implementation of business systems like ERP. Modern ERP systems like Oracle’s JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company’s core operations.  But how can a company take advantage of this footprint without blowing up their implementation budget?  Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with ‘best practices’.  Too often what they are really saying is that you will have to abandon your key operational differentiators to fit a vendor’s template for your business—or extend your implementation and postpone the realization of any benefits. Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators.  Oracle Accelerate Solutions speed the time it takes to implement JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We’ve been a JD Edwards implementation partner since 1986 and we now leverage Oracle Business Accelerators—cloud based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer in to one set of processes like many template or “clone and go” approaches do. You retain the ability to reconfigure your applications—without customization—as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings. 
And if ‘better and faster’ is your goal for your business performance, shouldn’t it be the goal for your ERP implementation as well? Established in 1986, brij has been dedicated solely to helping its customers implement Oracle’s JD Edwards solutions and to maximize the value of those customers’ IT investments. They are a Gold level member in Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen as a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gochya can be. On the surface, the problem is squarely the fault of MDS but underneath MDS is, in fact, innocent. To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code: <af:forEach var="item" items="#{itemList.items}"> <af:commandLink id="cl1" text="#{item.label}" action="#{item.doAction}"  partialSubmit="true"/> </af:forEach> Looks innocent enough right? We see a bunch of links printed out, great. The issue here though is the id attribute. Logically you can kind of see the problem. The forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified - cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing?  The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However, (you could see this coming), the same is not true when running with Facelets  (this is under 11.1.2.n)  in that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime.  So subtle chaos can ensue.  The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes I've wasted hours on just such an issue.  The Solution  Once you're aware of this one it's really simple to fix it, there are two options: Remove the id attribute on components that will cause some kind of submission within the forEach loop altogether and let JSF do the right thing in generating them. Then you'll be assured of uniqueness. Use the var attribute of the loop to generate a unique id for each child instance.  for example in the above case: <af:commandLink id="cl1_#{item.index}" ... />.  So one to watch out for in your upgrades to JSF 2 and one perhaps, for your coding standards today to prepare you for. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.). The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind: Before in models.py class User(Document): domain = StringProperty() class Group(Document): domain = StringProperty() name = StringProperty() user_ids = StringListProperty() # method that returns related document set def users(self): return [User.get(id) for id in self.user_ids] # method that queries a couch view optimized for a specific lookup @classmethod def by_name(cls, domain, name): # the view method is provided by couchdbkit and handles # wrapping json CouchDB results as Python objects, and # can take various parameters modifying behavior return cls.view('groups/by_name', key=[domain, name]) # method that creates a related document def get_new_user(self): user = User(domain=self.domain) user.save() self.user_ids.append(user._id) return user in views.py: from models import User, Group # there are tons of views like this, (request, domain, ...) def create_new_user_in_group(request, domain, group_name): group = Group.by_name(domain, group_name)[0] user = User(domain=domain) user.save() group.user_ids.append(user._id) group.save() in group/by_name/map.js: function (doc) { if (doc.doc_type == "Group") { emit([doc.domain, doc.name], null); } } After models.py class DomainDocument(Document): domain = StringProperty() @classmethod def domain_view(cls, *args, **kwargs): kwargs['key'] = [cls.domain.default] + kwargs['key'] return super(DomainDocument, cls).view(*args, **kwargs) @classmethod def get(cls, *args, **kwargs, validate_domain=True): ret = super(DomainDocument, cls).get(*args, **kwargs) if validate_domain and ret.domain != cls.domain.default: raise Exception() return ret def models(self): # a mapping of all models in the application. 
accessing one returns the equivalent of class BoundUser(User): domain = StringProperty(default=self.domain) class User(DomainDocument): pass class Group(DomainDocument): name = StringProperty() user_ids = StringListProperty() def users(self): return [self.models.User.get(id) for id in self.user_ids] @classmethod def by_name(cls, name): return cls.domain_view('groups/by_name', key=[name]) def get_new_user(self): user = self.models.User() user.save() views.py @domain_view # decorator that sets request.models to the same sort of object that is returned by DomainDocument.models and removes the domain argument from the URL router def create_new_user_in_group(request, group_name): group = request.models.Group.by_name(group_name) user = request.models.User() user.save() group.user_ids.append(user._id) group.save() (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key or some other similar solution.) function (doc) { if (doc.doc_type == "Group") { emit([doc.name], null); } } Pros and Cons So what are the pros and cons of this? Pros: DRYer prevents you from creating related documents but forgetting to set the domain. prevents you from accidentally writing a django view - couch view execution path that leads to a security breach doesn't prevent you from accessing underlying self.domain and normal Document.view() method potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal are. Cons: adds some complexity hides what's really happening requires no model modules to have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique. removes explicit dependency documentation (from group.models import Group)

    Read the article

  • How to properly test for constraint violation in hibernate?

    - by Cesar
    I'm trying to test Hibernate mappings, specifically a unique constraint. My POJO is mapped as follows: <property name="name" type="string" unique="true" not-null="true" /> What I want to do is to test that I can't persist two entities with the same name: @Test(expected=ConstraintViolationException.class) public void testPersistTwoExpertiseAreasWithTheSameNameIsNotAllowed(){ ExpertiseArea ea = new ExpertiseArea("Design"); ExpertiseArea otherEA = new ExpertiseArea("Design"); ead.setSession(getSessionFactory().getCurrentSession()); ead.getSession().beginTransaction(); ead.makePersistent(ea); ead.makePersistent(otherEA); ead.getSession().getTransaction().commit(); } On commiting the current transaction, I can see in the logs that a ConstraintViolationException is thrown: 16:08:47,571 DEBUG SQL:111 - insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) Hibernate: insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) 16:08:47,571 DEBUG SQL:111 - insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) Hibernate: insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?) 16:08:47,572 WARN JDBCExceptionReporter:100 - SQL Error: -104, SQLState: 23505 16:08:47,572 ERROR JDBCExceptionReporter:101 - integrity constraint violation: unique constraint or index violation; SYS_CT_10036 table: EXPERTISEAREA 16:08:47,573 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update So I would expect the test to pass, since the expected ConstraintViolationException is thrown. However, the test never completes (neither pass nor fails) and I have to manually kill the test runner. What's the correct way to test this?
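
    One pattern that tends to make this kind of test finish deterministically (a sketch under assumptions, not a verified fix for the hang described above) is to force the INSERTs with an explicit flush() inside a try block and always roll the transaction back in finally, so the session is never left holding a broken transaction. ExpertiseArea and getSessionFactory() are taken from the question; the base test class is hypothetical.

```java
import org.hibernate.Session;
import org.hibernate.Transaction;
import org.hibernate.exception.ConstraintViolationException;
import org.junit.Test;

public class ExpertiseAreaConstraintTest extends BaseHibernateTest { // hypothetical base class
    @Test(expected = ConstraintViolationException.class)
    public void persistingTwoExpertiseAreasWithTheSameNameIsNotAllowed() {
        Session session = getSessionFactory().getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(new ExpertiseArea("Design"));
            session.save(new ExpertiseArea("Design"));
            session.flush(); // forces both INSERTs; the violation surfaces here
        } finally {
            tx.rollback();   // never leave the failed transaction open
        }
    }
}
```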

    Read the article

  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
    I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but have discovered that my definition of unique is not exactly the same as the provider's. The current application that imports this data is a C#.Net 3.5 console application, it does so using SqlBulkCopy into a MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data, or rather the fields I can come up with making sense as a composite key end up also having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique. If one field is different, it is valid enough to be inserted. I looked at creating an MD5 hash of the entire row, inserting that into the database and using a constraint to prevent SqlBulkCopy from inserting the row,but I don't see how to get the MD5 Hash into the BulkCopy operation and I'm not sure if the whole operation would fail and roll back if any one record failed, or if it would continue. The file contains a very large number of records, going row by row in the XML, querying the database for a record that matches all fields, and then deciding to insert is really the only way I can see being able to do this. I was just hoping not to have to rewrite the application entirely, and the bulk copy operation is so much faster. Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestion for a different way to do this?
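
    One way to make "the whole row is the key" concrete is to hash all the fields into a fixed-width value, store that hash in a column with a UNIQUE index, and let the database reject duplicates. The question's importer is C#; the sketch below only illustrates the hashing idea in Java, and the field values and separator handling are assumptions.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RowHash {
    // Hashes every field, using a separator byte so ("ab","c") and ("a","bc")
    // differ, and a distinct terminator for nulls so null != empty string.
    static String md5OfRow(String... fields) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (String field : fields) {
            if (field != null) {
                md5.update(field.getBytes(StandardCharsets.UTF_8));
            }
            md5.update((byte) (field == null ? 0x00 : 0x1F));
        }
        return String.format("%032x", new BigInteger(1, md5.digest()));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical field values; a real row would have ~100 of them.
        System.out.println(md5OfRow("2010-05-01", "ACME", null, "42.00"));
    }
}
```

    Bulk-loading into a keyless staging table and then inserting only the rows whose hash is not already present sidesteps the question of whether a failed SqlBulkCopy batch rolls back; that staging step is likewise an assumption about the schema rather than something from the question.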

    Read the article

  • Paperclip and Amazon S3 Issue

    - by Jimmy
    Hey everyone, I have a rails app running on Heroku. I am using paperclip for some simple image uploads for user avatars and some other things, I have S3 set as my backend and everything seems to be working fine except when trying to push to S3 I get the following error: The AWS Access Key Id you provided does not exist in our records. Thinking I mis-pasted my access key and secret key, I tried again, still no luck. Thinking maybe it was just a buggy key I deactivated it and generated a new one. Still no luck. Now for both keys I have used the S3 browser app on OS X and have been able to connect to each and view my current buckets and add/delete buckets. Is there something I should be looking out for? I have my application's S3 and paperclip setup like so development: bucket: (unique name) access_key_id: ENV['S3_KEY'] secret_access_key: ENV['S3_SECRET'] test: bucket: (unique name) access_key_id: ENV['S3_KEY'] secret_access_key: ENV['S3_SECRET'] production: bucket: (unique_name) access_key_id: ENV['S3_KEY'] secret_access_key: ENV['S3_SECRET'] has_attached_file :cover, :styles => { :thumb => "50x50" }, :storage => :s3, :s3_credentials => "#{RAILS_ROOT}/config/s3.yml", :path => ":class/:id/:style/:filename" Note: I just added the (unique name) bits, those aren't actually there--I have also verified bucket names, but I don't even think this is getting that far. I also have my heroku environment vars setup correctly and have them setup on dev

    Read the article

  • Search in Stack

    - by WPS
    Hi, I have a Java Stack with some custom objects added to it. These objects contain a unique ID as one of their fields. I need to get the index of an object in the stack based on that unique ID. Please find the example. class TestVO{ private String name; private String uniqueId; //getters and setters } public class TestStack{ public static void main(String args[]){ TestVO vo1=new TestVO(); TestVO vo2=new TestVO(); TestVO vo3=new TestVO(); vo1.setName("Test Name 1"); vo1.setId("123"); vo2.setName("Test name 2"); vo2.setId("234"); Stack<TestVO> stack=new Stack<TestVO>(); stack.add(vo1); stack.add(vo2); //I need to get the index of a VO from the stack using its unique ID } } Can someone please help me to implement this?
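
    Since java.util.Stack extends Vector, a plain indexed scan does the job. A minimal sketch, assuming TestVO exposes a getId() accessor matching the setId() calls above:

```java
import java.util.Stack;

public class StackSearch {
    // Returns the index of the first element whose id matches, or -1 if absent.
    static int indexOfId(Stack<TestVO> stack, String id) {
        for (int i = 0; i < stack.size(); i++) {
            if (id.equals(stack.get(i).getId())) {
                return i;
            }
        }
        return -1;
    }
}
```

    Stack also has a built-in search(Object) method, but it returns a 1-based distance from the top of the stack and relies on equals(), so it only helps if TestVO.equals() is defined over the unique ID.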

    Read the article

  • Which itch does a gravatar scratch?

    - by WizardOfOdds
    This is a very serious question: I've seen lots of threads here about gravatars but I couldn't find an answer to this question: what computer identification/authentication (?) problem, if any, are gravatars supposed to solve? Neither the Wikipedia entry nor the official website are very useful. The official website mentions a "globally unique" picture. Unique in what sense? As far as I can see it's only the hash that is unique: two persons can have two pictures looking very similar if not identical. Note that this question is not about which problems gravatars unarguably cause (like leaking 10% of the stackoverflow.com accounts' email addresses, as discussed here: "gravatars can leak email addresses") but about which authentication (?) problems, if any, gravatars are supposed to solve. Is the goal just to have a cool/funny/cute icon and save bandwidth by having it stored on a remote website, or is there more to it, like serving a real authentication purpose that I'd be completely missing? Note that I've got nothing against them and find them rather cool, but I'm just having a hard time figuring out what their purpose is and whether I should care about them in the webapps I'm developing.

    Read the article

  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT data type column as primary key, especially an IDENTITY key column, offers better performance than using a GUID or char/varchar data type column as primary key. I try to use an IDENTITY key as primary key wherever possible. But recently I came across a schema where the tables were horizontally partitioned and were managed via a Partitioned View. So the tables could not have an IDENTITY column since that would make the Partitioned View non-updatable. One workaround was to create a dummy 'keygenerator' table with an identity column to generate IDs for the primary key. But this would mean having a 'keygenerator' table for each of the Partitioned Views. My next thought was to use float as a primary key. The reason is the following key algorithm that I devised DECLARE @KEY FLOAT SET @KEY = CONVERT(FLOAT,GETDATE())/100000.0 SET @KEY = @EMP_ID + @KEY Here's how it works. CONVERT(FLOAT,GETDATE()) gives the float representation of the current datetime, since internally all datetimes are represented by SQL as a float value. CONVERT(FLOAT,GETDATE())/100000.0 converts the float representation into a complete decimal value, i.e. all digits are pushed to the right side of the ".". @KEY = @EMP_ID + @KEY adds the Employee ID, which is an integer, to this decimal value. The logic is that the Employee ID is guaranteed to be unique across sessions since an employee cannot connect to an application more than once at the same time. And for the same employee, each time a key is generated the current datetime will be unique. In all, a key unique across all employee sessions and across time. So for Emp IDs 11 and 12, I have key values like 12.40046693321566357, 11.40046693542361111 But my concern is whether a float data type as primary key offers benefits compared to choosing GUID or char/varchar as primary keys. Also important: because of partitioning, the float column is going to be part of a composite key.

    Read the article

  • Elegantly handling constraint violations in EJB/JPA environment?

    - by hallidave
    I'm working with EJB and JPA on a Glassfish v3 app server. I have an Entity class where I'm forcing one of the fields to be unique with a @Column annotation. @Entity public class MyEntity implements Serializable { private String uniqueName; public MyEntity() { } @Column(unique = true, nullable = false) public String getUniqueName() { return uniqueName; } public void setUniqueName(String uniqueName) { this.uniqueName = uniqueName; } } When I try to persist an object with this field set to a non-unique value I get an exception (as expected) when the transaction managed by the EJB container commits. I have two problems I'd like to solve: 1) The exception I get is the unhelpful "javax.ejb.EJBException: Transaction aborted". If I recursively call getCause() enough times, I eventually reach the more useful "java.sql.SQLIntegrityConstraintViolationException", but this exception is part of the EclipseLink implementation and I'm not really comfortable relying on it's existence. Is there a better way to get detailed error information with JPA? 2) The EJB container insists on logging this error even though I catch it and handle it. Is there a better way to handle this error which will stop Glassfish from cluttering up my logs with useless exception information? Thanks.
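
    For what it's worth, java.sql.SQLIntegrityConstraintViolationException is part of standard JDBC rather than EclipseLink, so walking the cause chain for it does not bind the code to a particular persistence provider. A small sketch of that unwrapping:

```java
import java.sql.SQLIntegrityConstraintViolationException;

public final class PersistenceExceptions {
    private PersistenceExceptions() {}

    // Walks getCause() from the EJBException down, looking for a JDBC
    // integrity-constraint violation anywhere in the chain.
    public static boolean isConstraintViolation(Throwable t) {
        for (Throwable cause = t; cause != null; cause = cause.getCause()) {
            if (cause instanceof SQLIntegrityConstraintViolationException) {
                return true;
            }
        }
        return false;
    }
}
```

    Calling flush() on the EntityManager inside the transaction, instead of waiting for the container-managed commit, also tends to surface the violation at a point where it can still be caught and translated into something meaningful.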

    Read the article

  • jQuery HOW TO?? pass additional parameters to success callback for $.ajax call ?

    - by dotnetgeek
    Hello jQuery Ninjas! I am trying, in vain it seems, to be able to pass additional parameters back to the success callback method that I have created for a successful ajax call. A little background. I have a page with a number of dynamically created textbox / selectbox pairs. Each pair having a dynamically assigned unique name such as name="unique-pair-1_txt-url" and name="unique-pair-1_selectBox" then the second pair has the same but the prefix is different. In an effort to reuse code, I have crafted the callback to take the data and a reference to the selectbox. However when the callback is fired the reference to the selectbox comes back as 'undefined'. I read here that it should be doable. I have even tried taking advantage of the 'context' option but still nothing. Here is the script block that I am trying to use: <script type="text/javascript" language="javascript"> $j = jQuery.noConflict(); function getImages(urlValue, selectBox) { $j.ajax({ type: "GET", url: $j(urlValue).val(), dataType: "jsonp", context: selectBox, success:function(data){ loadImagesInSelect(data, $j(this)) } , error:function (xhr, ajaxOptions, thrownError) { alert(xhr.status); alert(thrownError); } }); } function loadImagesInSelect(data, selectBox) { //var select = $j('[name=single_input.<?cs var:op_unique_name ?>.selImageList]'); var select = selectBox; select.empty(); $j(data).each(function() { var theValue = $j(this)[0]["@value"]; var theId = $j(this)[0]["@name"]; select.append("<option value='" + theId + "'>" + theValue + "</option>"); }); select.children(":first").attr("selected", true); } From what I have read, I feel I am close but I just cant put my finger on the missing link. Please help in your typical ninja stealthy ways. TIA

    Read the article

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form (t1, k1) (t2, k2) ... (tn, kn) ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+) and the size of k (approx 500 bytes) makes it impossible to store everything in memory. I would like to find periodic occurrences of keys in this sequence. For example, if I have the sequence (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b) the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store the average of the difference between consecutive timestamps of each key and the standard deviation of the same, I might be able to make a single pass and report only the ones that have an acceptable standard deviation (ideally, 0). However, that requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
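
    A strict version of the hashing idea described above (period deviation of exactly 0, as in the example) needs only a few fields per key: the previous timestamp, the first observed gap, and whether every later gap matched it. It still keeps one entry per unique key, so it shrinks the per-key payload rather than the key count. A sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class PeriodFinder {
    // Per-key state: previous timestamp, candidate period (-1 = unknown), still periodic?
    private static final class State {
        long lastTs;
        long period = -1;
        boolean periodic = true;
        State(long ts) { this.lastTs = ts; }
    }

    private final Map<String, State> byKey = new HashMap<>();

    // Feed (timestamp, key) pairs in increasing timestamp order.
    public void observe(long ts, String key) {
        State s = byKey.get(key);
        if (s == null) {
            byKey.put(key, new State(ts));
            return;
        }
        long gap = ts - s.lastTs;
        if (s.period == -1) {
            s.period = gap;              // first gap defines the candidate period
        } else if (gap != s.period) {
            s.periodic = false;          // one deviation disqualifies the key
        }
        s.lastTs = ts;
    }

    public void report() {
        byKey.forEach((key, s) -> {
            if (s.periodic && s.period > 0) {
                System.out.println("(" + key + ", " + s.period + ")");
            }
        });
    }

    public static void main(String[] args) {
        PeriodFinder f = new PeriodFinder();
        long[][] seq = {{1,0},{2,1},{3,2},{4,1},{5,0},{6,1},{7,3},{8,1},{9,0},{10,1}};
        String[] keys = {"a", "b", "c", "d"};
        for (long[] t : seq) f.observe(t[0], keys[(int) t[1]]);
        f.report();   // prints (a, 4) and (b, 2); c and d occur only once
    }
}
```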

    Read the article

  • DDD and MVC: Difference between 'Model' and 'Entity'

    - by Nathan Loding
    I'm seriously confused about the concept of the 'Model' in MVC. Most frameworks that exist today put the Model between the Controller and the database, and the Model almost acts like a database abstraction layer. The concept of 'Fat Model, Skinny Controller' is lost as the Controller starts doing more and more logic. In DDD, there is also the concept of a Domain Entity, which has a unique identity. As I understand it, a user is a good example of an Entity (unique userid, for instance). The Entity has a life-cycle -- its values can change throughout the course of the action -- and then it's saved or discarded. The Entity I describe above is what I thought the Model was supposed to be in MVC. How off-base am I? To clutter things more, you throw in other patterns, such as the Repository pattern (maybe putting a Service in there). It's pretty clear how the Repository would interact with an Entity -- but how does it interact with a Model? Controllers can have multiple Models, which makes it seem like a Model is less a "database table" than it is a unique Entity. So, in very rough terms, which is better? No "Model" really ... class MyController { public function index() { $repo = new PostRepository(); $posts = $repo->findAllByDateRange('within 30 days'); foreach($posts as $post) { echo $post->Author; } } } Or this, which has a Model as the DAO? class MyController { public function index() { $model = new PostModel(); // maybe this returns a PostRepository? $posts = $model->findAllByDateRange('within 30 days'); while($posts->getNext()) { echo $posts->Post->Author; } } } Neither of those examples even does what I was describing above. I'm clearly lost. Any input?

    Read the article

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column: CREATE TABLE Things ( ... CreatedDate datetime DEFAULT getdate(), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate) ) But I can't guarantee that two Things won't have the same time. So my requirements can't really be achieved by a datetime column. I could add a dummy identity int column, and cluster on that: CREATE TABLE Things ( ... RowID int IDENTITY(1,1), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID) ) But you'll notice that my table already contains a timestamp column; a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want for a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column: CREATE TABLE Things ( ... [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp) ) Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: Community wiki, since the answers are subjective.

    Read the article
