Search Results

Search found 7693 results on 308 pages for 'jen fields'.


  • Entity framework separating entities for product and customer specific implementation

    - by Codecat
    I am designing an application with the intention of making it a product line. I would like to extend the functionality across all layers, and the first struggle is with the domain models. For example, the core functionality would have an entity named Invoice with a few standard fields, and then customer requirements will add some new fields to it, but I don't want to add them to the core Invoice class. For every customer I could use a customer-specific DbContext and inject the correct context with dependency injection. Also, every customer will get their own deployment.

        public class Product.Domain.Invoice
        {
            public int InvoiceId { get; set; }
            // Other fields
        }

    How should I approach this problem? Solution 1 does not work, since Entity Framework does not allow two classes with the same simple name:

        public class CustomerA.Domain.Invoice : Product.Domain.Invoice
        {
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }

    Solution 2: create a separate table and link it to the core domain table. Reusing services and controllers could be harder.

        public class CustomerA.Domain.CustomerAInvoice
        {
            public Product.Domain.Invoice Invoice { get; set; }
            public User ReviewedBy { get; set; }
            public DateTime? ReviewedOn { get; set; }
        }
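
    Editor's sketch of how Solution 2 might be wired up, assuming EF 4.1+ code-first (the question doesn't name an EF version). A shared primary key keeps the core Invoice table untouched, so generic services keep working against Invoice while customer-specific code queries the extension set; the context name and the ReviewedBy type are illustrative assumptions:

        using System;
        using System.Data.Entity;

        // Hypothetical extension entity: one optional row per core Invoice.
        public class CustomerAInvoice
        {
            public int InvoiceId { get; set; }            // PK, shared with Invoice
            public virtual Invoice Invoice { get; set; }
            public string ReviewedBy { get; set; }        // stand-in for the User type
            public DateTime? ReviewedOn { get; set; }
        }

        public class CustomerAContext : DbContext
        {
            public DbSet<Invoice> Invoices { get; set; }
            public DbSet<CustomerAInvoice> CustomerAInvoices { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Each extension row requires its core Invoice; an Invoice
                // may or may not have an extension row (1 : 0..1).
                modelBuilder.Entity<CustomerAInvoice>()
                            .HasKey(e => e.InvoiceId)
                            .HasRequired(e => e.Invoice)
                            .WithOptional();
            }
        }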

    Read the article

  • A tip: Updating Data in SharePoint 2010 using REST API

    - by Sahil Malik
    SharePoint 2010 Training: more information. Here is a little tip that will save you hours of head scratching. See, there are two ways to update data in SharePoint using the REST-based API. A PUT request is used to update an entire entity; if no values are specified for fields in the entity, the fields will be set to default values. A MERGE request is used to update only those field values that have changed; any fields that are not specified by the operation will remain set to their current value. Now, sit back and think about it. With PUT you are going to update the entire entity! Hmm. Which means you need to a) specify every column value, and b) ensure that the read-only values match what was supplied to you. What a pain in the donkey! So 99/100 times, a PUT request will give you an HTTP 500 "internal server error occurred", which is just so helpful. Read full article ....
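
    To make the tip concrete, here is roughly what a MERGE against the 2010 listdata.svc endpoint looks like; the list name, item id, and field are illustrative assumptions, and If-Match carries the etag you previously read (or * to skip the concurrency check):

        POST http://server/_vti_bin/listdata.svc/Tasks(42) HTTP/1.1
        X-HTTP-Method: MERGE
        If-Match: *
        Content-Type: application/json

        { "Title": "Updated title only" }

    Only the fields present in the body are touched; everything else keeps its current value, which is exactly why MERGE sidesteps the PUT pitfalls described above.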

    Read the article

  • Most common parts of a SELECT SQL query?

    - by jnrbsn
    I'm writing a function that generates a SELECT SQL query. (I'm not looking for a tool that already does this.) My function currently takes the following arguments, which correspond to different parts of the SELECT query (the base table name is already known):

        where
        order
        fields
        joins
        group
        limit

    All of these arguments will be optional, so that the function generates something like this by default:

        SELECT * FROM `table_name`

    I want to order the arguments so that the most often used parts of a SELECT query come first. That way the average call to the function will use as few of the arguments as possible, rather than passing a null value or something like that to skip an argument. For example, if someone wanted to use the 1st and 3rd arguments but not the rest, they might have to pass a null value as the 2nd argument in order to skip it. So, for general-purpose use, how should I order the arguments?

    Edit: To be more precise, out of the query parts I listed above, what is the order from most used to least used? Also, I'm not looking for solutions that allow me to not have to specify the order.

    Edit #2: The "fields" argument will default to "*" (i.e. all fields/columns).
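
    One plausible ordering, sketched as a PHP-style signature since the backtick quoting suggests MySQL/PHP; the ranking itself is an editor's guess about typical usage, not measured data:

        // Hypothetical signature: most-used parts first, everything optional.
        function buildSelect(
            $table,
            $where  = null,   // 1: filtering appears in most queries
            $order  = null,   // 2: sorting is common for any listing
            $fields = '*',    // 3: defaults to all columns, per Edit #2
            $limit  = null,   // 4: paging
            $joins  = null,   // 5: less common for simple lookups
            $group  = null    // 6: rarest in typical CRUD code
        ) {
            $sql = 'SELECT ' . $fields . ' FROM `' . $table . '`';
            // ... append JOIN / WHERE / GROUP BY / ORDER BY / LIMIT when given ...
            return $sql;
        }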

    Read the article

  • Best Practice: What can the hashCode() implementation be if the custom fields used in equals() are null?

    - by goodspeed
    What is the best practice for returning a value from hashCode() if a custom field used in equals() is null? I have a situation where the equals() override is implemented using custom fields. Usually it is better to override hashCode() using the same custom fields used in equals(). But if all the custom fields used in equals() are null, then what would be the best implementation for hashCode()? Example:

        class Person {
            private String firstName;
            private String lastName;

            public String getFirstName() { return firstName; }
            public String getLastName() { return lastName; }

            @Override
            public boolean equals(Object object) {
                boolean result = false;
                if (object == null || object.getClass() != getClass()) {
                    result = false;
                } else {
                    Person person = (Person) object;
                    if (this.firstName == person.getFirstName()
                            && this.lastName == person.getLastName()) {
                        result = true;
                    }
                }
                return result;
            }

            @Override
            public int hashCode() {
                int hash = 3;
                if (this.firstName == null || this.lastName == null) {
                    // What is the best practice here,
                    // is return super.hashCode() better?
                }
                hash = 7 * hash + this.firstName.hashCode();
                hash = 7 * hash + this.lastName.hashCode();
                return hash;
            }
        }

    Is it required to check for null in hashCode()? If yes, what should be returned if the custom values are null?
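
    A null-safe pattern that is commonly recommended (a sketch, not the only answer): treat a null field as a fixed hash contribution of 0, which java.util.Objects does for you since Java 7, and use the matching null-safe comparison in equals():

        import java.util.Objects;

        class Person {
            private String firstName;
            private String lastName;

            @Override
            public boolean equals(Object object) {
                if (this == object) return true;
                if (object == null || object.getClass() != getClass()) return false;
                Person person = (Person) object;
                // Objects.equals is null-safe: two nulls compare equal.
                return Objects.equals(firstName, person.firstName)
                    && Objects.equals(lastName, person.lastName);
            }

            @Override
            public int hashCode() {
                // Objects.hash maps null fields to 0, so no explicit null
                // check is needed and equal objects get equal hash codes.
                return Objects.hash(firstName, lastName);
            }
        }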

    Read the article

  • How to convert a List to a DataTable in VB.NET

    - by Samir R. Bhogayta
    ' Requires: Imports System.Data, Imports System.Reflection
    Public Function ConvertToDataTable(Of T)(ByVal list As IList(Of T)) As DataTable
        Dim table As New DataTable()
        Dim fields() As FieldInfo = GetType(T).GetFields()
        For Each field As FieldInfo In fields
            table.Columns.Add(field.Name, field.FieldType)
        Next
        For Each item As T In list
            Dim row As DataRow = table.NewRow()
            For Each field As FieldInfo In fields
                row(field.Name) = field.GetValue(item)
            Next
            table.Rows.Add(row)
        Next
        Return table
    End Function
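
    A quick usage sketch (the Customer class is an invented example). One caveat worth noting: GetType(T).GetFields() returns public fields only, so a class exposing its data through properties would produce an empty table; GetProperties() would be the property-based analogue.

        ' Hypothetical usage: works because Id and Name are public *fields*.
        Public Class Customer
            Public Id As Integer
            Public Name As String
        End Class

        Dim customers As New List(Of Customer)()
        Dim c As New Customer()
        c.Id = 1
        c.Name = "Acme"
        customers.Add(c)

        Dim table As DataTable = ConvertToDataTable(Of Customer)(customers)
        ' table now has columns "Id" (Integer) and "Name" (String) and one row.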

    Read the article

  • Form development optimization

    - by Juan
    Like many web developers I do forms all the time, and I found myself doing the same things every time: placing input fields, assigning a name to each, ajaxing the form, then creating the PHP, which involves assigning a PHP var to each $_REQUEST['var'], escaping and validating the data, building the HTML and emailing the results. So I found that 70% of the work is duplicated, but I just can't duplicate a page and change the fields; I end up wasting more time reformatting, deleting and adding different fields than creating from scratch.

    I started planning to program a "list of IDs to HTML+PHP" converter, in which I'd input all the IDs and it would output the basic HTML and PHP. Then I thought: there must be thousands of developers who go through this; I'd be reinventing the wheel. So this is my question: I'm trying to find that wheel that somebody must have invented already. I found this: http://www.trirand.com/blog/jqform/ which does more or less what I'm looking for, but it's an expensive solution and it has too much functionality for what I'd be using it for. Which tools do you use to optimize repetitive tasks around HTML and PHP?
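
    For reference, the converter being planned here is only a few lines in its naive form; a sketch (the field list, output format, and handler name are assumptions):

        <?php
        // Hypothetical "list of IDs to HTML+PHP" generator, in its naive form.
        $ids = array('name', 'email', 'message');

        $html = "<form method=\"post\" action=\"handler.php\">\n";
        $php  = "";
        foreach ($ids as $id) {
            $html .= '  <label>' . $id . ' <input type="text" name="' . $id . '"></label>' . "\n";
            $php  .= '$' . $id . " = isset(\$_REQUEST['" . $id . "']) ? trim(\$_REQUEST['" . $id . "']) : '';\n";
        }
        $html .= "  <input type=\"submit\">\n</form>\n";

        // Paste the two generated blocks into the page and its handler.
        echo $html;
        echo $php;
        ?>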

    Read the article

  • Linked List is now Patented?

    - by John Isaiah Carmona
    Linked list
    Inventor: Ming-Jen Wang
    Patent number: 7028023
    Filing date: Sep 26, 2002
    Issue date: Apr 11, 2006
    Application number: 10/260,471

    "A computerized list is provided with auxiliary pointers for traversing the list in different sequences. One or more auxiliary pointers enable a fast, sequential traversal of the list with a minimum of computational time. Such lists may be used in any application where lists may be reordered for various purposes."

    Does this mean that I need to acquire permission before using a linked list in my code? What about the code I wrote in my previous apps that uses a linked list? What about the frameworks that implement linked lists?

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores job vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range". There is an EAV design for "Custom" fields that the clients can set up themselves, such as "Manager Name", "Working Hours". The field names are stored in a "ClientText" table and the data is stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value.

    Lastly, there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag, e.g. "Locations, Skills"; a "ClientCategoryItem" table listing the valid values for each category, e.g. "London, Paris, New York, Rome" and "C#, VB, PHP, Python"; and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add.

    I am now designing a new system that is very similar to the existing system. However, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch, so I have no legacy issues to deal with. For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs.

    It is with the tagging / categorising design that I am struggling. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values ("CustomCategoryItems1-5") and then an additional 5 many-to-many tables ("VacancyCustomCategoryItem1-5")? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this will result in a lot of code change.

    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.
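
    One pattern worth considering (an editor's sketch in SQL Server syntax, not a tested recommendation): keep the three-table shape, but make the category slot an explicit, constrained number, so the 5-category limit is enforced by a CHECK constraint rather than by table count. Table and column names here mirror the question; the constraints are assumptions:

        CREATE TABLE CustomCategory (
            CustomCategoryId int IDENTITY PRIMARY KEY,
            ClientId         int NOT NULL,
            Slot             tinyint NOT NULL CHECK (Slot BETWEEN 1 AND 5),
            Name             nvarchar(100) NOT NULL,
            UNIQUE (ClientId, Slot)          -- at most 5 categories per client
        );

        CREATE TABLE CustomCategoryItem (
            CustomCategoryItemId int IDENTITY PRIMARY KEY,
            CustomCategoryId     int NOT NULL REFERENCES CustomCategory,
            Value                nvarchar(100) NOT NULL
        );

        CREATE TABLE VacancyCustomCategoryItem (
            VacancyId            int NOT NULL,
            CustomCategoryItemId int NOT NULL REFERENCES CustomCategoryItem,
            PRIMARY KEY (VacancyId, CustomCategoryItemId)
        );

    Raising the limit from 5 to 6 then means altering one CHECK constraint instead of adding two tables and touching every query.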

    Read the article

  • SQL Saturday #310 - Dublin, Ireland

    SQL Saturday is coming to Dublin on September 20, 2014. Come for a free day of SQL Server training and networking. This year's conference features a mix of levels, topics, and speakers like Buck Woody (Big Data), Jen Stirrup (PowerBI), Denny Cherry (Storage), Red Gate's Tom Austin (Continuous integration), and more. Register while space is available.

    Read the article

  • Django save_m2m() and excluded field

    - by jul
    Hi, in a ModelForm I replaced a field by excluding it and adding a new one with the same name, as shown below in AddRestaurantForm. When saving the form with the code shown below, I get an error in form.save_m2m() ("Truncated incorrect DOUBLE value"), which seems to be due to the function attempting to save the tag field, even though it is excluded. Is the save_m2m() function supposed to save excluded fields? Is there anything wrong in my code? Thanks, Jul

        (...)
        new_restaurant = form.save(commit=False)
        new_restaurant.city = city
        new_restaurant.save()
        tags = form.cleaned_data['tag']
        if tags != '':
            tags = tags.split(',')
            for t in tags:
                tag, created = Tag.objects.get_or_create(name=t.strip())
                tag.save()
                new_restaurant.tag.add(tag)
        new_restaurant.save()
        form.save_m2m()

    models.py

        class Tag(models.Model):
            name = models.CharField(max_length=100, unique=True)

        class Restaurant(models.Model):
            name = models.CharField(max_length=50)
            city = models.ForeignKey(City)
            category = models.ManyToManyField(Category)
            tag = models.ManyToManyField(Tag, blank=True, null=True)

    forms.py

        class AddRestaurantForm(ModelForm):
            name = forms.CharField(widget=forms.TextInput(attrs=classtext))
            city = forms.CharField(widget=forms.TextInput(attrs=classtext), max_length=100)
            tag = forms.CharField(widget=forms.TextInput(attrs=classtext), required=False)

            class Meta:
                model = Restaurant
                exclude = ('city', 'tag')

    Traceback:

        File "/var/lib/python-support/python2.5/django/core/handlers/base.py" in get_response
          92. response = callback(request, *callback_args, **callback_kwargs)
        File "/home/jul/atable/../atable/resto/views.py" in addRestaurant
          498. form.save_m2m()
        File "/var/lib/python-support/python2.5/django/forms/models.py" in save_m2m
          75. f.save_form_data(instance, cleaned_data[f.name])
        File "/var/lib/python-support/python2.5/django/db/models/fields/related.py" in save_form_data
          967. setattr(instance, self.attname, data)
        File "/var/lib/python-support/python2.5/django/db/models/fields/related.py" in set
          627. manager.add(*value)
        File "/var/lib/python-support/python2.5/django/db/models/fields/related.py" in add
          430. self._add_items(self.source_col_name, self.target_col_name, *objs)
        File "/var/lib/python-support/python2.5/django/db/models/fields/related.py" in _add_items
          497. [self._pk_val] + list(new_ids))
        File "/var/lib/python-support/python2.5/django/db/backends/util.py" in execute
          19. return self.cursor.execute(sql, params)
        File "/var/lib/python-support/python2.5/django/db/backends/mysql/base.py" in execute
          84. return self.cursor.execute(query, args)
        File "/var/lib/python-support/python2.5/MySQLdb/cursors.py" in execute
          168. if not self._defer_warnings: self._warning_check()
        File "/var/lib/python-support/python2.5/MySQLdb/cursors.py" in _warning_check
          82. warn(w[-1], self.Warning, 3)
        File "/usr/lib/python2.5/warnings.py" in warn
          62. globals)
        File "/usr/lib/python2.5/warnings.py" in warn_explicit
          102. raise message

        Exception Type: Warning at /restaurant/add/
        Exception Value: Truncated incorrect DOUBLE value: 'a'
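
    An editor's sketch of a likely fix (the diagnosis is hedged, but it matches the traceback): save_m2m() walks the model's many-to-many fields and assigns cleaned_data[f.name] whenever the form has a field of that name, so a CharField that shadows the excluded 'tag' m2m field feeds a raw string into the relation. Renaming the extra field sidesteps the collision; the view would then read cleaned_data['tag_input']:

        # forms.py: hypothetical rename so cleaned_data no longer shadows
        # the excluded 'tag' m2m field.
        class AddRestaurantForm(ModelForm):
            name = forms.CharField(widget=forms.TextInput(attrs=classtext))
            city = forms.CharField(widget=forms.TextInput(attrs=classtext), max_length=100)
            tag_input = forms.CharField(widget=forms.TextInput(attrs=classtext), required=False)

            class Meta:
                model = Restaurant
                exclude = ('city', 'tag')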

    Read the article

  • Getting field of type bytea in helper table when using GenerationType.IDENTITY

    - by dtrunk
    I'm creating my DB schema using Hibernate. There's a table called "tbl_articles" and another one called "tbl_categories". To have an n-n relationship, a helper table ("tbl_articles_categories") is needed. Here are all the necessary entities:

        @Entity
        @Table( name = "tbl_articles" )
        public class Article implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @Column( nullable = false )
            @GeneratedValue( strategy = GenerationType.IDENTITY )
            private Integer id;

            // other fields...

            public Integer getId() { return id; }
            public void setId( Integer id ) { this.id = id; }

            // other fields...
        }

        @Entity
        @Table( name = "tbl_categories" )
        public class Category implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @Column( nullable = false )
            @GeneratedValue( strategy = GenerationType.IDENTITY )
            private Integer id;

            // other fields

            public Integer getId() { return id; }
            public void setId( Integer id ) { this.id = id; }

            // other fields...
        }

        @Entity
        @Table( name = "tbl_articles_categories" )
        @AssociationOverrides({
            @AssociationOverride( name = "pk.article", joinColumns = @JoinColumn( name = "article_id" ) ),
            @AssociationOverride( name = "pk.category", joinColumns = @JoinColumn( name = "category_id" ) )
        })
        public class ArticleCategory {
            private ArticleCategoryPK pk = new ArticleCategoryPK();

            public void setPk( ArticleCategoryPK pk ) { this.pk = pk; }

            @EmbeddedId
            public ArticleCategoryPK getPk() { return pk; }

            @Transient
            public Article getArticle() { return pk.getArticle(); }
            public void setArticle( Article article ) { pk.setArticle( article ); }

            @Transient
            public Category getCategory() { return pk.getCategory(); }
            public void setCategory( Category category ) { pk.setCategory( category ); }
        }

        @Embeddable
        public class ArticleCategoryPK implements Serializable {
            private static final long serialVersionUID = 1L;

            @ManyToOne
            @ForeignKey( name = "tbl_articles_categories_fkey_article_id" )
            private Article article;

            @ManyToOne
            @ForeignKey( name = "tbl_articles_categories_fkey_category_id" )
            private Category category;

            public ArticleCategoryPK( Article article, Category category ) {
                setArticle( article );
                setCategory( category );
            }

            public ArticleCategoryPK() {
            }

            public Article getArticle() { return article; }
            public void setArticle( Article article ) { this.article = article; }

            public Category getCategory() { return category; }
            public void setCategory( Category category ) { this.category = category; }
        }

    Now, I'm getting a serial type, which is what I wanted, in my articles table as well as in my categories table. But looking into my helper table, there aren't the expected fields article_id and category_id, each of type integer; instead there are article and category of type bytea. What's wrong here?

    EDIT: Sorry, forgot to mention that I'm using PostgreSQL.
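
    An editorial aside, hedged since the root cause isn't confirmed in the question: a bytea column usually means Hibernate fell back to serializing the related entity as a basic type instead of mapping a foreign key. One commonly suggested rearrangement, assuming JPA 2.0 is available, keeps plain integer columns in the embedded id and hangs the associations off the entity with @MapsId:

        @Embeddable
        public class ArticleCategoryPK implements Serializable {
            private Integer articleId;    // plain key columns, nothing to serialize
            private Integer categoryId;
            // equals() and hashCode() over both fields omitted for brevity
        }

        @Entity
        @Table( name = "tbl_articles_categories" )
        public class ArticleCategory {
            @EmbeddedId
            private ArticleCategoryPK pk = new ArticleCategoryPK();

            @MapsId( "articleId" )        // derives pk.articleId from this FK
            @ManyToOne
            @JoinColumn( name = "article_id" )
            private Article article;

            @MapsId( "categoryId" )
            @ManyToOne
            @JoinColumn( name = "category_id" )
            private Category category;
        }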

    Read the article

  • 24 hours in the life of the Internet: an infographic by young researchers and designers reveals some surprising figures

    24 hours in the life of the Internet. An infographic produced by young researchers and designers reveals some surprising figures. Jen Rhee and the team at MBA Online (the self-training and advice site he created) have put together an infographic that is surprising, to say the least. They looked into what happens on the Internet in the space of a single day, and the figures they reveal are both astonishing and amusing. We learn, for example, that in one day the streams of information exchanged over the Internet could fill 168 million DVDs; that nearly 294 billion e-mails are sent (the equivalent of two years of mail processing in the United States); and that 2 million articles ...

    Read the article

  • SQL Design Question regarding schema and whether name-value pairs are the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data into the database. The number of fields that can be parsed is between 1 and 42 at the current moment. The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row.

    So my concern is that this does not allow the database or the application to be very extendable. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table.

    The third option is that I have a table where the fields are defined, and a table for each record. So what I was thinking is to make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point then I should be happy it is being used. This option also allows for more custom displaying of information, albeit a bit more work, but little rework even if you add more fields.

    So the problem boils down to:

        1. The current design is a flat file, which makes extending it hard, and it is not normalized.
        2. Normalize the tables, although there are no real benefits for the moment; but requirements change.
        3. Normalize it down into the name-value-pair design and hope size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is "design now, performance test later"? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
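
    If option 3 wins, the usual shape is three tables; a sketch in SQL Server syntax, where the table and column names are the editor's assumptions:

        -- Field definitions, one row per parseable field.
        CREATE TABLE FieldDef (
            FieldDefId int IDENTITY PRIMARY KEY,
            Name       varchar(100) NOT NULL
        );

        -- One row per parsed record (file line).
        CREATE TABLE Record (
            RecordId   int IDENTITY PRIMARY KEY,
            SourceFile varchar(260) NOT NULL
        );

        -- The name-value pairs: at most one value per field per record.
        CREATE TABLE RecordValue (
            RecordId   int NOT NULL REFERENCES Record,
            FieldDefId int NOT NULL REFERENCES FieldDef,
            Value      varchar(500) NULL,
            PRIMARY KEY (RecordId, FieldDefId)
        );

    The composite primary key both enforces one value per field per record and doubles as the covering index that most lookups against this shape need.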

    Read the article

  • Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make Squid proxy log to MySQL. I know ACL order is pretty important, but I'm not sure I understand exactly what ACLs are or do; it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows:

        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log

    The external ACL helper (mysql_lg.php) is a PHP script and is as follows:

        error_reporting(0);
        if (!defined('STDIN')) {
            define("STDIN", fopen("php://stdin", "r"));
        }
        $res = mysql_connect('localhost', 'squid', 'testsquidpw');
        $dbres = mysql_select_db('squid', $res);
        while (!feof(STDIN)) {
            $line = trim(fgets(STDIN));
            $fields = explode(' ', $line);
            $user = rawurldecode($fields[0]);
            $cli_ip = rawurldecode($fields[1]);
            $protocol = rawurldecode($fields[2]);
            $uri = rawurldecode($fields[3]);
            $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');";
            mysql_query($q) or die (mysql_error());
            if ($fault) { fwrite(STDOUT, "ERR\n"); };
            fwrite(STDOUT, "OK\n");
        }

    The configuration I have right now looks like this:

        ## Authentication Handler
        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 30
        auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
        auth_param negotiate children 5

        # Allow squid to update log
        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log

        acl localnet src 172.16.45.0/24
        acl AuthorizedUsers proxy_auth REQUIRED
        acl SSL_ports port 443
        acl Safe_ports port 80   # http
        acl Safe_ports port 21   # ftp
        acl Safe_ports port 443  # https
        acl CONNECT method CONNECT
        acl blockeddomain url_regex "/etc/squid3/bl.acl"
        http_access deny blockeddomain
        deny_info ERR_BAD_GENERAL blockeddomain

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # Allow the internal network access to this proxy
        http_access allow localnet
        # Allow authorized users access to this proxy
        http_access allow AuthorizedUsers
        # FINAL RULE - Deny all other access to this proxy
        http_access deny all

    From testing, the closer to the bottom I place the logging lines, the less it logs. Oftentimes it even places empty rows into the MySQL table. The file-based logs in /var/log/squid3/access.log are correct, but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting the lines in, because I want to log everything to MySQL: unauthenticated requests, blocked requests, and which category blocked a specific request. The reason I want this in MySQL is that I'm trying to have everything managed via a custom web-based frontend, and I want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain, without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here, because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
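
    An editorial aside on the gaps, hedged because it depends on the Squid version in play: external ACL verdicts are cached by Squid (see the ttl options on external_acl_type), so the helper is deliberately not consulted for every request, and it is only consulted at all when http_access evaluation actually reaches the ex_log rule. A matcher abused as a logger will therefore always skip records. If the build supports it (Squid 3.2+ ships a daemon log module), the logging interface is designed for exactly this job; a sketch with a hypothetical helper path:

        # squid.conf sketch: stream every access-log line to a custom daemon,
        # which can then INSERT into MySQL. Paths and helper are illustrative.
        logfile_daemon /etc/squid3/custom/mysql_log_daemon.php
        access_log daemon:/var/log/squid3/access.log squid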

    Read the article

  • Understanding Collabnet's LDAP binding

    - by Robert May
    We want to use both Subversion usernames and passwords as well as Active Directory for our authentication on our Collabnet Subversion server. This has proven to be more of a challenge than we thought, mostly because Collabnet's documentation is pretty poor. To supplement that documentation, I add my own.

    The first thing to understand is that the attribute that you specify in the LDAP Login Attribute ONLY applies to lookups done for the user. It does NOT apply to the LDAP Bind DN field. Second, know that the debug logs (error is the one you want) don't give you debug information for the bind DN, just the login attempts. Third, by default, Active Directory does not allow anonymous binds, so you MUST put in a user that has the authority to query the Active Directory LDAP.

    Because of these items, the values to set in those fields can be somewhat confusing. You'll want to have ADSI Edit handy (I also used ldp, which is installed by default on Server 2008), since ADSI Edit can help you find stuff in your Active Directory. Be careful, you can also break stuff. Here's what should go into those fields.

    LDAP Security Level: Should be set to None.

    LDAP Server Host: Should be set to the full name of a domain controller in your domain. For example, dc.mydomain.com.

    LDAP Server Port: Should be set to 3268. The default port of 389 will only query that specific server, not the global catalog. By setting it to 3268, the global catalog will be queried, which is probably what you want.

    LDAP Base DN: Should be set to the location where you want the search for users to begin. By default, the search scope is set to sub, so all child organizational units below this setting will be searched. In my case, I had created an OU specifically for users for group policies. My value ended up being OU=MyOu,DC=domain,DC=org. However, if you're pointing it to the default Users folder, you may end up with something like CN=Users,DC=domain,DC=org (or com or whatever). Again, use ADSI Edit and use the Distinguished Name that it shows.

    LDAP Bind DN: This needs to be the Distinguished Name of the user that you're going to use for binding (i.e. the user you'll be impersonating) for doing queries. In my case, it ended up being CN=svn svn,OU=MyOu,DC=domain,DC=org. Why the double svn, you might ask? That's because the first and last name fields are set to svn, and by default the distinguished name is built from the first and last name fields! That's important. It's NOT the username or account name! Again, use ADSI Edit, browse to the user you want to use, right click and select properties, and then search the attributes for the Distinguished Name. Once you've found that, select it and click View, and you can copy and paste that into this field.

    LDAP Bind Password: This is the password for the account in the Bind DN.

    LDAP Login Attribute: sAMAccountName. If you leave this blank, uid is used, which may not even be set. This tells it to use the Account Name field that's defined under the Account tab for users in Active Directory Users and Computers. Note that this attribute DOES NOT APPLY to the LDAP Bind DN. You must use the full distinguished name for the bind DN. This attribute allows users to type their username and password for authentication, rather than typing their distinguished name, which they probably don't know.

    LDAP Search Scope: Probably should stay at sub, but could be different depending on your situation.

    LDAP Filter: I left mine blank, but you could provide one to limit what you want to see. LDP would be helpful for determining what this is.

    LDAP Server Certificate Verification: I left it checked, but didn't try it without it being checked.

    Hopefully, this will save some others pain when trying to get Collabnet set up.

    Technorati Tags: Subversion, collabnet

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features concerns the security options. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1).

    When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently. And there is a new component available which adds an additional dimension for defining security on the object: Roles. So now, instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role, and if a user has that role, they are granted that level of access. This can allow for a much more flexible and manageable security model than trying to manage with just user and group access as people come and go in the organization.

    The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values:

        UseEntitySecurity=true
        SpecialAuthGroups=<comma separated list of Security Groups to honor ACLs>

    SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance. Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, this is an explicitly defined list of choices that are made available. These values must match the role that is being defined in WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from for the Roles Access List.

    As for how they are stored in the metadata fields, each entry starts with its identifier: the ampersand (&) symbol for users, the "at" (@) symbol for groups, and a colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
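
    Putting that storage format together, a single stored ACL value would look something like this (the names are invented for illustration):

        &jsmith(RWDA),@sales_team(RW),:authenticated(R)

    That is: full access for user jsmith, read/write for everyone in the sales_team alias, and read-only for anyone holding the authenticated role.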

    Read the article

  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored?

    Attributes

    The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using properties, like volatile or const, on the parameters; this is perfectly legal C++:

        public ref class VolatileMethods {
            void Method(int *i) {}
            void Method(volatile int *i) {}
        }

    If volatile was specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there is nothing to differentiate the two methods from each other. This is where custom modifiers come in.

    Custom modifiers

    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately to its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

        .class public VolatileMethods {
            .method public instance void Method(int32* i) {}
            .method public instance void Method(
                int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
        }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though they are mostly ignored by the CLR when it's executing the program, this allows methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

        call instance void Method(
            int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers: modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as the modifier argument are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong, they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads.

    C++ and C#

    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.

    So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along.

    Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.
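
    As a concrete C# tie-in (the class is invented; the IL shape follows from the behaviour described above): a volatile field carries the modreq directly in its field signature, which is why the information survives across assembly boundaries:

        // C#
        class Counter
        {
            public volatile int Value;
        }

        // Roughly the IL the field compiles to:
        .field public int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) Value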

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python.

    My question is: what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template, which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down, or splitting what used to be one data field into two data fields.

    A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections; but inside the period, which section is inconsistent, is inconsistent.

    For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order.

    The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members; etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database.

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
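
    The matching function proposed above is straightforward to sketch in Python; the ranges come from the question's own example, while the section names and variant labels are invented stand-ins for the real templates:

        # Hypothetical variant map: (first_file, last_file) -> per-section overrides.
        SECTION_VARIANTS = {
            (7000, 7250): {"section4": "shifted_down"},
            (7251, 7500): {"section4": "shifted_down", "section5": "present"},
            (7501, 7635): {"section2": "five_fields", "section4": "shifted_down"},
            (7636, 7800): {"section2": "three_fields", "section3": "absent"},
            (7801, 8000): {},  # everything in its default place
        }

        def variants_for(file_number):
            """Return the section-variant overrides for one spreadsheet."""
            for (lo, hi), overrides in SECTION_VARIANTS.items():
                if lo <= file_number <= hi:
                    return overrides
            return {}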

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. So I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it would hit the code to save it, it would give me:

        The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.

    This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

        [DataContract]
        public class MyAwesomeClass
        {
            [DataMember]
            public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

            [DataMember]
            public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

            public MyAwesomeClass()
            {
                GreatItems = new ObservableCollection<MyMagicItemsClass>();
                SuperbItems = new ObservableCollection<MyMagicItemsClass>();
            }
        }

    That's all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it's not, because I keep getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the Collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the "Debug->Exceptions..." VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself.

    Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to them and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course, that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public, and everything worked just fine.

    In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields in certain circumstances). The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I'm not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing that could present (if any).

    So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you'll find the buried exception that actually explains what was going on. And if you're getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you're trying to serialize a private or protected field/property.
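
    A minimal sketch of the failing pattern next to the safe one (member names invented for illustration):

        [DataContract]
        public class MyMagicItemsClass
        {
            // The WP7 partial-trust trap: [DataMember] on a *private* member.
            // [DataMember]
            // private string name;

            // Safe: the [DataMember] sits on a public property instead.
            [DataMember]
            public string Name { get; set; }

            // Derived state the serializer should skip entirely.
            [IgnoreDataMember]
            public string DisplayName
            {
                get { return "Item: " + Name; }
            }
        }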

    Read the article

  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question.

    There was a class SomeClass:

        class SomeClass {
            Thing[] things;

            public SomeClass(Thing[] things) {
                this.things = things;
            }

            // irrelevant stuff omitted

            public SomeClass copy() {
                return new SomeClass(things);
            }
        }

    There was another class, Processor, that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is:

        class Processor {
            public SomeClass processObject(SomeClass object) {
                SomeClass copy = object.copy();
                manipulateTheCopy(copy);
                return copy;
            }

            // irrelevant stuff omitted
        }

    I ran this, and it had bugs. I looked into these bugs, and it turned out that the manipulations Processor does on copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. I found out that it was because the original and the copy shared state - because the original passed its field things into the copy when creating it.

    This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects, they have to be copied too. And so on. So basically, in order to be able to actually copy an object, each class in the system must have a copy() method that also invokes copy() on all of its fields, and so on. So for example, for copy() in SomeClass to work, it needs to look like this:

        public SomeClass copy() {
            Thing[] copyThings = new Thing[things.length];
            for (int i = 0; i < things.length; i++)
                copyThings[i] = things[i].copy();
            return new SomeClass(copyThings);
        }

    And if Thing has object fields of its own, then its own copy() method must be appropriate:

        class Thing {
            Apple apple;
            Pencil pencil;
            int number;

            public Thing(Apple apple, Pencil pencil, int number) {
                this.apple = apple;
                this.pencil = pencil;
                this.number = number;
            }

            public Thing copy() {
                // 'number' is a primitive.
                return new Thing(apple.getCopy(), pencil.getCopy(), number);
            }
        }

    And so on. Of course, instead of all classes having a copy() method, the copying mechanism can happen in all of the getters and the constructors of classes (except in places where it isn't suitable, for example when the field points to an external object, not to an object that 'is part' of this object). Still, that means that in order to be able to safely copy an object, most classes would have to have copying mechanisms in their getters.

    My question is divided into two parts:

        1. How frequently do you need to get a copy of an object? Is this a regular issue?
        2. Is the technique described common and/or reasonable? Or is there a better way to make safe copies of objects? Or is there an easier way to safely copy objects, without them sharing any state?
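
    On the "is there a better way" part, one common idiom (a sketch; how well it fits depends on the class graph): copy constructors that deep-copy mutable fields, combined with making leaf classes immutable so they never need copying at all:

        class SomeClass {
            private final Thing[] things;

            SomeClass(Thing[] things) {
                this.things = things.clone();   // defensive copy on the way in
            }

            SomeClass(SomeClass other) {
                // Deep-copy the array, copying each element as well.
                this.things = new Thing[other.things.length];
                for (int i = 0; i < other.things.length; i++) {
                    this.things[i] = new Thing(other.things[i]);
                }
            }
        }

        class Thing {
            private final Apple apple;   // if Apple is immutable, sharing is safe
            private final int number;

            Thing(Apple apple, int number) {
                this.apple = apple;
                this.number = number;
            }

            Thing(Thing other) {         // copy constructor
                this.apple = other.apple;
                this.number = other.number;
            }
        }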

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing support for versioning of hardware packet specifications in one of our libraries. First, a bit of information about the project.

    We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs differ from one command to the next, obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons.

    Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the packet string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we would create the args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN.

    Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears the text of all of the modules behind a specific controller device. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX} where OC is literal text, which tells the controller to only clear text on a specific module type and to leave the others alone; or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of:

        if m_DeviceID != null then use {SOH}AADD{ETX}
        else if m_ClearOCs == true then use {SOH}AAOC{ETX}
        else use {SOH}AA{ETX}

    I had considered using XML or a database to store String.format format strings, linked to firmware version numbers in some table. We would load them up at startup and pass in the version number of the firmware of the hardware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex.

    So this is why I am here. I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off were good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish the following (see the sketch after this list):

        - logic to determine which set of commands to use for a given firmware version;
        - of those commands, which version of each command to use (based on the args class's state);
        - keeping the rules logic decoupled from the application, so as to avoid needing releases for every firmware version;
        - being simple enough that I don't need weeks of study and trial and error to implement effectively.
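
    A lightweight middle ground between hard-coded if-chains and a full rule engine, sketched with Java 8 lambdas (PacketSpec and the DisplayTextArgs getters are the editor's inventions): keep one ordered list of candidate specs per command and firmware version, each guarded by a predicate over the args object, and pick the first that applies. The tables themselves could then live in XML or a database without the application changing:

        import java.util.Arrays;
        import java.util.List;
        import java.util.function.Predicate;

        final class PacketSpec {
            final Predicate<DisplayTextArgs> applies;
            final String template;

            PacketSpec(Predicate<DisplayTextArgs> applies, String template) {
                this.applies = applies;
                this.template = template;
            }
        }

        // One ordered table per (command, firmware version); first match wins.
        List<PacketSpec> clearTextV1 = Arrays.asList(
            new PacketSpec(a -> a.getDeviceId() != null, "{SOH}AADD{ETX}"),
            new PacketSpec(a -> a.isClearOCs(),          "{SOH}AAOC{ETX}"),
            new PacketSpec(a -> true,                    "{SOH}AA{ETX}")   // fallback
        );

        String selectTemplate(List<PacketSpec> specs, DisplayTextArgs args) {
            for (PacketSpec s : specs) {
                if (s.applies.test(args)) {
                    return s.template;
                }
            }
            throw new IllegalStateException("no packet spec matched");
        }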

    Read the article

  • Is this spaghetti code already? [migrated]

    - by hephestos
    I post the following code, written all by hand. Why do I have the feeling that it is a western spaghetti on its own? Second, could it be written better?

        <div id="form-board" class="notice" style="height: 200px; min-height: 109px; width: auto; display: none;">
        <script type="text/javascript">
        jQuery(document).ready(function(){
            $(".form-button-slide").click(function(){
                $( "#form-board" ).dialog();
                return false;
            });
        });
        </script>
        <?php
        echo $this->Form->create('mysubmit');
        echo $this->Form->input('inputs', array('type' => 'select', 'id' => 'inputs', 'options' => $inputs));
        echo $this->Form->input('Fields', array('type' => 'select', 'id' => 'fields', 'empty' => '-- Pick a state first --'));
        echo $this->Form->input('inputs2', array('type' => 'select', 'id' => 'inputs2', 'options' => $inputs2));
        echo $this->Form->input('Fields2', array('type' => 'select', 'id' => 'fields2', 'empty' => '-- Pick a state first --'));
        echo $this->Form->end("Submit");
        ?>
        </div>
        <div style="width:100%"></div>
        <div class="form-button-slide" style="float:left;display:block;">
        <?php echo $this->Html->link("Error Results", "#"); ?>
        </div>
        <script type="text/javascript">
        jQuery(document).ready(function(){
            $("#mysubmitIndexForm").submit(function() {
                // we want to store the values from the form input box, then send via ajax below
                jQuery.post("Staffs/view", {
                    data1: $("#inputs").attr('value'),
                    data2: $("#inputs2").attr('value'),
                    data3: $("#fields").attr('value'),
                    data4: $("#fields2").attr('value')
                });
                // Close the dialog
                $( "#form-board" ).dialog('close');
                return false;
            });
            $("#inputs").change(function() {
                // we want to store the values from the form input box, then send via ajax below
                var input_id = $('#inputs').attr('value');
                $.ajax({
                    type: "POST",
                    // The controller who listens to our request
                    url: "Inputs/getFieldsFromOneInput/" + input_id,
                    data: "input_id=" + input_id, //+ "&amp; lname=" + lname,
                    success: function(data){ // function on success with returned data
                        $('form#mysubmit').hide(function(){});
                        data = $.parseJSON(data);
                        var sel = $("#fields");
                        sel.empty();
                        for (var i = 0; i < data.length; i++) {
                            sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                        }
                    }
                });
                return false;
            });
            $("#inputs2").change(function() {
                // we want to store the values from the form input box, then send via ajax below
                var input_id = $('#inputs2').attr('value');
                $.ajax({
                    type: "POST",
                    // The controller who listens to our request
                    url: "Inputs/getFieldsFromOneInput/" + input_id,
                    data: "input_id=" + input_id,
                    success: function(data){ // function on success with returned data
                        $('form#mysubmit').hide(function(){});
                        data = $.parseJSON(data);
                        var sel = $("#fields2");
                        sel.empty();
                        for (var i = 0; i < data.length; i++) {
                            sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                        }
                    }
                });
                return false;
            });
        });
        </script>
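
    On the "could it be written better" question, one obvious dry-up (an editor's sketch, untested against the CakePHP views involved): the two change handlers differ only in their element IDs, so a single helper removes half the block:

        // Hypothetical refactor: wire both select pairs through one function.
        function wireFieldLoader(sourceSel, targetSel) {
            $(sourceSel).change(function() {
                var input_id = $(this).val();
                $.post("Inputs/getFieldsFromOneInput/" + input_id,
                       { input_id: input_id },
                       function(data) {
                           data = $.parseJSON(data);
                           var sel = $(targetSel).empty();
                           for (var i = 0; i < data.length; i++) {
                               sel.append('<option value="' + data[i].id + '">'
                                          + data[i].name + '</option>');
                           }
                       });
                return false;
            });
        }

        jQuery(document).ready(function() {
            wireFieldLoader("#inputs",  "#fields");
            wireFieldLoader("#inputs2", "#fields2");
        });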

    Read the article

  • Why doesn't Data Driven Subscription in SSRS 2005 like my Stored Procedure?

    - by bert
    I'm trying to define a Data Driven Subscription for a report in SSRS 2005. In Step 3 of the setup you're asked for "a command or query that returns a list of recipients and optionally returns fields used to vary delivery settings and report parameter values for each recipient". This I have written, and it returns the data without a hitch.

    I press Next and it rolls on to the next screen in the setup, which has all the variables to set for the DDS; in each case it has an option to "Select Value From Database". I select this radio button and press the drop-down. No fields are available to me.

    Now, the only way I could vary the number of fields returned by the SP was to have the SP write the SQL to an nvarchar variable and then, at the end, execute the variable as SQL. I have tested this in Management Studio and it returns the expected fields. I even named them after the fields in SSRS, but the thing won't put the field names into the drop-downs. I've even taken the query body out of the stored proc, verified it in SSRS, and then tried that. It doesn't work either. Can anyone shed any light on what I'm doing wrong?
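
    A hedged pointer for anyone hitting the same wall (a known quirk rather than something stated in the question): SSRS discovers the field list by asking the provider for the query's metadata, and a procedure built from EXEC(@sql) exposes no column metadata at design time. The classic workaround is a static SELECT that can never execute but fixes the result shape; the column names here are illustrative:

        -- Hypothetical header inside the stored procedure: never runs,
        -- but lets SSRS infer the result-set columns of the dynamic SQL.
        IF 1 = 0
            SELECT CAST(NULL AS nvarchar(256)) AS EmailAddress,
                   CAST(NULL AS nvarchar(50))  AS ParameterValue;

        DECLARE @sql nvarchar(max);
        -- ... build @sql so its SELECT list matches the columns above ...
        EXEC (@sql);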

    Read the article

  • Guidelines for using Merge task in SSIS

    - by thursdaysgeek
    I have a table with three fields, one an identity field, and I need to add some new records from a source that has the other two fields. I'm using SSIS, and I think I should use the Merge tool, because one of the sources is not in the local database. But I'm confused by the Merge tool and the proper process.

    I have my one source (an Oracle table), from which I get two fields, well_id and well_name, with a sort after it, sorting by well_id. I have the destination table (SQL Server), and I'm also using that as a source. It has three fields: well_key (identity field), well_id, and well_name; I then have a sort task, sorting on well_id. Both of those are inputs to my Merge task. I was going to output to a temporary table, and then somehow get the new records back into the SQL Server table.

        Oracle Well         SQL Well
             |                  |
             V                  V
        Sort Source         Sort Well
             |                  |
             --------> Merge* <--
                         |
                         V
                  Temp well table

    I suspect this isn't the best way to use this tool, however. What are the proper steps for a merge like this? One of my reasons for questioning this method is that my merge has an error, telling me that "Merge Input 2" must be sorted; but its source is a sort task, so it IS sorted.

    Example data:

        SQL Well (before merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d

        Oracle Well
        well_id  well_name
        123      well k
        292      well c
        311      well y
        344      well t
        439      well d
        532      well j

        SQL Well (after merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d
        6         311      well y
        7         532      well j

    Would it be better to load my Oracle Well to a temporary local table, and then just use a SQL insert statement on it?
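
    On that closing idea, once the Oracle rows are staged locally the whole job reduces to one statement; a sketch assuming the destination table is named well and the staging table stage_well (both names are the editor's assumptions):

        -- Insert only the Oracle rows whose well_id is not in the SQL table yet;
        -- well_key fills itself in as the identity column.
        INSERT INTO well (well_id, well_name)
        SELECT s.well_id, s.well_name
        FROM stage_well AS s
        WHERE NOT EXISTS (SELECT 1 FROM well AS w WHERE w.well_id = s.well_id);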

    Read the article
