Search Results

Search found 9366 results on 375 pages for 'common lisp'.

Page 303/375 | < Previous Page | 299 300 301 302 303 304 305 306 307 308 309 310  | Next Page >

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks — the output file has to be in the order o0, o1, o2, ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
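
    The Enterprise Integration Patterns literature describes a very similar construct under the name Resequencer, which may be a useful search term. As a minimal Java sketch of the idea (the class and method names here are illustrative, not taken from the post): worker threads hand finished blocks to submit(), and a block is only released to the output stream once every lower-numbered block has already been written.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.TreeMap;

        // Accumulates out-of-order output blocks and writes them to the sink in index order.
        public class ReorderBuffer {
            private final OutputStream sink;
            private final TreeMap<Integer, byte[]> pending = new TreeMap<>();
            private int nextIndex = 0; // index of the block the file expects next

            public ReorderBuffer(OutputStream sink) {
                this.sink = sink;
            }

            // Called by worker threads when job 'index' has produced 'block'.
            public synchronized void submit(int index, byte[] block) throws IOException {
                pending.put(index, block);
                // Drain every block that is now contiguous with what has been written so far.
                while (!pending.isEmpty() && pending.firstKey() == nextIndex) {
                    sink.write(pending.remove(nextIndex));
                    nextIndex++;
                }
            }
        }

    This sketch ignores the memory constraint mentioned above; a bounded variant would make submit() block (or apply back-pressure) when the buffer runs too far ahead of nextIndex.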

    Read the article

  • What OpenGL functions are not GPU accelerated?

    - by Xavier Ho
    I was shocked when I read this (from the OpenGL wiki): glTranslate, glRotate, glScale Are these hardware accelerated? No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason why these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrix, upload your matrix to the shader. For a very, very long time I thought most of the OpenGL functions used the GPU to do computation. I'm not sure if this is a common misconception, but after a while of thinking, this makes sense. Old OpenGL functions (2.x and older) are really not suitable for real-world applications, due to too many state switches. This makes me realise that, possibly, many OpenGL functions do not use the GPU at all. So, the question is: Which OpenGL function calls don't use the GPU? I believe knowing the answer to the above question would help me become a better programmer with OpenGL. Please do share some of your insights.

    Read the article

  • PHP IIS problems downloading file says it is corrupt

    - by Matt
    Hi, I am running PHP on IIS 6 with MSSQL. I have uploaded a file to my webserver through a PHP script. Upon checking the file on the server, the file is OK and not corrupt. However, when I then have a link on my website to try and download the file, it says the file is corrupt. I know the file isn't corrupt as I can view it perfectly if I look at the file on the server. It seems like this is a common problem, as a similar problem was posted here: http://www.bigresource.com/Tracker/Track-php-1pAakBhT/ Any help would be much appreciated. Thanks, M My download code is as follows: $filesize = $rows->filesize; $filepath = $rows->filepath; header("Content-Disposition: attachment; filename=$filename"); header("Content-length: $filesize"); header("Content-type: application/pdf"); header("Cache-control: must-revalidate"); header("Content-Description: PHP Generated Data"); readfile($filepath);

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to mysql (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables / columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a mysql expert and there are surely features of mysql that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? explain select * from atable where class = somefunction(...), where atable.class is indexed and not unique and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?

    Read the article

  • Modules vs. Classes and their influence on descendants of ActiveRecord::Base

    - by Chris
    Here's a Ruby OO head-scratcher for ya, brought about by this Rails scenario: class Product < ActiveRecord::Base has_many(:prices) # define private helper methods end module PrintProduct attr_accessor(:isbn) # override methods in ActiveRecord::Base end class Book < Product include PrintProduct end Product is the base class of all products. Books are kept in the products table via STI. The PrintProduct module brings some common behavior and state to descendants of Product. Book is used inside fields_for blocks in views. This works for me, but I found some odd behavior: After form submission, inside my controller, if I call a method on a book that is defined in PrintProduct, and that method calls a helper method defined in Product, which in turn calls the prices method defined by has_many, I'll get an error complaining that Book#prices is not found. Why is that? Book is a direct descendant of Product! More interesting is the following: As I developed this hierarchy, PrintProduct started to become more of an abstract ActiveRecord::Base, so I thought it prudent to redefine everything as such: class Product < ActiveRecord::Base end class PrintProduct < Product end class Book < PrintProduct end All method definitions, etc. are the same. In this case, however, my web form won't load because the attributes defined by attr_accessor (which are "virtual attributes" referenced by the form but not persisted in the DB) aren't found. I'll get an error saying that there is no method Book#isbn. Why is that?? I can't see a reason why the attr_accessor attributes are not found inside my form's fields_for block when PrintProduct is a class, but they are found when PrintProduct is a Module. Any insight would be appreciated. I'm dying to know why these errors are occurring!

    Read the article

  • As an Agile Java developer, what should I be looking for when hiring a C++ developer?

    - by agoudzwaard
    I come from an effective team of Agile Java developers. We've had a lot of success in hiring more people like ourselves - people passionate about technology with experience primarily in the Agile Java/J2EE space. We're looking to hire our first C++ developer to serve as an on-shore resource for maintaining and adding to the C++ portion of our code base. Up until now the entirety of our C++ development has been done out of an off-shore location. We consider our interview process to be fairly thorough: A phone screen centered on Object-Oriented Programming and Java A non-trivial at-home code project using Java An in-person interview covering technical and behavioral competency We look for a demonstration of Agile best practices (expressive code, test-driven development, continuous integration) throughout the entire process, however there is a common conception that Agility is primarily practiced by Java developers. If we retrofit our interview process for C++, should we still expect Agile qualities when interviewing for a C++ role? I'm asking on behalf of a team that has worked with Java too long to know what a good C++ developer looks like. Specifically we're looking to answer the following questions: Can we expect a demonstrated understanding of OO design and Separation of Concerns? In the code project we want the candidate to write unit tests. Would a good C++ developer be surprised by this expectation? Are there any "extra" competencies we can look for? For example with Java developers we always look for a familiarity with Dependency Injection.

    Read the article

  • How can I make one Maven module depend on another?

    - by Daniel Pryden
    OK, I thought I understood how to use Maven... I have a master project M which has sub-projects A, B, and C. C contains some common functionality (interfaces mainly) which is needed by A and B. I can run mvn compile jar:jar from the project root directory (the M directory) and get JAR files A.jar, B.jar, and C.jar. (The versions for all these artifacts are currently 2.0-SNAPSHOT.) The master pom.xml file in the M directory lists C under its <dependencyManagement> tag, so that A and B can reference C by just including a reference, like so: <dependency> <groupId>my.project</groupId> <artifactId>C</artifactId> </dependency> So far, so good. I can run mvn compile from the command line and everything works fine. But when I open the project in NetBeans, it complains with the problem: "Some dependency artifacts are not in the local repository", and it says the missing artifact is C. Likewise from the command line, if I change into the A or B directories and try to run mvn compile I get "Build Error: Failed to resolve artifact." I expect I could manually go to where my C.jar was built and run mvn install:install-file, but I'd rather find a solution that enables me to just work directly in NetBeans (and/or in Eclipse using m2eclipse). What am I doing wrong?

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby Testing. By the end of the day (6am PST?) I would like to be able to: Type a single command to run the test suite for ANY project I find on Github. Run autotest for ANY Github project so I can fork and make TESTABLE contributions. Build gems from the ground up with Autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is because unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest so that whenever you git clone a gem, you can run the tests and autotest-develop them if desired? What are the guys who are writing the Paperclip Tests and Authlogic Tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • Is there a general-purpose printf-ish routine defined in any C standard?

    - by supercat
    In many C libraries, there is a printf-style routine which is something like the following: int __vgprintf(void *info, void (*print_function)(void *, char), const char *format, va_list params); which will format the supplied string and call print_function with the passed-in info value and each character in sequence. A function like fprintf will pass __vgprintf the passed-in file parameter and a pointer to a function which will cast its void* to a FILE* and output the passed-in character to that file. A function like snprintf will create a struct holding a char* and length, and pass the address of that struct to a function which will output each character in sequence, space permitting. Is there any standard for such a function, which could be used if e.g. one wanted a function to output an arbitrary format to a TCP port? A common approach is to allocate a buffer one hopes is big enough, use snprintf to put the data there, and then output the data from the buffer. It would seem cleaner, though, if there were a standard way to specify that the print formatter should call a user-supplied routine with each character.

    Read the article

  • .NET Web Service hydrate custom class

    - by row1
    I am consuming an external C# Web Service method which returns a simple calculation result object like this: [Serializable] public class CalculationResult { public string Name { get; set; } public string Unit { get; set; } public decimal? Value { get; set; } } When I add a Web Reference to this service in my ASP .NET project, Visual Studio is kind enough to generate a matching class so I can easily consume and work with it. I am using Castle Windsor and I may want to plug in other methods of getting a calculation result object, so I want a common class CalculationResult (or ICalculationResult) in my solution which all my objects can work with; this will always match the object returned from the external Web Service 1:1. Is there any way I can tell my Web Service client to hydrate a particular class instead of its generated one? I would rather not do it manually: foreach(var fromService in calcuationResultsFromService) { ICalculationResult calculationResult = new CalculationResult() { Name = fromService.Name }; yield return calculationResult; } Edit: I am happy to use a Service Reference type instead of the older Web Reference.

    Read the article

  • Where does the delete control go in my Cocoa user interface?

    - by Graham Lee
    Hi, I have a Cocoa application managing a collection of objects. The collection is presented in an NSCollectionView, with a "new object" button nearby so users can add to the collection. Of course, I know that having a "delete object" button next to that button would be dangerous, because people might accidentally knock it when they mean to create something. I don't like having "are you sure you want to..." dialogues, so I dispensed with the "delete object". There's a menu item under Edit for removing an object, and you can hit Cmd-backspace to do the same. The app supports undoing delete actions. Now I'm getting support emails ranging from "does it have to be so hard to delete things" to "why can't I delete objects?". That suggests I've made it a bit too hard, so what's the happy middle ground? I see applications from Apple that do it my way, or with the add/remove buttons next to each other, but I hate that latter option. Is there another good (and preferably common) convention for delete controls? I thought about an action menu but I don't think I have any other actions that would go in it, rendering the menu a bit thin.

    Read the article

  • java - question about thread abortion and deadlock - volatile keyword

    - by Tiyoal
    Hello all, I am having some trouble understanding how to stop a running thread. I'll try to explain it by example. Assume the following class: public class MyThread extends Thread { protected volatile boolean running = true; public void run() { while (running) { synchronized (someObject) { while (someObject.someCondition() == false && running) { try { someObject.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } // do something useful with someObject } } } public void halt() { running = false; interrupt(); } } Assume the thread is running and the following statement is evaluated to true: while (someObject.someCondition() == false && running) Then, another thread calls MyThread.halt(). Even though this function sets 'running' to false (which is a volatile boolean) and interrupts the thread, the following statement is still executed: someObject.wait(); We have a deadlock. The thread will never be halted. Then I came up with this, but I am not sure if it is correct: public class MyThread extends Thread { protected volatile boolean running = true; public void run() { while (running) { synchronized (someObject) { while (someObject.someCondition() == false && running) { try { someObject.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } // do something useful with someObject } } } public void halt() { running = false; synchronized(someObject) { interrupt(); } } } Is this correct? Is this the most common way to do this? This seems like an obvious question, but I fail to come up with a solution. Thanks a lot for your help.
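
    For comparison, here is one common shape for this kind of stoppable worker, sketched with a placeholder lock object ('lock' and conditionMet() are hypothetical stand-ins for the poster's someObject and someCondition(), which are not shown): halt() wakes the waiter under the same lock via notifyAll(), and the catch block restores the interrupt status and lets the running check decide whether to exit, so a wait() can never outlive a halt() call.

        // Sketch only; 'lock' and conditionMet() are hypothetical stand-ins.
        public class StoppableWorker extends Thread {
            private final Object lock;
            private volatile boolean running = true;

            public StoppableWorker(Object lock) {
                this.lock = lock;
            }

            @Override
            public void run() {
                while (running) {
                    synchronized (lock) {
                        while (running && !conditionMet()) {
                            try {
                                lock.wait();
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt(); // keep the interrupt flag set
                                return;                             // treat interruption as a stop request
                            }
                        }
                        if (!running) {
                            return;
                        }
                        // do something useful with the shared object here
                    }
                }
            }

            public void halt() {
                running = false;
                synchronized (lock) {
                    lock.notifyAll(); // wake the waiter so it re-checks 'running'
                }
                interrupt();
            }

            private boolean conditionMet() {
                return false; // placeholder for someObject.someCondition()
            }
        }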

    Read the article

  • Display and Use Advanced Find for Product Subscriptions by Account in Microsoft CRM 4.0

    - by Chris
    By default when viewing an account in edit mode you have access to Opportunities, Invoices, and Quotes which contain the products being shopped by the account and/or the sales department. I'm trying to determine where to store, display, and use the products that an account has a subscription to. I may not understand the implementation but it seems that there should be a "Products" option directly off the root Account management window that will show the user all the products the account has purchased. We are trying to integrate this with our production tracking system where product sales can originate from other channels that will not flow through CRM first. This product subscription does not fit into the Opportunity, Quote, or Invoice model because they are confirmed recurring sales that were automatically purchased via tools like a Public Website, Portal, etc. By enabling this tracking in CRM we can use the advanced find feature to facilitate follow-up sales and marketing efforts. Example: Find everyone who is subscribed to model A, so we can notify them of a new holiday campaign where they can get 10% off on all add-ons. It's my assumption that this is a common scenario; however, I'd like to better understand how to approach this within the world of Microsoft CRM. Thank you in advance.

    Read the article

  • C# "Rename" Property in Derived Class

    - by Eric
    When you read this you'll be awfully tempted to give advice like "this is a bad idea for the following reason..." Bear with me. I know there are other ways to approach this. This question should be considered trivia. Let's say you have a class "Transaction" that has properties common to all transactions such as Invoice, Purchase Order, and Sales Receipt. Let's take the simple example of Transaction "Amount", which is the most important monetary amount for a given transaction. public class Transaction { public double Amount { get; set; } public TxnTypeEnum TransactionType { get; set; } } This Amount may have a more specific name in a derived type... at least in the real world. For example, the following values are all actually the same thing: Transaction - Amount Invoice - Subtotal PurchaseOrder - Total Sales Receipt - Amount So now I want a derived class "Invoice" that has a Subtotal rather than the generically-named Amount. Ideally both of the following would be true: In an instance of Transaction, the Amount property would be visible. In an instance of Invoice, the Amount property would be hidden, but the Subtotal property would refer to it internally. Invoice looks like this: public class Invoice : Transaction { new private double? Amount { get { return base.Amount; } set { base.Amount = value; } } // This property should hide the generic property "Amount" on Transaction public double? SubTotal { get { return Amount; } set { Amount = value; } } public double RemainingBalance { get; set; } } But of course Transaction.Amount is still visible on any instance of Invoice. Thanks for taking a look!

    Read the article

  • How to structure this Symfony web project?

    - by James William
    I am new to Symfony and am not sure how to best structure my web project. The solution must accommodate 3 use cases: Public access to www.mydomain.com for general use Member-only access to member.mydomain.com Administrator access to admin.mydomain.com All three virtual hosts point to the Symfony /web directory Questions: Is this 3 separate applications in my Symfony project (e.g. "frontend", "backend" and "admin" or "public", "member", "admin")? Is this a good approach if there is to be some duplicate code (e.g. generating a member list would be common across all 3 applications, but presented differently)? How would I route to the various applications based on the subdomain when a user accesses *.mydomain.com? Where in Symfony should this routing logic be placed? Or, is this one application with modules for each of the above use cases? EDIT: I do not have access to httpd.conf in apache to specify a default page for virtual hosts. I can only specify a directory for each subdomain using the hosting provider's cPanel.

    Read the article

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP .NET website. They already have a significant amount of code written -- I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the amount of DataTables being constructed jumped out at me. Being relatively new to the field, I've never worked on an application that utilizes a database as much as this site does, so I'm not sure how common this is. It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or it's passed to a constructor. Classes that are initialized with a DataTable always assign the DataTable to a private/protected field; however, only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP. Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?

    Read the article

  • javascript watching for variable change via a timer

    - by DA
    I have a slide show where every 10 seconds the next image loads. Once an image loads, it waits 10 seconds (setTimeout) and then calls a function which fades it out, then calls the function again to fade in the next image. It repeats indefinitely. This works. However, at times, I'm going to want to override this "function1 call - pause - function2 call - function1 call" loop outside of this function (for instance, I may click on a slide # and want to jump directly to that slide and cancel the current pause). One solution seems to be to set up a variable, allow other events to change that variable and then in my original function chain, check for any changes before continuing the loop. I have this set up: var ping = 500; var pausetime = 10000; for(i=0; i<=pausetime; i=i+ping){ setTimeout(function(){ console.log(gocheckthevariable()); console.log(i); pseudoIf('the variable is true AND the timer is done then...do the next thing') },i) } The idea is that I'd like to pause for 10 seconds. During this pause, every half second I'll check to see if this should be stopped. The above will write out to the console the following every half second: true 10000 true 10000 true 10000 etc... But what I want/need is this: true 0 true 500 true 1000 etc... The basic problem is that the for loop executes before each of the set timeouts. While I can check the value of the variable every half second, I can't match that against the current loop index. I'm sure this is a common issue, but I'm at a loss as to what I should actually be searching for.

    Read the article

  • JPA - Real primary key generated ID for references

    - by Val
    I have ~10 classes, each of which has a composite key consisting of 2-4 values. One of the classes is the main one (let's call it "Center") and it is related to the others as one-to-one or one-to-many. Thinking about the correct way of describing this in JPA, I think I need to describe all the primary keys using @Embedded / @PrimaryKey annotations. Question #1: My concern is - does it mean that on the database level I will have a number of additional columns in each table referring to the "Center" equal to the number of columns in the "Center" PK? If yes, is it possible to avoid it by using some artificial unique key for references? Could you please give an idea how the real PK and the artificial one need to be described in this case? Note: The reason why I would like to keep the real PK and not just use the unique id as PK is - my application has some data loading functionality from external data sources and sometimes they may return records which I already have in the local database. If a unique ID is used as the PK, then for new records I won't be able to do a data update, since the unique ID will not be available for just-downloaded ones. At the same time this is a normal scenario for the application; it just needs to update or insert records depending on whether the real composite primary key matches. Question #2: All of the 10 classes have a common field "date" which I described in an abstract class which each of them extends. The "date" itself is never a key, but it is always part of the composite key for each class. The composite key is different for each class. To be able to use this field as part of the PK, should I describe it in each class or is there any way to use it as is? I experimented with @Embedded and @PrimaryKey annotations and always got an error that eclipselink can't find the field described in an abstract class. Thank you in advance! PS. I'm using the latest version of eclipselink & H2 database.
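
    On Question #2, one detail worth checking (stated as a guess, since the mappings aren't shown): JPA providers only pick up mapped fields from a superclass if that superclass is annotated with @MappedSuperclass; fields declared in a plain abstract class are ignored, which matches the "can't find field" error. A minimal sketch of a composite key that includes the shared date, with invented field and class names:

        import java.io.Serializable;
        import java.util.Date;
        import javax.persistence.Embeddable;
        import javax.persistence.EmbeddedId;
        import javax.persistence.Entity;
        import javax.persistence.Temporal;
        import javax.persistence.TemporalType;

        // Composite key type; the field names are illustrative, not taken from the question.
        @Embeddable
        class CenterKey implements Serializable {
            @Temporal(TemporalType.DATE)
            private Date date;
            private String regionCode;

            // An @Embeddable key must define equals() and hashCode() over all key fields.
            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof CenterKey)) return false;
                CenterKey other = (CenterKey) o;
                return date.equals(other.date) && regionCode.equals(other.regionCode);
            }

            @Override
            public int hashCode() {
                return 31 * date.hashCode() + regionCode.hashCode();
            }
        }

        @Entity
        public class Center {
            @EmbeddedId
            private CenterKey id;
            // ... non-key fields and relationships
        }

    On Question #1: a foreign key that points at a composite-PK entity normally does need one column per key column; avoiding that generally means giving "Center" a surrogate column to reference while keeping a unique constraint on the natural key for the load/merge logic.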

    Read the article

  • Combining Java hashcodes into a "master" hashcode

    - by Nick Wiggill
    I have a vector class with hashCode() implemented. It wasn't written by me, but uses 2 prime numbers by which to multiply the 2 vector components before XORing them. Here it is: /*class Vector2f*/ ... public int hashCode() { return 997 * ((int)x) ^ 991 * ((int)y); //large primes! } ...As this is from an established Java library, I know that it works just fine. Then I have a Boundary class, which holds 2 vectors, "start" and "end" (representing the endpoints of a line). The values of these 2 vectors are what characterize the boundary. /*class Boundary*/ ... public int hashCode() { return 1013 * (start.hashCode()) ^ 1009 * (end.hashCode()); } Here I have attempted to create a good hashCode() for the unique 2-tuple of vectors (start & end) constituting this boundary. My question: Is this hashCode() implementation going to work? (Note that I have used 2 different prime numbers in the latter hashCode() implementation; I don't know if this is necessary but better to be safe than sorry when trying to avoid common factors, I guess -- since I presume this is why primes are popular for hashing functions.)
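
    As a point of comparison, the conventional combination in Java (and what java.util.Objects.hash produces) is a multiply-accumulate with 31 rather than an XOR of two scaled values. A short sketch, with the vector fields reduced to plain Objects for illustration:

        import java.util.Objects;

        // Sketch: start/end stand in for the question's Vector2f fields.
        final class Boundary {
            private final Object start;
            private final Object end;

            Boundary(Object start, Object end) {
                this.start = start;
                this.end = end;
            }

            @Override
            public int hashCode() {
                // Same as 31 * (31 + start.hashCode()) + end.hashCode() for non-null fields.
                return Objects.hash(start, end);
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Boundary)) return false;
                Boundary other = (Boundary) o;
                return Objects.equals(start, other.start) && Objects.equals(end, other.end);
            }
        }

    Either form satisfies the hashCode contract as long as equals() compares the same two fields, so the prime-multiplied XOR version in the question will also work.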

    Read the article

  • [Java] Safe way of exposing keySet().

    - by Jake
    This must be a fairly common occurrence where I have a map and wish to thread-safely expose its key set: public class MyClass { Map<String,String> map = // ... public final Set<String> keys() { // returns key set } } Now, if my "map" is not thread-safe, this is not safe: public final Set<String> keys() { return map.keySet(); } And neither is: public final Set<String> keys() { return Collections.unmodifiableSet(map.keySet()); } So I need to create a copy, such as: public final Set<String> keys() { return new HashSet(map.keySet()); } However, this doesn't seem safe either because that constructor traverses the elements of the parameter and add()s them. So while this copying is going on, a ConcurrentModificationException can happen. So then: public final Set<String> keys() { synchronized(map) { return new HashSet(map.keySet()); } } seems like the solution. Does this look right?
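
    If swapping the map implementation is an option, a ConcurrentHashMap sidesteps both the copy and the external synchronization, because its iterators are weakly consistent and never throw ConcurrentModificationException. A small sketch of that variant (the class shape mirrors the snippet above):

        import java.util.Collections;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;

        public class MyClass {
            private final Map<String, String> map = new ConcurrentHashMap<>();

            // Live, read-only view: reflects later puts/removes made by other threads.
            public final Set<String> keys() {
                return Collections.unmodifiableSet(map.keySet());
            }

            // Stable snapshot: copying is safe here because the source iterator never
            // fails, even while other threads are modifying the map.
            public final Set<String> keysSnapshot() {
                return new HashSet<>(map.keySet());
            }
        }

    With a plain HashMap, the synchronized-copy version in the question is the right idea, provided every writer also synchronizes on the same lock.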

    Read the article

  • There is a Default instance of form in VB.Net but not in C#, WHY?

    - by Shekhar_Pro
    I'm just curious to know: there is the (Name) property, which represents the name of the Form class. This property is used within the namespace to uniquely identify the class that the Form is an instance of and, in the case of Visual Basic, is used to access the default instance of the form. Now where does this Default Instance come from, and why can't C# have an equivalent method to this? Also, for example, to show a form in C# we do something like this: //Only method Form1 frm = new Form1(); frm.Show(); But in VB.Net we have both ways to do it: //'First common method (used slash because editor wouldn't format it properly) Form1.Show(); //'Second method Dim frm as New Form1(); frm.Show(); My question comes from this first method. What is this Form1 - is it an instance of Form1 or the Form1 class itself? Now, as I mentioned above, the Form name is the Default instance in VB.Net. But we also know that Form1 is a class defined in the Designer, so how can the name be the same for both the instance and the class? If Form1 is a class then there is no (Static\Shared) method named Show(). So where does this method come from? And finally, why can't C# have an equivalent of this? If there is some mistake in my question, please let me know. *I've checked this on stackoverflow, but couldn't find an answer to this. If you do find one, please give a link to it.*

    Read the article

  • How to amend return value design in OO manner?

    - by FrontierPsycho
    Hello. I am no newb at OO programming, but I am faced with a puzzling situation. I have been given a program to work on and extend, but the previous developers didn't seem that comfortable with OO; it seems they either had a C background or an unclear understanding of OO. Now, I don't suggest I am a better developer; I just think that I can spot some common OO errors. The difficult task is how to amend them. In my case, I see a lot of this: if (ret == 1) { out.print("yadda yadda"); } else if (ret == 2) { out.print("yadda yadda"); } else if (ret == 3) { out.print("yadda yadda"); } else if (ret == 0) { out.print("yadda yadda"); } else if (ret == 5) { out.print("yadda yadda"); } else if (ret == 6) { out.print("yadda yadda"); } else if (ret == 7) { out.print("yadda yadda"); } ret is a value returned by a function, in which all Exceptions are swallowed, and in the catch blocks, the above values are returned explicitly. Oftentimes, the Exceptions are simply swallowed, with empty catch blocks. It's obvious that swallowing exceptions is wrong OO design. My question concerns the use of return values. I believe that too is wrong; however, I think that using Exceptions for control flow is equally wrong, and I can't think of anything to replace the above in a correct, OO manner. Your input, please?
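
    One frequent middle ground, sketched below in Java (the outcome names and messages are invented, since the original meanings aren't shown): have the method return a small enum or result object instead of a bare int, so each outcome carries its own user-facing message and the if/else ladder collapses to one line, while exceptions stay reserved for genuinely exceptional failures rather than ordinary outcomes.

        // Illustrative only: outcome names and messages are not from the original program.
        enum Outcome {
            OK("operation completed"),
            NOT_FOUND("no matching record"),
            PERMISSION_DENIED("access denied");

            private final String message;

            Outcome(String message) {
                this.message = message;
            }

            String message() {
                return message;
            }
        }

        public class Caller {
            public static void main(String[] args) {
                Outcome ret = doSomething();
                // Replaces the "if (ret == 1) ... else if (ret == 2) ..." ladder:
                System.out.println(ret.message());
            }

            private static Outcome doSomething() {
                return Outcome.OK; // stand-in for the method that used to return an int code
            }
        }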

    Read the article

  • Enforce link in Team foundation server bug work item for duplicates

    - by Tewr
    We have just started out with Team Foundation Server 2008 / Visual Studio Team System and we are pleased to find how we can export and modify work items to suit our needs. We do, however, see a potential problem in non-developers reporting bugs which turn out to be duplicates. We would like to enforce that users who close a ticket with resolved reason:duplicate also create a link to the bug which is perceived as the first bug report. I have looked at System.RelatedLinkCount, and put the rule <FIELD type="Integer" name="RelatedLinkCount" refname="System.RelatedLinkCount"> <WHEN field="Microsoft.VSTS.Common.ResolvedReason" value="duplicate"> <PROHIBITEDVALUES> <LISTITEM value="0" /> </PROHIBITEDVALUES> </WHEN> </FIELD> However, when I try to put anything in that scope, the importer tells me that System.RelatedLinkCount does not accept the rule, no matter what I put, but the rule above shows what I am trying to do (even though the most preferable rule would also check that the bug that I link to is not a duplicate as well, though this is overkill :P) Has anyone else tried to enforce rules like this in work items? Is there another approach to solving the same issue? I am thankful for any thoughts on the matter.

    Read the article

  • DefaultTableCellRenderer getTableCellRendererComponent never gets called

    - by Dean Schulze
    I need to render a java.util.Date in a JTable. I've implemented a custom renderer that extends DefaultTableCellRenderer (below). I've set it as the renderer for the column, but the method getTableCellRendererComponent() never gets called. This is a pretty common problem, but none of the solutions I've seen work. public class DateCellRenderer extends DefaultTableCellRenderer { String sdfStr = "yyyy-MM-dd HH:mm:ss.SSS"; SimpleDateFormat sdf = new SimpleDateFormat(sdfStr); public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) { super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column); if (value instanceof Date) { this.setText(sdf.format((Date) value)); } else logger.info("class: " + value.getClass().getCanonicalName()); return this; } } I've installed the custom renderer (and verified that it has been installed) like this: DateCellRenderer dcr = new DateCellRenderer(); table.getColumnModel().getColumn(2).setCellRenderer(dcr); I've also tried table.setDefaultRenderer( java.util.Date.class, dcr ); Any idea why the renderer gets installed, but never called?
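
    One frequent cause of this exact symptom, offered as a guess since the table model isn't shown: setDefaultRenderer(Date.class, ...) is only consulted when the model's getColumnClass() actually reports Date.class for that column, and DefaultTableModel reports Object.class for every column; likewise, a renderer set on a TableColumn is silently discarded if the columns are later auto-recreated by a setModel() call or a structure-change event. A short sketch of the model-side fix, assuming the dates live in column 2:

        import java.util.Date;
        import javax.swing.table.DefaultTableModel;

        // Illustrative model: assumes column 2 holds java.util.Date values.
        public class DateAwareTableModel extends DefaultTableModel {
            @Override
            public Class<?> getColumnClass(int columnIndex) {
                if (columnIndex == 2) {
                    return Date.class; // lets table.setDefaultRenderer(Date.class, dcr) take effect
                }
                return super.getColumnClass(columnIndex);
            }
        }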

    Read the article

  • return Queryable<T> or List<T> in a Repository<T>

    - by Danny Chen
    Currently I'm building a Windows application using SQLite. In the database there is a table, say User, and in my code there is a Repository<User> and a UserManager. I think it's a very common design. In the repository there is a List method: //Repository<User> class public List<User> List(where, orderby, topN parameters and etc) { //query and return } This brings a problem if I want to do something complex in UserManager.cs: //UserManager.cs public List<User> ListUsersWithBankAccounts() { var userRep = new UserRepository(); var bankRep = new BankAccountRepository(); var result = //do something complex, say "I want the users live in NY //and have at least two bank accounts in the system } You can see, returning List<User> brings a performance issue, because the query is executed earlier than expected. Now I need to change it to something like an IQueryable<T>: //Repository<User> class public TableQuery<User> List(where, orderby, topN parameters and etc) { //query and return } TableQuery<T> is part of the SQLite driver, which is roughly equivalent to IQueryable<T> in EF: it represents a query and won't execute it immediately. But now the problem is that UserManager.cs doesn't know what a TableQuery<T> is, so I need to add a new reference and import namespaces like using SQLite.Query in the business layer project. It really feels like a code smell. Why should my business layer know the details of the database? Why should the business layer know what SQLite is? What's the correct design then?

    Read the article
