Search Results

Search found 25521 results on 1021 pages for 'static objects'.


  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I have reached a point where I need to make a complex custom raw SQL query that, without a LIMIT, will return about 100K records. How can I use the Django Paginator with a custom query? Simplified example of my problem: My model: class PersonManager(models.Manager): def complicated_list(self): from django.db import connection cursor = connection.cursor() #Real query is much more complex cursor.execute("""SELECT * FROM `myapp_person`"""); result_list = [] for row in cursor.fetchall(): result_list.append(row[0]); return result_list class Person(models.Model): name = models.CharField(max_length=255); surname = models.CharField(max_length=255); age = models.IntegerField(); objects = PersonManager(); The way I use pagination with the Django ORM: all_objects = Person.objects.all(); paginator = Paginator(all_objects, 10); try: page = int(request.GET.get('page', '1')) except ValueError: page = 1 try: persons = paginator.page(page) except (EmptyPage, InvalidPage): persons = paginator.page(paginator.num_pages) This way, Django gets very smart and adds a LIMIT to the query when executing it. But when I use the custom manager: all_objects = Person.objects.complicated_list(); all data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave like the built-in one?
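
    As an illustration only (not from the original post): the Paginator will accept any object that exposes count()/__len__ and slicing, so one hedged Python sketch is a small wrapper that pushes LIMIT/OFFSET into the raw SQL itself. The table name below is taken from the example; everything else is an assumption.

      from django.db import connection

      class RawSQLPage(object):
          """Minimal object_list stand-in for Paginator: count() plus slice access."""
          def __init__(self, sql, params=()):
              self.sql = sql
              self.params = tuple(params)

          def count(self):
              cursor = connection.cursor()
              cursor.execute("SELECT COUNT(*) FROM (%s) AS sub" % self.sql, self.params)
              return cursor.fetchone()[0]

          __len__ = count

          def __getitem__(self, key):
              # Paginator.page() asks for object_list[bottom:top]
              if not isinstance(key, slice):
                  raise TypeError("only slice access is supported")
              offset = key.start or 0
              limit = (key.stop or 0) - offset
              cursor = connection.cursor()
              cursor.execute("%s LIMIT %%s OFFSET %%s" % self.sql,
                             self.params + (limit, offset))
              return cursor.fetchall()

      # Hypothetical usage with the query from the question:
      # paginator = Paginator(RawSQLPage("SELECT * FROM myapp_person"), 10)

    With this shape, only the COUNT and the requested page ever hit the database, mirroring what the ORM does for regular querysets.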

    Read the article

  • What is the fastest way to Initialize a multi-dimensional array to non-default values in .NET?

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. It takes approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few hundred-thousand to several million times during a "run". The array size will not change during a run and arrays are created one-at-a-time, used, then discarded, and a new array created. A "run" may last from one minute (using 10x10 arrays) to forty-five minutes (100x100). The application creates arrays of int, bool, and struct. There can be multiple "runs" executing at the same time, but there are not, because performance degrades terribly. I am using 100x100 as a base-line. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type. public int[,] CreateArray(Size size) { int[,] array = new int[size.Width, size.Height]; for (int x = 0; x < size.Width; x++) { for (int y = 0; y < size.Height; y++) { array[x, y] = int.MaxValue; } } return array; } Down to 450 ticks with the following: public int[,] CreateArray1(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { array[x, y] = int.MaxValue; } } return array; } Down to approximately 165 ticks after a one-time initialization of 2800 ticks. (See my answer below.) If I can get stackalloc to work with multi-dimensional arrays, I should be able to get the same performance without having to initialize the private static array. private static bool _arrayInitialized5; private static int[,] _array5; public static int[,] CreateArray5(Size size) { if (!_arrayInitialized5) { int iX = size.Width; int iY = size.Height; _array5 = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { _array5[x, y] = int.MaxValue; } } _arrayInitialized5 = true; } return (int[,])_array5.Clone(); } Down to approximately 165 ticks without using the "clone technique" above. (See my answer below.) I am sure I can get the ticks lower, if I can just figure out the return of CreateArray9. public unsafe static int[,] CreateArray8(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; fixed (int* pfixed = array) { int count = array.Length; for (int* p = pfixed; count-- > 0; p++) *p = int.MaxValue; } return array; }
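
    To make the "clone technique" mentioned above concrete, here is a hedged C# sketch (not the poster's code) that caches one fully initialized template array and bulk-copies it into each new array. Array.Copy treats a rectangular array as one long sequence, so the per-element loop runs only once per size.

      using System;

      public static class MaxValueArrays
      {
          private static int[,] _template;   // built once per size, then reused

          public static int[,] Create(int width, int height)
          {
              if (_template == null ||
                  _template.GetLength(0) != width || _template.GetLength(1) != height)
              {
                  _template = new int[width, height];
                  for (int x = 0; x < width; x++)
                      for (int y = 0; y < height; y++)
                          _template[x, y] = int.MaxValue;
              }
              var array = new int[width, height];
              Array.Copy(_template, array, _template.Length);  // single bulk copy
              return array;
          }
      }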

    Read the article

  • NullPointerException using EndlessAdapter with SimpleAdapter

    - by android_dev
    Hello, I am using EndlessAdapter from commonsguy with a SimpleAdapter. I can load data when I make a scroll down without problems, but I have a NullPointerException when I make a scroll up. The problem is in the method @Override public View getView(int position, View convertView,ViewGroup parent) { return wrapped.getView(position, convertView, parent); } from the class AdapterWrapper. The call to wrapped.getView(position, convertView,parent) raises the exception and I don´t know why. This is my implementation of EndlessAdapter: //Inner class in SearchTextActivity class DemoAdapter extends EndlessAdapter { private RotateAnimation rotate = null; DemoAdapter(ArrayList<HashMap<String, String>> result) { super(new SimpleAdapter(SearchTracksActivity.this, result, R.layout.textlist_item, PROJECTION_COLUMNS, VIEW_MAPPINGS)); rotate = new RotateAnimation( 0f, 360f, Animation.RELATIVE_TO_SELF, 0.5f, Animation.RELATIVE_TO_SELF, 0.5f); rotate.setDuration(600); rotate.setRepeatMode(Animation.RESTART); rotate.setRepeatCount(Animation.INFINITE); } @Override protected View getPendingView(ViewGroup parent) { View row=getLayoutInflater().inflate(R.layout.textlist_item, null); View child=row.findViewById(R.id.title); child.setVisibility(View.GONE); child=row.findViewById(R.id.username); child.setVisibility(View.GONE); child=row.findViewById(R.id.throbber); child.setVisibility(View.VISIBLE); child.startAnimation(rotate); return row; } @Override @SuppressWarnings("unchecked") protected void rebindPendingView(int position, View row) { HashMap<String, String> res = (HashMap<String, String>)getWrappedAdapter().getItem(position); View child=row.findViewById(R.id.title); ((TextView)child).setText(res.get("title")); child.setVisibility(View.VISIBLE); child=row.findViewById(R.id.username); ((TextView)child).setText(res.get("username")); child.setVisibility(View.VISIBLE); ImageView throbber=(ImageView)row.findViewById(R.id.throbber); throbber.setVisibility(View.GONE); throbber.clearAnimation(); } boolean mFinal = true; @Override protected boolean cacheInBackground() { EditText searchText = (EditText)findViewById(R.id.searchText); String textToSearch = searchText.getText().toString(); Util.getSc().searchText(textToSearch , offset, limit, new ResultListener<ArrayList<Text>>() { @Override public void onError(Exception e) { e.toString(); mFinal = false; } @Override public void onSuccess(ArrayList<Text> result) { if(result.size() == 0){ mFinal = false; }else{ texts.addAll(result); offset++; } } }); return mFinal; } @Override protected void appendCachedData() { for(Text text : texts){ result.add(text.getMapValues()); } texts.clear(); } } And I use it this way: public class SearchTextActivity extends AbstractListActivity { private static final String[] PROJECTION_COLUMNS = new String[] { TextStore.Text.TITLE, TextStore.Text.USER_NAME}; private static final int[] VIEW_MAPPINGS = new int[] { R.id.Text_title, R.id.Text_username}; ArrayList<HashMap<String, String>> result; static ArrayList<Text> texts; static int offset = 0; static int limit = 1; @Override void onAbstractCreate(Bundle savedInstance) { setContentView(R.layout.search_tracks); setupViews(); } private void setupViews() { ImageButton searchButton = (ImageButton)findViewById(R.id.searchButton); updateView(); } SimpleAdapter adapter; void updateView(){ if(result == null) { result = new ArrayList<HashMap<String, String>>(); } if(tracks == null) { texts = new ArrayList<Text>(); } } public void sendQuery(View v){ offset = 0; texts.clear(); result.clear(); 
setListAdapter(new DemoAdapter(result)); } } Does anybody know what the problem could be? Thank you in advance

    Read the article

  • Dynamically setting the queryset of a ModelMultipleChoiceField to a custom recordset

    - by Daniel Quinn
    I've seen all the howtos about how you can set a ModelMultipleChoiceField to use a custom queryset and I've tried them and they work. However, they all use the same paradigm: the queryset is just a filtered list of the same objects. In my case, I'm trying to get the admin to draw a multiselect form that, instead of using usernames as the text portion of each option, uses the name field from my Account class. Here's a breakdown of what I've got: # models.py class Account(models.Model): name = models.CharField(max_length=128,help_text="A display name that people understand") user = models.ForeignKey(User, unique=True) # Tied to the User class in settings.py class Organisation(models.Model): administrators = models.ManyToManyField(User) # admin.py from django.forms import ModelMultipleChoiceField from django.contrib.auth.models import User class OrganisationAdminForm(forms.ModelForm): def __init__(self, *args, **kwargs): from ethico.accounts.models import Account self.base_fields["administrators"] = ModelMultipleChoiceField( queryset=User.objects.all(), required=False ) super(OrganisationAdminForm, self).__init__(*args, **kwargs) class Meta: model = Organisation This works; however, I want the queryset above to draw a select box with the Account.name property and the User.id property. This didn't work: queryset=Account.objects.all().order_by("name").values_list("user","name") It failed with this error: 'tuple' object has no attribute 'pk' I figured that this would be easy, but it's turned into hours of dead-ends. Anyone care to shed some light?
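
    One hedged sketch of a common route (not from the post): keep User objects in the queryset so the pk machinery still works, and override label_from_instance to display the related Account name. The reverse accessor name below is an assumption based on the models shown.

      from django import forms
      from django.contrib.auth.models import User

      class AccountLabelField(forms.ModelMultipleChoiceField):
          def label_from_instance(self, user):
              try:
                  # reverse of Account.user ForeignKey; adjust if a related_name is set
                  return user.account_set.all()[0].name
              except IndexError:
                  return user.username

      class OrganisationAdminForm(forms.ModelForm):
          administrators = AccountLabelField(queryset=User.objects.all(), required=False)

          class Meta:
              model = Organisation   # imported from your models module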

    Read the article

  • KODO: how to set up a fetch plan for bidirectional relationships?

    - by BestPractices
    Running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection where that collection has a bidirectional relationship back to the first object. class Classroom { List<Student> _students; } class Student { Classroom _classroom; } If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following fetch plan: fetchPlan.addField(Classroom.class, "_students"); This will result in two queries (get the classrooms and then get all students that are in those classrooms), which is what we would expect. However, if we include the reference back to the classroom in our fetch plan in order for the _classroom field to get populated by doing fetchPlan.addField(Student.class, "_classroom"), this will result in X number of additional queries where X is the number of students in each classroom. Can anyone explain how to fix this? KODO already has the original Classroom objects at the point that it's executing the queries to retrieve the Classroom objects and set them in each Student object's _classroom field. So I would expect KODO to simply set those objects in the _classroom field on each Student object accordingly and not go back to the database. Once again, the documentation is sorely lacking with Kodo/JDO/OpenJPA, but from what I've read it should be able to do this more efficiently. Note: EAGER_FETCH.PARALLEL is turned on and I have tried this with caching (query and data caches) turned on and off, and there is no difference in the resultant queries.

    Read the article

  • Hashing the state of a complex object in .NET

    - by Jan
    Some background information: I am working on a C#/WPF application, which basically is about creating, editing, saving and loading some data model. The data model consists of a hierarchy of various objects. There is a "root" object of class A, which has a list of objects of class B, which each has a list of objects of class C, etc. Around 30 classes are involved in total. Now my problem is that I want to prompt the user with the usual "you have unsaved changes, save?" dialog if he tries to exit the program. But how do I know if the data in the currently loaded model has actually changed? There are of course ways to solve this, e.g. reloading the model from file and comparing it against the one in memory value by value, or making every UI control set a flag indicating the model has been changed. Instead, I want to create a hash value based on the model state on load, generate a new value when the user tries to exit, and compare those two. Now the question: inspired by that, I was wondering if there exists some way to generate a hash value from the (value) state of some arbitrarily complex object? Preferably in a generic way, e.g. no need to apply attributes to each involved class/field. One idea could be to use some of .NET's serialization functionality (assuming it will work out-of-the-box in this case) and apply a hash function to the content of the resulting file. However, I guess there exists a more suitable approach. Thanks in advance.
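
    As a rough C# sketch of the serialize-then-hash idea floated at the end (an assumption, not a recommendation): serialize the object graph to a MemoryStream and hash the bytes. This assumes every class in the model is marked [Serializable], and it only answers "did anything change since load?", not "what changed?".

      using System;
      using System.IO;
      using System.Runtime.Serialization.Formatters.Binary;
      using System.Security.Cryptography;

      public static class ModelStateHasher
      {
          public static string ComputeHash(object root)
          {
              using (var stream = new MemoryStream())
              {
                  new BinaryFormatter().Serialize(stream, root);   // whole object graph
                  using (var sha = SHA256.Create())
                  {
                      byte[] hash = sha.ComputeHash(stream.ToArray());
                      return Convert.ToBase64String(hash);
                  }
              }
          }
      }

    The usual caveat applies: compare a hash taken right after load with one taken at exit, within the same process and framework version, since the binary format is not guaranteed stable across versions.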

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a SP that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without an index to seconds with one; it gets faster when the zoom is closer, since fewer objects are displayed. Yet the CPU utilization gets to 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set properly. I haven't found any clear documentation on how to properly set them up. Every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books or articles? Thank you.
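
    For reference, the settings described above would look roughly like the following hedged T-SQL sketch; the table, column and bounding-box values are placeholders, not the real schema.

      CREATE SPATIAL INDEX SIdx_Parcel_Geom
          ON dbo.Parcel (Geom)                           -- geometry column
          USING GEOMETRY_GRID
          WITH (
              BOUNDING_BOX = (0, 0, 500000, 500000),     -- from the extents SP
              GRIDS = (LEVEL_1 = LOW, LEVEL_2 = LOW,
                       LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM),
              CELLS_PER_OBJECT = 16
          );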

    Read the article

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files. I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this without losing speed because of a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it. But I am afraid that this will be significantly slower. So, using the Java libraries, what is the most efficient method of doing this? --Edit-- For external sort, the usual method is to break a large file up into several chunk files. Sort each of the chunks. Then treat the sorted files like buffers: pop the top item from each file; the smallest of all those is the global minimum. Continue until all items have been processed. http://en.wikipedia.org/wiki/External_sorting My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient. This is because using many BufferedReader objects takes up too much space.
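
    The peek/pop behaviour described above is essentially a k-way merge. Here is a hedged Java sketch (invented names, not the poster's code) that keeps only one line per chunk in memory, orders the chunks with a priority queue, and shrinks each reader's buffer, which is usually where the heap goes.

      import java.io.BufferedReader;
      import java.io.File;
      import java.io.FileReader;
      import java.io.IOException;
      import java.io.Writer;
      import java.util.Comparator;
      import java.util.List;
      import java.util.PriorityQueue;

      class ChunkCursor {
          final BufferedReader reader;
          String head;                                   // the only line held in memory

          ChunkCursor(File chunk) throws IOException {
              reader = new BufferedReader(new FileReader(chunk), 4 * 1024); // small buffer
              head = reader.readLine();
          }

          String pop() throws IOException {              // return head, advance to next line
              String current = head;
              head = reader.readLine();
              return current;
          }
      }

      public class ExternalMerge {
          public static void merge(List<File> chunks, Writer out) throws IOException {
              PriorityQueue<ChunkCursor> queue = new PriorityQueue<ChunkCursor>(
                      Math.max(1, chunks.size()),
                      new Comparator<ChunkCursor>() {
                          public int compare(ChunkCursor a, ChunkCursor b) {
                              return a.head.compareTo(b.head);
                          }
                      });
              for (File chunk : chunks) {
                  ChunkCursor cursor = new ChunkCursor(chunk);
                  if (cursor.head != null) { queue.add(cursor); } else { cursor.reader.close(); }
              }
              while (!queue.isEmpty()) {
                  ChunkCursor smallest = queue.poll();   // global minimum across all chunks
                  out.write(smallest.pop());
                  out.write('\n');
                  if (smallest.head != null) { queue.add(smallest); } // re-insert with new head
                  else { smallest.reader.close(); }
              }
          }
      }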

    Read the article

  • How do I make my ArrayList thread-safe? Another approach to the problem in Java?

    - by thechiman
    I have an ArrayList that I want to use to hold RaceCar objects (which extend the Thread class) as soon as they finish executing. A class, called Race, handles this ArrayList using a callback method that the RaceCar object calls when it is finished executing. The callback method, addFinisher(RaceCar finisher), adds the RaceCar object to the ArrayList. This is supposed to give the order in which the Threads finish executing. I know that ArrayList isn't synchronized and thus isn't thread-safe. I tried using the Collections.synchronizedCollection(Collection c) method by passing in a new ArrayList and assigning the returned Collection to an ArrayList. However, this gives me a compiler error: Race.java:41: incompatible types found : java.util.Collection required: java.util.ArrayList finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars)); Here is the relevant code: public class Race implements RaceListener { private Thread[] racers; private ArrayList finishingOrder; //Make an ArrayList to hold RaceCar objects to determine winners finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars)); //Fill array with RaceCar objects for(int i=0; i<numberOfRaceCars; i++) { racers[i] = new RaceCar(laps, inputs[i]); //Add this as a RaceListener to each RaceCar ((RaceCar) racers[i]).addRaceListener(this); } //Implement the one method in the RaceListener interface public void addFinisher(RaceCar finisher) { finishingOrder.add(finisher); } What I need to know is, am I using a correct approach and if not, what should I use to make my code thread-safe? Thanks for the help!
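
    A hedged sketch of the usual fix: synchronizedCollection really does return a plain Collection, so either declare the field by its List interface and use Collections.synchronizedList, or reach for a concurrent collection. Inside the Race class from the question (imports of java.util.Collections and java.util.List assumed), that looks like:

      // Declared by interface type, so the wrapper's return type fits.
      private final List<RaceCar> finishingOrder =
              Collections.synchronizedList(new ArrayList<RaceCar>());

      // The callback itself is unchanged; each add() is synchronized by the wrapper.
      public void addFinisher(RaceCar finisher) {
          finishingOrder.add(finisher);
      }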

    Read the article

  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its objects. What I'm struggling with is trying to figure out where that assembly should happen. I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex from an external builder. The downside is that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It also could become very large in the scenario where you need multiple composite DTOs. Alternatively, I could also see it belonging to some form of transfer object assembler where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid. Any thoughts would be greatly appreciated.

    Read the article

  • What is happening in Crockford's object creation technique?

    - by Chris Noe
    There are only 3 lines of code, and yet I'm having trouble fully grasping this: Object.create = function (o) { function F() {} F.prototype = o; return new F(); }; newObject = Object.create(oldObject); (from Prototypal Inheritance) 1) Object.create() starts out by creating an empty function called F. I'm thinking that a function is a kind of object. Where is this F object being stored? Globally, I guess. 2) Next, our oldObject, passed in as o, becomes the prototype of function F. Function (i.e., object) F now "inherits" from our oldObject, in the sense that name resolution will route through it. Good, but I'm curious what the default prototype is for an object - Object? Is that also true for a function-object? 3) Finally, F is instantiated and returned, becoming our newObject. Is the "new" operation strictly necessary here? Doesn't F already provide what we need, or is there a critical difference between function-objects and non-function-objects? Clearly it won't be possible to have a constructor function using this technique. What happens the next time Object.create() is called? Is the global function F overwritten? Surely it is not reused, because that would alter previously configured objects. And what happens if multiple threads call Object.create() - is there any sort of synchronization to prevent race conditions on F?
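
    A small hedged JavaScript illustration of the mechanics being asked about: F is a local variable inside Object.create, so each call creates a fresh function object (nothing global is overwritten or shared between calls), and the returned object finds members through the prototype chain.

      Object.create = function (o) {
          function F() {}        // local: a brand-new F on every call
          F.prototype = o;
          return new F();        // instance whose internal prototype is o
      };

      var oldObject = { greet: function () { return "hi"; } };
      var a = Object.create(oldObject);
      var b = Object.create(oldObject);

      // Both objects delegate to oldObject; neither call disturbed the other.
      console.log(a.greet());                        // "hi"
      console.log(a !== b);                          // true - separate instances
      oldObject.greet = function () { return "hello"; };
      console.log(b.greet());                        // "hello" - lookup goes through the chain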

    Read the article

  • How do I relate two models/tables in Django based on non primary non unique keys?

    - by wizard
    I've got two tables that I need to relate on a single field, po_num. The data is imported from another source, so while I have a little bit of control over what the tables look like, I can't change them too much. What I want to do is relate these two models so I can look up one from the other based on the po_num fields. What I really need to do is join the two tables so I can do a WHERE on a count of the related table. I would like to filter for all Order objects that have 0 related EDI856 objects. I tried adding a foreign key to the Order model and specified the db_column and to_field as po_num, but Django didn't like the fact that Edi856.po_num wasn't unique. Here are the important fields of my current models that let me display, but not filter for, the data that I want. class Edi856(models.Model): po_num = models.CharField(max_length=90, db_index=True ) class Order(models.Model): po_num = models.CharField(max_length=90, db_index=True) def in_edi(self): '''Has the edi been processed?''' return Edi856.objects.filter(po_num = self.po_num).count() Thanks for taking the time to read about my problem. I'm not sure what to do from here.
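
    For the "zero related EDI856 objects" case, one hedged sketch that works without a ForeignKey is to exclude on the shared po_num column; the model names come from the post, the import path is an assumption.

      from myapp.models import Edi856, Order   # app name is an assumption

      edi_po_nums = Edi856.objects.values_list('po_num', flat=True)
      orders_missing_edi = Order.objects.exclude(po_num__in=edi_po_nums)

    The __in lookup against a values_list queryset runs as a subquery, so the database does the join-like work instead of Python.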

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (Worth starting off with a disclaimer: I'm very new to PostGreSQL) I have a django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South),, the tests all pass. However in PostGresQL, I'm getting the following error: IntegrityError: duplicate key value violates unique constraint "business_contact_pkey" Note this happens while unit testing only - the actual page runs fine in both MySQL & PostGresql. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the Postgresql "\d business_contact" & offending tests.py method if they help. No changes made to either DB except the (same) South migrations Thanks first_name | character varying(200) | not null mobile_phone | character varying(100) | surname | character varying(200) | not null business_id | integer | not null created | timestamp with time zone | not null deleted | boolean | not null default false updated | timestamp with time zone | not null slug | character varying(150) | not null phone | character varying(100) | email | character varying(75) | id | integer | not null default nextval('business_contact_id_seq'::regclass) Indexes: "business_contact_pkey" PRIMARY KEY, btree (id) "business_contact_slug_key" UNIQUE, btree (slug) "business_contact_business_id" btree (business_id) Foreign-key constraints: "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED Referenced by: TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED TEST DEF: def test_add_business_contact(self): """ Add a business contact """ contact_slug = 'test-new-contact-added-new-adf' business_id = 1 business = Business.objects.get(id=business_id) postdata = { 'first_name': 'Test', 'surname': 'User', 'business': '1', 'slug': contact_slug, 'email': '[email protected]', 'phone': '12345678', 'mobile_phone': '9823452', 'business': 1, 'business_id': 1, } #Test to ensure contacts that should not exist are not returned contact_not_exists = Contact.objects.filter(slug=contact_slug) self.assertFalse(contact_not_exists) #Add the contact and ensure it is present in the DB afterwards """ contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug) self.client.post(contact_add_url, postdata) added_contact = Contact.objects.filter(slug=contact_slug) print added_contact try: self.assertTrue(added_contact) except: formset = ContactForm(postdata) print formset.errors self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")

    Read the article

  • Dependency Injection into your Singleton

    - by Langali
    I have a singleton that has a Spring-injected DAO (simplified below): public class MyService<T> implements Service<T> { private final Map<String, T> objects; private static MyService instance; MyDao myDao; public void setMyDao(MyDao myDao) { this.myDao = myDao; } private MyService() { this.objects = Collections.synchronizedMap(new HashMap<String, T>()); // start a background thread that runs forever } public static synchronized MyService getInstance() { if(instance == null) { instance = new MyService(); } return instance; } public void doSomething() { myDao.persist(objects); } } My Spring config will probably look like this: <bean id="service" class="MyService" factory-method="getInstance"/> But this will instantiate MyService during startup. Is there a programmatic way to do a dependency injection of MyDao into MyService, but not have Spring manage MyService? Basically I want to be able to do this from my code: MyService.getInstance().doSomething(); while having Spring inject the MyDao for me.
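
    One hedged sketch (bean names invented): keep the factory-method definition but mark it lazy-init and let Spring call the setter. Because getInstance() always returns the same static instance, the DAO that Spring sets is visible to later MyService.getInstance() callers - the context just has to create the bean once (for example via getBean) before those callers run.

      <bean id="service" class="MyService" factory-method="getInstance" lazy-init="true">
          <property name="myDao" ref="myDao"/>
      </bean>

      <!-- MyDaoImpl is a placeholder class name -->
      <bean id="myDao" class="MyDaoImpl"/>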

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place -- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
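
    A rough sketch of the metatable route (Lua 5.1 C API; the GameObject layout and field names are invented for illustration, and error handling is omitted): wrap the C++ pointer in a userdata whose __index/__newindex metamethods route field reads and writes back to the C++ members.

      extern "C" {
      #include <lua.h>
      #include <lauxlib.h>
      }
      #include <cstring>

      struct GameObject { float x, y; };   // placeholder for the engine's real class

      static int gameobject_index(lua_State* L) {
          GameObject* obj = *static_cast<GameObject**>(luaL_checkudata(L, 1, "GameObject"));
          const char* key = luaL_checkstring(L, 2);
          if (std::strcmp(key, "x") == 0)      lua_pushnumber(L, obj->x);
          else if (std::strcmp(key, "y") == 0) lua_pushnumber(L, obj->y);
          else                                 lua_pushnil(L);
          return 1;
      }

      static int gameobject_newindex(lua_State* L) {
          GameObject* obj = *static_cast<GameObject**>(luaL_checkudata(L, 1, "GameObject"));
          const char* key = luaL_checkstring(L, 2);
          lua_Number value = luaL_checknumber(L, 3);
          if (std::strcmp(key, "x") == 0)      obj->x = static_cast<float>(value);
          else if (std::strcmp(key, "y") == 0) obj->y = static_cast<float>(value);
          return 0;
      }

      // Push a userdata wrapping 'obj' onto the Lua stack (call once at spawn time).
      void push_gameobject(lua_State* L, GameObject* obj) {
          GameObject** ud = static_cast<GameObject**>(lua_newuserdata(L, sizeof(GameObject*)));
          *ud = obj;
          if (luaL_newmetatable(L, "GameObject")) {   // first call: fill in the shared metatable
              lua_pushcfunction(L, gameobject_index);
              lua_setfield(L, -2, "__index");
              lua_pushcfunction(L, gameobject_newindex);
              lua_setfield(L, -2, "__newindex");
          }
          lua_setmetatable(L, -2);
      }

    Lua scripts can then read and assign obj.x / obj.y directly while the values stay in the C++ struct the engine iterates over.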

    Read the article

  • Measuring time spent in application / thread

    - by Adamski
    I am writing a simulation in Java whereby objects act under Newtonian physics. An object may have a force applied to it, and the resulting velocity causes it to move across the screen. The nature of the simulation means that objects move in discrete steps depending on the time elapsed between the current and previous iteration of the animation loop, e.g. public void animationLoop() { long prev = System.currentTimeMillis(); long now; while(true) { now = System.currentTimeMillis(); long deltaMillis = now - prev; prev = now; if (deltaMillis > 0) { // Some time has passed for (Mass m : masses) { m.updatePosition(deltaMillis); } // Do all repaints. } } } A problem arises if the animation thread is delayed in some way, causing a large amount of time to elapse (the classic case being under Windows, whereby clicking and holding on minimise / maximise prevents a repaint), which causes objects to move at an alarming rate. My question: Is there a way to determine the time spent in the animation thread rather than the wall-clock time, or can anyone suggest a workaround to avoid this problem? My only thought so far is to constrain deltaMillis by some upper bound.
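
    The upper-bound idea mentioned at the end is usually enough. A hedged sketch of the loop body (reusing the Mass/masses names from the code above; the 50 ms cap is an arbitrary choice, and System.nanoTime() is generally preferred for elapsed-time measurement since it is monotonic):

      private static final long MAX_STEP_MILLIS = 50;   // assumed cap, tune to taste

      // Inside animationLoop(), replacing the body of the while loop:
      long now = System.currentTimeMillis();
      long deltaMillis = now - prev;
      prev = now;

      // Clamp: a stall (minimise/maximise drag, GC pause) advances the
      // simulation by at most MAX_STEP_MILLIS instead of one huge jump.
      long step = Math.min(deltaMillis, MAX_STEP_MILLIS);
      if (step > 0) {
          for (Mass m : masses) {
              m.updatePosition(step);
          }
          // Do all repaints.
      }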

    Read the article

  • Object validator - is this good design?

    - by neo2862
    I'm working on a project where the API methods I write have to return different "views" of domain objects, like this: namespace View.Product { public class SearchResult : View { public string Name { get; set; } public decimal Price { get; set; } } public class Profile : View { public string Name { get; set; } public decimal Price { get; set; } [UseValidationRuleset("FreeText")] public string Description { get; set; } [SuppressValidation] public string Comment { get; set; } } } These are also the arguments of setter methods in the API which have to be validated before storing them in the DB. I wrote an object validator that lets the user define validation rulesets in an XML file and checks if an object conforms to those rules: [Validatable] public class View { [SuppressValidation] public ValidationError[] ValidationErrors { get { return Validator.Validate(this); } } } public static class Validator { private static Dictionary<string, Ruleset> Rulesets; static Validator() { // read rulesets from xml } public static ValidationError[] Validate(object obj) { // check if obj is decorated with ValidatableAttribute // if not, return an empty array (successful validation) // iterate over the properties of obj // - if the property is decorated with SuppressValidationAttribute, // continue // - if it is decorated with UseValidationRulesetAttribute, // use the ruleset specified to call // Validate(object value, string rulesetName, string FieldName) // - otherwise, get the name of the property using reflection and // use that as the ruleset name } private static List<ValidationError> Validate(object obj, string fieldName, string rulesetName) { // check if the ruleset exists, if not, throw exception // call the ruleset's Validate method and return the results } } public class Ruleset { public Type Type { get; set; } public Rule[] Rules { get; set; } public List<ValidationError> Validate(object property, string propertyName) { // check if property is of type Type // if not, throw exception // iterate over the Rules and call their Validate methods // return a list of their return values } } public abstract class Rule { public Type Type { get; protected set; } public abstract ValidationError Validate(object value, string propertyName); } public class StringRegexRule : Rule { public string Regex { get; set; } public StringRegexRule() { Type = typeof(string); } public override ValidationError Validate(object value, string propertyName) { // see if Regex matches value and return // null or a ValidationError } } Phew... Thanks for reading all of this. I've already implemented it and it works nicely, and I'm planning to extend it to validate the contents of IEnumerable fields and other fields that are Validatable. What I'm particularly concerned about is that if no ruleset is specified, the validator tries to use the name of the property as the ruleset name. (If you don't want that behavior, you can use [SuppressValidation].) This makes the code much less cluttered (no need to use [UseValidationRuleset("something")] on every single property) but it somehow doesn't feel right. I can't decide if it's awful or awesome. What do you think? Any suggestions on the other parts of this design are welcome too. I'm not very experienced and I'm grateful for any help. Also, is "Validatable" a good name? To me, it sounds pretty weird but I'm not a native English speaker.

    Read the article

  • How do I create a class repository in Java and do I really need it?

    - by Roman
    I have a large number of objects which are identified by names (strings). So, I would like to have a kind of mapping from object name to class instance. I was told that in this situation I can use a "repository" class which works like this: Server myServer = ServerRepository.getServer("NameOfServer"); So, if there is already an object (server) with the name "NameOfServer", it will be returned by getServer. If such an object does not exist yet, it will be created and returned by getServer. So, my question is how to program such a "repository" class? In this class I have to be able to check if there is an instance of a given class such that it has a given value of a given field. How can I do it? Do I need some kind of loop over all existing objects of a given class? Another part of my question is why I cannot use associative arrays (associative container, map, mapping, dictionary, finite map)? (I am not sure what you call it in Java.) In more detail, I have an "array" which maps names of objects to objects. So, whenever I create a new object, I add a new element to the array: myArray["NameOfServer"] = new Server("NameOfServer").
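
    On the second part of the question: Java's Map is exactly the associative array being described, so the repository can be a thin wrapper around one and no loop over instances is needed. A hedged sketch (Server and its constructor are assumed from the example):

      import java.util.HashMap;
      import java.util.Map;

      public final class ServerRepository {
          private static final Map<String, Server> servers = new HashMap<String, Server>();

          public static synchronized Server getServer(String name) {
              Server server = servers.get(name);
              if (server == null) {
                  server = new Server(name);   // created lazily on first request
                  servers.put(name, server);
              }
              return server;
          }

          private ServerRepository() {}        // no instances; purely a static registry
      }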

    Read the article

  • Django 1.1 template question

    - by Bovril
    Hi All, I'm a little stuck trying to get my head around a django template. I have 2 objects, a cluster and a node I would like a simple page that lists... [Cluster 1] [associated node 1] [associated node 2] [associated node 3] [Cluster 2] [associated node 4] [associated node 5] [associated node 6] I've been using Django for about 2 days so if i've missed the point, please be gentle :) Models - class Node(models.Model): name = models.CharField(max_length=30) description = models.TextField() cluster = models.ForeignKey(Cluster) def __unicode__(self): return self.name class Cluster(models.Model): name = models.CharField(max_length=30) description = models.TextField() def __unicode__(self): return self.name Views - def DSAList(request): clusterlist = Cluster.objects.all() nodelist = Node.objects.all() t = loader.get_template('dsalist.html') v = Context({ 'CLUSTERLIST' : clusterlist, 'NODELIST' : nodelist, }) return HttpResponse(t.render(v)) Template - <body> <TABLE> {% for cluster in CLUSTERLIST %} <tr> <TD>{{ cluster.name }}</TD> {% for node in NODELIST %} {% if node.cluster.id == cluster.id %} <tr> <TD>{{ node.name }}</TD> </tr> {% endif %} {% endfor %} </tr> {% endfor %} </TABLE> </body> Any ideas ?
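
    One hedged simplification (relying on the default node_set reverse accessor of the ForeignKey shown above): iterate the relation directly in the template, so NODELIST and the inner if disappear.

      <TABLE>
      {% for cluster in CLUSTERLIST %}
        <tr><TD>{{ cluster.name }}</TD></tr>
        {% for node in cluster.node_set.all %}
        <tr><TD>{{ node.name }}</TD></tr>
        {% endfor %}
      {% endfor %}
      </TABLE>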

    Read the article

  • IOException during blocking network NIO in JDK 1.7

    - by Bass
    I'm just learning NIO, and here's the short example I've written to test how a blocking NIO can be interrupted: class TestBlockingNio { private static final boolean INTERRUPT_VIA_THREAD_INTERRUPT = true; /** * Prevent the socket from being GC'ed */ static Socket socket; private static SocketChannel connect(final int port) { while (true) { try { final SocketChannel channel = SocketChannel.open(new InetSocketAddress(port)); channel.configureBlocking(true); return channel; } catch (final IOException ioe) { try { Thread.sleep(1000); } catch (final InterruptedException ie) { } continue; } } } private static byte[] newBuffer(final int length) { final byte buffer[] = new byte[length]; for (int i = 0; i < length; i++) { buffer[i] = (byte) 'A'; } return buffer; } public static void main(final String args[]) throws IOException, InterruptedException { final int portNumber = 10000; new Thread("Reader") { public void run() { try { final ServerSocket serverSocket = new ServerSocket(portNumber); socket = serverSocket.accept(); /* * Fully ignore any input from the socket */ } catch (final IOException ioe) { ioe.printStackTrace(); } } }.start(); final SocketChannel channel = connect(portNumber); final Thread main = Thread.currentThread(); final Thread interruptor = new Thread("Inerruptor") { public void run() { System.out.println("Press Enter to interrupt I/O "); while (true) { try { System.in.read(); } catch (final IOException ioe) { ioe.printStackTrace(); } System.out.println("Interrupting..."); if (INTERRUPT_VIA_THREAD_INTERRUPT) { main.interrupt(); } else { try { channel.close(); } catch (final IOException ioe) { System.out.println(ioe.getMessage()); } } } } }; interruptor.setDaemon(true); interruptor.start(); final ByteBuffer buffer = ByteBuffer.allocate(32768); int i = 0; try { while (true) { buffer.clear(); buffer.put(newBuffer(buffer.capacity())); buffer.flip(); channel.write(buffer); System.out.print('X'); if (++i % 80 == 0) { System.out.println(); Thread.sleep(100); } } } catch (final ClosedByInterruptException cbie) { System.out.println("Closed via Thread.interrupt()"); } catch (final AsynchronousCloseException ace) { System.out.println("Closed via Channel.close()"); } } } In the above example, I'm writing to a SocketChannel, but noone is reading from the other side, so eventually the write operation hangs. This example works great when run by JDK-1.6, with the following output: Press Enter to interrupt I/O XXXX Interrupting... Closed via Thread.interrupt() — meaning that only 128k of data was written to the TCP socket's buffer. When run by JDK-1.7 (1.7.0_25-b15 and 1.7.0-u40-b37), however, the very same code bails out with an IOException: Press Enter to interrupt I/O XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXX Exception in thread "main" java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) at com.example.TestBlockingNio.main(TestBlockingNio.java:109) Can anyone explain this different behaviour?

    Read the article

  • Looking for a RESTful or SOAP pipeline between WordPress and InterWoven TeamSite

    - by deanpeters
    I've been Googling my brains out trying to see if there's a simple way to bridge content back and forth between WordPress and TeamSite. I'm coming at this from the perspective of a WordPress developer. I see in the book "The Definitive Guide to Interwoven TeamSite" (http://bit.ly/d3z4wI) mention of objects for the Interwoven LiveSite product: com.interwoven.livesite.external.impl.RSS com.interwoven.livesite.external.impl.SOAP If I understand the above objects correctly, these allow me to instantiate objects of these data types, which, after populating them via various method calls, allow me to render content using com.interwoven.livesite.external.ExternalCall ... but I'm not sure. Nor do I think this approach provides me the 2-way street I seek. As it stands now, from my limited understanding, it appears that the path of least resistance is deploying Interwoven's LiveSite with the existing TeamSite implementation so content can be both consumed and rendered via RSS ... a channel which WordPress can produce and consume; the latter with plugins such as wp-o-matic and/or feedpress. So the question is, does anyone out there have experience with a SOAP or RESTful API approach to Interwoven's TeamSite? If so, can I get some direction on documentation? Or is the addition of LiveSite + RSS the most feasible 2-way channel?

    Read the article

  • Would an immutable keyword in Java be a good idea?

    - by berry120
    Generally speaking, the more I use immutable objects in Java the more I'm thinking they're a great idea. They've got lots of advantages, from automatically being thread-safe to not needing to worry about cloning or copy constructors. This has got me thinking: would an "immutable" keyword go amiss? Obviously there's the disadvantage of adding another reserved word to the language, and I doubt it will actually happen, primarily for the above reason - but ignoring that, I can't really see many disadvantages. At present, great care has to be taken to make sure objects are immutable, and even then a dodgy javadoc comment claiming a component object is immutable when it's in fact not can wreck the whole thing. There's also the argument that even basic objects like String aren't truly immutable because they're easily vulnerable to reflection attacks. If we had an immutable keyword, the compiler could surely recursively check and give an ironclad guarantee that all instances of a class were immutable, something that can't presently be done. Especially with concurrency becoming more and more used, I personally think it'd be good to add a keyword to this effect. But are there any disadvantages or implementation details I'm missing that make this a bad idea?

    Read the article

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would have code such as shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child. My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime because 'this' should not be used in a constructor anyways unless you know what you're doing and don't mind the limitations. Google's C++ Style Guide states that constructors should merely set member variables to their initial values. Any complex initialization should go in an explicit Init() method. This solves the 'this-in-constructor' problem as well as a few others as well. What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute. Are there any idioms out there that solve this problem at any step along the way?
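
    One idiom that addresses the "callers forget Init()" worry is a static factory: make the constructor private so the two-phase setup cannot be skipped, and finish it inside Create(), where shared_from_this() is already legal. A hedged C++ sketch with invented names:

      #include <boost/enable_shared_from_this.hpp>
      #include <boost/shared_ptr.hpp>

      class Parent : public boost::enable_shared_from_this<Parent> {
      public:
          static boost::shared_ptr<Parent> Create() {
              boost::shared_ptr<Parent> parent(new Parent());
              parent->Init();                  // safe: a shared_ptr already owns *parent
              return parent;
          }
      private:
          Parent() {}                          // callers cannot construct directly
          void Init() {
              // create children here, passing shared_from_this() as their parent pointer
          }
      };

    Because the only way to obtain a Parent is through Create(), the assertion-in-every-member-function approach becomes unnecessary.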

    Read the article

  • Trying to sentinel loop this program. [java]

    - by roger34
    Okay, I spent all this time making this for class but I have one thing that I can't quite get: I need this to sentinel loop continuously (exiting upon entering x) so that the System.out.println("What type of Employee? Enter 'o' for Office " + "Clerical, 'f' for Factory, or 's' for Saleperson. Enter 'x' to exit." ); line comes back up after they enter the first round of information. Also, I can't leave this up long on the (very) off chance a classmate might see this and steal the code. Full code following: import java.util.Scanner; public class Project1 { public static void main (String args[]){ Scanner inp = new Scanner( System.in ); double totalPay; System.out.println("What type of Employee? Enter 'o' for Office " + "Clerical, 'f' for Factory, or 's' for Saleperson. Enter 'x' to exit." ); String response= inp.nextLine(); while (!response.toLowerCase().equals("o")&&!response.toLowerCase().equals("f") &&!response.toLowerCase().equals("s")&&!response.toLowerCase().equals("x")){ System.out.print("\nInvalid selection,please enter your choice again:\n"); response=inp.nextLine(); } char choice = response.toLowerCase().charAt( 0 ); switch (choice){ case 'o': System.out.println("Enter your hourly rate:"); double officeRate=inp.nextDouble(); System.out.println("Enter the number of hours worked:"); double officeHours=inp.nextDouble(); totalPay = officeCalc(officeRate,officeHours); taxCalc(totalPay); break; case 'f': System.out.println("How many Widgets did you produce during the week?"); double widgets=inp.nextDouble(); totalPay=factoryCalc(widgets); taxCalc(totalPay); break; case 's': System.out.println("What were your total sales for the week?"); double totalSales=inp.nextDouble(); totalPay=salesCalc(totalSales); taxCalc(totalPay); break; } } public static double taxCalc(double totalPay){ double federal=totalPay*.22; double state =totalPay*.055; double netPay = totalPay - federal - state; federal =federal*Math.pow(10,2); federal =Math.round(federal); federal= federal/Math.pow(10,2); state =state*Math.pow(10,2); state =Math.round(state); state= state/Math.pow(10,2); totalPay =totalPay*Math.pow(10,2); totalPay =Math.round(totalPay); totalPay= totalPay/Math.pow(10,2); netPay =netPay*Math.pow(10,2); netPay =Math.round(netPay); netPay= netPay/Math.pow(10,2); System.out.printf("\nTotal Pay \t: %1$.2f.\n", totalPay); System.out.printf("State W/H \t: %1$.2f.\n", state); System.out.printf("Federal W/H : %1$.2f.\n", federal); System.out.printf("Net Pay \t: %1$.2f.\n", netPay); return totalPay; } public static double officeCalc(double officeRate,double officeHours){ double overtime=0; if (officeHours>=40) overtime = officeHours-40; else overtime = 0; if (officeHours >= 40) officeHours = 40; double otRate = officeRate * 1.5; double totalPay= (officeRate * officeHours) + (otRate*overtime); return totalPay; } public static double factoryCalc(double widgets){ double totalPay=widgets*.35 +300; return totalPay; } public static double salesCalc(double totalSales){ double totalPay = totalSales * .05 + 500; return totalPay; } }

    Read the article

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each person can have multiple Comments attached to them, each with a datetime and a text string. What I ultimately want to do is have the option to filter comments based on dates. class Person(models.Model): name = models.CharField("Name", max_length=30) ## has ~30 other fields, usually filtered on as well class Comment(models.Model): date = models.DateTimeField() person = models.ForeignKey(Person) comment = models.TextField("Comment Text", max_length=1023) What I want to do is get a queryset like Person.objects.filter(comment__date__gt=date(2011,1,1)).order_by('comment__date'), send that queryset to object_list, and be able to see only the comments ordered by date, with only so many objects on a page. E.g., if "Person A" has comments 12/3/11, 1/2/11, 1/5/11, "Person B" has no comments, and person C has a comment on 1/3, I would see: "Person A", 1/2 - comment "Person C", 1/3 - comment "Person A", 1/5 - comment I would strongly prefer not to have to switch to filtering based on Comments.objects.filter(), as that would make me have to largely repeat large sections of code in both the view and the template. Right now, if I execute the query above, I get a queryset returning (PersonA, PersonC, PersonA), but if I try rendering that in a template, each person's comment_set will contain all their comments, even the ones that aren't in the date range. Ideally there would be some sort of functionality where I could expand out a Person queryset's comment_set into a larger queryset that can be sorted and ordered based on the comment and put into an object_list generic view. This normally is fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.
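
    For contrast, a hedged sketch of the Comment-side query the post is trying to avoid; with select_related the Person columns arrive in the same SQL join, which keeps the duplication in the view and template smaller than it first appears.

      from datetime import date

      comments = (Comment.objects
                  .filter(date__gt=date(2011, 1, 1))
                  .select_related('person')
                  .order_by('date'))
      # Each item exposes comment.person.name alongside comment.date and
      # comment.comment, so object_list can paginate this queryset directly.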

    Read the article
