Search Results

Search found 14326 results on 574 pages for 'design by contract'.

Page 207 of 574

  • Patterns to implement this grammar in C# code

    - by MexicanHacker
    Hey guys, I'm creating this little BNF grammar and I want to turn it into C# code:

        <template>  ::= <types> <editors>
        <types>     ::= <type>+
        <type>      ::= <property>+
        <property>  ::= <name> <type>
        <editors>   ::= <editor>+
        <editor>    ::= <name> <type> (<textfield>|<form>|<list>|<pulldown>)+
        <textfield> ::= <label> <property> [<editable>]
        <form>      ::= <label> <property> <editor>
        <list>      ::= <label> <property> <item-editor>
        <pulldown>  ::= <label> <property> <option>+
        <option>    ::= <value>

    One possible solution we have in mind is to create POCOs annotated with the attributes from the XML serialization namespace, for example:

        [XmlRoot("template")]
        public class Template {
            [XmlElement("types")]
            public Types Types { get; set; }
        }

    However, I want to explore more solutions. What do you guys think?
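
    A comparable sketch in Java rather than C#, using JAXB annotations to map the top productions of the grammar onto annotated classes; the class and element names here are hypothetical and only the first few productions are covered:

        import java.util.List;
        import javax.xml.bind.annotation.*;

        // Hypothetical mapping of <template> ::= <types> <editors> onto a root element.
        @XmlRootElement(name = "template")
        @XmlAccessorType(XmlAccessType.FIELD)
        public class Template {

            // <types> ::= <type>+  -> a <types> wrapper holding repeated <type> elements
            @XmlElementWrapper(name = "types")
            @XmlElement(name = "type")
            private List<Type> types;

            // <editors> ::= <editor>+ likewise
            @XmlElementWrapper(name = "editors")
            @XmlElement(name = "editor")
            private List<Editor> editors;

            // <type> ::= <property>+
            @XmlAccessorType(XmlAccessType.FIELD)
            public static class Type {
                @XmlElement(name = "property")
                private List<Property> properties;
            }

            // <property> ::= <name> <type>
            @XmlAccessorType(XmlAccessType.FIELD)
            public static class Property {
                private String name;
                private String type;
            }

            // <editor> ::= <name> <type> (...)+  -- widget productions omitted here
            @XmlAccessorType(XmlAccessType.FIELD)
            public static class Editor {
                private String name;
                private String type;
            }
        }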

    Read the article

  • Object Oriented Programming in AS3

    - by Jordan
    I'm building a game in AS3 that has balls moving and bouncing off the walls. When the user clicks, an explosion appears, and any ball that hits that explosion explodes too. Any ball that then hits that explosion explodes, and so on. My question is: what would be the best class structure for the balls? I have a level system to control levels and such, and I've already come up with working ways to code the balls. Here's what I've done.

    My first attempt was to create a class for Movement, Bounce, Explosion and finally Orb. These all extended each other in the order I just named them. I got it working, but having Bounce extend Movement and Explosion extend Bounce just doesn't seem very object oriented, because what if I wanted to add a Box class that didn't move, but did explode? I would need a separate class for that explosion.

    My second attempt was to create Movement, Bounce and Explosion without extending anything. Instead I passed a reference to the Orb class into each. Each class stores that reference and does what it needs to do based on events dispatched by the Orb, such as update, which is broadcast by the Orb every enter frame. This drives the movement and bounce, and also the explosion when the time comes. This attempt worked as well, but it just doesn't seem right.

    I've also thought about using interfaces, but because they are more of an outline for classes, I feel like code reuse goes out the window, as each class would need its own code for a specific task even if that task is exactly the same. I feel as if I'm searching for some form of multiple inheritance for classes, which AS3 does not support.

    Can someone explain to me a better way of doing what I'm attempting to do? Am I being too "object oriented" by having classes for Movement, Bounce, Explosion and Orb? Are interfaces the way to go? Any feedback is appreciated!
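
    A minimal composition-over-inheritance sketch, written in Java rather than AS3 for brevity; the names are hypothetical, but the idea is that the game object owns independent behaviour objects instead of inheriting a chain of them:

        import java.util.ArrayList;
        import java.util.List;

        // Independent behaviours that any game object can opt into.
        interface Behavior {
            void update(GameObject owner);   // called once per frame
        }

        class Movement implements Behavior {
            public void update(GameObject owner) { /* advance position by velocity */ }
        }

        class Bounce implements Behavior {
            public void update(GameObject owner) { /* reverse velocity at the walls */ }
        }

        class Explosion implements Behavior {
            public void update(GameObject owner) { /* grow and fade the explosion */ }
        }

        // A game object is just a bag of behaviours: an Orb registers all three,
        // a non-moving Box could register only Explosion.
        class GameObject {
            private final List<Behavior> behaviors = new ArrayList<>();

            void add(Behavior b) { behaviors.add(b); }

            void onEnterFrame() {            // driven by the game loop each frame
                for (Behavior b : behaviors) {
                    b.update(this);
                }
            }
        }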

    Read the article

  • In symfony/doctrine's schema.yml, where should I put onDelete: CASCADE for a many-to-many relationship?

    - by nselikoff
    I have a many-to-many relationship defined in my Symfony (using Doctrine) project between Orders and Upgrades (an Order can be associated with zero or more Upgrades, and an Upgrade can apply to zero or more Orders).

        # schema.yml
        Order:
          columns:
            order_id: {...}
          relations:
            Upgrades:
              class: Upgrade
              local: order_id
              foreign: upgrade_id
              refClass: OrderUpgrade

        Upgrade:
          columns:
            upgrade_id: {...}
          relations:
            Orders:
              class: Order
              local: upgrade_id
              foreign: order_id
              refClass: OrderUpgrade

        OrderUpgrade:
          columns:
            order_id: {...}
            upgrade_id: {...}

    I want to set up delete cascade behavior so that if I delete an Order or an Upgrade, all of the related OrderUpgrades are deleted. Where do I put onDelete: CASCADE? Usually I would put it at the end of the relations section, but that would seem to imply in this case that deleting Orders would cascade to delete Upgrades. Is Symfony + Doctrine smart enough to know what I'm wanting if I put onDelete: CASCADE in the above relations sections of schema.yml?

    Read the article

  • What do you call functions which get and set?

    - by nickf
    The jQuery framework has a lot of functions which will either retrieve or mutate values depending on the parameters passed:

        $(this).html();       // get the html
        $(this).html('blah'); // set the html

    Is there a standard name for functions which behave like this?
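
    One informal name that comes up is a combined (or overloaded) getter/setter, often paired with a fluent interface since the setter form returns the object for chaining. A rough Java sketch of the same behaviour, with a hypothetical Element class:

        // A jQuery-style accessor: no argument reads the value, an argument writes it
        // and returns this so calls can be chained.
        class Element {
            private String html = "";

            String html() {              // getter form
                return html;
            }

            Element html(String value) { // setter form, chainable
                this.html = value;
                return this;
            }
        }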

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up into the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items), I am exceeding the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side -- this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues.

    For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications -- .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.
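
    A language-agnostic sketch (written in Java here rather than C#) of the paged-result wrapper the author is considering, so the client pulls one bounded page at a time instead of one huge message; all names are hypothetical:

        import java.util.List;

        // A page of results plus just enough metadata for the client to keep paging.
        class PagedResult<T> {
            private final List<T> items;
            private final int pageIndex;
            private final int pageSize;
            private final long totalCount;

            PagedResult(List<T> items, int pageIndex, int pageSize, long totalCount) {
                this.items = items;
                this.pageIndex = pageIndex;
                this.pageSize = pageSize;
                this.totalCount = totalCount;
            }

            List<T> getItems()    { return items; }
            boolean hasNext()     { return (long) (pageIndex + 1) * pageSize < totalCount; }
            boolean hasPrevious() { return pageIndex > 0; }
        }

        // Service contract sketch: the caller asks for page N of a bounded size,
        // keeping each response comfortably under the transport's message limit.
        interface OrderService {
            PagedResult<String> getOrders(int pageIndex, int pageSize);
        }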

    Read the article

  • Uses of the Apache commons-proxy library?

    - by Adrian
    I'm looking at the Apache commons-proxy library to implement some Proxy patterns in my current project. The Javadocs are all very well, but I'd really like to see some tutorials or just a project that uses the library so I can get more of a feel for it. Alas, searching for such things just tends to net you a lot of pages about setting up HTTP proxies using Apache. So I'm hoping that people here can help me.
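
    Not commons-proxy itself, but a small sketch of the same Proxy-pattern idea using the JDK's built-in java.lang.reflect.Proxy, which can help build a feel for what such a library wraps; the Greeter interface and the logging handler are made up for illustration:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Proxy;

        public class ProxyDemo {
            // A hypothetical service interface to proxy.
            interface Greeter {
                String greet(String name);
            }

            public static void main(String[] args) {
                Greeter real = name -> "Hello, " + name;

                // Interceptor that logs around every call to the real object.
                InvocationHandler logging = (proxy, method, methodArgs) -> {
                    System.out.println("calling " + method.getName());
                    Object result = method.invoke(real, methodArgs);
                    System.out.println("returned " + result);
                    return result;
                };

                Greeter proxied = (Greeter) Proxy.newProxyInstance(
                        Greeter.class.getClassLoader(),
                        new Class<?>[] { Greeter.class },
                        logging);

                proxied.greet("world");
            }
        }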

    Read the article

  • Coding without grok. Is it wrong?

    - by OldCurmudgeon
    Should a professional programmer allow themselves to write code without completely understanding the requirements? In my 30+ years as a programmer I must have written many thousands of lines of code without completely understanding what was required of me at the time. In most cases the results were rubbish that should have been discarded. Every other industry that employs professionals has systems to root out such behavior. Ours does not. Is this right?

    Read the article

  • What should the layers in a .NET application be? Please guide me

    - by haansi
    Hi, I am using a layered architecture in .NET (mostly I work on web projects). I am confused about which layers I should use. My rough idea is that there should be the following layers:

    - user interface
    - customer types (custom entities)
    - business logic layer
    - data access layer

    My purpose is to ensure quality of work and maximum re-usability of code. Someone suggested adding a common types layer to it. Please guide me: what should the layers be, and what should go in each layer? Thanks for your precious time and advice. haansi
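
    A minimal sketch of that layering, written in Java for illustration; the classes are hypothetical, and the point is only the direction of the dependencies (UI depends on business logic, business logic depends on a data access abstraction, entities are shared across layers):

        // Shared entity / custom types layer.
        class Customer {
            final int id;
            final String name;
            Customer(int id, String name) { this.id = id; this.name = name; }
        }

        // Data access layer: only talks to storage, exposed as an interface.
        interface CustomerRepository {
            Customer findById(int id);
        }

        // Business logic layer: rules live here; storage details stay behind the interface.
        class CustomerService {
            private final CustomerRepository repository;
            CustomerService(CustomerRepository repository) { this.repository = repository; }

            String displayNameFor(int id) {
                Customer c = repository.findById(id);
                return c != null ? c.name : "(unknown)";
            }
        }

        // UI layer: calls the service, never the repository directly.
        class CustomerPage {
            private final CustomerService service;
            CustomerPage(CustomerService service) { this.service = service; }
            void render(int id) { System.out.println(service.displayNameFor(id)); }
        }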

    Read the article

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried it these ways so far:

    1) Make a hash with the source IP/port and destination IP/port as keys. Each position in the hash is a list of packets. The hash is then saved in a file, with each flow separated by some special characters/line. Problem: not enough memory for large traces.

    2) Make a hash with the same key as above, but only keep in memory the file handles. Each packet is then put into the hash[key] that points to the right file. Problems: too many flows/files (~200k), and it might run out of memory as well.

    3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because I opened too many at the same time. Problems: WAY too slow, same number of files as 2, so also impractical.

    4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: suppose I have a 60 MB trace that has 200k flows. This way, I would process, say, a 60 MB file 200k times. Maybe removing the packets as I iterate would make it not so painful, but so far I'm not sure this would be a good solution.

    5) Split them by IP source/destination and then create a single file for each one, separating the flows by special characters. Still too many files (+50k).

    Right now I'm using Ruby to do it, which might've been a bad idea, I guess. Currently I've filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything in memory as described in 1) using C#/Java/C++, but I was wondering if there wouldn't be a better approach here, especially since I might also run out of memory later on even with a more efficient language if I have to use larger traces.

    In summary, the problem I'm facing is that I either have too many files or that I run out of memory. I've also tried searching for some tool to filter the info, but I don't think there is one. The ones I've found only return some statistics and wouldn't scan for every flow as I need.
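
    A sketch in Java of a middle ground between approaches 2 and 3: hash each packet to a per-flow file, but keep only a bounded LRU cache of open writers, so neither memory nor the number of open file handles blows up; the flow key extraction is stubbed out and all names are hypothetical:

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.Writer;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class FlowSplitter {
            private static final int MAX_OPEN_FILES = 256;

            // LRU cache of open writers: evicting an entry just closes the file;
            // it is reopened in append mode if that flow shows up again later.
            private final Map<String, Writer> openWriters =
                new LinkedHashMap<String, Writer>(MAX_OPEN_FILES, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, Writer> eldest) {
                        if (size() > MAX_OPEN_FILES) {
                            try { eldest.getValue().close(); } catch (IOException ignored) {}
                            return true;
                        }
                        return false;
                    }
                };

            void append(String flowKey, String packetLine) throws IOException {
                Writer w = openWriters.get(flowKey);
                if (w == null) {
                    w = new FileWriter("flow-" + flowKey + ".txt", true); // append mode
                    openWriters.put(flowKey, w);
                }
                w.write(packetLine);
                w.write('\n');
            }

            // Hypothetical flow key: source/destination IP and port of the packet.
            static String flowKey(String srcIp, int srcPort, String dstIp, int dstPort) {
                return srcIp + "_" + srcPort + "_" + dstIp + "_" + dstPort;
            }
        }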

    Read the article

  • How extensible should code actually be?

    - by griegs
    I've just started a new job and one of the things my new boss talked to me about was code longevity. I've always coded to make my code infinitely extensible and adaptable. I figured that if someone was going to change my code in the future then it should be easy to do. But I never really had a clear idea of how far into the future that should be. So my new boss told me not to bother coding for anything more than 3 years into the future, and his reasoning was that technology changes, programs expire, etc. At first I was kinda taken aback and thought he was a whack job, but the longer I think about it the more I'm warming to the concept. Does anyone else have an opinion on how far into the future you should code for?

    Read the article

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information on the files we receive from customers. Each file has detail records, which constitute my FACT table, CUST_DETAIL. In the main process, a file goes through several stages, and each stage tags a status to it. Long story short, I have a many-to-many relationship. Any ideas around star schema dimensional modeling? A customer record only belongs to a single file, and a file can have multiple statuses.

        FACT
        ----
        CustID
        FileID
        AmountDue

        DIM_FILE
        --------
        FileID
        FileName
        DateReceived

        FILE_STATUS
        -----------
        FileID
        StatusDateTime
        StatusCode

    Read the article

  • Should we denormalize the database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key.

    We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.Net batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

    - 1 timestamp
    - 1 measurement
    - 8 vector values

    which means that we are actually doing 10x more single table inserts: 1500-2000 per second.

    If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
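
    For comparison, a sketch of what the denormalized version amounts to, written as plain JDBC in Java rather than NHibernate: one batched insert per measurement, with the eight vector values as dedicated columns; table and column names are hypothetical:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.sql.Timestamp;
        import java.util.List;

        public class MeasurementWriter {
            // Hypothetical denormalized row: timestamp plus eight dedicated vector
            // columns, instead of one parent row and eight child rows.
            static final String INSERT_SQL =
                "INSERT INTO measurement (device_id, taken_at, v1, v2, v3, v4, v5, v6, v7, v8) "
                + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";

            record Measurement(long deviceId, Timestamp takenAt, double[] vectors) {}

            void writeBatch(Connection conn, List<Measurement> batch) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
                    for (Measurement m : batch) {
                        ps.setLong(1, m.deviceId());
                        ps.setTimestamp(2, m.takenAt());
                        for (int i = 0; i < 8; i++) {
                            ps.setDouble(3 + i, m.vectors()[i]); // one row carries all 8 values
                        }
                        ps.addBatch();       // queue the row
                    }
                    ps.executeBatch();       // one round trip for the whole batch
                }
            }
        }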

    Read the article

  • Two different tables or just one with bool column?

    - by Aidas
    We have two tables: OriginalDocument and ProcessedDocument. In the first one we put an original, not yet processed document. After it's validated and processed (converted to our XML format and parsed), it's put into the Document table. A processed document can be valid or invalid. Which makes more sense: have two different tables for valid and invalid documents, or just have one with a 'Valid' column? Some of the columns (~5-7) are irrelevant for an invalid document. Storing both invalid and valid documents would also leave the Document table filled with NULL columns (if a document is invalid, information like document number and receiver can be unknown). What else should we consider and weigh when making this decision?

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness and an affront to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling and other optimization techniques that are now considered unnecessary.

    What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.
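
    A toy example (in Java, with hypothetical method names) of the kind of O(n*n) vs O(n) decision the author means; picking the second up front is hardly "premature" when n can be large:

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class Duplicates {
            // O(n^2): compares every pair of items.
            static boolean hasDuplicateQuadratic(List<String> items) {
                for (int i = 0; i < items.size(); i++) {
                    for (int j = i + 1; j < items.size(); j++) {
                        if (items.get(i).equals(items.get(j))) {
                            return true;
                        }
                    }
                }
                return false;
            }

            // O(n): one pass with a hash set.
            static boolean hasDuplicateLinear(List<String> items) {
                Set<String> seen = new HashSet<>();
                for (String item : items) {
                    if (!seen.add(item)) {   // add() returns false if already present
                        return true;
                    }
                }
                return false;
            }
        }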

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about the "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

        DataSource
        ----------
        Id (PK)
        Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages"; an example for audio would be "bit rate"; an example for video would be "duration". Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way:

        DataSourceAttribute
        -------------------
        Id (PK)
        DataSourceId (FK)
        Name
        Value

    Thus, I would have records like these:

        DataSource->Id = 1
        DataSource->Filename = 'mydoc.pdf'

        DataSource->Id = 2
        DataSource->Filename = 'mysong.mp3'

        DataSource->Id = 3
        DataSource->Filename = 'myvideo.avi'

        DataSourceAttribute->Id = 1
        DataSourceAttribute->DataSourceId = 1
        DataSourceAttribute->Name = 'TotalPages'
        DataSourceAttribute->Value = '10'

        DataSourceAttribute->Id = 2
        DataSourceAttribute->DataSourceId = 2
        DataSourceAttribute->Name = 'BitRate'
        DataSourceAttribute->Value = '16'

        DataSourceAttribute->Id = 3
        DataSourceAttribute->DataSourceId = 3
        DataSourceAttribute->Name = 'Duration'
        DataSourceAttribute->Value = '1:32'

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

        Filename,          TotalPages
        'mydoc.pdf',       '10'
        'myotherdoc.pdf',  '23'
        ...

    The JOINs needed to produce the above result are just too costly. How should I address this problem?

    Read the article

  • Pattern/Matcher in Java?

    - by user1007059
    I have a certain text in Java, and I want to use Pattern and Matcher to extract something from it. This is my program:

        public String getItemsByType(String text, String start, String end) {
            String patternHolder;
            StringBuffer itemLines = new StringBuffer();

            patternHolder = start + ".*" + end;
            Pattern pattern = Pattern.compile(patternHolder);
            Matcher matcher = pattern.matcher(text);

            while (matcher.find()) {
                itemLines.append(text.substring(matcher.start(), matcher.end()) + "\n");
            }
            return itemLines.toString();
        }

    This code works fully WHEN the searched text is on the same line, for instance:

        String text = "My name is John and I am 18 years Old";
        getItemsByType(text, "My", "John");

    immediately grabs the text "My name is John" out of the text. However, when my text looks like this:

        String text = "My name\nis John\nand I'm\n18 years\nold";
        getItemsByType(text, "My", "John");

    it doesn't grab anything, since "My" and "John" are on different lines. How do I solve this?
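
    One likely fix, sketched below: by default '.' does not match line terminators, so compiling the pattern with Pattern.DOTALL (or prefixing it with (?s)) lets the match span the embedded newlines; the reluctant ".*?" is also used so the match stops at the first occurrence of the end string:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class MultilineMatch {
            static String getItemsByType(String text, String start, String end) {
                StringBuilder itemLines = new StringBuilder();
                // DOTALL makes '.' match newline characters too, so the match can
                // start on one line and end on another.
                Pattern pattern = Pattern.compile(start + ".*?" + end, Pattern.DOTALL);
                Matcher matcher = pattern.matcher(text);
                while (matcher.find()) {
                    itemLines.append(matcher.group()).append('\n');
                }
                return itemLines.toString();
            }

            public static void main(String[] args) {
                String text = "My name\nis John\nand I'm\n18 years\nold";
                // Prints the span from "My" through "John", newlines included.
                System.out.print(getItemsByType(text, "My", "John"));
            }
        }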

    Read the article

  • Use of (non) qualified names

    - by AProgrammer
    If I want to use the name baz defined in package foo|bar|quz, I have several choices:

    - provide fbq as a short name for foo|bar|quz and use fbq|baz
    - use foo|bar|quz|baz
    - import baz from foo|bar|quz|baz and then use baz (or an alias given in the import process)
    - import all public symbols from foo|bar|quz|baz and then use baz

    For the languages I know, my perception is that the best practice is to use the first two ways (I'll use one or the other depending on the specific package's full name and the number of symbols I need from it). I'd use the third only in a language which doesn't provide the first, and hunt for supporting tools to write the import statements. And in my opinion the fourth should be reserved for packages designed with that kind of import in mind, for instance if all exported symbols start with a prefix or contain the name of the package.

    My questions:

    - What is, in your opinion, the best practice for your favorite languages?
    - What would you suggest in a new language?
    - What would you suggest in an old language adding such a feature?

    Read the article

  • Database Modeling - Either/Or in Many-to-Many

    - by EkoostikMartin
    I have an either/or type of situation in a many-to-many relationship I'm trying to model. So I have these tables:

        Message
        -------
        *MessageID
        MessageText

        Employee
        --------
        *EmployeeID
        EmployeeName

        Team
        ----
        *TeamID
        TeamName

        MessageTarget
        -------------
        MessageID
        EmployeeID (nullable)
        TeamID (nullable)

    So, a Message can have either a list of Employees or a list of Teams as a MessageTarget. Is the MessageTarget table I have above the best way to implement this relationship? What constraints can I place on the MessageTarget effectively? How should I create a primary key on the MessageTarget table?

    Read the article

  • SEO with image link alt text vs standard text-based link

    - by Infiniti Fizz
    Hi, I'm currently developing a website and the main navigation is made up of image links, because the font used for them isn't standard. My client's only worry is: will this mess up search engine optimization? Can I just add alt text to the images, like "link 1", or use the name attribute of the anchor tag? Or would it be better to just have the navigation as anchor tags with the names of the links in them, like <a href="...">link 1</a>? I'm new to SEO, so I really don't know which to suggest to him. Thanks for your time, InfinitiFizz

    Read the article

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.) For example, let's consider the classic Address value object. If you needed to change "123 Main St" to "123 Main Street", why should I need to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing?

    EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to value objects?" Sorry for the confusion!

    EDIT: To clarify, I am not asking about CLR value types (vs reference types). I'm asking about the higher-level DDD concept of value objects. For example, here is a hack-ish way to implement immutable value types for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?
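
    A minimal sketch of an immutable Address value object, written in Java with hypothetical fields: "changing" a line means producing a new value, and because the object can never change after construction it is safe to share between entities, safe to use as a map key, and two addresses with the same fields are simply equal:

        import java.util.Objects;

        // Immutable value object: all fields final, no setters, equality by value.
        public final class Address {
            private final String line1;
            private final String city;
            private final String postalCode;

            public Address(String line1, String city, String postalCode) {
                this.line1 = line1;
                this.city = city;
                this.postalCode = postalCode;
            }

            // "Changing" the address yields a new value instead of mutating this one.
            public Address withLine1(String newLine1) {
                return new Address(newLine1, city, postalCode);
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Address)) return false;
                Address other = (Address) o;
                return line1.equals(other.line1)
                    && city.equals(other.city)
                    && postalCode.equals(other.postalCode);
            }

            @Override
            public int hashCode() {
                return Objects.hash(line1, city, postalCode);
            }
        }

    With that in place, something like myCustomer.setAddress(myCustomer.getAddress().withLine1("123 Main Street")) swaps the whole value in one step, so a shared Address can never be observed half-edited.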

    Read the article

  • Java library class to handle scheduled execution of "callbacks"?

    - by Hanno Fietz
    My program has a component - dubbed the Scheduler - that lets other components register points in time at which they want to be called back. This should work much like the Unix cron service, i.e. you tell the Scheduler "notify me at ten minutes past every full hour". I realize there are no real callbacks in Java. Here's my approach; is there a library which already does this stuff? Feel free to suggest improvements, too.

    A register call to the Scheduler passes:

    - a time specification containing hour, minute, second, year, month, dom, dow, where each item may be unspecified, meaning "execute it every hour / minute etc." (just like crontabs)
    - an object containing data that will tell the calling object what to do when it is notified by the Scheduler. The Scheduler does not process this data, just stores it and passes it back upon notification.
    - a reference to the calling object

    Upon startup, or after a new registration request, the Scheduler starts with a Calendar object of the current system time and checks if there are any entries in the database that match this point in time. If there are, they are executed and the process starts over. If there aren't, the time in the Calendar object is incremented by one second and the entries are rechecked. This repeats until there is one entry or more that match(es). (Discrete Event Simulation.) The Scheduler will then remember that timestamp, sleep, and wake every second to check if it is already there. If it happens to wake up and the time has already passed, it starts over; likewise if the time has come and the jobs have been executed.

    Edit: Thanks for pointing me to Quartz. I'm looking for something much smaller, however.
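
    A minimal sketch of the same idea built on java.util.concurrent without Quartz: a single ScheduledExecutorService ticks once per second and fires every registered entry whose (possibly wildcarded) fields match the current time; the TimeSpec and Callback types are hypothetical and only a few crontab fields are shown:

        import java.time.LocalDateTime;
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class Scheduler {
            // null means "every value", like a * field in a crontab.
            public record TimeSpec(Integer hour, Integer minute, Integer second) {
                boolean matches(LocalDateTime now) {
                    return (hour == null || hour == now.getHour())
                        && (minute == null || minute == now.getMinute())
                        && (second == null || second == now.getSecond());
                }
            }

            public interface Callback {
                void fire(Object userData);   // userData is passed back untouched
            }

            private record Entry(TimeSpec spec, Callback callback, Object userData) {}

            private final List<Entry> entries = new CopyOnWriteArrayList<>();
            private final ScheduledExecutorService ticker =
                Executors.newSingleThreadScheduledExecutor();

            public void register(TimeSpec spec, Callback callback, Object userData) {
                entries.add(new Entry(spec, callback, userData));
            }

            public void start() {
                // Wake once per second and fire whatever matches "now".
                ticker.scheduleAtFixedRate(() -> {
                    LocalDateTime now = LocalDateTime.now();
                    for (Entry e : entries) {
                        if (e.spec().matches(now)) {
                            e.callback().fire(e.userData());
                        }
                    }
                }, 0, 1, TimeUnit.SECONDS);
            }
        }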

    Read the article

  • Am I abusing Policies?

    - by pmr
    I find myself using policies a lot in my code, and usually I'm very happy with that. But from time to time I find myself confronted with using that pattern in situations where the policies are selected at runtime, and I have developed habits to work around such situations. Usually I start with something like this:

        class DrawArrays {
        protected:
            void sendDraw() const;
        };

        class DrawElements {
        protected:
            void sendDraw() const;
        };

        template<class Policy>
        class Vertices : public Policy {
            using Policy::sendDraw;
        public:
            void render() const;
        };

    When the policy is picked at runtime, I have different choices of working around the situation.

    Different code paths:

        if (drawElements) {
            Vertices<DrawElements> vertices;
        } else {
            Vertices<DrawArrays> vertices;
        }

    Inheritance and virtual calls:

        class PureVertices {
        public:
            virtual void render() = 0;
        };

        template<class Policy>
        class Vertices : public PureVertices, public Policy {
            //..
        };

    Both solutions feel wrong to me. The first creates an unmaintainable mess and the second introduces the overhead of virtual calls that I tried to avoid by using policies in the first place. Am I missing the proper solutions or do I use the wrong pattern to solve the problem?

    Read the article
