Search Results

Search found 40386 results on 1616 pages for 'object design'.


  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried these approaches so far:

    1) Make a hash with the source IP/port and destination IP/port as keys, where each entry holds a list of packets. The hash is then saved to a file, with each flow separated by some special characters/line. Problem: not enough memory for large traces.

    2) Make a hash with the same key as above, but keep only file handles in memory. Each packet is then written to the file that hash[key] points to. Problems: too many flows/files (~200k), and it might run out of memory as well.

    3) Hash the source IP/port and destination IP/port, then write the info to a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because too many are open at the same time. Problems: WAY too slow, and the same number of files as 2, so also impractical.

    4) Make a hash of the source IP/port pairs and then iterate over the whole trace once per flow, taking the packets that are part of that flow and placing them in the output file. Problem: suppose I have a 60 MB trace with 200k flows; this way I would process the 60 MB file 200k times. Maybe removing the packets as I iterate would make it less painful, but so far I'm not sure this would be a good solution.

    5) Split them by source/destination IP and create a single file for each pair, separating the flows by special characters. Still too many files (50k+).

    Right now I'm using Ruby to do it, which might've been a bad idea, I guess. I've already filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything into memory as described in 1) using C#/Java/C++, but I was wondering if there isn't a better approach, especially since I might run out of memory later on even with a more efficient language if I have to use larger traces. In summary, the problem I'm facing is that I either have too many files or I run out of memory. I've also tried searching for a tool to filter the info, but I don't think there is one; the ones I've found only return statistics and wouldn't scan for every flow as I need.
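    A common way out of the "too many files or too little memory" bind is two-pass bucketing: hash each flow key into a fixed, small number of bucket files (say 256), then load one bucket at a time and finish the grouping in memory. Every flow lands wholly inside one bucket, each bucket holds only ~1/256th of the trace, and at most 256 files are ever open. Below is a minimal C# sketch of the idea; the Packet record, its FlowKey property, and the WritePacket/ReadPackets helpers are hypothetical stand-ins for whatever the tshark-filtered trace format provides.

      using System.Collections.Generic;
      using System.IO;
      using System.Linq;

      // Hypothetical packet record; in practice it wraps the fields
      // extracted from the trace with tshark.
      record Packet(string SrcIp, int SrcPort, string DstIp, int DstPort, byte[] Raw)
      {
          // Flow key: both endpoints, order-normalized so the two
          // directions of a connection map to the same flow.
          public string FlowKey
          {
              get
              {
                  var a = $"{SrcIp}:{SrcPort}";
                  var b = $"{DstIp}:{DstPort}";
                  return string.CompareOrdinal(a, b) <= 0 ? $"{a}-{b}" : $"{b}-{a}";
              }
          }
      }

      static class FlowGrouper
      {
          const int BucketCount = 256; // bounded number of open files

          // Pass 1: spill each packet into one of BucketCount files,
          // chosen by hashing its flow key.
          public static void Spill(IEnumerable<Packet> packets, string dir)
          {
              var writers = Enumerable.Range(0, BucketCount)
                  .Select(i => new BinaryWriter(File.Create(Path.Combine(dir, $"bucket{i:D3}"))))
                  .ToArray();
              foreach (var p in packets)
              {
                  int bucket = (p.FlowKey.GetHashCode() & 0x7fffffff) % BucketCount;
                  WritePacket(writers[bucket], p); // hypothetical serializer
              }
              foreach (var w in writers) w.Dispose();
          }

          // Pass 2: group one bucket fully in memory; every flow is
          // complete within its bucket, so no cross-file merging is needed.
          public static IEnumerable<IGrouping<string, Packet>> GroupBucket(string bucketFile) =>
              ReadPackets(bucketFile).GroupBy(p => p.FlowKey);

          static void WritePacket(BinaryWriter w, Packet p) { /* serialize fields */ }
          static IEnumerable<Packet> ReadPackets(string file) { yield break; /* deserialize */ }
      }

    If a single bucket were still too big, the same trick recurses: rehash that bucket into another set of files. The approach also works from Ruby; the language mostly changes the constant factor, not the memory profile.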

    Read the article

  • Modeling Tools that understand both Relational and LDAP

    - by jm04469
    I am looking to do some modeling and would like a tool that can not only capture a relational model, as ERwin does, but also let us easily port the model to LDAP. NOTE: Visio can connect to an existing LDAP server and draw it, but unlike its relational features it does not let you model first and then deploy.

    Read the article

  • How to perform a literal match with regex using wildcard

    - by kashif4u
    I am trying to perform a literal match with a regular expression using wildcards.

      string utterance = "Show me customer id 19";
      string pattern1 = "*tom*";
      string pattern2 = "*customer id [0-9]*";

    Desired results:

      if (Regex.IsMatch(utterance, pattern1)) { /* MATCH NOT FOUND */ }
      if (Regex.IsMatch(utterance, pattern2)) { /* MATCH FOUND */ }

    I have tried looking for a literal-match solution/syntax for wildcards but am having difficulty. Could you also enlighten me with an example of a possible pattern-matching-strength algorithm, i.e. if the match scores 90, select the record? Note: I have a table with 100,000 records to perform literal matches against user utterances. Thanks in advance.
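    For what it's worth, a bare * is not a valid regex quantifier, so the patterns above throw an ArgumentException as written. One way to get the desired results is to translate the wildcard pattern into a regex: each * becomes .*, and each literal segment is anchored to a word boundary, since "customer" already contains the substring "tom" and a plain substring translation would therefore report a match for pattern 1. A minimal sketch under those assumptions (literal segments are kept verbatim, so the embedded [0-9] class still works, but any other regex metacharacters in them would need escaping):

      using System;
      using System.Linq;
      using System.Text.RegularExpressions;

      class WildcardDemo
      {
          // Translate a '*' wildcard pattern into a regex. Each '*' becomes
          // ".*"; each literal segment is kept verbatim but anchored to a
          // word boundary, so "*tom*" does not match the "tom" hiding
          // inside "customer".
          static string WildcardToRegex(string pattern)
          {
              var segments = pattern.Split('*')
                  .Select(s => s.Length == 0 ? "" : @"\b" + s);
              return "^" + string.Join(".*", segments) + "$";
          }

          static void Main()
          {
              string utterance = "Show me customer id 19";
              Console.WriteLine(Regex.IsMatch(utterance, WildcardToRegex("*tom*")));               // False
              Console.WriteLine(Regex.IsMatch(utterance, WildcardToRegex("*customer id [0-9]*"))); // True
          }
      }

    For the match-strength part of the question, a string-similarity score such as Levenshtein edit distance (normalized to a percentage) is the usual starting point: score each candidate record and select those above a threshold like 90.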

    Read the article

  • Android ignoring my setWidth() and setHeight()

    - by popoffka
    So, why does this code:

      package org.popoffka.apicross;

      import android.app.Activity;
      import android.os.Bundle;
      import android.widget.Button;

      public class Game extends Activity {
          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              Button testButton = new Button(this);
              testButton.setBackgroundResource(R.drawable.cell);
              testButton.setWidth(20);
              testButton.setHeight(20);
              setContentView(testButton);
          }
      }

    ...produce this thing: http://i42.tinypic.com/2hgdzme.png even though there's a setWidth(20) and setHeight(20) in the code? (R.drawable.cell is actually a 20x20 PNG image containing a white cell with a silver border.)

    Read the article

  • Strategy Pattern with Type Reflection affecting Performance?

    - by Aurélien Ribon
    Hello! I am building graphs. A graph consists of nodes linked to each other with links (indeed, my dear). To assign a given behavior to each node, I implemented the strategy pattern:

      class Node
      {
          public BaseNodeBehavior Behavior { get; set; }
      }

    As a result, in many parts of the application I am extensively using type checks to know which behavior a node has:

      if (node.Behavior is NodeDataOutputBehavior)
          workOnOutputNode(node);
      ...

    My graph can reach thousands of nodes. Do these type checks significantly affect performance? Should I use something other than the strategy pattern? I'm using strategy because I need behavior inheritance. Basically, a behavior can be Data or Operator; a Data behavior can be IO, Const or Intermediate; and an IO behavior can be Input or Output. So if I use an enumeration, I won't be able to test whether a node's behavior is of the Data kind; I'd have to test for [Input, Output, Const or Intermediate]. And if I later want to add another behavior of the Data kind, I'm stuck: every data-testing method would need to change.
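    The `is` checks themselves are cheap runtime type tests, not reflection, so thousands of nodes are unlikely to be the bottleneck; the bigger cost is that the kind logic is scattered across the application. One way to keep the strategy pattern and drop the cascades is to push both the kind tests and the work into the behavior hierarchy itself, so adding a new Data-kind behavior touches only the new class. A minimal sketch along those lines; the class names follow the question, the rest is illustrative:

      using System;

      // Kind tests live on the base class, so callers never chain "is" checks.
      abstract class BaseNodeBehavior
      {
          public virtual bool IsData => false;
          // Each behavior knows what to do with its node (double dispatch).
          public abstract void Work(Node node);
      }

      abstract class DataBehavior : BaseNodeBehavior
      {
          public override bool IsData => true; // inherited by IO/Const/Intermediate
      }

      abstract class IOBehavior : DataBehavior { }

      class NodeDataOutputBehavior : IOBehavior
      {
          public override void Work(Node node) =>
              Console.WriteLine($"processing output node {node.Name}");
      }

      class Node
      {
          public string Name { get; set; } = "";
          public BaseNodeBehavior Behavior { get; set; }
      }

      class Demo
      {
          static void Main()
          {
              var n = new Node { Name = "n1", Behavior = new NodeDataOutputBehavior() };
              n.Behavior.Work(n);                   // replaces: if (n.Behavior is NodeDataOutputBehavior) ...
              Console.WriteLine(n.Behavior.IsData); // True, without enumerating subkinds
          }
      }

    With this shape, workOnOutputNode becomes the override in NodeDataOutputBehavior, and "is this node Data?" is one virtual property instead of a list of subkinds that grows with every new behavior.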

    Read the article

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information about the files we receive from customers. Each file has detail records, which constitute my fact table, CUST_DETAIL. In the main process, a file goes through several stages, and each stage tags a status onto it. Long story short, I have a many-to-many relationship. Any ideas around star-schema dimensional modeling? A customer record belongs to a single file, and a file can have multiple statuses.

      FACT
      ----
      CustID
      FileID
      AmountDue

      DIM_FILE
      --------
      FileID
      FileName
      DateReceived

      FILE_STATUS
      -----------
      FileID
      StatusDateTime
      StatusCode

    Read the article

  • Coding without Grok. Is it wrong?

    - by OldCurmudgeon
    Should a professional programmer allow themselves to write code without completely understanding the requirements? In my 30+ years as a programmer I must have written many thousands of lines of code without completely understanding what was required of me at the time. In most cases the results were rubbish that should have been discarded. Every other industry that employs professionals has systems to root out such behavior. Ours does not. Is this right?

    Read the article

  • Table with a lot of attributes

    - by Robert
    Hi, I'm planning a database project. One of the tables has a lot of attributes. My question is: is it better to split the entity into two separate tables, or to put all of the attributes into one table? Below is an example (pseudo-DDL):

      create table User (id, name, surname, ...
                         show_name, show_photos, ...)

    or

      create table User (id, name, surname, ...)
      create table UserPrivacy (usr_id, show_name, show_photos, ...)

    The performance, I suppose, is similar, since I can use an index.

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

      public class Measurement
      {
          public virtual Guid Id { get; private set; }
          public virtual Device Device { get; private set; }
          public virtual Timestamp Timestamp { get; private set; }
          public virtual IList<VectorValue> Vectors { get; private set; }
      }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates, but it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

      1 timestamp
      1 measurement
      8 vector values

    which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
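    For comparison, the denormalized variant being considered would collapse the timestamp and vector rows into the measurement row itself, so one insert replaces ten. A minimal sketch of what that entity might look like (the V1..V8 column names are placeholders, DateTime stands in for the Timestamp entity, and the NHibernate mapping is omitted):

      using System;

      public class Device { } // stand-in for the Device entity from the question

      // Denormalized measurement: the timestamp and the 8 vector values
      // become plain columns, so one row insert replaces the 10 inserts
      // currently issued per measurement.
      public class FlatMeasurement
      {
          public virtual Guid Id { get; private set; }
          public virtual Device Device { get; private set; }
          public virtual DateTime Timestamp { get; private set; }
          // One column per vector value (the count is fixed at 8 for the prototype).
          public virtual double V1 { get; private set; }
          public virtual double V2 { get; private set; }
          public virtual double V3 { get; private set; }
          public virtual double V4 { get; private set; }
          public virtual double V5 { get; private set; }
          public virtual double V6 { get; private set; }
          public virtual double V7 { get; private set; }
          public virtual double V8 { get; private set; }
      }

    The trade-off is the usual one: a fixed vector count baked into the schema and a wider row, in exchange for a tenth of the insert statements.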

    Read the article

  • Do's and don'ts for writing MySQL queries

    - by nik
    One thing I always wonder while writing a query is whether I am writing the most optimized query or not. I know certain things like:

    1) using SELECT field1, field2 instead of SELECT *
    2) giving proper indexes to the tables

    but I am sure there are more things that should be kept in mind when writing queries, since most databases only grow larger, and an optimal query helps greatly with execution time. Can you share some tips and tricks on writing queries?

    Read the article

  • Returning large collections from a WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize into the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items) I am exceeding the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize, so that I can implement paging on the client side; this seems like a lot of code. The other option is to jack up maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data-processing applications: .NET console apps, and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and pushing it back up to the service.
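    The paging contract described above doesn't have to be a lot of code. A minimal sketch of one shape it might take (the names and the Widget type are illustrative, not an established WCF pattern):

      using System.Collections.Generic;
      using System.Runtime.Serialization;
      using System.ServiceModel;

      // One page of results plus enough metadata for the client to ask
      // for the next page; each response stays well under maxMessageSize.
      [DataContract]
      public class PagedResult<T>
      {
          [DataMember] public List<T> Items { get; set; } = new List<T>();
          [DataMember] public int PageNumber { get; set; }
          [DataMember] public int PageSize { get; set; }
          [DataMember] public int TotalCount { get; set; }
          [DataMember] public bool HasNext { get; set; }
          [DataMember] public bool HasPrevious { get; set; }
      }

      [DataContract]
      public class Widget
      {
          [DataMember] public int Id { get; set; }
          [DataMember] public string Name { get; set; } = "";
      }

      [ServiceContract]
      public interface IWidgetService
      {
          // Clients page through results instead of pulling one huge list.
          [OperationContract]
          PagedResult<Widget> GetWidgets(int pageNumber, int pageSize);
      }

    UI clients fetch a page at a time and stay responsive; console clients can simply loop until HasNext is false, which keeps every individual message comfortably under the 64k default.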

    Read the article

  • Is a "factory" method the right pattern?

    - by jdt141
    Hey all, I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher-level container class. The problem I'm dealing with at the moment is that the higher-level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...

      /*
       * class1 and class2 derive from the same superclass
       */
      class Container {
      public:
          boost::shared_ptr<ComposedClass1> class1;
          boost::shared_ptr<ComposedClass2> class2;
      private:
          ...
      };

      /*
       * Constructor - builds the objects that we need in this container.
       */
      Container::Container(some params)
      {
          class1.reset(new ComposedClass1(...));
          class2.reset(new ComposedClass2(...));
      }

    What I really need is to make this container class more reusable. By hard-coding the member objects and instantiating them, it basically isn't, and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?). Are there other ways around this problem? It seems like someone should have solved it before... Thanks!
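    Dependency injection is the other classic way out: instead of the container newing up its members, the caller supplies them, or supplies factories that build them. A minimal sketch of the shape, shown in C# to match the other sketches on this page (the C++ analogue would take std::function or abstract-factory arguments):

      using System;

      abstract class ComposedBase { } // the shared superclass
      class ComposedClass1 : ComposedBase { }
      class ComposedClass2 : ComposedBase { }

      class Container
      {
          public ComposedClass1 Class1 { get; }
          public ComposedClass2 Class2 { get; }

          // Members are built by supplied factory delegates rather than
          // hard-coded, so the container is reusable with any variants.
          public Container(Func<ComposedClass1> makeClass1,
                           Func<ComposedClass2> makeClass2)
          {
              Class1 = makeClass1();
              Class2 = makeClass2();
          }
      }

      class Demo
      {
          static void Main()
          {
              var c = new Container(() => new ComposedClass1(),
                                    () => new ComposedClass2());
              Console.WriteLine(c.Class1 is ComposedBase); // True
          }
      }

    The container then works with any variants of the composed classes and can be constructed as many times as needed.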

    Read the article

  • Two different tables or just one with bool column?

    - by Aidas
    We have two tables: OriginalDocument and ProcessedDocument. In the first one we put an original, not-yet-processed document. After it's validated and processed (converted to our XML format and parsed), it's put into the ProcessedDocument table. A processed document can be valid or invalid. Which makes more sense: having two different tables for valid and invalid documents, or just one with a 'Valid' column? Some of the columns (~5-7) are irrelevant for an invalid document, so storing valid and invalid documents together would also fill the table with NULL columns (if a document is invalid, information like document number and receiver can be unknown). What else should we consider and weigh when making this decision?

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzzword of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a proactive goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and an affront to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.

    Read the article

  • Database Modeling - Either/Or in Many-to-Many

    - by EkoostikMartin
    I have an either/or type of situation in a many-to-many relationship I'm trying to model. So I have these tables:

      Message
      ----
      *MessageID
      MessageText

      Employee
      ----
      *EmployeeID
      EmployeeName

      Team
      ----
      *TeamID
      TeamName

      MessageTarget
      ----
      MessageID
      EmployeeID (nullable)
      TeamID (nullable)

    So a Message can have either a list of Employees or a list of Teams as its MessageTarget. Is the MessageTarget table I have above the best way to implement this relationship? What constraints can I place on MessageTarget effectively? How should I create a primary key on the MessageTarget table?

    Read the article

  • In symfony/doctrine's schema.yml, where should I put onDelete: CASCADE for a many-to-many relationship?

    - by nselikoff
    I have a many-to-many relationship defined in my Symfony (using Doctrine) project between Orders and Upgrades (an Order can be associated with zero or more Upgrades, and an Upgrade can apply to zero or more Orders).

      # schema.yml
      Order:
        columns:
          order_id: {...}
        relations:
          Upgrades:
            class: Upgrade
            local: order_id
            foreign: upgrade_id
            refClass: OrderUpgrade

      Upgrade:
        columns:
          upgrade_id: {...}
        relations:
          Orders:
            class: Order
            local: upgrade_id
            foreign: order_id
            refClass: OrderUpgrade

      OrderUpgrade:
        columns:
          order_id: {...}
          upgrade_id: {...}

    I want to set up delete-cascade behavior so that if I delete an Order or an Upgrade, all of the related OrderUpgrades are deleted. Where do I put onDelete: CASCADE? Usually I would put it at the end of the relations section, but that would seem to imply in this case that deleting Orders would cascade to delete Upgrades. Is Symfony + Doctrine smart enough to know what I want if I put onDelete: CASCADE in the above relations sections of schema.yml?

    Read the article

  • How extensible should code actually be?

    - by griegs
    I've just started a new job, and one of the things my new boss talked to me about was code longevity. I've always coded to make my code infinitely extensible and adaptable. I figured that if someone was going to change my code in the future, it should be easy to do. But I never really had a clear idea of how far into the future that should be. So my new boss told me not to bother coding for anything more than three years into the future; his reasoning was that technology changes, programs expire, etc. At first I was kind of taken aback and thought he was a whack job, but the longer I think about it, the more I'm warming to the concept. Does anyone else have an opinion on how far into the future you should code for?

    Read the article

  • Has anyone ever encountered a Monad Transformer in the wild?

    - by martingw
    In my area of business, back-office IT for a financial institution, it is very common for a software component to carry a global configuration around, to log its progress, and to have some kind of error handling / computation short-circuiting: things that can be modelled nicely by the Reader, Writer, and Maybe monads and the like in Haskell, and composed together with monad transformers. But there seem to be some drawbacks: the concept behind monad transformers is quite tricky and hard to understand, monad transformers lead to very complex type signatures, and they inflict some performance penalty. So I'm wondering: are monad transformers best practice when dealing with the common tasks mentioned above?

    Read the article

  • Uses of the Apache commons-proxy library?

    - by Adrian
    I'm looking at the Apache commons-proxy library to implement some Proxy patterns in my current project. The Javadocs are all very well, but I'd really like to see some tutorials or just a project that uses the library so I can get more of a feel for it. Alas, searching for such things just tends to net you a lot of pages about setting up HTTP proxies using Apache. So I'm hoping that people here can help me.

    Read the article
