Search Results

Search found 17940 results on 718 pages for 'algorithm design'.

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in doubt about how to separate and design the models. There are several points I'd like suggestions for: which models to define, and how to define them. Currently my idea is to define: a Core (domain) model; Repositories that get data into that domain model from a database or other store; a Business logic model that contains business logic, validation logic and more specific forms of data retrieval; View models prepared for specifically formatted data output that would be parsed by views of different kinds (web, Silverlight, etc.). For the first model I'm puzzled about what to use and how to define it. Should these model entities contain collections, and in what form: IList, IEnumerable or IQueryable? I'm thinking of immutable collections, which IEnumerable is, but I'd like to avoid huge data collections and to offer my business logic layer access with LINQ expressions so that query trees get executed at the data level and retrieve only the data that is really required, for situations like retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousand bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some repository method or even business model method. Should it be IQueryable so that I actually pass my queries to the repository all the way from the business logic model layer? Should I just avoid collections in my domain model? Should I avoid only some collections? Should I separate the domain model and the business logic model or integrate them? Data would be dealt with through repositories which would use domain model classes. Should repositories be used directly, using the domain model classes only as data containers? This is an example of what I had in mind. My domain objects would look like (e.g.): public class Item { public string ItemName { get; set; } public int Price { get; set; } public bool Available { get; set; } private IList<Bid> _bids; public IQueryable<Bid> Bids { get { return _bids.AsQueryable(); } private set { _bids = value; } } public void AddNewBid(Bid newBid) { _bids.Add(new Bid {.... } } Where Bid would be defined as a normal class. Repositories would be defined as data retrieval factories and used to get data into another (business logic) model, which would again be used to get data into view models, which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to get flexibility and to minimize the data retrieved from the real data store. Or should I make the domain model "anemic", with pure data store entities, and define all collections in the business logic model? One of the most important questions is where to have IQueryable-typed collections: all the way from repositories to the business model, or not at all, exposing only solid IList and IEnumerable from repositories and dealing with more specific queries inside the business model, but with finer-grained methods for data retrieval within the repositories. So, what do you think? Have any suggestions?

    Read the article

  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories -- it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory when launching the application. Our system can have plugins, and we allow the plugin authors to access the Category tree, but they should not modify the cached items or the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here is some demo code: public interface ITreeNode<T> where T : ITreeNode<T> { // No setter T Parent { get; } IEnumerable<T> ChildNodes { get; } } // This class is generated by an O/R mapping tool (e.g. Entity Framework) public class Category : EntityObject { public string Name { get; set; } } // Because Category is not stateless, I create a cleaner view class for Category. // This class is the node type of the Category tree public class CategoryView : ITreeNode<CategoryView> { public string Name { get; private set; } #region ITreeNode Members public CategoryView Parent { get; private set; } private List<CategoryView> _childNodes; public IEnumerable<CategoryView> ChildNodes { get { return _childNodes; } } #endregion public static CategoryView CreateFrom(Category category) { // here I can set the CategoryView.Name property } } So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be readonly. We are not able to do this with the above readonly ITreeNode, so I want ITreeNode to be like this: public interface ITreeNode<T> { // has setter T Parent { get; set; } // use ICollection<T> instead of IEnumerable<T> ICollection<T> ChildNodes { get; } } But if we make ITreeNode writable, then we cannot make the Category tree readonly, which is not good. So I wonder if we can do it like this: public interface ITreeNode<T> { T Parent { get; } IEnumerable<T> ChildNodes { get; } } public interface IWritableTreeNode<T> : ITreeNode<T> { new T Parent { get; set; } new ICollection<T> ChildNodes { get; } } Is this good or bad? Are there better designs? Thanks a lot! :)
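
    The same split between a mutable node owned by the system and a read-only view handed to plugins can be sketched in Python; this is only a rough illustration, and the class names here are made up rather than taken from the question:

        class TreeNode:
            """Mutable node that the system builds and refreshes."""
            def __init__(self, name, parent=None):
                self.name, self.parent, self.children = name, parent, []
                if parent is not None:
                    parent.children.append(self)

        class ReadOnlyNode:
            """Thin read-only view: exposes no mutating members at all."""
            def __init__(self, node):
                self._node = node
            @property
            def name(self):
                return self._node.name
            @property
            def parent(self):
                return ReadOnlyNode(self._node.parent) if self._node.parent else None
            @property
            def children(self):
                return tuple(ReadOnlyNode(c) for c in self._node.children)

        root = TreeNode("root"); TreeNode("books", root)
        view = ReadOnlyNode(root)
        print([c.name for c in view.children])   # ['books'] -- plugins only ever see the view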

    Read the article

  • Signals and threads - good or bad design decision?

    - by Jens
    I have to write a program that performs highly computationally intensive calculations. The program might run for several days. The calculation can be separated easily into different threads without the need for shared data. I want a GUI or a web service that informs me of the current status. My current design uses BOOST::signals2 and BOOST::thread. It compiles and so far works as expected. If a thread finishes one iteration and new data is available, it calls a signal which is connected to a slot in the GUI class. My question(s): Is this combination of signals and threads a wise idea? In another forum somebody advised someone else not to "go down this road". Are there potential deadly pitfalls nearby that I failed to see? Is my expectation realistic that it will be "easy" to use my GUI class to provide a web interface or a Qt, a VTK or whatever window? Is there a more clever alternative (like other boost libs) that I overlooked? The following code compiles with g++ -Wall -o main -lboost_thread-mt <filename>.cpp code follows: #include <boost/signals2.hpp> #include <boost/thread.hpp> #include <boost/bind.hpp> #include <iostream> #include <iterator> #include <sstream> #include <string> using std::cout; using std::cerr; using std::string; /** * Called when a CalcThread finished a new bunch of data. */ boost::signals2::signal<void(string)> signal_new_data; /** * The whole data will be stored here. */ class DataCollector { typedef boost::mutex::scoped_lock scoped_lock; boost::mutex mutex; public: /** * Called by CalcThreads to store their data. */ void push(const string &s, const string &caller_name) { scoped_lock lock(mutex); _data.push_back(s); signal_new_data(caller_name); } /** * Output everything collected so far to std::cout. */ void out() { typedef std::vector<string>::const_iterator iter; for (iter i = _data.begin(); i != _data.end(); ++i) cout << " " << *i << "\n"; } private: std::vector<string> _data; }; /** * Several of those can calculate stuff. * No data sharing needed. */ struct CalcThread { CalcThread(string name, DataCollector &datcol) : _name(name), _datcol(datcol) { } /** * Expensive algorithms will be implemented here. * @param num_results how many data sets are to be calculated by this thread. */ void operator()(int num_results) { for (int i = 1; i <= num_results; ++i) { std::stringstream s; s << "["; if (i == num_results) s << "LAST "; s << "DATA " << i << " from thread " << _name << "]"; _datcol.push(s.str(), _name); } } private: string _name; DataCollector &_datcol; }; /** * Maybe some VTK or QT or both will be used someday. */ class GuiClass { public: GuiClass(DataCollector &datcol) : _datcol(datcol) { } /** * If the GUI wants to present or at least count the data collected so far. * @param caller_name is the name of the thread whose data is new. */ void slot_data_changed(string caller_name) const { cout << "GuiClass knows: new data from " << caller_name << std::endl; } private: DataCollector & _datcol; }; int main() { DataCollector datcol; GuiClass mc(datcol); signal_new_data.connect(boost::bind(&GuiClass::slot_data_changed, &mc, _1)); CalcThread r1("A", datcol), r2("B", datcol), r3("C", datcol), r4("D", datcol), r5("E", datcol); boost::thread t1(r1, 3); boost::thread t2(r2, 1); boost::thread t3(r3, 2); boost::thread t4(r4, 2); boost::thread t5(r5, 3); t1.join(); t2.join(); t3.join(); t4.join(); t5.join(); datcol.out(); cout << "\nDone" << std::endl; return 0; }
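
    For comparison, a common alternative to invoking GUI slots directly from worker threads is a thread-safe queue that workers post to and the GUI/event-loop thread drains on its own schedule. A rough Python sketch of that pattern (an illustration only, not a translation of the Boost code above; names are invented):

        import queue, threading

        updates = queue.Queue()                 # thread-safe channel; workers never touch the GUI

        def calc_thread(name, num_results):
            for i in range(1, num_results + 1):
                # ... expensive work here ...
                updates.put((name, f"DATA {i}"))
            updates.put((name, "LAST"))

        def gui_poll():
            """Called periodically from the GUI/event-loop thread."""
            while True:
                try:
                    name, payload = updates.get_nowait()
                except queue.Empty:
                    break
                print(f"new data from {name}: {payload}")

        workers = [threading.Thread(target=calc_thread, args=(n, 3)) for n in "ABC"]
        for w in workers: w.start()
        for w in workers: w.join()
        gui_poll()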

    Read the article

  • Is there a way to put relationships/constraints into CSS?

    - by hekevintran
    In every design tool or art principle I've heard of, relationships are a central theme. By relationships I mean the thing you can do in Adobe Illustrator to specify that the height of one shape is equal to half the height of another. You cannot express this information in CSS. CSS hard-codes all values. Using a language like LESS that allows variables and arithmetic you can get closer to relationships, but it's still a CSS variant. This inability is, in my mind, the biggest problem with CSS. CSS is supposed to be a language that describes the visual component of a Web page, but it ignores relationships and constraints, ideas that are at the core of art. How feasible is it to imagine a new Web design language that can express relationships and constraints and that can be implemented in JavaScript using the current CSS properties?

    Read the article

  • how to avoid returning mocks from a mocked object list

    - by koen
    I'm trying out mock/responsibility-driven design. I seem to have problems avoiding returning mocks from mocks in the case of finder objects. An example could be an object that checks whether the bills from last month are paid. It needs a service that retrieves a list of bills for that, so I need to mock that service. At the same time I need that mock to return mocked Bills (since I don't want my test to rely on the correctness of the Bill implementation). Is my design flawed? Is there a better way to test this? Or is this the way it will need to be when using finder objects (the finding of the bills in this case)?
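
    One way to keep the mocking shallow, sketched here in Python with unittest.mock, is to stub the finder service so it returns plain value objects instead of further mocks. The Bill and service names below are only illustrative, not taken from the question's code:

        import unittest
        from unittest.mock import Mock
        from dataclasses import dataclass

        @dataclass
        class Bill:                      # simple value object; nothing here is worth mocking
            amount: int
            paid: bool

        def all_last_month_bills_paid(service):
            return all(b.paid for b in service.bills_for_last_month())

        class LastMonthBillsTest(unittest.TestCase):
            def test_unpaid_bill_is_detected(self):
                service = Mock()
                # the mock service returns real, trivially constructed Bills
                service.bills_for_last_month.return_value = [Bill(100, True), Bill(50, False)]
                self.assertFalse(all_last_month_bills_paid(service))

        if __name__ == "__main__":
            unittest.main()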

    Read the article

  • Why implement DB connection pointer object as a reference counting pointer? (C++)

    - by DVK
    At our company one of the core C++ classes (a database connection pointer) is implemented as a reference counting pointer. To be clear, the objects are NOT DB connections themselves, but pointers to a DB connection object. The library is very old, and nobody who designed it is around anymore. So far, neither I nor any of the C++ experts in the company that I asked have come up with a good reason why this particular design was chosen. Any ideas? It is introducing some problems (partially due to the awful reference-counting pointer implementation used), and I'm trying to understand whether this design actually has some deep underlying reasons. The usage pattern these days seems to be that the DB connection pointer object is returned by a DB connection manager class, and it's somewhat unclear whether DB connection pointers were designed to be usable independently of the DB connection manager.

    Read the article

  • Plan variable and call dependencies

    - by Gerenuk
    I'd like to write down the design of my program to understand the dependencies and calls better. I know there are class diagrams which show inheritance and attribute variables. However, I'd also like to document the input parameters to methods, and in particular which calls each method executes inside (e.g. on the input parameters). Also, sometimes it might be useful to show how actual objects are connected (if there is a standard structure). This way I can have a better understanding of the modules and the design before starting to program. Can you suggest a method to do this software design? It should map one-to-one to the programming code structure so that I really notice all the quirks beforehand (instead of a high-level design where things are hard to implement without further work). Maybe some special diagram or tool, or a combination? It is static dependency and call design rather than time-dependent execution monitoring. (I use Python, if you have any specialized recommendations.)
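
    Since the target language is Python, one lightweight way to keep the documentation one-to-one with the code is to extract the per-method call lists straight from the source with the ast module and feed that into whatever diagram tool you prefer. A rough sketch (the sample Simulator class is invented purely for illustration):

        import ast

        def method_calls(source):
            """Map 'Class.method' -> set of names called inside that method."""
            tree, result = ast.parse(source), {}
            for cls in [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]:
                for fn in [n for n in cls.body if isinstance(n, ast.FunctionDef)]:
                    calls = set()
                    for node in ast.walk(fn):
                        if isinstance(node, ast.Call):
                            f = node.func
                            calls.add(f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", "?"))
                    result[f"{cls.name}.{fn.name}"] = calls
            return result

        src = """
        class Simulator:
            def step(self, queue):
                item = queue.pop()
                self.process(item)
            def process(self, item):
                print(item)
        """
        print(method_calls(src))   # {'Simulator.step': {'pop', 'process'}, 'Simulator.process': {'print'}}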

    Read the article

  • Sending object C from class A to class B

    - by user278618
    Hi, I can't figure out how to design the classes in my system. In class A I create a selenium object (it simulates user actions on a website). In this class A I also create other objects like SearchScreen, PaymentScreen and SummaryScreen. # -*- coding: utf-8 -*- from selenium import selenium import unittest, time, re class OurSiteTestCases(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 5555, "*chrome", "http://www.someaddress.com/") time.sleep(5) self.selenium.start() def test_buy_coffee(self): sel = self.selenium sel.open('/') sel.window_maximize() search_screen=SearchScreen(self.selenium) search_screen.choose('lavazza') payment_screen=PaymentScreen(self.selenium) payment_screen.fill_test_data() summary_screen=SummaryScreen(self.selenium) summary_screen.accept() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() Here is an example SearchScreen module: class SearchScreen: def __init__(self,selenium): self.selenium=selenium def search(self): self.selenium.click('css=button.search') I want to know whether the design of these classes is OK.
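
    One common way to pass the shared selenium handle to every screen class is a small base class that all page/screen objects inherit from (the Page Object pattern). A loose Python sketch, with locators and class names invented here rather than taken from the question:

        class Screen:
            """Base page object: holds the shared selenium handle."""
            def __init__(self, selenium):
                self.selenium = selenium

        class SearchScreen(Screen):
            def choose(self, term):
                self.selenium.type('css=input.search', term)
                self.selenium.click('css=button.search')

        class PaymentScreen(Screen):
            def fill_test_data(self):
                self.selenium.type('css=input.card-number', '4111111111111111')

        # in the test case:
        #   search_screen = SearchScreen(self.selenium)
        #   search_screen.choose('lavazza')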

    Read the article

  • Writing Great Software

    - by 01010011
    Hi, I'm currently reading Head First's Object Oriented Analysis and Design. The book states that to write great software (i.e. software that is well-designed, well-coded, easy to maintain, reuse, and extend) you need to do three things: first, make sure the software does everything the customer wants it to do; once step 1 is completed, apply object-oriented principles and techniques to eliminate any duplicate code that might have slipped in; once steps 1 and 2 are complete, apply design patterns to make sure the software is maintainable and reusable for years to come. My question is, do you follow these steps when developing great software? If not, what steps do you usually follow in order to ensure it's well designed, well-coded, and easy to maintain, reuse and extend?

    Read the article

  • What is the right way to implement communication between java objects?

    - by imoschak
    I'm working on an academic project which simulates a rather large queuing procedure in Java. The core of the simulator rests within one package where there are 8 classes, each one implementing a single concept. Every class in the project follows SRP. These classes encapsulate the behavior of the simulator and inter-connect every other class in the project. The problem that has arisen is that most of these 8 classes are, as is logical I think, tightly coupled, and each one has to have working knowledge of every other class in this package in order to be able to call methods on it when needed. The application needs only one instance of each class, so it might be better to create static fields for each class in a new class and use that to make calls, instead of keeping a reference in each class to every other class in the package (which I'm certain is incorrect). But is this considered a correct design solution, or is there perhaps a design pattern that better suits my needs?
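
    One pattern often reached for in this situation is a mediator or a simple event bus: each class subscribes to the events it cares about instead of holding references to the other seven. A rough Python sketch of the idea (the simulator class names here are hypothetical, not from the project):

        class EventBus:
            """Minimal mediator: classes publish and subscribe instead of referencing each other."""
            def __init__(self):
                self._handlers = {}
            def subscribe(self, topic, handler):
                self._handlers.setdefault(topic, []).append(handler)
            def publish(self, topic, payload):
                for handler in self._handlers.get(topic, []):
                    handler(payload)

        class ArrivalGenerator:
            def __init__(self, bus):
                self.bus = bus
            def tick(self):
                self.bus.publish("customer_arrived", {"id": 1})

        class QueueManager:
            def __init__(self, bus):
                self.queue = []
                bus.subscribe("customer_arrived", lambda c: self.queue.append(c))

        bus = EventBus()
        arrivals, queue_mgr = ArrivalGenerator(bus), QueueManager(bus)
        arrivals.tick()
        print(queue_mgr.queue)   # [{'id': 1}]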

    Read the article

  • Best and easiest algorithm to search for a vertex on a Graph?

    - by Nazgulled
    Hi, After implementing most of the common and needed functions for my Graph implementation, I realized that a couple of functions (remove vertex, search vertex and get vertex) don't have the "best" implementation. I'm using adjacency lists with linked lists for my Graph implementation, and I was searching one vertex after the other until I found the one I wanted. Like I said, I realized I was not using the "best" implementation. I can have 10000 vertices and need to search for the last one, but that vertex could have a link to the first one, which would speed things up considerably. But that's just a hypothetical case; it may or may not happen. So, what algorithm do you recommend for the search lookup? Our teachers talked mostly about breadth-first and depth-first (and Dijkstra's algorithm, but that's a completely different subject). Between those two, which one do you recommend? It would be perfect if I could implement both, but I don't have time for that; I need to pick one and implement it, as the first phase deadline is approaching... My guess is to go with depth-first: it seems easier to implement and, looking at the way they work, it seems the best bet. But that really depends on the input. What do you guys suggest?
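
    For reference, both traversals are only a few lines each over an adjacency-list graph; a Python sketch, assuming the graph is represented as a dict mapping each vertex to a list of its neighbours:

        from collections import deque

        def bfs_find(graph, start, target):
            seen, frontier = {start}, deque([start])
            while frontier:
                v = frontier.popleft()
                if v == target:
                    return True
                for w in graph.get(v, []):
                    if w not in seen:
                        seen.add(w)
                        frontier.append(w)
            return False

        def dfs_find(graph, start, target, seen=None):
            seen = seen if seen is not None else set()
            if start == target:
                return True
            seen.add(start)
            return any(dfs_find(graph, w, target, seen) for w in graph.get(start, []) if w not in seen)

        g = {0: [1, 2], 1: [3], 2: [3], 3: []}
        print(bfs_find(g, 0, 3), dfs_find(g, 0, 3))   # True True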

    Read the article

  • What algorithm should I follow to retrieve data in the prescribed format?

    - by Prateek
    I have to retrieve data from a database whose tables consist of fields named "ttc, rm, atc and lta". These values are stored on a daily basis at 15 minute intervals, like From_time to_time ttc rm atc lta 00:00 00:15 45 10 35 25, 00:15 00:30 35 10 25 25 and so on. These values are stored for every day of every month, and I want them displayed in the prescribed format, so what algorithm should I follow? I am confused about how to do the comparisons for a format like the one mentioned below. The format is at this link https://drive.google.com/a/itbhu.ac.in/file/d/0B_J0Ljq64i4Za2J1V0lvbDZ4eGc/edit?usp=sharing To be specific once again, my question is: I have to prepare a report from the retrieved data, which is stored in the database as explained above. The report will cover an entire month. So there may be cases where, for two particular days, the value of "ttc" is the same for some time, and I want those intervals to be listed together (as shown in the format). And the confusing part is that any of the values "ttc", "rm", "atc", "lta" can be the same for any particular interval. So what algorithm should I follow for such comparisons? If anything about the question is still unclear, please ask. Thanks
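
    The "list identical consecutive intervals together" part can be sketched with itertools.groupby in Python; the rows below are illustrative (from_time, to_time, ttc) tuples standing in for the real query result, and the same grouping can be repeated per field:

        from itertools import groupby

        rows = [("00:00", "00:15", 45), ("00:15", "00:30", 45), ("00:30", "00:45", 35)]

        # Group consecutive rows that share the same ttc and report one merged span each.
        for ttc, grp in groupby(rows, key=lambda r: r[2]):
            grp = list(grp)
            print(grp[0][0], "-", grp[-1][1], "ttc =", ttc)
        # 00:00 - 00:30 ttc = 45
        # 00:30 - 00:45 ttc = 35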

    Read the article

  • What is this algorithm for finding the maximum path on a Directed Acyclic Graph called?

    - by Martín Fixman
    For some time now, I've been using an algorithm that runs in O(V + E) for finding the maximum path on a Directed Acyclic Graph from point A to point B. It consists of doing a flood fill to find which nodes are reachable from node A, and how many "parents" (edges that come from other nodes) each node has. Then I do a BFS, but only "activating" a node once I have already used all of its "parents". queue <int> a int paths[] ; //Number of paths that go to node i int edge[][] ; //Edges of a int mpath[] ; //max path from 0 to i (without counting the weight of i) int weight[] ; //weight of each node mpath[0] = 0 a.push(0) while not empty(a) for i in edge[a] paths[i] += 1 a.push(i) while not empty(a) for i in children[a] mpath[i] = max(mpath[i], mpath[a] + weight[a]) ; paths[i] -= 1 ; if paths[i] = 0 a.push(i) ; Is there any special name for this algorithm? I described it to an informatics professor and he just called it "Maximum Path on a DAG", but it doesn't sound good when you say "I solved the first problem with a Fenwick Tree, the second with Dijkstra, and the third with Maximum Path".
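
    A cleaned-up Python sketch of the same two-phase idea (count the incoming edges among nodes reachable from the source, then relax each node only once all of its reachable parents are done); the dict-based graph format and node weights here are illustrative assumptions:

        from collections import deque

        def max_path(children, weight, source):
            """Longest source-to-anywhere path on a DAG with node weights; O(V + E).
            children: dict node -> list of child nodes."""
            # 1. flood fill: which nodes are reachable, and how many reachable parents each has
            parents = {source: 0}
            stack = [source]
            while stack:
                u = stack.pop()
                for v in children.get(u, []):
                    parents[v] = parents.get(v, 0) + 1
                    if parents[v] == 1:
                        stack.append(v)
            # 2. process a node only after all of its reachable parents have been used
            best = {source: 0}
            ready = deque([source])
            while ready:
                u = ready.popleft()
                for v in children.get(u, []):
                    best[v] = max(best.get(v, float("-inf")), best[u] + weight[u])
                    parents[v] -= 1
                    if parents[v] == 0:
                        ready.append(v)
            return best

        g = {0: [1, 2], 1: [3], 2: [3], 3: []}
        print(max_path(g, {0: 1, 1: 5, 2: 2, 3: 1}, 0))   # {0: 0, 1: 1, 2: 1, 3: 6}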

    Read the article

  • OPTICS Clustering algorithm. How to get the best epsilon

    - by Marco Galassi
    I am implementing a project which needs to cluster geographical points. The OPTICS algorithm seems to be a very nice solution. It needs just 2 parameters as input (MinPts and Epsilon), which are, respectively, the minimum number of points needed to consider them a cluster, and the distance value used to decide whether two points can be placed in the same cluster. My problem is that, due to the extreme variety of the points, I can't set a fixed epsilon. Just look at the image below. The same point structure at a different scale would give a very different result. Suppose I set MinPts=2 and epsilon = 1 km. On the left, the algorithm would create 2 clusters (red and blue), but on the right it would create one single cluster containing all of the points (red), whereas I would like to obtain 2 clusters even on the right. So my question is: is there any way to calculate the epsilon value dynamically to get this result? Thank you very much and excuse me for my poor English. Marco
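
    One commonly used heuristic (a sketch, not part of OPTICS itself) is to derive epsilon from the data's own scale, for instance from the distribution of each point's distance to its k-th nearest neighbour, so that rescaling all coordinates rescales epsilon with them. The helper below is hypothetical and assumes numpy is available:

        import numpy as np

        def epsilon_from_knn(points, k=2, quantile=0.9):
            """points: (n, 2) array of coordinates. Returns a data-driven epsilon."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # pairwise distances
            d.sort(axis=1)
            kth = d[:, k]              # column 0 is each point's distance to itself (0)
            return float(np.quantile(kth, quantile))

        pts = np.array([[0, 0], [0.5, 0], [10, 10], [10.6, 10]])
        print(epsilon_from_knn(pts, k=1))          # scales with the data, unlike a fixed 1 km
        print(epsilon_from_knn(pts * 100, k=1))    # 100x larger coordinates give a 100x larger epsilon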

    Read the article

  • 2D vector value replacement using classes; genetic algorithm mutation

    - by gcolumbus
    I have a 2D vector as defined by the classes below. Note that I've used classes because I'm trying to program a genetic algorithm such that many, many 2D vectors will be created and they will all be different. class Quad: public std::vector<int> { public: Quad() : std::vector<int>(4,0) {} }; class QuadVec : public std::vector<Quad> { }; An important part of my algorithm, however, is that I need to be able to "mutate" (randomly change) particular values in a certain number of randomly chosen 2D vectors. This has me stumped. I can easily write code to randomly select the value within the 2D vector that will be selected for "mutation" but how do I actually enact that change using classes? I understand how this would be done with one 2D vector that has already been initialized but how do I do this if it hasn't? Please let me know if I haven't provided enough info or am not clear as I tend to do that. Thanks for your time and help!
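
    For the mutation step itself, the usual approach is to pick a few random individuals from the population and overwrite one randomly chosen gene in each. A rough Python sketch; the 4-element lists mirror the Quad class above, and everything else (names, value ranges) is made up for illustration:

        import random

        def make_population(size):
            return [[0, 0, 0, 0] for _ in range(size)]        # each individual is one "quad"

        def mutate(population, num_individuals, low=-10, high=10):
            for quad in random.sample(population, num_individuals):   # distinct random individuals
                gene = random.randrange(len(quad))                     # which of the 4 values to change
                quad[gene] = random.randint(low, high)                 # overwrite it in place

        pop = make_population(20)
        mutate(pop, num_individuals=3)
        print([q for q in pop if q != [0, 0, 0, 0]])   # the mutated quads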

    Read the article

  • collision issues with quadtree [on hold]

    - by QuantumGamer
    So I implemented a quadtree in Java for my 2D game and everything works fine except for when I run my collision detection algorithm, which checks if an object has hit another object and which side it hit. My problem is that 80% of the time the collision algorithm works, but sometimes the objects just go through each other. Here is my method: private void checkBulletCollision(ArrayList<GameObject> object) { quad.clear(); // quad is the quadtree object for(int i=0; i < object.size();i++){ if(object.get(i).getId() == ObjectId.Bullet) // inserts the object into the quadtree quad.insert((Bullet)object.get(i)); } ArrayList<GameObject> returnObjects = new ArrayList<>(); // Uses the quadtree to work out which // other bullets this one can collide with for(int i=0; i < object.size(); i++){ returnObjects.clear(); if(object.get(i).getId() == ObjectId.Bullet){ quad.retrieve(returnObjects, object.get(i).getBoundsAll()); for(int k=0; k < returnObjects.size(); k++){ Bullet bullet = (Bullet) returnObjects.get(k); if(getBoundsTop().intersects(bullet.getBoundsBottom())){ vy = speed; bullet.vy = -speed; } if(getBoundsBottom().intersects(bullet.getBoundsTop())){ vy = -speed; bullet.vy = speed; } if(getBoundsLeft().intersects(bullet.getBoundsRight())){ vx = speed; bullet.vx = -speed; } if(getBoundsRight().intersects(bullet.getBoundsLeft())){ vx = -speed; bullet.vx = speed; } } } } } Any help would be appreciated. Thanks in advance.
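
    Fast-moving objects often skip past thin edge rectangles in a single frame, which is a common cause of occasional misses like this. A different side-detection technique, the minimum-penetration axis over whole bounding boxes, is sketched below in Python for illustration (it is not the question's method, and the box format is assumed to be x, y, width, height):

        def collision_side(a, b):
            """a, b: (x, y, w, h) axis-aligned boxes. Returns None or the side of `a` that was hit."""
            ax, ay, aw, ah = a
            bx, by, bw, bh = b
            dx = (ax + aw / 2) - (bx + bw / 2)
            dy = (ay + ah / 2) - (by + bh / 2)
            overlap_x = aw / 2 + bw / 2 - abs(dx)
            overlap_y = ah / 2 + bh / 2 - abs(dy)
            if overlap_x <= 0 or overlap_y <= 0:
                return None                      # no collision at all
            if overlap_x < overlap_y:            # shallower axis tells you which side was struck
                return "left" if dx > 0 else "right"
            return "top" if dy > 0 else "bottom"

        print(collision_side((0, 0, 10, 10), (8, 2, 10, 10)))   # 'right'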

    Read the article

  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of the PROCESS, when is made a queue, from a database. Example metadata: Process: Process Walker (1965) Exact reference: Walker Process Equipment., Inc. v. Food Machinery Corp. Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol= Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit. Parties: Walker Process Equipment, Inc. Sector: Systems is ... Start Date: October 12-13 Arguedas, 1965 Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment. Report of the evolution process: petitioner, in answer to respond... Importance: a) First case which established an analysis for the diagnosis of dispute… There are about 200 pages containing the information above. I have in mind the idea of implementing an algorithm in Python to be able to break this information sequence and try to store it in a web database (an open source application that I’m looking for) in order to allow for free consultations.
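
    As a starting point for the extraction step, here is a rough Python sketch: read the Word document's paragraphs and split the metadata into records keyed by each "Process:" line. It assumes the python-docx package is available (otherwise save the file as plain text first and read that instead); the field names are taken from the sample above, and nothing else reflects a real schema:

        import re
        import docx          # pip install python-docx (assumed available)

        FIELDS = ("Process", "Exact reference", "Link", "Type of procedure",
                  "Parties", "Sector", "Start Date", "Summary", "Importance")

        def extract_records(path):
            text = "\n".join(p.text for p in docx.Document(path).paragraphs)
            records = []
            # every "Process:" line starts a new record
            for chunk in re.split(r"\n(?=Process:)", text):
                record = {}
                for field in FIELDS:
                    m = re.search(rf"^{field}:\s*(.+)$", chunk, re.MULTILINE)
                    if m:
                        record[field] = m.group(1).strip()
                if record:
                    records.append(record)
            return records

        # rows = extract_records("cases.docx")   # then insert the rows into SQLite/Postgres, etc.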

    Read the article

  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ) If you are like me, you never read the BOL note that says quite clearly that it supports dates beginning with March 1, 1900. In fact, I was wrongly assuming that it would support any date that can be represented in a Date data type in Data Models, i.e. all dates beginning with January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which seems not to include that date, but this is a detail…). But we should not digress. The real issue is that if you try to call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result. evaluate row ( "x", DATE( 1900, 1, 1 ) ) -- returns WRONG result -- [x] 12/31/1899 12:00:00 AM   evaluate row ( "x", DATE( 1900, 2, 29 ) ) -- returns WRONG result -- [x] 2/28/1900 12:00:00 AM   evaluate row ( "x", DATE( 1900, 3, 1 ) ) -- returns CORRECT result -- [x] 3/1/1900 12:00:00 AM As usual, this is not a bug. It is “by design”. The DATE function works this way in Excel. And in Excel, too, it was “by design”. In this case the design is having the same bug as Lotus 1-2-3, which handled 1900 as a leap year, even though it isn’t. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I tried to open a bug in Connect. Please vote for it. I would like it if Microsoft changed this type of item from “by design” (as we can expect) to “by genetic disease”. Or “by historical respect”, in order to be more politically correct.
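
    To see where the one-day shift comes from, here is a small Python sketch of the serial-number arithmetic. It uses the usual conventions of Lotus/Excel serial 1 = 1900-01-01 (with a phantom February 29, 1900 at serial 60) and OLE day 0 = 1899-12-30; treat the mapping onto DAX internals as an assumption for illustration:

        from datetime import date, timedelta

        EXCEL_EPOCH = date(1899, 12, 31)   # so that Excel/Lotus serial 1 == 1900-01-01

        def excel_serial(d):
            """Serial number Excel/Lotus 1-2-3 assign to a real calendar date d."""
            serial = (d - EXCEL_EPOCH).days
            # Lotus 1-2-3 treated 1900 as a leap year, so every real date from 1900-03-01
            # onward is shifted up by one to leave room for the phantom 1900-02-29 (serial 60).
            return serial + 1 if d >= date(1900, 3, 1) else serial

        def serial_to_real_date(serial):
            """Interpret a serial with correct (OLE-style) calendar math: day 0 = 1899-12-30."""
            return date(1899, 12, 30) + timedelta(days=serial)

        # DATE(1900, 1, 1) builds serial 1 the Excel way, but reading that serial back
        # with correct calendar math lands one day earlier, on 1899-12-31:
        print(serial_to_real_date(excel_serial(date(1900, 1, 1))))   # 1899-12-31
        print(serial_to_real_date(excel_serial(date(1900, 3, 1))))   # 1900-03-01 (dates from here on agree)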

    Read the article

  • Does the direction of a Huffman tree's child node matter?

    - by Omega
    So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since Why create a Huffman tree per character instead of a Node?) for a school assignment. I now have a better understanding of how this thing is supposed to work. Wikipedia has a great-looking algorithm here that seemed to make my life way easier. Taken from http://en.wikipedia.org/wiki/Huffman_coding: Create a leaf node for each symbol and add it to the priority queue. While there is more than one node in the queue: Remove the two nodes of highest priority (lowest probability) from the queue. Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. Add the new node to the queue. The remaining node is the root node and the tree is complete. It looks simple and great. However, it left me wondering: when I "merge" two nodes (make them children of a new internal node), does it even matter which direction (left or right) each node ends up on afterwards? I still don't fully understand Huffman coding, and I'm not very sure whether there is a criterion used to tell whether a node should go to the right or to the left. I assumed that perhaps the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion. For instance, Wikipedia's example image http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png seems to put the highest ones to the right. But other images, like this one http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif, have them all to the left. However, they're never mixed up in the same image (some to the right and others to the left). So, does it matter? Why?
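
    For intuition, here is a short Python sketch of the queue-based construction. Swapping which merged node goes left or right only flips the 0/1 labels along that branch; the depth of every leaf, and therefore every code length, stays the same. The frequency table at the bottom is made up for the example:

        import heapq, itertools

        def code_lengths(freqs):
            """freqs: dict symbol -> frequency (at least two symbols assumed)."""
            tick = itertools.count()          # tie-breaker so heap entries stay comparable
            heap = [(f, next(tick), sym) for sym, f in freqs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, a = heapq.heappop(heap)
                f2, _, b = heapq.heappop(heap)
                # Using (a, b) or (b, a) here only swaps the 0/1 labels, not any leaf's depth.
                heapq.heappush(heap, (f1 + f2, next(tick), (a, b)))
            lengths = {}
            def walk(node, depth=0):
                if isinstance(node, tuple):
                    walk(node[0], depth + 1); walk(node[1], depth + 1)
                else:
                    lengths[node] = depth
            walk(heap[0][2])
            return lengths

        print(code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}))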

    Read the article

  • Drawing particles as a smooth blob

    - by Nömmik
    I'm new to game/graphics development and I'm playing around with particles (in 2D). I want to draw particles that are close to each other as a single blob, just like liquid/water. I do not want to draw big overlapping circles, as the blob won't be smooth (and will be too big). I don't really know physics, but I assume what I want is something that looks similar to surface tension. I haven't been able to find anything on stackexchange or on Google (maybe I do not know the correct keywords?). So far I have found two possible solutions, but I am unable to find any concrete information about the algorithms. One of them is to calculate the concave hull of the particles I consider to be a blob. I can find the blob by creating an equivalence class (on the relation "close to each other"). Strangely enough I haven't been able to find any algorithm explaining how to calculate the concave hull. Many posts (including on stackexchange) link to libraries or commercial products that do this (I need libraries that work in C#), but never any algorithm. Also, this solution might have a problem with a ring of particles, where it would not detect the empty space in the middle. While researching the concave hull I stumbled upon something called alpha shapes, which seems to be exactly what I want; however, just as with the concave hull, I haven't found any source explaining how they actually work. I have found some presentation materials but not enough to go on. It's like a big secret everyone knows except me :-/ After calculating the concave hull or alpha shape I want to turn it into a Bézier curve to make it smooth and nice. Although I do find my approach a bit too complex, maybe I am trying to solve this the wrong way? If you can either suggest another solution to my problem or explain the pieces I am missing, I would be very happy and grateful :-) Thanks.
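
    For the alpha-shape part, the commonly cited recipe is short enough to sketch: Delaunay-triangulate the points, keep only the triangles whose circumradius is below 1/alpha, and take the edges used by exactly one kept triangle as the concave boundary. A rough Python version follows (scipy is assumed to be available; the same steps port to C# with any Delaunay library):

        import numpy as np
        from scipy.spatial import Delaunay

        def alpha_shape_edges(points, alpha):
            """points: (n, 2) numpy array. Returns boundary edges as (i, j) index pairs."""
            tri = Delaunay(points)
            edge_count = {}
            for ia, ib, ic in tri.simplices:
                a, b, c = points[ia], points[ib], points[ic]
                la = np.linalg.norm(b - c); lb = np.linalg.norm(c - a); lc = np.linalg.norm(a - b)
                area = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2.0
                if area == 0:
                    continue
                # circumradius R = (|ab| * |bc| * |ca|) / (4 * area); keep "small" triangles only
                if (la * lb * lc) / (4.0 * area) < 1.0 / alpha:
                    for e in ((ia, ib), (ib, ic), (ic, ia)):
                        e = tuple(sorted(e))
                        edge_count[e] = edge_count.get(e, 0) + 1
            # edges belonging to exactly one kept triangle lie on the outline (holes included)
            return [e for e, n in edge_count.items() if n == 1]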

    Read the article

  • best way to compute vertex normals from a list of Triangles

    - by nkint
    Hi, I'm a complete newbie in computer graphics, so sorry if it's a stupid question. I'm trying to make a simple 3D engine from scratch, more for educational purposes than for real use. I have a Surface object with a list of Triangles inside. For now I compute normals inside the Triangle class, in this way: triangle.computeFaceNormals() { Vec3D u = v1.sub(v3) Vec3D v = v1.sub(v2) Vec3D normal = Vec3D.cross(u,v) normal.normalized() this.n1 = this.n2 = this.n3 = normal } and when building the surface: t = new Triangle(v1,v2,v3).computeFaceNormals() surface.addTriangle(t) and I think this is the best way to do that... isn't it? Now, what about vertex normals? I've found this simple algorithm: flipcode vertex normal. But hey, this algorithm has... exponential complexity? (if my memory of my computer science background doesn't fail me; by the way, it has 3 nested loops, so I don't think it's the best way to do it). Any suggestions?
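
    The standard single-pass alternative is to accumulate each face normal onto the vertices it touches and normalize once per vertex, which is linear in triangles plus vertices. A Python sketch, assuming an index-based vertex/triangle layout (the data at the bottom is just an example):

        import math

        def vertex_normals(vertices, triangles):
            """vertices: list of (x, y, z); triangles: list of (i, j, k) index triples."""
            acc = [[0.0, 0.0, 0.0] for _ in vertices]
            for i, j, k in triangles:
                ax, ay, az = (vertices[j][n] - vertices[i][n] for n in range(3))
                bx, by, bz = (vertices[k][n] - vertices[i][n] for n in range(3))
                nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx  # face normal
                for v in (i, j, k):                        # add it to every vertex of this face
                    acc[v][0] += nx; acc[v][1] += ny; acc[v][2] += nz
            out = []
            for x, y, z in acc:
                length = math.sqrt(x * x + y * y + z * z) or 1.0
                out.append((x / length, y / length, z / length))
            return out

        verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        print(vertex_normals(verts, [(0, 1, 2), (0, 2, 3)]))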

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask. There are a lot of cases where a wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm: Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid); the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total. For every visibility point on the grid, check if that point is in the player's line of sight. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent. This algorithm works (not perfectly, but it only requires some tuning); however, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70, depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D but I'm not looking for an engine-specific solution.

    Read the article
