Search Results

Search found 442 results on 18 pages for 'the naive'.

Page 2/18 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly until there are no more conflicts.

        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]

    Constraints are that appointments cannot be after 15:00 and cannot be before 9:00. This is the naive algorithm:

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

    This results in the following resolution, which obviously breaks those constraints; the last appointment will have to be pushed to the following day.

        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]

    This is clearly sub-optimal: it performs three appointment moves when the conflict could have been resolved with one. If we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments to move in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should the solution search be breadth-first or depth-first? When do I know if the solution is "good enough"?
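
    (A runnable rendering of the naive push-down above, as a sketch: times are minutes since midnight, and the Appointment class is an illustrative stand-in, not from the question.)

        class Appointment:
            def __init__(self, start, end):
                self.start, self.end = start, end

        DAY_START, DAY_END = 9 * 60, 15 * 60  # 09:00 .. 15:00 in minutes

        appointment_list = [Appointment(600, 720), Appointment(660, 750),
                            Appointment(780, 840), Appointment(810, 870)]

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i + 1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

        # Reproduces the question's result: the last appointment now ends at
        # 15:30 (930 minutes), past DAY_END, so it spills into the next day.
        # A less naive solver would also try shifting earlier appointments
        # back toward DAY_START before overflowing the day.
        assert appointment_list[-1].end == 930 > DAY_END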

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game using the naive component-based architecture (entities are a bag of components; entity.Update() calls Update on each updateable component). While the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency, b) networking, c) unit testing.

    Multithreading / concurrency is difficult because I basically have to do poor man's concurrency: running the entity updates in separate threads while locking only the things that crash (like lists) and ignoring the staleness of read state (some states are already updated, others aren't). Networking: there are no explicit state changes that I could efficiently push over the net. Unit testing: all updates may or may not conflict, so automated testing is at least awkward.

    I was thinking about these issues a bit and would like your input on these changes / ideas: 1) switch from the naive CBA to a CBA with subsystems that work on lists of components; 2) make all state changes explicit; 3) combine 1 and 2. Example world update:

        statePostProcessing.Wait() // ensure that post processing has finished
        Apply(postProcessedState)
        state = new StateBag()
        Concurrently(
            () => LifeCycleSubSystem.Update(state), // populates the state bag
            () => MovementSubSystem.Update(state),  // populates the state bag
            ....
        )
        statePostProcessing = Future(() => PostProcess(state))
        statePostProcessing.Start() // Tick is finished; post processing happens in the background

    So basically the changes are (consistently) based on the data from the last tick; the post processing can a) generate network packages and b) fix conflicts / remove useless changes (example: entity has been destroyed - ignore movement etc.). EDIT: To clarify the granularity of the state changes: if I save these post-processed state bags and apply them to an empty world, I see exactly what has happened in the game these state bags originated from - "free" replay capability. EDIT2: I guess I should have used the term Event instead of State Change, and point out that I kind of want to use the Event Sourcing pattern.
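
    (A minimal sketch of the "explicit state changes" idea, in Python for brevity; StateBag and the subsystem names are illustrative, not from any engine. Systems read only last tick's committed state and append changes to a bag, which is applied in one step at the end of the tick and could be serialized for networking or replay.)

        class StateBag:
            """Collects explicit state changes (events) produced during a tick."""
            def __init__(self):
                self.events = []
            def record(self, entity_id, attribute, value):
                self.events.append((entity_id, attribute, value))

        def movement_subsystem(world, bag):
            # Reads only the committed (last-tick) state; never mutates it.
            for eid, entity in world.items():
                bag.record(eid, "x", entity["x"] + entity["vx"])

        def apply(world, bag):
            # Single commit point: no locks needed while subsystems run.
            for eid, attribute, value in bag.events:
                world[eid][attribute] = value

        world = {1: {"x": 0.0, "vx": 2.0}}
        bag = StateBag()
        movement_subsystem(world, bag)  # could run concurrently with others
        apply(world, bag)               # serializing bag.events gives replay
        assert world[1]["x"] == 2.0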

    Read the article

  • Newbie questions about COM

    - by smwikipedia
    Hi, I am quite new to COM so the questions may seem naive.

    Q1. About Windows DLLs: based on my understanding, a Windows DLL can export functions, types (classes) and global variables. Is this understanding all right?

    Q2. About COM: my naive understanding is that a COM DLL seems to be just a new logical way to organize the functions and types exported by a standard Windows DLL. A COM DLL exports both functions such as DllRegisterServer() and DllGetClassObject(), and also the classes that implement the IUnknown interface. Is this understanding all right?

    Q3. *.def & *.idl: a *.def file is used to define the functions exported by a Windows DLL in the traditional way, such as DllGetClassObject(); a *.idl file is used to define the interfaces implemented by a COM coclass.

    Thanks in advance.

    Read the article

  • Communication between (Desktop/Smartphone) Application and Web Application

    - by erlord
    Hi all, I wonder what the commonly used method and protocol is for a, say, smartphone application to send its data to a web application (where the data should be processed and stored). My naive beginner's approach would be something like this: 1) from my smartphone app, use a framework to encode my data object into a JSON object; 2) send this object via HTTP to a listener addressed by a dedicated URL; 3) on the server side, use a JSON parser and store it via ORM etc. into a database. Questions: 1) Is this too naive? 2a) If yes: what is the appropriate way to tackle this workflow? 2b) If no: what frameworks can be used in Java for serializing/deserializing to/from JSON objects? Any example of a JSON listener on the web? A tutorial for an HTTP-based Java listener? Thanks for all answers and suggestions in advance. Regards
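
    (A sketch of that three-step workflow in Python for brevity; the URL is hypothetical, and in Java the equivalent roles would be played by e.g. Gson/Jackson for step 1 and a servlet for steps 2-3.)

        import json
        from urllib.request import Request, urlopen

        # 1) encode the data object as JSON, 2) POST it to the dedicated URL
        payload = json.dumps({"device": "phone-1", "reading": 42}).encode("utf-8")
        req = Request("http://example.com/api/readings", data=payload,
                      headers={"Content-Type": "application/json"})
        # urlopen(req)  # uncomment against a real endpoint

        # 3) server side: a minimal listener that parses the JSON body
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class ReadingHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers["Content-Length"])
                data = json.loads(self.rfile.read(length))  # hand off to ORM here
                self.send_response(204)
                self.end_headers()

        # HTTPServer(("", 8080), ReadingHandler).serve_forever()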

    Read the article

  • Construct A Polygon Out of Union of Many Polygons

    - by Ngu Soon Hui
    Suppose I have many polygons. What is the best algorithm to construct a polygon (maybe with holes) out of the union of all those polygons? For my purpose, you can imagine each polygon as a jigsaw puzzle piece: when you complete them, you get a nice picture. But the catch is that a small portion (<5%) of the jigsaw is missing, and you are still required to form a picture as complete as possible; that's the polygon, maybe with holes, that I want to form. My naive approach is to take two polygons, union them, take another polygon, union it with the union of the first two, and repeat this process until every single piece has been unioned. Then I run through the resulting polygon list and check whether there are still polygons that can be combined, repeating until a satisfactory result is achieved. But this seems like an extremely naive approach. I just wonder, is there any better algorithm?
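
    (One better-than-pairwise route, assuming the shapely library is available: unary_union merges all pieces in one cascaded, tree-structured pass rather than folding the list polygon by polygon, and the result preserves any holes left by missing pieces. The coordinates below are illustrative.)

        from shapely.geometry import Polygon
        from shapely.ops import unary_union

        pieces = [
            Polygon([(0, 0), (2, 0), (2, 1), (0, 1)]),
            Polygon([(0, 1), (1, 1), (1, 2), (0, 2)]),  # a neighbouring piece
        ]
        merged = unary_union(pieces)  # one tree-structured union of everything
        print(merged.geom_type)       # 'Polygon', or 'MultiPolygon' if disjoint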

    Read the article

  • Detecting wins in peer to peer RTS games like Starcraft

    - by user782220
    A typical RTS game is implemented with the standard networking model: peer-to-peer lockstep. Consider Starcraft 2: since there is only communication between the two players in a peer-to-peer model, Battle.net presumably doesn't know anything about the state of the game, so how does it know who the winner was in the end? Relying on the two peers not to cheat and to report accurate results is naive.
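
    (A sketch of one generic mitigation, in Python for illustration; this is an assumption about the general technique, not a description of how Battle.net actually works. Each peer reports the result along with a digest of its deterministic game state, and the service accepts only matching reports, flagging mismatches for replay auditing.)

        import hashlib

        def make_report(winner, final_state: bytes):
            return {"winner": winner,
                    "digest": hashlib.sha256(final_state).hexdigest()}

        def adjudicate(report_a, report_b):
            if report_a == report_b:
                return report_a["winner"]  # both peers agree: accept
            return None                    # mismatch: audit the replays

        state = b"tick 5012: player2 eliminated"  # hypothetical serialized state
        a = make_report("player1", state)
        b = make_report("player1", state)
        assert adjudicate(a, b) == "player1"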

    Read the article

  • Advice for a getting a job in algorithmic trading - writing faster code

    - by Alex
    I am currently an intermediate Java developer working in the financial industry. I am considering trying to get into an algorithmic trading developer position. I am looking for any advice/resources that may help me obtain such a job. My naive initial thought is to concentrate on learning how to write faster, more memory-efficient code whilst maintaining readability. Can anyone point me in the right direction of some useful resources for what I am aiming to achieve?

    Read the article

  • Is there any heuristic to polygonize a closed 2d-raster shape with n triangles?

    - by Arthur Wulf White
    Let's say we have a 2D image, black on white, that shows a closed geometric shape. Is there any algorithm (other than naive brute force) that approximates that shape as closely as possible with n triangles? If you want a formal definition of "as closely as possible": approximate the shape with a polygon that, when rendered into a new 2D image, matches the largest possible number of pixels of the original image.
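
    (One practical heuristic, assuming OpenCV is available: trace the shape's outer contour, then binary-search the Douglas-Peucker tolerance until the simplified polygon fits a vertex budget. A simple polygon with v vertices triangulates into v - 2 triangles, so a budget of n + 2 vertices yields roughly n triangles after e.g. ear clipping.)

        import cv2

        def approx_with_budget(mask, max_vertices):
            """mask: binary uint8 image with the shape in white (invert the
            question's black-on-white first). Requires OpenCV 4.x."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            contour = max(contours, key=cv2.contourArea)  # the closed shape
            poly = contour
            lo, hi = 0.0, cv2.arcLength(contour, True)
            for _ in range(50):  # binary search on simplification tolerance
                eps = (lo + hi) / 2
                candidate = cv2.approxPolyDP(contour, eps, True)
                if len(candidate) > max_vertices:
                    lo = eps                    # still too detailed: loosen
                else:
                    poly, hi = candidate, eps   # fits the budget: tighten
            return poly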

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to: define the function of each dependent variable in the system; trigger a re-calculation, whenever a variable changes, of the variables that depend on it.

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed, and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach: the (mis)use of INotifyPropertyChanged seems to me like a code smell; the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time; and the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs.

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#? Edit: More context: the model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
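
    (A language-agnostic sketch of the dependency-map idea, in Python for brevity; the names are illustrative, not an established library. Each derived variable declares its inputs once, and a change sweeps the formulas to a fixpoint, so there is no case statement and no PropertyChanged plumbing.)

        class CalculationModel:
            def __init__(self):
                self.values = {}
                self.formulas = {}  # name -> (input names, function)

            def define(self, name, inputs, fn):
                self.formulas[name] = (inputs, fn)

            def set(self, name, value):
                self.values[name] = value
                self._recalculate()

            def _recalculate(self):
                changed = True
                while changed:  # sweep to a fixpoint; fine for small acyclic graphs
                    changed = False
                    for name, (inputs, fn) in self.formulas.items():
                        if all(i in self.values for i in inputs):
                            new = fn(*(self.values[i] for i in inputs))
                            if self.values.get(name) != new:
                                self.values[name] = new
                                changed = True

        model = CalculationModel()
        model.define("BMI", ["Weight", "Height"], lambda w, h: w / h ** 2)
        model.set("Height", 1.80)
        model.set("Weight", 75.0)
        assert round(model.values["BMI"], 1) == 23.1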

    Read the article

  • category theory based language

    - by pagoda_5b
    It may sound naive, but is there any programming language, or research thereof, based entirely on category theory? I mean this as opposed to embedding CT concepts as an additional feature (as in Haskell or Scala). Would it be too abstract or too complex as an approach, or are there any known reasons that make it impossible or impractical? I have only a relative understanding of the theory as related to programming, so please give me some explanation if the question doesn't make sense at all.

    Read the article

  • cracked dreamweaver and photoshop - private & professional use [closed]

    - by Céline Chevalier
    I am thinking of downloading cracked versions of Dreamweaver and Photoshop. I am planning to do some private projects but also to use them when working as a developer (professionally). Is it risky, and am I likely to get caught? How? I'm not sure whether they can get into my PC, find out who I am, and accuse me once it is clear that I (or my PC) am using it, or whether this is just the naive thinking of a new, inexperienced web developer.

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements:

    It MUST handle arbitrarily big integers (my primary interest is in integers). In case you don't know what arbitrarily big means, imagine something like 100000! (the factorial of 100000). The precision MUST NOT NEED to be specified during library initialization / object creation; the precision should ONLY be constrained by the available resources of the system. It SHOULD utilize the full power of the platform and should handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions; the library SHOULD NOT calculate this in the same way as it does 2^66 + 2^65 on the same platform. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. The ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. The ability to handle symbolic computations is even better.

    Here is what I have found so far: Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath; it may be based on theories / algorithms that I have never learnt. The built-in integer types or core libraries of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library they are using or which kind of implementation they use.

    What I already know: using a char as a decimal digit and a char* as a decimal string, doing calculations on the digits in a for-loop; using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, doing calculations on the elements in a for-loop; Booth's multiplication algorithm. What I don't know: printing the binary array mentioned above in decimal without using naive methods. (Example of a naive method: add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; use a char* string mentioned above to store the intermediate decimal results.)

    What I would appreciate: good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion); good suggestions on books / articles that I should read. For example, an illustration with figures of how a non-naive arbitrarily-long binary-to-decimal conversion algorithm works would be good. Any help, really. Please DO NOT answer this question if you think using a double (or a long double, or a long long double) can solve this problem easily (if you do think so, it means that you don't understand the issue under discussion), or if you have no experience with arbitrary-precision mathematics. Thank you in advance! Asuka Kenji
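
    (A sketch of the "array of units" representation mentioned above, in Python for illustration: little-endian 32-bit limbs with schoolbook carry propagation, which is essentially what bignum libraries such as GMP do natively in machine words.)

        BASE = 2 ** 32  # one 32-bit "limb" per element, least significant first

        def add_limbs(a, b):
            """Schoolbook addition of two little-endian limb arrays."""
            result, carry = [], 0
            for i in range(max(len(a), len(b))):
                s = ((a[i] if i < len(a) else 0)
                     + (b[i] if i < len(b) else 0) + carry)
                result.append(s % BASE)   # low 32 bits stay in this limb
                carry = s // BASE         # the overflow carries onward
            if carry:
                result.append(carry)
            return result

        assert add_limbs([0xFFFFFFFF], [1]) == [0, 1]  # (2**32 - 1) + 1 = 2**32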

    Read the article

  • How to keep only duplicates efficiently?

    - by Marc Eaddy
    Given an STL vector, I'd like an algorithm that outputs only the duplicates, in sorted order, e.g.,

        INPUT : { 4, 4, 1, 2, 3, 2, 3 }
        OUTPUT: { 2, 3, 4 }

    The algorithm is trivial, but the goal is to make it as efficient as std::unique(). My naive implementation, which modifies the container in-place:

        void keep_duplicates(vector<int>* pv)
        {
            // Sort (in-place) so we can find duplicates in linear time
            sort(pv->begin(), pv->end());
            vector<int>::iterator it_start = pv->begin();
            while (it_start != pv->end())
            {
                size_t nKeep = 0;
                // Find the next different element
                vector<int>::iterator it_stop = it_start + 1;
                while (it_stop != pv->end() && *it_start == *it_stop)
                {
                    nKeep = 1; // This gets set redundantly
                    ++it_stop;
                }
                // If the element is a duplicate, keep only the first one (nKeep=1).
                // Otherwise, the element is not duplicated so erase it (nKeep=0).
                it_start = pv->erase(it_start + nKeep, it_stop);
            }
        }

    If you can make this more efficient, elegant, or general, please let me know. For example, a custom sorting algorithm, or copying elements in the 2nd loop to eliminate the erase() call.
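
    (For comparison, the same specification sketched in Python: counting occurrences gives the duplicates in one pass plus a sort of the distinct values, with no erase() calls; the C++ analogue would count into a map or scan the sorted range while copying.)

        from collections import Counter

        def keep_duplicates(v):
            return sorted(x for x, n in Counter(v).items() if n > 1)

        assert keep_duplicates([4, 4, 1, 2, 3, 2, 3]) == [2, 3, 4]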

    Read the article

  • Efficient method to calculate the rank vector of a list in Python

    - by Tamás
    I'm looking for an efficient way to calculate the rank vector of a list in Python, similar to R's rank function. In a simple list with no ties between the elements, element i of the rank vector of a list l should be x if and only if l[i] is the x-th element in the sorted list. This is simple so far; the following code snippet does the trick:

        def rank_simple(vector):
            return sorted(range(len(vector)), key=vector.__getitem__)

    Things get complicated, however, if the original list has ties (i.e. multiple elements with the same value). In that case, all the elements having the same value should have the same rank, which is the average of their ranks obtained using the naive method above. So, for instance, if I have [1, 2, 3, 3, 3, 4, 5], the naive ranking gives me [0, 1, 2, 3, 4, 5, 6], but what I would like to have is [0, 1, 3, 3, 3, 5, 6]. What would be the most efficient way to do this in Python? Footnote: I don't know if NumPy already has a method to achieve this or not; if it does, please let me know, but I would be interested in a pure Python solution anyway, as I'm developing a tool which should work without NumPy as well.
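
    (A pure-Python sketch of the tie-averaged, "fractional" ranking described above: argsort first, then give each run of equal values the mean of the ranks it spans.)

        from itertools import groupby

        def rank_with_ties(vector):
            order = sorted(range(len(vector)), key=vector.__getitem__)
            ranks = [0.0] * len(vector)
            i = 0
            for _, group in groupby(order, key=vector.__getitem__):
                tied = list(group)                   # indices sharing one value
                average = i + (len(tied) - 1) / 2.0  # mean of ranks i..i+len-1
                for index in tied:
                    ranks[index] = average
                i += len(tied)
            return ranks

        assert rank_with_ties([1, 2, 3, 3, 3, 4, 5]) == [0, 1, 3, 3, 3, 5, 6]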

    Read the article

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use in this case, but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used. I need to retrieve all the records with a specific float value.

        sp_help MyTable
        -- Column_name     Type   Computed  Length  Prec
        -- RandomGrouping  float  no        8       53

    Here's my naive attempt:

        -- yields no results
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping = 0.867153569942739

    And here's an approximately working attempt:

        -- yields 2 records
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001
                                 AND 0.867153569942739 + 0.00000001
        -- 0.867153569942739
        -- 0.867153569942739

    In my naive attempt, is that literal a floating point literal? Or is it really a decimal literal that gets converted later? If my literal is not a floating point literal, what is the syntax for making a floating point literal? EDIT: Another possibility has occurred to me... it may be that a more precise number than is displayed is stored in this column. It may be impossible to create a literal that represents this number. I will accept answers that demonstrate that this is the case. EDIT: Response to DVK: TSQL is MS SQL Server's dialect of SQL. This script works, and so equality can be performed deterministically between float types:

        DECLARE @X float
        SELECT top 1 @X = RandomGrouping
        FROM MyTable
        WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001
                                 AND 0.839110948199148 + 0.000000000001

        -- yields two records
        SELECT *
        FROM MyTable
        WHERE RandomGrouping = @X

    I said "approximately" because that method tests for a range. With that method I could get values that are not equal to my intended value. The linked article doesn't apply because I'm not (intentionally) trying to straddle the boundaries between decimal and float. I'm trying to work with only floats. This isn't about the non-convertibility of decimals to floats.
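
    (On the final EDIT's hypothesis, a quick demonstration, in Python since both environments use IEEE 754 doubles: two distinct doubles can share the same 15-digit display, so a literal typed back from the displayed value need not equal the stored one. math.nextafter requires Python 3.9+.)

        import math

        a = 0.867153569942739        # the value as displayed, 15 digits
        b = math.nextafter(a, 1.0)   # the next representable double above it

        print(f"{a:.15g}")  # 0.867153569942739
        print(f"{b:.15g}")  # 0.867153569942739  (same 15 significant digits)
        print(a == b)       # False: equality on the typed literal can miss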

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you?

    I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it.

    I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me? EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • Why can a public class not inherit from a less visible one?

    - by Dan Tao
    I apologize if this question has been asked before. I've searched SO somewhat and wasn't able to find it. I'm just curious what the rationale behind this design was/is. Obviously I understand that private/internal members of a base type cannot, nor should they, be exposed through a derived public type. But it seems to my naive thinking that the "hidden" parts could easily remain hidden while some base functionality is still shared and a new interface is exposed publicly. I'm thinking of something along these lines:

    Assembly X

        internal class InternalClass
        {
            protected virtual void DoSomethingProtected()
            {
                // Let's say this method provides some useful functionality.
                // Its visibility is quite limited (only to derived types in
                // the same assembly), but at least it's there.
            }
        }

        public class PublicClass : InternalClass
        {
            public void DoSomethingPublic()
            {
                // Now let's say this method is useful enough that this type
                // should be public. What's keeping us from leveraging the
                // base functionality laid out in InternalClass's implementation,
                // without exposing anything that shouldn't be exposed?
            }
        }

    Assembly Y

        public class OtherPublicClass : PublicClass
        {
            // It seems (again, to my naive mind) that this could work. This class
            // simply wouldn't be able to "see" any of the methods of InternalClass
            // from AssemblyX directly. But it could still access the public and
            // protected members of PublicClass that weren't inherited from
            // InternalClass. Does this make sense? What am I missing?
        }

    Read the article

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi all, sorry if the question is very naive. I will have to check the below condition in my code: 0 < x < y, i.e. code similar to

        if (x > 0 && x < y)

    The basic problem at the system level is: currently, for every call (telecom-domain terminology), my existing code is hit many times, so performance is very critical. Now I need to add a boundary check at many locations, with a different boundary comparison at each one. At the normal level of coding, the above comparison looks trivial, but layered over my statistics module (which is dipped into many times), performance will go down. So I would like to know the best possible way to handle the above scenario (an optimal technique for limits checking). For example, would a bitwise comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span? Other info: x is an unsigned integer (which must be checked to be greater than 0 and less than y); y is an unsigned integer; y is non-const and varies for every comparison. Here time is the constraint, compared to space. Language: C++. Also, if I later need to change the type of y to float/double, is there another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to float/double)? Thanks in advance for any input. PS: The OSes used are SUSE 10 64-bit x86_64, AIX 5.3 64-bit, and HP-UX 11.1 A 64.
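
    (One classic trick for unsigned x, shown in Python with explicit 32-bit wraparound: when y >= 1, the pair of tests 0 < x && x < y collapses to the single unsigned comparison (x - 1) < (y - 1), because x == 0 wraps to the maximum value. Whether this beats the plain two-test form is compiler- and platform-dependent; modern compilers often do it themselves, so measure first.)

        MASK = 0xFFFFFFFF  # emulate 32-bit unsigned wraparound

        def in_bounds_naive(x, y):
            return 0 < x < y

        def in_bounds_single_compare(x, y):
            return ((x - 1) & MASK) < (y - 1)  # x == 0 wraps to 0xFFFFFFFF

        # Exhaustive agreement check on a small range (y >= 1 keeps y - 1 safe)
        assert all(in_bounds_naive(x, y) == in_bounds_single_compare(x, y)
                   for x in range(300) for y in range(1, 300))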

    Read the article

  • Is xterm the terminal window we open in Ubuntu

    - by blog.adaptivesoftware.biz
    I know this is a very naive question. I was reading somewhere that Linux allows 7 xterms. However, I can start more than 7 terminal apps from my Ubuntu system (Applications - Accessories - Terminal). There is definitely a hole in my knowledge... it would help if someone explained the difference between an xterm and the Terminal application in a Linux distribution such as Ubuntu.

    Read the article

  • how to create a lotus notes view that shows mails from last two days?

    - by Hemal Pandya
    How do I create a view in Lotus Notes that shows recent mails, say mails received in the last two days? My naive attempt was to create a view with the simple search "date created is in the last 2 days". While this pulls in newer mails, it does not seem to clear out old ones, which means the view contains all mails received since two days before the view was created. What is the right way to do this?

    Read the article

  • eCryptFS: How to mount a backup of an encrypted home dir?

    - by Boldewyn
    I use eCryptFS to encrypt the home directory of my laptop. My backup script copies the encrypted files to a server (together with everything else in /home/.ecryptfs). How can I mount the encrypted files of the backup? I'd like to verify that I can do that, and that everything is in place. My naive try with mount -t ecryptfs /backup/home/.ecryptfs/boldewyn /mnt/test didn't work; eCryptFS wanted to create a new partition.

    Read the article

  • OpenGL or OpenGL ES

    - by zxspectrum
    What should I learn, OpenGL 4.1 or OpenGL ES 2.0? I will be developing desktop applications using Qt, but I may start developing mobile applications in a few months too. I don't know anything about 3D, 3D math, etc., and I'd rather spend 100 bucks on a good book than a week digging through websites and learning by trial and error. One problem I see with OpenGL 4.1 is that, as far as I know, there is no book yet (the most recent ones cover OpenGL 3.3 or 4.0), while there are books on OpenGL ES 2.0. On the other hand, from my naive point of view, OpenGL 4.1 seems like OpenGL ES 2.0 plus additions, so it looks like it would be easier/better to first learn OpenGL ES 2.0, then go on to the shader language, etc. Please don't tell me to use NeHe (it's generally agreed that it's full of bad/old practices), the Durian tutorial, etc. Thanks

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >