Search Results

Search found 5751 results on 231 pages for 'analysis patterns'.


  • What Design Pattern To Replace 'CurrentStep' Variable

    - by Rob P.
    Tried to search but didn't know how to phrase it. I've got some code that is essentially:

        Private CurMajorStep As Integer = 0
        Private CurMinorStep As Integer = 0

        Public Sub PerformNextStep()
            Select Case CurMajorStep
                Case 0 : ThingOne()
                Case 1 : ThingTwo()
                Case 2 : ThingThree()
                Case 3 : ThingFour()
                Case 4 : AnythingElse()
                Case 5 : Finish()
            End Select
        End Sub

    And then, in some of those, CurMinorStep keeps track of the current state of that particular 'step'. I hope that all makes sense. The code is messy and I know it's going to be problematic to maintain. Can someone point me to a clean OO pattern to handle this?
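
    One frequently suggested replacement is the State pattern: each step becomes its own object that knows what to do and which step comes next, so the integer and the Select Case disappear. A minimal Java sketch of the idea (all class names here are illustrative, not from the question):

        // State-pattern sketch: each step performs its work and hands back the next step.
        interface Step {
            Step perform(); // returns the step that follows, or null when done
        }

        class ThingOne implements Step {
            public Step perform() {
                System.out.println("doing thing one");
                return new ThingTwo();
            }
        }

        class ThingTwo implements Step {
            public Step perform() {
                System.out.println("doing thing two");
                return new Finish();
            }
        }

        class Finish implements Step {
            public Step perform() {
                System.out.println("finished");
                return null; // no further steps
            }
        }

        class Workflow {
            private Step current = new ThingOne();

            public void performNextStep() {
                if (current != null) {
                    current = current.perform();
                }
            }
        }

    A minor step (CurMinorStep) can live as a field inside the step class that needs it, keeping each step's bookkeeping local.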

    Read the article

  • What exactly is GRASP's Controller about?

    - by devoured elysium
    What is the idea behind GRASP's Controller pattern? My current interpretation is that sometimes you want to achieve something that requires a couple of classes, but none of those classes has (or should have) access to all the information needed to do it, so you create a new class that does the job, holding references to all the needed classes (that is, it could be the information expert). Is this a correct view of what GRASP's Controller is about? Generally, when googling or SO'ing "controller", I just get results about MVC (and whatnot), which are topics I don't understand, so I'd like answers that don't assume I know ASP.NET MVC or the like. Thanks
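
    For illustration, here is a minimal Java sketch of that reading of the pattern: a controller class receives the system operation and delegates to the domain objects that actually hold the information (all names below are hypothetical):

        // GRASP-style controller sketch: it coordinates, the domain objects do the work.
        class OrderController {
            private final Inventory inventory;
            private final Billing billing;

            OrderController(Inventory inventory, Billing billing) {
                this.inventory = inventory;
                this.billing = billing;
            }

            // One method per system operation; no business logic lives here.
            void placeOrder(String itemId, int quantity) {
                inventory.reserve(itemId, quantity);
                billing.charge(itemId, quantity);
            }
        }

        class Inventory {
            void reserve(String itemId, int quantity) { /* information expert for stock */ }
        }

        class Billing {
            void charge(String itemId, int quantity) { /* information expert for payment */ }
        }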

    Read the article

  • The right way to implement communication between Java objects

    - by imoschak
    I'm working on an academic project which simulates a rather large queuing procedure in Java. The core of the simulator rests within one package containing 8 classes, each implementing a single concept; every class in the project follows SRP. These classes encapsulate the behavior of the simulator and interconnect with every other class in the project. The problem that has arisen is that most of these 8 classes are, as is logical I think, tightly coupled: each one has to have working knowledge of every other class in the package in order to call its methods when needed. The application needs only one instance of each class, so it might be better to create static fields for each class in a new class and make calls through those, instead of keeping a reference in each class to every other class in the package (which I'm fairly certain is incorrect). But is this considered a correct design solution? Or is there a design pattern that better suits my needs?
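
    One alternative to static fields and to all-to-all references is a mediator: a single class owns the one instance of each component and routes calls between them, so each component keeps at most one reference. A rough Java sketch (names invented for illustration):

        // Mediator sketch: components talk only to the mediator, never to each other.
        interface SimulatorMediator {
            void jobArrived(Job job);
        }

        class Simulator implements SimulatorMediator {
            private final QueueManager queues = new QueueManager();
            private final Scheduler scheduler = new Scheduler(this);

            public void jobArrived(Job job) {
                queues.enqueue(job); // route the event to the components that care
                scheduler.wake();
            }
        }

        class QueueManager {
            void enqueue(Job job) { /* ... */ }
        }

        class Scheduler {
            private final SimulatorMediator mediator; // the only reference a component keeps

            Scheduler(SimulatorMediator mediator) {
                this.mediator = mediator;
            }

            void wake() { /* ... */ }
        }

        class Job { }

    Constructor injection of just the collaborators each class actually needs is the other common answer; both avoid global static state.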

    Read the article

  • Long list of if statements in Java

    - by steve
    Hi, sorry I can't find a question answering this; I'm almost certain someone else has raised it before. My problem is that I'm writing some system libraries to run on embedded devices. I have commands which can be sent to these devices over radio broadcasts, and this can only be done as text. Inside the system libraries I have a thread which handles the commands, and it looks like this:

        if (value.equals("A")) {
            doCommandA();
        } else if (value.equals("B")) {
            doCommandB();
        } else if ...

    The problem is that there are a lot of commands, so it will quickly spiral into something out of control: horrible to look at, painful to debug, and mind-boggling to understand in a few months' time.
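
    A common way out of such a chain is a command map: each command string keys a handler, so dispatch is one lookup and adding a command is one line. A minimal Java sketch (assumes Java 8+; the command names are from the question, everything else is illustrative):

        import java.util.HashMap;
        import java.util.Map;

        class CommandDispatcher {
            private final Map<String, Runnable> commands = new HashMap<>();

            CommandDispatcher() {
                commands.put("A", this::doCommandA);
                commands.put("B", this::doCommandB);
                // ...one line per additional command
            }

            void handle(String value) {
                Runnable command = commands.get(value);
                if (command != null) {
                    command.run();
                } else {
                    System.err.println("unknown command: " + value);
                }
            }

            private void doCommandA() { /* ... */ }
            private void doCommandB() { /* ... */ }
        }

    On a pre-Java-8 embedded runtime, anonymous Runnable instances in place of the method references give the same structure.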

    Read the article

  • How does Ruby allow a method and a class with the same name?

    - by Daniel Beardsley
    I happened to be working on a Singleton class in Ruby and just remembered the way it works in factory_girl. They worked it out so you can use both the long way, Factory.create(...), and the short way, Factory(...). I thought about it and was curious to see how they made the class Factory also behave like a method. They simply used the name Factory twice, like so:

        def Factory (args)
          ...
        end

        class Factory
          ...
        end

    My question is: how does Ruby accomplish this? And is there danger in using this seemingly quirky pattern?

    Read the article

  • How does Undo work?

    - by dontWatchMyProfile
    How does undo work? Does it copy all the managed objects every time any of the values change? Or does it only record the actual changes, together with information about which objects were affected? Is that heavyweight or lightweight?
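
    By way of background, undo managers are usually built the second way: each change registers an inverse operation naming the affected object, rather than snapshotting everything. A generic Java sketch of that change-recording idea (illustrative only, not Apple's implementation):

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Undo by recording inverse operations instead of copying all objects.
        class UndoManager {
            private final Deque<Runnable> undoStack = new ArrayDeque<>();

            // Call this just before making a change, passing the action that reverses it.
            void registerUndo(Runnable inverse) {
                undoStack.push(inverse);
            }

            void undo() {
                if (!undoStack.isEmpty()) {
                    undoStack.pop().run();
                }
            }
        }

        // Hypothetical usage: capture the old value, change it, register the reversal.
        //   String old = person.getName();
        //   person.setName("New");
        //   undoManager.registerUndo(() -> person.setName(old));

    Because only the deltas are stored, this is lightweight compared to copying every managed object on each change.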

    Read the article

  • Good JDBC pattern

    - by Java Developer
    What is good practice for database operations in a Java application? Do you construct the DML syntax in the Java code and send the statements to the DB engine for execution, or do you just collect the parameters and then call a stored procedure with them from the Java code? Or neither, because that's just not how it's done? Can anyone give an example of a full set of database utility classes for performing database operations in a Java app? Also, what about the transaction manager? My assignment is to make database operations modular in Java. Thanks
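
    Whichever way the SQL is produced, the usual JDBC baseline underneath is the same: parameterized PreparedStatements, try-with-resources, and explicit commit/rollback. A minimal sketch of that baseline (the connection URL, table, and column names are invented):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        class AccountDao {
            // Insert one row inside an explicit transaction; resources close automatically.
            void insertAccount(String url, String name, double balance) throws SQLException {
                try (Connection conn = DriverManager.getConnection(url)) {
                    conn.setAutoCommit(false);
                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO accounts (name, balance) VALUES (?, ?)")) {
                        ps.setString(1, name);
                        ps.setDouble(2, balance);
                        ps.executeUpdate();
                        conn.commit();
                    } catch (SQLException e) {
                        conn.rollback(); // undo the partial work on failure
                        throw e;
                    }
                }
            }
        }

    Calling a stored procedure instead swaps the PreparedStatement for a CallableStatement (conn.prepareCall("{call my_proc(?, ?)}")) but leaves the rest of the shape unchanged.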

    Read the article

  • Question About Classic MVC

    - by kernix
    Hello, in classic MVC the model notifies the view about changes made to it. In C# this means I have to subclass the view I'm interested in and, in the subclass, register for the model's events. For example, if I were to implement MVC using C# and WinForms, I would have to subclass the TextBox class and then register for the model's events inside MyTextBox's constructor. Am I correct? How was this handled in Smalltalk? Does one also need to subclass every view in order to register for the model's events, or is there some way to attach the model's events to the views dynamically, on the fly? Thanks
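
    For illustration, in listener-based toolkits the view need not be subclassed at all: the model exposes a change notification, and the form registers an existing widget's update as a callback. A small generic Java sketch of that wiring (not the WinForms or Smalltalk API):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Consumer;

        // The model notifies registered listeners; no view subclass is required.
        class Model {
            private final List<Consumer<String>> listeners = new ArrayList<>();
            private String value = "";

            void addListener(Consumer<String> listener) {
                listeners.add(listener);
            }

            void setValue(String newValue) {
                value = newValue;
                listeners.forEach(l -> l.accept(value)); // push the change to every view
            }
        }

        // Hypothetical wiring in the form's constructor:
        //   model.addListener(text -> textBox.setText(text));

    In C#, subscribing a plain TextBox to a model event with a delegate achieves the same thing without subclassing.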

    Read the article

  • DB Strategy for inserting into a high-read table (SQL Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (those that become leads); the vast majority of the data in the Visitor and Visit tables is maintained only for reporting-type functionality. This is NOT an indexing problem: we have done all we can with indexing and with keeping our indexes clean, small, and unfragmented. (We do not currently have the budget or expertise for a data warehouse.)

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant rows. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data, for instance a Visitor_Archive and a Visitor_Small table, where the larger Visitor and Visit tables exist for inserts and history/reporting and the smaller Visitor_Small table exists for managing leads (sending a lead an email, getting a lead's phone number, listing my leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only rows with a certain column value (FooID = x)? Any other strategies?

    Read the article

  • [WPF] When Should I Retrieve Values from a TextBox?

    - by they_soft
    Suppose I have a Window with TextBoxes whose values I want to use. Right now I'm thinking of either: 1) updating each associated value once the TextBox loses focus, then starting the program when the user presses OK, or 2) retrieving all the values at once when the user presses OK, then starting the program. I'm not sure which one is better, though. The first alternative seems more modular, but there's more semantic coupling, since each new box is supposed to update its respective value. I realize this isn't all that important, but I'm trying to understand when to centralize and when not to. Other, better approaches are appreciated too.

    Read the article

  • What are some Java patterns well-suited for fast, algorithmic coding?

    - by Casey Chu
    I'm in college, and I've recently started competing in programming competitions with my friends. These competitions involve solving algorithmic problems quickly. It's a lot of fun, but there's one problem: I'm forced to use Java. (My teammates use Java.) Background: I'm a self-taught JavaScript programmer, and it hurts to write Java code. I find it very verbose and inflexible, and I feel slowed down when having to declare types and decide which of the eighty list data structures to use. I'm also frustrated by the lack of functional programming features and by how verbose using regular expressions, arrays, and dictionaries is. As an example, consider the problem of finding the length of the longest run of consecutive identical characters in a given string. So the string XX22BBBBccXX222 would give 4, for the run of four Bs. In Java, I'd have to loop through and manually count characters and manually keep track of the maximum. (That's at least as far as I'm aware; I'm not as familiar with Java as I am with JavaScript.) In JavaScript, I'd find it like this:

        var max = Math.max.apply(Math, str.match(/(.)\1*/g).map(function (s) {
            return s.length;
        }));

    Much quicker and simpler, in my book. The question: what are some Java features, techniques, or patterns well-suited for fast, algorithmic coding?
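
    For the worked example, Java's regex engine supports the same backreference trick; it is wordier but not a manual loop. A sketch using only the standard library:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        class LongestRun {
            // Same idea as the JavaScript version: (.)\1* matches one run of a
            // repeated character; track the longest match seen.
            static int longestRun(String str) {
                Matcher m = Pattern.compile("(.)\\1*").matcher(str);
                int max = 0;
                while (m.find()) {
                    max = Math.max(max, m.group().length());
                }
                return max;
            }

            public static void main(String[] args) {
                System.out.println(longestRun("XX22BBBBccXX222")); // prints 4
            }
        }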

    Read the article

  • Should I amortize scripting cost via bytecode analysis or multithreading?

    - by user18983
    I'm working on a game sort of thing where users can write arbitrary code for individual agents, and I'm trying to decide the best way to divide up computation time. The simplest option would be to give each agent a set amount of time and skip its turn if that elapses without an action being decided upon, but I would like people to be able to write their agents' decision functions without having to think too much about how long they take unless they really want to. The two approaches I'm considering are: giving each agent a set number of bytecode instructions (taking cost into account) each timestep, and making players deal with the consequences of the game state changing between blocks of computation (as with Battlecode); or giving each agent its own thread and giving each thread equal time on the processor. I'm about equally knowledgeable on both concurrency and bytecode, which is to say not very, so I'm wondering which approach would be best. I have a clearer idea of how I'd structure things if I used bytecode, but less certainty about how to actually implement the analysis. I'm pretty sure I can work up a concurrency-based system without much trouble, but I worry it will be messier, with more overhead, and will add unnecessary complexity to the project.
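
    For the bytecode route, the usual shape is a fuel counter: the interpreter charges each instruction against a per-timestep budget and suspends (not aborts) the agent when the budget runs out, resuming from the same program counter next timestep. A very rough Java sketch, assuming a hypothetical instruction encoding:

        // Fuel-budget interpreter sketch: N instruction credits per timestep.
        class AgentVm {
            private final int[] bytecode; // hypothetical instruction stream
            private int pc = 0;           // program counter survives across timesteps

            AgentVm(int[] bytecode) {
                this.bytecode = bytecode;
            }

            // Run until the budget is spent or the program ends; returns true when done.
            boolean step(int fuel) {
                while (fuel > 0 && pc < bytecode.length) {
                    int op = bytecode[pc++];
                    fuel -= costOf(op); // charge the instruction before running it
                    execute(op);
                }
                return pc >= bytecode.length;
            }

            private void execute(int op) { /* dispatch on the opcode */ }

            private int costOf(int op) { return 1; } // per-opcode cost table goes here
        }

    The thread-per-agent alternative pushes the fairness problem onto the OS scheduler instead, which is harder to make deterministic.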

    Read the article

  • Performance comparison of Dictionaries

    - by Hun1Ahpu
    I'm interested in the performance (big-O analysis) of the Lookup and Insert operations for these .NET dictionaries: Hashtable, SortedList, StringDictionary, ListDictionary, HybridDictionary, NameValueCollection. A link to a web page with the answer works for me too.

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background: I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website, and the simplest page allows users to choose a category and a city. They then get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.)

    Tool set: The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

    Poor model choice: Currently, a linear regression model is applied against annual averages of the daily data. The linear regression is calculated within a PostgreSQL function as follows:

        SELECT regr_slope( amount, year_taken ),
               regr_intercept( amount, year_taken ),
               corr( amount, year_taken )
        FROM temp_regression
        INTO STRICT slope, intercept, correlation;

    The results are returned to JasperReports using:

        SELECT year_taken, amount, year_taken * slope + intercept,
               slope, intercept, correlation, total_measurements
        INTO result;

    JasperReports calls into PostgreSQL using the following parameterized analysis function:

        SELECT year_taken, amount, measurements, regression_line, slope,
               intercept, correlation, total_measurements, execute_time
        FROM climate.analysis( $P{CityId}, $P{Elevation1}, $P{Elevation2},
                               $P{Radius}, $P{CategoryId}, $P{Year1}, $P{Year2} )
        ORDER BY year_taken

    This is not an optimal solution because it gives the false impression that the climate is changing at a slow but steady rate.

    Questions: Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? What CRAN packages provide such models? (Installable, ideally, using apt-get.) How can the R functions be called from within a PostgreSQL function? If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best-fit curve? Keep in mind that this is a web app for use by the general public; if the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
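
    For what it is worth, one common step up from a straight line while staying in ordinary least squares is a low-order polynomial, e.g. the quadratic model

        y = b0 + b1*x + b2*x^2 + e    (x = year_taken, y = amount)

    which base R fits with lm(amount ~ poly(year_taken, 2)); base R's loess() local regression is another standard option for smoothly varying series. The column names here are taken from the question; the model choice itself is only a suggestion, not a claim about what fits this data best.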

    Read the article

  • Critiquing PHP-code / PerlCritic for PHP?

    - by jeekl
    I'm looking for an equivalent of PerlCritic for PHP. PerlCritic is a static source code analyzer that critiques code and warns about everything from unused variables to unsafe ways of handling data to almost anything. Is there such a thing for PHP that could (preferably) be run outside of an IDE, so that source code analysis could be automated?

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the Dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the Dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of downtime.

    Workaround: Run ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.

    Pros: Does not require a complete rebuild of the data (ProcessFull) for either the Dimension or the Cube. User access can continue while the ProcessIndexes is underway.

    Cons: Can take a long time, especially on large cubes with many partitions, dimensions, and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements of aggregation and index building.

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
                     xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
                     xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
                     xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200"
                     xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group; this may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.

    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. The ProcessIndexes does not pick up data changes.

    Things that did not work: ProcessClearIndexes (why this doesn't work while ProcessIndexes does is unclear at this point), and ProcessFull on the partition in question. In my latest case, the latter would clear up the problem for that partition, but the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption will exist in all partitions in the affected measure group that have data in them.

    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, built separately from the same relational database. This leads me to believe that some data condition in the tables used for the Dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS and experienced but not qualified with SSAS, so my apologies for my ignorance with these. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently we have a temporary domain set up to handle this (meaning our end users have a second account and credentials to manage, which is non-ideal). I want to move us towards a more permanent solution, and I'm thinking of moving all authentication to AD LDS for our web apps, SSAS, and others. However, SSAS is where I'm concerned. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: my question over at StackOverflow had a suggestion that I post it here on ServerFault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • What action should I take based on what I read in logfiles?

    - by Helene Bilbo
    For some weeks now I have been managing my first web server, a Seaside application behind an Apache proxy on Linode, and I installed logwatch to send me daily logs. Where can I get information on when I have to act on what I read in these logwatch reports? For example, I read that all kinds of people try to log in to odd, nonexistent accounts, all kinds of webcrawlers probe for nonexistent CMS login pages, and some IP addresses get banned and unbanned by fail2ban... I assume that's normal? Is it? But how do I know when I actually have to do something? What should I look for in the logs?

    Read the article

  • What is the best/easiest way to use scripts to analyze network traffic?

    - by yungin
    I'm looking to analyze packets via scripts, and I'd like to use something high-level. I'm in a Mac/Linux environment. I'm currently looking at different Python+libpcap libraries; perhaps Lua+Wireshark too, or maybe tcpdump+bash (but I'm not sure that exposes a lot of information I can use). I also heard good things about Scapy. Not sure. I'm wondering if you have any recommendations? There are quite a few options out there. What have you found that works best? I'd definitely want something scriptable, not something that I need to compile (like C/C++, etc.).

    Read the article

  • Log application changes made to the system

    - by Maxim Veksler
    Hello, Windows 7, 64-bit. I have an application which I don't trust but still need to run. I would like to run the installer of this application, and later on the installed executable, under some kind of "strace" for Windows which will record what the application did to the system. Mainly: What files have been created or edited? What registry changes have been made? Which network hosts did the application try to communicate with? Ideally I would also be able to generate an "UNDO" action to undo all the changes. Please don't suggest full virtualization solutions such as VirtualBox, VMware, and co., because the application should run on the host system (a "sandbox" approach will, OTOH, be accepted, IMHO). Do you know of any such utility I can use? Thank you, Maxim.

    Read the article

  • How expensive is it to run a PC 24/7, and how can I figure that out?

    - by jasondavis
    I realize this question is difficult to answer, as it would differ based on the user's location, what their PC is doing, and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7, and I am just now looking at it from a money/cost-of-electricity point of view. 1) I live in Central Florida. Can anyone guesstimate the average monthly or daily cost of running your average PC? An Intel quad-core processor, one SSD for the OS and programs, and 4-5 1-2 TB hard drives in a RAID setup for data, with a 750-watt PSU. What would your guess be? 2) Also, is there an accurate way to figure this out (nothing super technical or confusing to a non-math person, please)? I have seen those Kill A Watt devices; do they figure this kind of stuff out for you? 3) Does a larger PSU make your PC consume more power? Thanks for any help; you can most likely tell I am somewhat lost about this!
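
    As a rough worked example (every number below is an assumption, not a measurement): suppose the machine averages 150 W around the clock. Then:

        150 W × 24 h         = 3.6 kWh per day
        3.6 kWh × 30 days    = 108 kWh per month
        108 kWh × $0.12/kWh  = about $13 per month

    A Kill A Watt sits between the PC and the wall outlet and reports the actual average draw and accumulated kWh, which you can plug into the same arithmetic instead of guessing the wattage.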

    Read the article
