Search Results

Search found 5751 results on 231 pages for 'analysis patterns'.

Page 47 of 231

  • SAS(Statistical Analysis System) Career As a computer science student

    - by Renju
    Hi. I completed an MSc in Computer Science this academic year, so I am a fresher. During my graduation and post-graduation I did many projects using PHP and MySQL. Now I have an opportunity to get into a SAS (Statistical Analysis System) career, and I've heard that SAS offers better career growth than PHP development. For the past 4 days I have been working with SAS and I find the work interesting. My questions are: since I am a computer science student, will I have any problem with career growth in SAS? I am ready to learn statistics as well; is there anything else I have to do? Will doing a certification in SAS make any difference? Is it a bad idea to get into SAS with only a CS background? Please guide me in my career.

  • Any recommended VC++ settings for better PDB analysis on release builds

    - by Brian R. Bondy
    Are there any VC++ settings I should know about to generate better PDB files that contain more information? I have a crash dump analysis system in place based on the project crashrpt. Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.
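
    Two things that may help, offered as common practice rather than anything verified against this project. For PDB quality on release builds, the usual combination is full debug info from the compiler plus explicitly re-enabled linker optimizations (which /DEBUG otherwise turns off), along these lines:

        cl /O2 /Zi main.cpp /link /DEBUG /OPT:REF /OPT:ICF

    For the drive-letter mismatch, since the PDBs bake in the build machine's D:\ paths, a low-tech workaround on the dev machine (assuming D: is unassigned there and the directory layout under the drive root matches) is a virtual drive that makes the recorded paths resolve:

        subst D: C:\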

  • What are the differences between MVP, Presentation Model, MVVM and MVC?

    - by Nicholas
    I have a pretty good idea of how each of these patterns works and of some of the minor differences between them, but are they really all that different from each other? It seems to me that the Presenter, Presentation Model, ViewModel and Controller are essentially the same concept. Why couldn't I classify all of these concepts as controllers? I feel it might simplify the entire idea a great deal. Can anyone give a clear description of their differences? I want to clarify that I do understand how the patterns work, and I have implemented most of them in one technology or another. What I am really looking for is someone's experience with one of these patterns, and why they would not consider their ViewModel a controller, for instance.
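
    To make the contrast concrete, here is a minimal sketch (all names invented, no particular framework assumed): the Presenter knows its view through an interface and pushes into it, while the ViewModel knows nothing about any view and only raises change notifications that data binding picks up.

        using System.ComponentModel;

        public interface ILoginView { string Error { set; } }

        public class LoginPresenter
        {
            private readonly ILoginView view;
            public LoginPresenter(ILoginView view) { this.view = view; }
            public void LoginFailed() { view.Error = "Bad credentials"; } // presenter drives the view
        }

        public class LoginViewModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private string error;
            public string Error // the view binds here; the view model never sees the view
            {
                get { return error; }
                set
                {
                    error = value;
                    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Error)));
                }
            }
        }

    A classic MVC controller is different again: it receives input first and chooses which view to render, rather than serving a view that already exists, which is one reason lumping them all together as "controllers" loses information.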

  • Django: Why Doesn't the Current URL Match any Patterns in urls.py

    - by austin_sherron
    I've found a few questions here related to my issue, but I haven't found anything that has helped me resolve it. I'm using Python 2.7.5 and Django 1.8.dev20140627143448. I have a view that interacts with my database to delete objects, and it takes two arguments in addition to a request:

        def delete_data_item(request, dataclass_id, dataitem_id):
            form = AddDataItemForm(request.POST)
            data_set = get_object_or_404(DataClass, pk=dataclass_id)
            context = {'data_set': data_set, 'form': form}
            data_item = get_object_or_404(DataItem, pk=dataitem_id)
            data_item.delete()
            data_set.save()
            return HttpResponseRedirect(reverse('detail', args=(dataclass_id,)))

    The URL in myapp.urls.py looks something like this:

        url(r'^(?P<dataclass_id>[0-9]+)/(?P<dataitem_id>[0-9]+)/delete_data_item/$',
            views.delete_data_item, name='delete_data_item')

    and the portion of my template relevant to the view is:

        <a href="{% url 'delete_data_item' data_set.id data_item.id %}">DELETE</a>

    Whenever I click on the DELETE link, Django tells me that the request URL

        http://127.0.0.1:8000/myapp/5/%7B%%20url%20'delete_data_item'%20data_set.id%20data_item.id%20%%7D

    doesn't match any of my URL patterns. What am I missing? The URL on which the DELETE links exist is myapp/(<dataclass_id>[0-9]+)/

  • Replace conditional with polymorphism refactoring or similar?

    - by Anders Svensson
    Hi, I have tried to ask a variant of this question before. I got some helpful answers, but still nothing that felt quite right to me. It seems to me this shouldn't really be that hard a nut to crack, but I'm not able to find an elegant, simple solution. (Here's my previous post, but please try to look at the problem stated here as procedural code first, so as not to be influenced by the earlier explanation, which seemed to lead to very complicated solutions: http://stackoverflow.com/questions/2772858/design-pattern-for-cost-calculator-app ) Basically, the problem is to create a calculator of hours needed for projects that can contain a number of services, in this case "writing" and "analysis". The hours are calculated differently for the different services: writing is calculated by multiplying a "per product" hour rate by the number of products, and the more products are included in the project, the lower the hour rate is, but the total number of hours accumulates progressively (i.e. for a medium-sized project you take both the small-range pricing and then add the medium-range pricing up to the number of actual products). Analysis is much simpler: it is just a bulk rate for each size range. How would you refactor this into an elegant and preferably simple object-oriented version? (Please note that I would never write it like this in a purely procedural manner; this is just to show the problem succinctly.) I have been thinking in terms of the factory, strategy and decorator patterns, but can't get any of them to work well. (I read Head First Design Patterns a while back, and both the decorator and factory patterns described there have some similarities to this problem, but I have trouble seeing them as good solutions as stated. The decorator example seems very complicated for just adding condiments, but maybe it could work better here, I don't know. And the factory pattern example with the pizza factory... well, it just seems to create such a ridiculous explosion of classes, at least in their example. I have found good use for factory patterns before, but I can't see how I could use one here without getting a really complicated set of classes.) The main goal would be to only have to change one place (loose coupling etc.) if I were to add a new parameter (say another size, like XSMALL, and/or another service, like "administration"). Here's the procedural code example:

        public class Conditional
        {
            private int _numberOfManuals;
            private string _serviceType;
            private const int SMALL = 2;
            private const int MEDIUM = 8;

            public int GetHours()
            {
                if (_numberOfManuals <= SMALL)
                {
                    if (_serviceType == "writing") return 30 * _numberOfManuals;
                    if (_serviceType == "analysis") return 10;
                }
                else if (_numberOfManuals <= MEDIUM)
                {
                    if (_serviceType == "writing")
                        return (SMALL * 30) + (20 * (_numberOfManuals - SMALL));
                    if (_serviceType == "analysis") return 20;
                }
                else // i.e. LARGE
                {
                    if (_serviceType == "writing")
                        return (SMALL * 30) + (20 * (MEDIUM - SMALL))
                             + (10 * (_numberOfManuals - MEDIUM));
                    if (_serviceType == "analysis") return 30;
                }
                return 0; // Just a default fallback for this contrived example
            }
        }

    All replies are appreciated! I hope someone has a really elegant solution to this problem that I actually thought from the beginning would be really simple... Regards, Anders
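
    For comparison, one table-driven strategy sketch (class names and tier tables are mine, derived from the numbers above, so treat it as an assumption rather than the answer): each service owns its pricing rule, so adding XSMALL means editing one table, and adding "administration" means one new subclass.

        using System;

        public abstract class Service
        {
            public abstract int GetHours(int numberOfManuals);
        }

        public class WritingService : Service
        {
            // (upper bound of tier, hours per manual inside that tier);
            // int.MaxValue marks the open-ended top tier.
            private static readonly (int Limit, int Rate)[] Tiers =
                { (2, 30), (8, 20), (int.MaxValue, 10) };

            public override int GetHours(int numberOfManuals)
            {
                int hours = 0, previousLimit = 0;
                foreach (var (limit, rate) in Tiers)
                {
                    if (numberOfManuals <= previousLimit) break;
                    hours += (Math.Min(numberOfManuals, limit) - previousLimit) * rate;
                    previousLimit = limit;
                }
                return hours; // accumulates progressively across the tiers
            }
        }

        public class AnalysisService : Service
        {
            // bulk hours per size range, no accumulation
            private static readonly (int Limit, int Hours)[] Tiers =
                { (2, 10), (8, 20), (int.MaxValue, 30) };

            public override int GetHours(int numberOfManuals)
            {
                foreach (var (limit, hours) in Tiers)
                    if (numberOfManuals <= limit) return hours;
                return 0;
            }
        }

    A project would then just sum GetHours over its services, and none of the calling code changes when a tier or a service is added.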

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: Last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. This is sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha no longer works at Microsoft, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first piece of good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after having checked that we can do so, we created a page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will become necessary to update that page in the future; for now, it is just a way to simplify finding and downloading this precious tool, and to ensure it will not disappear if the SkyDrive folder currently hosting the download is discontinued, as happened to the MDX Studio online version. Now, a question for the BI community: I know that there was some tutorial content available regarding MDX Studio. I'd like to gather it all in a single place. If you have such content, please contact me directly by writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully around the conference. Here are our appointments:

    Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607
    Querying and Optimizing DAX (BIA-321-S): Do you want to learn how to write DAX queries and how to optimize them? Don't miss this session!

    Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore
    Book signing event at the bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb. Visit the bookstore and get your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book signed.

    Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611
    Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312): What's the latency you can tolerate for your data? Discover the limit in Tabular without using DirectQuery, and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think.

    Friday, November 09, 2012, 9:45 AM - 11:00 AM
    Parent-Child Hierarchies in Tabular (BIA-301): Multidimensional has more advanced support for hierarchies than Tabular, but in reality you can do almost the same things by using data modeling, DAX functions and BIDS Helper!

    Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612
    Inside DAX Query Plans (BIA-403): Discover the query plan for your DAX query, and learn how to read it and how to optimize a DAX query using this information.

    If you meet us at the conference, stop us and say hello: it's always nice to meet our readers!

  • DIVIDE vs division operator in #dax

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn't have other features/fixes introduced by subsequent Cumulative Updates…). The idea is that instead of writing:

        IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

    you can write:

        DIVIDE ( Sales[Amount], Sales[Quantity] )

    There is a third optional argument in DIVIDE that defines the result when the denominator (the second argument) is zero; by default it is BLANK, so I omitted it in my example. Using DIVIDE is very important, especially when you use a measure in MDX (for example in an Excel PivotTable), because it raises the chance that the non-empty evaluation of the result is performed in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it's better to use the standard division operator and remove the IF statement. I suggest you read Alberto's article, because you will find that an expression applying a filter using FILTER is faster than using CALCULATE, which goes against any rule of thumb you might have read until now! Again, this is not always true and depends on many conditions. Trying to simplify, we might say that for a simple calculation the query plan generated by FILTER could be more efficient; but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your main concern!

  • How to show or direct a business analyst to do data modelling?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of the time there will be a little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel that if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would appreciate the need for natural or business keys to be part of the spreadsheet so the data can be imported smoothly. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it doesn't. They are just a bunch of names, with no indication of what type of data they might hold; it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance and that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other?

  • Algorithm to measure how "diffused" 5,000 pennies are in an economy?

    - by makerofthings7
    Please allow me to use this example/metaphor to describe an algorithm I need.

    Objects: There are 5 thousand pennies. There are 50 cups. There is a tracking history (a passport "stamp", etc.) associated with each penny as it moves between cups.

    Definition: I'll define a "highly diffused" penny as one that passes through many cups. A "poorly diffused" penny is one that just passes back and forth between 2 cups.

    Question: How can I objectively measure the diffusion of a penny in terms of: the number of moves the penny has gone through; the number of cups the penny has been in; and a unit of time (day, week, month)?

    Why am I doing this? I want to detect whether a cup is hoarding pennies.

    Resistance from bad actors: Since hoarding is bad, a "bad cup" may simply solicit a partner and move pennies back and forth with it. This reduces the amount of time a coin isn't in transit and would skew hoarding detection. A solution might be to detect whether a cup (or a set of cups) are common "partners" with each other, though I'm not sure how to think through this problem.

    Broad applicability: Any assistance would be helpful, since I would think this algorithm is common to economics, the study of migration patterns (of animals, or of citizens of a country), and other naturally occurring phenomena... and probably exists as a term or concept I'm unfamiliar with.
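
    One way to collapse the three measurements (moves, cups, time) into a single comparable score, purely a sketch with an invented weighting rather than an established metric, is distinct cups visited, scaled by moves and normalized by elapsed time windows:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Diffusion
        {
            // path: the (timestamp, cup) history of one penny.
            // A penny shuttling between two partner cups keeps the Distinct()
            // count at 2, so it scores low no matter how many moves it makes.
            public static double Score(List<(DateTime When, int Cup)> path, TimeSpan window)
            {
                if (path.Count < 2) return 0.0;
                var ordered = path.OrderBy(p => p.When).ToList();
                int distinctCups = ordered.Select(p => p.Cup).Distinct().Count();
                double windows = Math.Max(
                    (ordered[ordered.Count - 1].When - ordered[0].When).TotalDays / window.TotalDays,
                    1.0);
                return distinctCups * (ordered.Count - 1) / windows;
            }
        }

    Low scores across the pennies currently sitting in one cup would then flag that cup as a hoarding (or partner-shuffling) suspect.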

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's Algorithm Design Manual), i.e. the scheme where we have an array structure and, each time we run out of space, we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, by describing that half the elements move once, a quarter of the elements twice, and so on, the total number of movements M is given by:

        M = sum_{i=1}^{lg n} i * (n / 2^i)  <=  n * sum_{i=1}^{lg n} (i / 2^i)  <=  2n

    This seems to me to add more copies than actually happen. E.g. if we have the following:

        array of 1 element
        +--+
        |a |
        +--+

        double the array (2 elements)
        +--++--+
        |a ||b |
        +--++--+

        double the array (4 elements)
        +--++--++--++--+
        |a ||b ||c ||c |
        +--++--++--++--+

        double the array (8 elements)
        +--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x |
        +--++--++--++--++--++--++--++--+

        double the array (16 elements)
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+

    We have the x elements copied 4 times, the c elements copied 4 times, the b element copied 4 times and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2) + 2*(16/4) + 3*(16/8) + 4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, or is the aim of the formula to provide a rough upper-bound approximation? Or am I misunderstanding something here?
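
    A quick way to see where the discrepancy comes from is to count the actual copies (the harness below is mine, not from the book): on every doubling, exactly the current capacity's worth of elements moves, so growing to 16 costs 1+2+4+8 = 15 copies, which is always less than 2n. The book's sum bounds each element's worst case from above rather than counting exact moves, so it is an upper bound, not an exact count.

        using System;

        static class DoublingCost
        {
            static void Main()
            {
                int capacity = 1, copies = 0;
                for (int n = 1; n <= 16; n++)    // insert 16 elements one by one
                {
                    if (n > capacity)
                    {
                        copies += capacity;      // every element present moves once
                        capacity *= 2;
                    }
                }
                Console.WriteLine(copies);       // prints 15 for n = 16
            }
        }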

  • Forbidden Patterns Check-In Policy in TFS 2010

    - by Jaxidian
    I've been trying to use the Forbidden Patterns part of the TFS 2010 Power Tools, and I'm just not understanding something: I simply cannot get anything to change as I try to use this! I'm using the version that was released recently (I believe April 23, 2010), so it's not an old version. First off, yes, I know it's regex-based, so let's clear that doubt...

    I have tried to block the following scenarios:

    1) I have modified all of my T4 EF templates to generate files named EntityName.gen.cs. I then attempted to prevent TFS from wanting to check those files in. I used the regular expression \.gen\.cs\z and it didn't change a single thing! I even tried it without the \z, and nada!

    2) I don't want app.config and web.config files to be checked in by default, because we have these things stored in app.config.base and web.config.base files that our build scripts use to generate our per-environment app.config and web.config files. As such, I tried the following regexes, and again nothing worked: web\.config\z, app\.config\z, web\.release\.config\z and web\.debug\.config\z.

    What is it that I am screwing up with this?
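
    The policy patterns are standard .NET regular expressions (as far as I know), so a first sanity check (the paths below are invented) is to evaluate the patterns outside TFS and confirm they match what the policy is likely to see, in particular whether they match a full server path:

        using System;
        using System.Text.RegularExpressions;

        static class PatternCheck
        {
            static void Main()
            {
                string[] paths = { "$/MyProject/Entities/Customer.gen.cs",
                                   "$/MyProject/Web/web.config" };
                string[] patterns = { @"\.gen\.cs$", @"(web|app)\.config$" };
                foreach (string path in paths)
                    foreach (string pattern in patterns)
                        Console.WriteLine("{0,-22} vs {1}: {2}", pattern, path,
                            Regex.IsMatch(path, pattern, RegexOptions.IgnoreCase));
            }
        }

    If the patterns pass here but the policy still lets the files through, the problem lies in which string the policy is handed (local path vs. server path) rather than in the regexes themselves.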

  • C# performance analysis: how to count CPU cycles?

    - by Lirik
    Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting:

        class PerformanceTest
        {
            static double last = 0.0;
            static List<object> numericGenericData = new List<object>();
            static List<double> numericTypedData = new List<double>();

            static void Main(string[] args)
            {
                double totalWithCasting = 0.0;
                double totalWithoutCasting = 0.0;
                for (double d = 0.0; d < 1000000.0; ++d)
                {
                    numericGenericData.Add(d);
                    numericTypedData.Add(d);
                }

                Stopwatch stopwatch = new Stopwatch();
                for (int i = 0; i < 10; ++i)
                {
                    // Reset before each run; otherwise Start() keeps accumulating
                    // ticks across iterations and skews both totals.
                    stopwatch.Reset();
                    stopwatch.Start();
                    testWithTypecasting();
                    stopwatch.Stop();
                    totalWithCasting += stopwatch.ElapsedTicks;

                    stopwatch.Reset();
                    stopwatch.Start();
                    testWithoutTypeCasting();
                    stopwatch.Stop();
                    totalWithoutCasting += stopwatch.ElapsedTicks;
                }

                Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting / 10));
                Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting / 10));
                Console.ReadKey();
            }

            static void testWithTypecasting()
            {
                foreach (object o in numericGenericData)
                {
                    last = ((double)o * (double)o) / 200;
                }
            }

            static void testWithoutTypeCasting()
            {
                foreach (double d in numericTypedData)
                {
                    last = (d * d) / 200;
                }
            }
        }

    The output is:

        Avg with typecasting = 468872.3
        Avg without typecasting = 501157.9

    I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?
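
    On the nanosecond goal: ElapsedTicks are Stopwatch ticks, not nanoseconds or CPU cycles. Stopwatch.Frequency gives ticks per second, so the conversion is a one-liner (the helper below is mine, not part of the question's code):

        using System.Diagnostics;

        static class Precision
        {
            // ns = ticks * 1e9 / (ticks per second)
            public static double TicksToNanoseconds(long ticks)
            {
                return ticks * 1e9 / Stopwatch.Frequency;
            }
        }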

  • Session State with MVP and Application Controller patterns

    - by Graham Bunce
    Hi, I've created an MVP (passive view) framework for development and decided to go with an Application Controller pattern to manage the navigation between views. This is targeted at WinForms, ASP.NET and WPF interfaces. Although I'm not 100% convinced that these view technologies are really swappable, that's my aim at the moment, so my MVP framework is quite lightweight. What I'm struggling to fit in is the concept of a "business conversation" that needs state information to be either (a) maintained for the lifetime of the view or, more likely, (b) maintained across several views for the lifetime of a use case (business conversation). I want state management to be part of the framework, as I don't want developers to worry about it. All they need to do is "start" a conversation and "register" objects, and the framework does the rest until they "end" the conversation. Has anybody got any thoughts (patterns) on how to fit this into MVP? I was thinking it may be part of the Application Controller's responsibility (delegating to a Conversation Manager object), as it knows about current state in order to send the user to the next view... but then I thought it may be up to the Presenter to start and end the conversation, so it comes down to the presenters to manage conversations and the objects registered for each conversation. Unfortunately, that means presenters can't be reused across different conversations... so that idea doesn't seem right. As you can see, I don't think there is an easy answer (and I've looked for a while). So does anybody else have any thoughts?
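
    For what it's worth, a bare-bones shape of the "Conversation Manager" idea (all names invented), with the Application Controller owning the manager and presenters only ever seeing the current conversation:

        using System.Collections.Generic;

        public interface IConversation
        {
            void Register(string key, object state);   // stash use-case state
            object Resolve(string key);                // read it back in a later view
            void End();                                // discard everything
        }

        public class ConversationManager
        {
            private sealed class Conversation : IConversation
            {
                private readonly Dictionary<string, object> state = new Dictionary<string, object>();
                private readonly ConversationManager owner;
                public Conversation(ConversationManager owner) { this.owner = owner; }
                public void Register(string key, object value) { state[key] = value; }
                public object Resolve(string key) { return state[key]; }
                public void End() { owner.current = null; }
            }

            private Conversation current;

            // Started by the Application Controller when a use case begins;
            // presenters receive IConversation and stay reusable across use cases.
            public IConversation Start() { return current = new Conversation(this); }
            public IConversation Current { get { return current; } }
        }

    Keeping start/end in the Application Controller sidesteps the reuse problem described above: presenters consume whatever conversation is current instead of owning its lifetime.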

  • Sentiment analysis for twitter in python

    - by Ran
    I'm looking for an open source implementation, preferably in Python, of textual sentiment analysis (http://en.wikipedia.org/wiki/Sentiment_analysis). I'm writing an application that searches twitter for some search term, say "youtube", and counts "happy" tweets vs. "sad" tweets. I'm using Google's App Engine, so it's in Python. I'd like to be able to classify the returned search results from twitter, and I'd like to do that in Python. I haven't been able to find such a sentiment analyzer so far, specifically not in Python. Are you familiar with an open source implementation I can use? Preferably one already in Python, but if not, hopefully I can translate it to Python. Note, the texts I'm analyzing are VERY short (they are tweets), so ideally this classifier is optimized for such short texts. BTW, twitter does support the ":)" and ":(" operators in search, which aim to do just this, but unfortunately the classification they provide isn't that great, so I figured I might give this a try myself. Thanks! BTW, an early demo is here and the code I have so far is here, and I'd love to open-source it with any interested developer.

  • Usage of static analysis tools with ClearCase/ClearQuest

    - by boyd4715
    We are in the process of defining our software development process and wanted to get some feedback from the group about this topic. Our team is spread out across the US, Canada and India, and I would like to put in place some simple standard rules that all teams will apply to their code. We make use of ClearCase/ClearQuest and RAD. I have been looking at PMD, CPD, Checkstyle and FindBugs as a start. My thought is to just put these into Ant and have the developers run them manually. I realize that doing it this way requires some trust that each developer will actually run them. The other thought is to add some builders into the IDE which would run a subset of the rules (to keep the build process light) and then add another, heavier set when they check in the code. Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools, along with the unit tests, whenever ClearCase/ClearQuest is idle. I'm wondering if others have done this, whether it was successful, and what lessons were learned.

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form (t1, k1) (t2, k2) ... (tn, kn), where ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+), and the size of k (approx 500 bytes) makes it impossible to store everything in memory. I would like to find periodic occurrences of keys in this sequence. For example, if I have the sequence

        (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)

    the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store, for each key, the average of the differences between consecutive timestamps and the standard deviation of the same, I might be able to make a single pass and report only the keys with an acceptable standard deviation (ideally, 0). However, this requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
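
    A single-pass version of the hash idea (a sketch, assuming the tuples arrive in timestamp order) keeps only four numbers per key by maintaining the gap mean and variance incrementally with Welford's method, instead of storing all the gaps:

        sealed class GapStats
        {
            public long LastSeen;      // last timestamp observed for this key
            public long Count;         // number of gaps seen so far
            public double Mean, M2;    // running mean and sum of squared deviations

            public void AddGap(long gap)
            {
                Count++;
                double delta = gap - Mean;
                Mean += delta / Count;
                M2 += delta * (gap - Mean);
            }

            // Truly periodic keys have near-zero variance in their gaps.
            public double Variance
            {
                get { return Count > 1 ? M2 / (Count - 1) : double.PositiveInfinity; }
            }
        }

    Feeding it is one dictionary lookup per tuple: AddGap(t - LastSeen), then update LastSeen. The one-bucket-per-key memory cost remains, though; if most keys are noise, a first pass that counts keys and discards the rare ones can shrink the table before the statistics pass.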

  • exclude dependencies when running sonar analysis

    - by achraf
    I have a test project requiring some heavy jars, which I put in ${M2_HOME}\test\src\main\resources\ and add in the pom.xml using:

        <dependency>
            <groupId>server</groupId>
            <artifactId>server</artifactId>
            <version>1.0</version>
            <scope>system</scope>
            <systemPath>${M2_HOME}\test\src\main\resources\server.jar</systemPath>
        </dependency>
        <dependency>
            <groupId>client</groupId>
            <artifactId>client</artifactId>
            <version>6.0</version>
            <scope>system</scope>
            <systemPath>${M2_HOME}\test\src\main\resources\client.jar</systemPath>
        </dependency>

    I want to know whether it is possible to exclude them during Sonar analysis, or more generally to just analyze the java sources folder.
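
    If the aim is simply to keep everything under src/main/resources out of the analysis, one option is the standard sonar.exclusions analysis parameter, settable as a pom property (the pattern below is an assumption about this layout, so adjust it to the project):

        <properties>
            <!-- file path patterns that Sonar should skip during analysis -->
            <sonar.exclusions>src/main/resources/**</sonar.exclusions>
        </properties>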

  • Syntactical analysis with Flex/Bison part 2

    - by Imran
    Hello, I need help with Lex/Yacc programming. I wrote a compiler that does syntactical analysis for inputs of many statements. Now I have a specific problem. For a given input, the compiler produces the right output, indicating which statement is used: a constant, an operator, or a jmp instruction to some label. Now, when an if statement comes, the first command (before the else) must be output for the case where the condition of the if holds, and then it must jump to the end, because the command after the else isn't needed; after this jmp, the second command must be output. I'll show it in an example, maybe then you'll understand what I mean:

        Input        adr.    Output
        if(x==0)     10      if(x==0)
        Wait 5       20      WAIT 5
        else         30      JMP 50
        Wait 1       40      WAIT 1
        end          50      END

    I have an idea: maybe I can do it with a special if statement like

        IF exp jmp_stmt_end stmt_seq END

    When the if statement is given in the input, the compiler has to recognize the end of the statement and, like the jmp_stmt in my compiler (you have to download the files from http://bitbucket.org/matrix/changed-tiny), jump only to the end. I hope you understand my problem. Thanks.

  • Design patterns for Agent / Actor based concurrent design.

    - by nso1
    Recently I have been getting into alternative languages that support an actor/agent/shared-nothing architecture, i.e. Scala, Clojure, etc. (Clojure also supports shared state). So far, most of the documentation I have read focuses on the intro level. What I am looking for is more advanced documentation, along the lines of the Gang of Four but shared-nothing based. Why? It helps to grok the change in design thinking. Simple examples are easy, but in a real-world Java application (single threaded) you can have object graphs with thousands of members and complex relationships. With agent-based concurrent development, however, a whole new set of ideas must be comprehended when designing large systems: for example, agent granularity (how much state should one agent manage, and what are the implications for performance), whether there are good patterns for mapping shared-state object graphs to agent-based systems, and tips on mapping domain models to a design. I'm after discussions not of the technology itself but of how to BEST use the technology in a design (real-world "complex" examples would be great).

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, my company develops a CDN / web-hosting solution. We have a middleware that serves as a business logic layer and exposes a web service for the front-end. I would like to find a clean solution to feature management; there are uncertainties and ugly workarounds in the software, about which the devs would say "when it happens or is broken, we will fix it". For example, here are some features a web publisher can have: a sites limit, a bandwidth limit, and an SSL feature plus an SSL configuration per site. If we downgrade a web publisher from 10 sites down to 5, we can choose not to suspend the remaining 5 sites, or we should prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded the limit, we suspend his account. For the SSL feature, every SSL configuration is tied to a site; what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled? As you can see, there are many different situations and different ways of handling them. I could build a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade; or a system that ignores the impacts and just upgrades/downgrades (bad); or a system designed so that the client code has to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is now DEFUNCT). There can be many other ways that I am still puzzling over. I am wondering how you would tackle this issue, and whether there are any recommended patterns, books, or software that you think I can refer to. I appreciate your help.
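
    One way to stop the "we will fix it when it breaks" drift is to make each feature describe its own downgrade impact behind a common interface, so the upgrade/downgrade flow can collect impacts and prompt before applying anything. A rough sketch (all types invented):

        using System.Collections.Generic;

        public enum PlanChange { Upgrade, Downgrade }

        public interface IFeature
        {
            string Name { get; }

            // Human-readable consequences, e.g. "sites 6-10 will be suspended"
            // or "3 SSL configurations will be deactivated".
            IEnumerable<string> ImpactsOf(PlanChange change, int oldLimit, int newLimit);

            // Applies the change once the operator has confirmed the impacts.
            void Apply(PlanChange change, int newLimit);
        }

    The controller then becomes a loop over the publisher's features: gather ImpactsOf, show them, and only call Apply once the operator confirms. Adding a new feature means one new IFeature implementation rather than another special case scattered through the client code.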

  • visual analysis of web pages in ruby

    - by Clint Miller
    I'm looking to write some code that does visual analysis of web pages, preferably using Ruby. My code will need to be able to determine the top, left, width, height, background color, color, and font size for all the elements in the DOM. Of course, these values can only be calculated once all CSS is applied. So, I don't think that Nokogiri is up for the job. Ultimately, I'm trying to use this data in a VIPS-like (Vision-Based Page Segmentation) algorithm in an attempt to find the main content in downloaded news articles. I've considered using Watir to drive Chrome or Firefox and then extract the data. The problem is that browsers can't be run headless through Watir (I think). Ultimately, this code will be running on an array of Linux servers in a data center. So, the code won't have easy access to an X Server for displaying the browser. I suppose one solution is to use Watir and run a headless X Server on the Linux servers. That's a bit of a pain, but it looks like my best option right now. Does anyone have any better ideas?

  • Python, web log data mining for frequent patterns

    - by descent
    Hello! I need to develop a tool for web log data mining. Given many sequences of URLs requested in particular user sessions (retrieved from web-application logs), I need to figure out the patterns of usage and the groups (clusters) of users of the website. I am new to data mining, and I am now examining Google a lot. I have found some useful info; e.g., querying for "Frequent Pattern Mining in Web Log Data" seems to point to almost exactly similar studies. So my questions are: Are there any Python-based tools that do what I need, or at least something similar? Can the Orange toolkit be of any help? Can reading the book Programming Collective Intelligence be of any help? What should I Google for, what should I read, and which relatively simple algorithms would work best? I am very limited in time (around a week), so any help would be extremely precious. What I need is to be pointed in the right direction, with advice on how to accomplish the task in the shortest time. Thanks in advance!
