Search Results

Search found 5751 results on 231 pages for 'analysis patterns'.


  • MikTeX 2.8 doesn't add hyphenation support for pdfLaTeX

    - by SztupY
    I'm using MiKTeX 2.8 and installed the Hungarian language support and hyphenation files. Using the standard latex command they work fine, but when I try to use pdflatex they don't get loaded, and I get this message:

        (C:\stuff\miktex\tex\generic\babel\magyar.ldf
        (C:\stuff\miktex\tex\generic\babel\babel.def)
        Package babel Warning: No hyphenation patterns were loaded for
        (babel)                the language `Magyar'
        (babel)                I will use the patterns loaded for \language=0 instead.

    Using latex it works fine:

        (C:\stuff\miktex\tex\latex\00miktex\bblopts.cfg)
        (C:\stuff\miktex\tex\generic\babel\magyar.ldf
        (C:\stuff\miktex\tex\generic\babel\babel.def)))

    I tried updating the FNDB and the formats, but to no avail. Thanks.
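
    One thing worth checking: MiKTeX builds the hyphenation patterns into each format file separately, so a language can be present in the latex format yet missing from the pdflatex one. Forcing a rebuild of the pdflatex format (for example with initexmf --dump=pdflatex, or by re-saving the language selection in MiKTeX Options) and then recompiling a minimal test file should show whether the patterns now load. A hedged sketch of such a test file:

        \documentclass{article}
        \usepackage[magyar]{babel}
        \begin{document}
        % \showhyphens writes the hyphenation points to the log file
        \showhyphens{magyar szavakat elválasztani}
        \end{document}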


  • Using clang to analyze C++ code

    - by aneccodeal
    We want to do some fairly simple analysis of users' C++ code and then use that information to instrument their code (basically regenerate their code with a bit of instrumentation added), so that the user can run a dynamic analysis of it and get stats on things like the ranges of values of certain numeric types. clang should be able to handle enough C++ by now to cope with the kind of code our users would throw at it, and since clang's C++ coverage is continuously improving, by the time we're done it will be even better. So how does one go about using clang like this, as a standalone parser? We're thinking we could just generate an AST and then walk it, looking for objects of the classes we're interested in tracking. I would be interested in hearing from others who are using clang without LLVM.
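
    If the stable C API is enough for the analysis side, libclang exposes the AST as a cursor tree you can walk without touching LLVM internals. A minimal sketch (assuming a source file input.cpp; the instrumentation/rewriting step would come afterwards, e.g. via Clang's C++ Rewriter API):

        #include <clang-c/Index.h>
        #include <stdio.h>

        /* Print every class declaration found in the translation unit. */
        static enum CXChildVisitResult visit(CXCursor c, CXCursor parent,
                                             CXClientData data) {
            if (clang_getCursorKind(c) == CXCursor_ClassDecl) {
                CXString name = clang_getCursorSpelling(c);
                printf("class %s\n", clang_getCString(name));
                clang_disposeString(name);
            }
            return CXChildVisit_Recurse;  /* keep walking into children */
        }

        int main(void) {
            const char *args[] = { "-std=c++98" };
            CXIndex idx = clang_createIndex(0, 0);
            CXTranslationUnit tu = clang_parseTranslationUnit(
                idx, "input.cpp", args, 1, NULL, 0, CXTranslationUnit_None);
            if (tu) {
                clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, NULL);
                clang_disposeTranslationUnit(tu);
            }
            clang_disposeIndex(idx);
            return 0;
        }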


  • Is there a log file analyzer for log4j files?

    - by Juha Syrjälä
    I am looking for some kind of analyzer tool for log files generated by log4j: something more advanced than grep. What are you using for log file analysis? I am looking for the following kinds of features:
    - The tool should tell me how many times a given log statement or stack trace has occurred, preferably with support for some kind of pattern (e.g. the number of log statements matching 'User [a-z]* logged in').
    - Breakdowns by log level (how many INFO and DEBUG lines) and by the class that issued the log message would be nice.
    - Breakdown by date (how many log statements in a given time period).
    - Which log lines occur commonly together?
    - Support for several files, since I am using log rolling.
    - Hot-spot analysis: find whether there is a time period with an unusually high number of log statements.
    - Either command line or GUI is fine.
    - Open source is preferred, but I am also interested in commercial offerings.
    My log4j configuration uses org.apache.log4j.PatternLayout with the pattern %d %p %c - %m%n, but that could be adapted for an analyzer tool.
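
    Absent a dedicated tool, a small script covers several of the counting features; a minimal sketch for the %d %p %c - %m%n layout (the regex is an assumption derived from that pattern, and multi-line stack traces are simply skipped):

        import re
        from collections import Counter

        # %d %p %c - %m  ->  "2010-04-21 12:00:03,417 INFO com.example.Foo - msg"
        LINE = re.compile(r'^(\S+ \S+) (\w+) (\S+) - (.*)$')

        levels, classes, minutes = Counter(), Counter(), Counter()
        with open('app.log') as f:
            for raw in f:
                m = LINE.match(raw)
                if not m:
                    continue             # continuation line of a stack trace etc.
                date, level, cls, msg = m.groups()
                levels[level] += 1
                classes[cls] += 1
                minutes[date[:16]] += 1  # bucket by minute for hot-spot analysis

        print(levels.most_common())
        print(classes.most_common(10))
        print(minutes.most_common(5))    # the busiest minutes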


  • Problem building from a .pyc with PyInstaller

    - by Apache
    Hi experts, I build a .py as follows:

        python /root/pyinstaller-1.4/Makespec.py test.py
        python /root/pyinstaller-1.4/Build.py test.spec

    This works fine. Then I tried to build from my .pyc:

        python /root/pyinstaller-1.4/Makespec.py test.pyc
        python /root/pyinstaller-1.4/Build.py test.spec

    but it generates this error:

        checking Analysis
        building because inputs changed
        running Analysis outAnalysis0.toc
        Analyzing: /root/pyinstaller-1.4/support/_mountzlib.py
        Analyzing: /root/pyinstaller-1.4/support/useUnicode.py
        Analyzing: test.pyc
        Traceback (most recent call last):
          File "/root/pyinstaller-1.4/Build.py", line 1160, in ?
            main(args[0], configfilename=opts.configfile)
          File "/root/pyinstaller-1.4/Build.py", line 1148, in main
            build(specfile)
          File "/root/pyinstaller-1.4/Build.py", line 1111, in build
            execfile(spec)
          File "test.spec", line 3, in ?
            pathex=['/root/test'])
          File "/root/pyinstaller-1.4/Build.py", line 245, in __init__
            self.postinit()
          File "/root/pyinstaller-1.4/Build.py", line 196, in postinit
            self.assemble()
          File "/root/pyinstaller-1.4/Build.py", line 314, in assemble
            analyzer.analyze_script(script)
          File "/root/pyinstaller-1.4/mf.py", line 559, in analyze_script
            co = compile(string.replace(stuff, "\r\n", "\n"), fnm, 'exec')
        TypeError: compile() expected string without null bytes

    Why does this error occur? Can't we build from a .pyc, or is there another way to do it?
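
    As far as I can tell, the failure is inherent: PyInstaller's analyzer treats the input script as source text (mf.py's analyze_script() calls compile() on the file contents), and a .pyc is marshalled bytecode, binary data full of NUL bytes, which compile() rejects. A hedged Python 2 sketch of the difference:

        # compile() accepts source text...
        src = open('test.py').read()
        compile(src, 'test.py', 'exec')        # works

        # ...but not the marshalled bytecode in a .pyc
        raw = open('test.pyc', 'rb').read()
        compile(raw, 'test.pyc', 'exec')       # TypeError: compile() expected
                                               # string without null bytes

    So the entry script passed to Makespec.py apparently has to be .py source; the analysis parses it as text to discover imports.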


  • <100% Test coverage - best practices in selecting test areas

    - by Paul Nathan
    Suppose you're working on a project whose time/money budget does not allow 100% coverage of all code/paths. It then follows that some critical subset of your code needs to be tested. Clearly a "gut check" approach can be used, where intuition and manual analysis produce test coverage that will be "OK". However, I'm presuming that there are best practices/approaches/processes that identify the critical elements up to some threshold and let you focus your testing effort on those blocks. For example, one popular process for identifying failures in manufacturing is Failure Mode and Effects Analysis. I'm looking for a process (or processes) to identify critical testing blocks in software.
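
    To make the FMEA analogy concrete, one common adaptation is risk-based test selection: score each module for severity of a failure, likelihood of failure (complexity, churn, defect history), and how hard a failure would be to detect, then spend the test budget from the top of the ranking down. A hedged sketch with illustrative scores:

        # Hypothetical FMEA-style ranking: RPN = severity * likelihood * detectability,
        # each scored 1-10 from domain knowledge, code metrics, and defect history.
        modules = {
            'payments':  (9, 6, 7),
            'reporting': (4, 5, 3),
            'auth':      (10, 4, 8),
        }

        ranked = sorted(modules.items(),
                        key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2],
                        reverse=True)
        for name, (sev, lik, det) in ranked:
            print('%-10s RPN=%d' % (name, sev * lik * det))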


  • How to implement a Digg-like algorithm?

    - by Niklas
    Hi, how does one implement a website with a recommendation system similar to Stack Overflow/Digg/Reddit? That is, users submit content and the website needs to calculate some sort of "hotness" according to how popular an item is. The flow is as follows:
    - Users submit content
    - Other users view and vote on the content (assume 90% of users only view content and 10% actively vote up or down)
    - New content is continuously submitted
    How do I implement an algorithm that calculates the "hotness" of a submitted item, preferably in real time? Are there any best practices or design patterns? I would assume that the algorithm takes the following into consideration:
    - When an item was submitted
    - When each vote was cast
    - When the item was viewed
    E.g. an item that gets a constant trickle of votes would stay somewhat "hot" constantly, while an item that receives a burst of votes right after submission will jump to the top of the "hotness" list but then fall down as the votes stop coming in. (I am using MySQL+PHP, but I am interested in general design patterns.)
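
    One concrete, published starting point is the ranking function from Reddit's open-sourced code: the score is the log of the net votes plus a time term, so a fixed vote lead is worth less and less as newer submissions accumulate a larger time bonus. A sketch in Python (the epoch and the 45000-second divisor are Reddit's constants; tune both for your site):

        from datetime import datetime
        from math import log10

        EPOCH = datetime(2005, 12, 8, 7, 46, 43)  # any fixed reference date works

        def hot(ups, downs, submitted):
            s = ups - downs
            order = log10(max(abs(s), 1))          # 10x the votes adds a constant
            sign = 1 if s > 0 else -1 if s < 0 else 0
            seconds = (submitted - EPOCH).total_seconds()
            return round(sign * order + seconds / 45000, 7)

    Because the score only changes when a vote arrives, you can recompute it per vote and store it in an indexed column, making the real-time "hot" page a plain ORDER BY hotness DESC in MySQL. The Hacker News-style formula is the other well-known pattern: roughly net votes divided by a power of the item's age, which naturally produces the burst-then-decay behaviour you describe; views could be folded in as fractional votes.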


  • Managed language for scientific computing software

    - by heisen
    Scientific computing is algorithm-intensive and can also be data-intensive. It often needs a lot of memory to run an analysis, and must release that memory before continuing with the next one. Sometimes it also uses a memory pool to recycle memory across analyses. Managed languages are interesting here because they let the developer concentrate on the application logic. Since the code might need to deal with huge datasets, performance is important too. But how can we control memory and performance in a managed language?
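
    One common compromise is to keep the hot data in large primitive arrays that are allocated once and recycled between analyses, so the collector never sees per-element objects and the heap stays flat. A hedged Java sketch of that pooling idea:

        import java.util.ArrayDeque;

        // A tiny pool of reusable double[] buffers: allocation happens once,
        // so repeated analyses generate no garbage for the collector to trace.
        final class BufferPool {
            private final ArrayDeque<double[]> free = new ArrayDeque<double[]>();
            private final int size;

            BufferPool(int size) { this.size = size; }

            double[] acquire() {
                double[] b = free.poll();
                return b != null ? b : new double[size];
            }

            void release(double[] b) { free.push(b); }
        }

        // Usage: one buffer per analysis, returned when the analysis ends.
        // BufferPool pool = new BufferPool(10 * 1000 * 1000);
        // double[] data = pool.acquire();
        // ... run the analysis ...
        // pool.release(data);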


  • In MySQL, what is the most effective query design for joining large tables with many-to-many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance: basically, source data on engine performance based on the engine type, the vehicle running it, and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content at a row granularity of minutes, rather than the current basis of active/inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineState tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day in that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be:

        engineState AS es
        LEFT JOIN productionMinute AS pm
               ON es.state_start_time <= pm.production_minute
              AND pm.production_minute <= es.state_end_time

    This join, however, brings up multiple environmental issues:
    1. The engineState table has 5 million rows and the productionMinute table has 130,000 rows.
    2. When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as in the example above, multiple productionMinute rows join to a single engineState row.
    3. When more than one engine is in operation during a given minute, also as in the example above, multiple engineState rows join to a single productionMinute row.
    4. In testing our logic with only a small table extract (one day rather than three months for the productionMinute table), the query takes over an hour to run.

    In researching how to improve performance enough to make querying three months of data feasible, our thought was to create a temporary table from the engineState one, eliminating any table data that is not critical for the analysis, and to join the temporary table to the productionMinute table. We are also planning to experiment with different joins, specifically an inner join, to see whether that improves performance.

    What is the best query design for joining tables with the many-to-many relationship between the join predicates as outlined above? What is the best join type (left/right, inner)?
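
    For this shape of query, the usual first steps are an inner join (a LEFT JOIN keeps engine states with no matching minute, which the analysis presumably does not need) plus a composite index covering the range columns, and an explicit bound on the minute range so the optimizer can prune. A hedged sketch:

        CREATE INDEX idx_es_times
            ON engineState (state_start_time, state_end_time);

        SELECT pm.production_minute,
               es.vehicle,
               es.engine,
               es.engine_variable
        FROM productionMinute AS pm
        INNER JOIN engineState AS es
                ON es.state_start_time <= pm.production_minute
               AND es.state_end_time   >= pm.production_minute
        WHERE pm.production_minute >= '2009-12-01'
          AND pm.production_minute <  '2010-03-01';

    One caveat: a B-tree index can only range-scan on one of the two inequality columns, so it also helps to bound es.state_start_time in the WHERE clause to the analysis window (extended backwards by the longest possible state duration); otherwise each minute row still scans a large slice of engineState.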


  • The best way to organize WPF projects

    - by Mike
    Hello everybody, I've recently started developing a new piece of software in WPF, and I still don't know the best way to organize the application so as to be productive with both Visual Studio and Expression Blend. I've noticed two annoying things I'd like to solve:
    1. I'm using Code Contracts in my projects, and when I run the project from Expression Blend, it launches the static analysis of the code. How can I stop that? Which configuration of the project does Blend use by default? I've tried disabling Code Contracts in a new configuration; it works in VS, where the static analysis is no longer launched, but it has no effect in Blend.
    2. I've thought about splitting the Windows application into two parts: the first containing the WPF views (app.exe) and the second being the core of the project with the logic code (app.core.dll); I would then open only the former project in Blend. Any thoughts on that?
    Thanks in advance, Mike
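
    On the Code Contracts point, both runtime checking and the static checker hang off per-configuration MSBuild properties, so one approach is a dedicated configuration with them switched off and Blend set to build that configuration. A hedged sketch of the csproj fragment (the property names are the ones the Contracts tooling writes; verify them against your own project file, and which configuration Blend actually picks is the part to confirm in your setup):

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugBlend|AnyCPU' ">
          <CodeContractsEnableRuntimeChecking>false</CodeContractsEnableRuntimeChecking>
          <CodeContractsRunCodeAnalysis>false</CodeContractsRunCodeAnalysis>
        </PropertyGroup>

    Splitting the views (app.exe) from the logic (app.core.dll) is a reasonable structure in its own right; it also shrinks what Blend has to load and analyze.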


  • Dynamic function docstring

    - by Tom Aldcroft
    I'd like to write a Python function that has a dynamically created docstring. In essence, for a function func() I want func.__doc__ to be a descriptor that calls a custom __get__ function to create the docstring on request. Then help(func) should return the dynamically generated docstring. The context here is writing a Python package that wraps a large number of command-line tools from an existing analysis package. Each tool becomes a similarly named module function (created via a function factory and inserted into the module namespace), with the function documentation and interface arguments dynamically generated via the analysis package.
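
    One wrinkle: __doc__ on a plain function is an ordinary attribute, not a descriptor slot you can hook per-function, so the common pattern is to generate the docstring once, inside the factory, at wrap time. A hedged sketch (run and describe stand in for whatever the analysis package provides):

        def make_tool(name, run, describe):
            """Factory: wrap one command-line tool as a module-level function."""
            def tool(*args, **kwargs):
                return run(name, *args, **kwargs)
            tool.__name__ = name
            tool.__doc__ = describe(name)   # generated once, at creation time
            return tool

        # e.g. inserting the wrappers into the module namespace:
        # for name in tool_names:
        #     globals()[name] = make_tool(name, run, describe)

    For truly lazy generation you would need a callable class whose __doc__ is a property; be aware that whether help() honours a dynamic per-instance __doc__ varies with the Python version, while reading obj.__doc__ directly does always trigger the property.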


  • regex, php, preg_match

    - by Michael
    I'm trying to extract the mileage value from different eBay pages, but I'm stuck, as there seem to be too many patterns because the pages are all a bit different. I would therefore like to know if you can help me with a better pattern. Some example items:
    - http://cgi.ebay.com/ebaymotors/1971-Chevy-C10-Shortbed-Truck-/250647101696?cmd=ViewItem&pt=US_Cars_Trucks&hash=item3a5bbb4100
    - http://cgi.ebay.com/ebaymotors/1987-HANDICAP-LEISURE-VAN-W-WHEEL-CHAIR-LIFT-/250647101712?cmd=ViewItem&pt=US_Cars_Trucks&hash=item3a5bbb4110
    - http://cgi.ebay.com/ebaymotors/ws/eBayISAPI.dll?ViewItemNext&item=250647101696
    Please see the patterns I have so far at the following link (I still cannot figure out how to escape the HTML here): http://pastebin.com/zk4HAY3T However, they are not enough, as it seems there are always new patterns...
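
    Since the listings differ, one workable structure is an ordered list of patterns tried until one hits, rather than a single regex for every page layout. A hedged PHP sketch (the patterns themselves are illustrative guesses, not the actual eBay markup):

        $patterns = array(
            '/Mileage:?\s*<[^>]*>\s*([\d,]+)/i',    // labeled field in item specifics
            '/([\d,]+)\s*(?:original\s+)?miles/i',  // free text in the description
        );

        $mileage = null;
        foreach ($patterns as $p) {
            if (preg_match($p, $html, $m)) {
                $mileage = (int) str_replace(',', '', $m[1]);
                break;  // first pattern that matches wins
            }
        }

    New layouts then only require appending a pattern instead of reworking one monster expression.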


  • Design of Business Layer

    - by Adil Mughal
    Hi, we are currently revamping the architecture and design of our application. We have just completed the design of the data access layer, which is generic in the sense that it uses XML and reflection to persist data. Now we are in the phase of designing the business layer. We have read some books on enterprise architecture and design and found that there are a few patterns that can be applied to the business layer; Table Module and Domain Model are examples of such patterns. We have also come across Domain-Driven Design. Earlier we decided to build entities against table objects, but we found that when it comes to DDD there is a difference between entities and value objects. For those of you who have gone through such a design: please guide me on patterns, practices, and samples. Thank you in advance! Also, please feel free to discuss if you didn't get any of my points.
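
    On the entity versus value object distinction: an entity keeps its identity while its attributes change over time, so equality follows an identifier; a value object is immutable and is equal to any other with the same attributes. A hedged C# sketch of the convention (the names are illustrative):

        // Entity: two Patient references are "the same" only if the Ids match.
        public class Patient
        {
            public Guid Id { get; private set; }
            public string Name { get; set; }            // mutable state is fine
            public Patient(Guid id) { Id = id; }

            public override bool Equals(object o)
            {
                var other = o as Patient;
                return other != null && other.Id == Id; // identity-based equality
            }
            public override int GetHashCode() { return Id.GetHashCode(); }
        }

        // Value object: immutable and compared by its attributes.
        public sealed class Money
        {
            public decimal Amount { get; private set; }
            public string Currency { get; private set; }
            public Money(decimal amount, string currency)
            {
                Amount = amount;
                Currency = currency;
            }

            public override bool Equals(object o)
            {
                var other = o as Money;
                return other != null
                    && other.Amount == Amount && other.Currency == Currency;
            }
            public override int GetHashCode()
            { return Amount.GetHashCode() ^ Currency.GetHashCode(); }
        }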


  • Pulling out two separate words from a string using regular expressions?

    - by Marvin
    I need to improve on a regular expression I'm using. Currently, here it is:

        ^[a-zA-Z\s/-]+

    I'm using it to pull out medication names from a variety of formulation strings, for example:

        SULFAMETHOXAZOLE-TRIMETHOPRIM 200-40 MG/5ML PO SUSP
        AMOX TR/POTASSIUM CLAVULANATE 125 mg-31.25 mg ORAL TABLET, CHEWABLE
        AMOXICILLIN TRIHYDRATE 125 mg ORAL TABLET, CHEWABLE
        AMOX TR/POTASSIUM CLAVULANATE 125 mg-31.25 mg ORAL TABLET, CHEWABLE
        Amoxicillin 1000 MG / Clavulanate 62.5 MG Extended Release Tablet

    The resulting matches on these examples are:

        SULFAMETHOXAZOLE-TRIMETHOPRIM
        AMOX TR/POTASSIUM CLAVULANATE
        AMOXICILLIN TRIHYDRATE
        AMOX TR/POTASSIUM CLAVULANATE
        Amoxicillin

    The first four are what I want, but on the fifth I really need "Amoxicillin / Clavulanate". How would I pull out patterns like "Amoxicillin / Clavulanate" (in the fifth row) while missing patterns like "MG/5ML" (in the first row)?
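
    One way to sidestep the single-regex arms race: tokenize the letter runs (keeping internal - and / joins) and filter out the dose units and dosage-form words, which form a small closed set. A hedged Python sketch (the stopword list is an assumption, to be extended from your data):

        import re

        STOP = {'MG', 'ML', 'PO', 'SUSP', 'ORAL', 'TABLET', 'CHEWABLE',
                'EXTENDED', 'RELEASE'}

        def drug_names(s):
            # letter runs, allowing internal joins like TR/POTASSIUM;
            # "MG/5ML" splits at the digit, yielding MG and ML separately
            tokens = re.findall(r"[A-Za-z]+(?:[-/][A-Za-z]+)*", s)
            return [t for t in tokens if t.upper() not in STOP]

        print(drug_names('SULFAMETHOXAZOLE-TRIMETHOPRIM 200-40 MG/5ML PO SUSP'))
        # -> ['SULFAMETHOXAZOLE-TRIMETHOPRIM']
        print(drug_names('Amoxicillin 1000 MG / Clavulanate 62.5 MG '
                         'Extended Release Tablet'))
        # -> ['Amoxicillin', 'Clavulanate']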


  • Simple database design and LINQ

    - by Anders Svensson
    I have very little experience designing databases, and now I want to create a very simple database that does the same thing I have previously had in XML. Here's the XML:

        <services>
          <service type="writing">
            <small>125</small>
            <medium>100</medium>
            <large>60</large>
            <xlarge>30</xlarge>
          </service>
          <service type="analysis">
            <small>56</small>
            <medium>104</medium>
            <large>200</large>
            <xlarge>250</xlarge>
          </service>
        </services>

    Now, I wanted to create the same thing in a SQL database, and started doing this (four columns and two rows):

        ServiceType  Small  Medium  Large
        Writing      125    100     60
        Analysis     56     104     200

    This didn't work too well, since I then wanted to use LINQ to select, say, the Large value for Writing (60). But I couldn't use LINQ for this (as far as I know) with a variable for the size (see the parameters in the method below). I could only do that if I had a column like "Size" where Small, Medium, and Large would be the values. But that doesn't feel right either, because then I would get several rows with ServiceType = Writing (3 in this case, one for each size), and the same for Analysis. And if I were to add more service types I would have to do the same. Simply repetitive... Is there any smart way to do this using relationships or something?

    Using the second design above (although not good), I could use the following LINQ to select a value with parameters sent to the method:

        protected int GetHourRateDB(string serviceType, Size size)
        {
            CalculatorLinqDataContext context = new CalculatorLinqDataContext();
            var data = (from calculatorData in context.CalculatorDatas
                        where calculatorData.Service == serviceType
                           && calculatorData.Size == size.ToString()
                        select calculatorData).Single();
            return data.Hours;
        }

    But if there is another, better design, could you please also describe how to do the same selection using LINQ with that design? Please keep in mind that I am a rookie at database design, so please be as explicit and pedagogical as possible :-) Thanks! Anders
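
    For what it's worth, the "repetitive" design with a Size column is exactly what a normalized table looks like, and it is the shape that makes the parameterized lookup natural: one row per service/size combination, with the pair as the primary key. Adding a service type or a size is then an INSERT rather than a schema change. A hedged sketch (the table and context names are illustrative):

        // Table: ServiceRate(ServiceType, Size, Hours),
        // PRIMARY KEY (ServiceType, Size)  -- one row per combination
        protected int GetHourRateDB(string serviceType, Size size)
        {
            using (var context = new CalculatorLinqDataContext())
            {
                return (from r in context.ServiceRates
                        where r.ServiceType == serviceType
                           && r.Size == size.ToString()
                        select r.Hours).Single();
            }
        }

    The composite primary key guarantees at most one row per (service, size), which is what makes Single() safe here.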


  • Find ASCII "arrows" in text

    - by ulver
    I'm trying to find all the occurrences of "arrows" in text, so in "<----=====><==->>" the arrows are: "<----", "=====>", "<==", "->", ">". This works:

        String[] patterns = {"<=*", "<-*", "=*>", "-*>"};
        for (String p : patterns) {
            Matcher A = Pattern.compile(p).matcher(s);
            while (A.find()) {
                System.out.println(A.group());
            }
        }

    but this doesn't:

        String p = "<=*|<-*|=*>|-*>";
        Matcher A = Pattern.compile(p).matcher(s);
        while (A.find()) {
            System.out.println(A.group());
        }

    No idea why. It often reports "<" instead of "<====" or similar. What is wrong?
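
    This is ordinary alternation semantics, not a bug: alternatives are tried left to right at each position, and the first branch that matches wins; the engine does not go back to a later branch just because it would produce a longer match. At "<----", the branch <=* succeeds by matching "<" followed by zero ='s, so <-* is never consulted. Requiring at least one repetition and keeping a bare [<>] as the last-resort branch restores the behaviour of the four separate patterns; a sketch:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class Arrows {
            public static void main(String[] args) {
                String s = "<----=====><==->>";
                // Specific branches first; lone < or > only as a fallback.
                Matcher m = Pattern.compile("<=+|<-+|=+>|-+>|[<>]").matcher(s);
                while (m.find()) {
                    System.out.println(m.group());
                }
                // prints: <----  =====>  <==  ->  >
            }
        }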


  • How do you measure latency in low-latency environments?

    - by Ajaxx
    Here's the setup: your system receives a stream of data containing discrete messages (usually 32-128 bytes per message). As part of your processing pipeline, each message passes through two physically separate applications, which exchange the data using a low-latency approach (such as messaging over UDP, or RDMA), and the result finally goes to a client via the same mechanism. Assuming you can inject yourself at any level, including wire protocol analysis, what tools and/or techniques would you use to measure the latency of your system? As part of this, I'm assuming that every message delivered to the system results in a corresponding (though not equivalent) message being pushed through the system and delivered to the client. The only tool I've seen on the market like this is TS-Associates TipOff. I'm sure that with the right access you could probably measure the same information using a wire analysis tool (a la Wireshark) and the right dissectors, but is this the right approach, or are there commodity solutions I can use?
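
    Whatever the tool, the wire-capture approach reduces to the same computation: timestamp the same logical message at two capture points and join on a correlation key. A hedged sketch of that reduction, assuming the captures have already been dissected into (message_id, timestamp) CSV pairs at ingress and egress:

        import csv

        def load(path):
            # each row: message_id,epoch_seconds
            with open(path) as f:
                return dict((mid, float(ts)) for mid, ts in csv.reader(f))

        ingress = load('ingress.csv')
        egress = load('egress.csv')

        lat = sorted(egress[m] - ingress[m] for m in ingress if m in egress)
        if lat:
            def pct(p):
                return lat[int(p * (len(lat) - 1))]
            print('n=%d median=%.6fs p99=%.6fs max=%.6fs'
                  % (len(lat), pct(0.50), pct(0.99), lat[-1]))

    The hard part is not the arithmetic but the clocks: with software timestamps on two hosts you are measuring clock skew as much as latency, which is why the commercial appliances (and serious capture setups) rely on hardware timestamping at the tap points.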


  • Finding C++ static initialization order problems

    - by Fred Larson
    We've run into some problems with the static initialization order fiasco, and I'm looking for ways to comb through a whole lot of code to find possible occurrences. Any suggestions on how to do this efficiently?

    Edit: I'm getting some good answers on how to SOLVE the static initialization order problem, but that's not really my question. I'd like to know how to FIND objects that are subject to this problem. Evan's answer seems to be the best so far in this regard; I don't think we can use valgrind, but we may have memory analysis tools that could perform a similar function. That would catch problems only where the initialization order is wrong for a given build, and the order can change with each build. Perhaps there's a static analysis tool that would catch this. Our platform is the IBM XLC/C++ compiler running on AIX.
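
    For a finder, the code smell is mechanical: a namespace-scope object whose initializer reads another translation unit's global. A minimal sketch of what a tool needs to flag (and, where a modern toolchain is available alongside XLC, clang-tidy's cppcoreguidelines-interfaces-global-init check automates exactly this detection):

        // b.cpp
        #include <string>
        std::string g_name = "example";

        // a.cpp
        #include <string>
        extern std::string g_name;
        // Hazard: whether g_name is constructed yet depends on the order in
        // which the translation units are initialized, which can change with
        // every build.
        std::string g_copy = g_name;

    A grep-level approximation is to list every non-trivial global (a static object with a constructor) per translation unit, then cross-reference which initializers mention extern objects; anything on that list is a candidate.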


  • How to perform ant path mapping in war task?

    - by eljenso
    I have several JAR file pattern sets, like:

        <patternset id="common.jars">
          <include name="external/castor-1.1.jar" />
          <include name="external/commons-logging-1.2.6.jar" />
          <include name="external/itext-2.0.4.jar" />
          ...
        </patternset>

    I also have a war task containing a lib element: ... Like this, however, I end up with a WEB-INF/lib containing the subdirectories from my patterns:

        WEB-INF/lib/external/castor-1.1.jar
        WEB-INF/lib/external/...

    Is there any way to flatten this, so the JAR files appear top-level under WEB-INF/lib, regardless of the directories specified in the patterns? I looked at mappers, but it seems you cannot use them inside lib.
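
    Since <lib> will not take a mapper, one workaround is to stage the JARs into a flat directory with <copy flatten="true"> and then point <lib> at the staging directory. A hedged sketch (the property names are illustrative):

        <copy todir="${build.dir}/war-lib" flatten="true">
          <fileset dir="${jars.dir}">
            <patternset refid="common.jars"/>
          </fileset>
        </copy>

        <war destfile="${dist.dir}/app.war" webxml="web.xml">
          <lib dir="${build.dir}/war-lib"/>
        </war>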


  • Can I ask Postgresql to ignore errors within a transaction

    - by fmark
    I use PostgreSQL with the PostGIS extensions for ad-hoc spatial analysis. I generally construct and issue SQL queries by hand from within psql. I always wrap an analysis session within a transaction, so if I issue a destructive query I can roll it back. However, when I issue a query that contains an error, it cancels the transaction. Any further queries elicit the following warning:

        ERROR: current transaction is aborted, commands ignored until end of transaction block

    Is there a way I can turn this behaviour off? It is tiresome to rollback the transaction and rerun previous queries every time I make a typo.
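
    Since the session lives in psql, the relevant knob is psql's ON_ERROR_ROLLBACK variable: when set, psql issues an implicit SAVEPOINT before each statement and rolls back to it on error, so a typo no longer poisons the transaction. The same thing can be done by hand with savepoints. A sketch:

        -- in psql (put it in ~/.psqlrc to make it permanent):
        \set ON_ERROR_ROLLBACK interactive

        -- or manually:
        BEGIN;
        SAVEPOINT before_typo;
        SELECT * FROM misspelled_table;       -- fails
        ROLLBACK TO SAVEPOINT before_typo;    -- transaction usable again
        COMMIT;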


  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most-occurring terms (I still haven't decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. As for the proposed solutions, (needless to say, maybe) speed is not important, since I would be doing static analysis; I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so all concrete advice (code, pseudocode, links to code samples...) will be much appreciated!!! Thank you!
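
    Lucene's contrib/misc module ships a small utility for this (org.apache.lucene.misc.HighFreqTerms), and the same walk is short by hand with a TermEnum. A hedged sketch against the Lucene 3.x API; note that docFreq() counts documents containing the term rather than total occurrences, which matches the duplicate-dropping imprecision you are willing to accept:

        import java.io.File;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;
        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.TermEnum;
        import org.apache.lucene.store.FSDirectory;

        public class TopTerms {
            public static void main(String[] args) throws Exception {
                IndexReader reader = IndexReader.open(
                        FSDirectory.open(new File(args[0])));
                TermEnum terms = reader.terms();
                List<Object[]> counts = new ArrayList<Object[]>();
                while (terms.next()) {
                    counts.add(new Object[] { terms.term().text(),
                                              terms.docFreq() });
                }
                terms.close();
                reader.close();

                // sort by document frequency, descending, and keep the top 30
                Collections.sort(counts, new Comparator<Object[]>() {
                    public int compare(Object[] a, Object[] b) {
                        return ((Integer) b[1]).compareTo((Integer) a[1]);
                    }
                });
                for (Object[] tc : counts.subList(0, Math.min(30, counts.size()))) {
                    System.out.println(tc[0] + "\t" + tc[1]);
                }
            }
        }

    Run it once per index and merge the per-index lists afterwards.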


  • django generic view not receiving an object (template issue?)

    - by Kirby
    My model:

        class Player(models.Model):
            player_name = models.CharField(max_length=50)
            player_email = models.CharField(max_length=50)

            def __unicode__(self):
                return self.player_name

    My root urls.py:

        urlpatterns = patterns('',
            (r'^kroster/', include('djangosite.kroster.urls')),
            (r'^admin/(.*)', admin.site.root),
        )

    My kroster urls.py:

        from djangosite.kroster.models import Player

        info_dict = {
            'queryset': Player.objects.all(),
        }

        urlpatterns = patterns('',
            (r'^$', 'django.views.generic.list_detail.object_list', info_dict),
            (r'^(?P<object_id>\d+)/$', 'django.views.generic.list_detail.object_detail', info_dict),
        )

    My player_list.html template:

        <h1>Player List</h1>
        {% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
        <ul>
        {% for player in object.player_set.all %}
          <li id="{{ player.id }}">{{ forloop.counter }}.)&nbsp;&nbsp;{{ player }}</li>
        {% endfor %}
        </ul>

    Sadly, my template output is this:

        <h1>Player List</h1>
        <ul>
        </ul>

    Apologies if this is a stupid mistake. It has to be something wrong with my template.
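
    The loop is iterating over a variable the view never sets: the object_list generic view hands the queryset to the template as object_list (there is no object in the context), and Django templates fail silently on missing variables, hence the empty <ul>. Looping over object_list fixes it:

        <h1>Player List</h1>
        <ul>
        {% for player in object_list %}
          <li id="{{ player.id }}">{{ forloop.counter }}.)&nbsp;&nbsp;{{ player }}</li>
        {% endfor %}
        </ul>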


  • GIS: When and why to use ArcObjects over GDAL programming to work with ArcGIS rasters and vectors?

    - by anotherobject
    I'm just starting off with GDAL + Python to support operations that cannot be done with ArcGIS Python geoprocessing scripting. Mainly I am doing spatial modeling/analysis/editing of raster and vector data. I am a bit confused about when ArcObjects development is required versus when GDAL can be used. Is there functionality in ArcObjects that GDAL lacks? Is the opposite true too? I am assuming that ArcObjects is more useful for developing online tools, versus desktop analysis and modeling where the difference is more a matter of preference? In my case I prefer GDAL because of its Python support, which I believe ArcObjects lacks. Thanks!
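
    To illustrate the Python-support point, typical raster access through GDAL's standard osgeo bindings is only a few lines; a sketch:

        from osgeo import gdal

        ds = gdal.Open('elevation.tif')
        band = ds.GetRasterBand(1)

        # min, max, mean, stddev (first flag False forces an exact scan)
        stats = band.GetStatistics(False, True)
        print('min=%.2f max=%.2f mean=%.2f stddev=%.2f' % tuple(stats))

        data = band.ReadAsArray()   # NumPy array, ready for modeling/analysis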


  • Data Mining open source tools

    - by Andriyev
    Hi, I'm due to take up a project in data mining. Before I jump in, I wanted to probe around for different data mining tools (preferably open source) that allow web-based reporting. In my scenario all the data will be provided to me, so I'm not supposed to crawl for it. In a nutshell, I am looking for a tool that does data analysis and web-based reporting, provides some kind of dashboard, and has mining features. I have worked with Microsoft Analysis Services and BOXI, and of late I have been looking at Pentaho, which seems to be a good option. Please share your experiences with any such tools you know of. Cheers


  • Django "Page not found" error page shows only one of two expected urls

    - by Frank V
    I'm working with Django, admittedly for the first time doing anything real. The URL config looks like the following:

        urlpatterns = patterns('my_site.core_prototype.views',
            (r'^newpost/$', 'newPost'),
            (r'^$', 'NewPostAndDisplayList'),  # capture nothing...
            # more here... - perhaps the perma-links?
        )

    This is in an app's urls.py, which is loaded from the project's urls.py via:

        urlpatterns = patterns('',
            # only app for now.
            (r'^$', include('my_site.core_prototype.urls')),
        )

    The problem is, when I receive a 404 attempting to access newpost, the error page only shows ^$; it seems to ignore the newpost pattern... I'm sure the solution is probably stupid-simple, but right now I'm missing it. Can someone help get me on the right track?
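
    The culprit is the $ in the project-level pattern: r'^$' matches only the completely empty path, so /newpost/ never reaches the included app URLconf, which is why the 404 page lists just ^$. An include should be a prefix with no end anchor:

        urlpatterns = patterns('',
            # no '$' here: let the included urls.py match the rest of the path
            (r'^', include('my_site.core_prototype.urls')),
        )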

