Search Results

Search found 896 results on 36 pages for 'analyze'.

Page 20 of 36

  • ctypes and pointer manipulation

    - by Chris
    I am dealing with image buffers, and I want to be able to access data a few lines into my image for analysis with a C library. I have created my 8-bit pixel buffer in Python using create_string_buffer. Is there a way to get a pointer to a location within that buffer without creating a new buffer? My goal is to analyze and change data within that buffer in chunks, without having to do a lot of buffer creation and data copying. In this case, ultimately, the C library is doing all the manipulation of the buffer, so I don't actually have to change values within the buffer using Python. I just need to give my C function access to data within the buffer.
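    For what it's worth, ctypes can hand a C function a pointer into the middle of an existing buffer without copying. In the sketch below the image dimensions and the process_line call are made up, but byref() with an offset and addressof()/cast() are the two usual mechanisms:

        import ctypes

        width, height = 640, 480                            # made-up image dimensions
        buf = ctypes.create_string_buffer(width * height)   # 8-bit pixel buffer

        offset = 10 * width                                 # start of line 10 within the buffer

        # Option 1: byref() accepts an optional byte offset, so no new buffer is created.
        # some_c_lib.process_line(ctypes.byref(buf, offset), width)

        # Option 2: compute the address and cast it to whatever pointer type the C API expects.
        ptr = ctypes.cast(ctypes.addressof(buf) + offset,
                          ctypes.POINTER(ctypes.c_ubyte))
        # some_c_lib.process_line(ptr, width)

    Either way the C function sees the same memory that Python allocated, so changes it makes are visible in buf afterwards.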

    Read the article

  • Why is FxCop warning about an overflow (CA2233) in this C# code?

    - by matt
    I have the following function to get an int from a high-byte and a low-byte: public static int FromBytes(byte high, byte low) { return high * (byte.MaxValue + 1) + low; } When I analyze the assembly with FxCop, I get the following critical warning: CA2233: OperationsShouldNotOverflow Arithmetic operations should not be done without first validating the operands to prevent overflow. I can't see how this could possibly overflow, so I am just assuming FxCop is being overzealous. Am I missing something? And what steps could be taken to correct what I have (or at least make the FxCop warning go away!)?

    Read the article

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I still am not sure about a couple things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared cache multi-core processor run slower than on a dedicated cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!

    Read the article

  • Interpretation of empty User-agent

    - by Amit Agrawal
    How should I interpret an empty User-agent? I have some custom analytics code, and that code has to analyze only human traffic. I have a working list of User-agents denoting human traffic and bot traffic, but the empty User-agent is proving to be problematic, and I am getting lots of traffic with an empty user agent - 10%. Additionally, I crafted the human-traffic versus bot-traffic user agent list by analyzing my current logs, so I might be missing a lot of entries in there. Is there a well-maintained list of user agents denoting bot traffic, or, inversely, a list of user agents denoting human traffic?

    Read the article

  • .NET Debugging - System.Threading.ExecutionContext.runTryCode

    - by moogs
    We have this bug that only appears 30% of the time for the Release build. Opening the crash dump in WinDbg (snipped "!analyze -v" output):

    FAULTING_IP:
    +4
    00000000`00000004 ?? ???

    EXCEPTION_RECORD: ffffffffffffffff -- (.exr 0xffffffffffffffff)
    ExceptionAddress: 0000000000000004
    ExceptionCode: c0000005 (Access violation)
    ExceptionFlags: 00000000
    NumberParameters: 2
    Parameter[0]: 0000000000000008
    Parameter[1]: 0000000000000004
    Attempt to execute non-executable address 0000000000000004

    ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

    WRITE_ADDRESS: 0000000000000004

    MANAGED_STACK:
    (TransitionMU)
    0000000024B9E370 000007FEEDA1DD38 mscorlib_ni!System.Threading.ExecutionContext.runTryCode(System.Object)+0x178
    (TransitionUM)
    (TransitionMU)
    0000000024B9DFB0 000007FF00439010 MyLibrary!DocInfo.IsStatusOK()+0x30

    Now, IsStatusOK() just calls PrintSystemJobInfo.Get(), but that doesn't even seem to appear in the stack. Any ideas on how to debug this? I'm sure runTryCode() is really not the problem... but I'm stuck. Thanks! (I'm really groping here.)

    Read the article

  • How do I find out which objects point to another object in Xcode Instruments?

    - by Arlaharen
    I am trying to analyze some code of mine, looking for memory leaks. My problem is that some of my objects are leaking (at least as far as I can see), but the Leaks tool doesn't detect the leaks. My guess is that some iPhone OS object still holds pointers to my leaked objects. The objects I am talking about are subclasses of UIViewController that I use like this: MyController *controller = [[MyController alloc] initWithNibName:@"MyController" bundle:nil]; [self.navigationController pushViewController:controller animated:YES]; When these objects are no longer needed I do: [self.navigationController popViewControllerAnimated:YES]; without a [controller release] call right now. Now when I look at which objects get created, I see a lot of MyController instances that never get destroyed. To me these are memory leaks, but to the Leaks tool they are not. Can someone tell me whether there is some way Instruments can show me which objects are pointing to my MyController instances, thereby keeping them from being counted as memory leaks?

    Read the article

  • Looking for SQL Server Performance Monitor Tools

    - by the-locster
    I may be approaching this problem from the wrong angle, but what I'm thinking of is some kind of performance monitoring tool for SQL Server that works in a similar way to code performance tools, e.g. I'd like to see an output of how many times each stored procedure was called, average execution time, and possibly various resource usage stats such as cache/index utilisation, resulting disk access and table scans, etc. As far as I can tell, the performance monitor that comes with SQL Server just logs the various calls but doesn't report the various stats I'm looking for. Potentially I just need a tool to analyze the log output?

    Read the article

  • Screen scrape a web page that uses JavaScript and frames

    - by Mello
    Hi, I want to scrape data from www.marktplaats.nl. I want to analyze the scraped description, price, date and views in Excel/Access. I tried to scrape data with Ruby (nokogiri, scrapi) but nothing worked (on other sites it worked well). The main problem is that, for example, SelectorGadget and the Firebug add-on (Firefox) don't find any CSS I can use to scrape the page. On other sites I can extract the CSS with SelectorGadget or Firebug and use it with nokogiri or scrapi. Due to lack of experience it is difficult to identify the problem, and therefore searching for a solution isn't easy. Can you tell me where to start solving this problem and where I might find more info about a similar scraping process? Thanks in advance!

    Read the article

  • Creating a "less"-like console pager interface for pysqlite3 database

    - by Eric
    I would like to add some interactive capability to a Python CLI application I've written that stores data in a SQLite3 database. Currently, my app reads in a certain type of file, parses and analyzes it, puts the analysis data into the db, and spits the formatted records to stdout (which I generally pipe to a file). There are on the order of a million records in this file. Ideally, I would like to eliminate that text file situation altogether and just loop after that "parse and analyze" part, displaying a screen's worth of records and allowing the user to page through them and enter some commands that will edit the records. The backend part I know how to do. Can anyone suggest a good starting point for creating that pager frontend, either directly in the console (like the pager "less"), through ncurses, or some other system?
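    As a starting point, the standard-library curses module is enough for a "less"-style pager loop. A minimal sketch follows; record fetching from SQLite and the editing commands are left out, and the key bindings are only an example:

        import curses

        def pager(records):
            # Minimal pager loop: j/k or arrow keys to scroll, 'q' to quit.
            def run(stdscr):
                curses.curs_set(0)
                top = 0
                while True:
                    stdscr.erase()
                    height, width = stdscr.getmaxyx()
                    for i, rec in enumerate(records[top:top + height - 1]):
                        stdscr.addnstr(i, 0, str(rec), width - 1)
                    stdscr.addnstr(height - 1, 0, "-- q:quit  j/k:scroll --", width - 1)
                    stdscr.refresh()
                    key = stdscr.getch()
                    if key == ord('q'):
                        break
                    elif key in (ord('j'), curses.KEY_DOWN, ord(' ')):
                        top = min(top + 1, max(0, len(records) - 1))
                    elif key in (ord('k'), curses.KEY_UP):
                        top = max(top - 1, 0)
            curses.wrapper(run)

        if __name__ == "__main__":
            pager(["record %d" % n for n in range(1000)])

    In a real version the record list would be a window onto a SELECT ... LIMIT/OFFSET query rather than a Python list, so the million rows never need to be in memory at once.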

    Read the article

  • WinDbg can't find mfc90 version 9.0.30729.4148 symbols on msdl.microsoft.com

    - by Simon Hofverberg
    I think the title states my problem fairly well. Where are those MFC symbols? Some background info: I have a crash dump that I want to analyze in WinDbg. My symbol path contains http://msdl.microsoft.com/download/symbols. The 9.0.30729.4148 version seems to be installed by some Windows update. When a dump contains an earlier version of mfc90, it is located by WinDbg on the Microsoft symbol server with the same settings. When I use !sym noisy, the output for mfc90.dll contains: SYMSRV: http://msdl.microsoft.com/download/symbols/mfc90.dll/4A596D4939c000/mfc90.dll not found. The same thing happens on two different computers. Edit: See my comments below. The symbols are present on the server, but WinDbg can't get them.

    Read the article

  • bash: how to determine the NUM option for grep -A/-B "on the fly"?

    - by Michael Mao
    Hello everyone: I am trying to analyze my agent's results from a collection of 20 txt files here. If you wonder about the background info, please go see my page; what I am doing here is just one step. Basically I would like to take only my agent's results out of the messy context, so I've got this command for a single file: cat run15.txt | grep -A 50 -E '^Agent Name: agent10479475' | grep -B 50 '^==' This means: after the regex match, continue forward by 50 lines, stop, then match a line separator that starts with "==" and go back by 50 lines, if possible (this would certainly clash with the very first line). This approach depends on the hard-coded line count of 50 being just enough to capture exactly one line separator, and it would not work if I ran the following instead: cat run*.txt | grep -A 50 -E '^Agent Name: agent10479475' | grep -B 50 '^==' The output would be a mess... My question is: how do I make sure grep knows exactly when to stop going forward, and when to stop going backward? Any suggestion or hint is much appreciated.
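    For what it's worth, here is a rough Python sketch of an alternative that sidesteps the fixed -A/-B counts entirely: it copies everything from the matching "Agent Name:" line up to and including the next "==" separator, across all run*.txt files. The agent id and the separator are simply taken from the commands above:

        import glob
        import re

        agent_re = re.compile(r'^Agent Name: agent10479475')

        for path in sorted(glob.glob('run*.txt')):
            in_block = False
            with open(path) as f:
                for line in f:
                    if agent_re.match(line):
                        in_block = True          # start of this agent's section
                    if in_block:
                        print(line, end='')
                        if line.startswith('=='):
                            in_block = False     # separator reached, stop copying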

    Read the article

  • NDepend: How to not display 'tier' assemblies in dependency graph?

    - by Edward Buatois
    I was able to do this in an earlier version of nDepend by going to tools-options and setting which assemblies would be part of the analysis (and ignore the rest). The latest version of the trial version of nDepend lets me set it, but it seems to ignore the setting and always analyze all assemblies whether I want it to or not. I tried to delete the "tier" assemblies by moving them over to the "application assemblies" list, but when I delete them out of there, they just get added back to the "tier" list, which I can't ignore. I don't want my dependency graph to contain assemblies like "system," "system.xml," and "system.serialization!" I want only MY assemblies in the dependency graph! Or is that a paid-version feature now? Is there a way to do what I'm talking about?

    Read the article

  • Strange: Planner chooses the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts:
    - PGSQL 8.4.2, Linux
    - I make use of table inheritance
    - Each table contains 3 million rows
    - Indexes on joining columns are set
    - Table statistics (analyze, vacuum analyze) are up-to-date
    - The only table used is "node", with various partitioned sub-tables
    - Recursive query (pg = 8.4)

    Now here is the explained query:

    WITH RECURSIVE rows AS (
        SELECT * FROM (
            SELECT r.id, r.set, r.parent, r.masterid
            FROM d_storage.node_dataset r
            WHERE masterid = 3533933
        ) q
        UNION ALL
        SELECT * FROM (
            SELECT c.id, c.set, c.parent, r.masterid
            FROM rows r
            JOIN a_storage.node c ON c.parent = r.id
        ) q
    )
    SELECT r.masterid, r.id AS nodeid FROM rows r

    QUERY PLAN
    --------------------------------------------------------------------------------
    CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
      CTE rows
        ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
              ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                    Index Cond: (masterid = 3533933)
              ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                    Hash Cond: (c.parent = r.id)
                    ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                          ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                          ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                          ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                          ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                          ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                    ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                          ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
    Total runtime: 172111.371 ms
    (16 rows)

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad).

    Now, after doing the following: SET enable_hashjoin TO false; the explained query looks like this:

    QUERY PLAN
    --------------------------------------------------------------------------------
    CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
      CTE rows
        ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
              ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                    Index Cond: (masterid = 3533933)
              ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                    Join Filter: (r.id = c.parent)
                    ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                    ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                          ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                          ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                Recheck Cond: (c.parent = r.id)
                                ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                      Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
                          ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                Index Cond: (c.parent = r.id)
    Total runtime: 49.349 ms
    (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than for the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • Algorithm to determine coin combinations

    - by A.J.
    I was recently faced with a prompt for a programming algorithm that I had no idea what to do for. I've never really written an algorithm before, so I'm kind of a newb at this. The problem said to write a program to determine all of the possible coin combinations for a cashier to give back as change, based on coin values and the number of coins. For example, there could be a currency with 4 coins: a 2-cent, a 6-cent, a 10-cent and a 15-cent coin. How many combinations of these equal 50 cents? The language I'm using is C++, although that doesn't really matter too much. edit: This is a more specific programming question, but how would I analyze a string in C++ to get the coin values? They were given in a text document like 4 2 6 10 15 50 (where the numbers in this case correspond to the example I gave).
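    For the counting part, the standard dynamic-programming approach is the usual answer. Below is a sketch in Python rather than C++ (the idea carries over directly), including parsing a line in the format from the example:

        def count_combinations(coins, target):
            # ways[a] = number of coin combinations that sum to a
            ways = [0] * (target + 1)
            ways[0] = 1
            for coin in coins:                      # looping over coins first counts combinations, not permutations
                for amount in range(coin, target + 1):
                    ways[amount] += ways[amount - coin]
            return ways[target]

        line = "4 2 6 10 15 50"                     # "<num_coins> <coin values...> <target>", as in the question
        numbers = [int(tok) for tok in line.split()]
        n, coins, target = numbers[0], numbers[1:-1], numbers[-1]
        assert len(coins) == n
        print(count_combinations(coins, target))    # ways to make 50 cents from {2, 6, 10, 15}

    Enumerating the actual combinations (rather than just counting them) is usually done with a small recursion over the coin list, but the table above is the core idea.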

    Read the article

  • Open source / commercial alternative for jiffy.js?

    - by Marcel
    Hi, we'd like to measure "client-side web site performance", i.e. we would like to have answers to the following questions: how long did it take to load and render a web page (including all graphics, scripts, etc.)? This, bundled with additional information like user-agent, operating system, etc. A graphical tool to analyze the data would be perfect. I know Jiffy ( http://code.google.com/p/jiffy-web/wiki/Jiffy_js ) but that isn't maintained anymore. I would prefer either a hosted solution (like Google Analytics) or a Java-based solution that we deploy ourselves. Do you know something like that? Thanks, Marc

    Read the article

  • Splitting string on probable English word boundaries

    - by Sean
    I recently used Adobe Acrobat Pro's OCR feature to process a Japanese kanji dictionary. The overall quality of the output is generally quite a bit better than I'd hoped, but word boundaries in the English portions of the text have often been lost. For example, here's one line from my file: softening;weakening(ofthemarket)8 CHANGE [transform] oneselfINTO,takethe form of; disguise oneself I could go around and insert the missing word boundaries everywhere, but this would be adding to what is already a substantial task. I'm hoping that there might exist software which can analyze text like this, where some of the words run together, and split the text on probable word boundaries. Is there such a package? I'm using Emacs, so it'd be extra-sweet if the package in question were already an Emacs package or could be readily integrated into Emacs, so that I could simply put my cursor on a line like the above and repeatedly invoke some command that splits the line on word boundaries in decreasing order of probable correctness.
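    For a sense of what such software does under the hood, here is a tiny Python sketch that splits a run-together string at the boundaries that maximize coverage by a known word list. A real tool (e.g. the wordsegment package on PyPI) would use a large frequency-ranked dictionary and probabilities rather than the toy WORDS set below:

        from functools import lru_cache

        WORDS = {"take", "the", "form", "of", "disguise", "oneself", "into", "transform"}

        def segment(text):
            text = text.lower()

            @lru_cache(maxsize=None)
            def best(i):
                # Returns (score, words) for text[i:], where score = characters covered by known words.
                if i == len(text):
                    return (0, ())
                candidates = []
                for j in range(i + 1, len(text) + 1):
                    piece = text[i:j]
                    tail_score, tail = best(j)
                    score = (len(piece) if piece in WORDS else 0) + tail_score
                    candidates.append((score, (piece,) + tail))
                return max(candidates, key=lambda c: c[0])

            return list(best(0)[1])

        print(segment("takethe"))            # ['take', 'the']
        print(segment("disguiseoneself"))    # ['disguise', 'oneself']

    Running something like this over each line and re-joining only the pieces found in the dictionary would recover most of the lost spaces, leaving the genuinely ambiguous spans for manual review.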

    Read the article

  • Best way to auto-restore a db on an hourly basis

    - by aron
    Hello, I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL 2008 database and restore it from the original. Red Gate SQL has some awesome tools for this; however, they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I can have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web edition. I understand that Red Gate is technically better because I can set it to analyze the db and only update the records that were altered, and the approach I have above is like a "sledgehammer".

    Read the article

  • Best way to fingerprint and verify HTML structure

    - by Lukas Šalkauskas
    Hello there, I just want your opinion on how to fingerprint/verify HTML/link structure. The problem I want to solve: fingerprint, for example, 10 different sites' HTML pages, and after some time be able to verify them - if a site has changed or its links have changed, verification fails; otherwise verification succeeds. My basic idea is to analyze the link structure by splitting it in some way, building some kind of tree, and generating some kind of code from that tree. But I'm still in the brainstorming stage, where I need to discuss this with someone and hear other ideas. So any ideas, algorithms, and suggestions would be useful.
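    As one possible sketch of that idea, using only the Python standard library: reduce the page to its tag/link skeleton and hash it, so the fingerprint changes only when the structure or the links change and ignores plain text edits. The snapshot/current file names are illustrative:

        import hashlib
        from html.parser import HTMLParser

        class StructureFingerprint(HTMLParser):
            def __init__(self):
                super().__init__()
                self.tokens = []
                self.depth = 0

            def handle_starttag(self, tag, attrs):
                href = dict(attrs).get("href") or ""
                self.tokens.append("%d:%s:%s" % (self.depth, tag, href))  # depth + tag + link target
                self.depth += 1

            def handle_endtag(self, tag):
                self.depth = max(0, self.depth - 1)

        def fingerprint(html):
            parser = StructureFingerprint()
            parser.feed(html)
            return hashlib.sha256("\n".join(parser.tokens).encode("utf-8")).hexdigest()

        old = fingerprint(open("snapshot.html", encoding="utf-8").read())
        new = fingerprint(open("current.html", encoding="utf-8").read())
        print("structure changed" if old != new else "structure unchanged")

    Keeping the token list (not just the hash) also lets you diff two snapshots to see which part of the tree changed, rather than only knowing that something did.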

    Read the article

  • What language/API to use for a standalone live-input audio visualizer app?

    - by knuckfubuck
    I develop with ActionScript and was glad to see that AIR 2.0 was going to give access to mic input data. I planned to use this to create a visualizer set to the tempo of the incoming live audio. After a few days of Google research, it seems unlikely that it will be possible to analyze the data of the mic input in Flash/AIR. If anyone has ideas on how I can achieve this in AIR, please let me know (I'm open to workarounds). That being said, I don't want to give up on the idea, so I'm interested in suggestions for another language/API to use. My requirements for the app are: runs on OS X; two windows - one that can go fullscreen while the other (controller GUI) stays put; able to access live mic input data. I've done reading on FFT and understand what needs to be done on the sound side, so no need to help with that.

    Read the article

  • Detect when a file is closed

    - by User1
    Is there a simple way to detect when a file that is open for writing is closed? I have a process that kicks out a new file every 5 seconds, and I cannot change the code for that process. I am writing another process to analyze the contents of each file immediately after it is created. Is there a simple way to watch a certain directory to see when a file is no longer open for writing? I'm hoping for some sort of Linux command.
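    For reference, on Linux the "closed after being written" moment is exactly what inotify reports; inotifywait -e close_write from the inotify-tools package does this from the shell, and the third-party pyinotify package exposes the same event to Python. A minimal sketch, assuming pyinotify is installed and with a made-up directory path:

        import pyinotify

        class Handler(pyinotify.ProcessEvent):
            def process_IN_CLOSE_WRITE(self, event):
                # Fires once the writer has closed the file, so it is safe to analyze.
                print("ready for analysis:", event.pathname)

        wm = pyinotify.WatchManager()
        wm.add_watch('/path/to/watched/dir', pyinotify.IN_CLOSE_WRITE)
        notifier = pyinotify.Notifier(wm, Handler())
        notifier.loop()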

    Read the article

  • How to Compile Sample Code

    - by James L
    I'm breaking into GUI programming with Android, trying to compile and analyze the Lunar Lander sample program. The instructions for using Eclipse say to select "Create project from existing source", but that option doesn't exist. If I select File > New > Project I can select "Java project from Existing Ant Buildfile". Using that I've tried selecting various xml files as the "Ant Buildfile", but all give me the "The file selected is not a valid Ant buildfile" error. I just want to run the GUI sample projects, preferably with Eclipse. Any useful tips will be appreciated.

    Read the article

  • What's the best way to detect web application attacks?

    - by paulgreg
    What is the best way to survey and detect bad user behavior or attacks like denial of service or exploits on my web app? I know server statistics (like Awstats) are very useful for that kind of purpose, especially for seeing 3XX, 4XX and 5XX errors (here's an Awstats example page), which are often bots or ill-intentioned users trying well-known bad or malformed URLs. Are there other (and better) ways to analyze and detect that kind of attack attempt? Note: I'm speaking about URL-based attacks, not attacks on server components (like the database or TCP/IP).
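    As one small illustration of going beyond Awstats (not a substitute for a real WAF or IDS), a Python sketch that flags client IPs with an unusually high share of 4xx/5xx responses in a combined-format access log; the file name and thresholds below are made up:

        import re
        from collections import Counter

        # combined log format: ip ident user [date] "request" status size ...
        LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')

        requests = Counter()
        errors = Counter()
        with open("access.log") as f:
            for line in f:
                m = LOG_RE.match(line)
                if not m:
                    continue
                ip, status = m.group(1), int(m.group(2))
                requests[ip] += 1
                if status >= 400:
                    errors[ip] += 1

        for ip, total in requests.items():
            if total >= 50 and errors[ip] / total > 0.5:
                print("suspicious: %s (%d/%d error responses)" % (ip, errors[ip], total))

    A high 4xx ratio from a single client is a typical signature of URL probing; rate-of-request and distinct-URL counts per IP are natural next signals to add.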

    Read the article

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most frequently occurring terms (I still haven't decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now, let's say I am OK with that. For the proposed solutions, speed is (needless to say, maybe) not important, since I would do static analysis; I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (and not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so I would very much appreciate any concrete advice (code, pseudocode, links to code samples...)! Thank you!

    Read the article

  • Programmatic Bot Detection

    - by matt
    Hi, I need to write some code to analyze whether or not a given user on our site is a bot. If it's a bot, we'll take some specific action. Looking at the User-Agent is not successful for anything but friendly bots, as you can specify any user agent you want in a bot. I'm after the behaviors of unfriendly bots. Various ideas I've had so far are: no browser ID; no session ID; unable to write a cookie. Obviously, there are some cases where a legitimate user will look like a bot, but that's ok. Are there other programmatic ways to detect a bot, or to detect something that looks like a bot? thanks!
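    As a toy illustration of combining those signals into a score (the request fields and threshold are made up, not a drop-in implementation):

        def bot_score(request):
            # Rough 0..3 score; higher means more bot-like.
            score = 0
            if not request.get("user_agent"):        # empty or missing User-Agent
                score += 1
            if not request.get("session_id"):        # never established a session
                score += 1
            if not request.get("accepted_cookie"):   # test cookie was never sent back
                score += 1
            return score

        req = {"user_agent": "", "session_id": None, "accepted_cookie": False}
        if bot_score(req) >= 2:
            print("treat as bot")

    Scoring several weak signals and acting on a threshold tends to be more robust than any single hard rule, since legitimate users occasionally fail one check (cookies disabled, privacy proxies, etc.).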

    Read the article
