Search Results

Search found 2010 results on 81 pages for 'scan'.

Page 67/81 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • How to implement a collection (list, map?) of complicated strings in Java?

    - by Alex Cheng
    Hi all. I'm new here. Problem: I have something like the following entries, 1000 of them:

        args1=msg args2=flow args3=content args4=depth args6=within ==> args5=content
        args1=msg args2=flow args3=content args4=depth args6=within args7=distance ==> args5=content
        args1=msg args2=flow args3=content args6=within ==> args5=content
        args1=msg args2=flow args3=content args6=within args7=distance ==> args5=content
        args1=msg args2=flow args3=flow ==> args4=flowbits
        args1=msg args2=flow args3=flow args5=content ==> args4=flowbits
        args1=msg args2=flow args3=flow args6=depth ==> args4=flowbits
        args1=msg args2=flow args3=flow args6=depth ==> args5=content
        args1=msg args2=flow args4=depth ==> args3=content
        args1=msg args2=flow args4=depth args5=content ==> args3=content
        args1=msg args2=flow args4=depth args5=content args6=within ==> args3=content
        args1=msg args2=flow args4=depth args5=content args6=within args7=distance ==> args3=content

    I'm writing some sort of suggestion method. Say, for the rule

        args1=msg args2=flow args3=flow ==> args4=flowbits

    if the sentence contains msg, flow, and another flow, then I should return the suggestion flowbits. How should I go about doing it? I know I should scan a list or array for a match (whenever a character is pressed in the textarea) and return the result, but with 1000 entries, how should I implement it? I'm thinking of a HashMap, but can I do something like <"msg,flow,flow", "flowbits">? Also, in a sentence the arguments might not be in order, so if the input is flow,flow,msg then I can't match anything in the HashMap because the key is "msg,flow,flow". What should I do in this case? Please help. Thanks a million!
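
    One way to make the lookup order-insensitive is to normalize each argument list into a canonical key (for example, the values sorted and joined with commas) and use that as the HashMap key. A minimal Java sketch of that idea follows; the class and method names are hypothetical, it assumes Java 8 syntax, and it leaves loading the 1000 entries to the caller.

        import java.util.*;

        public class SuggestionIndex {
            // canonical key -> suggested argument(s)
            private final Map<String, List<String>> index = new HashMap<>();

            // Build a key that ignores the order in which arguments appear.
            private static String keyFor(Collection<String> argValues) {
                List<String> sorted = new ArrayList<>(argValues);
                Collections.sort(sorted);   // "flow,flow,msg" and "msg,flow,flow" collapse to the same key
                return String.join(",", sorted);
            }

            public void add(Collection<String> argValues, String suggestion) {
                index.computeIfAbsent(keyFor(argValues), k -> new ArrayList<>()).add(suggestion);
            }

            public List<String> suggest(Collection<String> argValues) {
                return index.getOrDefault(keyFor(argValues), Collections.emptyList());
            }
        }

    With that, add(Arrays.asList("msg", "flow", "flow"), "flowbits") followed by suggest(Arrays.asList("flow", "flow", "msg")) returns [flowbits], so the ordering problem goes away at the cost of sorting a handful of strings per lookup.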

    Read the article

  • How do I programmatically run all the JUnit tests in my Java application?

    - by Andrew McKinlay
    From Eclipse I can easily run all the JUnit tests in my application. I would like to be able to run the tests on target systems from the application jar, without Eclipse (or Ant or Maven or any other development tool). I can see how to run a specific test or suite from the command line. I could manually create a suite listing all the tests in my application, but that seems error-prone - I'm sure at some point I'll create a test and forget to add it to the suite. The Eclipse JUnit plugin has a wizard to create a test suite, but for some reason it doesn't "see" my test classes. It may be looking for JUnit 3 tests, not JUnit 4 annotated tests. I could write a tool that would automatically create the suite by scanning the source files. Or I could write code so the application would scan its own jar file for tests (either by naming convention or by looking for the @Test annotation). It seems like there should be an easier way. What am I missing?
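
    If scanning the jar turns out to be the way to go, a rough sketch of that approach (JUnit 4 plus standard JDK classes, with the jar path as a placeholder and error handling mostly omitted) might look like this:

        import java.lang.reflect.Method;
        import java.util.ArrayList;
        import java.util.Enumeration;
        import java.util.List;
        import java.util.jar.JarEntry;
        import java.util.jar.JarFile;
        import org.junit.Test;
        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;

        public class JarTestRunner {
            public static void main(String[] args) throws Exception {
                List<Class<?>> testClasses = new ArrayList<Class<?>>();
                JarFile jar = new JarFile("myapp.jar");   // placeholder: path to the application jar
                for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
                    String name = e.nextElement().getName();
                    if (!name.endsWith(".class")) continue;
                    String className = name.replace('/', '.').substring(0, name.length() - ".class".length());
                    Class<?> cls;
                    try {
                        cls = Class.forName(className);
                    } catch (Throwable t) {
                        continue;   // skip classes whose dependencies aren't on the classpath
                    }
                    for (Method m : cls.getMethods()) {
                        if (m.isAnnotationPresent(Test.class)) { testClasses.add(cls); break; }
                    }
                }
                jar.close();
                Result result = JUnitCore.runClasses(testClasses.toArray(new Class<?>[0]));
                System.out.println(result.getRunCount() + " tests, " + result.getFailureCount() + " failures");
            }
        }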

    Read the article

  • How do I set the LRECL in C#.NET?

    - by donde
    I have been trying to FTP a .dtl file from .NET to what I believe is an AS/400. The error being reported back to me is "One or more lines have been truncated", and the admin says the file is coming over with 256 lines that have variable-length columns. I found this explanation online:

        We have to establish defaults because no specifics about the file exist. The default RECFM is V and LRECL is 256. This means that SAS will scan the input record looking for the CR & LF to tell us that we are at the EOR. If the marker isn't found within the limit of the LRECL, SAS discards the data from the LRECL value to the end of the record and adds a message to the Log that "One or more lines have been truncated".

    So I need to set the LRECL. How do I do this in C# .NET?

        FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpfullpath);
        ftp.Credentials = new NetworkCredential(user, pwd);
        ftp.KeepAlive = false;
        ftp.UseBinary = false;
        ftp.Method = WebRequestMethods.Ftp.UploadFile;

        FileStream fs = File.OpenRead(inputfilepath + ftpfileName);
        byte[] buffer = new byte[fs.Length];
        fs.Read(buffer, 0, buffer.Length);
        fs.Close();

        Stream ftpstream = ftp.GetRequestStream();
        int i = 0;
        int intBlock = 1786;
        int intBuffLeft = buffer.Length;
        while (i < buffer.Length)
        {
            if (intBuffLeft >= intBlock)
            {
                ftpstream.Write(buffer, i, intBlock);
            }
            else
            {
                ftpstream.Write(buffer, i, intBuffLeft);
            }
            i += intBlock;
            intBuffLeft -= intBlock;
        }
        ftpstream.Close();
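
    As far as I know, FtpWebRequest has no hook for host-specific commands such as SITE LRECL=..., so if the record format can't be changed on the AS/400 side, one workaround (an assumption, not a confirmed fix) is to make sure every record you send ends in CR/LF and stays within the record length, and to upload in ASCII mode. A rough C# sketch of that idea; the 256-character limit, URI and file names are placeholders:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class FtpTextUpload
        {
            static void Main()
            {
                const int maxRecordLength = 256;   // assumed LRECL on the receiving side
                var request = (FtpWebRequest)WebRequest.Create("ftp://host/QGPL/MYFILE.DTL"); // placeholder URI
                request.Credentials = new NetworkCredential("user", "pwd");
                request.UseBinary = false;         // ASCII transfer, so line endings are honoured
                request.Method = WebRequestMethods.Ftp.UploadFile;

                using (var writer = new StreamWriter(request.GetRequestStream(), Encoding.ASCII))
                {
                    writer.NewLine = "\r\n";       // explicit CR/LF record terminator
                    foreach (string line in File.ReadAllLines("input.dtl"))
                    {
                        // Split anything longer than the record length instead of letting the host truncate it.
                        for (int start = 0; start < Math.Max(line.Length, 1); start += maxRecordLength)
                        {
                            int len = Math.Min(maxRecordLength, Math.Max(line.Length - start, 0));
                            writer.WriteLine(line.Substring(start, len));
                        }
                    }
                }
            }
        }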

    Read the article

  • Linq. Help me tune this!

    - by dtrick
    I have a LINQ query that is causing some timeout issues. Basically, I have a query that returns the top 100 results from a table that has approximately 500,000 records. Here is the query:

        using (var dc = CreateContext())
        {
            var accounts = string.IsNullOrEmpty(searchText)
                ? dc.Genealogy_Accounts
                    .Where(a => a.Genealogy_AccountClass.Searchable)
                    .OrderByDescending(a => a.ID)
                    .Take(100)
                : dc.Genealogy_Accounts
                    .Where(a => (a.Code.StartsWith(searchText) || a.Name.StartsWith(searchText))
                                && a.Genealogy_AccountClass.Searchable)
                    .OrderBy(a => a.Code)
                    .Take(100);

            return accounts.Select(a => } }

    Oddly enough, it is the first LINQ query (the one without the search text) that is causing the timeout. I thought that by doing a Take we wouldn't need to scan all 500k records. However, that must be what is happening. I'm guessing that the join to find what is "searchable" is causing the issue. I'm not able to denormalize the tables... so I'm wondering if there is a way to rewrite the LINQ query to get it to return quicker, or if I should just write this query as a stored procedure (and if so, what might it look like). Thanks.
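
    If a stored procedure ends up being the fallback, a sketch of the non-search branch in T-SQL might look like the following; the join column between the two tables is an assumption, since it isn't visible in the LINQ:

        -- Hypothetical equivalent of the "no search text" branch.
        -- AccountClassID is an assumed foreign key; substitute the real one.
        SELECT TOP 100 a.*
        FROM Genealogy_Accounts AS a
        JOIN Genealogy_AccountClass AS c
          ON c.ID = a.AccountClassID
        WHERE c.Searchable = 1
        ORDER BY a.ID DESC;

    Whether LINQ or a stored procedure, the plan usually hinges on indexing: with an index that lets the engine walk Genealogy_Accounts by ID descending while cheaply checking the Searchable join, TOP 100 can stop early instead of scanning the whole table.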

    Read the article

  • What's the best approach for modifying PDF interactive form fields on iOS?

    - by gbreen
    I've been doing some head banging on this one and solicit your advice. I am building an app that, as part of its features, needs to present PDF forms: display them, allow fields to be changed, and save the modified PDF file back out. UIWebViews do not support PDF interactive forms. Using the CGPDF APIs (and benefiting from other questions posted here and elsewhere), I can certainly present the PDF (without the form fields/widgets), scan and find the fields in the document, figure out where on the screen to draw something, and make them interactive. What I can't seem to figure out is how to change the CGPDFDictionary objects and write them back out to a file. One could use the CGPDF APIs to create a new PDF document from whole cloth, but how do you use them to modify an existing file? Should I be looking elsewhere, such as third-party PDF libs like PoDoFo or libHaru? I'd love to hear from anyone who has successfully modified a PDF and written it back out as to your approach. Thanks!

    Read the article

  • How to wrap Ruby strings in HTML tags

    - by Jason H.
    Hi all: I'm looking for help on two things.

    1) I'm looking for a way for Ruby to wrap strings in HTML. I have a program I'm writing that generates a Hash of word frequencies for a text file, and I want to take the results and place them into an HTML file rather than print to STDOUT. I'm thinking each string needs to be wrapped in an HTML paragraph tag using readlines() or something, but I can't quite figure it out.

    2) Then, once I've wrapped the strings in HTML, I want to write them to an empty HTML file.

    Right now my program looks like:

        words = File.new(ARGV[0]).read().downcase().scan(/[\w']+/)
        frequency = Hash.new(0)
        words.each { |word| frequency[word] += 1 }
        frequency.sort_by { |x, y| y }.reverse().each { |w, f| puts "#{f}, #{w}" }

    So if we ran a text file through this and received:

        35, the
        27, of
        20, to
        16, in
        # . . .

    I'd want to export to an HTML file that wraps the lines like:

        <p>35, the</p>
        <p>27, of</p>
        <p>20, to</p>
        <p>16, in</p>
        # . . .

    Thanks for any tips in advance!
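
    A minimal sketch of the write-out step, building on what's already there (the output file name is a placeholder):

        words = File.new(ARGV[0]).read().downcase().scan(/[\w']+/)
        frequency = Hash.new(0)
        words.each { |word| frequency[word] += 1 }

        # Wrap each "count, word" line in a <p> tag and write the whole thing out.
        File.open("frequencies.html", "w") do |html|
          html.puts "<html><body>"
          frequency.sort_by { |w, f| -f }.each do |w, f|
            html.puts "<p>#{f}, #{w}</p>"
          end
          html.puts "</body></html>"
        end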

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places.

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query, one row is fetched three times. Is there a way to perform the query efficiently without lots of unions?
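
    One pattern that often keeps the per-place seek behaviour of the UNION version without spelling each place out is CROSS APPLY against a derived table of the wanted places (SQL Server 2005+). A sketch, untested against this exact schema:

        SELECT p.placeId, t.ts, t.temp
        FROM (SELECT 1 AS placeId UNION ALL SELECT 2 UNION ALL SELECT 3) AS p
        CROSS APPLY (
            SELECT TOP 1 x.ts, x.temp
            FROM #t AS x
            WHERE x.placeId = p.placeId
            ORDER BY x.ts DESC
        ) AS t

    With an index keyed on (placeId, ts) this typically becomes one seek per place, which is what the UNION ALL version was doing by hand; here the primary key is (ts, placeId), so an extra index may be needed to get that plan.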

    Read the article

  • How to avoid resetting the java Scanner position

    - by Derek
    I have some code that looks more or less like this:

        while (scanner.hasNext()) {
            if (scanner.findInLine("Test") != null) {
                // do some things
            } else {
                scanner.nextLine();
            }
        }

    I am using this to parse a ~10MB text file. The problem is, if I put a breakpoint on the while() and the scanner.nextLine(), I can see that sometimes the scanner's position (in the debug window) goes back to zero. I think this is causing some kind of loop blow-up, because the regex in findInLine() starts at zero, looks through some amount of text, advancing the position, and then it randomly gets set back to zero, so it has to re-parse all that text again. Any ideas what could be causing that? Am I even doing this the right way? Thanks.

    Some additional info: the Scanner is instantiated from an InputStream. After doing some debugging, it appears that there is a HeapCharBuffer that Scanner uses, and it only allows 1024 characters at a time before it resets. Is there a way to avoid this, or to do things differently? That seems like a small number of characters to be able to scan.

    Derek
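
    One way to sidestep Scanner's internal buffer entirely is to read line by line with a BufferedReader and apply the pattern yourself. A rough equivalent of the loop above (the class and method names are placeholders, and the stream is assumed to be the one the Scanner was built from):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class LineScanner {
            static void scanForTest(InputStream in) throws IOException {
                Pattern pattern = Pattern.compile("Test");
                BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        Matcher m = pattern.matcher(line);
                        if (m.find()) {
                            // do some things with the match (m.group(), m.start(), ...)
                        }
                    }
                } finally {
                    reader.close();
                }
            }
        }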

    Read the article

  • Scanning, Checking, Converting, Copying values ... How to ? -- C --

    - by ZaZu
    Hi there,

    It's been a while now and I'm still trying to get a certain piece of code to work. I asked some questions about different commands before, but now I hope this is the final one (combining all questions in one piece of code). I basically want to:

    *Scan an input (should be a character?)
    *Check if it's a number
    *If not, return an error
    *Convert that character into a float number
    *Copy the value to another variable (I called it imp here)

    Here is what I came up with:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>

        main() {
            int digits;
            float imp = 0;
            int alpha;
            do {
                printf("Enter input\n\n");
                scanf("\n%c", &alpha);
                digits = isdigit(alpha);
                if (digits == 0) {
                    printf("error\n\n");
                }
                imp = atof(alpha);
            } while (digits == 0);
        }

    The problem is this code does not work at all... It complains that atof must be given a const char, and whenever I try changing things around, it just keeps failing. I am frustrated and forced to ask here, because I believe I have tried a lot and I keep failing, but I won't be relieved until I get it to work xD. So I really need your help guys. Please tell me why this code isn't working and what I am doing wrong. I am still learning C and really appreciate your help :)
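
    For reference, a small working sketch of the same idea: read a whole line with fgets and convert it with strtod, which (unlike atof) reports when nothing could be converted. The buffer size and messages are arbitrary.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            char line[64];
            char *end;
            double imp = 0.0;

            do {
                printf("Enter input\n");
                if (fgets(line, sizeof line, stdin) == NULL)
                    return 1;                   /* EOF or read error */
                imp = strtod(line, &end);
                if (end == line)                /* nothing was converted */
                    printf("error\n");
            } while (end == line);

            printf("You entered %f\n", imp);
            return 0;
        }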

    Read the article

  • How to store and locate multiple interface types within a Delphi TInterfaceList

    - by Brian Frost
    Hi,

    I'm storing small interfaces from a range of objects in a single TInterfaceList "store", with the intention of offering lists of specific interface types to the end user. Each interface exposes a GetName function, but all other methods are unique to that interface type. For example, here are two interfaces:

        IBase = interface
          //----------------------------------------
          function GetName : string;
          //----------------------------------------
        end;

        IMeasureTemperature = interface(IBase)
          //----------------------------------------
          function MeasureTemperature : double;
          //----------------------------------------
        end;

        IMeasureHumidity = interface(IBase)
          //----------------------------------------
          function MeasureHumidity: double;
          //----------------------------------------
        end;

    I put several of these interfaces into a single TInterfaceList, and then I'd like to scan the list for a specific interface type (e.g. IMeasureTemperature), building another list of pointers to the objects exporting those interfaces. I wish to make no assumptions about the locations of those objects; some may export more than one type of interface. I know I could do this with a class hierarchy using something like:

        if FList[I] is TMeasureTemperature then ...

    but I'd like to do something similar with an interface type. Is this possible?
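
    A sketch of what that scan might look like with Supports from SysUtils, assuming each interface is declared with a GUID (which Supports needs in order to do the QueryInterface behind the scenes); the GUID and procedure name below are made up:

        uses SysUtils, Classes;

        // IMeasureTemperature must carry a GUID, e.g.:
        //   IMeasureTemperature = interface(IBase)
        //     ['{D2B7A640-1E44-4A6C-9B8A-123456789ABC}']
        //     function MeasureTemperature : double;
        //   end;

        procedure CollectTemperatureSensors(Source, Target: TInterfaceList);
        var
          I: Integer;
          Sensor: IMeasureTemperature;
        begin
          for I := 0 to Source.Count - 1 do
            if Supports(Source[I], IMeasureTemperature, Sensor) then
              Target.Add(Sensor);
        end;

    The same loop works for any of the interfaces, and an object that implements more than one of them will simply show up in more than one target list.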

    Read the article

  • implement AOP for Controllers in Spring 3

    - by tommy
    How do I implement AOP with an annotated Controller? I've searched and found two previous posts regarding the problem (posted solution 1, posted solution 2), but can't seem to get the solutions to work. Here's what I have.

    Dispatcher servlet:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">

            <context:annotation-config/>
            <context:component-scan base-package="com.foo.controller"/>

            <bean id="fooAspect" class="com.foo.aop.FooAspect" />

            <aop:aspectj-autoproxy>
                <aop:include name="fooAspect" />
            </aop:aspectj-autoproxy>
        </beans>

    Controller:

        @Controller
        public class FooController {

            @RequestMapping(value="/index.htm", method=RequestMethod.GET)
            public String showIndex(Model model){
                return "index";
            }
        }

    Aspect:

        @Aspect
        public class FooAspect {

            @Pointcut("@target(org.springframework.stereotype.Controller)")
            public void controllerPointcutter() {}

            @Pointcut("execution(* *(..))")
            public void methodPointcutter() {}

            @Before("controllerPointcutter()")
            public void beforeMethodInController(JoinPoint jp){
                System.out.println("### before controller call...");
            }

            @AfterReturning("controllerPointcutter() && methodPointcutter()")
            public void afterMethodInController(JoinPoint jp) {
                System.out.println("### after returning...");
            }

            @Before("methodPointcutter()")
            public void beforeAnyMethod(JoinPoint jp){
                System.out.println("### before any call...");
            }
        }

    The beforeAnyMethod() advice works for methods NOT in a controller; I cannot get anything to execute on calls to controllers. Am I missing something?
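
    One thing worth checking (an educated guess, not a confirmed fix): a common gotcha with aspects on annotated controllers is proxying. If the controller ends up behind a JDK interface proxy, Spring MVC's handler mapping and annotation-based pointcuts can stop seeing it, so the usual first thing to try is forcing class-based (CGLIB) proxies, keeping the aspect bean declared in the same dispatcher context that component-scans the controllers (as it is here):

        <bean id="fooAspect" class="com.foo.aop.FooAspect" />

        <aop:aspectj-autoproxy proxy-target-class="true">
            <aop:include name="fooAspect" />
        </aop:aspectj-autoproxy>

    CGLIB (or the spring-core repackaged version, depending on the Spring 3 distribution in use) must be on the classpath for class proxies to work.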

    Read the article

  • How do I see if an established socket is stuck on a server that's expecting input?

    - by Parker
    I have a script that scans ports for open proxy servers. The problem is that if it encounters a login program (specifically telnet), it hangs there forever, since it doesn't know what to do, and eventually the server closes the connection. The simple solution would be to create a bunch of cases: if telnet, do this; if SSH, do that; if something else, blah blah blah. I'd like an umbrella solution, since the script is not a high priority for me. The script, as it is now, is available at http://parkrrr.net/socks/scan.phps

    On a small scale (the page maybe averages 15 hits/day) it's fine, but on a larger scale I'd be worried about a lot of open zombie sockets. Swapping the !$strpos doesn't work, since servers can return more information than what you requested (headers, ads, etc). Only accepting a fixed number of bytes from the fgets() (as opposed to appending until EOF, which it does now) also does not seem to work. I am sure this is where it gets stuck:

        while (!feof($fp)) {
            $data .= fgets($fp, 512);
        }

    But what can I do? Any other suggestions/warnings would also be welcome.
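
    A fairly generic way out of the hang, regardless of which service answered, is to put a read timeout on the socket and check it after each read. A sketch using stream_set_timeout; the 5-second limit is arbitrary, and $fp is assumed to come from fsockopen() as in the original script:

        <?php
        stream_set_timeout($fp, 5);            // give any single read at most 5 seconds

        $data = '';
        while (!feof($fp)) {
            $chunk = fgets($fp, 512);
            if ($chunk === false) {
                break;                         // read failed
            }
            $data .= $chunk;

            $meta = stream_get_meta_data($fp);
            if ($meta['timed_out']) {
                break;                         // server went quiet (e.g. a telnet login prompt)
            }
        }
        fclose($fp);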

    Read the article

  • Easy ways to investigate unknown Python APIs

    - by jedi_coder
    When studying a snippet of unknown Python code, I occasionally bump into the varName.methodName() pattern. To figure out what's this, I shall study the code more, find where varName was instantiated, find its type. So if varName proves to be an instance of ClassName class, I would knew that methodName() is a method of ClassName. Sometimes varName == self and methodName() is a method of this class, or a method inherited from some other class, if the current class is subclassing some other classes. Are there quick ways / tools that could take 'methodName' as input, scan over all installed Python modules and show which classes have methodName()? The closest thing related to this I know of is ipython. If I type a class name, then dot ('.') then TAB, it can show the class members. Instead of a class I could use a name of an object (which is an instance of a certain class) and it would work too. As soon as I choose a method name from the provided options, I can type '?' or '??' and get some help if there's a docstring. I wonder if ipython can do some intelligent scanning based only on 'methodName' string. If you know alternatives to ipython that could possibly help with this, please do suggest them.
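
    Outside of IPython, a rough brute-force scan is possible with the standard library alone: walk the installed modules, import them, and inspect every class for the method name. A Python 3 sketch (slow, and it really does import everything it finds, side effects included):

        import importlib
        import inspect
        import pkgutil


        def find_method(method_name):
            """Yield (module, class) pairs whose class defines method_name directly."""
            for _, name, _ in pkgutil.iter_modules():
                try:
                    module = importlib.import_module(name)
                except Exception:
                    continue  # some modules fail to import; skip them
                for cls_name, cls in inspect.getmembers(module, inspect.isclass):
                    if method_name in cls.__dict__:
                        yield module.__name__, cls_name


        if __name__ == "__main__":
            for module_name, cls_name in find_method("methodName"):
                print(f"{module_name}.{cls_name}")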

    Read the article

  • Context problem while loading Assemblies via Structuremap

    - by Zebi
    I want to load plugins in a similar way as shown here, but the loaded assemblies don't seem to share the same context. To narrow the problem down I built a tiny spike containing two assemblies: one console app and one library. The console app contains the IPlugin interface and has no reference to the plugin DLL. I am scanning the plugin directory using a custom registration convention:

        ObjectFactory.Initialize(x => x.Scan(s =>
        {
            s.AssembliesFromPath(@"..\Plugin");
            s.With(new PluginScanner());
        }));

        public void Process(Type type, Registry registry)
        {
            if (!type.Name.StartsWith("P")) return;
            var instance = ObjectFactory.GetInstance(type);
            registry.For<IPlugin>().Add((IPlugin)instance);
        }

    This throws an invalid cast exception saying it cannot convert the plugin type to IPlugin:

        public class P1 : IPlugin
        {
            public void Start()
            {
                Console.WriteLine("Hello from P1");
            }
        }

    Furthermore, if I just construct the instance (which works fine, by the way) and access ObjectFactory from inside the plugin, ObjectFactory.WhatDoIHave() shows that they don't even share the same container instance. Experimenting with MEF, StructureMap and loading the assembly manually shows that if it is loaded with Assembly.Load("Plugin"), it works fine. Any ideas how I can fix this to work with StructureMap's assembly scanning?

    Read the article

  • PHP - file uploads and ways to prevent viruses from being uploaded in zip/rar archives

    - by Joe
    I am trying to provide a service on my website to allow users to upload files so others can download them. The issue is that since some of the uploaded files will be .zip/.rar archives, I am curious what ideas exist to help prevent the uploading of archives that contain viruses/trojans etc. Some .zip files will include legitimate .exe files, though, so I am not sure what options I have. I thought about it, and I don't have a way to verify files with a virus scanner on the server, since I am on shared hosting without the option to run a service like that, nor do I have the knowledge of how to do that. I am also aware there is no PHP class or database to scan the files for viruses. This means my only options are to rely on: a) manual approval <-- not an acceptable option for me, as it might become a busy site with thousands of uploads; b) getting the users to somehow point it out if a file has viruses, through voting, "flagging", etc. Anyway, regarding "b": what ideas would you suggest?

    Read the article

  • Why are argument substitutions not replaced during rescanning?

    - by James McNellis
    Consider the following macro definitions and invocation:

        #define x x[0]
        #define y(arg) arg

        y(x)

    This invocation expands to x[0] (tested on Visual C++ 2010, g++ 4.1, mcpp 2.7.2, and Wave). Why? Specifically, why does it not expand to x[0][0]?

    During macro replacement, "A parameter in the replacement list...is replaced by the corresponding argument after all macros contained therein have been expanded. Before being substituted, each argument's preprocessing tokens are completely macro replaced" (C++03 §16.3.1/1). Evaluating the macro invocation, we take the following steps:

    1. The function-like macro y is invoked with x as the argument for its arg parameter.
    2. The x in the argument is macro-replaced to become x[0].
    3. The arg in the replacement list is replaced by the macro-replaced value of the argument, x[0].

    The replacement list after substitution of all the parameters is x[0]. "After all parameters in the replacement list have been substituted, the resulting preprocessing token sequence is rescanned...for more macro names to replace" (C++03 §16.3.4/1). "If the name of the macro being replaced is found during this scan of the replacement list...it is not replaced. Further, if any nested replacements encounter the name of the macro being replaced, it is not replaced" (C++03 §16.3.4/2). The replacement list x[0] is rescanned (note that the name of the macro being replaced is y):

    1. x is identified as an object-like macro invocation.
    2. x is replaced by x[0].
    3. Replacement stops at this point because of the rule in §16.3.4/2 preventing recursion.

    The replacement list after rescanning is x[0][0].

    I have clearly misinterpreted something, since all of the preprocessors I've tested say I am wrong. In addition, this example is a piece of a larger example in the C++0x FCD (at §16.3.5/5), and it too says that the expected replacement is x[0]. Why is x not replaced during rescanning? C99 and C++0x effectively have the same wording as C++03 in the quoted sections.

    Read the article

  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
    Hello everyone,

    I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL for this. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the information about an element (name, type, size) into another table. The function that scans the specified directory and fills the table works correctly. Notably, I am adding elements to the tree in a specific order: folders first, then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate correct HTML code (<ul><li>... and so on). Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size, but I need to keep the whole hierarchy of the tree in mind, plus the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element WHERE parent_id = {$parentId}
        ORDER BY element_type,  -- so folders come first
                 element_size   -- or element_name, for example

    After that, for each result that is a folder, I would send another query to get its contents. It's also possible to fetch the whole tree ordered by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do this?

    Read the article

  • StructureMap Open Generic Types

    - by Shane Fulmer
    I have the following classes:

        public interface IRenderer<T>
        {
            string Render(T obj);
        }

        public class Generic<T> { }

        public class SampleGenericRenderer<T> : IRenderer<Generic<T>>
        {
            public string Render(Generic<T> obj)
            {
                throw new NotImplementedException();
            }
        }

    I would like to be able to call StructureMap with

        ObjectFactory.GetInstance<IRenderer<Generic<string>>>();

    and receive SampleGenericRenderer<string>. I'm currently using the following registration and receiving this error when I call GetInstance:

        Unable to cast object of type 'ConsoleApplication1.SampleGenericRenderer`1[ConsoleApplication1.Generic`1[System.String]]' to type 'ConsoleApplication1.IRenderer`1[ConsoleApplication1.Generic`1[System.String]]'.

        public class CoreRegistry : Registry
        {
            public CoreRegistry()
            {
                Scan(assemblyScanner =>
                {
                    assemblyScanner.AssemblyContainingType(typeof(IRenderer<>));
                    assemblyScanner.AddAllTypesOf(typeof(IRenderer<>));
                    assemblyScanner.ConnectImplementationsToTypesClosing(typeof(IRenderer<>));
                });
            }
        }

    Is there any way to configure StructureMap so that it creates SampleGenericRenderer<string> instead of SampleGenericRenderer<Generic<string>>?

    Read the article

  • Help converting subquery to query with joins

    - by Tim
    I'm stuck on a query with a join. The client's site is running MySQL 4, so a subquery isn't an option. My attempts to rewrite using a join aren't going too well. I need to select all of the contractors listed in the contractors table who are not in the contractors2label table with a given label ID and county ID. Yet, they might be listed in contractors2label with other label and county IDs.

        Table: contractors
            cID (primary, autonumber)
            company (varchar)
            ...etc...

        Table: contractors2label
            cID
            labelID
            countyID
            psID

    This query with a subquery works:

        SELECT company, contractors.cID
        FROM contractors
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors.cID NOT IN (
              SELECT contractors2label.cID
              FROM contractors2label
              WHERE labelID <> 1
                AND countyID <> 1
          )

    I thought this query with a join would be the equivalent, but it returns no results. A manual scan of the data shows I should get 34 rows, which is what the subquery above returns.

        SELECT company, contractors.cID
        FROM contractors
        LEFT OUTER JOIN contractors2label ON contractors.cID = contractors2label.cID
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.labelID <> 1
          AND contractors2label.countyID <> 1
          AND contractors2label.cID IS NULL
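
    For the record, the usual MySQL 4-friendly rewrite of a NOT IN subquery is an anti-join: move the filtering conditions into the ON clause of the LEFT JOIN and keep only the IS NULL test in the WHERE clause (placing them in WHERE, as above, contradicts the IS NULL test and filters everything out). Assuming contractors2label.cID is never NULL, a sketch:

        SELECT contractors.company, contractors.cID
        FROM contractors
        LEFT JOIN contractors2label
               ON contractors.cID = contractors2label.cID
              AND contractors2label.labelID <> 1
              AND contractors2label.countyID <> 1
        WHERE contractors.complete = 1
          AND contractors.archived = 0
          AND contractors2label.cID IS NULL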

    Read the article

  • Is there any library/software available that can give me useful measurements of image quality?

    - by Thor84no
    I realise measuring image quality in software is going to be really difficult, and I'm not looking for a quick-fix. Googling this is largely showing up research papers and discussions that go a bit over my head, so I was wondering if anyone in the SO community had any experience with doing any rough image quality assessment? I want to use this to scan a few thousand images and whittle it down to a few dozen images that are most likely of poor quality. I could then show these to a user and leave the rest to them. Obviously there are many metrics that can be a part of whether an image is of high/low quality, I'd be happy with anything that could take an image as an input and give some reasonable metrics to any of the basic image quality metrics like sharpness, dynamic range, noise, etc., leaving it up to my software to determine what's acceptable and what isn't. Some of the images are poor quality because they've been up-scaled drastically. If there isn't a way of getting metrics like I suggested above, is there any way to detect that an image has been up-scaled like this?
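
    One cheap, rough signal that covers both blur and heavy upscaling is the variance of the Laplacian: sharp images have lots of high-frequency edges, so the variance is high, while blurry or interpolated (up-scaled) images score low. A sketch with Python and OpenCV; the threshold is arbitrary and would need tuning on a sample of known good and bad images:

        import sys
        import cv2


        def sharpness_score(path):
            """Return the variance of the Laplacian; higher means sharper."""
            image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if image is None:
                raise ValueError(f"could not read {path}")
            return cv2.Laplacian(image, cv2.CV_64F).var()


        if __name__ == "__main__":
            THRESHOLD = 100.0  # arbitrary; tune on your own data
            for path in sys.argv[1:]:
                score = sharpness_score(path)
                flag = "REVIEW" if score < THRESHOLD else "ok"
                print(f"{flag}\t{score:8.1f}\t{path}")

    It is a blunt instrument (noisy images score high, smooth-but-legitimate images score low), but as a first pass for whittling thousands of images down to a few dozen candidates it tends to be good enough.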

    Read the article

  • !gcroot output leads nowhere

    - by Jeff Costa
    I am troubleshooting memory fragmentation in an app pool, as evidenced by a small number of Free objects consuming the most space on the heap:

        0x000007ff00256728      6,543     3,890,208  System.Collections.Hashtable+bucket[]
        0x000007ff002649a8      7,297    22,979,560  System.Byte[]
        0x000007ff001e0d90    251,347    30,374,304  System.String
        0x0000000001d0c830        373    48,036,816  Free

    Running the !dumpgen 3 command reveals the fragmentation; there is a repeating pattern of Free and System.Object objects of the same size:

        000000017feb7350        24  **** FREE ****
        000000017feb7368      8192  System.Object[]
        000000017feb9368        24  **** FREE ****
        000000017feb9380      8192  System.Object[]
        000000017febb380        24  **** FREE ****
        000000017febb398      8192  System.Object[]
        000000017febd398        24  **** FREE ****
        000000017febd3b0      8192  System.Object[]
        000000017febf3b0        24  **** FREE ****
        000000017febf3c8      8192  System.Object[]
        000000017fec13c8        24  **** FREE ****
        000000017fec13e0      8192  System.Object[]
        000000017fec33e0        24  **** FREE ****
        000000017fec33f8      8192  System.Object[]
        000000017fec53f8        24  **** FREE ****
        000000017fec5410     14024  System.Object[]
        000000017fec8ad8        24  **** FREE ****
        000000017fec8af0      8192  System.Object[]
        000000017fecaaf0        24  **** FREE ****
        000000017fecab08      8192  System.Object[]
        000000017feccb08        24  **** FREE ****
        000000017feccb20      8192  System.Object[]
        000000017feceb20        24  **** FREE ****
        000000017feceb38      8192  System.Object[]
        000000017fed0b38        24  **** FREE ****
        000000017fed0b50      8192  System.Object[]
        000000017fed2b50        24  **** FREE ****
        000000017fed2b68      8192  System.Object[]

    When I try to obtain the root of one of the System.Objects with !gcroot, I get a pinned handle, but no additional stack data:

        Scan Thread 41 OSThread 1044
        DOMAIN(0000000001D51330):HANDLE(Pinned):15217e8:Root:  000000017fe60fe8(System.Object[])

    As you can see, there is no additional data to go on. Running a !handle command also yields nothing:

        0:041> !handle 000000017fe7a068 ff
        Handle 000000017fe7a068
          Type <Error retrieving type>
        unable to query object information
        unable to query object information
        No object specific information available

    How can I trace out this memory leak when I cannot find what is rooting the System.Object?

    Read the article

  • How can you determine the file size in JavaScript?

    - by Daniel Lew
    I help moderate a forum online, and on this forum we restrict the size of signatures. At the moment we test this via a simple Greasemonkey script I wrote; we wrap all signatures with a <div>, the script looks for them, and then measures the div's height and width. All the script does right now is make sure the signature resides in a particular height/width.

    I would like to start measuring the file size of the images inside of a signature automatically, so that the script can automatically flag users who are including huge images in their signature. However, I can't seem to find a way to measure the size of images loaded on the page. I've searched and found a property special to IE (element.fileSize), but I obviously can't use that in my Greasemonkey script. Is there a way to find out the file size of an image in Firefox via JavaScript?

    Edit: People are misinterpreting the problem. The forums themselves do not host images; we host the BBCode that people enter as their signature. So, for example, people enter this:

        This is my signature, check out my [url=http://google.com]awesome website[/url]!
        This image is cool! [img]http://image.gif[/img]

    I want to be able to check on these images via Greasemonkey. I could write a batch script to scan all of these instead, but I'm just wondering if there's a way to augment my current script.
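
    Since the images are hosted elsewhere, one Greasemonkey-friendly option is to issue a HEAD request for each image URL and read the Content-Length header; GM_xmlhttpRequest is not subject to the same-origin restriction that a plain XMLHttpRequest would be. A sketch, where the size limit and the signature selector are made up:

        // ==UserScript== headers omitted for brevity
        var MAX_BYTES = 100 * 1024; // made-up limit: 100 KB per signature image

        function checkImageSize(img) {
            GM_xmlhttpRequest({
                method: "HEAD",
                url: img.src,
                onload: function (response) {
                    // responseHeaders is a raw header string; pull out Content-Length
                    var match = /Content-Length:\s*(\d+)/i.exec(response.responseHeaders);
                    if (match && parseInt(match[1], 10) > MAX_BYTES) {
                        img.style.outline = "3px solid red"; // flag the offending image
                    }
                }
            });
        }

        var sigImages = document.querySelectorAll("div.signature img"); // selector is hypothetical
        for (var i = 0; i < sigImages.length; i++) {
            checkImageSize(sigImages[i]);
        }

    Servers that don't send Content-Length on a HEAD response would need a fallback (for example, a ranged GET), but for typical image hosts this is usually enough to flag the worst offenders.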

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable with columns (key (primary key), ptime, aname, status, content). A producer puts rows into this table, and a consumer does an ORDER BY on the key column and fetches the first row whose status is 'pending'. The consumer then does some processing on this row:

    1. updates status to "processing"
    2. does some processing using content
    3. deletes the row

    We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check what maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1000 rows at a time, processes them, and deletes them? Will that improve things? What are the other possibilities? Partitioning of the table? Parallelization? Index-organized tables?
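
    Since Advanced Queues is already on the roadmap, it is worth knowing that plain SQL can get part of the way there: an index on (status, key) avoids the full scan, and SELECT ... FOR UPDATE SKIP LOCKED lets several consumers dequeue without blocking each other (this is the mechanism AQ itself relies on; it is only officially documented in recent Oracle releases, so treat its availability on your version as an assumption). A sketch of one consumer iteration, with all object names taken from the question:

        -- one-time setup
        CREATE INDEX worktable_status_key ON worktable (status, key);

        -- per consumer iteration: pick up pending rows nobody else has locked
        SELECT key, content
          FROM worktable
         WHERE status = 'pending'
         ORDER BY key
           FOR UPDATE SKIP LOCKED;   -- other consumers skip rows this session has locked

        -- then, for each row actually picked up:
        UPDATE worktable SET status = 'processing' WHERE key = :picked_key;
        -- ... do the work using content ...
        DELETE FROM worktable WHERE key = :picked_key;
        COMMIT;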

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background

    Created a script to count the frequency of words in a plain text file. The script performs the following steps:

    1. Count the frequency of words from a corpus.
    2. Retain each word in the corpus found in a dictionary.
    3. Create a comma-separated file of the frequencies.

    The script is at: http://pastebin.com/VAZdeKXs

    Problem

    The following lines continually cycle through the dictionary to match words:

        for i in $(awk '{if( $2 ) print $2}' frequency.txt); do
            grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt;
        done

    It works, but it is slow because it is scanning the words it found to remove any that are not in the dictionary. The code performs this task by scanning the dictionary for every single word. (The -m 1 parameter stops the scan when the match is found.)

    Question

    How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!
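
    For what it's worth, the per-word grep can usually be replaced with a single pass that feeds all the words to grep at once as fixed, whole-line patterns. A sketch of that replacement, reusing the file names above (note the output order follows the dictionary rather than frequency.txt):

        # Pull out the word column once, then let grep match the whole batch
        # against the dictionary in a single pass.
        awk '{ if ($2) print $2 }' frequency.txt > words.tmp
        grep -Fxf words.tmp dictionary.txt > corpus-lexicon.txt
        rm words.tmp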

    Read the article

  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forte, it should be quite easy to compare two files and determine whether they are identical or not (code below untested).

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA;
            open my $file2, '<', $fileB;
            while (my $lineA = <$file1>) {
                next if $lineA eq <$file2>;
                return 0 and last;
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line-by-line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the total lines differ in the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
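
    As it happens, a core module already does this: File::Compare has shipped with Perl since 5.004, so it should be available on a stock 5.6.1. A sketch (the file names in the last line are placeholders):

        use strict;
        use warnings;
        use File::Compare;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            # compare() returns 0 if the files are equal, 1 if they differ,
            # and -1 on error.
            return compare( $fileA, $fileB ) == 0 ? 1 : 0;
        }

        print files_match( 'old_output.txt', 'new_output.txt' ) ? "same\n" : "different\n";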

    Read the article
