Search Results

Search found 896 results on 36 pages for 'analyze'.

Page 27/36 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • 3x3 Sobel operator and gradient features

    - by pithyless
    Reading a paper, I'm having difficulty understanding the algorithm described: Given a black and white digital image of a handwriting sample, cut out a single character to analyze. Since this can be any size, the algorithm needs to take this into account (if it will be easier, we can assume the size is 2^n x 2^m). Now, the description states that given this image we will convert it to a 512-bit feature (a 512-bit hash) as follows: (192 bits) computes the gradient of the image by convolving it with a 3x3 Sobel operator. The direction of the gradient at every edge is quantized to 12 directions. (192 bits) The structural feature generator takes the gradient map and looks in a neighborhood for certain combinations of gradient values (used to compute 8 distinct features that represent lines and corners in the image). (128 bits) The concavity generator uses an 8-point star operator to find coarse concavities in 4 directions, holes, and large-scale strokes. The image feature maps are normalized with a 4x4 grid. For now I'm struggling with how to take an arbitrary image, split it into 16 sections, and use a 3x3 Sobel operator to come up with 12 bits for each section. (But if you have some insight into the other parts, feel free to comment :)
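
    A rough sketch of that quantization step for one section, in Python with numpy/scipy (the edge threshold and the bit-per-direction encoding are my assumptions about how the paper turns 12 directions into 12 bits; the excerpt above doesn't spell that out):

        import numpy as np
        from scipy.ndimage import convolve

        def section_direction_bits(img):
            """img: one of the 16 sections as a 2-D float array. Returns a 12-bit
            int with bit b set if quantized direction b occurs on an edge pixel."""
            kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
            ky = kx.T                                                         # Sobel y
            gx, gy = convolve(img, kx), convolve(img, ky)
            mag = np.hypot(gx, gy)
            ang = np.degrees(np.arctan2(gy, gx)) % 360   # gradient direction, 0..360
            dirs = (ang // 30).astype(int)               # quantize to 12 directions
            edges = mag > mag.mean()                     # crude edge mask (assumption)
            bits = 0
            for b in np.unique(dirs[edges]):
                bits |= 1 << int(b)                      # one bit per observed direction
            return bits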

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as: sex (true/false), demographic classification (A, B, C etc), place of birth, date of birth, and weight (recorded daily), which is the fact being recorded. My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, be able to view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and geographical location, which have hierarchical attributes. However, I am struggling with how to model the following fields: sex (true/false) and demographic classification (A, B, C etc). The reason I am struggling with these fields is that they have no obvious hierarchical attributes which will aid aggregation (AFAIA), which suggests they should be in a fact table, yet they are mostly static or very rarely change, which suggests they should be in a dimension table. Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the most increase in weight this quarter? etc. Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema: CREATE TABLE sex_type (is_male int); CREATE TABLE demographic_category (id int, name varchar(4)); may not be the correct one.
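
    For what it's worth, the usual star-schema answer is that low-cardinality descriptive attributes like these become tiny dimension tables (even single-attribute, non-hierarchical ones), and the fact table holds only foreign keys plus the measure. A hedged sketch, with illustrative table and column names:

        -- Hypothetical dimension tables (names and types are illustrative)
        CREATE TABLE dim_sex (
            sex_id   INT PRIMARY KEY,
            sex_name VARCHAR(6)           -- 'male' / 'female'
        );
        CREATE TABLE dim_demographic (
            demographic_id INT PRIMARY KEY,
            classification VARCHAR(4)     -- 'A', 'B', 'C', ...
        );
        -- The daily weight fact references every dimension by key
        CREATE TABLE fact_weight (
            date_id        INT NOT NULL,  -- FK to the date dimension
            location_id    INT NOT NULL,  -- FK to the geography dimension
            sex_id         INT NOT NULL,  -- FK to dim_sex
            demographic_id INT NOT NULL,  -- FK to dim_demographic
            weight         DECIMAL(5,2)   -- the measured fact
        );

    With that shape, both example questions reduce to plain GROUP BY queries over dim_sex and dim_demographic.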

    Read the article

  • Troubleshooting a COM+ application deadlock

    - by Chris Karcher
    I'm trying to troubleshoot a COM+ application that deadlocks intermittently. The last time it locked up, I was able to take a user-mode dump of the dllhost process and analyze it using WinDbg. After inspecting all the threads and locks, it all boils down to a critical section owned by this thread: ChildEBP RetAddr Args to Child 0deefd00 7c822114 77e6bb08 000004d4 00000000 ntdll!KiFastSystemCallRet 0deefd04 77e6bb08 000004d4 00000000 0deefd48 ntdll!ZwWaitForSingleObject+0xc 0deefd74 77e6ba72 000004d4 00002710 00000000 kernel32!WaitForSingleObjectEx+0xac 0deefd88 75bb22b9 000004d4 00002710 00000000 kernel32!WaitForSingleObject+0x12 0deeffb8 77e660b9 000a5cc0 00000000 00000000 comsvcs!PingThread+0xf6 0deeffec 00000000 75bb21f1 000a5cc0 00000000 kernel32!BaseThreadStart+0x34 The object it's waiting on is an event: 0:016> !handle 4d4 f Handle 000004d4 Type Event Attributes 0 GrantedAccess 0x1f0003: Delete,ReadControl,WriteDac,WriteOwner,Synch QueryState,ModifyState HandleCount 2 PointerCount 4 Name <none> No object specific information available As far as I can tell, the event never gets signaled, causing the thread to hang and hold up several other threads in the process. Does anyone have any suggestions for next steps in figuring out what's going on? Now, seeing as the method is called PingThread, is it possible that it's trying to ping another thread in the process that's already deadlocked? UPDATE This actually turned out to be a bug in the Oracle 10.2.0.1 client. I'm still interested, though, in ideas on how I could have figured this out without finding the bug in Oracle's bug database.

    Read the article

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high-performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (e.g., CPU utilization, bandwidth, etc). This tool can either query servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval. It should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded. The tools I have investigated so far, including Solarwinds and PRTG, are not designed to poll this frequently. They seem to be designed for a ~30-second interval. Solarwinds wouldn't go lower than 3 sec, and PRTG chokes at 1 sec. They both default to 1 min. These tools seem more focused on outage monitoring and reporting than on metric collection. Given the bursty nature of our applications, such infrequent polling would result in a very inaccurate picture of performance. I am considering rolling my own solution using perfmon. This would be a lot of work, and seems like reinventing the wheel. Are there any tools out there that meet my needs?

    Read the article

  • How to determine the (natural) language of a document?

    - by Robert Petermeier
    I have a set of documents in two languages: English and German. There is no usable meta information about these documents; a program can look at the content only. Based on that, the program has to decide which of the two languages the document is written in. Is there any "standard" algorithm for this problem that can be implemented in a few hours' time? Or alternatively, a free .NET library or toolkit that can do this? I know about LingPipe, but it is Java and not free for "semi-commercial" usage. This problem seems to be surprisingly hard. I checked out the Google AJAX Language API (which I found by searching this site first), but it was ridiculously bad. Of six German web pages I pointed it at, only one guess was correct; the other guesses were Swedish, English, Danish and French... A simple approach I came up with is to use a list of stop words. My app already uses such a list for German documents in order to analyze them with Lucene.Net. If my app scans the documents for occurrences of stop words from either language, the one with more occurrences would win. A very naive approach, to be sure, but it might be good enough. Unfortunately I don't have the time to become an expert at natural-language processing, although it is an intriguing topic.
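
    A minimal sketch of that stop-word vote in C# (the word lists here are illustrative stubs; real lists would have a few hundred entries per language):

        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        static class LanguageGuesser
        {
            static readonly HashSet<string> German  = new HashSet<string> { "der", "die", "das", "und", "ist", "nicht" };
            static readonly HashSet<string> English = new HashSet<string> { "the", "and", "is", "not", "of", "to" };

            // Count stop-word hits per language; the larger count wins.
            public static string Guess(string text)
            {
                string[] words = Regex.Split(text.ToLowerInvariant(), @"\W+");
                int de = words.Count(w => German.Contains(w));
                int en = words.Count(w => English.Contains(w));
                return de > en ? "German" : "English";
            }
        }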

    Read the article

  • Modified jQuery innerfade sluggish

    - by Jay Hankins
    Hi. I am using the jQuery innerfade plugin to scroll through some images on my site. Innerfade is sluggish on Firefox 3.6.3 and IE 8; IE 8 is much worse than Firefox, and Chrome runs smoothly. Can you analyze my code to see what the problem is? I've used the modified innerfade from here: http://www.stylephp.com/2009/01/17/customizing-jquery-innerfade-plug-in-adding-controls-navigation-and-caption/ My sites are here: Without bg image: http://dl.dropbox.com/u/145908/doozie2/index.html With bg image: http://dl.dropbox.com/u/145908/doozie2/index2.html Removing my background image fixes the situation, and it's not that big of a file, so I don't understand why it is an issue. I am using a technique to resize the image to fit the browser window, as you can see in the CSS. Thanks so much. P.S. Sorry, I'm a new user so I can't post more than one link. Please copy and paste the addresses.

    Read the article

  • How do I connect to a Java command-line tool with the YourKit Java Profiler?

    - by Daryl Spitzer
    I've built a command-line tool in Java, which I would now like to profile with YourKit. I launch the command-line tool with something like: $ java -classpath .:foo.bar.jar com.foobar.tools.TheTool arg1 arg2 arg3 It runs to completion in less than 2 seconds. After reading http://www.yourkit.com/docs/80/help/agent.jsp, I tried the following: $ java -agentpath:/home/dspitzer/yjp-8.0.24/bin/linux-x86-32/libyjpagent.so -classpath .:foo.bar.jar com.foobar.tools.TheTool arg1 arg2 arg3 ...and I get: [YourKit Java Profiler 8.0.24] JVMTI version 3001016d; 14.3-b01; Sun Microsystems Inc.; mixed mode, sharing; Linux; 32-bit JVM [YourKit Java Profiler 8.0.24] Profiler agent is listening on port 10001... [YourKit Java Profiler 8.0.24] *** HINT ***: To get profiling results, connect to the application from the profiler UI ... But I guess YourKit is designed to connect only to a running application. How should I modify my command-line tool to allow a connection from YourKit? I could add a command-line option that will have it pause for input, and I won't press return for it to continue until I've connected to it from YourKit. Is there a YourKit API that I could add to my tool that would cause it to block until I've connected with YourKit? Is there a YourKit API or a java command-line option that would create a profiling "snapshot" that I could load and analyze later (after the command-line tool has completed) with YourKit?

    Read the article

  • In a digital photo, how can I detect if a mountain is obscured by clouds?

    - by Gavin Brock
    The problem: I have a collection of digital photos of a mountain in Japan. However the mountain is often obscured by clouds or fog. What techniques can I use to detect that the mountain is visible in the image? I am currently using Perl with the Imager module, but open to alternatives. All the images are taken from the exact same position - these are some samples. My naïve solution: I started by taking several horizontal pixel samples of the mountain cone and comparing the brightness values to other samples from the sky. This worked well for differentiating good image 1 and bad image 2. However in the autumn it snowed and the mountain became brighter than the sky, like image 3, and my simple brightness test started to fail. Image 4 is an example of an edge case. I would classify this as a good image since some of the mountain is clearly visible. UPDATE 1 Thank you for the suggestions - I am happy you all vastly over-estimated my competence. Based on the answers, I have started trying the ImageMagick edge-detect transform, which gives me a much simpler image to analyze. convert sample.jpg -edge 1 edge.jpg I assume I should use some kind of masking to get rid of the trees and most of the clouds. Once I have the masked image, what is the best way to compare the similarity to a 'good' image? I guess the "compare" command is suited for this job? How do I get a numeric 'similarity' value from this?
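
    On the numeric similarity question: since ImageMagick is already in the pipeline, its compare tool can reduce two same-sized images to a single distortion figure (a sketch; the reference file name is illustrative):

        # RMSE between the edge image and a known-good reference; lower = more similar
        compare -metric RMSE edge.jpg good_edge.jpg null:
        # prints e.g. "1838.2 (0.028)" to stderr; the parenthesized value is normalized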

    Read the article

  • Keyboard hook returns different symbols from card reader depending on whether my app is in focus or not

    - by user363868
    I am coding a WinForms application where one of the inputs is a magnetic stripe card reader (CR). I am using the code from George Mamaladze's article 'Processing Global Mouse and Keyboard Hooks in C#' on codeproject.com to listen to the keyboard (a USB card reader acts the same way as a keyboard), and I have a weird situation. One card reader, CR1 (Unitech MS240-2UG), produces keystrokes which I intercept on the KeyPress event; I analyze them for a certain pattern like %ABCD-6EFJHI? and trigger some logic. The analysis is required because the user can type something else into the application, or into another application, while my app is open. When I use another card reader, CR2 (IdTech IDBM-334133), the keystrokes intercepted by the hook start with the number 5 instead of % (it is actually the same key on the keyboard). Since it is the starting sentinel, it is very important for me to be able to recognize input from the card reader. Moreover, if my app is running in the background and I have focus on Notepad when I swipe a card, the string %ABCD-6EFJHI? appears in Notepad and, the same way (with the proper starting character), is intercepted by the keyboard hook. If swiped when focus is on the form, it is 5ABCD-6EFJHI?. A user who tried the app with another card reader had the same result as me with CR2; only CR1 works for me as expected. I looked into the Windows Device Manager and both devices use the same HID driver supplied by MS. I checked the devices through the respective software from the CR makers, and the starting and ending sentinels are set to % and ? respectively on both. I would appreciate any ideas and thoughts, as I have hit a wall myself. Thank you

    Read the article

  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so I'm not sure what exactly I'm looking for performance-wise just yet. I could structure the data in the db - have a column for each key I'm gathering. The table would end up being O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns if I start gathering a new stat. But it would be easy to sort/analyze the data just using SQL. Or I could just dump my unstructured data blob into the table: maybe three columns - host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field. Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
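
    A common middle ground is a hybrid: promote the handful of keys you already know you will filter or sort on into real columns, and keep the rest serialized (a sketch; the promoted columns are illustrative):

        CREATE TABLE host_stats (
            host_id     INT       NOT NULL,
            collected   TIMESTAMP NOT NULL,
            load_avg    FLOAT,             -- promoted: expected query/sort keys
            mem_free_kb INT,
            raw_data    TEXT,              -- everything else, JSON-serialized
            PRIMARY KEY (host_id, collected)
        );

    If the analysis work later shows you need more fields, you can backfill new columns from raw_data.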

    Read the article

  • MD5 Hashing Given a Key in C#

    - by Jared
    I've been looking for a way to hash a given string in C# that uses a predetermined key. On my adventures through the internet trying to find an example, I have seen lots of MD5CryptoServiceProvider examples which seem to use a default key for the machine, but none that apply a specific key. I need to have a specific key to encode data so as to synchronize it with someone else's server. I hand them a hashed string and an ID number, and they use those to analyze the data and return a similar set to me. So is there any way to get MD5 to hash via a specific key that would be consistent for both? I would prefer this to be done in C#, but if it's not possible with the libraries, can you do so with some web languages like PHP or ASP? Edit: I misunderstood the scenario I was thrown into, and after a little sitting and thinking about why they would have me use a key, it appears they want a key appended to the end of the string and then hashed. That way the server can append the key it has to the data passed, to ensure it's a valid accessing computer. Anyways... thanks all ^_^ Edit2: As my comment below says, it was the term 'salting' I was oblivious to. Oh the joys of getting thrown into something new with no directions.
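
    If it does turn out to be the append-the-key ("salting") scheme from the edit above, a minimal C# sketch looks like this; the concatenation order and the UTF-8 encoding are assumptions both sides have to agree on (and if the other server supports it, .NET's HMACMD5 class is the more standard keyed construction):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class KeyedHash
        {
            public static string Md5WithKey(string data, string key)
            {
                using (MD5 md5 = MD5.Create())
                {
                    // Hash the data with the shared key appended (order is an assumption)
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(data + key));
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }
        }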

    Read the article

  • Using stdint.h and ANSI printf?

    - by nn
    Hi, I'm writing a bignum library, and I want to use efficient data types to represent the digits. In particular, an integer for the digit, and a long (if strictly double the size of the integer) for intermediate representations when adding and multiplying. I will be using some C99 functionality, but trying to conform to ANSI C. Currently I have the following in my bignum library: #include <stdint.h> #if defined(__LP64__) || defined(__amd64) || defined(__x86_64) || defined(__amd64__) || defined(_LP64) typedef uint64_t u_w; typedef uint32_t u_hw; #define BIGNUM_DIGITS 2048 #define U_HW_BITS 16 #define U_W_BITS 32 #define U_HW_MAX UINT32_MAX #define U_HW_MIN UINT32_MIN #define U_W_MAX UINT64_MAX #define U_W_MIN UINT64_MIN #else typedef uint32_t u_w; typedef uint16_t u_hw; #define BIGNUM_DIGITS 4096 #define U_HW_BITS 16 #define U_W_BITS 32 #define U_HW_MAX UINT16_MAX #define U_HW_MIN UINT16_MIN #define U_W_MAX UINT32_MAX #define U_W_MIN UINT32_MIN #endif typedef struct bn { int sign; int n_digits; // #digits should exclude carry (digits = limbs) int carry; u_hw tab[BIGNUM_DIGITS]; } bn; As I haven't written a procedure to write the bignum in decimal, I have to analyze the intermediate array and printf the values of each digit. However, I don't know which conversion specifier to use with printf. Preferably, I would like to write the digits to the terminal encoded in hexadecimal. The underlying issue is that I want two data types, one twice as long as the other, and to further use them with printf using standard conversion specifiers. It would be ideal if int were 32 bits and long 64 bits, but I don't know how to guarantee this using the preprocessor, and when it comes time to use functions such as printf, which rely solely on the standard types, I no longer know what to use.
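
    On the printf part specifically: C99's <inttypes.h> defines a format-macro pair for every stdint.h type, which avoids guessing whether long is 64 bits on a given platform. (Note also that the minimum of an unsigned type is simply 0, so stdint.h defines no UINT32_MIN or UINT64_MIN.) A sketch:

        #include <stdio.h>
        #include <inttypes.h>

        int main(void)
        {
            uint32_t hw = 0xCAFEBABEu;               /* illustrative digit values */
            uint64_t w  = 0xCAFEBABEDEADBEEFull;

            /* PRIx32/PRIx64 expand to the right conversion specifier per platform */
            printf("half word: %08"  PRIx32 "\n", hw);
            printf("full word: %016" PRIx64 "\n", w);
            return 0;
        }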

    Read the article

  • jQuery datepicker not working

    - by IniTech
    I'm completely new to jQuery and MVC. I'm working on a pet project that uses both to learn them, and I've hit my first snag. I have a date field and I want to add the jQuery datepicker to the UI. Here is what I have done: Added <script src="../../Scripts/jquery-1.3.2.min.js" type="text/javascript"></script> to the site.master Inside my Create.aspx (View), I have <asp:Content ID="Create" ContentPlaceHolderID="MainContent" runat="server"> <h2> Create a Task</h2> <% Html.RenderPartial("TaskForm"); %> </asp:Content> and inside "TaskForm" (a user control) I have: <label for="dDueDate"> Due Date</label> <%= Html.TextBox("dDueDate",(Model.Task.TaskID > 0 ? string.Format("{0:g}",Model.Task.DueDate) : DateTime.Today.ToString("MM/dd/yyyy"))) %> <script type="text/javascript"> $(document).ready(function() { $("#dDueDate").datepicker(); }); </script> As you can see, the above checks to see if a task has an id > 0 (we're not creating a new one); if it does, it uses the date on the task, if not it defaults to today. I would expect the datepicker UI element to show up, but instead I get: "Microsoft JScript runtime error: Object doesn't support this property or method" on the $("#dDueDate").datepicker(); Ideas? It is probably a very simple mistake, so don't over-analyze. As I said, this is the first time I've dealt with MVC or jQuery, so I'm lost as to where to start.
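
    One thing worth checking first (a guess, since only jquery-1.3.2.min.js is referenced above): datepicker lives in jQuery UI, not in jQuery core, and "Object doesn't support this property or method" is exactly the error you get when the UI script is missing. Something along these lines in site.master (the paths and version number are illustrative):

        <script src="../../Scripts/jquery-1.3.2.min.js" type="text/javascript"></script>
        <!-- jQuery UI (or a custom build that includes datepicker) must load after core -->
        <script src="../../Scripts/jquery-ui-1.7.2.min.js" type="text/javascript"></script>
        <link href="../../Content/themes/base/jquery-ui.css" rel="stylesheet" type="text/css" />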

    Read the article

  • Disassembling with python - no easy solution?

    - by Abc4599
    Hi, I'm trying to create a Python script that will disassemble a binary (a Windows exe to be precise) and analyze its code. I need the ability to take a certain buffer and extract some sort of struct containing information about the instructions in it. I've worked with libdisasm in C before, and I found its interface quite intuitive and comfortable. The problem is, its Python interface is available only through SWIG, and I can't get it to compile properly under Windows. On the availability side, diStorm provides a nice out-of-the-box interface, but it provides only the mnemonic of each instruction, and not a binary struct with enumerations defining instruction type and whatnot. This is quite uncomfortable for my purpose, and will require spending a lot of time wrapping the interface to make it fit my needs. I've also looked at BeaEngine, which does in fact provide the output I need, a struct with binary info concerning each instruction, but its interface is really odd and counter-intuitive, and it crashes pretty much instantly when provided with wrong arguments. The ctypes sort of ultimate-death-to-your-Python crashes. So, I'd be happy to hear about other solutions that are a little less time-consuming than messing around with djgpp or mingw to make SWIGed libdisasm, or writing an OOP wrapper for diStorm. If anyone has some guidance as to how to compile SWIGed libdisasm, or better yet, a compiled binary (pyd or dll+py), I'd love to hear/have it. :) Thanks ahead.

    Read the article

  • Perf4J Logging Config Help

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. I currently have it writing results in CSV format to its own log file, using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is: when I try to use the --graph option from the command line (using the perf4j jar), it isn't populating the data points - it isn't populating anything. Are my appenders set incorrectly? The log file contains hundreds (sometimes thousands) of data points for about 10 different tag names. <appender name="perfAppender" class="org.apache.log4j.FileAppender"> <param name="File" value="perfStats.log"/> <layout class="org.perf4j.log4j.StatisticsCsvLayout"> </layout> </appender> <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender"> <!-- The TimeSlice option is used to determine the time window for which all received StopWatch logs are aggregated to create a single GroupedTimingStatistics log. Here we set it to 10 seconds, overriding the default of 30000 ms --> <param name="TimeSlice" value="10000"/> <appender-ref ref="ConsoleAppender"/> <appender-ref ref="CompositeRollingFileAppender"/> <appender-ref ref="perfAppender"/> </appender>
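
    One thing to rule out (an educated guess, not verified): the perf4j command-line parser may expect the plain GroupedTimingStatistics text that the coalescing appender emits through a stock layout, in which case a CSV-formatted file would parse to zero data points. Writing a second, non-CSV statistics log alongside the CSV one would test that cheaply:

        <!-- Assumption: a plain-text statistics log for the command-line graphing parser -->
        <appender name="plainStatsAppender" class="org.apache.log4j.FileAppender">
            <param name="File" value="perfStatsPlain.log"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%m%n"/>
            </layout>
        </appender>
        <!-- then reference it with an appender-ref inside CoalescingStatistics -->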

    Read the article

  • Jmap can't connect to make a dump

    - by Jasper Floor
    We have an open beta of an app which occasionally causes the heapspace to overflow. The JVM reacts by going on a permanent vacation. To analyze this I would like to peek into the memory at the point where it failed. Java does not want me to do this. The process is still in memory but it doesn't seem to be recognized as a java process. The server in question is a debian Lenny server, Java 6u14 /opt/jdk/bin# ./jmap -F -dump:format=b,file=/tmp/apidump.hprof 11175 Attaching to process ID 11175, please wait... sun.jvm.hotspot.debugger.NoSuchSymbolException: Could not find symbol "gHotSpotVMTypeEntryTypeNameOffset" in any of the known library names (libjvm.so, libjvm_g.so, gamma_g) at sun.jvm.hotspot.HotSpotTypeDataBase.lookupInProcess(HotSpotTypeDataBase.java:390) at sun.jvm.hotspot.HotSpotTypeDataBase.getLongValueFromProcess(HotSpotTypeDataBase.java:371) at sun.jvm.hotspot.HotSpotTypeDataBase.readVMTypes(HotSpotTypeDataBase.java:102) at sun.jvm.hotspot.HotSpotTypeDataBase.<init>(HotSpotTypeDataBase.java:85) at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:568) at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:494) at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:332) at sun.jvm.hotspot.tools.Tool.start(Tool.java:163) at sun.jvm.hotspot.tools.HeapDumper.main(HeapDumper.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.tools.jmap.JMap.runTool(JMap.java:179) at sun.tools.jmap.JMap.main(JMap.java:110) Debugger attached successfully. sun.jvm.hotspot.tools.HeapDumper requires a java VM process/core!

    Read the article

  • array of structs in C

    - by Hristo
    I'm trying to create an array of structs and also a pointer to that array. I don't know how large the array is going to be, so it should be dynamic. My struct would look something like this: typedef struct _stats_t { int hours[24]; int numPostsInHour; int days[7]; int numPostsInDay; int weeks[20]; int numPostsInWeek; int totNumLinesInPosts; int numPostsAnalyzed; } stats_t; ... and I need to have multiple of these structs for each file (unknown amount) that I will analyze. I'm not sure how to do this. I don't like the following approach because of the limit on the size of the array: # define MAX 10 typedef struct _stats_t { int hours[24]; int numPostsInHour; int days[7]; int numPostsInDay; int weeks[20]; int numPostsInWeek; int totNumLinesInPosts; int numPostsAnalyzed; } stats_t[MAX]; So how would I create this array? Also, would a pointer to this array look something like this? stats_t stats[]; stats_t *statsPtr = &stats[0]; Thanks, Hristo
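
    A minimal sketch of the dynamic version: keep the plain stats_t typedef (no [MAX]), then malloc an array of them and realloc as files are discovered:

        #include <stdlib.h>
        #include <string.h>

        typedef struct _stats_t {
            int hours[24];  int numPostsInHour;
            int days[7];    int numPostsInDay;
            int weeks[20];  int numPostsInWeek;
            int totNumLinesInPosts;
            int numPostsAnalyzed;
        } stats_t;

        int main(void)
        {
            size_t capacity = 8, count = 0;           /* start small, grow on demand */
            stats_t *stats = malloc(capacity * sizeof *stats);
            if (stats == NULL) return 1;

            for (int file = 0; file < 100; file++) {  /* say, 100 files to analyze */
                if (count == capacity) {              /* full: double the capacity */
                    stats_t *tmp = realloc(stats, 2 * capacity * sizeof *stats);
                    if (tmp == NULL) break;           /* stats is still valid here */
                    stats = tmp;
                    capacity *= 2;
                }
                memset(&stats[count++], 0, sizeof *stats);  /* zeroed slot per file */
            }

            stats_t *statsPtr = stats;   /* a pointer to the array is just this */
            (void)statsPtr;
            free(stats);
            return 0;
        }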

    Read the article

  • org.hibernate.hql.ast.QuerySyntaxException: TABLE NAME is not mapped

    - by Coronatus
    I have two models, Item and ShopSection. They have a many-to-many relationship. @Entity(name = "item") public class Item extends Model { @ManyToMany(cascade = CascadeType.PERSIST) public Set<ShopSection> sections; } @Entity(name = "shop_section") public class ShopSection extends Model { public List<Item> findActiveItems(int page, int length) { return Item.find("select distinct i from Item i join i.sections as s where s.id = ?", id).fetch(page, length); } } findActiveItems is meant to find items in a section, but I get this error: org.hibernate.hql.ast.QuerySyntaxException: Item is not mapped [select distinct i from Item i join i.sections as s where s.id = ?] at org.hibernate.hql.ast.util.SessionFactoryHelper.requireClassPersister(SessionFactoryHelper.java:180) at org.hibernate.hql.ast.tree.FromElementFactory.addFromElement(FromElementFactory.java:111) at org.hibernate.hql.ast.tree.FromClause.addFromElement(FromClause.java:93) at org.hibernate.hql.ast.HqlSqlWalker.createFromElement(HqlSqlWalker.java:322) at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromElement(HqlSqlBaseWalker.java:3441) at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromElementList(HqlSqlBaseWalker.java:3325) at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromClause(HqlSqlBaseWalker.java:733) at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:584) at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:301) at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:244) at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254) at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185) at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136) at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101) at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80) at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:124) at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156) at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135) at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1770) at org.hibernate.ejb.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:272) ... 8 more What am I doing wrong?
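
    For what it's worth, this looks like the @Entity(name = "item") attribute at work: in JPA/Hibernate that name replaces the class name as the HQL entity name, so "from Item" no longer resolves. Two possible fixes, sketched: query by the declared entity name, or keep HQL on the class name and rename only the table:

        // Option 1: use the declared entity name in the query
        Item.find("select distinct i from item i join i.sections as s where s.id = ?", id);

        // Option 2: drop the name attribute; use @Table for the SQL-level table name
        @Entity
        @Table(name = "item")
        public class Item extends Model { /* ... */ }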

    Read the article

  • Convert rank-per-candidate format to OpenSTV BLT format

    - by kibibu
    I recently gathered, using a questionnaire, a set of opinions on the importance of various software components. Figuring that some form of Condorcet voting method would be the best way to obtain an overall rank, I opted to use OpenSTV to analyze it. My data is in tabular format, space delimited, and looks more or less like: A B C D E F G # Candidates 5 2 4 3 7 6 1 # First ballot. G is ranked first, and E is ranked 7th 4 2 6 5 1 7 3 # Second ballot etc In this format, the number indicates the rank and the sequence order indicates the candidate. Each "candidate" has a rank (required) from 1 to 7, where a 1 means most important and a 7 means least important. No duplicates are allowed. This format struck me as the most natural way to represent the output, being a direct representation of the ballot format. The OpenSTV/BLT format uses a different method of representing the same info, conceptually as follows: G B D C A F E # Again, G is ranked first and E is ranked 7th E B G A D C F # etc The actual numeric file format uses the (1-based) index of the candidate, rather than the label, and so is more like: 7 2 4 3 1 6 5 # Same ballots as before. 5 2 7 1 4 3 6 # A -> 1, G -> 7 In this format, the number indicates the candidate, and the sequence order indicates the rank. The actual, real, BLT format also includes a leading weight and a following zero to indicate the end of each ballot, which I don't care too much about for this. My question is, what is the most elegant way to convert from the first format to the (numeric) second?
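
    The conversion itself is a small permutation inversion per ballot: sort the candidate indices by their rank. A Python sketch (the input file name is illustrative, and the BLT header/footer lines that OpenSTV also expects are left out):

        def ballot_to_blt(ranks):
            """ranks: rank per candidate in fixed candidate order, e.g. [5,2,4,3,7,6,1].
            Returns 1-based candidate indices in rank order, e.g. [7,2,4,3,1,6,5]."""
            order = sorted(range(len(ranks)), key=lambda i: ranks[i])
            return [i + 1 for i in order]

        with open("ballots.txt") as f:               # one space-delimited ballot per line
            for line in f:
                toks = line.split("#")[0].split()
                if not toks or not toks[0].isdigit():
                    continue                         # skip the header row of labels
                ranks = [int(tok) for tok in toks]
                # leading 1 = ballot weight, trailing 0 terminates the ballot
                print("1 " + " ".join(map(str, ballot_to_blt(ranks))) + " 0")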

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files with code that tests code (using a "unittest" class). Later I found it would be nice to test database integrity also. I put this into a separate directory tree. (Things like: the keys have the correct format, parent and child nodes are pointing correctly, and such.) I use the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep this separate. To test the integrity of data I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same. The code tests use test databases (that get deleted after each test) and the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, and send an alarm if something happens in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron on the production environment.

    Read the article

  • categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style, i.e. there are a lot of log entries that are similar (same format argument to printf), but differ where the format string had parameters. What would be an algorithm (URLs, books, articles, ...) that will allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. formatA does not have to be the exact format used to generate the entry (even more so if this makes the algo simpler). Most of the literature and web info I found deals with exact matching, max-substring matching, or k-difference matching (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or producing a single bin of output (one match among all input). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entry into one of the discovered matches. Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!
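
    A cheap first pass that often gets surprisingly far (a naive sketch, not one of the exact or k-difference methods from the literature): normalize away the tokens that look like printf parameters (numbers, hex values, quoted strings) and bin entries on what remains:

        import re
        from collections import defaultdict

        def normalize(entry):
            entry = re.sub(r'"[^"]*"', '"*"', entry)            # quoted strings -> "*"
            entry = re.sub(r'\b0x[0-9a-fA-F]+\b', '*', entry)   # hex values    -> *
            entry = re.sub(r'\b\d+\b', '*', entry)              # decimal nums  -> *
            return entry

        bins = defaultdict(list)
        with open("app.log") as f:                  # illustrative log file name
            for line in f:
                bins[normalize(line.strip())].append(line.strip())
        # each key now approximates a shared format; its list holds the instances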

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown. AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav")); --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type Here's the file format info according to soxi: dB$ soxi orig.wav soxi WARN wav: wave header missing FmtExt chunk Input File : 'orig.wav' Channels : 8 Sample Rate : 96000 Precision : 24-bit Duration : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors File Size : 9.71M Bit Rate : 24.6M Sample Encoding: 32-bit Floating Point PCM Can anyone suggest the simplest method for getting this audio into Java? I've tried using a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a jni wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file. * UPDATE * I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.
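
    For anyone who hits the same wall before the modified class is posted: "format code 3" (WAVE_FORMAT_IEEE_FLOAT) means each sample in the data chunk is a little-endian IEEE float rather than an integer, so the per-sample decode reduces to something like this (a sketch of the conversion step only, not a full WAV reader):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        class FloatSample {
            // frameBytes: the 4 raw bytes of one 32-bit float sample from the data chunk
            static double decode(byte[] frameBytes) {
                return ByteBuffer.wrap(frameBytes)
                                 .order(ByteOrder.LITTLE_ENDIAN)
                                 .getFloat();   // float PCM is already in [-1.0, 1.0]
            }
        }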

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >