Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.


  • How to get hibernate3-maven-plugin hbm2ddl to find JDBC driver?

    - by HDave
    I have a Java project I am building with Maven. I am now trying to get the hibernate3-maven-plugin to run the hbm2ddl tool to generate a schema.sql file I can use to create the database schema from my annotated domain classes. This is a JPA application that uses Hibernate as the provider. In my persistence.xml file I call out the MySQL driver:

        <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
        <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>

    When I run Maven, I see it processing all my classes, but when it goes to output the schema, I get the following error:

        ERROR org.hibernate.connection.DriverManagerConnectionProvider - JDBC Driver class not found: com.mysql.jdbc.Driver
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

    I have the MySQL driver as a dependency of this module. However it seems like the hbm2ddl tool cannot find it. I would have guessed that the Maven plugin would have known to search the local Maven file repository for this driver. What gives? The relevant part of my pom.xml is this:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <executions>
            <execution>
              <phase>process-classes</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <components>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
              </component>
            </components>
            <componentProperties>
              <persistenceunit>my-unit</persistenceunit>
            </componentProperties>
          </configuration>
        </plugin>

    Read the article

  • What to do when you need more verbs in REST

    - by Richard Levasseur
    There is another similar question to mine, but the discussion veered away from the problem I'm encountering. Say I have a system that deals with expense reports (ER). You can create and edit them, add attachments, and approve/reject them. An expense report might look like this:

        GET /er/1
        => {"title": "Trip to NY",
            "totalcost": "400 USD",
            "comments": [
              "john: Please add the total cost",
              "mike: done, can you approve it now?"
            ],
            "approvals": [
              {"john": "Pending"},
              {"finance-group": "Pending"}]
           }

    That looks fine, right? That's what an expense report document looks like. If you want to update it, you can do this:

        POST /er/1
        {"title": "Trip to NY 2010"}

    If you want to approve it, you can do this:

        POST /er/1/approval
        {"approved": true}

    But what if you want to update the report and approve it at the same time? How do we do that? If you only wanted to approve, then doing a POST to something like /er/1/approval makes sense. Some options:

    - We could put a flag in the URL, POST /er/1?approve=1, and send the data changes as the body, but that flag doesn't seem RESTful.
    - We could put a special field in the submitted document, but that seems a bit hacky too. If we did that, then why not send up data with attributes like set_title or add_to_cost?
    - We could create a new resource for updating and approving, but (1) I can't think of how to name it without verbs, and (2) it doesn't seem right to name a resource based on what actions can be done to it (what happens if we add more actions?).
    - We could have an X-Approve: True|False header, but headers seem like the wrong tool for the job. It'd also be difficult to set headers without using javascript in a browser.
    - We could use a custom media-type, application/approve+yes, but that seems no better than creating a new resource.
    - We could create a temporary "batch operations" url, /er/1/batch/A. The client then sends multiple requests, perhaps POST /er/1/batch/A to update, then POST /er/1/batch/A/approval to approve, then POST /er/1/batch/A/status to end the batch. On the backend, the server queues up all the batch requests somewhere, then processes them in the same backend transaction when it receives the "end batch processing" request. The downside with this, obviously, is that it introduces a lot of complexity.

    So, what is a good, general way to solve the problem of performing multiple actions in a single request? General because it's easy to imagine additional actions that might be done in the same request:

    - Suppress or send notifications (to email, chat, another system, whatever)
    - Override some validation (maximum cost, names of dinner attendees)
    - Trigger backend workflow that doesn't have a representation in the document.
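
    For illustration only (not part of the question), here is a hedged Java sketch of the "fold the action into the representation" option: a single POST carries both the field edits and an optional approval member, and the server applies them in one transaction. It assumes JAX-RS with a JSON provider such as Jackson registered; the class name and the commented-out helpers are invented.

        import javax.ws.rs.*;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;
        import java.util.Map;

        @Path("/er")
        public class ExpenseReportResource {

            @POST
            @Path("/{id}")
            @Consumes(MediaType.APPLICATION_JSON)
            public Response update(@PathParam("id") long id, Map<String, Object> changes) {
                // Apply the plain field edits first (title, totalcost, ...).
                // applyEdits(id, changes);  // assumed helper, not shown

                // If the client also sent an "approval" member, treat it as part of the same
                // state transition and commit both in one backend transaction.
                Object approval = changes.get("approval");
                if (approval != null) {
                    // recordApproval(id, approval);  // assumed helper, not shown
                }
                return Response.ok().build();
            }
        }

    The design trade-off is that "approval" becomes part of the expense-report document rather than a separate verb-like sub-resource, which keeps one request equal to one atomic state change.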

    Read the article

  • How can I import one Gradle script into another?

    - by Ant
    Hi all, I have a complex gradle script that wraps up a load of functionality around building and deploying a number of netbeans projects to a number of environments. The script works very well, but in essence it is all configured through half a dozen maps holding project and environment information.

    I want to abstract the tasks away into another file, so that I can simply define my maps in a simple build file, and import the tasks from the other file. In this way, I can use the same core tasks for a number of projects and configure those projects with a simple set of maps. Can anyone tell me how I can import one gradle file into another, in a similar manner to Ant's import task? I've trawled Gradle's docs to no avail so far.

    Additional Info

    After Tom's response below, I thought I'd try and clarify exactly what I mean. Basically I have a gradle script which runs a number of subprojects. However, the subprojects are all Netbeans projects, and come with their own ant build scripts, so I have tasks in gradle to call each of these. My problem is that I have some configuration at the top of the file, such as:

        projects = [
          [name:"MySubproject1", shortname: "sub1", env:"mainEnv", cvs_module="mod1"],
          [name:"MySubproject2", shortname: "sub2", env:"altEnv", cvs_module="mod2"]
        ]

    I then generate tasks such as:

        projects.each({
          task "checkout_$it.shortname" << {
            // Code to for example check module out from cvs using config from 'it'.
          }
        })

    I have many of these sorts of task generation snippets, and all of them are generic - they entirely depend on the config in the projects list. So what I want is a way to put this in a separate script and import it in the following sort of way:

        projects = [
          [name:"MySubproject1", shortname: "sub1", env:"mainEnv", cvs_module="mod1"],
          [name:"MySubproject2", shortname: "sub2", env:"altEnv", cvs_module="mod2"]
        ]

        import("tasks.gradle") // This will import and run the script so that all tasks are generated for the projects given above.

    So in this example, tasks.gradle will have all the generic task generation code in, and will get run for the projects defined in the main build.gradle file. In this way, tasks.gradle is a file that can be used by all large projects that consist of a number of sub-projects with Netbeans ant build files.

    Read the article

  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, crud). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database.

    Requirements

    There are a couple of things we want to test, in the following priority:

    - Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile?
    - Code quality: do we have code flaws like duplicates, too complex methods or other violations of a defined set of rules?

    Also, the tool must run head-less (command line, ant, ...), and we want to do analysis on a partial code base (changed sources only).

    Tools

    We did a little research and found the following tools that could potentially help:

    - Cast Application Intelligence Platform (AIP): Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in readable format.
    - Toad for Oracle: The Professional version is said to include something called Xpert that validates a set of rules against a code base.
    - Sonar + PL/SQL-Plugin: Uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base.
    - Semantic Designs DMSToolkit: Quite general analysis of source code base. Command line available?
    - Semantic Designs Clones Detector: Detects clones. But also via command line?
    - Fortify Source Code Analyzer: Seems to be focused on security issues. But maybe it is extensible?
    - more...

    So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences?

    Related Questions on SO:
    http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures
    http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql
    http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql

    Read the article

  • How to design this?

    - by Akku
    How can I make this entire process a single event and draw the chart on a single click? (See http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html.) I am new to servlets, please guide me.

    When a user clicks the "go" button with some input, the data goes to a servlet, say "Test3". The servlet processes the data sent by the user and generates/feeds the data table dynamically. Then I call the html page to draw the chart as shown in the tutorial link above.

    The problem is that when I call the servlet it gives me a long json string in the browser, as given in the tutorials:

        google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'1333639331',table:{cols:[{............................

    Then, when I manually call the html page to draw the chart, I see the chart. But when I call the html page directly using the request dispatcher via the servlet, I don't get the result. This is my code and output; I need a suggestion as to what my approach to calling the chart should be:

        public class Test3 extends HttpServlet implements DataTableGenerator {

            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                DataSourceHelper.executeDataSourceServletFlow(request, response, this, isRestrictedAccessMode());
                RequestDispatcher rd;
                rd = request.getRequestDispatcher("new.html"); // it calls the html page which draws the chart as per the data added by the servlet.....
                rd.include(request, response); //forward(request, response);

            @Override
            public Capabilities getCapabilities() {
                return Capabilities.NONE;
            }

            protected boolean isRestrictedAccessMode() {
                return false;
            }

            @Override
            public DataTable generateDataTable(Query query, HttpServletRequest request) {
                // Create a data table.
                DataTable data = new DataTable();
                ArrayList<ColumnDescription> cd = new ArrayList<ColumnDescription>();
                cd.add(new ColumnDescription("name", ValueType.TEXT, "Animal name"));
                cd.add.........

    I get the following result along with the unprocessed html page:

        google.visualization.Query.setResponse({version:'0.6',statu.....
        <html>
        <head>
        <title>Getting Started Example</title>
        ....

    Entire html page as it is on the browser. What I need is: when a user clicks the go button, the servlet should process the data and call the html page to draw the chart, without the json string appearing on the browser (all in one user click). What should my approach be, or how should I design this? There are no errors in the code, since when I run the servlet I get the json string on the browser, and when I then run the html page manually the chart gets drawn. So how can I do (servlet processing + html page drawing chart as final result) in one go, without the long json string appearing on the browser? There is no problem with the html code.
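
    One possible split, shown only as a hedged sketch: let the "go" click hit a plain page-serving servlet that just forwards to new.html, and leave the data servlet (Test3 above) alone to answer the JSON query that the page's JavaScript makes via google.visualization.Query. The servlet name, URL and session attribute below are illustrative assumptions, not from the question.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ChartPageServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // Stash the user's input so the data servlet can pick it up later,
                // e.g. from the session or via a query parameter on the data source URL.
                request.getSession().setAttribute("userInput", request.getParameter("input"));

                // Only forward to the chart page; do NOT run the data-source flow here.
                request.getRequestDispatcher("new.html").forward(request, response);
            }
        }

    With this separation, the raw setResponse JSON is only ever fetched by the chart page's script, so the user never sees it in the browser.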

    Read the article

  • WebDriver: Tests crash with Internet Explorer 7 with error "Modal dialog present"

    - by user1207450
    The following test is automated using Java and selenium-server-standalone-2.20.0.jar. The test crashes with the error:

        Page title is: cheese! - Google Search
        Starting browserTest
        2922 [main] INFO org.apache.http.impl.client.DefaultHttpClient - I/O exception (org.apache.http.NoHttpResponseException) caught when processing request: The target server failed to respond
        2922 [main] INFO org.apache.http.impl.client.DefaultHttpClient - Retrying request
        Exception in thread "main" org.openqa.selenium.UnhandledAlertException: Modal dialog present (WARNING: The server did not provide any stacktrace information)
        Command duration or timeout: 1.20 seconds
        Build info: version: '2.20.0', revision: '16008', time: '2012-02-27 19:03:04'
        System info: os.name: 'Windows XP', os.arch: 'x86', os.version: '5.1', java.version: '1.6.0_24'
        Driver info: driver.version: InternetExplorerDriver
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:170)
            at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:129)
            at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:438)
            at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:139)
            at org.openqa.selenium.ie.InternetExplorerDriver.setup(InternetExplorerDriver.java:91)
            at org.openqa.selenium.ie.InternetExplorerDriver.<init>(InternetExplorerDriver.java:48)
            at com.pwc.test.java.InternetExplorer7.browserTest(InternetExplorer7.java:34)
            at com.pwc.test.java.InternetExplorer7.main(InternetExplorer7.java:27)

    Test Class:

        package com.pwc.test.java;

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebDriverBackedSelenium;
        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.htmlunit.HtmlUnitDriver;
        import org.openqa.selenium.ie.InternetExplorerDriver;
        import com.thoughtworks.selenium.Selenium;

        public class InternetExplorer7 {

            /**
             * @param args
             */
            public static void main(String[] args) {
                // TODO Auto-generated method stub
                WebDriver webDriver = new HtmlUnitDriver();
                webDriver.get("http://www.google.com");
                WebElement webElement = webDriver.findElement(By.name("q"));
                webElement.sendKeys("cheese!");
                webElement.submit();
                System.out.println("Page title is: " + webDriver.getTitle());
                browserTest();
            }

            public static void browserTest() {
                System.out.println("Starting browserTest");
                String baseURL = "http://www.mail.yahoo.com";
                WebDriver driver = new InternetExplorerDriver();
                driver.get(baseURL);
                Selenium selenium = new WebDriverBackedSelenium(driver, baseURL);
                selenium.windowMaximize();
                WebElement username = driver.findElement(By.id("username"));
                WebElement password = driver.findElement(By.id("passwd"));
                WebElement signInButton = driver.findElement(By.id(".save"));
                username.sendKeys("myusername");
                password.sendKeys("magic");
                signInButton.click();
                driver.close();
            }
        }

    I don't see any modal dialog when I launch the IE7/8 browser manually. What could be causing this?
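
    As a hedged aside (not the question's code): if a page-level alert or confirm dialog is what blocks the session, it can be inspected and dismissed explicitly before the next command, along the lines below. This only helps once the driver instance exists; a dialog thrown during "new InternetExplorerDriver()" itself would have to be prevented through the browser's own settings.

        import org.openqa.selenium.Alert;
        import org.openqa.selenium.NoAlertPresentException;
        import org.openqa.selenium.WebDriver;

        public class AlertGuard {

            /** Dismisses an unexpected modal dialog if one is present; returns its text, or null. */
            public static String dismissIfPresent(WebDriver driver) {
                try {
                    Alert alert = driver.switchTo().alert();   // throws if no dialog is open
                    String text = alert.getText();             // capture it for the test log
                    alert.dismiss();                           // or alert.accept(), depending on the dialog
                    return text;
                } catch (NoAlertPresentException e) {
                    return null;                               // nothing was blocking the browser
                }
            }
        }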

    Read the article

  • How does MySQL's ORDER BY RAND() work?

    - by Eugene
    Hi, I've been doing some research and testing on how to do fast random selection in MySQL. In the process I've faced some unexpected results, and now I am not fully sure I know how ORDER BY RAND() really works.

    I always thought that when you do ORDER BY RAND() on a table, MySQL adds a new column to the table which is filled with random values, then it sorts the data by that column, and then e.g. you take the top value, which got there randomly. I've done lots of googling and testing and finally found that the query Jay offers in his blog is indeed the fastest solution:

        SELECT * FROM Table T
        JOIN (SELECT CEIL(MAX(ID)*RAND()) AS ID FROM Table) AS x ON T.ID >= x.ID
        LIMIT 1;

    While the common ORDER BY RAND() takes 30-40 seconds on my test table, his query does the work in 0.1 seconds. He explains how this functions in the blog, so I'll just skip this and finally move to the odd thing.

    My table is a common table with a PRIMARY KEY id and other non-indexed stuff like username, age, etc. Here's the thing I am struggling to explain:

        SELECT * FROM table ORDER BY RAND() LIMIT 1;             /* 30-40 seconds */
        SELECT id FROM table ORDER BY RAND() LIMIT 1;            /* 0.25 seconds  */
        SELECT id, username FROM table ORDER BY RAND() LIMIT 1;  /* 90 seconds    */

    I was sort of expecting to see approximately the same time for all three queries, since I am always sorting on a single column. But for some reason this didn't happen. Please let me know if you have any ideas about this.

    I have a project where I need to do fast ORDER BY RAND(), and personally I would prefer to use

        SELECT id FROM table ORDER BY RAND() LIMIT 1;
        SELECT * FROM table WHERE id=ID_FROM_PREVIOUS_QUERY LIMIT 1;

    which, yes, is slower than Jay's method, however it is smaller and easier to understand. My queries are rather big ones with several JOINs and with a WHERE clause, and while Jay's method still works, the query grows really big and complex because I need to use all the JOINs and the WHERE clause in the JOINed (called x in his query) sub request. Thanks for your time!
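
    For reference, a minimal JDBC sketch of the two-step variant the question ends with: pick a random id first (id-only sort), then fetch the full row by primary key. The connection URL, credentials, and the table/column names are placeholders, and the table is assumed to be non-empty.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class RandomRowExample {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/testdb", "user", "password")) {

                    long randomId;
                    // Step 1: sort only the id column randomly and keep one value.
                    try (PreparedStatement ps = con.prepareStatement(
                             "SELECT id FROM users ORDER BY RAND() LIMIT 1");
                         ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        randomId = rs.getLong(1);
                    }

                    // Step 2: fetch the full row by primary key - a cheap point lookup.
                    try (PreparedStatement ps = con.prepareStatement(
                             "SELECT * FROM users WHERE id = ?")) {
                        ps.setLong(1, randomId);
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) {
                                System.out.println("Random user: " + rs.getString("username"));
                            }
                        }
                    }
                }
            }
        }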

    Read the article

  • Error while rendering .rdl file into pdf format

    - by Arka Chatterjee
    Hi, I am generating reports using SQL Server Reporting Services. I have generated a report and have put the .rdl report file in the "E" drive. Now, when I go to render the .rdl report file into pdf format, I get the exception: "An error occurred during local report processing."

    The stack trace is as follows:

        at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings)
        at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)
        at Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)
        at SaltlakeSoft.APEX2.Controllers.TestPageController.RenderReport() in E:\Documents and Settings\Administrator\Desktop\afetbuild15thmayapex2\apex2\Controllers\TestPageController.cs:line 1626
        at lambda_method(ExecutionScope , ControllerBase , Object[] )
        at System.Web.Mvc.ActionMethodDispatcher.<>c__DisplayClass1.<>b__0(ControllerBase controller, Object[] parameters)
        at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
        at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
        at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
        at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<>b__7()
        at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)

    I am using the following code:

        LocalReport report = new LocalReport();
        report.ReportPath = @"E:\Report1.rdl";

        List employeeCollection = empRepository.FindAll().ToList();

        ReportDataSource reportDataSource = new ReportDataSource("dataSource1", employeeCollection);
        report.DataSources.Clear();
        report.DataSources.Add(reportDataSource);
        report.Refresh();

        string reportType = "PDF";
        string mimeType;
        string encoding;
        string fileNameExtension;
        string deviceInfo =
            "<DeviceInfo>" +
            "<OutputFormat>PDF</OutputFormat>" +
            "<PageWidth>8.5in</PageWidth>" +
            "<PageHeight>11in</PageHeight>" +
            "<MarginTop>0.5in</MarginTop>" +
            "<MarginLeft>1in</MarginLeft>" +
            "<MarginRight>1in</MarginRight>" +
            "<MarginBottom>0.5in</MarginBottom>" +
            "</DeviceInfo>";
        Warning[] warnings;
        string[] streams;
        byte[] renderedBytes;

        renderedBytes = report.Render(reportType, deviceInfo, out mimeType, out encoding, out fileNameExtension, out streams, out warnings);

        Response.Clear();
        Response.ContentType = mimeType;
        Response.AddHeader("content-disposition", "attachment; filename=foo." + fileNameExtension);
        Response.BinaryWrite(renderedBytes);
        Response.End();

    Please help me. Thanks in advance - Arka

    Read the article

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here. Yesterday I wrote a multiprogram. Modifications to sharable resources were put under critical sections protected by P(mutex) and V(mutex), and that critical-section code was put in a common library. The library will be used by concurrent applications (of my own). I had three applications that use the common code from the library and do their stuff independently.

        my library
        ----------
        work_on_shared_resource
        {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }
        ----------

        my application
        --------------
        application1
        {
            *[
                work_on_shared_resource
                do_something_else_non_critical
            ]
        }
        application2
        {
            *[
                work_on_shared_resource
                do_something_else_non_critical
            ]
        }
        application3
        {
            *[
                work_on_shared_resource
            ]
        }

        *[...] denotes a loop.
        --------------

    I had to run the applications on Linux. I had a thought in my mind, hanging over the years, that the OS shall schedule all the processes running under it with all fairness. In other words, it will give all the processes their pie of resource usage equally well.

    When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, the third one always got the resources; since it is not doing anything in its non-critical region, it gets the shared resource more often while the other tasks are doing something else. So the other two applications were found almost totally halted. When the third application was terminated forcefully, the previous two applications resumed their work as before. I think this is a case of starvation: the first two applications had to starve.

    Now how can we ensure fairness? I have started believing that the OS scheduler is innocent and blind. It depends upon who wins the race; he gets the largest pie of CPU and resource. Shall we attempt to ensure fairness of resource users in the critical-section code in the library? Or shall we leave it up to the applications to ensure fairness by being liberal, not greedy?

    To my knowledge, adding code to ensure fairness to the common library would be an overwhelming task. On the other hand, relying on the applications will also never ensure 100% fairness. The application which does very little work after using the shared resource shall win the race, whereas the application which does heavy processing after its work with the shared resource shall always starve. What is the best practice in this case? Where do we ensure fairness, and how?

    Sincerely, Srinivas Nayak
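
    A hedged illustration, in Java rather than the P(mutex)/V(mutex) primitives of the question: one way to get FIFO-ish ordering is to make the lock in the shared library itself fair, so the waiter that has been queued longest acquires next instead of whoever re-requests fastest.

        import java.util.concurrent.locks.ReentrantLock;

        public class SharedResourceLibrary {

            // 'true' requests a fair ordering policy: waiting threads acquire roughly in arrival order.
            private final ReentrantLock mutex = new ReentrantLock(true);

            public void workOnSharedResource(Runnable workWithIt) {
                mutex.lock();           // P(mutex)
                try {
                    workWithIt.run();   // get_shared_resource / work_with_it
                } finally {
                    mutex.unlock();     // V(mutex)
                }
            }
        }

    A fair lock trades some throughput for ordering; for separate processes sharing a semaphore, the analogous idea is a ticket/turn scheme layered on top of the existing mutex.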

    Read the article

  • Adding custom filter in spring framework problem?

    - by user298768
    Hello there, I am trying to make a custom AuthenticationProcessingFilter to save some user data in the session after a successful login. Here's my filter:

        package projects.internal;

        import java.io.IOException;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.security.Authentication;
        import org.springframework.security.ui.webapp.AuthenticationProcessingFilter;

        public class MyAuthenticationProcessingFilter extends AuthenticationProcessingFilter {

            protected void onSuccessfulAuthentication(HttpServletRequest request,
                    HttpServletResponse response, Authentication authResult) throws IOException {
                super.onSuccessfulAuthentication(request, response, authResult);
                request.getSession().setAttribute("myValue", "My value is set");
            }
        }

    and here's my security.xml file:

        <beans:beans xmlns="http://www.springframework.org/schema/security"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                http://www.springframework.org/schema/security
                http://www.springframework.org/schema/security/spring-security-3.0.xsd">

            <global-method-security pre-post-annotations="enabled">
            </global-method-security>

            <http use-expressions="true" auto-config="false"
                entry-point-ref="authenticationProcessingFilterEntryPoint">
                <intercept-url pattern="/" access="permitAll" />
                <intercept-url pattern="/images/**" filters="none" />
                <intercept-url pattern="/scripts/**" filters="none" />
                <intercept-url pattern="/styles/**" filters="none" />
                <intercept-url pattern="/p/login.jsp" filters="none" />
                <intercept-url pattern="/p/register" filters="none" />
                <intercept-url pattern="/p/**" access="isAuthenticated()" />
                <form-login login-processing-url="/j_spring_security_check"
                    login-page="/p/login.jsp" authentication-failure-url="/p/login_error.jsp" />
                <logout />
            </http>

            <authentication-manager alias="authenticationManager">
                <authentication-provider>
                    <jdbc-user-service data-source-ref="dataSource"/>
                </authentication-provider>
            </authentication-manager>

            <beans:bean id="authenticationProcessingFilter"
                class="projects.internal.MyAuthenticationProcessingFilter">
                <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" />
            </beans:bean>

            <beans:bean id="authenticationProcessingFilterEntryPoint"
                class="org.springframework.security.ui.webapp.AuthenticationProcessingFilterEntryPoint">
            </beans:bean>
        </beans:beans>

    It gives an error at this line:

        <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" />

    Multiple annotations found at this line: cvc-attribute.3, cvc-complex-type.4, cvc-enumeration-valid.

    What is the problem? Thanks in advance.

    Read the article

  • Android: dynamically setting links to text in strings.xml

    - by Martyn
    I'm trying to make an app with localisation built in, but I want a way to create a web link within the text, with the URL being defined elsewhere (for ease of maintenance). So, I have my links in res/values/strings.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            ...
            <string name="link1">http://some.link.com</string>
            <string name="link2">http://some.link2.com</string>
        </resources>

    and my localised text in res/values-en-rGB/strings.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            ...
            <string name="sampleText">Sample text\nMore text and link1\nMore text and link2.</string>
        </resources>

    I've not tested this bit, but from the localization section of developer.android.com it says that this approach to reducing content duplication should work, although I'm not sure which folder I should put Italian in, for example. Would it be in 'res/values-it-rIT/strings.xml'? Let's assume that I have various other languages too.

    I'm looking for a way of taking the base localised 'sampleText', inserting my html links into it, and getting them to work when clicked on. I've tried two approaches so far.

    1. Putting some formatting in the 'sampleText' (%s):

        <string name="sampleText">Sample text\nMore text and <a href="%s">link1</a>\nMore text and <a href="%s">link2</a>.</string>

    and then processing the text like this:

        TextView tv = (TextView) findViewById(R.id.textHolder);
        tv.setText(getResources().getString(R.string.sampleText,
                getResources().getString(R.string.link1),
                getResources().getString(R.string.link2)));

    But this didn't work when I click on the link, even though the link text is being put into the correct places.

    2. I tried to use Linkify, but the regular expression route may be difficult as I'm looking at supporting non-Latin based languages. I tried to put a custom xml tag around the link text and then do something like this:

        Pattern wordMatcher = Pattern.compile("<span1>.*</span1>");
        String viewURL = "content://" + getResources().getString(R.string.someLink);
        Linkify.addLinks(tv, wordMatcher, viewURL);

    But this didn't work either.

    So, I'd like to know if there's a way of dynamically adding multiple URLs to different sections of the same text which will link to web content? Thank you, Martyn
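
    For illustration, a hedged sketch of approach 1 with two additions that are often the missing pieces: (a) it assumes the anchor tags in the string resource are stored HTML-escaped (&lt;a href="%1$s"&gt;...) so they survive getString()'s formatting, and (b) the TextView is given a LinkMovementMethod, without which the spans render but never react to taps. The resource ids textHolder, sampleText, link1 and link2 come from the question; the layout name is an assumption.

        import android.app.Activity;
        import android.os.Bundle;
        import android.text.Html;
        import android.text.method.LinkMovementMethod;
        import android.widget.TextView;

        public class LinkedTextActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);   // assumed layout containing the TextView

                TextView tv = (TextView) findViewById(R.id.textHolder);

                // Build the localised text with the centrally maintained URLs substituted in.
                String html = getString(R.string.sampleText,
                        getString(R.string.link1),
                        getString(R.string.link2));

                // Convert the (now unescaped) <a href> markup into clickable spans.
                tv.setText(Html.fromHtml(html));
                tv.setMovementMethod(LinkMovementMethod.getInstance());
            }
        }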

    Read the article

  • Adding objects to an NSMutableArray, order seems odd when parsing from an XML file

    - by diatrevolo
    Hello: I am parsing an XML file for two elements: "title" and "noType". Once these are parsed, I am adding them to an object called aMaster, an instance of my own Master class that contains NSString variables. I am then adding these instances to an NSMutableArray on a singleton, in order to call them elsewhere in the program.

    The problem is that when I call them, they don't seem to be on the same NSMutableArray index... each index contains either the title OR the noType element, when it should be both... can anyone see what I may be doing wrong? Below is the code for the parser. Thanks so much!!

        #import "XMLParser.h"
        #import "Values.h"
        #import "Listing.h"
        #import "Master.h"

        @implementation XMLParser

        @synthesize sharedSingleton, aMaster;

        - (XMLParser *) initXMLParser {
            [super init];
            sharedSingleton = [Values sharedValues];
            aMaster = [[Master init] alloc];
            return self;
        }

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
                namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName
                attributes:(NSDictionary *)attributeDict {

            aMaster = [[Master alloc] init];

            //Extract the attribute here.
            if ([elementName isEqualToString:@"intro"]) {
                aMaster.intro = [attributeDict objectForKey:@"enabled"];
            } else if ([elementName isEqualToString:@"item"]) {
                aMaster.item_type = [attributeDict objectForKey:@"type"];
                //NSLog(@"Did find item with type %@", [attributeDict objectForKey:@"type"]);
                //NSLog(@"Reading id value :%@", aMaster.item_type);
            } else {
                //NSLog(@"No known elements");
            }
            //NSLog(@"Processing Element: %@", elementName); //HERE
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if (!currentElementValue)
                currentElementValue = [[NSMutableString alloc] initWithString:string];
            else {
                [currentElementValue appendString:string]; //[tempString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]];
                CFStringTrimWhitespace((CFMutableStringRef)currentElementValue);
            }
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
                namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {

            if ([elementName isEqualToString:@"item"]) {
                [sharedSingleton.master addObject:aMaster];
                NSLog(@"Added %@ and %@ to the shared singleton", aMaster.title, aMaster.noType);
                //Only having one at a time added... don't know why
                [aMaster release];
                aMaster = nil;
            } else if ([elementName isEqualToString:@"title"]) {
                [aMaster setValue:currentElementValue forKey:@"title"];
            } else if ([elementName isEqualToString:@"noType"]) {
                [aMaster setValue:currentElementValue forKey:@"noType"];
                //NSLog(@"%@ should load into the singleton", aMaster.noType);
            }

            NSLog(@"delimiter");
            NSLog(@"%@ should load into the singleton", aMaster.title);
            NSLog(@"%@ should load into the singleton", aMaster.noType);

            [currentElementValue release];
            currentElementValue = nil;
        }

        - (void) dealloc {
            [aMaster release];
            [currentElementValue release];
            [super dealloc];
        }

        @end

    Read the article

  • What would I use to remove escaped HTML from large sets of data?

    - by Elizabeth Buckwalter
    Our database is filled with articles retrieved from RSS feeds. I was unsure of what data I would be getting, and how much filtering was already set up (the WP-O-Matic Wordpress plugin using the SimplePie library). This plugin does some basic encoding before insertion, using Wordpress's built-in post insert function, which also does some filtering. I've figured out most of the filters before insertion, but now I have whacko data that I need to remove.

    This is an example of the whacko data: one field has the content I want at the front, but with this part at the end that needs removing:

        <img src="http://feeds.feedburner.com/~ff/SoundOnTheSound?i=xFxEpT2Add0:xFbIkwGc-fk:V_sGLiPBpWU" border="0"></img>
        <img src="http://feeds.feedburner.com/~ff/SoundOnTheSound?d=qj6IDK7rITs" border="0"></img>
        &lt;img src=&quot;http://feeds.feedburner.com/~ff/SoundOnTheSound?i=xFxEpT2Add0:xFbIkwGc-fk:D7DqB2pKExk&quot;

    Notice how some of the images are escaped and some aren't. I believe this has to do with the last part being cut off so as to be unrecognizable as an html tag, which then caused it to be html encoded. Another field has only this, which is now filtered before insertion, but I have to get rid of the others:

        &lt;img src=&quot;http://farm3.static.flickr.com/2183/2289902369_1d95bcdb85.jpg&quot; alt=&quot;post_img&quot; width=&quot;80&quot;

    (All examples are on one line, but broken up for readability.)

    Question: What is the best way to work with the above escaped html (or portion of an html tag)? I can do it in Perl, PHP, SQL, Ruby, and even Python. I believe Perl to be the best at text parsing, so that's why I used the Perl tag. And PHP times out on large database operations, so that's pretty much out unless I wanted to do batch processing and whatnot.

    PS One of the nice things about using Wordpress's insert post function is that if you use PHP's strip_tags function to strip out all html, the insert post function will insert <p> at the paragraph points.

    Let me know if there's anything more that I can answer. Some articles that didn't quite answer my questions:
    (http://stackoverflow.com/questions/2016751/remove-text-from-within-a-database-text-field)
    (http://stackoverflow.com/questions/462831/regular-expression-to-escape-html-ampersands-while-respecting-cdata)
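
    A small, hedged sketch (in Java, though the question leans toward Perl/PHP) of the two-step cleanup the question implies: first undo the handful of HTML entities involved, then strip any trailing <img ...> fragment, including a truncated one with no closing '>'. The entity list and the regex are assumptions tuned to the samples shown above.

        import java.util.regex.Pattern;

        public class FeedCleanup {

            private static final Pattern TRAILING_IMG =
                    Pattern.compile("(<img\\b[^>]*>?(</img>)?\\s*)+$", Pattern.CASE_INSENSITIVE);

            static String unescapeBasicEntities(String s) {
                return s.replace("&lt;", "<")
                        .replace("&gt;", ">")
                        .replace("&quot;", "\"")
                        .replace("&amp;", "&");   // do the ampersand last
            }

            static String cleanField(String field) {
                String unescaped = unescapeBasicEntities(field);
                // Remove one or more trailing <img ...> fragments (escaped-then-truncated ones included).
                return TRAILING_IMG.matcher(unescaped).replaceAll("").trim();
            }

            public static void main(String[] args) {
                String sample = "Article text here. "
                        + "<img src=\"http://feeds.feedburner.com/~ff/Sound?d=qj6IDK7rITs\" border=\"0\"></img> "
                        + "&lt;img src=&quot;http://feeds.feedburner.com/~ff/Sound?i=xFxEpT2Add0&quot;";
                System.out.println(cleanField(sample));   // -> "Article text here."
            }
        }

    The same unescape-then-strip idea translates directly to a Perl or SQL UPDATE pass over the existing rows.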

    Read the article

  • Advice for Architecture Design Logic for software application

    - by Prasad
    Hi, I have a framework of basic to complex set of objects/classes (C++) running into around 500. With some rules and regulations - all these objects can communicate with each other and hence can cover most of the common queries in the domain. My Dream: I want to provide these objects as icons/glyphs (as I learnt recently) on a workspace. All these objects can be dragged/dropped into the workspace. They have to communicate only through their methods(interface) and in addition to few iterative and conditional statements. All these objects are arranged finally to execute a protocol/workflow/dataflow/process. After drawing the flow, the user clicks the Execute/run button. All the user interaction should be multi-touch enabled. The best way to show my dream is : Jeff Han's Multitouch Video. consider Jeff is playing with my objects instead of the google maps. :-) it should be like playing a jigsaw puzzle. Objective: how can I achieve the following while working on this final product: a) the development should be flexible to enable provision for web services b) the development should enable easy web application development c) The development should enable client-server architecture - d) further it should also enable mouse based drag/drop desktop application like Adobe programs etc. I mean to say: I want to economize on investments. Now I list my efforts till now in design : a) Created an Editor (VB) where the user writes (manually) the object / class code b) On Run/Execute, the code is copied into a main() function and passed to interpreter. c) Catch the output and show it in the console. The interpreter can be separated to become a server and the Editor can become the client. This needs lot of standard client-server architecture work. But some how I am not comfortable in the tightness of this system. Without interpreter is there much faster and better embeddable solution to this? - other than writing a special compiler for these objects. Recently learned about AXIS-C++ can help me - looks like - a friend suggested. Is that the way to go ? Here are my questions: (pl. consider me a self taught programmer and NOT my domain) a) From the stage of C++ objects to multi-touch product, how can I make sure I will develop the parallel product/service models as well.? What should be architecture aspects I should consider ? b) What technologies are best suited for this? c) If I am thinking of moving to Cloud Computing, how difficult/ how redundant / how unnecessary my efforts will be ? d) How much time in months would it take to get the first beta ? I take the liberty to ask if any of the experts here are interested in this project, please email me: [email protected] Thank you for any help. Looking forward.

    Read the article

  • Rails: AJAX Controller JS not firing...

    - by neezer
    I'm having an issue with one of my controller's AJAX functionality. Here's what I have:

        class PhotosController < ApplicationController
          # ...
          def create
            @photo = Photo.new(params[:photo])
            @photo.image_content_type = MIME::Types.type_for(@photo.image_file_name).to_s
            @photo.image_width = Paperclip::Geometry.from_file(params[:photo][:image]).width.to_i
            @photo.image_height = Paperclip::Geometry.from_file(params[:photo][:image]).height.to_i
            @photo.save!

            respond_to do |format|
              format.js
            end
          end
          # ...
        end

    This is called through a POST request sent by this code:

        $(function() {
          // add photos link
          $('a.add-photos-link').colorbox({
            overlayClose: false,
            onComplete: function() {
              wire_add_photo_modal();
            }
          });

          function wire_add_photo_modal() {
            <% session_key = ActionController::Base.session_options[:key] %>
            $('#upload_photo').uploadify({
              uploader: '/swf/uploadify.swf',
              script: '/photos',
              cancelImg: '/images/buttons/cancel.png',
              buttonText: 'Upload Photo(s)',
              auto: true,
              queueID: 'queue',
              fileDataName: 'photo[image]',
              scriptData: {
                '<%= session_key %>': '<%= u cookies[session_key] %>',
                commit: 'Adding Photo',
                controller: 'photos',
                action: 'create',
                '_method': 'post',
                'photo[gallery_id]': $('#gallery_id').val(),
                'photo[user_id]': $('#user_id').val(),
                authenticity_token: encodeURIComponent('<%= u form_authenticity_token if protect_against_forgery? %>')
              },
              multi: true
            });
          }
        });

    Finally, I have my response code in app/views/photos/create.js.erb:

        alert('photo added!');

    My log file shows that the request was successful (the photo was successfully uploaded), and it even says that it rendered the create action, yet I never get the alert. My browser shows NO javascript errors. Here's the log AFTER a request from the above POST request is submitted:

        Processing PhotosController#create (for 127.0.0.1 at 2010-03-16 14:35:33) [POST]
          Parameters: {"Filename"=>"tumblr_kx74k06IuI1qzt6cxo1_400.jpg", "photo"=>{"user_id"=>"1", "image"=>#<File:/tmp/RackMultipart20100316-54303-7r2npu-0>}, "commit"=>"Adding Photo", "_edited_session"=>"edited", "folder"=>"/kakagiloon/", "authenticity_token"=>"edited", "action"=>"create", "_method"=>"post", "Upload"=>"Submit Query", "controller"=>"photos"}
        [paperclip] Saving attachments.
        [paperclip] saving /public/images/assets/kakagiloon/thumbnail/tumblr_kx74k06IuI1qzt6cxo1_400.jpg
        [paperclip] saving /public/images/assets/kakagiloon/profile/tumblr_kx74k06IuI1qzt6cxo1_400.jpg
        [paperclip] saving /public/images/assets/kakagiloon/original/tumblr_kx74k06IuI1qzt6cxo1_400.jpg
        Rendering photos/create
        Completed in 248ms (View: 1, DB: 6) | 200 OK [http://edited.local/photos]

    NOTE: I edited out all the SQL statements and I put "edited" in place of sensitive info. What gives? Why aren't I getting my alert();? Please let me know if you need any more info to help me solve this issue! Thanks.

    Read the article

  • Writing csv files with python with exact formatting parameters

    - by Ben Harrison
    I'm having trouble processing some csv data files for a project. The project's programmer has moved on to greener pastures, and now I'm trying to finish up the data analysis (I did/do the statistical analysis). The programmer suggested using python/csv reader to help break down the files, which I've had some success with, but not in a way I can use.

    This code is a little different from what I was trying before. I am essentially attempting to create an array. In the raw data format, the first 7 rows contain no data, and then each column contains 50 experiments, each with 4000 rows, for 200000 some rows total. What I want to do is take each column and make it an individual csv file, with each experiment in its own column. So it would be an array of 50 columns and 4000 rows for each data type.

    The code here does break down the correct values, and I think the logic is okay, but it is breaking down the opposite of how I want it. I want the separators without quotes (the commas and spaces) and I want the element values in quotes. Right now it is doing just the opposite for both: element values with no quotes, and the separators in quotes. I've spent several hours trying to figure out how to do this, to no avail.

        import csv

        ifile = open('00_follow_maverick.csv')
        epistemicfile = open('00_follower_maverick_EP.csv', 'w')

        reader = csv.reader(ifile)

        colnum = 0
        rownum = 0
        y = 0
        z = 8

        for column in reader:
            rownum = 4000 * y + z
            for element in column:
                writer = csv.writer(epistemicfile)
                if y <= 50:
                    y = y + 1
                    writer.writerow([element])
                    writer.writerow(',')
                    rownum = x * y + z
                if y > 50:
                    y = 0
                    z = z + 1
                    writer.writerow(' ')
                    rownum = x * y + z
                if z >= 4008:
                    break

    What is going on: I am taking each row in the raw data file in iterations of 4000, so that I can separate them with commas for the 50 experiments. When y, the experiment indicator here, reaches 50, it resets back to experiment 0 and adds 1 to z, which tells it which row to look at, by the formula of 4000 * y + z. When it completes the rows for all 50 experiments, it is finished. The problem here is that I don't know how to get python to write the actual values in quotes, and my separators outside of quotes.

    Any help will be most appreciated. Apologies if this seems a stupid question; I have no programming experience, this is my first attempt ever. Thank you.
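
    To make the target layout concrete, here is a tiny sketch in Java (the question itself is Python, so this is format illustration only): every value wrapped in double quotes, the comma separators left bare. The file name and data are made up.

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.ArrayList;
        import java.util.List;

        public class QuotedCsvWriter {
            public static void main(String[] args) throws IOException {
                String[][] rows = {
                    {"0.125", "0.250", "0.500"},
                    {"0.333", "0.667", "1.000"},
                };
                try (PrintWriter out = new PrintWriter("experiments.csv")) {
                    for (String[] row : rows) {
                        List<String> quoted = new ArrayList<>();
                        for (String value : row) {
                            quoted.add("\"" + value.replace("\"", "\"\"") + "\"");  // quote the value, not the comma
                        }
                        out.println(String.join(",", quoted));   // -> "0.125","0.250","0.500"
                    }
                }
            }
        }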

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments. i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store as a column the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/ CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have: ROLE_ID = 4 PARENT_ROLE_ID = NULL EMPLOYEE_NUMBER = 213423 COMPANY_LOCATION = CEO ROLE_ID = 5 PARENT_ROLE_ID = 4 EMPLOYEE_NUMBER = 838221 COMPANY_LOCATION = ACCOUNTING Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company. e.g. the committee members should be able to see the committee organizer and vice-versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have President at the top, 5 departments directly beneath him, 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company I run a query that executes two inline views: A) Hierarchically query for all assignments in this virtual organization and using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location and B) Hierarchically retrieve all the assignments the individual logged in has and using the SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean determined by joining with B) which flags whether the logged in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing if: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?

    Read the article

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application. One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that after a while of running the memory allocated and freed by the heap was never returned and reported by the ullAvailVirtual number. After hours of intensive working, the ullAvailVirtual number no longer would accurately report the amount of memory available. However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing, escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that". So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space adding up all regions that weren't MEM_FREE and returned a value for GetMappedFileNameA(). My thinking was that the PageFileUsage was essentially 'private bytes' so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sorta) work, at least it doesn't cause crashes like the heap walker method. However, when both methods are enabled, the values are not the same. So one of the methods is wrong. So, StackOverflow world...how would you implement this? which method is more promising, or do you have a third, better method? should I go back to the original method, and further debug the odd errors? should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that it is getting 'dangerous', and they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color based system (red, yellow, green, which I could base on more factors than just a single number)

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?.. If I'm not mistaking, Informix's ROWID is a SERIAL(INT) which allows for a max. of 2GB nrows, or maybe it uses INT9 for TB's nrows?.. However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't be oftenly used, since a rows ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps, it could be used for implementing the query progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: A "Google-like" estimate of how many rows meet a query's criteria, display it's progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at anytime and start displaying the qualifying rows as they are being put into the current list, while it continues searching?.. This is just one example, perhaps we could think other neat/useful features, the ingridients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-if's" are more OLAP, so let's try to add more control and intelligence to it? So, therefore, being able to more precisely control, not "hint-influence" a query is what's needed and therefore it would be necessary to know how the optimizers logic is programmed. We can then have Dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of indexes nodes at-a-time instead of one-by-one, etc. etc.

    Read the article

  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files and have subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible.

    Goals:

    - Find pairs of directories that have many similar files between them.
    - Generate a short list of directory pairs that can be synchronized with a 2-way merge to eliminate duplicates.
    - Should operate recursively (there may be nested duplicates of higher-level directories).
    - Run time and storage should be O(n log n) in the number of directories and files.
    - Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+).
    - Optional: generate an ancestry and change-set between folders.
    - Optional: sort the merge operations by how many duplicates they can eliminate.

    I know how to use hashes to find duplicate files in roughly O(n) space, but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children.

    EDIT: some clarification. The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts as possible. It's effectively constructing an ancestry tree showing which folders are derived from each other.

    The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on it. Then I might back up an intermediate version to a flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can choose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open source util. I think the same tricks might be useful for comparing XML trees though.
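
    As a hedged sketch of only the first step discussed above (not a full solution): hash every file, group directories by the content hashes they contain, then score each directory pair by how many hashes they share. It ignores the recursion/ancestry and the O(n log n) / on-disk requirements; the class, method names, and the 0.5 threshold are invented.

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.util.*;
        import java.util.stream.Stream;

        public class DirSimilarity {

            static String sha1(Path file) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                byte[] digest = md.digest(Files.readAllBytes(file));
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            /** directory -> set of content hashes of the files directly inside it */
            static Map<Path, Set<String>> hashTree(Path root) throws Exception {
                Map<Path, Set<String>> byDir = new HashMap<>();
                try (Stream<Path> paths = Files.walk(root)) {
                    for (Path p : (Iterable<Path>) paths::iterator) {
                        if (Files.isRegularFile(p)) {
                            byDir.computeIfAbsent(p.getParent(), d -> new HashSet<>()).add(sha1(p));
                        }
                    }
                }
                return byDir;
            }

            /** shared hashes / size of the smaller set; 1.0 means one dir's files are contained in the other */
            static double similarity(Set<String> a, Set<String> b) {
                Set<String> shared = new HashSet<>(a);
                shared.retainAll(b);
                return (double) shared.size() / Math.min(a.size(), b.size());
            }

            public static void main(String[] args) throws Exception {
                Map<Path, Set<String>> dirs = hashTree(Paths.get(args.length > 0 ? args[0] : "."));
                List<Path> keys = new ArrayList<>(dirs.keySet());
                for (int i = 0; i < keys.size(); i++) {
                    for (int j = i + 1; j < keys.size(); j++) {
                        double s = similarity(dirs.get(keys.get(i)), dirs.get(keys.get(j)));
                        if (s > 0.5) {  // arbitrary threshold: candidate pair for a 2-way merge
                            System.out.printf("%.2f  %s  <->  %s%n", s, keys.get(i), keys.get(j));
                        }
                    }
                }
            }
        }

    Pairs that score near 1.0 are natural candidates for the 2-way merges described above; the ancestry and recursion requirements would sit on top of this scoring.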

    Read the article

  • Avoiding stack overflows in wrapper DLLs

    - by peachykeen
    I have a program to which I'm adding fullscreen post-processing effects. I do not have the source for the program (it's proprietary, although a developer did send me a copy of the debug symbols, .map format). I have the code for the effects written and working, no problems. My issue now is linking the two. I've tried two methods so far: Use Detours to modify the original program's import table. This works great and is guaranteed to be stable, but the user's I've talked to aren't comfortable with it, it requires installation (beyond extracting an archive), and there's some question if patching the program with Detours is valid under the terms of the EULA. So, that option is out. The other option is the traditional DLL-replacement. I've wrapped OpenGL (opengl32.dll), and I need the program to load my DLL instead of the system copy (just drop it in the program folder with the right name, that's easy). I then need my DLL to load the Cg framework and runtime (which relies on OpenGL) and a few other things. When Cg loads, it calls some of my functions, which call Cg functions, and I tend to get stack overflows and infinite loops. I need to be able to either include the Cg DLLs in a subdirectory and still use their functions (not sure if it's possible to have my DLLs import table point to a DLL in a subdirectory) or I need to dynamically link them (which I'd rather not do, just to simplify the build process), something to force them to refer to the system's file (not my custom replacement). The entire chain is: Program loads DLL A (named opengl32.dll). DLL A loads Cg.dll and dynamically links (GetProcAddress) to sysdir/opengl32.dll. I now need Cg.dll to also refer to sysdir/opengl32.dll, not DLL A. How would this be done? Edit: How would this be done easily without using GetProcAddress? If nothing else works, I'm willing to fall back to that, but I'd rather not if at all possible. Edit2: I just stumbled across the function SetDllDirectory in the MSDN docs (on a totally unrelated search). At first glance, that looks like what I need. Is that right, or am I misjudging? (off to test it now) Edit3: I've solved this problem by doing thing a bit differently. Instead of dropping an OpenGL32.dll, I've renamed my DLL to DInput.dll. Not only does it have the advantage of having to export one function instead of well over 120 (for the program, Cg, and GLEW), I don't have to worry about functions running back in (I can link to OpenGL as usual). To get into the calls I need to intercept, I'm using Detours. All in all, it works much better. This question, though, is still an interesting problem (and hopefully will be useful for anyone else trying to do crazy things in the future). Both the answers are good, so I'm not sure yet which to pick...

    Read the article

  • .NET: Avoidance of custom exceptions by utilising existing types, but which?

    - by Mr. Disappointment
    Consider the following code (ASP.NET/C#): private void Application_Start(object sender, EventArgs e) { if (!SetupHelper.SetUp()) { throw new ShitHitFanException(); } } I've never been too hesitant to simply roll my own exception type, basically because I have found (bad practice, or not) that mostly a reasonable descriptive type name gives us enough as developers to go by in order to know what happened and why something might have happened. Sometimes the existing .NET exception types even accommodate these needs - regardless of the message. In this particular scenario, for demonstration purposes only, the application should die a horrible, disgraceful death should SetUp not complete properly (as dictated by its return value), but I can't find an already existing exception type in .NET which would seem to suffice; though, I'm sure one will be there and I simply don't know about it. Brad Abrams posted this article that lists some of the available exception types. I say some because the article is from 2005, and, although I try to keep up to date, it's a more than plausible assumption that more have been added to future framework versions that I am still unaware of. Of course, Visual Studio gives you a nicely formatted, scrollable list of exceptions via Intellisense - but even on analysing those, I find none which would seem to suffice for this situation... ApplicationException: ...when a non-fatal application error occurs The name seems reasonable, but the error is very definitely fatal - the app is dead. ExecutionEngineException: ...when there is an internal error in the execution engine of the CLR Again, sounds reasonable, superficially; but this has a very definite purpose and to help me out here certainly isn't it. HttpApplicationException: ...when there is an error processing an HTTP request Well, we're running an ASP.NET application! But we're also just pulling at straws here. InvalidOperationException: ...when a call is invalid for the current state of an instance This isn't right but I'm adding it to the list of 'possible should you put a gun to my head, yes'. OperationCanceledException: ...upon cancellation of an operation the thread was executing Maybe I wouldn't feel so bad using this one, but I'd still be hijacking the damn thing with little right. You might even ask why on earth I would want to raise an exception here but the idea is to find out that if I were to do so then do you know of an appropriate exception for such a scenario? And basically, to what extent can we piggy-back on .NET while keeping in line with rationality?

    Read the article

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure of the best way to do it, so I'm seeking advice from more experienced Stack Overflow users :). Basically we've got devices around the country communicating with our servers (or a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The question is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like that? What is the best way to do it?

    Example: a device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (the details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware for the device's time, the driver translates the request into the device's language (protocol) and sends the message, the device responds, and the driver translates the response back for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate, and there are several versions of the device which use different protocols, etc., but which would all use the same time-sync service. The goal is to abstract the devices through the drivers so that the same service can communicate with all of them.

    Another example: we find out that remote communications with the device are down, so we send somebody out with a PDA; they connect to the device using a USB cable and start up an application which has the same functionality as the time-sync service. Again a middleware instance is created (on the PDA) to translate the communication between the application and the device, this time over USB/serial rather than TCP/IP as in the previous example. I hope it makes more sense now :) Cheers, Tom
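    One common shape for this kind of middleware is a small abstract media interface that the protocol driver talks to, with one concrete implementation per transport. A minimal sketch of that idea follows; the class and method names are illustrative only, not from any particular framework:

        #include <cstdint>
        #include <string>
        #include <vector>

        // The generic media layer: the driver only ever sees raw bytes,
        // never TCP, USB, IR or RF specifics.
        class IMedia
        {
        public:
            virtual ~IMedia() {}
            virtual void send(const std::vector<uint8_t>& frame) = 0;
            virtual std::vector<uint8_t> receive() = 0; // blocking read of one frame
        };

        // One concrete transport; UsbMedia, IrMedia, ... implement the same interface.
        class TcpMedia : public IMedia
        {
        public:
            TcpMedia(const std::string& host, int port) { /* open the socket (boost.asio, BSD sockets, ...) */ }
            void send(const std::vector<uint8_t>& frame) { /* write the frame to the socket */ }
            std::vector<uint8_t> receive() { return std::vector<uint8_t>(); /* read one frame */ }
        };

        // The device driver translates service-level requests into the proprietary
        // protocol; it neither knows nor cares which transport it is running over.
        class DeviceDriver
        {
        public:
            explicit DeviceDriver(IMedia& media) : media_(media) {}

            // e.g. the time-sync service asking for the device clock
            std::vector<uint8_t> getDeviceTime()
            {
                media_.send(encodeTimeRequest());
                return media_.receive(); // decode into a proper type in real code
            }

        private:
            std::vector<uint8_t> encodeTimeRequest()
            {
                return std::vector<uint8_t>(1, 0x01); // hypothetical "get time" opcode
            }

            IMedia& media_;
        };

    The same DeviceDriver can then be constructed over a TcpMedia on the server or a serial implementation on the PDA, and protocol differences between device versions can be handled inside the driver rather than in the transport layer.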

    Read the article

  • Google App Engine 1.3.1 <admin-console> Issue

    - by Taylor L
    I attempted to add an <admin-console> section to my appengine-web.xml and I got the exception below. The <admin-console> element is a valid element according to the appengine-web.xsd, and it's also documented in the App Engine docs. Any ideas as to what is wrong?

        <admin-console>
            <page name="My Admin" url="/app/admin" />
        </admin-console>

    Feb 14, 2010 12:40:09 AM com.google.apphosting.utils.config.AppEngineWebXmlReader readAppEngineWebXml
    SEVERE: Received exception processing C:/development/taylor/myapp/target/myapp-web-0.0.1-SNAPSHOT\WEB-INF/appengine-web.xml
    com.google.apphosting.utils.config.AppEngineConfigException: Unrecognized element <admin-console>
        at com.google.apphosting.utils.config.AppEngineWebXmlProcessor.processSecondLevelNode(AppEngineWebXmlProcessor.java:99)
        at com.google.apphosting.utils.config.AppEngineWebXmlProcessor.processXml(AppEngineWebXmlProcessor.java:46)
        at com.google.apphosting.utils.config.AppEngineWebXmlReader.processXml(AppEngineWebXmlReader.java:94)
        at com.google.apphosting.utils.config.AppEngineWebXmlReader.readAppEngineWebXml(AppEngineWebXmlReader.java:61)
        at com.google.appengine.tools.admin.Application.<init>(Application.java:88)
        at com.google.appengine.tools.admin.Application.readApplication(Application.java:120)
        at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:107)
        at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:58)
        at com.google.appengine.tools.admin.AppCfg.main(AppCfg.java:54)
        at net.kindleit.gae.EngineGoalBase.runAppCfg(EngineGoalBase.java:140)
        at net.kindleit.gae.DeployGoal.execute(DeployGoal.java:38)
        at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:579)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:498)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegmentForProject(DefaultLifecycleExecutor.java:265)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:191)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:149)
        at org.apache.maven.DefaultMaven.execute_aroundBody0(DefaultMaven.java:223)
        at org.apache.maven.DefaultMaven.execute_aroundBody1$advice(DefaultMaven.java:304)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:1)
        at org.apache.maven.embedder.MavenEmbedder.execute_aroundBody2(MavenEmbedder.java:904)
        at org.apache.maven.embedder.MavenEmbedder.execute_aroundBody3$advice(MavenEmbedder.java:304)
        at org.apache.maven.embedder.MavenEmbedder.execute(MavenEmbedder.java:1)
        at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:176)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:63)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
        at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:408)
        at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:351)
        at org.codehaus.classworlds.Launcher.main(Launcher.java:31)

    Read the article

  • Grouping geographical shapes

    - by grenade
    I am using Dundas Maps and attempting to draw a map of the world where countries are grouped into regions that are specific to a business implementation. I have shape data (points and segments) for each country in the world. I can combine countries into regions by adding all the points and segments for the countries within a region to a new region shape:

        foreach (var region in GetAllRegions())
        {
            var regionShape = new Shape { Name = region.Name };
            foreach (var country in GetCountriesInRegion(region.Id))
            {
                var countryShape = GetCountryShape(country.Id);
                regionShape.AddSegments(countryShape.ShapeData.Points, countryShape.ShapeData.Segments);
            }
            map.Shapes.Add(regionShape);
        }

    The problem is that the country border lines still show up within a region, and I want to remove them so that only regional borders show up. Dundas polygons must start and end at the same point, and this is the case for all the country shapes. Now I need an algorithm that can: 1. determine where country borders intersect at a regional border, so that I can join the regional border segments; 2. determine which country borders are not regional borders, so that I can discard them; 3. sort the resulting regional points so that they sequentially describe the shape boundaries. Below is where I have gotten to so far with the map. You can see that the country borders still need to be removed. For example, the border between Mongolia and China should be discarded, whereas the border between Mongolia and Russia should be retained. The reason I need to retain a regional border is that the region colors will be significant in conveying information, but adjacent regions may be the same color. The regions can change to include or exclude countries, and this is why the regional shaping must be dynamic.

    EDIT: I now know that what I am looking for is a UNION of polygons. David Lean explains how to do it using the spatial functions in SQL Server 2008, which might be an option, but my efforts have come to a halt because the resulting polygon union is so complex that SQL truncates it at 43,680 characters. I'm now trying to either find a workaround for that or find a way of doing the union in code.
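    For the in-code route, one way to tackle the "which borders to discard" step is to note that a segment shared by two countries in the same region is an internal border, while a segment that occurs only once lies on the regional outline. A rough sketch of that idea follows (written in C++ purely for illustration; the Point/Segment types are assumed, and the approach relies on adjacent countries sharing exactly matching border coordinates, so points may need snapping or rounding first):

        #include <map>
        #include <utility>
        #include <vector>

        struct Point   { double x, y; };
        struct Segment { Point a, b; };

        // Key that treats (a,b) and (b,a) as the same undirected segment.
        typedef std::pair<std::pair<double, double>, std::pair<double, double> > SegKey;

        static SegKey MakeKey(const Point& a, const Point& b)
        {
            std::pair<double, double> p1(a.x, a.y), p2(b.x, b.y);
            return p1 < p2 ? SegKey(p1, p2) : SegKey(p2, p1);
        }

        // Given the segments of every country in one region, keep only the segments
        // that occur exactly once: shared (internal) borders occur twice and are dropped.
        std::vector<Segment> RegionalOutline(const std::vector<std::vector<Segment> >& countries)
        {
            std::map<SegKey, std::pair<Segment, int> > counts;
            for (size_t c = 0; c < countries.size(); ++c)
            {
                for (size_t s = 0; s < countries[c].size(); ++s)
                {
                    const Segment& seg = countries[c][s];
                    std::pair<Segment, int>& entry = counts[MakeKey(seg.a, seg.b)];
                    entry.first = seg;
                    entry.second += 1;
                }
            }

            std::vector<Segment> outline;
            for (std::map<SegKey, std::pair<Segment, int> >::const_iterator it = counts.begin();
                 it != counts.end(); ++it)
            {
                if (it->second.second == 1) // unique, so it lies on the regional border
                    outline.push_back(it->second.first);
            }

            // The surviving segments can then be chained end to end (match each
            // segment's end point to another segment's start point) to produce the
            // closed, ordered boundary that the mapping control expects.
            return outline;
        }

    This sidesteps the SQL Server result-size limit entirely, at the cost of handling coordinate precision and the final chaining step yourself.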

    Read the article
