Search Results

Search found 14000 results on 560 pages for 'include guards'.


  • pathinfo vs fnmatch

    - by zaf
    There was a small debate regarding the speed of fnmatch over pathinfo here : http://stackoverflow.com/questions/2692536/how-to-check-if-file-is-php I wasn't totally convinced so decided to benchmark the two functions. Using dynamic and static paths showed that pathinfo was faster. Is my benchmarking logic and conclusion valid? I include a sample of the results which are in seconds for 100,000 iterations on my machine : dynamic path pathinfo 3.79311800003 fnmatch 5.10071492195 x1.34 static path pathinfo 1.03921294212 fnmatch 2.37709188461 x2.29 Code: <pre> <?php $iterations=100000; // Benchmark with dynamic file path print("dynamic path\n"); $i=$iterations; $t1=microtime(true); while($i-->0){ $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php'; if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid(); } $t2=microtime(true) - $t1; print("pathinfo $t2\n"); $i=$iterations; $t1=microtime(true); while($i-->0){ $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php'; if(fnmatch('*.php',$f)) $d=uniqid(); } $t3 = microtime(true) - $t1; print("fnmatch $t3\n"); print('x'.round($t3/$t2,2)."\n\n"); // Benchmark with static file path print("static path\n"); $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php'; $i=$iterations; $t1=microtime(true); while($i-->0) if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid(); $t2=microtime(true) - $t1; print("pathinfo $t2\n"); $i=$iterations; $t1=microtime(true); while($i-->0) if(fnmatch('*.php',$f)) $d=uniqid(); $t3=microtime(true) - $t1; print("fnmatch $t3\n"); print('x'.round($t3/$t2,2)."\n\n"); ?> </pre>

    Read the article

  • VS 2008 irritating copy constructor link dependency

    - by Paul Hollingsworth
    Hi guys, I've run into the following annoying and seemingly incorrect behaviour in the Visual Studio 2008 C++ compiler: Suppose I have a class library - Car.lib - that uses a "Car" class, with a header called "Car.h": class Car { public: void Drive() { Accelerate(); } void Accelerate(); }; What I'm actually trying to do is use the Car headers (for some other functions), but without having to link with Car.lib itself (the actual class is not called "Car" but I am sanitising this example). If I #include "Car.h" in the .cpp file used to build a managed C++ .dll, but never refer to Car, everything compiles and links fine. This is because I never instantiate a Car object. However, the following: namespace { class Car { public: Car(const Car& rhs) { Accelerate(); } void Accelerate(); }; } leaves me with the link error: Error 2 error LNK2001: unresolved external symbol "public: void __thiscall `anonymous namespace'::Car::Accelerate(void)" (?Accelerate@Car@?A0xce3bb5ed@@$$FQAEXXZ) CREObjectWrapper.obj CREObjectBuilderWrapper Note I've declared the whole thing inside an anonymous namespace so there's no way that the Car functions could be exported from the .DLL in any case. Declaring the copy constructor out-of-line makes no difference. i.e. the following also fails to link: class Car { public: Car(const Car& rhs); void Accelerate(); }; Car::Car(const Car& rhs) { Accelerate(); } It's something specifically to do with the copy constructor note, because the following, for example, does link: class Car { public: Car() { Accelerate(); } void Accelerate(); }; I am not a C++ standards guru but this doesn't seem correct to me. Surely the compiler still should not have had to even generate any code that calls the Car copy constructor. Can anyone confirm if this behaviour is correct? It's been a while since I used C++ - but I don't think this used to be an issue with Visual Studio 6.0 for example. Can anyone suggest a workaround that allows one to "re-use" the Accelerate method from within the copy constructor and still have the copy constructor declared inline?
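
    A minimal workaround sketch, assuming the header can be edited; CAR_HEADER_ONLY is a hypothetical, user-defined macro (not something Visual Studio provides), set only by consumers that include the header without linking Car.lib:

    <pre>
    // Sketch: compile the Accelerate() call out of the inline copy constructor
    // for header-only consumers, so no reference to Accelerate() is emitted.
    class Car {
    public:
    #ifdef CAR_HEADER_ONLY
        Car(const Car&) {}                     // assumed macro: no call, nothing to resolve
    #else
        Car(const Car& rhs) { Accelerate(); }  // original behaviour when Car.lib is linked
    #endif
        void Accelerate();                     // defined in Car.lib
    };
    </pre>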

    Read the article

  • WordPress: how to call a plugin function with an ajax call?

    - by Bee
    I'm writing a Wordpress MU plugin, it includes a link with each post and I want to use ajax to call one of the plugin functions when the user clicks on this link, and then dynamically update the link-text with output from that function. I'm stuck with the ajax query. I've got this complicated, clearly hack-ish, way to do it, but it is not quite working. What is the 'correct' or 'wordpress' way to include ajax functionality in a plugin? (My current hack code is below. When I click the generate link I don't get the same output I get in the wp page as when I go directly to sample-ajax.php in my browser.) I've got my code[1] set up as follows: mu-plugins/sample.php: <?php /* Plugin Name: Sample Plugin */ if (!class_exists("SamplePlugin")) { class SamplePlugin { function SamplePlugin() {} function addHeaderCode() { echo '<link type="text/css" rel="stylesheet" href="'.get_bloginfo('wpurl'). '/wp-content/mu-plugins/sample/sample.css" />\n'; wp_enqueue_script('sample-ajax', get_bloginfo('wpurl') . '/wp-content/mu-plugins/sample/sample-ajax.js.php', array('jquery'), '1.0'); } // adds the link to post content. function addLink($content = '') { $content .= "<span class='foobar clicked'><a href='#'>click</a></span>"; return $content; } function doAjax() { // echo "<a href='#'>AJAX!</a>"; } } } if (class_exists("SamplePlugin")) { $sample_plugin = new SamplePlugin(); } if (isset($sample_plugin)) { add_action('wp_head',array(&$sample_plugin,'addHeaderCode'),1); add_filter('the_content', array(&$sample_plugin, 'addLink')); } mu-plugins/sample/sample-ajax.js.php: <?php if (!function_exists('add_action')) { require_once("../../../wp-config.php"); } ?> jQuery(document).ready(function(){ jQuery(".foobar").bind("click", function() { var aref = this; jQuery(this).toggleClass('clicked'); jQuery.ajax({ url: "http://mysite/wp-content/mu-plugins/sample/sample-ajax.php", success: function(value) { jQuery(aref).html(value); } }); }); }); mu-plugins/sample/sample-ajax.php: <?php if (!function_exists('add_action')) { require_once("../../../wp-config.php"); } if (isset($sample_plugin)) { $sample_plugin->doAjax(); } else { echo "unset"; } ?> [1] Note: The following tutorial got me this far, but I'm stumped at this point. http://www.devlounge.net/articles/using-ajax-with-your-wordpress-plugin

    Read the article

  • get GET parameters in JSF's managed bean

    - by mykola
    Hello! Can someone tell me how to catch parameters passed from URI in JSF's managed bean? I have a navigation menu all nodes of which link to some navigation case. And i have two similar items there: Acquiring products and Issuing products. They have the same page but one different parameter: productType. I try to set it just by adding it to URL in "to-view-id" element like this: <navigation-case> <from-outcome>acquiring|products</from-outcome> <to-view-id>/pages/products/list_products.jspx?productType=acquiring</to-view-id> </navigation-case> <navigation-case> <from-outcome>issuing|products</from-outcome> <to-view-id>/pages/products/list_products.jspx?productType=issuing</to-view-id> </navigation-case> But i can't get this "productType" from my managed bean. I tried to get it through FacesContext like this: FacesContext.getCurrentInstance().getExternalContext().getRequestParameterMap().get("productType") And like this: HttpServletRequest request = (HttpServletRequest)FacesContext.getCurrentInstance().getExternalContext().getRequest(); request.getParameter("productType"); And i tried to include it as a parameter of managed bean in faces-config.xml and then getting it through ordinary setter: <managed-bean> <managed-bean-name>MbProducts</managed-bean-name> <managed-bean-class>my.package.product.MbProducts</managed-bean-class> <managed-bean-scope>request</managed-bean-scope> <managed-property> <property-name>productType</property-name> <value>#{param.productType}</value> </managed-property> </managed-bean> ... public class MbProducts { ... public void setProductType(String productType) { this.productType = productType; } ... } But neither of these ways have helped me. All of them returned null. How can i get this productType? Or how can i pass it some other way?

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and response in as close as possible to the raw, on-the-wire format (i.e including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc. and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations? Edit: I note that HttpRequest has a SaveAs method which can save the request to disk but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan. Edit 2: Please note that I said the whole request including method, path, headers etc. The current responses only look at the body streams which does not include this information. Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. If you know the complete raw data (including headers, url, http method, etc.) simply cannot be retrieved then that would be useful to know. Similarly if you know how to get it all in the raw format (yes, I still mean including headers, url, http method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it. Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea, I'm looking for how it can be done.

    Read the article

  • _heapwalk reports _HEAPBADNODE, causes breakpoint or loops endlessly

    - by Stefan Hubert
    I use _heapwalk to gather statistics about the Process' standard heap. Under certain circumstances i observe unexpected behaviours like: _HEAPBADNODE is returned some breakpoint is triggered inside _heapwalk, telling me the heap might got corrupted access violation inside _heapWalk. I saw different behaviours on different Computers. On one Windows XP 32 bit machine everything looked fine, whereas on two Windows XP 64 bit machines i saw the mentioned symptoms. I saw this behaviour only if LowFragmentationHeap was enabled. I played around a bit. I walked the heap several times right one after another inside my program. First time doing nothing in between the subsequent calls to _heapWalk (everything fine). Then again, this time doing some stuff (for gathering statistics) in between two subsequent calls to _heapWalk. Depending upon what I did there, I sometimes got the described symptoms. Here finally a question: What exactly is safe and what is not safe to do in between two subsequent calls to _heapWalk during a complete heap walk run? Naturally, i shall not manipulate the heap. Therefore i doublechecked that i don't call new and delete. However, my observation is that function calls with some parameter passing causes my heap walk run to fail already. I subsequently added function calls and increasing number of parameters passed to these. My feeling was two function calls with two paramters being passed did not work anymore. However I would like to know why. Any ideas why this does not happen on some machines? Any ideas why this only happens if LowFragmentationHeap is enabled? Sample Code finally: #include <malloc.h> void staticMethodB( int a, int b ) { } void staticMethodA( int a, int b, int c) { staticMethodB( 3, 6); return; } ... _HEAPINFO hinfo; hinfo._pentry = NULL; while( ( heapstatus = _heapwalk( &hinfo ) ) == _HEAPOK ) { //doing nothing here works fine //however if i call functions here with parameters, this causes //_HEAPBADNODE or something else staticMethodA( 3,4,5); } switch( heapstatus ) { ... case _HEAPBADNODE: assert( false ); /*ERROR - bad node in heap */ break; ...
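
    A statistics-gathering sketch built on the constraint the question circles around: nothing in the loop body should allocate, free, or otherwise touch the CRT heap (no new/delete/malloc, no printf, no std::string), so the counters below are plain locals and all output is deferred until after _heapwalk has finished. Whether this avoids the machine-specific crashes is an open question; it only rules out the documented hazard.

    <pre>
    #include <malloc.h>
    #include <stdio.h>

    // Sketch: accumulate into locals only; report after the walk has finished.
    void WalkHeapStats()
    {
        size_t usedBytes = 0, freeBytes = 0;
        unsigned usedBlocks = 0, freeBlocks = 0;

        _HEAPINFO hinfo;
        hinfo._pentry = NULL;
        int heapstatus;
        while ((heapstatus = _heapwalk(&hinfo)) == _HEAPOK)
        {
            if (hinfo._useflag == _USEDENTRY) { usedBytes += hinfo._size; ++usedBlocks; }
            else                              { freeBytes += hinfo._size; ++freeBlocks; }
        }

        if (heapstatus == _HEAPEND || heapstatus == _HEAPEMPTY)
            printf("used: %u blocks, %Iu bytes; free: %u blocks, %Iu bytes\n",
                   usedBlocks, usedBytes, freeBlocks, freeBytes);   // %Iu: MSVC size_t format
        else
            printf("heap walk aborted, status %d\n", heapstatus);
    }
    </pre>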

    Read the article

  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, java and last, but not least PL/SQL sources (sprocs, packages, DDL, crud) For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database. Requirements There are a couple of things we wan't to test in the following priority: Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile? Code Quality: Do we have code flaws like duplicates, too complex methods or other violations to a defined set of rules? Also, the tool must run head-less (commandline, ant, ...) we wan't to do analysis on a partial code base (changed sources only) Tools We did a little research and found the following tools that could potencially help: Cast Application Intelligence Platform (AIP): Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in readable format. Toad for Oracle: The Professional version is said to include something called Xpert validates a set of rules against a code base. Sonar + PL/SQL-Plugin: Uses Toad for Oracle to display code-health the sonar-way. This is for browsing the current state of the code base. Semantic Designs DMSToolkit: Quite general analysis of source code base. Commandline available? Semantic Designs Clones Detector: Detects clones. But also via command line? Fortify Source Code Analyzer: Seems to be focussed on security issues. But maybe it is extensible? more... So far, Toad for Oracle together with Sonar seems to be an elegant solution. But may be we are missing something here? Any ideas? Other products? Experiences? Related Questions on SO: http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql

    Read the article

  • Parsing XML with VB.NET

    - by SuperFurryToad
    I'm trying to make sense of a big data dump of XML that I need to write to a database using some VB.net code. I'm looking for some help getting started with the parsing code, specifically how to access the attribute values. <Product ID="523233" UserTypeID="Property" ParentID="523232"> <Name>My Property Name</Name> <AssetCrossReference AssetID="173501" Type=" Non Print old"> </AssetCrossReference> <AssetCrossReference AssetID="554740" Type=" Non Print old"> </AssetCrossReference> <AssetCrossReference AssetID="566495" Type=" Non Print old"> </AssetCrossReference> <AssetCrossReference AssetID="553014" Type="Non Print"> </AssetCrossReference> <AssetCrossReference AssetID="553015" Type="Non Print"> </AssetCrossReference> <AssetCrossReference AssetID="553016" Type="Non Print"> </AssetCrossReference> <AssetCrossReference AssetID="553017" Type="Non Print"> </AssetCrossReference> <AssetCrossReference AssetID="553018" Type="Non Print"> </AssetCrossReference> <Values> <Value AttributeID="5115">Section of main pool</Value> <Value AttributeID="5137">114 apartments, four floors, no lifts</Value> <Value AttributeID="5170">Property location</Value> <Value AttributeID="5164">2 key</Value> <Value AttributeID="5134">A comfortable property, the apartment is set on a pine-covered hillside - a scenic and peaceful location.</Value> <Value AttributeID="5200">YYY</Value> <Value AttributeID="5148">facilities include X,Y,Z</Value> <Value AttributeID="5067">Self Catering. </Value> <Value AttributeID="5221">Frequent organised daytime activities</Value> </Values> </Product> </Product> Thanks in advance for your help.

    Read the article

  • Rails ActiveResource Associations

    - by brad
    I have some ARes models (see below) that I'm trying to use associations with (which seems to be wholly undocumented and maybe not possible but I thought I'd give it a try) So on my service side, my ActiveRecord object will render something like render :xml => @group.to_xml(:include => :customers) (see generated xml below) The models Group and Customers are HABTM On my ARes side, I'm hoping that it can see the <customers> xml attribute and automatically populate the .customers attribute of that Group object , but the has_many etc methods aren't supported (at least as far as I can tell) So I'm wondering how ARes does it's reflection on the XML to set the attributes of an object. In AR for instance I could create a def customers=(customer_array) and set it myself, but this doesn't seem to work in ARes. One suggestion I found for an "association" is the just have a method def customers Customer.find(:all, :conditions => {:group_id => self.id}) end But this has the disadvantage that it makes a second service call to look up those customers... not cool I'd like my ActiveResource model to see that the customers attribute in the XML and automatically populate my model. Anyone have any experience with this?? # My Services class Customer < ActiveRecord::Base has_and_belongs_to_many :groups end class Group < ActiveRecord::Base has_and_belongs_to_many :customer end # My ActiveResource accessors class Customer < ActiveResource::Base; end class Group < ActiveResource::Base; end # XML from /groups/:id?customers=true <group> <domain>some.domain.com</domain> <id type="integer">266</id> <name>Some Name</name> <customers type="array"> <customer> <active type="boolean">true</active> <id type="integer">1</id> <name>Some Name</name> </customer> <customer> <active type="boolean" nil="true"></active> <id type="integer">306</id> <name>Some Other Name</name> </customer> </customers> </group>

    Read the article

  • What's the correct way to use Cakephp urls?

    - by Pichan
    Hello all, it's my first post here :) I'm having some difficulties with dealing with urls and parameters. I've gone through the router class api documentation over and over again and found nothing useful. First of all, I'd like to know if there is any 'universal' format in CakePHP(1.3) for handling urls. I'm currently handling all my urls as simple arrays(in the format that Router::url and $html-link accepts) and it's easy as long as I only need to pass them as arguments to cake's own methods. It usually gets tricky if I need something else. Mainly I'm having problems with converting string urls to the basic array format. Let's say I want to convert $arrayUrl to string and than again into url: $arrayUrl=array('controller'=>'SomeController','action'=>'someAction','someValue'); $url=Router::url($arrayUrl); //$url is now '/path/to/site/someController/someAction/someValue' $url=Router::normalize($url); //remove '/path/to/site' $parsed=Router::parse($url); /*$parsed is now Array( [controller] => someController [action] => someAction [named] => Array() [pass] => Array([0] => someValue) [plugin] => ) */ That seems an awful lot of code to do something as simple as to convert between 2 core formats. Also, note that $parsed is still not in the same as $arrayUrl. Of course I could tweak $parsed manually and actually I've done that a few times as a quick patch but I'd like to get to the bottom of this. I also noticed that when using prefix routing, $this-params in controller has the prefix embedded in the action(i.e. [action] = 'admin_edit') and the result of Router::parse() does not. Both of course have the prefix in it's own key. To summarize, how do I convert an url between any of these 3(or 4, if you include the prefix thing) mentioned formats the right way? Of course it would be easy to hack my way through this, but I'd still like to believe that cake is being developed by a bunch of people who have a lot more experience and insight than me, so I'm guessing there's a good reason for this "perceived misbehavior". I've tried to present my problem as good as I can, but due to my rusty english skills, I had to take a few detours :) I'll explain more if needed.

    Read the article

  • How to merge jquery autocomplete and fcbkListSelection functionality?

    - by Ben
    Initial caveat: I am a newbie to jquery and javascript. I started using the autocomplete jquery plugin recently from bassistance.de. Yesterday, I found the fcbkListSelection plugin (http://www.emposha.com/javascript/fcbklistselection-like-facebook-friends-selector.html) and am using it to include a selector in my application. What I now want to do is to merge the 2 pieces of functionality since there are lots of items in my selector list. To be more specific, I want to put a "Filter by Name" text box above the facebook list selection object in my html. When the user types a few characters into the "Filter by Name" text box, I want the items in the Facebook list selector to only show the items that contain the typed characters -- kind of like what autocomplete does already, but rather than the results showing below the text input box, I want them to dynamically update the facebook list object. Is this possible and relatively straightforward? If so, can somebody please describe how to do it? If not, is there an easier way to approach this? Thanks! OK, to Spencer Ruport's comment, I guess I may have a more specific question already. Below is the current Javascript in my HTML file (angle brackets represent Django tags). #suggest1 and #fcbklist both do their separate pieces, but how do I get them to talk to each other? Do I need to further write javascript in my HTML file, or do I need to customize the guts of the plugins' .js? Does this help elaborate? $(document).ready(function() { var names = []; var count = 0; {% for a, b, c in friends_for_picker %} names[count] = "{{ b }}"; count = count+1; {% endfor %} function findValueCallback(event, data, formatted) { $("<li>").html( !data ? "No match!" : "Selected: " + formatted).appendTo("#result"); } function formatItem(row) { return row[0] + " (<strong>id: " + row[1] + "</strong>)"; } function formatResult(row) { return row[0].replace(/(<.+?>)/gi, ''); } $("#suggest1").autocomplete(names, { multiple: true, mustMatch: false, autoFill: true, matchContains: true, }); //id(ul id),width,height(element height),row(elements in row) $.fcbkListSelection("#fcbklist","575","50","4"); });

    Read the article

  • Correct way of using/testing event service in Eclipse E4 RCP

    - by Thorsten Beck
    Allow me to pose two coupled questions that might boil down to one about good application design ;-) What is the best practice for using event based communication in an e4 RCP application? How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker ? Let’s be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plugins that need to communicate. For communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plugins stay loosely coupled. I use dependency injection to inject the event broker to a class that dispatches events: @Inject static IEventBroker broker; private void sendEvent() { broker.post(MyEventConstants.SOME_EVENT, payload) } On the receiver side, I have a method like: @Inject @Optional private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload) Now the questions: In order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation using e.g. ContextInjectionFactory.inject(myEventSendingObject, context); This approach works but I find myself passing around a lot of context to wherever I use the event service. Is this really the correct approach to event based communication across an E4 application? how can I easily write JUnit tests for a class that uses the event service (either as a sender or receiver)? Obviously, none of the above annotations work in isolation since there is no context available. I understand everyone’s convinced that dependency injection simplifies testability. But does this also apply to injecting services like the IEventBroker? This article describes creation of your own IEclipseContext to include the process of DI in tests. Not sure if this could resolve my 2nd issue but I also hesitate running all my tests as JUnit Plug-in tests as it appears impractible to fire up the PDE for each unit test. Maybe I just misunderstand the approach. This article speaks about “simply mocking IEventBroker”. Yes, that would be great! Unfortunately, I couldn’t find any information on how this can be achieved. All this makes me wonder whether I am still on a "good path" or if this is already a case of bad design? And if so, how would you go about redesigning? Move all event related actions to dedicated event sender/receiver classes or a dedicated plugin?

    Read the article

  • Is ResourceBundle fallback resolution broken in Resin3x?

    - by LES2
    Given the following ResourceBundle properties files: messages.properties messages_en.properties messages_es.properties messages_{some locale}.properties Note: messages.properties contains all the messages for the default locale. messages_en.properties is really empty - it's just there for correctness. messages_en.properties will fall back to messages.properties! And given the following config params in web.xml: <context-param> <param-name>javax.servlet.jsp.jstl.fmt.localizationContext</param-name> <param-value>messages</param-value> </context-param> <context-param> <param-name>javax.servlet.jsp.jstl.fmt.fallbackLocale</param-name> <param-value>en</param-value> </context-param> I would expect that if the chosen locale is 'es', and a resource is not translated in 'es', then it would fall back to 'en', and finally to 'messages.properties' (since messages_en.properties is empty). This is how things work in Jetty. I've also tested this on WebSphere. Resin Is the Problem The problem is when I get to Resin (3.0.23). Fallback resolution does not work at all! In order to get an messages to display, I must do the following: Rename messages.properties to messages_en.properties (essentially, swap the contents of messages.properties and messages_en.properties) Make sure ever key in messages_en.properties is also defined in messages_{every other locale}.properties (even if the exact same). If I don't do this, I get "???some.key???" in the JSPs. Please help! This is perplexing. -- LES SOLUTION Add following to pom.xml (if you're using maven) ... <properties> <taglibs.version>1.1.2</taglibs.version> </properties> ... <!-- Resin ships with a crappy JSTL implementation that doesn't work with fallback locales for resource bundles correctly; we therefore include our own JSTL implementation in the WAR, and avoid this problem. This can be removed if the target container is not resin. --> <dependency> <groupId>taglibs</groupId> <artifactId>standard</artifactId> <version>${taglibs.version}</version> <scope>compile</scope> </dependency>

    Read the article

  • How to design this?

    - by Akku
    how can i make this entire process as 1 single event??? http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html and draw the chart on single click? I am new to servlets please guide me When a user clicks the "go " button with some input. The data goes to the servlet say "Test3". The servlet processes the data by the user and generates/feeds the data table dynamically Then I call the html page to draw the chart as shown in the tutorial link above. The problem is when I call the servlet it gives me a long json string in the browser as given in the tutorials "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'1333639331',table:{cols:[{............................" Then when i manually call the html page to draw the chart i am see the chart. But when I call html page directly using the request dispatcher via the servlet I dont get the result. This is my code and o/p...... I need sugession as to how should be my approach to call the chart public class Test3 extends HttpServlet implements DataTableGenerator { protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { DataSourceHelper.executeDataSourceServletFlow(request, response, this , isRestrictedAccessMode() ); RequestDispatcher rd; rd = request.getRequestDispatcher("new.html");// it call's the html page which draws the chart as per the data added by the servlet..... rd.include(request, response);//forward(request, response); @Override public Capabilities getCapabilities() { return Capabilities.NONE; } protected boolean isRestrictedAccessMode() { return false; } @Override public DataTable generateDataTable(Query query, HttpServletRequest request) { // Create a data table. DataTable data = new DataTable(); ArrayList<ColumnDescription> cd = new ArrayList<ColumnDescription>(); cd.add(new ColumnDescription("name", ValueType.TEXT, "Animal name")); cd.add......... I get the following result along with unprocessed html page google.visualization.Query.setResponse({version:'0.6',statu..... <html> <head> <title>Getting Started Example</title> .... Entire html page as it is on the Browser. What I need is when a user clicks the go button the servlet should process the data and call the html page to draw the chart....Without the json string appearing on the browser.(all in one user click) What should be my approach or how should i design this.... there are no error in the code. since when i run the servlet i get the json string on the browser and then when i run the html page manually i get the chart drawn. So how can I do (servlet processing + html page drawing chart as final result) at one go without the long json string appearing on the browser. There is no problem with the html code....

    Read the article

  • Creating a simple templated control. Having issues...

    - by Jimock
    Hi, I'm trying to create a really simple templated control. I've never done it before, but I know a lot of my controls I have created in the past would have greatly benefited if I included templating ability - so I'm learning now. The problem I have is that my template is outputted on the page but my property value is not. So all I get is the static text which I include in my template. I must be doing something correctly because the control doesn't cause any errors, so it knows my public property exists. (e.g. if I try to use Container.ThisDoesntExist it throws an exception). I'd appreciate some help on this. I may be just being a complete muppet and missing something. Online tutorials on simple templated server controls seem few and far between, so if you know of one I'd like to know about it. A cut down version of my code is below. Many Thanks, James Here is my code for the control: [ParseChildren(true)] public class TemplatedControl : Control, INamingContainer { private TemplatedControlContainer theContainer; [TemplateContainer(typeof(TemplatedControlContainer)), PersistenceMode(PersistenceMode.InnerProperty)] public ITemplate ItemTemplate { get; set; } protected override void CreateChildControls() { Controls.Clear(); theContainer = new TemplatedControlContainer("Hello World"); this.ItemTemplate.InstantiateIn(theContainer); Controls.Add(theContainer); } } Here is my code for the container: [ToolboxItem(false)] public class TemplatedControlContainer : Control, INamingContainer { private string myString; public string MyString { get { return myString; } } internal TemplatedControlContainer(string mystr) { this.myString = mystr; } } Here is my mark up: <my:TemplatedControl runat="server"> <ItemTemplate> <div style="background-color: Black; color: White;"> Text Here: <%# Container.MyString %> </div> </ItemTemplate> </my:TemplatedControl>

    Read the article

  • pointers to functions

    - by DevAno1
    I have two basic Cpp tasks, but still I have problems with them. First is to write functions mul1,div1,sub1,sum1, taking ints as arguments and returning ints. Then I need to create pointers ptrFun1 and ptrFun2 to functions mul1 and sum1, and print results of using them. Problem starts with defining those pointers. I thought I was doing it right, but devcpp gives me errors in compilation. #include <iostream> using namespace std; int mul1(int a,int b) { return a * b; } int div1(int a,int b) { return a / b; } int sum1(int a,int b) { return a + b; } int sub1(int a,int b) { return a - b; } int main() { int a=1; int b=5; cout << mul1(a,b) << endl; cout << div1(a,b) << endl; cout << sum1(a,b) << endl; cout << sub1(a,b) << endl; int *funPtr1(int, int); int *funPtr2(int, int); funPtr1 = sum1; funPtr2 = mul1; cout << funPtr1(a,b) << endl; cout << funPtr2(a,b) << endl; system("PAUSE"); return 0; } 38 assignment of function int* funPtr1(int, int)' 38 cannot convertint ()(int, int)' to `int*()(int, int)' in assignment Task 2 is to create array of pointers to those functions named tabFunPtr. How to do that ?
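
    The compiler messages point at the declarations: int *funPtr1(int, int); declares a function returning int*, not a pointer to a function, so the parentheses around the name are what is missing. A sketch of the intended declarations, plus the array asked for in task 2 (assumes the function definitions above are in scope):

    <pre>
    int (*funPtr1)(int, int) = sum1;   // pointer to a function taking (int, int) and returning int
    int (*funPtr2)(int, int) = mul1;
    cout << funPtr1(a, b) << endl;     // 6 for a=1, b=5
    cout << funPtr2(a, b) << endl;     // 5 for a=1, b=5

    // Task 2: array of pointers to those functions, named as requested
    int (*tabFunPtr[])(int, int) = { mul1, div1, sum1, sub1 };
    for (int i = 0; i < 4; ++i)
        cout << tabFunPtr[i](a, b) << endl;
    </pre>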

    Read the article

  • Best Practice: Legitimate Cross-Site Scripting

    - by Ryan
    While cross-site scripting is generally regarded as negative, I've run into several situations where it's necessary. I was recently working within the confines of a very limiting content management system. I needed to include database code within the page, but the hosting server didn't have anything usable available. I set up a couple barebones scripts on my own server, originally thinking that I could use AJAX to import the contents of my scripts directly into the template of the CMS (thus retaining dynamic images, menu items, CSS, etc.). I was wrong. Due to the limitations of XMLHttpRequest objects, it's not possible to grab content from a different domain. So I thought "iFrame" - even though I'm not a fan of frames, I thought that I could create a frame that matched the width and height of the content, so that it would appear native. Again, I was blocked by cross-site scripting "protections." While I could indeed load a remote file into the iFrame, I couldn't execute JavaScript to modify its size on either the host page or inside the loaded page. In this particular scenario, I wasn't able to point a subdomain to my server. I also couldn't create a script on the CMS server that could proxy content from my server, so my last thought was to use a remote JavaScript. A remote JavaScript works. It breaks when the user has JavaScript disabled, which is a downside; but it works. The "problem" I was having with using a remote JavaScript was that I had to use the JS function document.write() to output any content. Any output that isn't JS causes script errors. In addition to using document.write() for every line, you also have to ensure that the content is escaped - or else you end up with more script errors. My solution was as follows: My script received a GET parameter ("page") and then looked for the file ({$page}.php), and read the contents into a variable. However, I had to use awkward buffering techniques in order to actually execute the included scripts (for things like database interaction) then strip the final content of all line break characters ("\n") followed by escaping all required characters. The end result is that my original script (which outputs JavaScript) accesses seemingly "standard" scripts on my server and converts their standard output to JavaScript for displaying within the CMS template. While this solution works, it seems like there may be a better way to accomplish the same thing. What is the best way to make cross-site scripting work specifically for the purpose of including content from a completely different domain?

    Read the article

  • Including hibernate jar dependencies in ant build

    - by Patrick
    Hi, I'm trying to compile a runnable jar-file for a project that makes use of hibernate. I'm trying to construct an ant build.xml file to streamline my build process, but I'm having troubles with the inclusion of the hibernate3.jar inside the final jar-file. If I run the ant script I manage to include all my library jars, and they are put in the final jar-file's root. When I run the jar-file I get a java.lang.NoClassDefFoundError: org/hibernate/Session error. If I make use of the built-in export to jar in Eclipse, it works only if I choose "extract required libraries into jar". But that bloats the jar, and includes too much of my project (i.e. unit tests). Below is my generated manifest: Manifest-Version: 1.0 Main-Class: main.ServerImpl Class-Path: ./ antlr-2.7.6.jar commons-collections-3.1.jar dom4j-1.6.1.jar hibernate3.jar javassist-3.9.0.GA.jar jta-1.1.jar slf4j-api-1.5.11.jar slf4j-simple-1.5.11.jar mysql-connector-java-5.1.12-bin.jar rmiio-2.0.2.jar commons-logging-1.1.1.jar And the part of the build.xml looks like this: <target name="dist" depends="compile" description="Generates the Distribution Jar(s)"> <mkdir dir="${dist.dir}" /> <jar destfile="${dist.dir}/${dist.file.name}.jar" basedir="${build.prod.dir}" filesetmanifest="mergewithoutmain"> <manifest> <attribute name="Main-Class" value="${main.class}" /> <attribute name="Class-Path" value="./ ${manifest.classpath} " /> <attribute name="Implementation-Title" value="${app.name}" /> <attribute name="Implementation-Version" value="${app.version}" /> <attribute name="Implementation-Vendor" value="${app.vendor}" /> </manifest> <zipfileset refid="hibernatefiles" /> <zipfileset refid="slf4jfiles" /> <zipfileset refid="mysqlfiles" /> <zipfileset refid="commonsloggingfiles" /> <zipfileset refid="rmiiofiles" /> </jar> </target> The refids' for the zipfilesets point to the directories in a library directory lib in the root of the project. The manifest.classpath-variable takes the classpath of all those library jar-files, and flattens them with pathconvert and mapper. I've also tried to set the manifest classpath to ".", "./" and only the library jar, but to no difference at all. I'm hoping there's a simple remedy to my problems...

    Read the article

  • Right usage of std::uncaught_exception in a destructor

    - by Vokuhila-Oliba
    There are some articles concluding "never throw an exception from a destructor", and "std::uncaught_exception() is not useful", for example: http://www.gotw.ca/gotw/047.htm (written by Herb Sutter) But it seems that I am not getting the point. So I wrote a small testing example (see below). Since everything is fine with the testing example I would very appreciate some comments regarding what might be wrong with it. testing results: ./main Foo::~Foo(): caught exception - but have pending exception - ignoring int main(int, char**): caught exception: from int Foo::bar(int) ./main 1 Foo::~Foo(): caught exception - but *no* exception is pending - rethrowing int main(int, char**): caught exception: from Foo::~Foo() // file main.cpp // build with e.g. "make main" // tested successfully on Ubuntu-Karmic with g++ v4.4.1 #include <iostream> class Foo { public: int bar(int i) { if (0 == i) throw(std::string("from ") + __PRETTY_FUNCTION__); else return i+1; } ~Foo() { bool exc_pending=std::uncaught_exception(); try { bar(0); } catch (const std::string &e) { // ensure that no new exception has been created in the meantime if (std::uncaught_exception()) exc_pending = true; if (exc_pending) { std::cerr << __PRETTY_FUNCTION__ << ": caught exception - but have pending exception - ignoring" << std::endl; } else { std::cerr << __PRETTY_FUNCTION__ << ": caught exception - but *no* exception is pending - rethrowing" << std::endl; throw(std::string("from ") + __PRETTY_FUNCTION__); } } } }; int main(int argc, char** argv) { try { Foo f; // will throw an exception in Foo::bar() if no arguments given. Otherwise // an exception from Foo::~Foo() is thrown. f.bar(argc-1); } catch (const std::string &e) { std::cerr << __PRETTY_FUNCTION__ << ": caught exception: " << e << std::endl; } return 0; }
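
    For reference, the weakness Sutter points out (a single bool cannot distinguish "an exception was already in flight when the object was built" from "a new one appeared afterwards") is what std::uncaught_exceptions(), plural, addresses. A sketch assuming a C++17 compiler, not the 2010-era toolchain in the question:

    <pre>
    #include <exception>

    // Sketch: remember how many exceptions were in flight at construction time;
    // if the count has grown by destruction time, we are being unwound by a new
    // exception and must not throw.
    class Reporter {
        int entry_count = std::uncaught_exceptions();
    public:
        ~Reporter() noexcept(false) {
            if (std::uncaught_exceptions() > entry_count) {
                // stack unwinding in progress: log or swallow, never throw
            } else {
                // normal destruction: throwing here is at least well-defined
            }
        }
    };
    </pre>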

    Read the article

  • Is there any reasonable use of a function returning an anonymous struct?

    - by Akanksh
    Here is an (artificial) example of using a function that returns an anonymous struct and does "something" useful: #include <iostream> template<typename T> T* func( T* t, float a, float b ) { if(!t) { t = new T; t->a = a; t->b = b; } else { t->a += a; t->b += b; } return t; } struct { float a, b; }* foo(float a, float b) { if(a==0) return 0; return func(foo(a-1,b), a, b); } int main() { std::cout << foo(5,6)->a << std::endl; std::cout << foo(5,6)->b << std::endl; void* v = (void*)(foo(5,6)); float* f = (float*)(v); //[1] delete f now because I know struct is floats only. std::cout << f[0] << std::endl; std::cout << f[1] << std::endl; delete[] f; return 0; } There are a few points I would like to discuss: As is apparent, this code leaks, is there anyway I can NOT leak without knowing what the underlying struct definition is? see Comment [1]. I have to return a pointer to an anonymous struct so I can create an instance of the object within the templatized function func, can I do something similar without returning a pointer? I guess the most important, is there ANY (real-world) use for this at all? As the example given above leaks and is admittedly contrived. By the way, what the function foo(a,b) does is, to return a struct containing two numbers, the sum of all numbers from 1 to a and the product of a and b. EDIT: Maybe the line new T could use a boost::shared_ptr somehow to avoid leaks, but I haven't tried that. Would that work?
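
    On points 1 and 2: giving the struct a name and returning it by value removes both the pointer and the leak, with nothing to delete. A rough sketch of the same sum/product result (FooResult is a name invented here):

    <pre>
    // Sketch: named struct, returned by value, so there is no new, no delete, no leak.
    struct FooResult { float a, b; };

    FooResult foo(float a, float b) {
        FooResult r = { 0.0f, a * b };   // b member: product of a and b
        for (float i = 1; i <= a; ++i)
            r.a += i;                    // a member: sum of 1..a
        return r;
    }
    // usage: foo(5, 6).a == 15, foo(5, 6).b == 30
    </pre>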

    Read the article

  • mysql_connect() causes page to not display (WAMP)

    - by VOIDHand
    I currently have a website running MySQL and PHP in which this is all working. I've tried installing WAMPServer to be able to work on the site on my own computer, but I have been having issues trying to get the site to work correctly. HTML and PHP files work correctly (by going to http://localhost/index.php, etc.). But some of the pages display "This Webpage is not available" (in Chrome) or "Internet Explorer cannot display this webpage", which leads me to think there is an error is the server, as when the page doesn't exist, it typically dispays "Oops! This link appears to be broken" or IE's standard 404 page. Each page in my site has an include() call to set up an instance of a class to handle all database transactions. This class opens the connection to the database in its constructor. I have found commenting out the contents of the constructor will allow the page to load, although this understandably causes errors later in the page. This is the contents of the included file: class dbAccess { private $db; function __construct() { $this->db = mysql_connect("externalURL","user","password") or die ("Connection for database not found."); mysql_select_db("dbName", $this->db) or die ("Database not selected"); } ... } As a note for what's happening here. I'm attempting to use the database on my site, rather than the local machine. The actual values of externalURL, dbName, user, and password work when I plug them into my database software and the file fine of the site, so I don't think the issue is with those values themselves, but rather some setting with Apache or Wamp. The issue persists if I try a local database as well. This is an excerpt of my Apache error log for one attempt at logging into the site: [Wed Apr 14 15:32:54 2010] [notice] Parent: child process exited with status 255 -- Restarting. [Wed Apr 14 15:32:54 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations [Wed Apr 14 15:32:54 2010] [notice] Server built: Dec 10 2008 00:10:06 [Wed Apr 14 15:32:54 2010] [notice] Parent: Created child process 1756 [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Child process is running [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Acquired the start mutex. [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting 64 worker threads. [Wed Apr 14 15:32:55 2010] [notice] Child 1756: Starting thread to listen on port 80. For the first line, isLoggedIn (from the first line) is a method of the class above. Assuming that the connection works, this error should disappear. I've tried searching for solutions online, but haven't been able to find anything to help. If you need any further information to help solve this issue, feel free to ask for it. (One more note, I don't even have Skype on this computer, so I can't see it being an issue, as this conflict seems to be the default response for any Wamp issue.) [Edit: Removed entry from the error log as it was solved as an unrelated issue]

    Read the article

  • Why don't my scrollbars work properly when programmatically hiding rows in a Silverlight DataGrid?

    - by Luke Vilnis
    I have a Silverlight datagrid with custom code that allows for +/- buttons on the lefthand side and can display a table with a tree structure. The +/- buttons are bound to a IsExpanded property on my ViewModelRows, as I call them. The visibility of rows is bound to an IsVisible property on the ViewModelRows which is determined based on whether or not all of the parent rows are expanded. Straightforward enough. This code works fine in that if I scroll up and down the grid with PageUp/PageDown or the arrow keys, all the right rows are hidden and everything has the right structure and I can play with the +/- buttons to my hearts content. However, the vertical scroll bar on the right hand side, although it starts off the correct size and it scrolls through the rows smoothly, when I collapse rows and then re-expand them, doesn't go back to its correct size. The scrollbar can still usually be moved around to scroll through the whole collection, but because it is too big, once the bar moves to the bottom, there are still more rows to go through and it sort of jerkily shoots all the way down to the bottom or sometimes fails to scroll at all. This is pretty hard to describe so I included a screenshot with the black lines drawn on to show the difference in scrollbar length even though the two grids have the same number of rows expanded. I think this might be a bug related to the way the Datagrid does virtualization of rows. It seems to me like it isn't properly keeping track of how tall each row is supposed to be when expansion states change. Is there a way to programmatically "poke" (read hack) it to recalculate its scrollbar size on LoadingRow or something ugly like that? I'd include a code sample but there's 2 c# files and 1 xaml file so I wanted to see if anyone else has heard of this sort of issue before I try to make it reproducible in a self-contained way. Once again, scrolling with the arrow keys works fine so I'm pretty sure the underlying logic and binding is working, there's just some issue with the row height not being calculated properly. Since I'm a new user, it won't let me use image tags so here's the link to a picture of the problem: http://img210.imageshack.us/img210/8760/messedupscrollbars.png

    Read the article

  • warning: assignment makes pointer from integer without a cast

    - by FILIaS
    Im new in programming c with arrays and files. Im just trying to run the following code but i get warnings like that: 23 44 warning: assignment makes pointer from integer without a cast Any help? It might be silly... but I cant find what's wrong. #include<stdio.h> FILE *fp; FILE *cw; char filename_game[40],filename_words[40]; int main() { while(1) { /* Input filenames. */ printf("\n Enter the name of the file with the cryptwords array: \n"); gets(filename_game); printf("\n Give the name of the file with crypted words:\n"); gets(filename_words); /* Try to open the file with the game */ if (fp=fopen("crypt.txt","r")!=NULL) //line23 { printf("\n Successful opening %s \n",filename_game); fclose(fp); puts("\n Enter x to exit,any other to continue! \n "); if ( (getc(stdin))=='x') break; else continue; } else { fprintf(stderr,"ERROR!%s \n",filename_game); puts("\n Enter x to exit,any other to continue! \n"); if (getc(stdin)=='x') break; else continue; } /* Try to open the file with the names. */ if (cw=fopen("words.txt","r")!=NULL) //line 44 { printf("\n Successful opening %s \n",filename_words); fclose(cw); puts("\n Enter x to exit,any other to continue \n "); if ( (getc(stdin))=='x') break; else continue; } else { fprintf(stderr,"ERROR!%s \n",filename_words); puts("\n Enter x to exit,any other to continue! \n"); if (getc(stdin)=='x') break; else continue; } } return 0; }
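
    The warning on lines 23 and 44 comes from operator precedence: != binds tighter than =, so fp = fopen(...) != NULL assigns the integer result of the comparison to the FILE pointer. A sketch of the fix, shown for the first file and applying equally to the cw line:

    <pre>
    /* Sketch: parenthesize the assignment so the pointer itself, not the
       comparison result, is stored and then tested against NULL. */
    if ((fp = fopen("crypt.txt", "r")) != NULL)
    {
        printf("\n Successful opening %s \n", filename_game);
        fclose(fp);
    }
    else
    {
        fprintf(stderr, "ERROR!%s \n", filename_game);
    }
    </pre>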

    Read the article

  • Function-Local Static Const variable Initialization semantics.

    - by Hassan Syed
    The questions are in bold, for those that cannot be bothered reading a question in depth. This is a followup to this question. It is to do with the initialization semantics of static variables in functions. Static variables should be initialized once, and their internal state might be altered later - as I (currently) do in the linked question. However, the code in question does not require the feature to change the state of the variable later. Let me clarrify my position, since I don't require the string object's internal state to change. The code is for a trait class for meta programming, and as such would would benifit from a const char * const ptr -- thus Ideally a local cost static const variable is needed. My educated guess is that in this case the string in question will be optimally placed in memory by the link-loader, and that the code is more secure and maps to the intended semantics. This leads to the semantics of such a variable "The C++ Programming language Third Edition -- Stroustrup" does not have anything (that I could find) to say about this matter. All that is said is that the variable is initialized once when the flow of control of the thread first reaches the code. This leads me to ponder if the following code would be sensible, and if not what are the intended semantics ?. #include <iostream> const char * const GetString(const char * x_in) { static const char * const x = x_in; return x; } int main() { const char * const temp = GetString("yahoo"); std::cout << temp << std::endl; const char * const temp2 = GetString("yahoo2"); std::cout << temp2 << std::endl; } The following compiles on GCC and prints "yahoo" twice. Which is what I want -- However it might not be standards compliant (which is why I post this question). It might be more elegant to have two functions, "SetString" and "String" where the latter forwards to the first. If it is standards compliant does someone know of a templates implementation in boost (or elsewhere) ?
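
    One data point: the behaviour GCC shows is what the language requires; a block-scope static is initialized the first time control passes through its declaration and the initializer is skipped on every later call, so the "yahoo2" argument is simply ignored. If the intent is a set-once/read-many value, a sketch of the SetString/String split mentioned above (those two names come from the question; StringSlot is invented here):

    <pre>
    // Sketch: the static is still initialized exactly once, on the first call
    // that reaches it, so SetString() must run before the first String().
    const char* const& StringSlot(const char* x_in)
    {
        static const char* const x = x_in;   // first call wins
        return x;
    }

    inline void SetString(const char* s) { StringSlot(s); }
    inline const char* const& String()   { return StringSlot(0); }
    </pre>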

    Read the article

  • An interesting case of delete and destructor (C++)

    - by Viet
    I have a piece of code where I can call destructor multiple times and access member functions even the destructor was called with member variables' values preserved. I was still able to access member functions after I called delete but the member variables were nullified (all to 0). And I can't double delete. Please kindly explain this. Thanks. #include <iostream> using namespace std; template <typename T> void destroy(T* ptr) { ptr->~T(); } class Testing { public: Testing() : test(20) { } ~Testing() { printf("Testing is being killed!\n"); } int getTest() const { return test; } private: int test; }; int main() { Testing *t = new Testing(); cout << "t->getTest() = " << t->getTest() << endl; destroy(t); cout << "t->getTest() = " << t->getTest() << endl; t->~Testing(); cout << "t->getTest() = " << t->getTest() << endl; delete t; cout << "t->getTest() = " << t->getTest() << endl; destroy(t); cout << "t->getTest() = " << t->getTest() << endl; t->~Testing(); cout << "t->getTest() = " << t->getTest() << endl; //delete t; // <======== Don't do it! Double free/delete! cout << "t->getTest() = " << t->getTest() << endl; return 0; }
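
    For completeness, the one everyday situation where an explicit t->~Testing() call is actually correct is placement new, where construction and destruction are decoupled from allocation. A small sketch reusing the Testing class above, assuming C++11 for alignas:

    <pre>
    #include <new>

    void placement_demo()
    {
        alignas(Testing) unsigned char buf[sizeof(Testing)];
        Testing* t = new (buf) Testing();   // construct in caller-provided storage
        cout << t->getTest() << endl;       // 20
        t->~Testing();                      // explicit destructor call is required here
        // no delete: buf was never obtained from operator new
    }
    </pre>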

    Read the article
