Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

Page 281/335 | < Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >

  • JqueryUI dialog box causes button to lose styling

    - by superexsl
    Hey everyone, I'm using jQuery UI dialog boxes and, while it works, it seems to remove any CSS styling from the button that opens the dialog. I have to manually refresh the page to get the styling back. An example button which launches the dialog (button_sub is my CSS theme and launch_popup is used to detect the button click in jQuery): <asp:Button ID="btnLaunch" CssClass="button_sub launch_popup" runat="server" Text="Dialog..." CausesValidation="false" OnClientClick="return false;" /> My jQuery: $('.launch_popup').click(function() { $("#dialog-form").dialog("open"); }); My dialog-form: $("#dialog-form").dialog({ autoOpen: false, height: 300, width: 300, modal: true, buttons: { 'Close': function() { $(this).dialog('close'); } } }); This happens to all the buttons that open dialogs. Is there a way to restyle the button without refreshing the page? (It's in an UpdatePanel if that makes a difference, although this bit is client-side, so I'm not sure whether that should affect it.) Thanks for any help.
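
    A minimal jQuery sketch of one workaround, assuming the theme class is simply being dropped from the button when the dialog opens (the class names below are taken from the question):

        $('.launch_popup').click(function () {
            $('#dialog-form').dialog('open');
            // re-apply the theme class in case opening the dialog stripped it
            $(this).addClass('button_sub');
            return false;
        });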

    Read the article

  • steps to fix a project that won't compile

    - by eco_bach
    Hi, I'm pulling my hair out trying to get a simple window-based project to compile. I am running both the 3.2.2 and 3.2.3 versions of Xcode. The latter is set up in a separate folder. Originally I used the latter to create and compile my project against the new 4.0 SDK. It compiled fine. Then I made the mistake of deleting some SDKs I thought I no longer needed. Ever since, I can no longer compile. Right now I get a dozen or so errors similar to the following: "_OBJC_CLASS_$_CATransition", referenced from: objc-class-ref-to-CATransition in ViewTransitionsAppDelegate.o My active executable is the iPhone Simulator 4 and the Base SDK is iPhone Device 3.0. I tried reinstalling the Xcode 3.2.3 installer, no difference. I'm totally stymied, as my project WAS working and compiling fine, both to the simulator and an external device. Are there any best practices or recommended steps for fixing or rebuilding a project when it won't compile? Any help welcome!

    Read the article

  • SSIS - Skip Missing Files

    - by Greg
    I have an SSIS 2008 package that calls about 10 other SSIS packages (legacy issues, don't ask). Each of those child packages loads a specific file into a table. But sometimes one or more of these input files will be missing. How can I let a child package fail (because a file is missing) but let the rest of the parent package keep on running? I've tried increasing the maximum error count on the parent package, on the tasks in the parent package that call each child, and in the child package itself. None of that seemed to make any difference. I still get this error when I run it with a file missing: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. Edit: FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.

    Read the article

  • Oracle DBMS_PROFILER only shows Anonymous in the results tables

    - by Greg Reynolds
    I am new to DBMS_PROFILER. All the examples I have seen use a simple top-level procedure to demonstrate the use of the profiler, and from there get all the line numbers etc. I deploy all code in packages, and I am having great difficulty getting my profile session to populate plsql_profiler_units with useful data. Most of my runs look like this: RUNID RUN_COMMENT UNIT_OWNER UNIT_NAME SECS PERCEN ----- ----------- ----------- -------------- ------- ------ 5 Test <anonymous> <anonymous> .00 2.1 Profiler 5 Test <anonymous> <anonymous> .00 2.1 Profiler 5 Test <anonymous> <anonymous> .00 2.1 Profiler I have just embedded the calls to dbms_profiler.start_profiler, flush_data and stop_profiler as per all the examples. The main difference is that my code is in a package, and it calls into other packages. Do you have to profile every single stored procedure in your call stack? If so, that makes this tool a little useless! I have checked http://www.dba-oracle.com/t_plsql_dbms_profiler.htm for hints, among other similar sites.

    Read the article

  • What is different about C++ math.h abs() compared to my abs()

    - by moka
    I am currently writing some GLSL-like vector math classes in C++, and I just implemented an abs() function like this: template<class T> static inline T abs(T _a) { return _a < 0 ? -_a : _a; } I compared its speed to the default C++ abs from math.h like this: clock_t begin = clock(); for(int i=0; i<10000000; ++i) { float a = abs(-1.25); }; clock_t end = clock(); unsigned long time1 = (unsigned long)((float)(end-begin) / ((float)CLOCKS_PER_SEC/1000.0)); begin = clock(); for(int i=0; i<10000000; ++i) { float a = myMath::abs(-1.25); }; end = clock(); unsigned long time2 = (unsigned long)((float)(end-begin) / ((float)CLOCKS_PER_SEC/1000.0)); std::cout<<time1<<std::endl; std::cout<<time2<<std::endl; Now the default abs takes about 25ms while mine takes 60. I guess there is some low-level optimisation going on. Does anybody know how math.h abs works internally? The performance difference is nothing dramatic, but I am just curious!
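
    One plausible explanation, sketched below under the assumption of IEEE-754 32-bit floats: the library's floating-point abs typically compiles down to a single clear-the-sign-bit operation, while the templated compare-and-negate version carries a branch. A hand-rolled bit-mask version illustrates the idea:

        #include <cstring>   // std::memcpy

        // Branch-free float abs: clears the IEEE-754 sign bit.
        static inline float abs_bits(float x) {
            unsigned int u;
            std::memcpy(&u, &x, sizeof u);
            u &= 0x7FFFFFFFu;
            std::memcpy(&x, &u, sizeof u);
            return x;
        }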

    Read the article

  • Persistent HTTP client connections in Java

    - by Akusete
    I am trying to write a simple HTTP client application in Java and am a bit confused by the seemingly different ways to establish HTTP client connections and efficiently re-use the objects. Currently I am using the following steps (I have left out exception handling for simplicity): Iterator<URI> uriIterator = someURIs(); HttpClient client = new DefaultHttpClient(); while (uriIterator.hasNext()) { URI uri = uriIterator.next(); HttpGet request = new HttpGet(uri); HttpResponse response = client.execute(request); HttpEntity entity = response.getEntity(); InputStream s = entity.getContent(); processStream (); s.close(); } In regard to the code above, my question is: assuming all URIs point to the same host (but different resources on that host), what is the recommended way to use a single HTTP connection for all requests? And how do you close the connection after the last request? --edit: Also, what is the difference between using uri.openConnection() versus HttpClient? Which is preferable, and what other methods exist?
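
    A sketch of the usual pattern with Apache HttpClient 4.x (assumed from the DefaultHttpClient above): one client instance serves every request, each entity stream is read and closed so its connection can go back to the pool, and the connection manager is shut down after the last request.

        HttpClient client = new DefaultHttpClient();
        try {
            while (uriIterator.hasNext()) {
                HttpGet request = new HttpGet(uriIterator.next());
                HttpResponse response = client.execute(request);
                HttpEntity entity = response.getEntity();
                if (entity != null) {
                    InputStream s = entity.getContent();
                    try {
                        processStream(s);   // consume the body so the connection can be reused
                    } finally {
                        s.close();          // releases the connection back to the client
                    }
                }
            }
        } finally {
            client.getConnectionManager().shutdown();   // closes any remaining connections
        }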

    Read the article

  • rails validation of presence not failing on nil

    - by holden
    I want to make sure an attribute exists, but it seems to still slip through and I'm not sure how better to check for it. This should work, but doesn't. It's an attr_accessor and not a real attribute, if that makes a difference. validates_presence_of :confirmed, :rooms {"commit"=>"Make Booking", "place_id"=>"the-kosmonaut", "authenticity_token"=>"Tkd9bfGqYFfYUv0n/Kqp6psXHjLU7CmX+D4UnCWMiMk=", "utf8"=>"✓", "booking"=>{"place_id"=>"6933", "bookdate"=>"2010-11-22", "rooms"=>[{}], "no_days"=>"2"}} Not sure why my form_for returns a blank hash in an array... <% form_for :booking, :url => place_bookings_path(@place) do |f| %> <%= f.hidden_field :bookdate, { :value => user_cart.getDate } %> <%= f.hidden_field :no_days, { :value => user_cart.getDays } %> <% for room in pricing_table(@place.rooms,@valid_dates) %> <%= select_tag("booking[rooms][][#{room.id}]", available_beds(room)) %> <% end %> <% end %>
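
    A hedged Rails sketch of one workaround (the method name is made up): presence passes here because the params give rooms the value [{}], which is not blank, so a custom validation that also rejects an array of empty hashes would catch it.

        validate :rooms_actually_chosen

        def rooms_actually_chosen
          errors.add(:rooms, "must be selected") if rooms.blank? || Array(rooms).all?(&:blank?)
        end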

    Read the article

  • Wrappers of primitive types in arraylist vs arrays

    - by ismail marmoush
    Hi, in "Core Java 1" I've read: CAUTION: An ArrayList is far less efficient than an int[] array because each value is separately wrapped inside an object. You would only want to use this construct for small collections when programmer convenience is more important than efficiency. But in my software I've already used ArrayList instead of normal arrays due to some requirements, though the software is supposed to have high performance, and after I read the quoted text I started to panic! One thing I can change is changing double variables to Double so as to prevent autoboxing, and I don't know whether that is worth it or not, in the next sample algorithm: public void multiply(final double val) { final int rows = getSize1(); final int cols = getSize2(); for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { this.get(i).set(j, this.get(i).get(j) * val); } } } My question is: does changing double to Double make a difference? Or is that micro-optimising that won't affect anything? Keep in mind I might be using large matrices. 2nd: should I consider redesigning the whole program again?
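
    For scale, a sketch of what the inner loop looks like over a flat primitive array (a hypothetical layout, not the class above): no Double objects are created, unwrapped or re-wrapped per element, which is exactly the overhead an ArrayList of Double pays.

        double[] data = new double[rows * cols];   // one contiguous block, no boxing

        void multiply(final double val) {
            for (int i = 0; i < rows; i++) {
                for (int j = 0; j < cols; j++) {
                    data[i * cols + j] *= val;     // pure primitive arithmetic
                }
            }
        }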

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with float. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be assured by simply casting float to short, this fails miserably in Java, since on the bytecode level it results in F2I followed by I2S. So instead of a simple: int sample = (short) floatVal; I needed to resort to this ugly sequence: int sample = (int) floatVal; if (sample > 32767) { sample = 32767; } else if (sample < -32768) { sample = -32768; } Is there a faster way to do this? (About ~6% of the total runtime seems to be spent on casting; while 6% may not seem like much at first glance, it's astounding when I consider that the processing part involves a good chunk of matrix multiplications and IDCT.) EDIT The cast/clipping code above is (not surprisingly) in the body of a loop that reads float values from a float[] and puts them into a byte[]. I have a test suite that measures total runtime on several test cases (processing about 200MB of raw audio data). The 6% were concluded from the runtime difference when the cast assignment "int sample = (int) floatVal" was replaced by assigning the loop index to sample. EDIT @leopoldkot: I'm aware of the truncation in Java, as stated in the original question (F2I, I2S bytecode sequence). I only tried the cast to short because I assumed that Java had an F2S bytecode, which it unfortunately does not (coming originally from a 68K assembly background, where a simple "fmove.w FP0, D0" would have done exactly what I wanted).
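
    A sketch of the same clamp written with Math.min/Math.max, which the JIT can often compile without extra branches; whether it actually beats the if/else version here would have to be confirmed with the same test suite:

        int sample = Math.max(-32768, Math.min(32767, (int) floatVal));
        short pcm = (short) sample;   // safe now: the value is already within short range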

    Read the article

  • vertical align of some inline-block divs with different content

    - by Jan Möller
    I want to center some inline-block divs. I want to create a responsive design, so if the screen size is too small, the horizontal elements should sit under each other. How can I center them vertically, so they are side by side without a difference in height? (See fiddle.) Moreover, those elements should be vertically centered if the screen size is too small. http://jsfiddle.net/5dpRs/52/ CSS .repeat { display:inline-block; border-style:solid; border-width:2px; height:50px; width:50px; } #content { border-style:solid; border-width:2px; text-align:center; } HTML <div id="content"> <div class="repeat"> <p>hello</p> </div> <div class="repeat"> </div> </div> Thank you :)
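
    A small CSS sketch of one common fix, assuming the markup from the fiddle above: inline-block boxes line up on their text baselines by default, so boxes with different content end up at different heights; an explicit vertical-align keeps them level.

        .repeat {
            display: inline-block;
            vertical-align: middle;   /* or 'top'; stops baseline alignment from offsetting the boxes */
        }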

    Read the article

  • MySQL: Get average of time differences?

    - by Nebs
    I have a table called Sessions with two datetime columns: start and end. For each day (YYYY-MM-DD) there can be many different start and end times (HH:ii:ss). I need to find a daily average of all the differences between these start and end times. An example of a few rows would be: start: 2010-04-10 12:30:00 end: 2010-04-10 12:30:50 start: 2010-04-10 13:20:00 end: 2010-04-10 13:21:00 start: 2010-04-10 14:10:00 end: 2010-04-10 14:15:00 start: 2010-04-10 15:45:00 end: 2010-04-10 15:45:05 start: 2010-05-10 09:12:00 end: 2010-05-10 09:13:12 ... The time differences (in seconds) for 2010-04-10 would be: 50 60 300 5 The average for 2010-04-10 would be 103.75 seconds. I would like my query to return something like: day: 2010-04-10 ave: 103.75 day: 2010-05-10 ave: 72 ... I can get the time difference grouped by start date but I'm not sure how to get the average. I tried using the AVG function but I think it only works directly on column values (rather than the result of another aggregate function). This is what I have: SELECT TIME_TO_SEC(TIMEDIFF(end,start)) AS timediff FROM Sessions GROUP BY DATE(start) Is there a way to get the average of timediff for each start date group? I'm new to aggregate functions so maybe I'm misunderstanding something. If you know of an alternate solution please share. I could always do it ad hoc and compute the average manually in PHP but I'm wondering if there's a way to do it in MySQL so I can avoid running a bunch of loops. Thanks.
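
    A sketch of the query, using the table and column names above: AVG accepts any expression, so the TIME_TO_SEC difference can go straight inside it and be grouped by the start date.

        SELECT DATE(`start`) AS day,
               AVG(TIME_TO_SEC(TIMEDIFF(`end`, `start`))) AS ave
        FROM   Sessions
        GROUP  BY DATE(`start`);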

    Read the article

  • What should a self-taught programmer with no degree learn/read?

    - by sjbotha
    I am a self-taught programmer and I do not have any degrees. I started pretty young and I've got about 7 years of actual programming work experience. I believe I'm a pretty good programmer, but I admit that I have not played much with algorithms or delved into any really low-level aspects of programming, such as how compilers work. I have worked with other programmers with and without degrees. Some were good and some not; having a degree didn't seem to make any difference as to which pot they fell into. Since then I've come to realize that it does depend on the school where the degree is obtained. Some people suggest that you really should get a degree; that there are things you'll learn in the process that you won't learn in the real world. Of course there is personal growth and discipline learned from completing a task of that magnitude, but let's just concentrate on the technical knowledge. What would I have been taught in a GOOD CS course that would aid me today, and what can I read to fill the gap? I've heard the book "Algorithms" mentioned and I plan on reading that. What other books would you recommend? Edit: Clarification on 'actual work experience': Have worked for 2 small companies on teams with fewer than 5 people. About 2 years experience with Perl, Python, PHP, C, C++. About 5 years experience in Java, Applets, RMI, T-SQL, PL/SQL, VB6. 7 years experience in HTML, Javascript, bash, SQL. Most recently in Java designed and helped build an N-tier Java app with web frontend and RMI.

    Read the article

  • How best to associate code with events in jQuery?

    - by Ned Batchelder
    Suppose I have a <select> element on my HTML page, and I want to run some Javascript when the selection is changed. I'm using jQuery. I have two ways to associate the code with the element. Method 1: jQuery .change() <select id='the_select'> <option value='opt1'>One</option> <option value='opt2'>Two</option> </select> <script> $('#the_select').change(function() { alert('Changed!'); }); </script> Method 2: onChange attribute <select id='the_select' onchange='selectChanged();'> <option value='opt1'>One</option> <option value='opt2'>Two</option> </select> <script> function selectChanged() { alert('Changed!'); } </script> I understand the different modularity here: for example, method 1 keeps code references out of the HTML, but method 2 doesn't need to mention HTML ids in the code. What I don't understand is: are there operational differences between these two that would make me prefer one over the other? For example, are there edge-case browsers where one of these works and the other doesn't? Is the integration with jQuery better for either? Does the late-binding of the code to the event make a difference in the behavior of the page? Which do you pick and why?

    Read the article

  • Building Stored Procedure to group data into ranges with roughly equal results in each bucket

    - by Len
    I am trying to build one procedure to take a large amount of data and create 5 range buckets to display the data. The bucket ranges will have to be set according to the results. Here is my existing SP: GO /****** Object: StoredProcedure [dbo].[sp_GetRangeCounts] Script Date: 03/28/2010 19:50:45 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_GetRangeCounts] @idMenu int AS declare @myMin decimal(19,2), @myMax decimal(19,2), @myDif decimal(19,2), @range1 decimal(19,2), @range2 decimal(19,2), @range3 decimal(19,2), @range4 decimal(19,2), @range5 decimal(19,2), @range6 decimal(19,2) SELECT @myMin=Min(modelpropvalue), @myMax=Max(modelpropvalue) FROM xmodelpropertyvalues where modelPropUnitDescriptionID=@idMenu set @myDif=(@myMax-@myMin)/5 set @range1=@myMin set @range2=@myMin+@myDif set @range3=@range2+@myDif set @range4=@range3+@myDif set @range5=@range4+@myDif set @range6=@range5+@myDif select @myMin,@myMax,@myDif,@range1,@range2,@range3,@range4,@range5,@range6 select t.range as myRange, count(*) as myCount from ( select case when modelpropvalue between @range1 and @range2 then 'range1' when modelpropvalue between @range2 and @range3 then 'range2' when modelpropvalue between @range3 and @range4 then 'range3' when modelpropvalue between @range4 and @range5 then 'range4' when modelpropvalue between @range5 and @range6 then 'range5' end as range from xmodelpropertyvalues where modelpropunitDescriptionID=@idmenu) t group by t.range order by t.range This calculates the min and max value from my table, works out the difference between the two and creates 5 buckets. The problem is that if there is a small number of very high (or very low) values then the buckets will appear very distorted, as in these results... range1 2806 range2 296 range3 75 range5 1 Basically I want to rebuild the SP so it creates buckets with equal numbers of results in each. I have played around with some of the following approaches without quite nailing it... SELECT modelpropvalue, NTILE(5) OVER (ORDER BY modelpropvalue) FROM xmodelpropertyvalues - this creates a new column with either 1, 2, 3, 4 or 5 in it ROW_NUMBER() OVER (ORDER BY modelpropvalue) between @range1 and @range2 ROW_NUMBER() OVER (ORDER BY modelpropvalue) between @range2 and @range3 or maybe I could allocate every record a row number and then divide it into ranges from that?
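
    A sketch built on the NTILE(5) idea already mentioned, reusing the table and parameter names from the procedure above: grouping by the tile number gives five buckets with as near as possible equal row counts, and MIN/MAX recover each bucket's boundaries.

        SELECT t.bucket,
               COUNT(*)              AS myCount,
               MIN(t.modelpropvalue) AS rangeFrom,
               MAX(t.modelpropvalue) AS rangeTo
        FROM (
            SELECT modelpropvalue,
                   NTILE(5) OVER (ORDER BY modelpropvalue) AS bucket
            FROM xmodelpropertyvalues
            WHERE modelPropUnitDescriptionID = @idMenu
        ) t
        GROUP BY t.bucket
        ORDER BY t.bucket;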

    Read the article

  • Why does System.Net.Mail work in one part of my c#.net web app, but not in another?

    - by Marc
    I have a web application that is running on IIS within my company's domain, and it is being accessed via the intranet. I have this application sending out email based on some user actions. For example, it's a scheduling application in part, so if a task is completed, an email is sent out notifying other users of that. The problem is, the email works flawlessly in some cases, and not at all in others. I have a login.aspx page which sends out report emails when the page is loaded (it's loaded once a day via Windows Task Scheduler) - this always seems to work perfectly. I have an update page which is supposed to send email when text is entered and the "Update" button is clicked - this operation will fail most of the time. Both of these tasks use the same static overloaded method I wrote to send email using System.Net.Mail. I have tried using Gmail as my SMTP server (instead of our internal one), and get the same results. I investigated whether having the local SMTP Service running makes any difference, and it doesn't seem to. Besides, since C# is server-side code, shouldn't it only matter what's running on the server, and not the client? Please help me figure out what's wrong! Where should I look? What can I try?
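
    One low-risk diagnostic sketch (the server name and logging call are illustrative, not from the app): wrap the shared send helper in a try/catch so an SmtpException raised during the update page's postback ends up somewhere visible instead of being silently swallowed.

        try
        {
            using (var message = new MailMessage("noreply@example.com", to, subject, body))
            {
                new SmtpClient("smtp.example.com").Send(message);
            }
        }
        catch (SmtpException ex)
        {
            // hypothetical logger; the point is to record StatusCode and any InnerException
            Logger.Error("Mail send failed: " + ex.StatusCode, ex);
        }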

    Read the article

  • Visual Studio crashes consistently on web-related projects

    - by Traveling Tech Guy
    Hi, I have a brand new VS2010 installed on a Win2008R2 machine. I started getting this error when debugging a WCF service project: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." When I started developing a web site a week later, this became consistent - I can't debug it. The stack dump reads: at Microsoft.VisualStudio.WebHost.Host.ProcessRequest(Connection conn) at Microsoft.VisualStudio.WebHost.Server.OnSocketAccept(Object acceptedSocket) at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem() at System.Threading.ThreadPoolWorkQueue.Dispatch() at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback() I tried searching online, and some recommend turning off the "Suppress JIT Optimizations" option in the Debugging settings - this does not seem to make a difference. Clearly the problem is with the built-in web server. But am I doing something wrong? Is there something I can do? Or is this a known bug? Thanks for your time, Guy Update 12/31: Today I tried using CassiniDev as a replacement for the original VS2010 web server - exact same result. My suspicion is that there's some internal conflict between VS2010, Windows Server 2008R2 and maybe the fact that it's a 64-bit OS. I switched to using IIS as my debug server - and that seems to work, with some annoying side effects. My conclusion: do not use a 64-bit server system as your dev machine. Develop on 32-bit - deploy to 64-bit. Side conclusion: there are some scenarios Microsoft's QA doesn't test.

    Read the article

  • Strange build issue using Flex Mojo. Looking for troubleshooting suggestions.

    - by WeeJavaDude
    I have run into a strange issue and I was hoping for some suggestions on how to attack the problem. Here is the environment. 1) We develop locally using Flex Builder. 2) We use QuickBuild with FlexMojo 3.4.2 for test builds and production. 3) In both cases we don't believe optimization is enabled. What we are seeing is some strange behavior relating to the Ctrl-Enter key when testing on IE, only in our test environment and not locally. By copying some files over locally I have narrowed the issue down to differences in the SWF files. We do see a difference in the size of the SWF files in our test environment vs. our local environments. A couple of things that would help me in troubleshooting would be: 1) Is there a way to know what exactly is in the SWF file? What SWCs are included. 2) How does one compare compile settings between a Maven Mojo configuration and the Flex IDE environment? Any thoughts or opinions would be very helpful.

    Read the article

  • Execute process conditionally in Windows PowerShell (e.g. the && and || operators in Bash)

    - by Dustin
    I'm wondering if anybody knows of a way to conditionally execute a program depending on the exit success/failure of the previous program. Is there any way for me to execute a program2 immediately after program1 if program1 exits successfully without testing the LASTEXITCODE variable? I tried the -band and -and operators to no avail, though I had a feeling they wouldn't work anyway, and the best substitute is a combination of a semicolon and an if statement. I mean, when it comes to building a package somewhat automatically from source on Linux, the && operator can't be beaten: # Configure a package, compile it and install it ./configure && make && sudo make install PowerShell would require me to do the following, assuming I could actually use the same build system in PowerShell: # Configure a package, compile it and install it .\configure ; if ($LASTEXITCODE -eq 0) { make ; if ($LASTEXITCODE -eq 0) { sudo make install } } Sure, I could use multiple lines, save it in a file and execute the script, but the idea is for it to be concise (save keystrokes). Perhaps it's just a difference between PowerShell and Bash (and even the built-in Windows command prompt which supports the && operator) I'll need to adjust to, but if there's a cleaner way to do it, I'd love to know.
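
    A rough PowerShell sketch of one way to get close to the && feel without repeating the $LASTEXITCODE test inline (the helper name is made up):

        function Invoke-Chain {
            param([scriptblock[]] $Steps)
            foreach ($step in $Steps) {
                & $step
                if ($LASTEXITCODE -ne 0) { break }   # stop at the first failing native command
            }
        }

        Invoke-Chain { .\configure }, { make }, { sudo make install }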

    Read the article

  • Initializing objects on the fly

    - by pocoa
    I have a vector called players and a class called Player. And what I'm trying to do is to write: players.push_back(Player(name, Weapon(bullets))); So I want to be able to create players in a loop. But I see an error message that says "no matching function for call Player::Player..." Then I've changed that to: Weapon w(bullets); Player p(name, w); players.push_back(p); Here is my Player definition: class Player { public: Player(string &name, Weapon &weapon); private: string name; Weapon weapon; }; I'm just trying to learn what the difference is between these definitions, and whether this is the right way to pass an object to an object constructor. Note: These are not my actual class definitions. I'm just trying to learn something about object oriented programming in C++ by coding it. I mean, I know that Weapon should be initialized in Player :)
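
    A sketch of the likely fix, assuming the real class resembles the simplified one above: the constructor takes non-const references, and temporaries such as Weapon(bullets) cannot bind to those, so switching to const references lets the one-line push_back compile.

        class Player {
        public:
            Player(const std::string &name, const Weapon &weapon)
                : name(name), weapon(weapon) {}   // copies the temporaries into the members
        private:
            std::string name;
            Weapon weapon;
        };

        players.push_back(Player(name, Weapon(bullets)));   // temporaries now bind fine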

    Read the article

  • Appending facts into an existing prolog file.

    - by vuj
    Hi, I'm having trouble inserting facts into an existing Prolog file without overwriting the original contents. Suppose I have a file test.pl: :- dynamic born/2. born(john,london). born(tim,manchester). If I load this in Prolog and I assert more facts: | ?- assert(born(laura,kent)). yes I'm aware I can save this by doing: |?- tell('test.pl'),listing(born/2),told. Which works, but test.pl now only contains the facts, not the ":- dynamic born/2.": born(john,london). born(tim,manchester). born(laura,kent). This is problematic because if I reload this file, I won't be able to insert any more facts into test.pl, because ":- dynamic born/2." doesn't exist anymore. I read somewhere that I could do: append('test.pl'),listing(born/2),told. which should just append to the end of the file; however, I get the following error: ! Existence error in user:append/1 ! procedure user:append/1 does not exist ! goal: user:append('test.pl') Btw, I'm using SICStus Prolog. Does this make a difference? Thanks!
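
    A small sketch of the append-mode alternative (the predicate name is made up; open/3 with the append mode and portray_clause/2 are standard in SICStus): the new clause is written at the end of the file and the existing ":- dynamic born/2." directive is left untouched.

        save_born(Name, City) :-
            open('test.pl', append, Stream),
            portray_clause(Stream, born(Name, City)),   % writes a clause like "born(laura, kent)."
            close(Stream).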

    Read the article

  • Why is the NTP settle time too long?

    - by fercis
    I am simply trying to set up NTP on my system. The server and the clients run on machines that are linked together over a local link. One of them will have the reference clock. Both the server and the client are Linux Ubuntu. I installed the NTP daemon on both sides. On the clients, I enter the IP address of the server in /etc/ntp.conf. Everything works fine. However, it takes too long (around 17 minutes) for the client to set its clock to the correct time. Is it possible to get the correct time right at startup? I wrote some code that regularly calls "ntpdate" via a system call and the problem is solved, but there has to be something that allows me to decrease the poll time of the client to 1-2 minutes. There are settings such as maxpoll and minpoll in ntp.conf, but I couldn't manage to understand their function, because even with the best configuration (minpoll 4? 16 seconds?) the client does not correct its time before 10 minutes have passed. In addition, in some cases my client is an embedded system (ARM - IGEP board) and it always boots with an irrelevant date (2-3 years ago). So the time it takes to correct the time should not depend on the time difference either.
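
    A hedged ntp.conf sketch (the server address is a placeholder): adding iburst to the server line makes ntpd send a quick burst of packets at startup, which typically brings the first synchronisation down from many minutes to a few seconds, whereas minpoll/maxpoll only shape the steady-state polling interval.

        server 192.168.1.10 iburst    # burst of packets at start-up for a fast initial sync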

    Read the article

  • Private member vector of vector dynamic memory allocation

    - by Geoffroy
    Hello, I'm new to C++ (I learned programming with Fortran), and I would like to allocate the memory for a multidimensional table dynamically. This table is a private member variable: class theclass{ public: void setdim(void); private: std::vector < std::vector <int> > thetable; }; I would like to set the dimensions of thetable with the function setdim(). void theclass::setdim(void){ this->thetable.assign(1000,std::vector <int> (2000)); } I have no problem compiling this program, but when I execute it, I get a segmentation fault. The strange thing for me is that this piece (see under) of code does exactly what I want, except that it doesn't use the private member variable of my class: std::vector < std::vector < int > > thetable; thetable.assign(1000,std::vector <int> (2000)); By the way, I have no trouble if thetable is a 1D vector. In theclass: std::vector < int > thetable; and if in setdim: this->thetable.assign(1000,2); So my question is: why is there such a difference with "assign" between thetable and this->thetable for a 2D vector? And what should I do to get what I want? Thank you for your help, Best regards, -- Geoffroy
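
    A self-contained sketch of the same member assignment, which compiles and runs cleanly; that suggests the assign call itself is fine and the crash is more likely to come from how the theclass object is created or reached when setdim() is called (for example through an invalid pointer).

        #include <vector>

        class theclass {
        public:
            void setdim() { thetable.assign(1000, std::vector<int>(2000)); }
        private:
            std::vector< std::vector<int> > thetable;
        };

        int main() {
            theclass t;
            t.setdim();   // fills a 1000 x 2000 table without issue
            return 0;
        }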

    Read the article

  • PropertyPlaceholderConfigurer vs Filters -- Spring Beans

    - by John
    Hi there. I've got a question regarding the difference between PropertyPlaceholderConfigurer (org.springframework.beans.factory.config.PropertyPlaceholderConfigurer) and normal filters defined in my pom.xml. I've been looking at examples, and it seems that even though filters are defined and marked to be active by default in the pom.xml, they still make use of PropertyPlaceholderConfigurer in Spring's applicationContext.xml. This means that the pom.xml has a reference to a filter-LOCAL.properties while applicationContext.xml has a reference to application.properties, and they both contain the same settings. Why is that? Is that how it is supposed to be done? I'm able to run the goal mvn jetty:run without the application.properties present, but if I add settings to the application.properties that differ from the filter-LOCAL.properties they don't seem to override. Here's an example of what I mean: pom.xml <profiles> <profile> <id>LOCAL</id> <activation> <activeByDefault>true</activeByDefault> </activation> <properties> <env>LOCAL</env> </properties> </profile> </profiles> applicationContext.xml <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:application.properties</value> </list> </property> <property name="ignoreResourceNotFound" value="true"/> </bean> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="${jdbc.driver}"/> <property name="url" value="${jdbc.url}"/> <property name="username" value="${jdbc.username}"/> <property name="password" value="${jdbc.password}"/> </bean> An example of the content of application.properties and filters-LOCAL.properties: jdbc.driver=org.postgresql.Driver jdbc.url=jdbc:postgresql://localhost/shoutbox_dev jdbc.username=tester jdbc.password=tester Can I remove the propertyConfigurer from the applicationContext, create a PROD filter and disregard the application.properties file, or will that give me issues when deploying to the production server?

    Read the article

  • How can I pass a const array or a variable array to a function in C?

    - by CSharperWithJava
    I have a simple function Bar that uses a set of values from a data set that is passed in in the form of an array of data structures. The data can come from two sources: a constant initialized array of default values, or a dynamically updated cache. The calling function determines which data is used and should be passed to Bar. Bar doesn't need to edit any of the data and in fact should never do so. How should I declare Bar's data parameter so that I can provide data from either set? union Foo { long _long; int _int; }; static const Foo DEFAULTS[8] = {1,10,100,1000,10000,100000,1000000,10000000}; static Foo Cache[8] = {0}; void Bar(Foo* dataSet, int len); //example function prototype Note, this is C, NOT C++, if that makes a difference. Edit: Oh, one more thing. When I use the example prototype I get a type-qualifier mismatch warning (because I'm passing a mutable reference to a const array?). What do I have to change for that?
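
    A sketch of the usual answer, reusing the declarations above (with a typedef added so the Foo name compiles as plain C): since Bar never modifies the data, it takes a pointer to const, and both the const defaults and the mutable cache convert to that pointer type without a qualifier warning.

        typedef union Foo { long _long; int _int; } Foo;

        static const Foo DEFAULTS[8] = {{1}, {10}, {100}, {1000},
                                        {10000}, {100000}, {1000000}, {10000000}};
        static Foo Cache[8] = {{0}};

        /* Pointer-to-const parameter: a promise not to modify the elements. */
        void Bar(const Foo *dataSet, int len);

        /* Both calls are then fine:
               Bar(DEFAULTS, 8);    const data, no warning
               Bar(Cache, 8);       mutable data converts to const Foo * implicitly  */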

    Read the article

  • atol(), atof(), atoi() function behaviours, is there a stable way to convert from/to string/integer

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(). From a blog post I found a tutorial and applied it; here are my results: void main() { char a[10],b[10]; puts("Enter the value of a"); gets(a); puts("Enter the value of b"); gets(b); printf("%s+%s=%ld and %s-%s=%ld",a,b,(atol(a)+atol(b)),a,b,(atol(a)-atol(b))); getch(); } There is atof(), which returns the float value of the string, and atoi(), which returns the integer value. Now, to see the difference between the three I checked this code: main() { char a[]={"2545.965"}; printf("atol=%ld\t atof=%f\t atoi=%d\t\n",atol(a),atof(a),atoi(a)); } The output will be atol=2545 atof=2545.965000 atoi=2545 char a[]={"heyyou"}; Now when you run the program the following will be the output (why? is there any solution to convert pure strings to integer?): atol=0 atof=0 atoi=0 The string should contain a numeric value. Now modify this program as char a[]={"007hey"}; The output in this case (tested on Red Hat) will be atol=7 atof=7.000000 atoi=7 so the functions have taken only the 007, not the remaining part (why?). Now consider this: char a[]={"hey007"}; The output of the program will be atol=0 atof=0.000000 atoi=0 So I just want to convert my strings to numbers and then back to the same text. I played with these functions and as you can see I'm getting really interesting results - why is that? Are there any other functions to convert from/to string/integer and vice versa?
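
    A small C sketch of the usual alternative: strtol() reports where parsing stopped, so "heyyou", "007hey" and "hey007" can all be told apart instead of silently collapsing to 0 or 7; for the reverse direction, snprintf() formats a number back into a string.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            const char *inputs[] = { "2545.965", "heyyou", "007hey", "hey007" };
            char buf[32];
            for (int i = 0; i < 4; i++) {
                char *end;
                long v = strtol(inputs[i], &end, 10);
                if (end == inputs[i])
                    printf("'%s': no leading number\n", inputs[i]);
                else
                    printf("'%s': value %ld, unparsed tail '%s'\n", inputs[i], v, end);
            }
            snprintf(buf, sizeof buf, "%ld", 2545L);   /* number back to string */
            printf("back to text: %s\n", buf);
            return 0;
        }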

    Read the article
