Search Results

Search found 14639 results on 586 pages for 'coding environment'.

Page 509/586 | < Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >

  • Call ASP.NET 2.0 server-side code from JavaScript

    - by Kannabiran
    I've been struggling with this for the past three days. I need to call ASP.NET server-side code from JavaScript when the user closes the browser, using the code below. My ASP.NET form has various validation controls. Even when there are validation errors, closing the form runs the server-side code perfectly on my development box (Windows 7), but the same code doesn't work in my production environment (Windows Server). Does it have something to do with the ValidationSummary or the validation controls? The button control has CausesValidation set to false, so the form should still post back even when there is a validation error - am I correct? I suspect the form is not posting back to the server when there is a validation error, but I disable all the validation controls in JavaScript before calling the button click event. Can someone shed some light on this issue? A few blogs suggest using jQuery or AJAX (page methods and a ScriptManager).

      function ConfirmClose(e) {
          var evtobj = window.event ? event : e;
          if (evtobj == e) { // Firefox
              if (!evtobj.clientY) {
                  evtobj.returnValue = message;
              }
          } else { // IE
              if (evtobj.clientY < 0) {
                  DisablePageValidators();
                  document.getElementById('<%# buttonBrowserCloseClick.ClientID %>').click();
              }
          }
      }

      function DisablePageValidators() {
          if ((typeof (Page_Validators) != "undefined") && (Page_Validators != null)) {
              var i;
              for (i = 0; i < Page_Validators.length; i++) {
                  ValidatorEnable(Page_Validators[i], false);
              }
          }
      }

      // HTML
      <div style="display:none">
          <asp:Button ID="buttonBrowserCloseClick" runat="server"
              onclick="buttonBrowserCloseClick_Click" Text="Button"
              Width="141px" CausesValidation="False" />
      </div>

      // Server code
      protected void buttonBrowserCloseClick_Click(object sender, EventArgs e)
      {
          // Some C# code goes here
      }
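    One route the blogs describe, sketched here only as an untested outline: an ASP.NET AJAX page method bypasses the hidden-button postback (and therefore the validators) entirely. The method name and argument below are placeholders.

      // Code-behind: a public static page method the client script can call
      [System.Web.Services.WebMethod]
      public static void BrowserClosed(string info)
      {
          // same work that buttonBrowserCloseClick_Click does today
      }

      <%-- Markup: page methods must be switched on --%>
      <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true" />

      // JavaScript: call it instead of clicking the hidden button
      function ConfirmClose(e) {
          PageMethods.BrowserClosed(window.location.href);
      }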

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server. Pushing to the staging server: as noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that checks the bare repo out into a non-bare working copy that also acts as the web-hosted directory. Pushing to the production server: here's my newest source of confusion. The response I cited above made me curious as to why @Paul states that it's a completely different story when pushing to a live, production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls? Config files: with respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions? Accessing data: finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
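    For the staging push, a minimal sketch of the kind of post-receive hook described above (paths and branch name are assumptions; the bare repo lives outside the web root and the hook checks the release branch out into the hosted directory):

      #!/bin/sh
      # hooks/post-receive in the bare repo on the staging server
      GIT_WORK_TREE=/var/www/staging git checkout -f release

    The same hook with the master branch and the production document root is the usual pattern for the production push as well.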

    Read the article

  • How to disable server-side caching on IIS 7.5 (ASP.NET MVC 3)

    - by troebr
    I'm struggling with my IIS setup regarding caching; here's a brief description of my problem. I'm building a site for mobile and non-mobile devices, sharing the same controllers, i.e. mysite/page serves either mysite/page.cshtml or mysite/M/page.cshtml depending on the device. Here's the catch: it works fine in my local and integration environments (Cassini and IIS 6), but on another machine (2008 R2 / IIS 7.5) there is apparently an aggressive server-side caching policy. If I access the website from a desktop machine, I get the correct pages (desktop version). If I then use my mobile phone to access the site, I still get the desktop version (which implies a server-side cache; my phone is not on the same network). Conversely, if I restart the server and access the site from my phone first, I get the mobile version on my desktop (only for the pages I already visited, of course). I have tried two solutions so far: disabling OutputCache in my Web.config:

      <httpModules>
          [..]
          <remove name="OutputCache" />
      </httpModules>

    and unchecking "Enable output cache" under "Output Caching" for my site in IIS. What's bugging me is that I do not have this problem on my other server (IIS 6.0), even though caching is enabled there, which leads me to think it is related to the caching additions in IIS 7. My question is simple: how does one disable server-side caching on IIS 7.5? Thanks in advance for your IIS lights!
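    A sketch of one thing to try at the application level rather than in IIS (hedged; it assumes the stale pages are coming out of ASP.NET's output cache): switch output caching off per action, or vary it by something that distinguishes mobile from desktop requests.

      // MVC 3 controller action: no output caching for this response
      [OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
      public ActionResult Page()
      {
          return View();
      }

    If caching is actually wanted, VaryByCustom with a custom string derived from the user agent (via an override of GetVaryByCustomString in Global.asax) keeps separate mobile and desktop cache entries instead.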

    Read the article

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to Boost and thought I'd try it out with some realistic deployment scenarios for the DLLs, so I used the following command to compile/install the libraries:

      .\bjam install --layout=system variant=debug runtime-link=shared link=shared
          --with-date_time --with-thread --with-regex --with-filesystem
          --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt

    That seemed to work, but my simple program (taken right from the "Getting Started" page) fails:

      #include <boost/regex.hpp>
      #include <iostream>
      #include <string>
      // Place your functions after this line
      int main()
      {
          std::string line;
          boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );
          while (std::cin)
          {
              std::getline(std::cin, line);
              boost::smatch matches;
              if (boost::regex_match(line, matches, pat))
                  std::cout << matches[2] << std::endl;
          }
      }

    It fails with the following linker error:

      fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib'

    I'm sure that both the .lib and the DLLs are in that directory and named how I want them to be (i.e. boost_regex.lib, etc., all unversioned, as --layout=system specifies). So why is it looking for the versioned form of the library, and how do I get it to look for the unversioned one? I've tried this with more "normal" options, such as below:

      .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt

    and that works fine. I made sure my compiler saw the stage\lib directory, and it compiled and ran fine with nothing beyond the environment pointing at the right lib directory. But when I took those "testing" directories away and wanted to use the unversioned ones, it failed. I'm on VS2005 here on XP. Any ideas?
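    A sketch of one likely explanation (an assumption, since only the linker error is visible): the versioned name does not come from bjam but from MSVC auto-linking, where the Boost headers emit #pragma comment(lib, ...) directives that always use the versioned naming convention. Turning auto-linking off and naming the library yourself matches a --layout=system install:

      // Define before any Boost header (or add BOOST_ALL_NO_LIB to the
      // project's preprocessor definitions) to stop the headers from
      // requesting libboost_regex-vc80-mt-1_42.lib automatically.
      #define BOOST_ALL_NO_LIB
      #include <boost/regex.hpp>

      // ...then add boost_regex.lib explicitly under
      // Linker -> Input -> Additional Dependencies.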

    Read the article

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written under a running project that extends an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored-procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" changes and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a test data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are already tired just figuring out how to start, because our experience with unit testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).
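    One commonly suggested starting point, offered here only as a sketch: a PL/SQL unit-testing framework such as utPLSQL lets tests live in packages beside the code and assert on the result of a single function, which sidesteps some of the deployment issues as long as each test creates the handful of rows it needs. All names below are invented.

      create or replace package test_order_totals as
        --%suite(order totals)

        --%test(returns the discounted total for a known order)
        procedure discounted_total;
      end test_order_totals;
      /
      create or replace package body test_order_totals as
        procedure discounted_total is
        begin
          -- insert the few rows the function needs here...
          ut.expect(order_pkg.get_total(p_order_id => 42)).to_equal(100);
          -- ...the framework can roll the test's transaction back afterwards
        end discounted_total;
      end test_order_totals;
      /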

    Read the article

  • Sweave can't see a vector if run from a function?

    - by PaulHurleyuk
    I have a function that sets a vector to a string, copies a Sweave document under a new name and then runs that Sweave document. Inside the Sweave document I want to use the vector I set in the function, but Sweave doesn't seem to see it. (Edit: I changed this function to use tempdir() as suggested by Dirk.) I created a Sweave file test_sweave.Rnw:

      %
      \documentclass[a4paper]{article}
      \usepackage[OT1]{fontenc}
      \usepackage{Sweave}
      \begin{document}
      \title{Test Sweave Document}
      \author{gb02413}
      \maketitle
      <<>>=
      ls()
      Sys.time()
      print(paste("The chosen study was ",chstud,sep=""))
      @
      \end{document}

    and I have this function:

      onOK <- function(){
          chstud <- "test"
          message(paste("Chosen Study is ",chstud,sep=""))
          newfile <- paste(chstud,"_report",sep="")
          mypath <- paste(tempdir(),"\\",sep="")
          setwd(mypath)
          message(paste("Copying test_sweave.Rnw to ",paste(mypath,newfile,".Rnw",sep=""),sep=""))
          file.copy("c:\\local\\test_sweave.Rnw",
                    paste(mypath,newfile,".Rnw",sep=""), overwrite=TRUE)
          Sweave(paste(mypath,newfile,".Rnw",sep=""))
          require(tools)
          texi2dvi(file = paste(mypath,newfile,".tex",sep=""), pdf = TRUE)
      }

    If I run the code from the function directly, the resulting file has this output for ls():

      > ls()
      [1] "chstud"  "mypath"  "newfile" "onOK"

    However, if I call onOK() I get this output:

      > ls()
      [1] "onOK"

    and the print(...chstud...) call generates an error. I suspect this is an environment problem, but I assumed that because the call to Sweave occurs within the onOK function, it would run in the same environment and would see all the objects created within the function. How can I get the Sweave process to see the chstud vector? Thanks, Paul.
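    A sketch of one workaround, based on the fact that classic Sweave evaluates code chunks in the global environment rather than in the frame of the function that calls Sweave(); exporting the value first makes it visible to the chunk:

      onOK <- function(){
          chstud <- "test"
          # Sweave() evaluates the <<>>= chunks in the global environment, not in
          # onOK()'s frame, so export the value before kicking off the report.
          assign("chstud", chstud, envir = .GlobalEnv)
          newfile <- paste(chstud, "_report", sep = "")
          mypath  <- paste(tempdir(), "\\", sep = "")
          file.copy("c:\\local\\test_sweave.Rnw",
                    paste(mypath, newfile, ".Rnw", sep = ""), overwrite = TRUE)
          Sweave(paste(mypath, newfile, ".Rnw", sep = ""))
      }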

    Read the article

  • Pointers, am I doing them correctly? Objective-C/Cocoa

    - by Chris
    I have this in my @interface:

      struct track currentTrack;
      struct track previousTrack;
      int anInt;

    Since these are not objects, I do not have to declare them like int* anInt, right? And when setting non-object values like ints, booleans, etc., I do not have to release the old value, right (assuming a non-GC environment)? The struct contains objects:

      typedef struct track {
          NSString* theId;
          NSString* title;
      } *track;

    Am I doing that correctly? Lastly, I access the struct like this:

      [currentTrack.title ...];
      currentTrack.theId = @"asdf"; // LINE 1

    I'm also manually managing the memory (from a setter) for the struct like this:

      [currentTrack.title autorelease];
      currentTrack.title = [newTitle retain];

    If I understand garbage collection correctly, I should be able to ditch that and just set it as in LINE 1 (above)? Also with garbage collection, I don't need a dealloc method, right? If I use garbage collection, does that mean the app only runs on OS X 10.5+? And is there anything else I should know before I switch to garbage-collected code? Sorry there are so many questions; I'm very new to Objective-C and desktop programming. Thanks

    Read the article

  • What to use for version control with Visual Studio 2008 for inhouse projects?

    - by Boog
    We want to put a number of our in-house projects under version control. Our projects are C# .NET applications and assemblies. We originally decided to go Microsoft all the way (as is the norm around here) and tried installing Visual Studio Team Foundation Server. To say the least, it was far more trouble to get a successful install than it's worth, and I'm afraid we don't have an entire server to dedicate to TFS itself, as the installer seems to insist on. We also considered a generic solution like Subversion or CVS, or possibly some kind of free online hosting that doesn't make our source publicly available under a license, the way Google Code appears to. Does anyone have any suggestions that would best fit our in-house MS environment? We'd also get some benefit out of project-management tools if they were nicely integrated into the solution, but this would only be a perk. I should also mention that we haven't entirely ruled out TFS, but it's looking like a pain, so anything you have to say for or against it would be helpful.

    Read the article

  • How to add deploy.jar to classpath?

    - by dma_k
    I am facing this problem: I need to add the ${java.home}/lib/deploy.jar JAR file to the classpath at runtime (dynamically, from Java). The solution with Thread#setContextClassLoader(ClassLoader) (mentioned here) does not work because of this bug (if somebody can explain what the real problem is, you are welcome to). The solution with -Xbootclasspath/a:"%JAVA_HOME%/jre/lib/deploy.jar" does not work well for me, because I want to have a "pure executable jar" as a deliverable: no wrapper scripts, please (moreover, %JAVA_HOME% may not be defined in the user's environment on Windows, for example, plus I would need to write a script per platform). The solution of merging the deploy.jar file into my deliverable works only if I make the build on Windows. Unfortunately, when the deliverable is produced on a build server running Linux, I get a Linux-dependent JAR which does not execute on Windows; it fails with the trace below. I have read the "How the Java Launcher Finds Classes" and "Java programming dynamics: Java classes and class loading" articles, but I've got no extra ideas on how to correctly handle this situation. Any advice or solutions are very welcome. Trace:

      java.lang.NoClassDefFoundError: Could not initialize class com.sun.deploy.config.Config
          at com.sun.deploy.net.proxy.UserDefinedProxyConfig.getBrowserProxyInfo(UserDefinedProxyConfig.java:43)
          at com.sun.deploy.net.proxy.DynamicProxyManager.reset(DynamicProxyManager.java:235)
          at com.sun.deploy.net.proxy.DeployProxySelector.reset(DeployProxySelector.java:59)
          ...
      java.lang.NullPointerException
          at com.sun.deploy.net.proxy.DynamicProxyManager.getProxyList(DynamicProxyManager.java:63)
          at com.sun.deploy.net.proxy.DeployProxySelector.select(DeployProxySelector.java:166)
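    A sketch of a further option, under the assumption that the deploy classes only need to be reachable from your own code (class and method names here are placeholders): locate deploy.jar relative to the running JRE and load it through a child class loader instead of the boot or application class path. Whether com.sun.deploy.config.Config will then initialize outside a full JRE/plugin environment is a separate question.

      import java.io.File;
      import java.net.URL;
      import java.net.URLClassLoader;

      public final class DeployJarLoader {
          public static Class<?> loadDeployClass(String className) throws Exception {
              // ${java.home}/lib/deploy.jar of the JRE this process runs on
              File jar = new File(System.getProperty("java.home"), "lib/deploy.jar");
              URLClassLoader loader = new URLClassLoader(
                      new URL[] { jar.toURI().toURL() },
                      DeployJarLoader.class.getClassLoader());
              // Resolve com.sun.deploy.* classes through the child loader
              return Class.forName(className, true, loader);
          }
      }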

    Read the article

  • rabbitmq-erlang-client, using rebar-friendly pkg, works in dev env, fails on rebar release

    - by lfurrea
    I am successfully using the rebar-friendly package of rabbitmq-erlang-client for a simple, rebarized, OTP-"compliant" Hello World app, and things work fine in the dev environment. I am able to fire up an erl console, run application:start(helloworld). and connect to the broker, open a channel and talk to queues. However, when I then run rebar generate it builds the release just fine, but when I try to start from the self-contained release package things suddenly explode. I know rebar releases are known to be an obscure art, but I would like to know what my options are as far as deployment goes for an app using rabbitmq-erlang-client. Below is the console output of the crash:

      =INFO REPORT==== 18-Dec-2012::16:41:35 ===
          application: session_record
          exited: {{{badmatch,
                     {error,
                      {'EXIT',
                       {undef,
                        [{amqp_connection_sup,start_link,
                          [{amqp_params_network,<<"guest">>,<<"guest">>,<<"/">>,
                            "127.0.0.1",5672,0,0,0,infinity,none,
                            [#Fun<amqp_auth_mechanisms.plain.3>,
                             #Fun<amqp_auth_mechanisms.amqplain.3>],
                            [],[]}],
                          []},
                         {supervisor2,do_start_child_i,3,
                          [{file,"src/supervisor2.erl"},{line,391}]},
                         {supervisor2,handle_call,3,
                          [{file,"src/supervisor2.erl"},{line,413}]},
                         {gen_server,handle_msg,5,
                          [{file,"gen_server.erl"},{line,588}]},
                         {proc_lib,init_p_do_apply,3,
                          [{file,"proc_lib.erl"},{line,227}]}]}}}},
                   [{amqp_connection,start,1,
                     [{file,"src/amqp_connection.erl"},{line,164}]},
                    {hello_qp,start_link,0,[{file,"src/hello_qp.erl"},{line,10}]},
                    {session_record_sup,init,1,
                     [{file,"src/session_record_sup.erl"},{line,55}]},
                    {supervisor_bridge,init,1,
                     [{file,"supervisor_bridge.erl"},{line,79}]},
                    {gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},
                    {proc_lib,init_p_do_apply,3,
                     [{file,"proc_lib.erl"},{line,227}]}]},
                  {session_record_app,start,[normal,[]]}}
          type: permanent
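    The undef on amqp_connection_sup:start_link suggests (an assumption, not a confirmed diagnosis) that the amqp_client and rabbit_common applications are on the code path in the dev shell but were never packaged into the generated release. A sketch of the relevant part of reltool.config:

      %% reltool.config (sketch) -- make the client libraries part of the release
      {sys, [
        {rel, "session_record", "1",
         [kernel, stdlib, sasl, rabbit_common, amqp_client, session_record]},
        {app, rabbit_common, [{incl_cond, include}]},
        {app, amqp_client,   [{incl_cond, include}]}
      ]}.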

    Read the article

  • Identify server that made call to web service

    - by sleepybobos
    I am working within an intranet environment. We have both a production and a development SharePoint server (WSS 3). We have a third-party workflow product which runs on top of SharePoint; it is installed on both the production and development SharePoint servers. The workflow product can call web services I have written, which are hosted on our web server. How would I have the web services determine which SharePoint server made the call, be it the production or the development server? I would then use this information to serve server-specific information from web.config, a database, etc. Currently the site hosting the web services is set up to allow anonymous access, so code such as System.Web.HttpContext.Current.User.Identity.Name returns an empty string. If Windows authentication is used, it returns the identity of the currently logged-in user, which is of no use in identifying the server the call was made from. I need a push in the right direction to address what I believe is probably a common scenario, please.
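    A minimal sketch of one direction, assuming the two SharePoint servers call the web service from known, fixed addresses (the setting names below are made up): compare the caller's address against configured values, which works even under anonymous access.

      [WebMethod]
      public string GetEnvironmentSpecificSetting()
      {
          // Address of the machine that made the HTTP call
          string callerIp = HttpContext.Current.Request.UserHostAddress;

          string productionIp = ConfigurationManager.AppSettings["ProductionSharePointIp"];
          bool isProduction = string.Equals(callerIp, productionIp);

          return isProduction
              ? ConfigurationManager.AppSettings["ProductionConnectionString"]
              : ConfigurationManager.AppSettings["DevelopmentConnectionString"];
      }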

    Read the article

  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread and thought I would reproduce it here with my solution: what if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined against it? (snide comment about the original designer being braindead in the original :) ) This is a 15M-row table on a live database, SQL Server Standard, so dropping indexes is out of the question; even temporarily dropping the foreign key constraints would be difficult. I was curious whether anybody had a solution that causes minimal downtime. I tested this in our testing environment and found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered primary key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through all FKs to check constraint compliance. Then I just enable the CHECK constraints to enforce constraint checking from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken to cluster a 15M-row, 800MB index was ~4 minutes :)
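    A T-SQL sketch of the steps described above, with placeholder table, column and constraint names (one DROP/ADD pair per referencing table in the real script):

      -- 1. Drop each FK that references the parent
      ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

      -- 2. Swap the PK from non-clustered to clustered
      ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;
      ALTER TABLE dbo.Parent ADD CONSTRAINT PK_Parent
          PRIMARY KEY CLUSTERED (ParentId);

      -- 3. Re-create the FKs without revalidating existing rows...
      ALTER TABLE dbo.Child WITH NOCHECK
          ADD CONSTRAINT FK_Child_Parent
          FOREIGN KEY (ParentId) REFERENCES dbo.Parent (ParentId);

      -- 4. ...then enforce them again for future INSERTs/UPDATEs
      ALTER TABLE dbo.Child CHECK CONSTRAINT FK_Child_Parent;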

    Read the article

  • Trying not to need two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to work in both x86 and x64 environments. It is using Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.dll. This DLL is different depending on whether the system is x64 or x86. Currently I have two separate solutions and I am maintaining the code in both, which is atrocious. I was wondering what the proper solution is. I have my platform set to "Any CPU", and it is my understanding that VS should compile the assembly to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet if I reference the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to develop this program efficiently when it needs to work on x64. Thanks.
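    One way to keep a single solution, sketched as an assumption about how the project could be laid out (paths are placeholders): keep both copies of the assembly alongside the code and let the .csproj pick the reference by build platform, so only the platform of the build changes, not the code.

      <!-- In the .csproj: one reference per platform -->
      <ItemGroup Condition=" '$(Platform)' == 'x86' ">
        <Reference Include="Oracle.DataAccess">
          <HintPath>..\lib\x86\Oracle.DataAccess.dll</HintPath>
        </Reference>
      </ItemGroup>
      <ItemGroup Condition=" '$(Platform)' == 'x64' ">
        <Reference Include="Oracle.DataAccess">
          <HintPath>..\lib\x64\Oracle.DataAccess.dll</HintPath>
        </Reference>
      </ItemGroup>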

    Read the article

  • Google App Engine Java app couldn't find javac?

    - by Frank
    I'm learning to use Google App Engine. I installed it in NetBeans and the project works, but when I clicked "Deploy To Google App Engine" I got the following error:

      Beginning server interaction for ...
      0% Creating staging directory
      5% Scanning for jsp files.
      8% Compiling jsp files.
      11% Compiling java files.
      Error Details:
      Apr 20, 2010 3:51:23 PM org.apache.jasper.JspC processFile
      INFO: Built File: \PayPal_Monitor.jsp
      java.lang.IllegalStateException: cannot find javac executable based on java.home,
        tried "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
      Unable to update app: cannot find javac executable based on java.home,
        tried "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
      Please see the logs [C:\Users\NM\AppData\Local\Temp\appcfg3946701335172983337.log] for further information.

    The file javac.exe is in C:\Program Files (x86)\Java\jdk1.6.0_18\bin. How can I add it to java.home? I'm using Windows Vista, and I tried to add it from "System - Environment Variables", but there is no "java.home" there. Where can I find it? Frank
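    A note and a sketch, hedged because the exact setup is not visible here: java.home is a system property the running JVM sets to its own installation directory, not a Windows environment variable you can add, so the fix is to make the tooling run under the JDK instead of the JRE. For a default NetBeans install (the path below is an example):

      # In <NetBeans install dir>\etc\netbeans.conf
      netbeans_jdkhome="C:\Program Files (x86)\Java\jdk1.6.0_18"

    Setting the JAVA_HOME environment variable to the JDK directory (rather than the JRE) before launching the App Engine SDK from the command line is the equivalent fix outside the IDE.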

    Read the article

  • Automatic Deployment of Windows Application

    - by dileepkrishnan
    Hi, we have set up continuous integration in our development environment using SVN, CC.Net, MSBuild and NUnit. Now we want to automate the process of moving (copying) builds from one stage to another, like this: whenever a new build succeeds in Dev, it should be copied automatically to the QA server (a folder on the QA server, to be exact). Whenever a QA build passes its tests in QA, that QA build should be copied to the UAT server (a folder on the UAT server, to be exact); this should be implemented as a process (a CC.Net task, for example) which we can start when QA succeeds. Whenever a UAT build passes its tests in UAT, it should be copied to the PROD server (a folder on the PROD server, to be exact); again, this should be a process (a CC.Net task, for example) which we can start when UAT succeeds. How do I implement this? Can this be done using CC.Net alone? Or can it be done using MSBuild? Or do I need to employ both? Please advise what exactly needs to be done. Thanks, Dileep Krishnan
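    A sketch of one split that is often used (server shares and target names here are assumptions): let MSBuild own the copy itself as a target, and have CC.Net (or a person) trigger that target when the previous stage is green.

      <!-- promote.proj : run with
           msbuild promote.proj /t:PromoteToQA /p:BuildDropFolder=C:\builds\latest -->
      <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Target Name="PromoteToQA">
          <ItemGroup>
            <BuildOutput Include="$(BuildDropFolder)\**\*.*" />
          </ItemGroup>
          <Copy SourceFiles="@(BuildOutput)"
                DestinationFolder="\\qaserver\builds\%(RecursiveDir)" />
        </Target>
      </Project>

    The same target, duplicated for UAT and PROD and invoked from a small CC.Net project that exists only to run it on a forced build, keeps each promotion a deliberate, manual step.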

    Read the article

  • Why does Rails screw up timezones when I am editing a resource?

    - by DJTripleThreat
    Steps to reproduce this:

      prompt> rails test_app
      prompt> cd test_app
      prompt> script/generate scaffold date_test my_date:datetime
      prompt> rake db:migrate

    Now edit your app/views/date_tests/edit.html.erb:

      <h1>Editing date_test</h1>
      <% form_for(@date_test) do |f| %>
        <%= f.error_messages %>
        <p>
          RIGHT!<br/>
          <%= text_field_tag @date_test, f.object.my_date %>
        </p>
        <p>
          WRONG!<br />
          <%= f.text_field :my_date %>
        </p>
        <p>
          <%= f.submit 'Update' %>
        </p>
      <% end %>
      <%= link_to 'Show', @date_test %> |
      <%= link_to 'Back', date_tests_path %>

    Now edit your config/environment.rb:

      # add this
      config.time_zone = 'Central Time (US & Canada)'

    This recreates the problem I am having in my actual app. The problem with my app is that I'm storing a date in a hidden field and rendering a "user-friendly" version. Creating a resource works fine, but as soon as I try to edit it, the time changes (it adds the difference between my current time zone configuration and UTC). Go to http://localhost:3000/date_tests/new and save the time, then go to re-edit it and you will have two different representations of the date/time, one of which will save incorrectly and one of which will save correctly.

    Read the article

  • How can I optimize the SELECT statement running on an Oracle database?

    - by Elvis Lou
    I have a SELECT statement in Oracle:

      SELECT COUNT(DISTINCT ds1.endpoint_msisdn) multiple30,
             dss1.service,
             dss1.endpoint_provisioning_id,
             dss1.company_scope,
             Nvl(x.subscription_status, dss1.subscription_status) subscription_status
      FROM   daily_summary ds1
             join daily_summary ds2
               ON ds1.endpoint_msisdn = ds2.endpoint_msisdn,
             daily_summary_static dss1,
             daily_summary_static dss2,
             (SELECT NULL subscription_status FROM dual
              UNION ALL
              SELECT -2 subscription_status FROM dual) x
      WHERE  ds1.summary_ts >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
        AND  ds1.summary_ts <= To_date('10-04-2012', 'dd-mm-yyyy')
        AND  dss1.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
        AND  dss1.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
        AND  dss2.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
        AND  dss2.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
        AND  dss1.service <> dss2.service
        AND  ( dss1.company_scope = 2 OR dss1.company_scope = 5 )
        AND  ( dss2.company_scope = 2 OR dss2.company_scope = 5 )
        AND  dss1.company_scope = dss2.company_scope
        AND  ds1.endpoint_noc_id = dss1.endpoint_noc_id
        AND  ds1.endpoint_host_id = dss1.endpoint_host_id
        AND  ds1.endpoint_instance_id = dss1.endpoint_instance_id
        AND  ds2.endpoint_noc_id = dss2.endpoint_noc_id
        AND  ds2.endpoint_host_id = dss2.endpoint_host_id
        AND  ds2.endpoint_instance_id = dss2.endpoint_instance_id
        AND  dss1.endpoint_provisioning_id = dss2.endpoint_provisioning_id
        AND  Least(1, ds1.total_actions) = 1
        AND  Least(1, ds2.total_actions) = 1
      GROUP BY dss1.service,
               dss1.endpoint_provisioning_id,
               dss1.company_scope,
               Nvl(x.subscription_status, dss1.subscription_status);

    This query took about 26 minutes to return in my environment, but if I remove this section:

      dss1.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND
      dss1.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND
      dss2.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND
      dss2.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND

    it only takes 20 seconds to run. We have an index on the column last_active; I don't know why this section slows down the performance so much. Any ideas?
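    A hedged suggestion, since only the SQL is visible here: the usual first step is to compare the execution plans with and without the last_active predicates; if adding them flips the plan onto the last_active index and then does millions of table lookups, a composite index (or a hint forcing the original access path) is the usual remedy. A generic sketch of pulling the plan for the filtered table:

      EXPLAIN PLAN FOR
      SELECT COUNT(*) FROM daily_summary_static dss1
      WHERE dss1.last_active >= TO_DATE('10-04-2012','dd-mm-yyyy') - 30
        AND dss1.last_active <= TO_DATE('10-04-2012','dd-mm-yyyy');

      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);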

    Read the article

  • Wrapping unmanaged C++ with C++/CLI - a proper approach.

    - by Jamie
    Hi there, as stated in the title, I want to get my old C++ library working in managed .NET. I am thinking of two possibilities: (1) I might try to compile the library with /clr and rely on the "It Just Works" approach; (2) I might write a managed wrapper around the unmanaged library. First of all, I want my library to stay FAST, as it was when unmanaged, so I am not sure whether the first approach would cause a large decrease in performance; however, it seems quicker to implement ("quicker" is not quite the right word :-)), assuming it works for me. On the other hand, I can think of some problems that might appear while writing a wrapper (e.g. how to wrap an STL collection such as a vector?). I am thinking of writing the wrapper inside the same project where the unmanaged C++ lives - is that a reasonable approach (e.g. MyUnmanagedClass and MyManagedClass in the same project, the second wrapping the first)? What would you suggest for this problem? Which solution will give me better performance in the resulting code? Thank you in advance for any suggestions and clues! Cheers
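    For the second option, a minimal sketch of the usual wrapper shape (all names invented): a ref class compiled with /clr holds a plain pointer to the native object and forwards calls; STL types such as std::vector are typically copied into managed collections at this boundary rather than exposed directly.

      // Existing unmanaged class, compiled as native code
      class NativeCounter {
      public:
          NativeCounter() : total_(0) {}
          void Add(int n)    { total_ += n; }
          int  Total() const { return total_; }
      private:
          int total_;
      };

      // C++/CLI wrapper, compiled with /clr (can live in the same project)
      public ref class ManagedCounter {
      public:
          ManagedCounter() : native_(new NativeCounter()) {}
          ~ManagedCounter() { this->!ManagedCounter(); }            // Dispose
          !ManagedCounter() { delete native_; native_ = nullptr; }  // finalizer
          void Add(int n) { native_->Add(n); }
          int  Total()    { return native_->Total(); }
      private:
          NativeCounter* native_;   // unmanaged state behind a plain pointer
      };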

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris and Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me, because I am a believer in zero required environment fiddling to make a product work. So I am wondering whether there are times when this sort of requirement is a necessary evil, or whether we are doing something wrong. Edit: great comments that describe the problem and a little background; however, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code; we do not do this programmatically. And this is not a situation where they MIGHT run out: under normal load our programs WILL run out and seg fault. So, rewording the question: is requiring the customer to change these ulimit values to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going? Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things are a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
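    On the programmatic side, a sketch for the descriptor limit (POSIX; the target value is an example): a process can raise its own soft limit up to the hard limit at startup, which removes the ulimit step for many customers, though raising the hard limit itself still needs privileges or system configuration, and the main thread's stack size cannot be changed after the process has started (stacks for new threads can be sized with pthread_attr_setstacksize).

      #include <sys/resource.h>

      /* Raise the soft RLIMIT_NOFILE toward `wanted`, capped at the hard limit.
         Returns 0 on success, -1 if the limits could not be read or written. */
      static int raise_fd_limit(rlim_t wanted)
      {
          struct rlimit rl;

          if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
              return -1;
          if (rl.rlim_cur >= wanted)
              return 0;                      /* already high enough */

          rl.rlim_cur = (wanted < rl.rlim_max) ? wanted : rl.rlim_max;
          return setrlimit(RLIMIT_NOFILE, &rl) == 0 ? 0 : -1;
      }

      /* Typical use early in main():  raise_fd_limit(8192);  */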

    Read the article

  • Difference between BackgroundWorker.ReportProgress() and Control.Invoke()

    - by ohadsc
    What is the difference between options 1 and 2 in the following?

      private void BGW_DoWork(object sender, DoWorkEventArgs e)
      {
          for (int i = 1; i <= 100; i++)
          {
              string txt = i.ToString();
              if (Test_Check.Checked)
                  // OPTION 1
                  Test_BackgroundWorker.ReportProgress(i, txt);
              else
                  // OPTION 2
                  this.Invoke((Action<int, string>)UpdateGUI, new object[] { i, txt });
          }
      }

      private void BGW_ProgressChanged(object sender, ProgressChangedEventArgs e)
      {
          UpdateGUI(e.ProgressPercentage, (string)e.UserState);
      }

      private void UpdateGUI(int percent, string txt)
      {
          Test_ProgressBar.Value = percent;
          Test_RichTextBox.AppendText(txt + Environment.NewLine);
      }

    Looking at Reflector, Control.Invoke() appears to use:

      this.FindMarshalingControl().MarshaledInvoke(this, method, args, 1);

    whereas BackgroundWorker.ReportProgress() appears to use:

      this.asyncOperation.Post(this.progressReporter, args);

    (I'm just guessing these are the relevant function calls.) If I understand correctly, the BackgroundWorker posts its progress-report request to the WinForms window, whereas Control.Invoke uses a CLR mechanism to invoke on the right thread. Am I close? And if so, what are the repercussions of using either? Thanks

    Read the article

  • How to install Zend Framework on Windows

    - by sombe
    "installing Zend Framework is so easy!!!!" yeah right... Ok I'm working with a beginner's book and the ONE thing that is not excessively detailed is the most important part: Installing the darn thing. After browsing the quickstart guide for hours, all it said was: "download Zend [...] add the include directory (bla bla) and YOU'RE DONE!" right, i'm done using Zend. Ok, not really, not yet anyway. I beg of you people, I wanna go to bed, please tell me how (in simple 6th grade detail) to install the framework. I've got the unzipped folder in my htdocs directory, and I placed zf.bat+zf.php in the htdocs root. What's next? thank you so much. EDIT: Thanks guys for all the answers. Unfortunately I haven't been able to work with this or find a good enough resource to explain it to me in plain english. It seems that this framework adheres more so to programmers than to beginners. I've since yesterday read a little on CakePHP and found that it was incredibly easy to install and tune. As oppose to Zend Framework, where I had to dig in my "environment variables", configure "httpd.conf" and almost tie the knot between my computer driver cables to just get it running, CakePHP has already allowed me to put together a nice newbie application. In conclusion, I very much appreciate all of your help. I hope someone else venturing on ZF will be more successful with it. Thanks!

    Read the article

  • Unknown and unreproducible crash causes App Store rejection

    - by Daniel Johnson
    After submitting our application several times, we continue to receive the following response: Thank you for submitting My App to the App Store. We've reviewed your application and determined that we cannot post this version of your iPad application to the App Store because My App is crashing on iPad running iPhone OS 3.2 and Mac OS X 10.6.2. My App crashes upon launch. Unfortunately, crash logs have not been generated. However, re-signing the same build with the ad hoc entitlements and loading it onto the device yields no such crash. After a number of attempts, the application simply does not crash as reported by the reviewer. Furthermore, the reviewer does not provide any useful logging that may have been generated by SpringBoard, such as an exit status, or even whether it had worked properly on any other device. There are no calls to explicitly exit or quit the application in the code line, and yet the application terminates on startup. What might cause an application to terminate in such a manner? Under what conditions is an application tested that might not arise in a development environment? Could it be the result of a signing issue that the submission validation system is simply unable to catch? Thanks in advance.

    Read the article

  • Resque: Slow worker startup and Forking

    - by David John
    I'm currently moving my application from a Linode setup to EC2. Redis is installed on a remote instance, with various worker instances interacting with the queue; that's all working fine. My problem is the amount of time it takes for a worker to be 'instantiated', and slow forking. Starting a worker will usually take between 30 seconds and a minute (from god.rb starting the worker rake task to the worker actively starting work on the queue). I could live with that, but I've not experienced such a wait time on my current Linode production box, so I believe it's one symptom of a bigger problem. The next issue is that jobs which took a second or less in my previous environment now seem to take about 5 to 10 times longer. I'm assuming this must be some sort of issue with my Ubuntu install on EC2? One notable difference is that I'm running REE 1.8.7-2010.01 in my new setup, and REE 1.8.6 on the old Linode boxes. Has anyone else experienced these issues?

    Read the article

  • I am not able to kill a child process using TerminateProcess

    - by user1681210
    I have a problem killing a child process using TerminateProcess. I call this function and the process is still there (in the Task Manager). This piece of code is called many times, launching the same program.exe many times, and those processes stay in the Task Manager, which I think is not good. Sorry, I am quite new to C++; I would really appreciate any help. Thanks a lot!! The code is the following:

      STARTUPINFO childProcStartupInfo;
      memset( &childProcStartupInfo, 0, sizeof(childProcStartupInfo));
      childProcStartupInfo.cb = sizeof(childProcStartupInfo);
      childProcStartupInfo.hStdInput  = hFromParent;  // stdin
      childProcStartupInfo.hStdOutput = hToParent;    // stdout
      childProcStartupInfo.hStdError  = hToParentDup; // stderr
      childProcStartupInfo.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
      childProcStartupInfo.wShowWindow = SW_HIDE;

      PROCESS_INFORMATION childProcInfo;  /* for CreateProcess call */
      bOk = CreateProcess(
          NULL,                    // filename
          pCmdLine,                // full command line for child
          NULL,                    // process security descriptor
          NULL,                    // thread security descriptor
          TRUE,                    // inherit handles? Also use if STARTF_USESTDHANDLES
          0,                       // creation flags
          NULL,                    // inherited environment address
          NULL,                    // startup dir; NULL = start in current
          &childProcStartupInfo,   // pointer to startup info (input)
          &childProcInfo);         // pointer to process info (output)

      CloseHandle( hFromParent );
      CloseHandle( hToParent );
      CloseHandle( hToParentDup );
      CloseHandle( childProcInfo.hThread);
      CloseHandle( childProcInfo.hProcess);
      TerminateProcess( childProcInfo.hProcess ,0); // this is not working, the process

    Thanks
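    A sketch of the most likely fix, based only on the ordering visible above: CloseHandle(childProcInfo.hProcess) runs before TerminateProcess, so TerminateProcess is handed a handle that has already been closed and simply fails. Terminate (and ideally wait) first, then close:

      // Kill the child first, while the handle is still valid
      if (!TerminateProcess(childProcInfo.hProcess, 0))
      {
          // GetLastError() will typically report ERROR_INVALID_HANDLE
          // if the handle was closed beforehand
      }

      // Optionally give the process a moment to actually go away
      WaitForSingleObject(childProcInfo.hProcess, 5000);

      // Only now release the handles
      CloseHandle(childProcInfo.hThread);
      CloseHandle(childProcInfo.hProcess);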

    Read the article

  • How to build a simulation of a login hardware token in .Net

    - by Michel
    Hi, I have a hardware token for remote login to a Citrix environment. When I press the button on the device, I get an ID and I can use that to log in to the Citrix farm. I can press the button as often as I like; every time a new code gets generated, and they all work. Now I want to secure my private website in the same way, but not with the hardware token: with a 'token app' on my phone. So I run an app on my phone, generate a key, and use that to (partly) authenticate myself on the server. But here's the point: I don't know how this works! How can I generate 1, 2 or 100 keys at one time which the server can see are all valid, but without the server and the phone app having any contact (the hardware token is also an 'offline' solution)? Can you give me a hint on how I would do this? This is what I have thought of so far: the phone app and the server app both know (hardcoded) the same encryption key. The phone app encrypts the current time; the server app decrypts the string back to a time, and if the difference between that time and the actual server time is less than 10 minutes, it's accepted. It would be difficult for other users to fake a key, but encryption produces such nasty strings to enter, whereas the hardware token gives me nice short codes like 'H554TU8'. And this is probably not how the real hardware token works, because the server and the phone app must 'know' the same encryption key. Michel
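    A sketch of the scheme most such tokens use (hedged: this is the generic HOTP/TOTP idea, not a claim about your specific device): both sides share a secret, both derive a short code from HMAC(secret, current 30-second time step), and the server accepts a small window of neighbouring steps, so no connection between phone and server is needed and the result is a short digit string rather than ciphertext.

      using System;
      using System.Security.Cryptography;

      static class Totp
      {
          // 6-digit code for the 30-second step containing utcNow
          public static string GenerateCode(byte[] sharedSecret, DateTime utcNow)
          {
              long step = (long)(utcNow - new DateTime(1970, 1, 1)).TotalSeconds / 30;

              byte[] msg = BitConverter.GetBytes(step);
              if (BitConverter.IsLittleEndian) Array.Reverse(msg);   // big-endian counter

              using (var hmac = new HMACSHA1(sharedSecret))
              {
                  byte[] hash = hmac.ComputeHash(msg);
                  int offset = hash[hash.Length - 1] & 0x0F;         // dynamic truncation
                  int binary = ((hash[offset] & 0x7F) << 24)
                             | (hash[offset + 1] << 16)
                             | (hash[offset + 2] << 8)
                             |  hash[offset + 3];
                  return (binary % 1000000).ToString("D6");
              }
          }
      }

    The server runs the same function for the current step and, say, one step either side, and compares; a press-the-button token works the same way but with a per-press counter instead of the clock.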

    Read the article

< Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >