Search Results

Search found 25386 results on 1016 pages for 'zend test'.

Page 441 of 1016

  • SugarCRM REST API always returns "null"

    - by TuomasR
    I'm trying to test out the SugarCRM REST API, running the latest version of CE (6.0.1). It seems that whenever I do a query, the API returns "null" and nothing else. If I omit the parameters, the API returns the API description (which the documentation says it should). I'm trying to perform a login, passing as parameters the method (login), input_type and response_type (json), and rest_data (JSON-encoded parameters). The following code does the query:

        $api_target = "http://example.com/sugarcrm/service/v2/rest.php";
        $parameters = json_encode(array(
            "user_auth" => array(
                "user_name" => "admin",
                "password" => md5("adminpassword"),
            ),
            "application_name" => "Test",
            "name_value_list" => array(),
        ));
        $postData = http_build_query(array(
            "method" => "login",
            "input_type" => "json",
            "response_type" => "json",
            "rest_data" => $parameters
        ));
        echo $parameters . "\n";
        echo $postData . "\n";
        echo file_get_contents($api_target, false, stream_context_create(array(
            "http" => array(
                "method" => "POST",
                "header" => "Content-Type: application/x-www-form-urlencoded\r\n",
                "content" => $postData
            )
        ))) . "\n";

    I've tried different variations of the parameters, and using username instead of user_name; all of them give the same result, just the response "null" and that's it.

    Read the article

  • prolog recursion

    - by AhmadAssaf
    I'm writing a predicate that should give me a list of all possible elements. In each iteration it gives me an answer, but after the recursion I only get the last answer back. How can I make it give back every single answer? The problem is that I'm trying to find all possible distributions of a list into other lists. The code:

        test :- bp(3,12,[7, 3, 5, 4, 6, 4, 5, 2], Answer), format("Answer = ~w\n",[Answer]).

        bp(NB,C,OL,A):- addIn(C,OL,[[],[],[]],A); bp(NB,C,_,A).

        addIn(_,[],Result,Result).
        addIn(C,[Element|Rest],[F|R],Result):-
            member( Members , [F|R]),
            sumlist( Members, Sum),
            sumlist([Element],ElementLength),
            Cap is Sum + ElementLength,
            (Cap =< C,
             append([Element], Members,New),
             insert( Members, New, [F|R], PartialResult),
             addIn(C,Rest,PartialResult,Result)).

    By calling test I get back all of the possible answers. But if I try something that fails, such as bp(3,11,[8,2,4,6,1,8,4],Answer), it just enters an endless loop. Moreover, if I change bp(NB,C,OL,A):- addIn(C,OL,[[],[],[]],A); bp(NB,C,_,A). to use "and" instead of "or", I get the error: ERROR: is/2: Arguments are not sufficiently instantiated. I appreciate the help. Thanks a lot @hardmath

    Read the article

  • JSF dynamic ui:include

    - by Ray
    In my app I have tutor and student as user roles, and I decided that the main page will be the same for both, but the menu will be different for tutors and students. I made two .xhtml pages, tutorMenu.xhtml and studentMenu.xhtml, and want to include the menu depending on the role. For the whole page I use a layout and just change the content part in ui:composition on every page. In menu.xhtml:

        <h:body>
            <ui:composition>
                <div class="menu_header">
                    <h2>
                        <h:outputText value="#{msg['menu.title']}" />
                    </h2>
                </div>
                <div class="menu_content">
                    <c:if test="#{authenticationBean.user.role.roleId eq '2'}">
                        <ui:include src="/pages/content/body/student/studentMenu.xhtml"/>
                    </c:if>
                    <c:if test="#{authenticationBean.user.role.roleId eq '1'}">
                        <ui:include src="/pages/content/body/tutor/tutorMenu.xhtml" />
                    </c:if>
                </div>
            </ui:composition>

    I know that using JSTL may not be the best solution, but I can't find another. What is the best way to solve this?

    Read the article

  • Is it possible to navigate to the parent node of a matched node during XSLT processing?

    - by Darin
    I'm working with an OpenXML document, processing the main document part with some XSLT. I've selected a set of nodes via

        <xsl:template match="w:sdt">
        </xsl:template>

    In most cases, I simply need to replace that matched node with something else, and that works fine. BUT, in some cases, I need to replace not the w:sdt node that matched, but the closest w:p ancestor node (i.e. the first paragraph node that contains the sdt node). The trick is that the condition used to decide one or the other is based on data derived from the attributes of the sdt node, so I can't use a typical XSLT XPath filter. I'm trying to do something like this:

        <xsl:template match="w:sdt">
            <xsl:choose>
                <xsl:when test={first condition}>
                    {apply whatever templating is necessary}
                </xsl:when>
                <xsl:when test={exception condition}>
                    <!-- select the parent of the ancestor w:p nodes and apply the appropriate templates -->
                    <xsl:apply-templates select="(ancestor::w:p)/.." mode="backout" />
                </xsl:when>
            </xsl:choose>
        </xsl:template>

        <!-- by using "mode", only this template will be applied to those matching nodes from the apply-templates above -->
        <xsl:template match="node()" mode="backout">
            {CUSTOM FORMAT the node appropriately}
        </xsl:template>

    This whole concept works, BUT no matter what I've tried, it always applies the formatting from the CUSTOM FORMAT template to the w:p node, not its parent node. It's almost as if you can't reference a parent from a matching node. And maybe you can't, but I haven't found any docs that say you can't. Any ideas?

    Read the article

  • Google AppEngine + Local JUnit Tests + Jersey framework + Embedded Jetty

    - by xamde
    I use Google App Engine for Java (GAE/J). On top of that, I use the Jersey REST framework. Now I want to run local JUnit tests. The test sets up the local GAE development environment ( http://code.google.com/appengine/docs/java/tools/localunittesting.html ), launches an embedded Jetty server, and then fires requests to the server via HTTP and checks the responses. Unfortunately, the Jersey/Jetty combo spawns new threads, while GAE expects only one thread to run. In the end I have either no datastore inside the Jersey resources, or multiple threads each seeing a different datastore. As a workaround I initialise the GAE local env only once, put it in a static variable, and inside the GAE resource I add many checks (this thread has no dev env? reuse the static one). And these checks should of course only run inside JUnit tests (which I asked before: "How can I find out if code is running inside a JUnit test or not?" - I'm not allowed to post the link directly here :-|).

    Read the article

  • Android Dev: The constructor Intent(new View.OnClickListener(){}, Class<DrinksTwitter>) is undefined

    - by Malcolm Woods Spark
    package com.android.drinksonme;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.TextView;

        public class Screen2 extends Activity {

            // Declare our Views, so we can access them later
            private EditText etUsername;
            private EditText etPassword;
            private Button btnLogin;
            private Button btnSignUp;
            private TextView lblResult;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // Get the EditText and Button References
                etUsername = (EditText)findViewById(R.id.username);
                etPassword = (EditText)findViewById(R.id.password);
                btnLogin = (Button)findViewById(R.id.login_button);
                btnSignUp = (Button)findViewById(R.id.signup_button);
                lblResult = (TextView)findViewById(R.id.result);

                // Set Click Listener
                btnLogin.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // Check Login
                        String username = etUsername.getText().toString();
                        String password = etPassword.getText().toString();
                        if(username.equals("test") && password.equals("test")){
                            final Intent i = new Intent(this, DrinksTwitter.class); //error on this line
                            startActivity(i);
                            // lblResult.setText("Login successful.");
                        } else {
                            lblResult.setText("Invalid username or password.");
                        }
                    }
                });

                final Intent k = new Intent(Screen2.this, SignUp.class);
                btnSignUp.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        startActivity(k);
                    }
                });
            }
        }

    Read the article

  • Compare Long values Struts2

    - by Marquinio
    Hi everyone, I'm trying to compare two values using the Struts2 s:if tag, but it's not working. If I hardcode the values it works, but I want it to be dynamic. The variable stringValue is of type String. The variable currentLongValue is of type Long.

        <s:set var="stringValue" value="order"/>
        <s:iterator value="listTest">
            <s:set var="currentLongValue" value="value"/>
            <s:if test="#currentLongValue.toString() == #stringValue" >
                //Do something
            </s:if>
            <s:else>
                //Do something else
            </s:else>
        </s:iterator>

    For the s:if I have tried toString() and also equals(). It only works if I hardcode the values. Example:

        <s:if test="#currentLongValue == 1234">

    Any clues? Thank you.

    Read the article

  • Multiple exports with MEF does some really heinous stuff -- why, and why is it allowed?

    - by Dave
    I have an interesting situation where I need to do something like this:

        [Export(typeof(ICandy1))]
        [Export(typeof(ICandy2))]
        public class Candy : ICandy2 { ... }

    where

        public interface ICandy1 { ... }
        public interface ICandy2 : ICandy1 { ... }

    I couldn't find any posts anywhere regarding using multiple [Export] attributes, so I figured, what the hell, might as well try it. At first glance, it actually seemed to work. I have a couple of methods that call into both interfaces of a Candy instance, and it was fine. However, as I started to test the app, I saw that the behavior wasn't right, and when looking at the Output window, I saw that I was getting tons of COMExceptions. I couldn't track down where they were all coming from, but they always occurred when a worker thread was sleeping. I figured that it had to be from the main thread, then, but didn't know how to debug this at all. Nothing should have been going on in the GUI, and I disabled my DispatchTimers just in case -- same thing.

    Even more strange than the COMExceptions was the really, really erratic behavior when stepping through code. About 30% of the time, when I single stepped, it would pop out of the method, or it would single step over two lines of code! Totally weird stuff that I am not used to seeing. The only thing that changed between working and non-working code was the introduction of MEF through my plugin loading code. So as a test, I changed my plugin assembly to only export one interface, and I hardcoded everything in the app that relied on the other (now not-implemented) interface. And now the COMExceptions are gone, and the weird debugging behavior is gone. Is this something people here have seen before? If MEF is not expected to allow a class to Export multiple interfaces, then shouldn't a CompositionException get raised when composing the parts? Can anyone explain why MEF would cause these weird problems?
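
    Stacking multiple [Export] attributes on one class is a supported MEF pattern, so the double export alone should not raise a CompositionException. To help rule the composition in or out, here is a minimal, self-contained isolation sketch (not taken from the post; the empty interface bodies and the ExportCheck harness are invented for illustration) that exercises just the double export with the standard System.ComponentModel.Composition container types:

        // Sketch only: interface members are omitted; the real ICandy1/ICandy2 live in the app.
        using System;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface ICandy1 { }
        public interface ICandy2 : ICandy1 { }

        [Export(typeof(ICandy1))]
        [Export(typeof(ICandy2))]
        public class Candy : ICandy2 { }

        public static class ExportCheck
        {
            public static void Main()
            {
                using (var catalog = new AssemblyCatalog(typeof(Candy).Assembly))
                using (var container = new CompositionContainer(catalog))
                {
                    // Both contracts resolve; with the default (shared) creation policy
                    // they resolve to the same single Candy instance.
                    var first = container.GetExportedValue<ICandy1>();
                    var second = container.GetExportedValue<ICandy2>();
                    Console.WriteLine(ReferenceEquals(first, second)); // expected: True
                }
            }
        }

    If a check like this behaves as expected, the COMExceptions are probably coming from somewhere else in the plugin loading or threading code rather than from the double export itself.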

    Read the article

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that is using Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this: public ActionResult Delete(State model) { try { if( model == null ) { return View( model ); } _stateService.Delete( model ); return RedirectToAction("Index"); } catch { return View( model ); } } I am looking for the proper way to Unit Test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this: [TestMethod] public void Delete_Post_Passes_With_State_4() { //Arrange var stateService = GetService(); var stateController = new StateController( stateService ); ViewResult result = stateController.Delete( 4 ) as ViewResult; var model = (State)result.ViewData.Model; //Act RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult; stateController = new StateController( stateService ); var newresult = stateController.Delete( 4 ) as ViewResult; var newmodel = (State)newresult.ViewData.Model; //Assert Assert.AreEqual( redirectResult.RouteValues["action"], "Index" ); Assert.IsNull( newmodel ); } Is this overkill? Do I need to check to see if the record actually got deleted (as I already have Service and Repository tests that verify this)? Should I even use a fake repository here or would it make more sense just to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.
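
    Whether to keep the fake repository or mock the service is largely taste, but a mock keeps the controller test focused on controller behaviour alone. A rough sketch with a mocking library such as Moq (the IStateService interface name is assumed here, since the post only shows the _stateService field) might look like this:

        [TestMethod]
        public void Delete_Post_Redirects_To_Index_And_Calls_Service()
        {
            // Arrange: mock only what the controller touches.
            var stateService = new Mock<IStateService>();
            var controller = new StateController(stateService.Object);
            var model = new State();   // in a real test this would be the record you expect to delete

            // Act
            var result = controller.Delete(model) as RedirectToRouteResult;

            // Assert: the service was asked to delete exactly once, and we redirected to Index.
            stateService.Verify(s => s.Delete(model), Times.Once());
            Assert.AreEqual("Index", result.RouteValues["action"]);
        }

    Whether the row really disappears is then left to the existing repository and service tests, which keeps this test from failing for reasons outside the controller.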

    Read the article

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled; however, the build process still runs to completion when the JSP compilation has errors. This is part of my build.xml:

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar" resultproperty="result">
            <jvmarg value="-Djava.compiler=NONE"/>
            <arg value="-extend"/>
            <arg value="com.orionserver.http.OrionHttpJspPage"/>
            <arg value="-batchMask"/>
            <arg value="*.jsp"/>
            <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/>
        </java>
        <echo level="info">Result Property: ${result}</echo>

    I have tried setting the property failonerror="true" but that does not change anything. I receive the following output:

        [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war...
        [java] Setting up temp area...
        [java] Expanding archive in temp area...
        [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol
        [java] symbol  : variable reqvst
        [java] location: class _web_2d_inf._jsp._password
        [java] out.print(reqvst.getAttribute("test"));
        [java]           ^
        [java] 1 error
        [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ...
        [java] Removing temp area...
        [echo] Result Property: 0
        ...(more commands)
        BUILD SUCCESSFUL

    In the password.jsp file, I intentionally introduced an error to test. How can I get the build to fail on an error? At the Ant <java> task page, I am confused by: "By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to the name of a property and have it assigned to the result code (barring immutability, of course). When you set failonerror="true", the only possible value for resultproperty is 0. Any non-zero response is treated as an error and would mean the build exits."

    Read the article

  • How can I spec out an Authlogic sessions controller using a stub?

    - by Dave
    I want to test my User Session Controller testing that a user session is first built then saved. My UserSession class looks like this: class UserSession < Authlogic::Session::Base end The create method of my UserSessionsController looks like this: def create @user_session = UserSession.new(params[:user_session]) if @user_session.save flash[:notice] = "Successfully logged in." redirect_back_or_default administer_home_page_url else render :new end end and my controller spec looks like this: describe UserSessionsController do it "should build a new user session" do UserSession.stub!(:new).with(:email, :password) UserSession.should_receive(:new).with(:email => "[email protected]", :password => "foobar") post :create, :user_session => { :email => "[email protected]", :password => "foobar" } end end I stub out the new method but I still get the following error when I run the test: Spec::Mocks::MockExpectationError in 'UserSessionsController should build a new user session' <UserSession (class)> received :new with unexpected arguments expected: ({:password=>"foobar", :email=>"[email protected]"}) got: ({:priority_record=>nil}, nil) It's although the new method is being called on UserSession before my controller code is getting called. Calling activate_authlogic makes no difference.

    Read the article

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However in my second class library (or even in a console test app), when i add the same service reference, it only exposes the types that are involved in the contract operations and not the client type for me to generate a proxy against. e.g. Endpoint has 2 services exposed - ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library I get ISvc1Client andf ISvc2Client to generate proxies off of in order to use the operations exposed via those 2 contracts. In addition to these clients the service reference also exposes the types involved in the operations like (type 1, type 2 etc.) this is what I need. However when i try to add a service reference to the same endpoing in another console application or class library only Type 1, Type 2 etc. are exposed and not ISvc1Client and ISvc2Client because of which I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets properly generated in one class library but not in the other or the test console app.

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin=1;
            const uint64_t umax=10000000000LL;
            double sum=0.;
            #pragma omp parallel for reduction(+:sum)
            for(uint64_t u=umin; u<umax; u++)
                sum+=1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s for the code to run with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19~20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

    Read the article

  • Stuck trying to get Log4Net to work with Dependency Injection

    - by Pure.Krome
    I've got a simple winform test app i'm using to try some Log4Net Dependency Injection stuff. I've made a simple interface in my Services project :- public interface ILogging { void Debug(string message); // snip the other's. } Then my concrete type will be using Log4Net... public class Log4NetLogging : ILogging { private static ILog Log4Net { get { return LogManager.GetLogger( MethodBase.GetCurrentMethod().DeclaringType); } } public void Debug(string message) { if (Log4Net.IsDebugEnabled) { Log4Net.Debug(message); } } } So far so good. Nothing too hard there. Now, in a different project (and therefore namesapce), I try and use this ... public partial class Form1 : Form { public Form1() { FileInfo fileInfo = new FileInfo("Log4Net.config"); log4net.Config.XmlConfigurator.Configure(fileInfo); } private void Foo() { // This would be handled with DI, but i've not set it up // (on the constructor, in this code example). ILogging logging = new Log4NetLogging(); logging.Debug("Test message"); } } Ok .. also pretty simple. I've hardcoded the ILogging instance but that is usually dependency injected via the constructor. Anyways, when i check this line of code... return LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType); the DeclaringType type value is of the Service namespace, not the type of the Form (ie. X.Y.Z.Form1) which actually called the method. Without passing the type INTO method as another argument, is there anyway using reflection to figure out the real method that called it?
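
    One way to recover the caller's type without passing it in (not shown in the post, and only a sketch mirroring the Log4NetLogging wrapper above) is to walk one frame up the stack inside the wrapper; note that stack walks are comparatively slow and can be distorted by method inlining in release builds:

        using System;
        using System.Diagnostics;
        using log4net;

        public class Log4NetLogging : ILogging
        {
            public void Debug(string message)
            {
                // Frame 0 is this method, frame 1 is whoever called ILogging.Debug.
                Type callerType = new StackFrame(1, false).GetMethod().DeclaringType;
                ILog log = LogManager.GetLogger(callerType);

                if (log.IsDebugEnabled)
                {
                    log.Debug(message);
                }
            }
        }

    An alternative that avoids the stack walk is to let the DI container inject a logger per consuming type (many containers can hand the requesting type to a factory), so Form1 receives a logger already bound to X.Y.Z.Form1.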

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page: <script> var info = { poke: 1, width: 1, height: 1, path: "widget.html" } chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) { if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") { chrome.extension.sendRequest( sender.id, { head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback", body: info, } ); } }); function initSelectedTab() { localStorage.setItem("selectedTab", "Something"); } initSelectedTab(); </script> Here is manifest.json: { "update_url": "http://clients2.google.com/service/update2/crx", "background_page": "background.html", "name": "Test Widget", "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.", "icons": { "128": "icon.png" }, "version": "0.0.1" } Here is the relevant part of widget.html: <script> var selectedTab = localStorage.getItem("selectedTab"); document.write(selectedTab); </script> Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?

    Read the article

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10 minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of minutes. WITH test_data AS ( SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual ) -- #end of test-data SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60) FROM test_data And here is the result: 01.01.2010 10:00:00    01.01.2010 10:00:00 01.01.2010 10:05:00    01.01.2010 10:00:00 01.01.2010 10:09:59    01.01.2010 10:00:00 01.01.2010 10:10:00    01.01.2010 10:10:00 01.01.2099 10:00:33    01.01.2099 10:00:00 Works as expected, but is there a better way? EDIT: I was curious about performance, so I did the following test with 500.000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions. DECLARE t TIMESTAMP := SYSTIMESTAMP; BEGIN FOR i IN ( WITH test_data AS ( SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000 ) SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60) FROM test_data ) LOOP NULL; END LOOP; dbms_output.put_line( SYSTIMESTAMP - t ); END; This approach took 03.24 s.

    Read the article

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic interests. The server is dedicated only to a site, where users (students and lecturers) just view and fill the academic data. But at a time (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limitation of resources, I have to build the server using free software (except for the operating system Windows 7, the university has been prepared). The hardware is also limited to the usual 4-core computers (eg, Ivy Bridge Intel Core i7-3770) with approximately 16GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82 579 Gigabit Ethernet). With all these limitations, I have to choose the software (web server, database, etc) are appropriate for this purpose is achieved. I decided to create a site in PHP. Please help me by answering the following questions based on your expertise. (my prime candidate software to consider after googling) Web server which is faster & stable & secure, when implemented and optimized for PHP? And why? (nginx) PHP accelerator which is faster & stable & compatible with the selected web server? And why? (APC with Zend Optimizer+) Database which is faster & stable & secure, when implemented and optimized for selected web server and selected PHP accelerator? (MySQL) Are there any errors that have been or will be happening from my condition is? If there is, please enlighten me? Is there anything else I need to know in order to achieve this goal? If there is, please enlighten me? I understand that the performance also depends on the implementation of source-code program, so I assume it will create a site with the best efficiency (e.g. using AJAX).

    Read the article

  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads?

    - by deroby
    Hi all, being new to c# I've run into this 'conundrum' when passing around a SqlDataReader between different threads. Without going into too much detail, the idea is to have a main thread fetching data from the database (a large recordset) and then have a helper-task run through this record by record and doing some stuff based upon the contents of this. There is no feedback to the recordset, it simply wades through until no records are left. This works fine, but given the nature of the job at hand it should be possible to have this job spread over different threads (CPUs) to maximize throughput (the order of execution is of no significance). The question then becomes, when I pass this recordset in a SqlDataReader, do I have to use ref or not ? It kind of boils down to the question : if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times ? Or, don't I risk having the record-position being moved forward while not all fields have been fully read yet ? The latter seems more like a 'data racing' issue and probably is covered by the lock()ing mechanism (or not?). My initial take on the problem was that it doesn't really hurt passing the variable using ref, yet as a colleague put it : "you only need ref when you're doing something wrong" =) Additionally using ref restricts me from applying a Using() construction too which isn't very nice either. I thus create a "basic" project that tackles the same approach but without the ref notation. Tests so far show that it works flawlessly on a Core2Duo (2cpu) using any number of threads, yet I'm still a bit wary... What do you experts think about this ? Use ref or not ? You can find the test-project here as it seems I can't upload it to this question directly ?!? ps: it's just a test-project and I'm new to c#, so please be gentle on me when breaking down the code =P
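
    As a point of reference (only a sketch, not the project's code; the Consume and Reopen names are illustrative): in C# a SqlDataReader variable already holds a reference, so passing it without ref gives the callee the very same reader object; ref only matters when the callee must reassign the caller's variable:

        using System.Data.SqlClient;

        static class ReaderPassing
        {
            // The parameter receives a copy of the reference, not a copy of the reader,
            // so Read() here advances the one shared cursor the caller also sees.
            static void Consume(SqlDataReader reader)
            {
                while (reader.Read())
                {
                    // process the current record
                }
            }

            // 'ref' is only needed if the method must replace the caller's variable
            // with a different reader instance.
            static void Reopen(ref SqlDataReader reader, SqlCommand command)
            {
                reader.Dispose();
                reader = command.ExecuteReader();
            }
        }

    The threading question is separate: a single SqlDataReader is not safe for concurrent Read() calls from several threads, so the workers either need a lock around each read or, more commonly, one thread reads the records and hands copies of the field values to the workers.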

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, In my team of real-time-embedded C/C++ developers, most people don't have any culture of testing their code beyond the casual manual sanity checks. I personally strongly believe in advantages of autonomous automatic tests, but when I try to convince I get some reappearing arguments like: We will spend more time on writing the tests than writing the code. It takes a lot of effort to maintain the tests. Our code is spaghetti; no way we can unit-test it. Our requirement are not sealed – we’ll have to rewrite all the tests every time the requirements are changed. Now, I'd gladly hear any convincing tips and advises, but what I am really looking for are references to researches, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We in IBM/Microsoft/Google, surveying 3475 active projects, found out that putting 50% more development time into testing decreased by 75% the time spent on fixing bugs" or "after half a year, the time needed to write code with test was only marginally longer than what used to take without tests". Any ideas? P.S.: I'm adding C++ tag too in case someone has a specific experience with convincing this, usually elitist, type of developers :-)

    Read the article

  • CSSRules is empty

    - by Stephanie
    I have this very simple HTML page, and I'm trying to get the CSSRules of #poulet, but when I access document.styleSheets[0].cssRules I get this error in Chrome v5.0.375.55: Uncaught TypeError: Cannot read property 'length' of null. Here is what my code looks like:

        //HTML FILE
        window.onload = function(){
            var test = findKeyframesRule('poulet');
            alert(test);
        }

        <div id="poulet">
            allo
        </div>

        //JS FILE
        function findKeyframesRule(rule) {
            var ss = document.styleSheets;
            for (var i = 0; i < ss.length; ++i) {
                for (var j = 0; j < ss[i].cssRules.length; ++j) {
                    if (ss[i].cssRules[j].type == window.CSSRule.WEBKIT_KEYFRAMES_RULE && ss[i].cssRules[j].name == rule)
                        return ss[i].cssRules[j];
                }
            }
            return null;
        }

        //CSS FILE
        html, body{
            background: #cccccc;
        }

        #poulet{
            border: 10px solid pink;
        }

    The files can be found here. I really need help on this one, please!!! D:

    Read the article

  • using a stored procedure for login in c#

    - by Jin Yim
    Hi all, if I run the stored procedure with two parameter values (admin, admin), I get the following output:

        Session_UID    User_Group_Name    Sys_User_Name
        NULL           Administrators     NTMSAdmin

        No rows affected.
        (1 row(s) returned)
        @RETURN_VALUE = 0
        Finished running [dbo].[p_SYS_Login].

    To get the same result in C# I used the following code:

        string strConnection = Settings.Default.ConnectionString;
        using (SqlConnection conn = new SqlConnection(strConnection))
        {
            using (SqlCommand cmd = new SqlCommand())
            {
                SqlDataReader rdr = null;
                cmd.Connection = conn;
                cmd.CommandText = "p_SYS_Login";
                cmd.CommandType = CommandType.StoredProcedure;

                SqlParameter paramReturnValue = new SqlParameter();
                paramReturnValue.ParameterName = "@RETURN_VALUE";
                paramReturnValue.SqlDbType = SqlDbType.Int;
                paramReturnValue.SourceColumn = null;
                paramReturnValue.Direction = ParameterDirection.ReturnValue;

                cmd.Parameters.Add(paramReturnValue);
                cmd.Parameters.Add(paramGroupName);
                cmd.Parameters.Add(paramUserName);
                cmd.Parameters.AddWithValue("@Sys_Login", "admin");
                cmd.Parameters.AddWithValue("@Sys_Password", "admin");

                try
                {
                    conn.Open();
                    rdr = cmd.ExecuteReader();
                    string test = (string)cmd.Parameters["@RETURN_VALUE"].Value;
                    while (rdr.Read())
                    {
                        Console.WriteLine("test : " + rdr[0]);
                    }
                }
                catch (Exception ex)
                {
                    string message = ex.Message;
                    string caption = "MAVIS Exception";
                    MessageBoxButtons buttons = MessageBoxButtons.OK;
                    MessageBox.Show(message, caption, buttons, MessageBoxIcon.Warning, MessageBoxDefaultButton.Button1);
                }
                finally
                {
                    cmd.Dispose();
                    conn.Close();
                }
            }
        }

    But I get nothing in SqlDataReader rdr; is there something I am missing? Thanks
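
    Two details worth checking (the following is only a sketch of the reading pattern, continuing from the conn/cmd setup already shown and assuming the procedure declares just the login and password parameters): the snippet adds paramGroupName and paramUserName that are never created, and a stored procedure's return value is an int that ADO.NET populates only after the data reader has been closed, so reading it right after ExecuteReader and casting it to string will not work:

        cmd.CommandText = "p_SYS_Login";
        cmd.CommandType = CommandType.StoredProcedure;

        // Only add parameters the procedure actually declares.
        cmd.Parameters.AddWithValue("@Sys_Login", "admin");
        cmd.Parameters.AddWithValue("@Sys_Password", "admin");

        SqlParameter returnValue = cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
        returnValue.Direction = ParameterDirection.ReturnValue;

        conn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                Console.WriteLine(rdr[0]);   // Session_UID column from the result set
            }
        }   // return/output parameters are populated only once the reader is closed

        int result = (int)returnValue.Value;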

    Read the article

  • Function pointers to member functions

    - by Jacob
    There are several duplicates of this but nobody explains why I can use a member variable to store the pointer (in FOO) but when I try it with a local variable (in the commented portion of BAR), it's illegal. Could anybody explain this?

        #include <iostream>
        using namespace std;

        class FOO {
        public:
            int (FOO::*fptr)(int a, int b);
            int add_stuff(int a, int b) { return a+b; }
            void call_adder(int a, int b) {
                fptr = &FOO::add_stuff;
                cout<<(this->*fptr)(a,b)<<endl;
            }
        };

        class BAR {
        public:
            int add_stuff(int a, int b) { return a+b; }
            void call_adder(int a, int b) {
                //int (BAR::*fptr)(int a, int b);
                //fptr = &BAR::add_stuff;
                //cout<<(*fptr)(a,b)<<endl;
            }
        };

        int main() {
            FOO test;
            test.call_adder(10,20);
            return 0;
        }

    Read the article

  • Why does a Silverlight application show a blank browser screen when created from exported template?

    - by Edward Tanguay
    I created a silverlight app (without website) named TestApp, with one TextBox: <UserControl x:Class="TestApp.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"> <Grid x:Name="LayoutRoot"> <TextBlock Text="this is a test"/> </Grid> </UserControl> I press F5 and see "this is a test" in my browser (firefox). I select File | Export Template | name it TestAppTemplate and save it. I create a new silverlight app based on the above template. The MainPage.xaml has the exact same XAML as above. I press F5 and see a blank screen in my browser. I look at the HTML source of both of these and they are identical. Everything I have compared in both projects is identical. What do I have to do so that a Silverlight application which is created from my exported template does not show a blank screen? (creating a WPF application from an exported template like this works fine)

    Read the article

  • Why is my Scala function returning type Unit and not whatever is the last line?

    - by Andy
    I am trying to figure out the issue, and have tried different styles that I have read about for Scala, but none of them work. My code is:

        ....
        val str = "(and x y)";

        def stringParse ( exp: String, pos: Int, expreshHolder: ArrayBuffer[String], follow: Int )
        {
            var b = pos; //position of where in the expression String I am currently in
            val temp = expreshHolder; //holder of expressions without parens
            var arrayCounter = follow; //just counts to make sure an empty spot in the array is there to put in the strings
            if(exp(b) == '(') {
                b = b + 1;
                while(exp(b) == ' '){b = b + 1} //point of this is to just skip any spaces between paren and start of expression type
                if(exp(b) == 'a') {
                    temp(arrayCounter) = exp(b).toString;
                    b = b+1;
                    temp(arrayCounter)+exp(b).toString;
                    b = b+1;
                    temp(arrayCounter) + exp(b).toString;
                    arrayCounter+=1}
                temp;
            }
        }

        val hold: ArrayBuffer[String] = stringParse(str, 0, new ArrayBuffer[String], 0);
        for(test <- hold) println(test);

    My error is:

        Driver.scala:35: error: type mismatch;
         found   : Unit
         required: scala.collection.mutable.ArrayBuffer[String]
        ho = stringParse(str, 0, ho, 0);
             ^
        one error found

    When I add an equals sign after the arguments in the method declaration, like so:

        def stringParse ( exp: String, pos: Int, expreshHolder: ArrayBuffer[String], follow: Int ) = {....}

    it changes it to "Any". I am confused about how this works. Any ideas? Much appreciated.

    Read the article

  • Retrieve the Value of An Integer Variable

    - by Abluescarab
    This is probably easily figured out (I feel very stupid right now), but I can't find a solution anywhere, for some reason. Perhaps I'm not searching for the right thing. And maybe it's in some beginner tutorial I haven't watched. Anyway, I was wondering how to retrieve the value of an integer variable in C++? I know you can use cin.getline() for string variables, but I received an error message when I attempted that with an integer variable (and rightfully so, I know it was wrong, but I was looking for a solution). My project is a Win32 console application. What I'm trying to do is ask a user to input a number, stored in the variable n. Then I take the value of n and perform various math functions with it. In my header file, I have string, windows, iostream, stdio, math, and fstream. Do I need to add another library? There's not much more to tell. I can post my code if I must. EDIT: cout << "TEST SINE"; cout << "\nPlease enter a number.\n\n"; cin >> n; break; Here's the code I'm trying to use. Is this all I need to do? If so, how do I incorporate the variable so I can test it using sin, cos, and tan? Yet again, thanks ahead of time.

    Read the article
