Search Results

Search found 26055 results on 1043 pages for 'hd test'.


  • SugarCRM REST API always returns "null"

    - by TuomasR
    I'm trying to test the SugarCRM REST API, running the latest version of CE (6.0.1). Whenever I do a query, the API returns "null" and nothing else. If I omit the parameters, the API returns the API description (which the documentation says it should). I'm trying to perform a login, passing as parameters the method (login), input_type and response_type (json), and rest_data (the JSON-encoded parameters). The following code performs the query:

        $api_target = "http://example.com/sugarcrm/service/v2/rest.php";
        $parameters = json_encode(array(
            "user_auth" => array(
                "user_name" => "admin",
                "password" => md5("adminpassword"),
            ),
            "application_name" => "Test",
            "name_value_list" => array(),
        ));
        $postData = http_build_query(array(
            "method" => "login",
            "input_type" => "json",
            "response_type" => "json",
            "rest_data" => $parameters
        ));
        echo $parameters . "\n";
        echo $postData . "\n";
        echo file_get_contents($api_target, false, stream_context_create(array(
            "http" => array(
                "method" => "POST",
                "header" => "Content-Type: application/x-www-form-urlencoded\r\n",
                "content" => $postData
            )
        ))) . "\n";

    I've tried different variations of the parameters, including using username instead of user_name, and they all give the same result: a response of "null" and nothing else.

    Read the article

  • How to append a string to the next line in Perl

    - by tprayush
    Hi all, I have a requirement like this (this is just a sample script):

        $ cat test.sh
        #!/bin/bash
        perl -e '
        open(IN, "addrss");
        open(OUT, ">>addrss");
        @newval;
        while (<IN>) {
            @col_val = split(/:/);
            if ($. == 1) {
                for ($i = 0; $i <= $#col_val; $i++) {
                    print("Enter value for $col_val[$i] : ");
                    chop($newval[$i] = <STDIN>);
                }
                $str = join(":", @newval);
                $_ = "$str";
                print OUT;
            } else {
                exit 0;
            }
        }
        close(IN);
        close(OUT);
        '

    When I run this script:

        $ ./test.sh
        Enter value for NAME : abc
        Enter value for ADDRESS : asff35
        Enter value for STATE : XYZ
        Enter value for CITY : EIDHFF
        Enter value for CONTACT : 234656758

        $ cat addrss
        NAME:ADDRESS:STATE:CITY:CONTACT
        abc:asff35:XYZ:EIDHFF:234656758

    When I run it a second time:

        $ cat addrss
        NAME:ADDRESS:STATE:CITY:CONTACT
        abc:asff35:XYZ:EIDHFF:234656758ioret:56fgdh:ghdgh:afdfg:987643221    ## appended to the same line

    The new record is appended to the same line; I want it to be added on the next line. NOTE: I want to do this by explicitly using filehandles in Perl, not with shell redirection operators. Please help me!

    Read the article

  • How to use WebSockets to refresh multiple windows in Play Framework 1.2.7

    - by user2468652
    My code works, but it only refreshes the page in one window. If I open window1 and window2, both open a WebSocket connection. If I type "test123" in window1 and click the send button, only window1 is refreshed. How do I refresh both window1 and window2?

    Client:

        <script>
        window.onload = function() {
            document.getElementById('sendbutton').addEventListener('click', sendMessage, false);
            document.getElementById('connectbutton').addEventListener('click', connect, false);
        }
        function writeStatus(message) {
            var html = document.createElement("div");
            html.setAttribute('class', 'message');
            html.innerHTML = message;
            document.getElementById("status").appendChild(html);
        }
        function connect() {
            ws = new WebSocket("ws://localhost:9000/ws?name=test");
            ws.onopen = function(evt) { writeStatus("connected"); }
            ws.onmessage = function(evt) { writeStatus("response: " + evt.data); }
        }
        function sendMessage() {
            ws.send(document.getElementById('messagefield').value);
        }
        </script>
        </head>
        <body>
            <button id="connectbutton">Connect</button>
            <input type="text" id="messagefield"/>
            <button id="sendbutton">Send</button>
            <div id="status"></div>
        </body>

    Play Framework WebSocketController:

        public class WebSocket extends WebSocketController {
            public static void test(String name) {
                while (inbound.isOpen()) {
                    WebSocketEvent evt = await(inbound.nextEvent());
                    if (evt instanceof WebSocketFrame) {
                        WebSocketFrame frame = (WebSocketFrame) evt;
                        System.out.println("received: " + frame.getTextData());
                        if (!frame.isBinary()) {
                            if (frame.getTextData().equals("quit")) {
                                outbound.send("Bye!");
                                disconnect();
                            } else {
                                outbound.send("Echo: %s", frame.getTextData());
                            }
                        }
                    }
                }
            }
        }
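    One possible direction, sketched below as an assumption rather than a verified Play 1.2.7 recipe: each connection only writes to its own outbound channel, so broadcasting requires keeping every open outbound in a shared registry and writing incoming frames to all of them. The registry name (clients) and the overall structure are illustrative; the Play types (WebSocketController, Http.Outbound, WebSocketFrame) are assumed from the question's own controller.

        import java.util.Collections;
        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;

        import play.mvc.Http;
        import play.mvc.Http.WebSocketEvent;
        import play.mvc.Http.WebSocketFrame;
        import play.mvc.WebSocketController;

        public class WebSocket extends WebSocketController {

            // all currently connected windows; Http.Outbound is the per-connection writer
            private static final Set<Http.Outbound> clients =
                    Collections.newSetFromMap(new ConcurrentHashMap<Http.Outbound, Boolean>());

            public static void test(String name) {
                clients.add(outbound);                      // register this window's connection
                try {
                    while (inbound.isOpen()) {
                        WebSocketEvent evt = await(inbound.nextEvent());
                        if (evt instanceof WebSocketFrame) {
                            WebSocketFrame frame = (WebSocketFrame) evt;
                            if (!frame.isBinary()) {
                                // send to every registered window, not only the sender
                                for (Http.Outbound out : clients) {
                                    if (out.isOpen()) {
                                        out.send("Echo: %s", frame.getTextData());
                                    }
                                }
                            }
                        }
                    }
                } finally {
                    clients.remove(outbound);               // unregister on disconnect
                }
            }
        }

    Play's own chat sample solves the same problem with an event stream instead of a manual registry, which may be the more idiomatic route if it fits your setup.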

    Read the article

  • NullPointerException on Activity Testing Tutorial

    - by Bendik
    Hello, I am currently working through the activity testing tutorial (found here) and have a problem. It seems that whenever I try to call something inside the UI thread, I get a java.lang.NullPointerException.

        public void testSpinnerUI() {
            mActivity.runOnUiThread(new Runnable() {
                public void run() {
                    mSpinner.requestFocus();
                }
            });
        }

    This gives me "Incomplete: java.lang.NullPointerException" and nothing else. I have tried this on two different samples now, with the same result. I tried a try/catch block around the mSpinner.requestFocus() call, and it seems that mSpinner is null inside the thread. I have set it up properly with the setUp() function found in the same sample, and a quick assertNotNull(mSpinner) shows me that mSpinner is in fact not null after the setUp() function. What can be the cause of this?

    EDIT: OK, some more testing has been done. It seems that the application being tested resets between each test, which essentially forces me to re-instantiate all variables between each test. Is this normal?
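    For reference, a minimal sketch of how that tutorial's fixture is usually arranged, assuming a test class based on ActivityInstrumentationTestCase2 and the tutorial's SpinnerActivity/R.id.Spinner01 names (adjust them to your project). The point is that the activity can be recreated between tests, so every field is re-fetched in setUp() rather than cached once, and UI work runs through runTestOnUiThread so exceptions surface in the test:

        import android.test.ActivityInstrumentationTestCase2;
        import android.widget.Spinner;

        public class SpinnerActivityTest extends ActivityInstrumentationTestCase2<SpinnerActivity> {

            private SpinnerActivity mActivity;
            private Spinner mSpinner;

            public SpinnerActivityTest() {
                super(SpinnerActivity.class);
            }

            @Override
            protected void setUp() throws Exception {
                super.setUp();
                mActivity = getActivity();                                    // fresh activity for this test
                mSpinner = (Spinner) mActivity.findViewById(R.id.Spinner01);  // re-resolve the view each time
            }

            public void testSpinnerUI() throws Throwable {
                // runTestOnUiThread blocks until the Runnable completes and rethrows
                // anything it throws, so a null field fails loudly right here
                runTestOnUiThread(new Runnable() {
                    public void run() {
                        mSpinner.requestFocus();
                    }
                });
                assertTrue(mSpinner.hasFocus());
            }
        }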

    Read the article

  • Prolog recursion

    - by AhmadAssaf
    I am writing a predicate that should give me back a list of all possible elements. In each iteration it produces an answer, but after the recursion I only get the last answer back. How can I make it give back every single answer? The problem I am trying to solve is finding all possible distributions of a list into other lists. The code:

        test :- bp(3, 12, [7, 3, 5, 4, 6, 4, 5, 2], Answer),
                format("Answer = ~w\n", [Answer]).

        bp(NB, C, OL, A) :-
            addIn(C, OL, [[], [], []], A);
            bp(NB, C, _, A).

        addIn(_, [], Result, Result).
        addIn(C, [Element|Rest], [F|R], Result) :-
            member(Members, [F|R]),
            sumlist(Members, Sum),
            sumlist([Element], ElementLength),
            Cap is Sum + ElementLength,
            (Cap =< C,
             append([Element], Members, New),
             insert(Members, New, [F|R], PartialResult),
             addIn(C, Rest, PartialResult, Result)).

    By calling test, I get back the list of possible answers. But if I try something that should fail, like bp(3, 11, [8, 2, 4, 6, 1, 8, 4], Answer), it just enters an endless loop. Moreover, if I change the clause

        bp(NB, C, OL, A) :-
            addIn(C, OL, [[], [], []], A);
            bp(NB, C, _, A).

    to use "and" instead of "or", I get the error: ERROR: is/2: Arguments are not sufficiently instantiated. I appreciate the help. Thanks a lot @hardmath

    Read the article

  • ASP.NET MVC Unit Testing Controllers - Repositories

    - by Brian McCord
    This is more of an opinion-seeking question, so there may not be a "right" answer, but I would welcome arguments as to why your answer is the "right" one. Given an MVC application that uses Entity Framework for the persistence engine, a repository layer, a service layer that basically defers to the repository, and a delete method on a controller that looks like this:

        public ActionResult Delete(State model)
        {
            try
            {
                if (model == null)
                {
                    return View(model);
                }
                _stateService.Delete(model);
                return RedirectToAction("Index");
            }
            catch
            {
                return View(model);
            }
        }

    I am looking for the proper way to unit test this. Currently, I have a fake repository that gets used in the service, and my unit test looks like this:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            var stateService = GetService();
            var stateController = new StateController(stateService);
            ViewResult result = stateController.Delete(4) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete(model) as RedirectToRouteResult;
            stateController = new StateController(stateService);
            var newresult = stateController.Delete(4) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual(redirectResult.RouteValues["action"], "Index");
            Assert.IsNull(newmodel);
        }

    Is this overkill? Do I need to check that the record actually got deleted (I already have service and repository tests that verify this)? Should I even use a fake repository here, or would it make more sense to mock the whole thing? The examples I'm looking at used this model of doing things, and I just copied it, but I'm really open to doing things in a "best practices" way. Thanks.

    Read the article

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy from them. However, in my second class library (or even in a console test app), when I add the same service reference, it only exposes the types involved in the contract operations and not the client types I need to generate a proxy against.

    For example, the endpoint exposes two services, ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library, I get ISvc1Client and ISvc2Client to generate proxies from, in order to use the operations exposed via those two contracts. In addition to these clients, the service reference also exposes the types involved in the operations (Type1, Type2, etc.); this is what I need. However, when I try to add a service reference to the same endpoint in another console application or class library, only Type1, Type2, etc. are exposed and not ISvc1Client and ISvc2Client, so I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets generated properly in one class library but not in the other, or in the test console app.

    Read the article

  • Problem with pattern matching in OCaml

    - by Antony
    I wrote a function to decompose a Boolean formula. The problem is that at compilation I get this warning: "Warning 5: this function application is partial, maybe some arguments are missing." How can I solve this? Have I set up the pattern matching wrong, or can I not perform this operation with pattern matching? The code is the following:

        let rec decomposition state_init state prec formula =
          match formula with
            And form ->
              (fun () ->
                let f1 = List.hd form in
                let f2 = And (List.tl form) in
                let new_state = Forms (state_init, f1) in
                decomposition state_init new_state state f1;
                decomposition state_init new_state state f2;
                Hashtbl.add graph new_state (("", false, state :: []), []);
                let x = Hashtbl.find graph state in
                let succ = state :: snd x in
                let (desc, last, ptrs) = fst x in
                Hashtbl.replace graph state (("And-node", last, ptrs), succ))

    Read the article

  • JSF dynamic ui:include

    - by Ray
    In my app I have tutor and student as user roles, and I decided that the main page will be the same for both, but the menu will differ for tutors and students. I made two .xhtml pages, tutorMenu.xhtml and studentMenu.xhtml, and want to include the right menu depending on the role. For the whole page I use a layout and just change the "content" part of the ui:composition in every page. In menu.xhtml:

        <h:body>
            <ui:composition>
                <div class="menu_header">
                    <h2>
                        <h:outputText value="#{msg['menu.title']}" />
                    </h2>
                </div>
                <div class="menu_content">
                    <c:if test="#{authenticationBean.user.role.roleId eq '2'}">
                        <ui:include src="/pages/content/body/student/studentMenu.xhtml"/>
                    </c:if>
                    <c:if test="#{authenticationBean.user.role.roleId eq '1'}">
                        <ui:include src="/pages/content/body/tutor/tutorMenu.xhtml" />
                    </c:if>
                </div>
            </ui:composition>
        </h:body>

    I know that using JSTL may not be the best solution, but I can't find another. What is the best way to solve my problem?

    Read the article

  • Multiple exports with MEF does some really heinous stuff -- why, and why is it allowed?

    - by Dave
    I have an interesting situation where I need to do something like this:

        [Export(typeof(ICandy1))]
        [Export(typeof(ICandy2))]
        public class Candy : ICandy2 { ... }

    where

        public interface ICandy1 { ... }
        public interface ICandy2 : ICandy1 { ... }

    I couldn't find any posts anywhere about using multiple [Export] attributes, so I figured, what the hell, might as well try it. At first glance, it actually seemed to work. I have a couple of methods that call into both interfaces of a Candy instance, and it was fine.

    However, as I started to test the app, I saw that the behavior wasn't right, and when looking at the Output window, I saw that I was getting tons of COMExceptions. I couldn't track down where they were all coming from, but they always occurred when a worker thread was sleeping. I figured they had to be coming from the main thread, then, but didn't know how to debug this at all. Nothing should have been going on in the GUI, and I disabled my DispatchTimers just in case -- same thing.

    Even more strange than the COMExceptions was the really erratic behavior when stepping through code. About 30% of the time, when I single-stepped, it would pop out of the method, or it would single-step over two lines of code! Totally weird stuff that I am not used to seeing.

    The only thing that changed between working and non-working code was the introduction of MEF through my plugin-loading code. So as a test, I changed my plugin assembly to export only one interface, and I hardcoded everything in the app that relied on the other (now not-implemented) interface. And now the COMExceptions are gone, and the weird debugging behavior is gone.

    Is this something people here have seen before? If MEF is not expected to allow a class to export multiple interfaces, then shouldn't a CompositionException get raised when composing the parts? Can anyone explain why MEF would cause these weird problems?

    Read the article

  • Android Dev: The constructor Intent(new View.OnClickListener(){}, Class<DrinksTwitter>) is undefined

    - by Malcolm Woods Spark
        package com.android.drinksonme;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.TextView;

        public class Screen2 extends Activity {

            // Declare our Views, so we can access them later
            private EditText etUsername;
            private EditText etPassword;
            private Button btnLogin;
            private Button btnSignUp;
            private TextView lblResult;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // Get the EditText and Button References
                etUsername = (EditText) findViewById(R.id.username);
                etPassword = (EditText) findViewById(R.id.password);
                btnLogin = (Button) findViewById(R.id.login_button);
                btnSignUp = (Button) findViewById(R.id.signup_button);
                lblResult = (TextView) findViewById(R.id.result);

                // Set Click Listener
                btnLogin.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // Check Login
                        String username = etUsername.getText().toString();
                        String password = etPassword.getText().toString();
                        if (username.equals("test") && password.equals("test")) {
                            final Intent i = new Intent(this, DrinksTwitter.class); // error on this line
                            startActivity(i);
                            // lblResult.setText("Login successful.");
                        } else {
                            lblResult.setText("Invalid username or password.");
                        }
                    }
                });

                final Intent k = new Intent(Screen2.this, SignUp.class);
                btnSignUp.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        startActivity(k);
                    }
                });
            }
        }
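    A likely fix, shown as a sketch below: inside the anonymous OnClickListener, "this" refers to the listener itself rather than the Activity, so no Intent(Context, Class) constructor matches. Qualifying it with the enclosing Activity (Screen2.this) is the usual remedy; the class names are taken from the question.

        // Relevant excerpt of onCreate(); DrinksTwitter and the view fields come from the question's project.
        btnLogin.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                String username = etUsername.getText().toString();
                String password = etPassword.getText().toString();
                if (username.equals("test") && password.equals("test")) {
                    // Screen2.this is the enclosing Activity and therefore a Context;
                    // a bare "this" here is the anonymous listener, which is why the
                    // constructor Intent(Context, Class<?>) could not be resolved.
                    final Intent i = new Intent(Screen2.this, DrinksTwitter.class);
                    startActivity(i);
                } else {
                    lblResult.setText("Invalid username or password.");
                }
            }
        });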

    Read the article

  • Compare Long values in Struts2

    - by Marquinio
    Hi everyone. I'm trying to compare two values using the Struts2 s:if tag, but it's not working. If I hardcode the values it works, but I want it to be dynamic. The variable stringValue is of type String; the variable currentLongValue is of type Long.

        <s:set var="stringValue" value="order"/>
        <s:iterator value="listTest">
            <s:set var="currentLongValue" value="value"/>
            <s:if test="#currentLongValue.toString() == #stringValue">
                //Do something
            </s:if>
            <s:else>
                //Do something else
            </s:else>
        </s:iterator>

    For the s:if I have tried toString() and also equals(). It only works if I hardcode the values, for example:

        <s:if test="#currentLongValue == 1234">

    Any clues? Thank you.

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;
            #pragma omp parallel for reduction(+:sum)
            for (uint64_t u = umin; u < umax; u++)
                sum += 1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread.

    Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt it's making the difference between a 19-20x speedup and the ideal 48x speedup.

    To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

    Read the article

  • Is it possible to navigate to the parent node of a matched node during XSLT processing?

    - by Darin
    I'm working with an OpenXML document, processing the main document part with some XSLT. I've selected a set of nodes via:

        <xsl:template match="w:sdt">
        </xsl:template>

    In most cases, I simply need to replace the matched node with something else, and that works fine. BUT, in some cases, I need to replace not the w:sdt node that matched, but the closest w:p ancestor node (i.e. the first paragraph node that contains the sdt node). The trick is that the condition used to decide one or the other is based on data derived from the attributes of the sdt node, so I can't use a typical XSLT XPath filter. I'm trying to do something like this:

        <xsl:template match="w:sdt">
            <xsl:choose>
                <xsl:when test={first condition}>
                    {apply whatever templating is necessary}
                </xsl:when>
                <xsl:when test={exception condition}>
                    <!-- select the parent of the ancestor w:p nodes and apply the appropriate templates -->
                    <xsl:apply-templates select="(ancestor::w:p)/.." mode="backout" />
                </xsl:when>
            </xsl:choose>
        </xsl:template>

        <!-- by using "mode", only this template will be applied to those matching nodes from the apply-templates above -->
        <xsl:template match="node()" mode="backout">
            {CUSTOM FORMAT the node appropriately}
        </xsl:template>

    This whole concept works, BUT no matter what I've tried, it always applies the formatting from the CUSTOM FORMAT template to the w:p node, NOT its parent node. It's almost as if you can't reference a parent from a matching node. And maybe you can't, but I haven't found any docs that say you can't. Any ideas?

    Read the article

  • Google AppEngine + Local JUnit Tests + Jersey framework + Embedded Jetty

    - by xamde
    I use Google App Engine for Java (GAE/J). On top of it, I use the Jersey REST framework. Now I want to run local JUnit tests. The test sets up the local GAE development environment ( http://code.google.com/appengine/docs/java/tools/localunittesting.html ), launches an embedded Jetty server, and then fires requests at the server via HTTP and checks the responses.

    Unfortunately, the Jersey/Jetty combo spawns new threads, and GAE expects only one thread to run. In the end, I end up having either no datastore inside the Jersey resources, or multiple threads each with a different datastore. As a workaround I initialise the GAE local environment only once, put it in a static variable, and inside the Jersey resource I add many checks (this thread has no dev environment? Re-use the static one). And these checks should of course only run inside JUnit tests... (which I asked about before: "How can I find out if code is running inside a JUnit test or not?" - I'm not allowed to post the link directly here :-|)
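    One possible shape for that workaround, written out below purely as a sketch: the local test helper is created once on the JUnit thread, and a tiny servlet filter (the name GaeTestEnvironmentFilter is made up for this example) copies the captured ApiProxy environment onto whatever Jetty worker thread handles the request, before Jersey's resources run. Whether this fits depends on how the embedded Jetty is wired up in your test.

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;
        import com.google.apphosting.api.ApiProxy;

        public class GaeTestEnvironmentFilter implements Filter {

            // one shared helper for the whole test run: setUp() in @BeforeClass, tearDown() in @AfterClass
            public static final LocalServiceTestHelper HELPER =
                    new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

            // captured on the JUnit thread right after HELPER.setUp()
            public static volatile ApiProxy.Environment testEnvironment;

            public void init(FilterConfig filterConfig) {
            }

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                // Jetty serves the request on its own worker thread, which has no GAE
                // environment of its own; attach the one created by the test so every
                // Jersey resource sees the same local datastore.
                if (ApiProxy.getCurrentEnvironment() == null && testEnvironment != null) {
                    ApiProxy.setEnvironmentForCurrentThread(testEnvironment);
                }
                chain.doFilter(req, res);
            }

            public void destroy() {
            }
        }

    In the test itself the idea would be: HELPER.setUp(); GaeTestEnvironmentFilter.testEnvironment = ApiProxy.getCurrentEnvironment(); register the filter in front of Jersey's servlet on the embedded Jetty; and HELPER.tearDown() when the suite finishes.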

    Read the article

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled, but the build still runs to completion when the JSP compilation has errors. This is part of my build.xml:

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar" resultproperty="result">
            <jvmarg value="-Djava.compiler=NONE"/>
            <arg value="-extend"/>
            <arg value="com.orionserver.http.OrionHttpJspPage"/>
            <arg value="-batchMask"/>
            <arg value="*.jsp"/>
            <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/>
        </java>
        <echo level="info">Result Property: ${result}</echo>

    I have tried setting the attribute failonerror="true", but that does not change anything. I receive the following output:

        [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war...
        [java] Setting up temp area...
        [java] Expanding archive in temp area...
        [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol
        [java] symbol  : variable reqvst
        [java] location: class _web_2d_inf._jsp._password
        [java] out.print(reqvst.getAttribute("test"));
        [java] ^
        [java] 1 error
        [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ...
        [java] Removing temp area...
        [echo] Result Property: 0
        ...(more commands)
        BUILD SUCCESSFUL

    In the password.jsp file, I intentionally introduced an error to test. How can I get the build to fail on an error? At the Ant <java> task page, I am confused by:

        By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to the name of a
        property and have it assigned to the result code (barring immutability, of course). When you set
        failonerror="true", the only possible value for resultproperty is 0. Any non-zero response is treated as an
        error and would mean the build exits.

    Read the article

  • How can I spec out an Authlogic sessions controller using a stub?

    - by Dave
    I want to test my UserSessionsController, verifying that a user session is first built and then saved. My UserSession class looks like this:

        class UserSession < Authlogic::Session::Base
        end

    The create method of my UserSessionsController looks like this:

        def create
          @user_session = UserSession.new(params[:user_session])
          if @user_session.save
            flash[:notice] = "Successfully logged in."
            redirect_back_or_default administer_home_page_url
          else
            render :new
          end
        end

    And my controller spec looks like this:

        describe UserSessionsController do
          it "should build a new user session" do
            UserSession.stub!(:new).with(:email, :password)
            UserSession.should_receive(:new).with(:email => "[email protected]", :password => "foobar")
            post :create, :user_session => { :email => "[email protected]", :password => "foobar" }
          end
        end

    I stub out the new method, but I still get the following error when I run the test:

        Spec::Mocks::MockExpectationError in 'UserSessionsController should build a new user session'
        <UserSession (class)> received :new with unexpected arguments
          expected: ({:password=>"foobar", :email=>"[email protected]"})
               got: ({:priority_record=>nil}, nil)

    It's as though the new method is being called on UserSession before my controller code runs. Calling activate_authlogic makes no difference.

    Read the article

  • Stuck trying to get Log4Net to work with Dependency Injection

    - by Pure.Krome
    I've got a simple WinForms test app I'm using to try some Log4Net dependency-injection ideas. I've made a simple interface in my Services project:

        public interface ILogging
        {
            void Debug(string message);
            // snip the others
        }

    Then my concrete type uses Log4Net:

        public class Log4NetLogging : ILogging
        {
            private static ILog Log4Net
            {
                get
                {
                    return LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
                }
            }

            public void Debug(string message)
            {
                if (Log4Net.IsDebugEnabled)
                {
                    Log4Net.Debug(message);
                }
            }
        }

    So far so good; nothing too hard there. Now, in a different project (and therefore namespace), I try to use this:

        public partial class Form1 : Form
        {
            public Form1()
            {
                FileInfo fileInfo = new FileInfo("Log4Net.config");
                log4net.Config.XmlConfigurator.Configure(fileInfo);
            }

            private void Foo()
            {
                // This would be handled with DI, but I've not set it up
                // (on the constructor, in this code example).
                ILogging logging = new Log4NetLogging();
                logging.Debug("Test message");
            }
        }

    OK, also pretty simple. I've hardcoded the ILogging instance, but that is usually dependency-injected via the constructor. Anyway, when I check this line of code...

        return LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

    ...the DeclaringType value is a type from the Services namespace, not the type of the form (i.e. X.Y.Z.Form1) which actually called the method. Without passing the type into the method as another argument, is there any way, using reflection, to figure out the real method that called it?

    Read the article

  • Should I pass a SqlDataReader by reference or not when passing it out to multiple threads?

    - by deroby
    Hi all, being new to C# I've run into this conundrum when passing a SqlDataReader around between different threads. Without going into too much detail, the idea is to have a main thread fetching data from the database (a large recordset) and then have a helper task run through it record by record, doing some work based on the contents of each. There is no feedback to the recordset; it simply wades through until no records are left.

    This works fine, but given the nature of the job at hand, it should be possible to spread the work over different threads (CPUs) to maximize throughput (the order of execution is of no significance). The question then becomes: when I pass this recordset as a SqlDataReader, do I have to use ref or not? It boils down to this: if I pass the object around without specifying ref, won't it create new copies in memory and have records processed n times? Or don't I risk having the record position moved forward while not all fields have been fully read yet? The latter seems more like a data-racing issue and probably is covered by the lock()ing mechanism (or not?).

    My initial take on the problem was that it doesn't really hurt to pass the variable using ref, yet as a colleague put it: "you only need ref when you're doing something wrong" =) Additionally, using ref stops me from using a using() construction, which isn't very nice either. I thus created a "basic" project that takes the same approach but without the ref notation. Tests so far show that it works flawlessly on a Core 2 Duo (2 CPUs) using any number of threads, yet I'm still a bit wary... What do you experts think about this? Use ref or not? You can find the test project here, as it seems I can't upload it to this question directly ?!?

    ps: it's just a test project and I'm new to C#, so please be gentle with me when breaking down the code =P
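    As a side note, a common alternative to handing the reader itself to several threads is to keep the reader on one thread and hand out the already-read rows. The sketch below is only an illustration of that pattern in Java (BlockingQueue plus a worker pool), not C#/SqlDataReader code; the row type and the processRow logic are placeholders.

        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class SingleReaderManyWorkers {

            // a poison pill that tells workers there is nothing more to read
            private static final List<Object> END = List.of();

            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<List<Object>> rows = new ArrayBlockingQueue<>(1024);
                int workers = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(workers);

                // workers: each row is taken by exactly one worker, so no row is processed twice
                for (int i = 0; i < workers; i++) {
                    pool.submit(() -> {
                        try {
                            while (true) {
                                List<Object> row = rows.take();
                                if (row == END) {
                                    rows.put(END);   // let the next worker see the pill too
                                    return;
                                }
                                processRow(row);     // placeholder for the per-record work
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }

                // the reader thread (here: main) is the only one that ever advances the cursor
                for (int i = 0; i < 100_000; i++) {
                    rows.put(List.of("record-" + i));    // stand-in for reading one record
                }
                rows.put(END);

                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }

            private static void processRow(List<Object> row) {
                // do something with the record
            }
        }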

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page:

        <script>
        var info = {
            poke: 1,
            width: 1,
            height: 1,
            path: "widget.html"
        }

        chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) {
            if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") {
                chrome.extension.sendRequest(
                    sender.id, {
                        head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback",
                        body: info,
                    }
                );
            }
        });

        function initSelectedTab() {
            localStorage.setItem("selectedTab", "Something");
        }

        initSelectedTab();
        </script>

    Here is manifest.json:

        {
            "update_url": "http://clients2.google.com/service/update2/crx",
            "background_page": "background.html",
            "name": "Test Widget",
            "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.",
            "icons": { "128": "icon.png" },
            "version": "0.0.1"
        }

    Here is the relevant part of widget.html:

        <script>
        var selectedTab = localStorage.getItem("selectedTab");
        document.write(selectedTab);
        </script>

    Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?

    Read the article

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes:

        WITH test_data AS (
            SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
            SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
            SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
            SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
            SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        )
        -- #end of test-data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
            t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
            FOR i IN (
                WITH test_data AS (
                    SELECT SYSDATE + ROWNUM / 5000 d FROM dual
                    CONNECT BY ROWNUM <= 500000
                )
                SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
                FROM test_data
            ) LOOP
                NULL;
            END LOOP;
            dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 03.24 s.
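    For comparison only, the same "floor to the previous 10-minute boundary" idea expressed in Java with java.time; this is just an illustration of the rounding logic, not an alternative to doing it inside the Oracle query.

        import java.time.LocalDateTime;
        import java.time.temporal.ChronoUnit;

        public class TenMinuteFloor {

            static LocalDateTime floorToTenMinutes(LocalDateTime t) {
                LocalDateTime truncated = t.truncatedTo(ChronoUnit.MINUTES);  // drop seconds (and fractions)
                return truncated.minusMinutes(truncated.getMinute() % 10);    // drop the last digit of the minutes
            }

            public static void main(String[] args) {
                System.out.println(floorToTenMinutes(LocalDateTime.parse("2010-01-01T10:09:59"))); // 2010-01-01T10:00
                System.out.println(floorToTenMinutes(LocalDateTime.parse("2010-01-01T10:10:00"))); // 2010-01-01T10:10
            }
        }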

    Read the article

  • Using a stored procedure for login in C#

    - by Jin Yim
    Hi all, if I run a stored procedure with two parameter values (admin, admin), I get the following message:

        Session_UID    User_Group_Name    Sys_User_Name
        NULL           Administrators     NTMSAdmin

        No rows affected.
        (1 row(s) returned)
        @RETURN_VALUE = 0
        Finished running [dbo].[p_SYS_Login].

    To get the same result in C# I used the following code:

        string strConnection = Settings.Default.ConnectionString;
        using (SqlConnection conn = new SqlConnection(strConnection))
        {
            using (SqlCommand cmd = new SqlCommand())
            {
                SqlDataReader rdr = null;
                cmd.Connection = conn;
                cmd.CommandText = "p_SYS_Login";
                cmd.CommandType = CommandType.StoredProcedure;

                SqlParameter paramReturnValue = new SqlParameter();
                paramReturnValue.ParameterName = "@RETURN_VALUE";
                paramReturnValue.SqlDbType = SqlDbType.Int;
                paramReturnValue.SourceColumn = null;
                paramReturnValue.Direction = ParameterDirection.ReturnValue;

                cmd.Parameters.Add(paramReturnValue);
                cmd.Parameters.Add(paramGroupName);
                cmd.Parameters.Add(paramUserName);
                cmd.Parameters.AddWithValue("@Sys_Login", "admin");
                cmd.Parameters.AddWithValue("@Sys_Password", "admin");

                try
                {
                    conn.Open();
                    rdr = cmd.ExecuteReader();
                    string test = (string)cmd.Parameters["@RETURN_VALUE"].Value;
                    while (rdr.Read())
                    {
                        Console.WriteLine("test : " + rdr[0]);
                    }
                }
                catch (Exception ex)
                {
                    string message = ex.Message;
                    string caption = "MAVIS Exception";
                    MessageBoxButtons buttons = MessageBoxButtons.OK;
                    MessageBox.Show(message, caption, buttons,
                        MessageBoxIcon.Warning, MessageBoxDefaultButton.Button1);
                }
                finally
                {
                    cmd.Dispose();
                    conn.Close();
                }
            }
        }

    But I get nothing in the SqlDataReader rdr. Is there something I am missing? Thanks

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, in my team of real-time embedded C/C++ developers, most people have no culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous, automatic tests, but when I try to convince people I get some recurring arguments like:

        - We will spend more time on writing the tests than writing the code.
        - It takes a lot of effort to maintain the tests.
        - Our code is spaghetti; no way we can unit-test it.
        - Our requirements are not sealed - we'll have to rewrite all the tests every time the requirements change.

    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We in IBM/Microsoft/Google, surveying 3475 active projects, found out that putting 50% more development time into testing decreased by 75% the time spent on fixing bugs" or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas?

    P.S.: I'm adding the C++ tag too in case someone has specific experience with convincing this, usually elitist, type of developer :-)

    Read the article

  • CSSRules is empty

    - by Stephanie
    I have this very simple HTML page, and I'm trying to get the CSS rules of #poulet, but when I access document.styleSheets[0].cssRules I get this error in Chrome v5.0.375.55:

        Uncaught TypeError: Cannot read property 'length' of null

    Here is what my code looks like:

        //HTML FILE
        window.onload = function(){
            var test = findKeyframesRule('poulet');
            alert(test);
        }

        <div id="poulet">
            allo
        </div>

        //JS FILE
        function findKeyframesRule(rule) {
            var ss = document.styleSheets;
            for (var i = 0; i < ss.length; ++i) {
                for (var j = 0; j < ss[i].cssRules.length; ++j) {
                    if (ss[i].cssRules[j].type == window.CSSRule.WEBKIT_KEYFRAMES_RULE &&
                        ss[i].cssRules[j].name == rule)
                        return ss[i].cssRules[j];
                }
            }
            return null;
        }

        //CSS FILE
        html, body {
            background: #cccccc;
        }
        #poulet {
            border: 10px solid pink;
        }

    The files can be found here. I really need help on this one, please!!! D:

    Read the article

  • Can't find nfsbooted for Kerrighed PXE boot with Ubuntu Lucid Server

    - by Pengin
    I'm following installation guides for PXE booting and Kerrighed. I can't find the package nfsbooted for Ubuntu 10.04. Where did it go?

    Context: At work I have access to 8 mini-ITX PCs and am trying to build a cluster. My plans include trying Condor, GridGain, Hadoop, and recently Kerrighed has caught my eye. (I realise these are all for different kinds of things; I'm just evaluating.) Ideally, I'd like to have all the nodes network-boot from a single server, since that seems so much easier to manage, plus I can 'borrow' additional PCs for a while without touching their HD.

    I've been getting on great with Ubuntu Lucid Server (10.04), trying to follow the only guides I can find to get PXE booting (and ultimately Kerrighed) to work. This guide is for Ubuntu 8.04 and this one is for Debian. They both refer to a package I can't seem to find, nfsbooted. Has this package been replaced? Am I doing something daft?

    Read the article
