Search Results

Search found 27151 results on 1087 pages for 'end to end'.

Page 352/1087

  • How do I mock a method with an open array parameter in PascalMock?

    - by Oliver Giesen
    I'm currently in the process of getting started with unit testing and mocking for good and I stumbled over the following method that I can't seem to fabricate a working mock implementation for: function GetInstance(const AIID: TGUID; out AInstance; const AArgs: array of const; const AContextID: TImplContextID = CID_DEFAULT): Boolean; (TImplContextID is just an alias for Integer) I thought it would have to look something like this: function TImplementationProviderMock.GetInstance( const AIID: TGUID; out AInstance; const AArgs: array of const; const AContextID: TImplContextID): Boolean; begin Result := AddCall('GetInstance') .WithParams([@AIID, AContextID]) .ReturnsOutParams([AInstance]) .ReturnValue; end; But the compiler complains about the .ReturnsOutParams([AInstance]) saying "Bad argument type in variable type array constructor.". Also I haven't found a way to specify the open array parameter AArgs at all. Also, is using the @-notation for the TGUID-typed parameter the right way to go? Is it possible to mock this method with the current version of PascalMock at all? Update: I now realize I got the purpose of ReturnsOutParams completely wrong: It's intended to be used for populating the values to be returned when defining the expectations rather than for mocking the call itself. I now think the correct syntax for mocking the out parameter would probably have to look more like this: function TImplementationProviderMock.GetInstance( const AIID: TGUID; out AInstance; const AArgs: array of const; const AContextID: TImplContextID): Boolean; var lCall: TMockMethod; begin lCall := AddCall('GetInstance').WithParams([@AIID, AContextID]); Pointer(AInstance) := lCall.OutParams[0]; Result := lCall.ReturnValue; end; The questions that remain are how to mock the open array parameter AArgs and whether passing the TGUID argument (i.e. a value type) by address will work out...

    Read the article

  • Introduction to vectorizing in MATLAB - any good tutorials?

    - by Gacek
    I'm looking for good tutorials on vectorizing loops in MATLAB. I have a quite simple algorithm, but it uses two for loops. I know it should be simple to vectorize, and I would like to learn how to do it instead of asking you for the solution. But to show what kind of problem I have, so you can suggest tutorials that cover similar cases, here's the outline: B = zeros(size(A)); % //A is a given matrix. for i=1:size(A,1) for j=1:size(A,2) H = ... %// take some surrounding elements of the element at position (i,j) (i.e. using a 3x3 mask) B(i,j) = computeSth(H); %// compute something on the selected elements and place it in B end end So, I'm NOT asking for the solution. I'm asking for good tutorials and examples of vectorizing loops in MATLAB. I would like to learn how to do it and then do it on my own.
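
    A tiny, purely illustrative sketch of what "vectorizing" means, written in Python/NumPy rather than MATLAB (and deliberately not solving the 3x3-mask case above), may help make the term concrete:

      import numpy as np

      A = np.arange(12, dtype=float).reshape(3, 4)   # an example input matrix

      # Loop version: visit every element one at a time.
      B_loop = np.zeros_like(A)
      for i in range(A.shape[0]):
          for j in range(A.shape[1]):
              B_loop[i, j] = A[i, j] ** 2 + 1.0

      # Vectorized version: one expression over the whole array at once.
      B_vec = A ** 2 + 1.0

      assert np.allclose(B_loop, B_vec)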

    Read the article

  • C# : Forcing a clean run in a long running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL db table. Once it has done its bit it then starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column plus one of the other columns and running each value in the column through a validation pipeline, where the results are stored in another database. My problem is that when the reader hits the end of handling one column, I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening for some reason.
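
    One alternative to forcing garbage collection is to never hold a whole column in memory at all; a rough sketch of that streaming pattern, written in Python with a generic DB-API cursor rather than the asker's C# (get_connection, big_table and validate are hypothetical placeholders):

      CHUNK = 10_000

      def validate_column(conn, column_name):
          # column_name is assumed to come from a trusted, fixed list.
          cur = conn.cursor()
          cur.execute(f"SELECT id, {column_name} FROM big_table")
          while True:
              rows = cur.fetchmany(CHUNK)     # only CHUNK rows are alive at once
              if not rows:
                  break
              for row_id, value in rows:
                  validate(row_id, value)     # hypothetical validation pipeline
          cur.close()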

    Read the article

  • How can I release this NSXMLParser without crashing my app?

    - by prendio2
    Below is the @interface for an MREntitiesConverter object I use to strip all html tags from a string using an NSXMLParser. @interface MREntitiesConverter : NSObject { NSMutableString* resultString; NSString* xmlStr; NSData *data; NSXMLParser* xmlParser; } @property (nonatomic, retain) NSMutableString* resultString; - (NSString*)convertEntitiesInString:(NSString*)s; @end And this is the implementation: @implementation MREntitiesConverter @synthesize resultString; - (id)init { if([super init]) { self.resultString = [NSMutableString string]; } return self; } - (NSString*)convertEntitiesInString:(NSString*)s { xmlStr = [NSString stringWithFormat:@"<data>%@</data>", s]; data = [xmlStr dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES]; xmlParser = [[NSXMLParser alloc] initWithData:data]; [xmlParser setDelegate:self]; [xmlParser parse]; return [resultString autorelease]; } - (void)dealloc { [data release]; //I want to release xmlParser here but it crashes the app [super dealloc]; } - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)s { [self.resultString appendString:s]; } @end If I release xmlParser in the dealloc method I am crashing my app but without releasing I am quite obviously leaking memory. I am new to Instruments and trying to get the hang of optimising this app. Any help you can offer on this particular issue will likely help me solve other memory issues in my app. Yours in frustrated anticipation: ) Oisin

    Read the article

  • using load instead of other I/O command

    - by Amadou
    How can I modify this program using load-ascii command to read (x,y)? n=0; sum_x = 0; sum_y = 0; sum_x2 = 0; sum_xy = 0; disp('This program performs a least-squares fit of an'); disp('input data set to a straight line. Enter the name'); disp('of the file containing the input (x,y) pairs: '); filename = input(' ','s'); [fid,msg] = fopen(filename,'rt'); if fid<0 disp(msg); else [in,count]=fscanf(fid, '%g %g',2); while ~feof(fid) x=in(1); y=in(2); n=n+1; sum_x=sum_x+x; sum_y=sum_y+y; sum_x2=sum_x2+x.^2; sum_xy=sum_xy+x*y; [in,count] = fscanf(fid, '%f',[1 2]); end fclose(fid); x_bar = sum_x / n; y_bar = sum_y / n; slope = (sum_xy - sum_x*y_bar) / (sum_x2 - sum_x*x_bar); y_int = y_bar - slope * x_bar; fprintf('Regression coefficients for the least-squares line:\n'); fprintf('Slope (m) =%12.3f\n',slope); fprintf('Intercept (b) =%12.3f\n',y_int); fprintf('No of points =%12d\n',n); end

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are: Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning. Move to a different key-value database, such as memcachedb or Tokyo Cabinet. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it). How should I approach this problem?

    Read the article

  • LaTex: how does the include-command work?

    - by HH
    I assumed the include command simply copy-pastes code during compilation, but that must be wrong, because the code stopped working. Please see the middle part of the code. I only copy-pasted the code into the file and added the include command. $ cat results/frames.tex 10.31 & 8.50 & 7.40 \\ 10.34 & 8.53 & 7.81 \\ 8.22 & 8.62 & 7.78 \\ 10.16 & 8.53 & 7.44 \\ 10.41 & 8.38 & 7.63 \\ 10.38 & 8.57 & 8.03 \\ 10.13 & 8.66 & 7.41 \\ 8.50 & 8.60 & 7.15 \\ 10.41 & 8.63 & 7.21 \\ 8.53 & 8.53 & 7.12 \\ LaTeX code \begin{table} \begin{tabular}{ | l | m | r |} \hline $t$ / s & $d_{1}$ / s & $d_{2}$ / s \\ $\Delta h = 0,01 s$ & $\Delta d = 0,01 s$ & $\Delta d = 0,01 s$ \\ \hline % I JUST COPIED THE CODE from here to the file, included. % It stopped working, why? \include{results/frames.tex} \hline $\pi (\frac{d_{1}}{2} - \frac{d_{2}}{2})$ & $2 \pi R h$ & $2 \pi r h$ \\ \hline \end{tabular} \end{table}

    Read the article

  • PIC C - Sending 200 values over USB, but it only sends 25 of them...

    - by Adam
    I have a PIC18F4455 microcontroller which I am trying to use to send 200 values over USB. Basically I am using a for loop and a printf statement to print the values to the usb output stream. However, when the code executes I see in my serial port monitor that it is only sending the first 25 values, then stopping. My PIC C code is below. It will send out the 25th value (and the comma), but not send anything after and not send a newline character. I'm trying to get it to send all the values, then a newline character at the end. I am sending them all as characters because I can convert them on the PC end of it. //print #3 for (i = 0; i <= 199; i++){if (data[i]=='\0' || data[i]=='\n'){data[i]++;}} for (i = 0; i < 199; i++){printf(usb_cdc_putc, "%c,", data[i]);} printf(usb_cdc_putc, "%c\n", data[199]);

    Read the article

  • Is it possible to include a Sexpr before the expression has been evaluated in Sweave / R ?

    - by PaulHurleyuk
    Hello, I'm writing a Sweave document, and I want to include a small section that details the R and package versions, platforms, and how long it took to evaluate the document; however, I want to put this in the middle of the document! I was using a \Sexpr{elapsed} to do this (which didn't work), but thought that if I put the code printing elapsed in a chunk that evaluates at the end, I could then include the chunk halfway through, which also fails. My document looks something like this % \documentclass[a4paper]{article} \usepackage[OT1]{fontenc} \usepackage{longtable} \usepackage{geometry} \usepackage{Sweave} \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in} \begin{document} <<label=start, echo=FALSE, include=FALSE>>= startt<-proc.time()[3] @ Text and Sweave Code in here % This document was created on \today, with \Sexpr{print(version$version.string)} running on a \Sexpr{print(version$platform)} platform. It took approx sec to process. <<>>= <<elapsed>> @ More text and Sweave code in here <<label=bye, include=FALSE, echo=FALSE>>= odbcCloseAll() endt<-proc.time()[3] elapsedtime<-as.numeric(endt-startt) @ <<label=elapsed, include=FALSE, echo=FALSE>>= print(elapsedtime) @ \end{document} But this doesn't seem to work (amazingly!) Does anyone know how I could do this? Thanks, Paul.

    Read the article

  • Why is Delphi unable to infer the type for a parameter TEnumerable<T>?

    - by deepc
    Consider the following declaration of a generic utility class in Delphi 2010: TEnumerableUtils = class public class function InferenceTest<T>(Param: T): T; class function Count<T>(Enumerable: TEnumerable<T>): Integer; overload; class function Count<T>(Enumerable: TEnumerable<T>; Filter: TPredicate<T>): Integer; overload; end; Somehow the compiler type inference seems to have problems here: var I: Integer; L: TList<Integer>; begin TEnumerableUtils.InferenceTest(I); // no problem here TEnumerableUtils.Count(L); // does not compile: E2250 There is no overloaded version of 'Count' that can be called with these arguments TEnumerableUtils.Count<Integer>(L); // compiles fine end; The first call works as expected and T is correctly inferred as Integer. The second call does not work, unless I also add <Integer> -- then it works, as can be seen in the third call. Am I doing something wrong, or does type inference in Delphi just not support this? (I don't think this is a problem in Java, which is why I expected it to work in Delphi, too.)

    Read the article

  • Why would this Lua optimization hack help?

    - by Ian Boyd
    I'm looking over a document that describes various techniques to improve performance of Lua script code, and I'm shocked that such tricks would be required. (Although I'm quoting Lua, I've seen similar hacks in JavaScript.) Why would this optimization be required: For instance, the code for i = 1, 1000000 do local x = math.sin(i) end runs 30% slower than this one: local sin = math.sin for i = 1, 1000000 do local x = sin(i) end They're re-declaring the sin function locally. Why would this be helpful? It's the job of the compiler to do that anyway. Why is the programmer having to do the compiler's job? I've seen similar things in JavaScript; and so obviously there must be a very good reason why the interpreting compiler isn't doing its job. What is it? I see it repeatedly in the Lua environment I'm fiddling in; people redeclaring variables as local: local strfind = strfind local strlen = strlen local gsub = gsub local pairs = pairs local ipairs = ipairs local type = type local tinsert = tinsert local tremove = tremove local unpack = unpack local max = max local min = min local floor = floor local ceil = ceil local loadstring = loadstring local tostring = tostring local setmetatable = setmetatable local getmetatable = getmetatable local format = format local sin = math.sin What is going on here that people have to do the work of the compiler? Is the compiler confused about how to find format? Why is this an issue that a programmer has to deal with? Why would this not have been taken care of in 1993? I also seem to have hit a logical paradox: Optimization should not be done without profiling; Lua has no ability to be profiled; Lua should not be optimized.
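
    For what it's worth, the same micro-optimization exists in Python, where every iteration otherwise pays for a global lookup plus an attribute lookup; a rough sketch (exact timings vary):

      import math
      import timeit

      def slow(n=1_000_000):
          total = 0.0
          for i in range(n):
              total += math.sin(i)    # looks up `math`, then `.sin`, on every iteration
          return total

      def fast(n=1_000_000):
          sin = math.sin              # bind the function to a local name once
          total = 0.0
          for i in range(n):
              total += sin(i)         # cheap local lookup inside the loop
          return total

      print(timeit.timeit(slow, number=1))
      print(timeit.timeit(fast, number=1))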

    Read the article

  • using an alternative string quotation syntax in python

    - by Cawas
    Just wondering... I find using escape characters too distracting. I'd rather do something like this: print ^'Let's begin and end with sets of unlikely 2 chars and bingo!'^ Let's begin and end with sets of unlikely 2 chars and bingo! Note the ' inside the string, and how this syntax would have no issue with it, or with whatever else is inside, for basically all cases. Too bad markdown can't properly colorize it (yet), so I decided to <pre> it. Sure, the ^ could be any other char; I'm not sure what would look/work better. That sounds good enough to me, though. Probably some other language already has a similar solution. And, just maybe, Python already has such a feature and I overlooked it. I hope this is the case. But if it isn't, would it be too hard to, somehow, change Python's interpreter and be able to select an arbitrary (or even standardized) syntax for notating strings? I realize there are many ways to change statements and the whole syntax in general by using precompilers, but this is far more specific. And going any of those routes is what I call "too hard". I don't really need to do this, so, again, I'm just wondering.
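
    For reference, Python's existing quoting rules already cover most of this without a new delimiter, since single, double, triple and raw quotes can be mixed so the delimiter never collides with the content:

      # A double-quoted string can contain ' without escaping.
      print("Let's begin and end without escaping this apostrophe")

      # Triple-quoted strings accept both quote characters and newlines.
      print("""She said "let's go" and that was that.""")

      # Raw strings leave backslashes alone (handy for regexes and Windows paths).
      print(r"C:\new\folder contains no newline or formfeed characters")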

    Read the article

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain specific, so session IDs are sent to every page on the domain (I don't know if there is a way to make cookies work differently), and session variables are therefore visible on every page of this domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to completely avoid the PHP session system and make a global object which would store session data and, at the end of the script, save it to the database. On first access I would generate a unique session_id and create a cookie. At the end of the script I would save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT. On every access I would check the database for the session_id sent in the client's cookie, check the IP, port and user agent (for security), and read the data into the session variable (if not expired). If the session_id has expired, delete it from the database. That session variable would be implemented as a singleton (I know I would get tight coupling with this class, but I don't know of a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and domain; custom management of session expiration; a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there any better way? Thank you!
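
    A minimal sketch of the described flow (generate a token, store it with a client fingerprint, validate and expire it on each request), written in Python with a plain dict standing in for the database; all names here are illustrative, not PHP APIs:

      import hashlib
      import secrets
      import time

      SESSION_TTL = 30 * 60   # 30 minutes, an arbitrary choice

      def fingerprint(ip, user_agent):
          return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

      def create_session(db, ip, user_agent):
          sid = secrets.token_hex(32)          # random, unguessable session id for the cookie
          now = time.time()
          db[sid] = {"fp": fingerprint(ip, user_agent),
                     "created": now, "last": now, "data": {}}
          return sid

      def load_session(db, sid, ip, user_agent):
          sess = db.get(sid)
          if sess is None:
              return None
          if time.time() - sess["last"] > SESSION_TTL:
              del db[sid]                      # expired: remove it from the store
              return None
          if sess["fp"] != fingerprint(ip, user_agent):
              return None                      # fingerprint mismatch: reject the session
          sess["last"] = time.time()
          return sess["data"]

    One caveat with the design as described: the client port usually changes on every request, so binding the session to REMOTE_PORT would invalidate sessions constantly; the IP address and user agent are the more usual choices.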

    Read the article

  • ORACLE: can we create global temp tables or any tables in stored proc?

    - by mrp
    Hi, below is the stored proc I wrote: create or replace procedure test005 as begin CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN ( COL1 NUMBER(9), COL2 VARCHAR2(30), COL3 DATE ) ON COMMIT PRESERVE ROWS / INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate); INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate); INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate); COMMIT; end; when i executed it , i get an error message mentioning: create or replace procedure test005 as begin CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN ( COL1 NUMBER(9), COL2 VARCHAR2(30), COL3 DATE ) ON COMMIT PRESERVE ROWS / INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate); INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate); INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate); COMMIT; end; Error at line 1 ORA-00955: name is already used by an existing object Script Terminated on line 1. I tried to drop the TEMP_TRAN and it says table doesn't exist. So there is no TEMP_TRAN table existed in system. why am I getting this error? I am using TOAD to create this stored proc. Any help would be highly appreciated.

    Read the article

  • Why does this MySQL function return null?

    - by Shore
    Description: the query actually run have 4 results returned,as can be see from below, what I did is just concate the items then return, but unexpectedly,it's null. I think the code is self-explanatory: DELIMITER | DROP FUNCTION IF EXISTS get_idiscussion_ask| CREATE FUNCTION get_idiscussion_ask(iask_id INT UNSIGNED) RETURNS TEXT DETERMINISTIC BEGIN DECLARE done INT DEFAULT 0; DECLARE body varchar(600); DECLARE created DATETIME; DECLARE anonymous TINYINT(1); DECLARE screen_name varchar(64); DECLARE result TEXT; DECLARE cur1 CURSOR FOR SELECT body,created,anonymous,screen_name from idiscussion left join users on idiscussion.uid=users.id where idiscussion.iask_id=iask_id; DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = 1; SET result = ''; OPEN cur1; REPEAT FETCH cur1 INTO body, created, anonymous, screen_name; SET result = CONCAT(result,'<comment><body><![CDATA[',body,']]></body>','<replier>',if(screen_name is not null and !anonymous,screen_name,''),'</replier>','<created>',created,'</created></comment>'); UNTIL done END REPEAT; CLOSE cur1; RETURN result; END | DELIMITER ; mysql> DELIMITER ; mysql> select get_idiscussion_ask(1); +------------------------+ | get_idiscussion_ask(1) | +------------------------+ | NULL | +------------------------+ 1 row in set (0.01 sec) mysql> SELECT body,created,anonymous,screen_name from idiscussion left join users on idiscussion.uid=users.id where idiscussion.iask_id=1; +------+---------------------+-----------+-------------+ | body | created | anonymous | screen_name | +------+---------------------+-----------+-------------+ | haha | 2009-05-27 04:57:51 | 0 | NULL | | haha | 2009-05-27 04:57:52 | 0 | NULL | | haha | 2009-05-27 04:57:52 | 0 | NULL | | haha | 2009-05-27 04:57:53 | 0 | NULL | +------+---------------------+-----------+-------------+ 4 rows in set (0.00 sec) For those who don't think the code is self-explanatory: Why the function returns NULL?

    Read the article

  • When downloading a file using FileStream, why does the page error message refer to the aspx page name, not

    - by StuperUser
    After building a filepath (path, below) in a string (I am aware of Path in System.IO, but am using someone else's code and do not have the opportunity to refactor it to use Path). I am using a FileStream to deliver the file to the user (see below): FileStream myStream = new FileStream(path, FileMode.Open, FileAccess.Read); long fileSize = myStream.Length; byte[] Buffer = new byte[(int)fileSize + 1]; myStream.Read(Buffer, 0, (int)myStream.Length); myStream.Close(); Response.ContentType = "application/csv"; Response.AddHeader("content-disposition", "attachment; filename=" + filename); Response.BinaryWrite(Buffer); Response.Flush(); Response.End(); I have seen from: http://stackoverflow.com/questions/736301/asp-net-how-to-stream-file-to-user reasons to avoid use of Response.End() and Response.Close(). I have also seen several articles about different ways to transmit files and have diagnosed and found a solution to the problem (https and http headers) with a colleague. However, the error message that was being displayed was not about access to the file at path, but the aspx file. Edit: Error message is: Internet Explorer cannot download MyPage.aspx from server.domain.tld Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later. (page name and address anonymised) Why is this? Is it due to the contents of the file coming from the HTTP response .Flush() method rather than a file being accessed at its address?

    Read the article

  • Automatically update ActiveRecord object

    - by Aleksandr Koss
    I have some models: class Father < ActiveRecord::Base has_many :children end class Child < ActiveRecord::Base  belongs_to :father end Then I do something like this: $ script/console test Loading test environment (Rails 2.3.5) >> @f1 = Father.create :test => "Father" => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41"> >> @f2 = Father.find :first => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41"> >> @f1 == @f2 => true >> @f1.children => [] >> @f2.children => [] >> @f1.children.create :test => "Child1" => #<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15"> >> @f1.children => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">] >> @f2.children => [] >> @f2.reload => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41"> >> @f2.children => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">] As you can see, Rails caches the @f2 object. To get the actual data we have to call reload. Is there a way to automatically reload @f2 after the children are updated, without calling the "reload" method?

    Read the article

  • Is it possible to restrict instantiation of an object to only one other (parent) object in VB.NET?

    - by Casey
    VB 2008 .NET 3.5 Suppose we have two classes, Order and OrderItem, that represent some type of online ordering system. OrderItem represents a single line item in an Order. One Order can contain multiple OrderItems, in the form of a List(of OrderItem). Public Class Order Public Property MyOrderItems() as List(of OrderItem) End Property End Class It makes sense that an OrderItem should not exist without an Order. In other words, an OrderItem class should not be able to be instantiated on its own, it should be dependent on an Order class to contain it and instantiate it. However, the OrderItem should be public in scope so that it's properties are accessible to other objects. So, the requirements for OrderItem are: Can not be instantiated as a stand alone object; requires Order to exist. Must be public so that any other object can access it's properties/methods through the Order object. e.g. Order.OrderItem(0).ProductID. OrderItem should be able to be passed to other subs/functions that will operate on it. How can I achieve these goals? Is there a better approach?

    Read the article

  • Postgres error with Sinatra/Haml/DataMapper on Heroku

    - by sevennineteen
    I'm trying to move a simple Sinatra app over to Heroku. Migration of the Ruby app code and existing MySQL database using Taps went smoothly, but I'm getting the following Postgres error: PostgresError - ERROR: operator does not exist: text = integer LINE 1: ...d_at", "post_id" FROM "comments" WHERE ("post_id" IN (4, 17,... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. It's evident that the problem is related to a type mismatch in the query, but this is being issued from a Haml template by the DataMapper ORM at a very high level of abstraction, so I'm not sure how I'd go about controlling this... Specifically, this seems to be throwing up on a call of p.comments from my Haml template, where p represents a given post. The Datamapper models are related as follows: class Post property :id, Serial ... has n, :comments end class Comment property :id, Serial ... belongs_to :post end This works fine on my local and current hosted environment using MySQL, but Postgres is clearly more strict. There must be hundreds of Datamapper & Haml apps running on Postgres DBs, and this model relationship is super-conventional, so hopefully someone has seen (and determined how to fix) this. Thanks!

    Read the article

  • [Ruby] Object assignment and pointers

    - by Jergason
    I am a little confused about object assignment and pointers in Ruby, and coded up this snippet to test my assumptions. class Foo attr_accessor :one, :two def initialize(one, two) @one = one @two = two end end bar = Foo.new(1, 2) beans = bar puts bar puts beans beans.one = 2 puts bar puts beans puts beans.one puts bar.one I had assumed that when I assigned bar to beans, it would create a copy of the object, and modifying one would not affect the other. Alas, the output shows otherwise. ^_^[jergason:~]$ ruby test.rb #<Foo:0x100155c60> #<Foo:0x100155c60> #<Foo:0x100155c60> #<Foo:0x100155c60> 2 2 I believe that the numbers have something to do with the address of the object, and they are the same for both beans and bar, and when I modify beans, bar gets changed as well, which is not what I had expected. It appears that I am only creating a pointer to the object, not a copy of it. What do I need to do to copy the object on assignment, instead of creating a pointer? Tests with the Array class shows some strange behavior as well. foo = [0, 1, 2, 3, 4, 5] baz = foo puts "foo is #{foo}" puts "baz is #{baz}" foo.pop puts "foo is #{foo}" puts "baz is #{baz}" foo += ["a hill of beans is a wonderful thing"] puts "foo is #{foo}" puts "baz is #{baz}" This produces the following wonky output: foo is 012345 baz is 012345 foo is 01234 baz is 01234 foo is 01234a hill of beans is a wonderful thing baz is 01234 This blows my mind. Calling pop on foo affects baz as well, so it isn't a copy, but concatenating something onto foo only affects foo, and not baz. So when am I dealing with the original object, and when am I dealing with a copy? In my own classes, how can I make sure that assignment copies, and doesn't make pointers? Help this confused guy out.
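
    The same reference semantics exist in Python, where the distinction between mutating an object and rebinding a name may be easier to see; Ruby's explicit copies are dup and clone, and the Python analogue is the copy module:

      import copy

      a = [0, 1, 2, 3]
      b = a                  # b and a are two names for the same list object
      a.pop()                # mutation is visible through both names
      print(a, b)            # [0, 1, 2] [0, 1, 2]

      c = copy.copy(a)       # shallow copy: an independent list
      a.pop()
      print(a, c)            # [0, 1] [0, 1, 2]

      a = a + ["new"]        # builds a brand new list and rebinds only the name `a`
      print(a, b)            # [0, 1, 'new'] [0, 1]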

    Read the article

  • VB dataset issue

    - by Gabriel
    Hi. The idea was to create a message box that stores my user name, message, and post datetime into the database as messages are sent. Soon came to realise, what if the user changed his name? So I decided to use the user id (icn) to identify the message poster instead. However, my chunk of codes keep giving me the same error. Says that there are no rows in the dataset ds2. I've tried my Query on my SQL and it works perfectly so I really really need help to spot the error in my chunk of codes here. Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim name As String Dim icn As String Dim message As String Dim time As String Dim tags As String = "" Dim strConn As System.Configuration.ConnectionStringSettings strConn = ConfigurationManager.ConnectionStrings("ufadb") Dim conn As SqlConnection = New SqlConnection(strConn.ToString()) Dim cmd As New SqlCommand("Select * From Message", conn) Dim daMessages As SqlDataAdapter = New SqlDataAdapter(cmd) Dim ds As New DataSet cmd.Connection.Open() daMessages.Fill(ds, "Messages") cmd.Connection.Close() If ds.Tables("Messages").Rows.Count > 0 Then Dim n As Integer = ds.Tables("Messages").Rows.Count Dim i As Integer For i = 0 To n - 1 icn = ds.Tables("Messages").Rows(i).Item("icn") Dim cmd2 As New SqlCommand("SELECT name FROM Member inner join Message ON Member.icn = Message.icn WHERE message.icn = @icn", conn) cmd2.Parameters.AddWithValue("@icn", icn) Dim daName As SqlDataAdapter = New SqlDataAdapter(cmd2) Dim ds2 As New DataSet cmd2.Connection.Open() daName.Fill(ds2, "PosterName") cmd2.Connection.Close() name = ds2.Tables("PosterName").Rows(0).Item("name") message = ds.Tables("Messages").Rows(i).Item("message") time = ds.Tables("Messages").Rows(i).Item("timePosted") tags = time + vbCrLf + name + ": " + vbCrLf + message + vbCrLf + tags Next txtBoard.Text = tags Else txtBoard.Text = "nothing to display" End If End Sub Help will be very much appreciated as I have been on this simple problem for 2 days.

    Read the article

  • creating matrix with probabilities

    - by John Chan
    Hi all, I want to generate an NxN matrix to test some code that I have, where each row contains floats as the elements and has to add up to 1 (i.e. a row with a set of probabilities). Where it gets tricky is that I want to make sure that randomly some of the elements should be 0 (in fact most of the elements should be 0, except for some random ones to be the probabilities). I need the probabilities to be 1/m, where m is the number of elements that are not 0 within a single row. I tried to think of ways to output this, but essentially I would need this stored in a C++ array. So even if I output to a file I would still have the issue of not having it in an array as I need it. At the end of it all I need that array because I want to generate a Matrix Market file. I found an implementation in C++ to take an array and convert it to the Matrix Market file, so this is what I am basing my findings on. My input for the rest of the code takes in this Matrix Market file, so I need that to be the primary form of output. The language does not matter, I just want to generate the file at the end (I found mmwrite and mmread in Python as well). Please help, I am stuck and not really sure how to implement this.
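
    Since the post says the language does not matter, here is one way the described construction might look in Python with NumPy and SciPy (the 1-to-3 nonzeros per row and the file name are arbitrary choices):

      import numpy as np
      from scipy.io import mmwrite          # Matrix Market writer (the mmwrite mentioned above)
      from scipy.sparse import csr_matrix

      def random_row_stochastic(n, seed=None):
          rng = np.random.default_rng(seed)
          mat = np.zeros((n, n))
          for i in range(n):
              m = int(rng.integers(1, min(3, n) + 1))      # number of nonzero entries in this row
              cols = rng.choice(n, size=m, replace=False)  # pick which columns they land in
              mat[i, cols] = 1.0 / m                       # each is 1/m, so the row sums to 1
          return mat

      A = random_row_stochastic(10, seed=0)
      assert np.allclose(A.sum(axis=1), 1.0)
      mmwrite("matrix.mtx", csr_matrix(A))                 # write the Matrix Market file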

    Read the article

  • Liquid Layout: 100% max-width img not applied - why?

    - by MEM
    I'm totally new to this liquid layout stuff. I've noticed, as most of us have, that while most of my layout components "liquify", images, unfortunately, don't. So I'm trying to use max-width: 100% on images, as suggested in several places. However, despite the definition of max-width and min-height on the img container, the img doesn't scale. Sample code: CSS img { max-width: 100%; } article { float: left; margin: 30px 1%; max-width: 31%; min-height: 350px; } HTML <article> <header> <h2>some header</h2> </header> <img src="/images/thumb1.jpg" alt="thumb"> <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Proin vel ante a orci tempus eleifend.</p> </article> Please have a look at the following link: http://tinyurl.com/d849f8x If you view it at a wide resolution, you will notice that the "kid image", for example, doesn't scale. Any clue what the issue could be and why that image does not scale? Test case: Browsers: Firefox 15.0 / Chrome 21.0 OS: Mac OS X Lion - 10.7.3 Resolution: 1920x1200 What I get: an image that doesn't scale to the end of its container. The img width won't fill the article element that contains it. What I expect: the image to enlarge until it reaches the end of its container. Visually, I'm expecting the image to be as wide as the paragraph immediately below, so that the right side of the image stays vertically aligned with the right side of the paragraph below.

    Read the article

  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to make an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quotations to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop... to run ls -la "/folder with space/" DAEMON=/usr/bin/ls DAEMON_OPTS='-la "/folder with space/"' start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS Double escaping the options and trying innumerable variations of quoting does not help... by the time they end up at the daemon they are always messed up. Enclosing $DAEMON_OPTS in quotes changes things... then they are seen as one single quoted argument... never the right number though :) Echoing the command line (start-stop...) prints exactly the right stuff to the screen. But the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that quotes inside it are brought along to the daemon correctly? Thanks, Martin

    Read the article

  • TransactionScope and Transactions

    - by Mike
    In my C# code I am using TransactionScope because I was told not to rely on my SQL programmers always using transactions, and that we are responsible, and yada yada. Having said that, it looks like the TransactionScope object rolls back before the SqlTransaction? Is that possible, and if so, what is the correct methodology for wrapping a TransactionScope in a transaction? Here is the SQL test: CREATE PROC ThrowError AS BEGIN TRANSACTION --SqlTransaction SELECT 1/0 IF @@ERROR<> 0 BEGIN ROLLBACK TRANSACTION --SqlTransaction RETURN -1 END ELSE BEGIN COMMIT TRANSACTION --SqlTransaction RETURN 0 END go DECLARE @RESULT INT EXEC @RESULT = ThrowError SELECT @RESULT If I run this I get just the divide by 0 and a return of -1. Called from the C# code I get an extra error message: Divide by zero error encountered. Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0. If I give the SQL transaction a name then: Cannot roll back SqlTransaction. No transaction or savepoint of that name was found. Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 2. Sometimes it seems the count keeps going up until the app completely exits. The C# is just using (TransactionScope scope = new TransactionScope()) { ... Execute Sql scope.Commit() }

    Read the article
