Search Results

Search found 12404 results on 497 pages for 'native types'.


  • How can I find out if two arguments are instances of the same, but unknown class?

    - by Ingmar
    Let us say we have a method which accepts two arguments o1 and o2 of type Object and returns a boolean value. I want this method to return true only when the arguments are instances of the same class, e.g. foo(new Integer(4), new Integer(5)); should return true, however foo(new SomeClass(), new SubtypeSomeClass()); should return false, and foo(new Integer(3), "zoo"); should return false as well. I believe one way is to compare the fully qualified class names: public boolean foo(Object o1, Object o2){ Class<? extends Object> c1 = o1.getClass(); Class<? extends Object> c2 = o2.getClass(); if(c1.getName().equals(c2.getName())){ return true; } return false; } An alternative conditional statement would be: if (c1.isAssignableFrom(c2) && c2.isAssignableFrom(c1)){ return true; } The latter alternative is rather slow. Are there other alternatives to this problem?
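    A minimal Java sketch of the simplest alternative, assuming both arguments are non-null: Class objects are canonical within a class loader, so comparing the Class references directly is both correct and faster than comparing names.

    ```java
    // Sketch: getClass() returns the same Class instance for objects of the same
    // runtime class (per class loader), so reference equality is sufficient.
    public static boolean foo(Object o1, Object o2) {
        return o1.getClass() == o2.getClass();
    }
    ```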


  • How to parse a string into a nullable int in C# (.NET 3.5)

    - by Glenn Slaven
    I want to parse a string into a nullable int in C#, i.e. I want to get back either the int value of the string or null if it can't be parsed. I was kind of hoping that this would work: int? val = stringVal as int?; But that won't work, so for now I've written this extension method: public static int? ParseNullableInt(this string value) { if (value == null || value.Trim() == string.Empty) { return null; } else { try { return int.Parse(value); } catch { return null; } } } Is there a better way of doing this? EDIT: Thanks for the TryParse suggestions, I did know about that, but it worked out about the same. I'm more interested in knowing if there is a built-in framework method that will parse directly into a nullable int?
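    The framework has no built-in parse-into-int? method, so some wrapper is unavoidable; a hedged sketch of the same extension method built on int.TryParse, which avoids the exception on the failure path:

    ```csharp
    // Sketch: TryParse reports failure through its return value, so the null case
    // costs no exception. The cast gives the conditional a common type.
    public static int? ParseNullableInt(this string value)
    {
        int result;
        return int.TryParse(value, out result) ? (int?)result : null;
    }
    ```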


  • How to perform Linq select new with datetime in SQL 2008

    - by kd7iwp
    In our C# code I recently changed a line from inside a linq-to-sql select new query as follows: OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.Year.ToString() + "-" + p.OrderDate.Value.Month.ToString() + "-" + p.OrderDate.Value.Day.ToString() : "") To: OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") The change makes the line smaller and cleaner. It also works fine with our SQL 2008 database in our development environment. However, when the code was deployed to our production environment, which uses SQL 2005, I received an exception stating: Nullable Type must have a value. For further analysis I copied (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") into a string (outside of a Linq statement) and had no problems at all, so it only causes an issue inside my Linq. Is this problem just something to do with SQL 2005 using different date formats from SQL 2008? Here's more of the Linq: dt = FilteredOrders.Where(x => x != null).Select(p => new { Order = p.OrderId, link = "/order/" + p.OrderId.ToString(), StudentId = (p.PersonId.HasValue ? p.PersonId.Value : 0), FirstName = p.IdentifierAccount.Person.FirstName, LastName = p.IdentifierAccount.Person.LastName, DeliverBy = p.DeliverBy, OrderDate = p.OrderDate.HasValue ? p.OrderDate.Value.Date.ToString("yyyy-mm-dd") : ""}).ToDataTable(); This is selecting from a List of Order objects. The FilteredOrders list is from another linq-to-sql query and I call .AsEnumerable on it before giving it to this particular select new query. Doing this in regular code works fine: if (o.OrderDate.HasValue) tempString += " " + o.OrderDate.Value.Date.ToString("yyyy-mm-dd");
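    A hedged sketch of one common workaround: since the projection runs over the in-memory list (AsEnumerable has already been called), a plain helper method keeps the null check and the formatting together, so no conditional over the nullable appears in the projection itself. The helper name is made up; note also that "yyyy-MM-dd" with a capital MM is year-month-day, while lowercase mm means minutes.

    ```csharp
    // Hypothetical helper -- the formatting only runs once HasValue is known to be true.
    static string FormatOrderDate(DateTime? orderDate)
    {
        return orderDate.HasValue ? orderDate.Value.ToString("yyyy-MM-dd") : "";
    }

    // ...inside the projection:
    //     OrderDate = FormatOrderDate(p.OrderDate)
    ```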


  • How to get the real type of a value inside string?

    - by CuSS
    I was searching here on StackOverflow for how to convert a string to its real value and I didn't find it. I need a function like "gettype" that does something like the results below, but I can't work it all out :s gettypefromstring("1.234"); //returns (double)1.234; gettypefromstring("1234"); //returns (int)1234; gettypefromstring("a"); //returns (char)a; gettypefromstring("true"); //returns (bool)true; gettypefromstring("khtdf"); //returns (string)"khtdf"; Thanks to all :)
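    The question doesn't name a language, so here is a hedged C# sketch of the usual approach: try the parsers from most to least specific and fall back to the original string. The function name and the parse order are assumptions.

    ```csharp
    using System.Globalization;

    static object GetTypedValue(string s)
    {
        bool b;
        if (bool.TryParse(s, out b)) return b;                 // "true" / "false"
        int i;
        if (int.TryParse(s, NumberStyles.Integer, CultureInfo.InvariantCulture, out i)) return i;
        double d;
        if (double.TryParse(s, NumberStyles.Float, CultureInfo.InvariantCulture, out d)) return d;
        if (s.Length == 1) return s[0];                        // single character -> char
        return s;                                              // everything else stays a string
    }
    ```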


  • How can I override TryParse?

    - by cyclotis04
    I would like to override bool's TryParse method to accept "yes" and "no". I know the method I want to use (below) but I don't know how to override bool's method. ... bool TryParse(string value, out bool result) { if (value == "yes") { result = true; return true; } else if (value == "no") { result = false; return true; } else { return bool.TryParse(value, out result); } }
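    bool.TryParse is a static method, and static methods can't be overridden; a hedged sketch of the usual workaround is a helper with the same shape that falls back to the framework parser (the name TryParseYesNo is made up):

    ```csharp
    public static bool TryParseYesNo(string value, out bool result)
    {
        // Accept "yes"/"no" in any casing, otherwise defer to bool.TryParse.
        if (string.Equals(value, "yes", StringComparison.OrdinalIgnoreCase)) { result = true; return true; }
        if (string.Equals(value, "no", StringComparison.OrdinalIgnoreCase)) { result = false; return true; }
        return bool.TryParse(value, out result);
    }
    ```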


  • Parsing boolean from configuration section in web.config

    - by Bloopy
    I have a custom configuration section in my web.config. One of my classes is grabbing from this: <myConfigSection LabelVisible="" TitleVisible="true"/> I have things working for parsing if I have true or false; however, if the attribute is blank I get errors. When the config section tries to map the class to the configuration section I get an error of "not a valid value for bool" on the 'LabelVisible' part. How can I parse "" as false in my myConfigSection class? I have tried this: [ConfigurationProperty("labelsVisible", DefaultValue = true, IsRequired = false)] public bool? LabelsVisible { get { return (bool?)this["labelsVisible"]; } But when I try to use what is returned like so: graph.Label.Visible = myConfigSection.LabelsVisible; I get an error of: 'Cannot implicitly convert type 'bool?' to 'bool'. An explicit conversion exists (are you missing a cast?)' Thanks for any suggestions!
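    Two hedged sketches, one per error, presented as alternatives rather than a confirmed fix. The compile error disappears if the nullable is collapsed at the call site; the blank-attribute error can be avoided by backing the property with a string, because the built-in bool converter rejects "" before any getter runs. The LabelsVisibleRaw name is made up.

    ```csharp
    // Option A (keep the bool? property): collapse null to false where it is used.
    graph.Label.Visible = myConfigSection.LabelsVisible ?? false;

    // Option B (handle "" inside the section): deserialise the attribute as a string,
    // then expose a plain bool that treats anything except "true" as false.
    [ConfigurationProperty("labelsVisible", DefaultValue = "", IsRequired = false)]
    public string LabelsVisibleRaw
    {
        get { return (string)this["labelsVisible"]; }
    }

    public bool LabelsVisible
    {
        get { bool b; return bool.TryParse(LabelsVisibleRaw, out b) && b; }
    }
    ```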


  • How To Test if a Type is Anonymous?

    - by DaveDev
    Hi Guys I have the following method which serialises an object to an HTML tag. I only want to do this though if the type isn't anonymous. private void MergeTypeDataToTag(object typeData) { if (typeData != null) { Type elementType = typeData.GetType(); if (/* elementType != AnonymousType */) { _tag.Attributes.Add("class", elementType.Name); } // do some more stuff } } Can somebody show me how to achieve this? Thanks Dave
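    There is no official IsAnonymousType API, but a widely used heuristic checks the traits the C# compiler gives anonymous types; a hedged sketch (the name check relies on an implementation detail):

    ```csharp
    using System;
    using System.Runtime.CompilerServices;

    static bool IsAnonymousType(Type type)
    {
        // Anonymous types are compiler-generated, generic, non-public, and their
        // generated names contain "AnonymousType".
        return Attribute.IsDefined(type, typeof(CompilerGeneratedAttribute), false)
            && type.IsGenericType
            && type.Name.Contains("AnonymousType")
            && !type.IsPublic;
    }
    ```

    With that in place, the check in MergeTypeDataToTag becomes if (!IsAnonymousType(elementType)) { ... }.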


  • Why doesn't C# do "simple" type inference on generics?

    - by Ken Birman
    Just curious: sure, we all know that the general case of type inference for generics is undecidable. And so C# won't do any kind of subtyping at all: if Foo<T> is a generic, Foo<int> isn't a subtype of Foo<T>, or Foo<Object> or of anything else you might cook up. And sure, we all hack around this with ugly interface or abstract class definitions. But... if you can't beat the general problem, why not just limit the solution to cases that are easy. For example, in my list above, it is OBVIOUS that Foo<int> is a subtype of Foo<T> and it would be trivial to check. Same for checking against Foo<Object>. So is there some other deep horror that would creep forth from the abyss if they were to just say, aw shucks, we'll do what we can? Or is this just some sort of religious purity on the part of the language guys at Microsoft?
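    For what it's worth, the "obvious" cases here are about variance rather than inference, and C# 4 added declaration-site variance for interfaces and delegates; a hedged sketch of what that does and doesn't cover:

    ```csharp
    using System.Collections.Generic;

    class VarianceSketch
    {
        static void Main()
        {
            // IEnumerable<out T> is covariant, so the reference-type case works:
            IEnumerable<string> strings = new List<string> { "a" };
            IEnumerable<object> objects = strings;

            // Value types are still excluded (IEnumerable<int> does not convert to
            // IEnumerable<object>), and invariant classes like List<T> never convert.
            System.Console.WriteLine(ReferenceEquals(objects, strings));   // True
        }
    }
    ```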


  • Please help with passing multidimensional arrays

    - by nn
    Hi, I'm writing a simple test program to pass multidimensional arrays. I've been struggling to get the signature of the callee function. void p(int (*s)[100], int n) { ... } In the code I have: int s1[10][100], s2[10][1000]; p(s1, 100); This code appears to work, but it's not what I intended. I want the function p to be oblivious to whether the range of values is 100 or 1000, but it should know there are 10 pointers. I tried as a first attempt: void p(int (*s)[10], int n) // n = # elements in the range of the array and also: void p(int **s, int n) // n = # of elements in the range of the array But to no avail; I can't seem to get this right. I don't want to hardcode the 100 or 1000, but instead pass it in, but there will always be 10 arrays. Obviously, I want to avoid having to declare the function: void p(int *s1, int *s2, int *s3, ..., int *s10, int n) FYI, I'm looking at the answers to a similar question but still confused.
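    A hedged C99 sketch: pass the inner dimension first and declare the parameter as a variable-length array type, so the same function accepts int[10][100] and int[10][1000] without hardcoding either.

    ```c
    #include <stdio.h>

    /* n is the inner dimension; the parameter decays to int (*s)[n]. */
    void p(int n, int s[10][n])
    {
        printf("%d %d\n", s[0][0], s[9][n - 1]);
    }

    int main(void)
    {
        int s1[10][100] = {{7}}, s2[10][1000] = {{9}};
        p(100, s1);   /* same function for the 100-wide array...  */
        p(1000, s2);  /* ...and for the 1000-wide one             */
        return 0;
    }
    ```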


  • Haskell Type error

    - by Jon
    I am getting a Couldn't match expected type error on this code and am not sure why. Would appreciate it if someone could point me in the right direction as to fixing it. import qualified Data.ByteString.Lazy as S import Data.Binary.Get import Data.Word getBinary :: Get Word16 getBinary = do a <- getWord16be "Test.class" return (a) main :: IO () main = do contents <- S.getContents print getBinary contents Specifically, it cannot match expected type 'S.ByteString -> IO ()' to inferred type 'IO ()'.
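    A hedged sketch of what the error points at: print getBinary contents applies print to two arguments, but a Get decoder has to be run with runGet before there is a Word16 to print. Assuming the intent is to read "Test.class" as a lazy ByteString and decode a big-endian Word16:

    ```haskell
    import qualified Data.ByteString.Lazy as S
    import Data.Binary.Get (Get, getWord16be, runGet)
    import Data.Word (Word16)

    getBinary :: Get Word16
    getBinary = getWord16be

    main :: IO ()
    main = do
      contents <- S.readFile "Test.class"
      -- runGet :: Get a -> ByteString -> a runs the decoder over the input.
      print (runGet getBinary contents)
    ```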


  • Use string value to create new instance

    - by Brian David Berman
    I have a few classes: SomeClass1, SomeClass2. How can I create a new instance of one of these classes by using the class name from a string? Normally, I would do: var someClass1 = new SomeClass1(); How can I create this instance from the following: var className = "SomeClass1"; I am assuming I should use Type.GetType() or something but I can't figure it out. Thanks.
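    A hedged reflection sketch: resolve the Type from its name and let Activator build it. Type.GetType expects a namespace-qualified name (plus the assembly if the class lives in another assembly), so the string below is a placeholder.

    ```csharp
    using System;

    static object CreateByName(string className)
    {
        // Type.GetType(name, true) throws instead of returning null if unresolved.
        Type type = Type.GetType(className, true);
        return Activator.CreateInstance(type);
    }

    // Usage, assuming SomeClass1 is in the current assembly:
    //     var someClass1 = (SomeClass1)CreateByName("MyNamespace.SomeClass1");
    ```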


  • Is it good practice to use std::size_t all over the place?

    - by dehmann
    I have a lot of constants in my code that are unsigned numbers, e.g. counters, frequency cutoffs, lengths, etc. I started using std::size_t for all of these, instead of int or unsigned int. Is that the right thing to do? I started it because the STL containers use it for their sizes, it's used for string position, etc.


  • testing .mobile mime format with capybara / rspec

    - by Chris Beck
    For detecting and responding to mobile user agents I'm using Mime::Type.register_alias "text/html", :mobile, and I'm wondering what the best approach is to test this with Capybara. This article suggests setting up an iPhone driver with Capybara.register_driver :iphone do |app|: http://blog.plataformatec.com.br/2011/03/configuring-user-agents-with-capybara-selenium-webdriver/ but I'd like a more flexible approach where the mime type is set via the URL extension (localhost/index.mobile) and where I can do this: visit user_path(format: :mobile) Rails understands the extension and sets the format in the params hash, but how do I get the url helper methods to add that to all URLs as a file extension?
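    A hedged sketch of one way to do it without a custom driver: the url helpers already accept a :format option for one-off calls, and an overridden default_url_options adds it to every generated URL. The always-on override below is an assumption; in practice it would be gated per request or per test.

    ```ruby
    # One-off in a Capybara spec -- the helper appends the extension itself:
    #     visit user_path(user, format: :mobile)   # => GET /users/:id.mobile

    # Site-wide: default_url_options is merged into every URL the helpers build.
    class ApplicationController < ActionController::Base
      def default_url_options(options = {})
        { format: :mobile }
      end
    end
    ```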


  • I'm using the correct content type & headers, so why is Firefox saving zip files without extensions?

    - by The_AlienCoder
    Users on my site have the option to download all the photos in an album as a zip file. The zip file is dynamically created and saved to Response.OutputStream to be detected as a file download in the user's browser. Here is the header and content type I am outputting: context.Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip"); context.Response.ContentType = "application/x-zip-compressed"; Well, everything works fine with every browser except Firefox. Although Firefox correctly detects the download as a zip file, it saves the file without the .zip extension. I thought adding the header context.Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip"); is supposed to force Firefox to save the extension. I believe I am following the correct protocol, so why is Firefox behaving this way and how do I fix this?
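    A hedged sketch of a header combination that tends to behave consistently across browsers: quote the filename and use the plain zip media type (whether the unquoted filename is the actual trigger in Firefox's case can't be confirmed from the question alone). zipBytes stands in for the generated archive.

    ```csharp
    // zipBytes: hypothetical byte[] holding the zip produced for the album.
    context.Response.Clear();
    context.Response.ContentType = "application/zip";
    context.Response.AddHeader("Content-Disposition", "attachment; filename=\"Photos.zip\"");
    context.Response.AddHeader("Content-Length", zipBytes.Length.ToString());
    context.Response.BinaryWrite(zipBytes);
    context.Response.End();
    ```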


  • ActiveRecord/sqlite3 column type lost in table view?

    - by duncan
    I have the following ActiveRecord testcase that mimics my problem. I have a People table with one attribute being a date. I create a view over that table adding one column which is just that date plus 20 minutes: #!/usr/bin/env ruby %w|pp rubygems active_record irb active_support date|.each {|lib| require lib} ActiveRecord::Base.establish_connection( :adapter => "sqlite3", :database => "test.db" ) ActiveRecord::Schema.define do create_table :people, :force => true do |t| t.column :name, :string t.column :born_at, :datetime end execute "create view clowns as select p.name, p.born_at, datetime(p.born_at, '+' || '20' || ' minutes') as twenty_after_born_at from people p;" end class Person < ActiveRecord::Base validates_presence_of :name end class Clown < ActiveRecord::Base end Person.create(:name => "John", :born_at => DateTime.now) pp Person.all.first.born_at.class pp Clown.all.first.born_at.class pp Clown.all.first.twenty_after_born_at.class The problem is that the output is Time, Time, String, when I expect the new datetime attribute of the view to also be a Time or DateTime on the Ruby side. Any ideas? I also tried: create view clowns as select p.name, p.born_at, CAST(datetime(p.born_at, '+' || '20' || ' minutes') as datetime) as twenty_after_born_at from people p; With the same result.
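    SQLite views don't carry a declared type for computed columns, so the adapter has nothing to map the extra column to; a hedged sketch of a model-side coercion (one workaround among several):

    ```ruby
    class Clown < ActiveRecord::Base
      # The view column arrives as a String; parse it on access.
      def twenty_after_born_at
        raw = read_attribute(:twenty_after_born_at)
        raw.is_a?(String) ? DateTime.parse(raw) : raw
      end
    end
    ```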


  • Implicit parameter in Scalaz

    - by Thomas Jung
    I'm trying to find out why the call to Ø in scalaz.ListW.<^> works: def <^>[B: Zero](f: NonEmptyList[A] => B): B = value match { case Nil => Ø case h :: t => f(Scalaz.nel(h, t)) } My minimal theory is: trait X[T]{ def y : T } object X{ implicit object IntX extends X[Int]{ def y = 42 } implicit object StringX extends X[String]{ def y = "y" } } trait Xs{ def ys[T](implicit x : X[T]) = x.y } class A extends Xs{ def z[B](implicit x : X[B]) : B = ys //the call Ø } Which produces: import X._ scala> new A().z[Int] res0: Int = 42 scala> new A().z[String] res1: String = y Is this valid? Can I achieve the same result with fewer steps?
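    It is valid: the compiler feeds A's implicit parameter x straight into the implicit parameter of ys. A hedged sketch of a slightly shorter spelling using a context bound plus a summoner (the apply method is an addition to the original companion):

    ```scala
    trait X[T] { def y: T }

    object X {
      implicit object IntX extends X[Int] { def y = 42 }
      implicit object StringX extends X[String] { def y = "y" }
      def apply[T](implicit x: X[T]): X[T] = x   // "summoner" for the instance
    }

    class A {
      // The context bound [B: X] replaces the explicit implicit parameter list.
      def z[B: X]: B = X[B].y
    }

    // new A().z[Int]    => 42
    // new A().z[String] => "y"
    ```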


  • how do I initialize a float to its max/min value?

    - by Faken
    How do I hard code an absolute maximum or minimum value for a float or double? I want to search out the max/min of an array by simply iterating through and catching the largest. There are also positive and negative infinity for floats, should I use those instead? If so, how do I denote that in my code?
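    A hedged C++ sketch: std::numeric_limits has the extremes. Note that min() for floating-point types is the smallest positive value, so a running maximum should start from -max() (or lowest() in C++11); infinity() also works as a seed.

    ```cpp
    #include <iostream>
    #include <limits>

    int main() {
        float running_max = -std::numeric_limits<float>::max();      // most negative finite
        float running_min =  std::numeric_limits<float>::max();      // most positive finite
        float pos_inf     =  std::numeric_limits<float>::infinity(); // alternative seed

        std::cout << running_max << ' ' << running_min << ' ' << pos_inf << '\n';
        return 0;
    }
    ```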


  • Undetermined type conversion in VB.NET 2008

    - by user337501
    I figured this would be a quick google, but extensive searching hasn't yielded any results. Everything about type conversion seems to dance around this concept. I want to get the type of variable "a", and make a new variable named "b" of that type. Otherwise I could have "a" as a type already declared and "b" simply as an Object, then try to cast "b" to the type of "a". Dim a As Integer Dim b As Whatever a Is OR TryCast(b, Whatever a Is) I would also like to make the conversion using a variable representation of the type, but can't find info on how to do that either. Sorta like: Dim a As Integer Dim b As Object Dim t As Type t = a.GetType() TryCast(b, t) Realizing I'm completely misusing TryCast here, I'm mostly trying to get my goal across. I figured it would be an easy quick thing to do but I can't really find any specific info on it. Any ideas?
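    A hedged VB.NET sketch: Convert.ChangeType performs the conversion at run time from a Type value, though the compile-time type of the result is still Object; TryCast only works with a type name written out at compile time.

    ```vbnet
    ' Convert b to whatever runtime type a has (works for the common convertible types).
    Dim a As Integer = 5
    Dim b As Object = "42"
    Dim t As Type = a.GetType()
    Dim converted As Object = Convert.ChangeType(b, t)   ' boxes the Integer 42
    ```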


  • Perfect hash in Scala.

    - by Lukasz Lew
    I have some class C: class C (...) { ... } I want to use it to index an efficient map. The most efficient map is an Array. So I add a "global" "static" counter in the companion object to give each object a unique id: object C { var id_counter = 0 } In the primary constructor of C, I want to remember the current counter value and increment it each time a C is created. Question 1: How do I do that? Now I can use the id in C objects as a perfect hash to index an array. But an array does not preserve the type information a map would, namely that a given array is indexed by C's ids. Question 2: Is it possible to have that with type safety?
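    A hedged sketch of both parts: the id is assigned in the primary constructor by asking the companion for the next value (not thread-safe as written), and a thin wrapper over the Array records the "indexed by C" fact in the types. The ClassTag bound is only there so the wrapper can allocate the array; the class names are made up.

    ```scala
    import scala.reflect.ClassTag

    class C {
      val id: Int = C.nextId()   // runs as part of the primary constructor
    }

    object C {
      private var idCounter = 0
      private def nextId(): Int = { val i = idCounter; idCounter += 1; i }
    }

    // update/apply let callers write m(c) = v and m(c); only a C can be the key.
    class CIndexedMap[V: ClassTag](capacity: Int) {
      private val backing = new Array[V](capacity)
      def update(key: C, value: V): Unit = { backing(key.id) = value }
      def apply(key: C): V = backing(key.id)
    }
    ```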

