Search Results

Search found 11073 results on 443 pages for 'higher kinded types'.

  • Convert text file to dictionary or anonymous type object

    - by Robert Harvey
    I have a text file that looks like this:

        adapter 1: LPe11002
        Factory IEEE: 10000000 C97A83FC
        Non-Volatile WWPN: 10000000 C93D6A8A , WWNN: 20000000 C93D6A8A
        adapter 2: LPe11002
        Factory IEEE: 10000000 C97A83FD
        Non-Volatile WWPN: 10000000 C93D6A8B , WWNN: 20000000 C93D6A8B

    Is there a way to get this information into an anonymous type or dictionary object? The final anonymous type might look something like this, if it were composed in C# by hand:

        new {
            adapter1 = new { FactoryIEEE = "10000000 C97A83FC", Non-VolatileWWPN = "10000000 C93D6A8A", WWNN = "20000000 C93D6A8A" }
            adapter2 = new { FactoryIEEE = "10000000 C97A83FD", Non-VolatileWWPN = "10000000 C93D6A8B", WWNN = "20000000 C93D6A8B" }
        }
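
    A rough sketch of one way to get this into a dictionary rather than an anonymous type (anonymous type property names are fixed at compile time, so a nested dictionary is the more natural runtime container). The regex and key names here are my own assumptions based on the sample above:

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class AdapterParser
        {
            static readonly Regex Pattern = new Regex(
                @"adapter\s+(?<num>\d+):\s*\S+\s+" +
                @"Factory IEEE:\s*(?<ieee>\S+ \S+)\s+" +
                @"Non-Volatile WWPN:\s*(?<wwpn>\S+ \S+)\s*,\s*" +
                @"WWNN:\s*(?<wwnn>\S+ \S+)");

            // Parses each "adapter N:" block into a dictionary keyed "adapter1", "adapter2", ...
            public static Dictionary<string, Dictionary<string, string>> Parse(string text)
            {
                var adapters = new Dictionary<string, Dictionary<string, string>>();
                foreach (Match m in Pattern.Matches(text))
                {
                    adapters["adapter" + m.Groups["num"].Value] = new Dictionary<string, string>
                    {
                        { "FactoryIEEE",     m.Groups["ieee"].Value },
                        { "NonVolatileWWPN", m.Groups["wwpn"].Value },
                        { "WWNN",            m.Groups["wwnn"].Value }
                    };
                }
                return adapters;
            }
        }

    With the sample above, AdapterParser.Parse(File.ReadAllText(path))["adapter1"]["WWNN"] would give "20000000 C93D6A8A".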

    Read the article

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR-specific header file for an LPC2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not at a memory address). The "IAR-safe" macro is:

        #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \
            volatile __no_init ATTRIBUTE union \
            { \
                unsigned long NAME; \
                BIT_STRUCT NAME ## _bit; \
            } @ ADDRESS

        // declaration
        // (where __gpio0_bits is a structure that names
        // each of the 32 bits as P0_0, P0_1, etc.)
        __IO_REG32_BIT(IO0PIN,0xE0028000,__READ_WRITE,__gpio0_bits);

        // usage
        IO0PIN = 0xAA55AA55;
        IO0PIN_bit.P0_5 = 0;

    This is my comparable "hardware independent" code:

        #define __IO_REG32_BIT(NAME, BIT_STRUCT) \
            volatile union \
            { \
                unsigned long NAME; \
                BIT_STRUCT NAME##_bit; \
            } NAME;

        // declaration
        __IO_REG32_BIT(IO0PIN,__gpio0_bits);

        // usage
        IO0PIN.IO0PIN = 0xAA55AA55;
        IO0PIN.IO0PIN_bit.P0_5 = 1;

    This compiles and works, but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous union matter, but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and VS2008 does not. Thank you.

    Read the article

  • Replacement for deprecated SQL Server User Defined Type with a bound Rule and Default

    - by Adam Jones
    We have a User Defined Data Type of YesNo, which is an alias for char(1). The type has a bound Rule (must be Y or N) and a Default (N). The aim of this is that when any of the development team creates a new field of type YesNo, the rule and default are automatically bound to the new column. Rules and Defaults have been deprecated and won't be available in a future version of SQL Server; is there another way to achieve the same functionality? I should add that I'm aware I could use CHECK and DEFAULT constraints to replicate the functionality of the bound Rule and Default objects, however these would have to be applied at each usage of the type, rather than getting the functionality 'for free' by using a UDT which has a bound Rule and Default. The post relates to a database that backs an existing application, rather than a new development, so I'm aware that our use of UDTs is less than optimal. I suspect the answer to the question is 'No', however normally when features are deprecated there's usually an alternative syntax that can be used as a drop-in replacement, so I wanted to pose the question in case someone knew of an alternative.

    Read the article

  • How can I find out if two arguments are instances of the same, but unknown class?

    - by Ingmar
    Let us say we have a method which accepts two arguments o1 and o2 of type Object and returns a boolean value. I want this method to return true only when the arguments are instances of the same class, e.g.:

        foo(new Integer(4), new Integer(5));

    should return true, however:

        foo(new SomeClass(), new SubtypeSomeClass());

    should return false, and also:

        foo(new Integer(3), "zoo");

    should return false. I believe one way is to compare the fully qualified class names:

        public boolean foo(Object o1, Object o2) {
            Class<? extends Object> c1 = o1.getClass();
            Class<? extends Object> c2 = o2.getClass();
            if (c1.getName().equals(c2.getName())) {
                return true;
            }
            return false;
        }

    An alternative conditional statement would be:

        if (c1.isAssignableFrom(c2) && c2.isAssignableFrom(c1)) {
            return true;
        }

    The latter alternative is rather slow. Are there other alternatives to this problem?

    Read the article

  • How to parse a string into a nullable int in C# (.NET 3.5)

    - by Glenn Slaven
    I want to parse a string into a nullable int in C#; i.e., I want to get back either the int value of the string, or null if it can't be parsed. I was kind of hoping that this would work:

        int? val = stringVal as int?;

    But that won't work, so the way I'm doing it now is I've written this extension method:

        public static int? ParseNullableInt(this string value)
        {
            if (value == null || value.Trim() == string.Empty)
            {
                return null;
            }
            else
            {
                try
                {
                    return int.Parse(value);
                }
                catch
                {
                    return null;
                }
            }
        }

    Is there a better way of doing this? EDIT: Thanks for the TryParse suggestions, I did know about that, but it worked out about the same. I'm more interested in knowing if there is a built-in framework method that will parse directly into a nullable int?
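
    As far as I know there is no single framework call that parses straight into an int?, so some form of helper is still needed; a minimal TryParse-based sketch of the same extension method, which avoids the cost of exceptions on bad input:

        public static class StringExtensions
        {
            // Returns null instead of throwing when the string is not a valid int.
            public static int? ParseNullableInt(this string value)
            {
                int parsed;
                return int.TryParse(value, out parsed) ? (int?)parsed : null;
            }
        }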

    Read the article

  • How to perform Linq select new with datetime in SQL 2008

    - by kd7iwp
    In our C# code I recently changed a line inside a linq-to-sql select new query from:

        OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.Year.ToString() + "-" + p.OrderDate.Value.Month.ToString() + "-" + p.OrderDate.Value.Day.ToString() : "")

    to:

        OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "")

    The change makes the line smaller and cleaner. It also works fine with our SQL 2008 database in our development environment. However, when the code was deployed to our production environment, which uses SQL 2005, I received an exception stating: Nullable Type must have a value. For further analysis I copied (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") into a string (outside of a Linq statement) and had no problems at all, so it only causes an issue inside my Linq. Is this problem just something to do with SQL 2005 using different date formats than SQL 2008? Here's more of the Linq:

        dt = FilteredOrders.Where(x => x != null).Select(p => new {
            Order = p.OrderId,
            link = "/order/" + p.OrderId.ToString(),
            StudentId = (p.PersonId.HasValue ? p.PersonId.Value : 0),
            FirstName = p.IdentifierAccount.Person.FirstName,
            LastName = p.IdentifierAccount.Person.LastName,
            DeliverBy = p.DeliverBy,
            OrderDate = p.OrderDate.HasValue ? p.OrderDate.Value.Date.ToString("yyyy-mm-dd") : ""
        }).ToDataTable();

    This is selecting from a List of Order objects. The FilteredOrders list is from another linq-to-sql query and I call .AsEnumerable on it before giving it to this particular select new query. Doing this in regular code works fine:

        if (o.OrderDate.HasValue)
            tempString += " " + o.OrderDate.Value.Date.ToString("yyyy-mm-dd");
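
    One detail worth checking, separate from the SQL 2005/2008 difference: in .NET custom date format strings, "mm" means minutes and "MM" means months, so the intended format is probably "yyyy-MM-dd". A null-safe helper along these lines (a sketch, with an invented name) also keeps the projection tidy:

        using System;

        static class OrderFormatting
        {
            // Hypothetical helper: format a nullable date, or return "" when there is no value.
            public static string FormatOrderDate(DateTime? orderDate)
            {
                return orderDate.HasValue
                    ? orderDate.Value.ToString("yyyy-MM-dd")   // MM = month, mm = minutes
                    : string.Empty;
            }
        }

        // Inside the projection (sketch): OrderDate = OrderFormatting.FormatOrderDate(p.OrderDate)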

    Read the article

  • How to get the real type of a value inside string?

    - by CuSS
    I was searching here on StackOverflow for how to convert a string to its real value and didn't find it. I need a function like "gettype" that does something like the following, but I can't work it out :s

        gettypefromstring("1.234");  // returns (double)1.234
        gettypefromstring("1234");   // returns (int)1234
        gettypefromstring("a");      // returns (char)a
        gettypefromstring("true");   // returns (bool)true
        gettypefromstring("khtdf");  // returns (string)"khtdf"

    Thanks to all :)
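
    A rough C# sketch of one way to do this, returning the first representation that parses (the method name and the order of the checks are my own choices):

        static class ValueGuesser
        {
            // Try the more specific parses first; fall back to the original string.
            public static object GetTypedValue(string s)
            {
                bool b;
                if (bool.TryParse(s, out b)) return b;       // "true"  -> (bool)true

                int i;
                if (int.TryParse(s, out i)) return i;        // "1234"  -> (int)1234

                double d;
                if (double.TryParse(s, out d)) return d;     // "1.234" -> (double)1.234

                if (!string.IsNullOrEmpty(s) && s.Length == 1)
                    return s[0];                             // "a"     -> (char)'a'

                return s;                                    // "khtdf" -> (string)"khtdf"
            }
        }

    ValueGuesser.GetTypedValue("1.234").GetType() then reports System.Double. Note that whether "1.234" parses as a double depends on the current culture, so passing CultureInfo.InvariantCulture explicitly may be worthwhile.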

    Read the article

  • How can I override TryParse?

    - by cyclotis04
    I would like to override bool's TryParse method to accept "yes" and "no." I know the method I want to use (below) but I don't know how to override bool's method.

        ... bool TryParse(string value, out bool result)
        {
            if (value == "yes")
            {
                result = true;
                return true;
            }
            else if (value == "no")
            {
                result = false;
                return true;
            }
            else
            {
                return bool.TryParse(value, out result);
            }
        }
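
    bool.TryParse is a static method, so it can't be overridden; the usual route is a helper of your own, for example as an extension method on string (a sketch, the name is mine):

        using System;

        public static class BoolParsing
        {
            // Understands "yes"/"no" and falls back to the built-in parser for everything else.
            public static bool TryParseYesNo(this string value, out bool result)
            {
                if (string.Equals(value, "yes", StringComparison.OrdinalIgnoreCase))
                {
                    result = true;
                    return true;
                }
                if (string.Equals(value, "no", StringComparison.OrdinalIgnoreCase))
                {
                    result = false;
                    return true;
                }
                return bool.TryParse(value, out result);
            }
        }

    Callers would then write "yes".TryParseYesNo(out flag) or BoolParsing.TryParseYesNo(input, out flag).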

    Read the article

  • Parsing boolean from configuration section in web.config

    - by Bloopy
    I have a custom configuration section in my web.config. One of my classes is grabbing from this:

        <myConfigSection LabelVisible="" TitleVisible="true"/>

    Parsing works if the value is true or false; however, if the attribute is blank I am getting errors. When the config section tries to map the class to the configuration section I get an error of "not a valid value for bool" on the 'LabelVisible' part. How can I parse "" as false in my myConfigSection class? I have tried this:

        [ConfigurationProperty("labelsVisible", DefaultValue = true, IsRequired = false)]
        public bool? LabelsVisible
        {
            get { return (bool?)this["labelsVisible"]; }
        }

    But when I try to use what is returned, like so:

        graph.Label.Visible = myConfigSection.LabelsVisible;

    I get an error of: Cannot implicitly convert type 'bool?' to 'bool'. An explicit conversion exists (are you missing a cast?) Thanks for any suggestions!
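
    The second error is just the bool?-to-bool conversion; a minimal sketch, assuming false is the right fallback when the attribute is blank or missing:

        // Coalesce the nullable value into a plain bool...
        graph.Label.Visible = myConfigSection.LabelsVisible ?? false;

        // ...or equivalently:
        graph.Label.Visible = myConfigSection.LabelsVisible.GetValueOrDefault();

    For the blank attribute itself, one approach (untested against this section, so treat it as an assumption) is to declare the ConfigurationProperty as a string and convert it yourself, treating "" as false.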

    Read the article

  • Why doesn't C# do "simple" type inference on generics?

    - by Ken Birman
    Just curious: sure, we all know that the general case of type inference for generics is undecidable. And so C# won't do any kind of subtyping at all: if Foo<T> is a generic, Foo<int> isn't a subtype of Foo<T>, or Foo<Object> or of anything else you might cook up. And sure, we all hack around this with ugly interface or abstract class definitions. But... if you can't beat the general problem, why not just limit the solution to cases that are easy. For example, in my list above, it is OBVIOUS that Foo<int> is a subtype of Foo<T> and it would be trivial to check. Same for checking against Foo<Object>. So is there some other deep horror that would creep forth from the abyss if they were to just say, aw shucks, we'll do what we can? Or is this just some sort of religious purity on the part of the language guys at Microsoft?
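
    For context, C# 4 did add a limited, provably safe version of this for interfaces and delegates via declaration-site variance; classes stay invariant because allowing, say, a List<string> to be used as a List<object> would let you Add an int to a list of strings. A small sketch:

        using System;
        using System.Collections.Generic;

        class VarianceDemo
        {
            static void Main()
            {
                // Does NOT compile: List<T> is invariant, because objects.Add(42)
                // would then push an int into a List<string>.
                // List<object> objects = new List<string>();

                // Compiles in C# 4+: IEnumerable<out T> is covariant (read-only),
                // so a sequence of strings can safely be treated as a sequence of objects.
                IEnumerable<object> objects = new List<string> { "foo", "bar" };
                foreach (object o in objects)
                    Console.WriteLine(o);
            }
        }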

    Read the article

  • How To Test if a Type is Anonymous?

    - by DaveDev
    Hi guys, I have the following method which serialises an object to an HTML tag. I only want to do this, though, if the type isn't anonymous.

        private void MergeTypeDataToTag(object typeData)
        {
            if (typeData != null)
            {
                Type elementType = typeData.GetType();

                if (/* elementType != AnonymousType */)
                {
                    _tag.Attributes.Add("class", elementType.Name);
                }

                // do some more stuff
            }
        }

    Can somebody show me how to achieve this? Thanks, Dave
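
    There is no built-in Type.IsAnonymous flag, so the usual approach is a heuristic based on how the C# compiler emits anonymous types; a sketch (the name check is compiler-specific, so treat it as an approximation):

        using System;
        using System.Runtime.CompilerServices;

        static class TypeExtensions
        {
            // Heuristic: anonymous types are compiler-generated, generic,
            // non-public, and have "AnonymousType" in their name.
            public static bool IsAnonymousType(this Type type)
            {
                return Attribute.IsDefined(type, typeof(CompilerGeneratedAttribute), false)
                    && type.IsGenericType
                    && type.Name.Contains("AnonymousType")
                    && !type.IsPublic;
            }
        }

    In MergeTypeDataToTag the check then becomes if (!elementType.IsAnonymousType()) { ... }.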

    Read the article

  • Please help with passing multidimensional arrays

    - by nn
    Hi, I'm writing a simple test program to pass multidimensional arrays. I've been struggling to get the signature of the callee function right:

        void p(int (*s)[100], int n) { ... }

    In the code I have:

        int s1[10][100], s2[10][1000];
        p(s1, 100);

    This code appears to work, but it's not what I intended. I want the function p to be oblivious to whether the range of values is 100 or 1000, but it should know there are 10 pointers. I tried as a first attempt:

        void p(int (*s)[10], int n)  // n = # of elements in the range of the array

    and also:

        void p(int **s, int n)  // n = # of elements in the range of the array

    But to no avail can I seem to get this correct. I don't want to hardcode the 100 or 1000, but instead pass it in, but there will always be 10 arrays. Obviously, I want to avoid having to declare the function as:

        void p(int *s1, int *s2, int *s3, ..., int *s10, int n)

    FYI, I'm looking at the answers to a similar question but still confused.

    Read the article

  • Haskell Type error

    - by Jon
    I am getting a "Couldn't match expected type" error on this code and am not sure why. Would appreciate it if someone could point me in the right direction as to fixing it.

        import qualified Data.ByteString.Lazy as S
        import Data.Binary.Get
        import Data.Word

        getBinary :: Get Word16
        getBinary = do
            a <- getWord16be "Test.class"
            return (a)

        main :: IO ()
        main = do
            contents <- S.getContents
            print getBinary contents

    Specifically, it cannot match expected type 'S.ByteString -> IO ()' to inferred type 'IO ()'.

    Read the article

  • Use string value to create new instance

    - by Brian David Berman
    I have a few classes: SomeClass1, SomeClass2. How can I create a new instance of one of these classes by using the class name from a string? Normally, I would do:

        var someClass1 = new SomeClass1();

    How can I create this instance from the following?

        var className = "SomeClass1";

    I am assuming I should use Type.GetType() or something, but I can't figure it out. Thanks.
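
    A minimal sketch of the reflection route (assuming SomeClass1 has a public parameterless constructor; the "MyApp" namespace is invented, and Type.GetType needs a namespace-qualified name, or an assembly-qualified one for types in other assemblies):

        using System;

        class Program
        {
            static void Main()
            {
                var className = "MyApp.SomeClass1";              // hypothetical full name
                Type type = Type.GetType(className);             // null if the name can't be resolved
                object instance = Activator.CreateInstance(type);
                Console.WriteLine(instance.GetType().Name);      // "SomeClass1"
            }
        }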

    Read the article

  • How to create mapping for a List<SomeNativeType> in FluentNhibernate ?

    - by Mahesh Velaga
    Hi all, I am trying to create a mapping file for the following model using Fluent NHibernate, but I am not sure how to do the mapping for the List in the mapping file.

        public class MyClass
        {
            public virtual Guid Id { get; set; }
            public virtual string Name { get; set; }
            public virtual List<string> MagicStrings { get; set; }
        }

        public class EnvironmentMapping : ClassMap<Models.Environment>
        {
            public EnvironmentMapping()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                //HasMany(x => string)   What should this be?
            }
        }

    Help in this regard is much appreciated. Thanks!
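
    If I remember the Fluent NHibernate API correctly, a collection of a plain type like string is mapped as an element collection with HasMany(...).Element(...); the table and column names below are assumptions. Note too that NHibernate generally wants the property exposed as IList<string> rather than List<string> so it can substitute its own collection type:

        // Inside EnvironmentMapping's constructor (sketch):
        HasMany(x => x.MagicStrings)
            .Table("MagicStrings")         // assumed collection table
            .KeyColumn("EnvironmentId")    // assumed foreign-key column back to the owner
            .Element("Value");             // assumed column holding each string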

    Read the article

  • Is it good practice to use std::size_t all over the place?

    - by dehmann
    I have a lot of constants in my code that are unsigned numbers, e.g. counters, frequency cutoffs, lengths, etc. I started using std::size_t for all of these, instead of int or unsigned int. Is that the right thing to do? I started it because the STL containers use it for their sizes, it's used for string position, etc.

    Read the article

  • testing .mobile mime format with capybara / rspec

    - by Chris Beck
    For detecting and responding to mobile user agents, I'm using:

        Mime::Type.register_alias "text/html", :mobile

    and I'm wondering what the best approach is to test this with Capybara. This article suggests setting up an iPhone driver with:

        Capybara.register_driver :iphone do |app|

    http://blog.plataformatec.com.br/2011/03/configuring-user-agents-with-capybara-selenium-webdriver/

    but I'd like a more flexible approach where the mime type is set via the url extension:

        localhost/index.mobile

    and where I can do this:

        visit user_path(format: :mobile)

    Rails understands the extension and sets the format in the params hash, but how do I get the url helper methods to add that to all urls as a file extension?

    Read the article

  • I'm using the correct content type & Headers so Why is FireFox saving Zip Files without extensions

    - by The_AlienCoder
    Users on my site have the option to download all the photos in an album as a zip file. The zip file is dynamically created and written to Response.OutputStream so it is detected as a file download by the user's browser. Here are the header and content type I am outputting:

        context.Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip");
        context.Response.ContentType = "application/x-zip-compressed";

    Well, everything works fine with every browser except Firefox. Although Firefox correctly detects the download as a zip file, it saves the file without the .zip extension. I thought adding this header:

        context.Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip");

    ...is supposed to force Firefox to save the extension. I believe I am following the correct protocol, so why is Firefox behaving this way, and how do I fix it?
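
    One thing I would try (a sketch, not a verified fix): quote the filename in the Content-Disposition value and use a plainer zip content type, since Firefox tends to be pickier about header formatting than other browsers:

        // Quoted filename plus a widely recognised zip MIME type (sketch).
        context.Response.ContentType = "application/zip";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=\"Photos.zip\"");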

    Read the article

  • ActiveRecord/sqlite3 column type lost in table view?

    - by duncan
    I have the following ActiveRecord test case that mimics my problem. I have a People table with one attribute being a date. I create a view over that table, adding one column which is just that date plus 20 minutes:

        #!/usr/bin/env ruby
        %w|pp rubygems active_record irb active_support date|.each {|lib| require lib}

        ActiveRecord::Base.establish_connection(
          :adapter => "sqlite3",
          :database => "test.db"
        )

        ActiveRecord::Schema.define do
          create_table :people, :force => true do |t|
            t.column :name, :string
            t.column :born_at, :datetime
          end

          execute "create view clowns as select p.name, p.born_at, datetime(p.born_at, '+' || '20' || ' minutes') as twenty_after_born_at from people p;"
        end

        class Person < ActiveRecord::Base
          validates_presence_of :name
        end

        class Clown < ActiveRecord::Base
        end

        Person.create(:name => "John", :born_at => DateTime.now)

        pp Person.all.first.born_at.class
        pp Clown.all.first.born_at.class
        pp Clown.all.first.twenty_after_born_at.class

    The problem is, the output is:

        Time
        Time
        String

    when I expect the new datetime attribute of the view to also be a Time or DateTime in the Ruby world. Any ideas? I also tried:

        create view clowns as select p.name, p.born_at, CAST(datetime(p.born_at, '+' || '20' || ' minutes') as datetime) as twenty_after_born_at from people p;

    with the same result.

    Read the article

  • Implicit parameter in Scalaz

    - by Thomas Jung
    I am trying to find out why the call to Ø in scalaz.ListW.<^> works:

        def <^>[B: Zero](f: NonEmptyList[A] => B): B = value match {
          case Nil => Ø
          case h :: t => f(Scalaz.nel(h, t))
        }

    My minimal theory is:

        trait X[T]{
          def y : T
        }

        object X{
          implicit object IntX extends X[Int]{
            def y = 42
          }
          implicit object StringX extends X[String]{
            def y = "y"
          }
        }

        trait Xs{
          def ys[T](implicit x : X[T]) = x.y
        }

        class A extends Xs{
          def z[B](implicit x : X[B]) : B = ys //the call Ø
        }

    Which produces:

        import X._

        scala> new A().z[Int]
        res0: Int = 42

        scala> new A().z[String]
        res1: String = y

    Is this valid? Can I achieve the same result with fewer steps?

    Read the article

  • Javascript instanceof & typeof in GWT (JSNI)

    - by rybz
    Hi, I've encountered a curious problem while trying to use some objects through JSNI in GWT. Let's say we have a JavaScript file with this function defined:

        // test.js
        function test(arg){
          var type = typeof(arg);
          if (arg instanceof Array) alert('Array');
          if (arg instanceof Object) alert('Object');
          if (arg instanceof String) alert('String');
        }

    And then we want to call this function using JSNI:

        public static native void testx() /*-{
          $wnd.test( new Array(1, 2, 3) );
          $wnd.test( [ 1, 2, 3 ] );
          $wnd.test( {val:1} );
          $wnd.test( "Some text" );
        }-*/;

    The questions are: why do the instanceof checks always return false? Why does typeof always return "object"? How do I pass these objects so that they are recognized properly?

    Read the article

  • how do I initialize a float to its max/min value?

    - by Faken
    How do I hard code an absolute maximum or minimum value for a float or double? I want to search out the max/min of an array by simply iterating through and catching the largest. There are also positive and negative infinity for floats, should I use those instead? If so, how do I denote that in my code?

    Read the article
