Search Results

Search found 972 results on 39 pages for 'scala 2 7 7'.

Page 22/39 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Why is there an implicit conversion from Float/Double to BigDecimal, but not from String?

    - by soc
    Although the situation of conversion from Doubles to BigDecimals has improved a bit compared to Java,

      scala> new java.math.BigDecimal(0.2)
      res0: java.math.BigDecimal = 0.20000000000000001110223024625156...

      scala> BigDecimal(0.2)
      res1: scala.math.BigDecimal = 0.2

    and things like

      val numbers: List[BigDecimal] = List(1.2, 3.2, 0.7, 0.8, 1.1)

    work really well, wouldn't it be reasonable to have an implicit conversion like

      implicit def String2BigDecimal(s: String) = BigDecimal(s)

    available by default, which can convert Strings to BigDecimals like this?

      val numbers: List[BigDecimal] = List("1.2", "3.2", "0.7", "0.8", "1.1")

    Or am I missing something, and Scala resolved all "problems" of Java with using the BigDecimal constructor with a floating point value instead of a String, so that BigDecimal(String) is basically not needed anymore in Scala?
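
    A minimal sketch of how such a conversion can be supplied in user code (the object and method names below are made up, and the scala.language import is only needed on Scala 2.10 or later):

      import scala.language.implicitConversions

      object StringConversions {
        // Hypothetical helper; not provided by the standard library by default.
        implicit def string2BigDecimal(s: String): BigDecimal = BigDecimal(s)
      }

      object Demo extends App {
        import StringConversions._
        val numbers: List[BigDecimal] = List("1.2", "3.2", "0.7", "0.8", "1.1")
        println(numbers.sum)  // 7.0
      }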

  • Is there an equivalent in Scala to Python's more general map function?

    - by wheaties
    I know that Scala's Lists have a map implementation with signature (f: (A) => B): List[B] and a foreach implementation with signature (f: (A) => Unit): Unit, but I'm looking for something that accepts multiple iterables the same way that the Python map accepts multiple iterables. I'm looking for something with a signature of (f: (A, B) => C, Iterable[A], Iterable[B]): Iterable[C] or equivalent. Is there a library where this exists, or a comparable way of doing something similar?
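
    For two collections, zipping and then mapping gives the same shape as Python's two-argument map; a small sketch with made-up data:

      val as = List(1, 2, 3)
      val bs = List("a", "b", "c")

      // Pair the elements up positionally, then map over the pairs.
      val cs: List[String] = as.zip(bs).map { case (i, s) => s + i }
      // cs == List("a1", "b2", "c3")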

  • Any way to access the type of a Scala Option declaration at runtime using reflection?

    - by Graham Lea
    So, I have a Scala class that looks like this:

      class TestClass {
        var value: Option[Int] = None
      }

    and I'm tackling a problem where I have a String value and I want to coerce it into that Option[Int] at runtime using reflection. To do so, I need to know that the field is an Option and that the type parameter of the Option is Int. What are my options for figuring out that the type of 'value' is Option[Int] at runtime (i.e. using reflection)? I have seen similar problems solved by annotating the field, e.g. @OptionType(Int.class). I'd prefer a solution that didn't require annotations on the reflection target if possible.
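
    On later Scala versions the runtime reflection API can recover the declared type directly; a sketch assuming Scala 2.11+ (this API did not exist when the question was asked):

      import scala.reflect.runtime.{universe => ru}

      class TestClass {
        var value: Option[Int] = None
      }

      object ReflectDemo extends App {
        // Look up the getter scalac generates for the var and read its declared result type.
        val getter    = ru.typeOf[TestClass].decl(ru.TermName("value")).asMethod
        val fieldType = getter.returnType
        println(fieldType)            // Option[Int]
        println(fieldType.typeArgs)   // List(Int)
      }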

  • How do you write an idiomatic Scala Quicksort function?

    - by Don Mackenzie
    I recently answered a question with an attempt at writing a quicksort function in Scala; I'd seen something like the code below written somewhere.

      def qsort(l: List[Int]): List[Int] = {
        l match {
          case Nil => Nil
          case pivot :: tail =>
            qsort(tail.filter(_ < pivot)) ::: pivot :: qsort(tail.filter(_ >= pivot))
        }
      }

    My answer received some constructive criticism pointing out that List was a poor choice of collection for quicksort and, secondly, that the above wasn't tail recursive. I tried to re-write the above in a tail recursive manner but didn't have much luck. Is it possible to write a tail recursive quicksort? Or, if not, how can it be done in a functional style? Also, what can be done to maximise the efficiency of the implementation? Thanks in advance.
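
    On the efficiency point, one common direction (a sketch, not taken from the thread) is to sort a mutable Array in place behind a pure facade; this is still not tail recursive, but the recursion depth stays around O(log n) on average:

      def qsortArray(xs: Array[Int]): Array[Int] = {
        val a = xs.clone()
        def swap(i: Int, j: Int): Unit = { val t = a(i); a(i) = a(j); a(j) = t }
        def sort(lo: Int, hi: Int): Unit = if (lo < hi) {
          val pivot = a((lo + hi) / 2)
          var i = lo
          var j = hi
          // Hoare-style partition around the pivot value.
          while (i <= j) {
            while (a(i) < pivot) i += 1
            while (a(j) > pivot) j -= 1
            if (i <= j) { swap(i, j); i += 1; j -= 1 }
          }
          sort(lo, j)
          sort(i, hi)
        }
        sort(0, a.length - 1)
        a
      }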

  • How can I avoid mutable variables in Scala when using ZipInputStreams and ZipOutputStreams?

    - by pr1001
    I'm trying to read a zip file, check that it has some required files, and then write all valid files out to another zip file. The basic introduction to java.util.zip has a lot of Java-isms and I'd love to make my code more Scala-native. Specifically, I'd like to avoid the use of vars. Here's what I have:

      val fos = new FileOutputStream("new.zip")
      val zipOut = new ZipOutputStream(new BufferedOutputStream(fos))
      while (zipIn.available == 1) {
        val entry = zipIn.getNextEntry
        if (entryIsValid(entry)) {
          val fos = new FileOutputStream("subdir/" + entry.getName())
          val dest = new BufferedOutputStream(fos)
          // read chunks of the entry into the data buffer
          val data = new Array[Byte](4096)
          var count = zipIn.read(data)
          while (count != -1) {
            dest.write(data, 0, count)
            count = zipIn.read(data)
          }
          dest.flush()
          dest.close()
        }
      }
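
    One var-free shape for the inner copy loop, assuming Scala 2.8+ for Iterator.continually (a sketch reusing the zipIn and dest values above):

      val buffer = new Array[Byte](4096)
      Iterator
        .continually(zipIn.read(buffer))                 // keep reading chunks...
        .takeWhile(_ != -1)                              // ...until the end of the entry...
        .foreach(count => dest.write(buffer, 0, count))  // ...writing each one out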

  • How do I make lambda functions generic in Scala?

    - by Electric Coffee
    As most of you probably know, you can define functions in two ways in Scala: there's the 'def' method and the lambda method. Making the 'def' kind generic is fairly straightforward:

      def someFunc[T](a: T) { // insert body here

    What I'm having trouble with is how to make the following generic:

      val someFunc = (a: Int) => // insert body here

    Of course right now a is an integer, but what would I need to do to make it generic?

      val someFunc[T] = (a: T) =>

    doesn't work, and neither does

      val someFunc = [T](a: T) =>

    Is it even possible to make them generic, or should I just stick to the 'def' variant?
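
    A val cannot take type parameters, but an object with a polymorphic apply behaves like a generic function value at most call sites; a small sketch:

      object someFunc {
        // apply carries the type parameter that the val could not.
        def apply[T](a: T): String = "got " + a
      }

      someFunc(42)        // "got 42"
      someFunc("hello")   // "got hello"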

  • scala 2.8.0.RC2 compiler problem on pattern matching statement?

    - by gruenewa
    Why does the following module not compile on Scala 2.8.RC[1,2]?

      object Test {
        import util.matching.Regex._

        val pVoid      = """\s*void\s*""".r
        val pVoidPtr   = """\s*(const\s+)?void\s*\*\s*""".r
        val pCharPtr   = """\s*(const\s+)GLchar\s*\*\s*""".r
        val pIntPtr    = """\s*(const\s+)?GLint\s*\*\s*""".r
        val pUintPtr   = """\s*(const\s+)?GLuint\s*\*\s*""".r
        val pFloatPtr  = """\s*(const\s+)?GLfloat\s*\*\s*""".r
        val pDoublePtr = """\s*(const\s+)?GLdouble\s*\*\s*""".r
        val pShortPtr  = """\s*(const\s+)?GLshort\s*\*\s*""".r
        val pUshortPtr = """\s*(const\s+)?GLushort\s*\*\s*""".r
        val pInt64Ptr  = """\s*(const\s+)?GLint64\s*\*\s*""".r
        val pUint64Ptr = """\s*(const\s+)?GLuint64\s*\*\s*""".r

        def mapType(t: String): String = t.trim match {
          case pVoid() => "Unit"
          case pVoidPtr() => "ByteBuffer"
          case pCharPtr() => "CharBuffer"
          case pIntPtr() | pUintPtr() => "IntBuffer"
          case pFloatPtr() => "FloatBuffer"
          case pShortPtr() | pUshortPtr() => "ShortBuffer"
          case pDoublePtr() => "DoubleBuffer"
          case pInt64Ptr() | pUint64Ptr() => "LongBuffer"
          case x => x
        }
      }

  • Scala loop returns as Unit and compiler points to "for" syntax?

    - by DeLongey
    Seems like Unit is the theme of my troubles today. I'm porting a JSON deserializer that uses Gson, and when it comes to this for loop:

      def deserialize(json: JsonElement, typeOfT: Type, context: JsonDeserializationContext) = {
        var eventData = new EventData(null, null)
        var jsonObject = json.getAsJsonObject
        for (entry <- jsonObject.entrySet()) {
          var key = entry.getKey()
          var element = entry.getValue()
          element
          if ("previous_attributes".equals(key)) {
            var previousAttributes = new scala.collection.mutable.HashMap[String, Object]()
            populateMapFromJSONObject(previousAttributes, element.getAsJsonObject())
            eventData.setPreviousAttributes(previousAttributes)
            eventData
          } else if ("object".equals(key)) {
            val `type` = element.getAsJsonObject().get("object").getAsString()
            var cl = objectMap.get(`type`).asInstanceOf[StripeObject]
            var `object` = abstractObject.retrieve(cl, key)
            eventData.setObject(`object`)
            eventData
          }
        }
      }

    the compiler spits out the error

      type mismatch;
       found   : Unit
       required: com.stripe.EventData

    and it points to this line here:

      for (entry <- jsonObject.entrySet()) {

    Questions:
    1. Can you confirm that it is indeed the Gson method entrySet() appearing as Unit? If not, what part of the code is creating the issue? I've set return types/values for the eventData class methods.
    2. Is there a workaround for the Gson Unit issue?

    Thanks!
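
    A minimal illustration of the underlying issue, stripped of the Gson specifics (a sketch): a for loop without yield has type Unit, so when it is the last expression of a method, the method's result is Unit rather than the value built up inside the loop.

      def sumAll(xs: List[Int]): Int = {
        var acc = 0
        for (x <- xs) { acc += x }   // the loop expression itself is Unit...
        acc                          // ...so the desired value must be the last expression
      }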

  • Scala, represent pattern of boolean tuple into something else.

    - by Berlin Brown
    This is a cellular automaton rule: the input is a (left, center, right) triple of Booleans and the output is a Boolean. What is a better way to represent this in Scala?

      trait Rule {
        def ruleId(): Int
        def rule(inputState: (Boolean, Boolean, Boolean)): Boolean
        override def toString: String = "Rule:" + ruleId
      }

      class Rule90 extends Rule {
        def ruleId() = 90
        def rule(inputState: (Boolean, Boolean, Boolean)): Boolean = {
          // Verbose version, show all 8 states
          inputState match {
            case (true, true, true) => false
            case (true, false, true) => false
            case (false, true, false) => false
            case (false, false, false) => false
            case _ => true
          }
        }
      }
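
    One standalone alternative (a sketch, not from the thread): derive the 8-entry truth table from the Wolfram rule number instead of spelling out every case.

      case class WolframRule(id: Int) {
        require(id >= 0 && id < 256)
        def apply(left: Boolean, center: Boolean, right: Boolean): Boolean = {
          def bit(b: Boolean, shift: Int) = if (b) 1 << shift else 0
          // The three cells form a 3-bit index; the rule number's bit at that index is the output.
          val index = bit(left, 2) | bit(center, 1) | bit(right, 0)
          ((id >> index) & 1) == 1
        }
      }

      val rule90 = WolframRule(90)
      rule90(true, false, false)   // true  (rule 90 is left XOR right)
      rule90(true, false, true)    // false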

  • In scala can I pass repeated parameters to other methods?

    - by Fred Haslam
    Here is something I can do in Java: take the results of a repeated parameter and pass it to another method:

      public void foo(String... args) { bar(args); }
      public void bar(String... args) { System.out.println("count=" + args.length); }

    In Scala it would look like this:

      def foo(args: String*) = bar(args)
      def bar(args: String*) = println("count=" + args.length)

    But this won't compile: the bar signature expects a series of individual strings, and the args passed in is some non-string structure. For now I'm just passing around arrays. It would be very nice to use starred parameters. Is there some way to do it?
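
    The usual fix is to expand the sequence back into repeated parameters with the `: _*` type ascription; a small sketch:

      def bar(args: String*) = println("count=" + args.length)
      def foo(args: String*) = bar(args: _*)

      foo("a", "b", "c")   // prints count=3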

  • How is `toString` in `scala.Enumeration$Value` implemented?

    - by Red Hyena
    I have an enum Fruit defined as:

      object Fruit extends Enumeration {
        val Apple, Banana, Cherry = Value
      }

    Now, printing the values of this enum on Scala 2.7.x gives:

      scala> Fruit foreach println
      line1$object$$iw$$iw$Fruit(0)
      line1$object$$iw$$iw$Fruit(1)
      line1$object$$iw$$iw$Fruit(2)

    However, the same operation on Scala 2.8 gives:

      scala> Fruit foreach println
      warning: there were deprecation warnings; re-run with -deprecation for details
      Apple
      Banana
      Cherry

    My question is: how is the toString method of Enumeration implemented in Scala 2.8? I tried looking into the source of Enumeration but couldn't understand anything.
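
    A related detail that can be checked without reading the source: names can also be supplied explicitly, which sidesteps whatever lookup toString performs for unnamed values; a small sketch:

      object Fruit extends Enumeration {
        val Apple  = Value("Apple")
        val Banana = Value("Banana")
        val Cherry = Value("Cherry")
      }

      println(Fruit.Banana)   // Banana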

  • How can I pass a type as a parameter in scala?

    - by rsan
    I'm having a really hard time trying to figure out how I can store or pass a type in Scala. What I want to achieve is something like this:

      abstract class Foo( val theType : type )

      object Foo {
        case object Foo1 extends Foo(String)
        case object Foo2 extends Foo(Long)
      }

    So at some point I can do this:

      theFoo match {
        case String => "Is a string"
        case Long => "Is a long"
      }

    and, when obtaining the object, be able to cast it:

      theFoo.asInstanceOf[Foo1.theType]

    Is this possible? If it is possible, is it a good approach? What I'm trying to achieve ultimately is writing a pseudo-schema for byte stream treatment. E.g. if I have a schema Array(Foo1, Foo1, Foo2, Foo3, Foo1) I could parse Arrays of bytes that comply with that schema; if at some point I have a different stream of bytes I could just write a new schema Array(Foo3, Foo4, Foo5) without having to reimplement parsing logic. Regards,
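
    One possible encoding (a sketch): carry a Class token instead of a bare type, and match on it. Foo3/Foo4/Foo5 from the schema example are omitted here.

      abstract class Foo(val theType: Class[_])

      object Foo {
        case object Foo1 extends Foo(classOf[String])
        case object Foo2 extends Foo(classOf[Long])
      }

      def describe(f: Foo): String = f.theType match {
        case c if c == classOf[String] => "Is a string"
        case c if c == classOf[Long]   => "Is a long"
        case c                         => "Is a " + c.getName
      }

      describe(Foo.Foo1)   // "Is a string"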

  • scala coalesces multiple function call parameters into a Tuple -- can this be disabled?

    - by landon9720
    This is a troublesome violation of type safety in my project, so I'm looking for a way to disable it. It seems that if a function takes an AnyRef (or a java.lang.Object), you can call the function with any combination of parameters, and Scala will coalesce the parameters into a Tuple object and invoke the function. In my case the function isn't expecting a Tuple, and fails at runtime. I would expect this situation to be caught at compile time.

      object WhyTuple {
        def main(args: Array[String]): Unit = {
          fooIt("foo", "bar")
        }
        def fooIt(o: AnyRef) {
          println(o.toString)
        }
      }

    Output:

      (foo,bar)

  • How to yield a single element from for loop in scala?

    - by Julio Faerman
    Much like this question: Functional code for looping with early exit. Say the code is

      def findFirst[T](objects: List[T]): T = {
        for (obj <- objects) {
          if (expensiveFunc(obj) != null) return /*???*/ Some(obj)
        }
        None
      }

    How do you yield a single element from a for loop like this in Scala? I do not want to use find, as proposed in the original question; I am curious about if and how it could be implemented using the for loop.

    UPDATE: First, thanks for all the comments, but I guess I was not clear in the question. I am shooting for something like this:

      val seven = for {
        x <- 1 to 10
        if x == 7
      } return x

    And that does not compile. The two errors are:

      - return outside method definition
      - method main has return statement; needs result type

    I know find() would be better in this case; I am just learning and exploring the language. And in a more complex case with several iterators, I think finding with for can actually be useful. Thanks, commenters; I'll start a bounty to make up for the poor posing of the question :)
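
    One way to get just the first match while keeping the for syntax (a sketch; expensiveCheck stands in for the question's expensiveFunc): yield all matches from a lazy view and take the head.

      def expensiveCheck(n: Int): Boolean = n % 7 == 0

      def findFirst(objects: List[Int]): Option[Int] =
        (for (obj <- objects.view if expensiveCheck(obj)) yield obj).headOption

      // and for the small example from the update:
      val seven = (for (x <- (1 to 10).view if x == 7) yield x).headOption   // Some(7)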

  • Idiomatic Scala way to deal with base vs derived class field names?

    - by Gregor Scheidt
    Consider the following base and derived classes in Scala:

      abstract class Base( val x : String )

      final class Derived( x : String ) extends Base( "Base's " + x ) {
        override def toString = x
      }

    Here, the identifier 'x' of the Derived class parameter overrides the field of the Base class, so invoking toString like this:

      println( new Derived( "string" ).toString )

    returns the Derived value and gives the result "string". So a reference to the 'x' parameter prompts the compiler to automatically generate a field on Derived, which is served up in the call to toString. This is very convenient usually, but leads to a replication of the field (I'm now storing the field on both Base and Derived), which may be undesirable. To avoid this replication, I can rename the Derived class parameter from 'x' to something else, like '_x':

      abstract class Base( val x : String )

      final class Derived( _x : String ) extends Base( "Base's " + _x ) {
        override def toString = x
      }

    Now a call to toString returns "Base's string", which is what I want. Unfortunately, the code now looks somewhat ugly, and using named parameters to initialize the class also becomes less elegant:

      new Derived( _x = "string" )

    There is also a risk of forgetting to give the derived classes' initialization parameters different names and inadvertently referring to the wrong field (undesirable since the Base class might actually hold a different value). Is there a better way?

    Edit: To clarify, I really only want the Base values; the Derived ones just seem necessary for initializing the Base ones. The example only references them to illustrate the ensuing issues. It might be nice to have a way to suppress automatic field generation if the derived class would otherwise end up hiding a base class field.
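
    One variant (a sketch): keep the ugly parameter name only in a private constructor and expose a companion apply whose parameter is named x, so call sites and named arguments stay clean and only Base stores the field.

      abstract class Base( val x : String )

      final class Derived private ( x0 : String ) extends Base( "Base's " + x0 ) {
        override def toString = x
      }

      object Derived {
        def apply(x: String): Derived = new Derived(x)
      }

      println( Derived( x = "string" ) )   // Base's string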

  • How to flatten list of options using higher order functions?

    - by Synesso
    Using Scala 2.7.7: If I have a list of Options, I can flatten them using a for-comprehension:

      scala> val listOfOptions = List(None, Some("hi"), None)
      listOfOptions: List[Option[java.lang.String]] = List(None, Some(hi), None)

      scala> for (opt <- listOfOptions; string <- opt) yield string
      res0: List[java.lang.String] = List(hi)

    I don't like this style, and would rather use a HOF. This attempt is too verbose to be acceptable:

      scala> listOfOptions.flatMap(opt => if (opt.isDefined) Some(opt.get) else None)
      res1: List[java.lang.String] = List(hi)

    Intuitively I would have expected the following to work, but it doesn't:

      scala> List.flatten(listOfOptions)
      <console>:6: error: type mismatch;
       found   : List[Option[java.lang.String]]
       required: List[List[?]]
             List.flatten(listOfOptions)

    Even the following seems like it should work, but doesn't:

      scala> listOfOptions.flatMap(_: Option[String])
      <console>:6: error: type mismatch;
       found   : Option[String]
       required: (Option[java.lang.String]) => Iterable[?]
             listOfOptions.flatMap(_: Option[String])
                           ^

    The best I can come up with is:

      scala> listOfOptions.flatMap(_.toList)
      res2: List[java.lang.String] = List(hi)

    ...but I would much rather not have to convert the option to a list. That seems clunky. Any advice?
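
    On Scala 2.8 and later the same thing becomes a one-liner, because Option is implicitly viewed as a collection of at most one element:

      val listOfOptions = List(None, Some("hi"), None)

      listOfOptions.flatten           // List(hi)
      listOfOptions.flatMap(o => o)   // List(hi) as well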

  • How to further improve error messages in Scala parser-combinator based parsers?

    - by rse
    I've coded a parser based on Scala parser combinators:

      class SxmlParser extends RegexParsers with ImplicitConversions with PackratParsers {
        [...]
        lazy val document: PackratParser[AstNodeDocument] =
          ((procinst | element | comment | cdata | whitespace | text)*) ^^ { AstNodeDocument(_) }
        [...]
      }

      object SxmlParser {
        def parse(text: String): AstNodeDocument = {
          var ast = AstNodeDocument()
          val parser = new SxmlParser()
          val result = parser.parseAll(parser.document, new CharArrayReader(text.toArray))
          result match {
            case parser.Success(x, _) => ast = x
            case parser.NoSuccess(err, next) => {
              tool.die("failed to parse SXML input " +
                "(line " + next.pos.line + ", column " + next.pos.column + "):\n" +
                err + "\n" + next.pos.longString)
            }
          }
          ast
        }
      }

    Usually the resulting parsing error messages are rather nice. But sometimes it becomes just:

      sxml: ERROR: failed to parse SXML input (line 32, column 1):
      `"' expected but `' found

      ^

    This happens if a quote character is not closed and the parser reaches the EOT. What I would like to see here is (1) what production the parser was in when it expected the '"' (I have multiple ones) and (2) where in the input this production started parsing (which is an indicator of where the opening quote is in the input). Does anybody know how I can improve the error messages and include more information about the actual internal parsing state when the error happens (perhaps something like a production rule stacktrace or whatever can reasonably be given here to better identify the error location)? BTW, the above "line 32, column 1" is actually the EOT position and hence of no use here, of course.
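
    One small mitigation (a sketch; the rule below is hypothetical, not taken from SxmlParser): end the closing-quote alternative with err(...), so a dangling quote is reported in terms of the production rather than as a bare `"' expected.

      import scala.util.parsing.combinator.RegexParsers

      class QuoteParser extends RegexParsers {
        // If the closing quote is missing, err(...) produces a non-backtracking
        // error whose message names what was actually being parsed.
        def quoted: Parser[String] =
          "\"" ~> """[^"]*""".r <~ ("\"" | err("unterminated quoted value (missing closing '\"')"))
      }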

  • Case Class naming convention

    - by KChaloux
    In my recent adventures in Scala, I've found case classes to be a really nice alternative to enums when I need to include a bit of logic or several values with them. I often find myself writing structures that look like this, however:

      object Foo {
        case class Foo(name: String, value: Int, other: Double)

        val BAR = Foo("bar", 1, 1.0)
        val BAZ = Foo("baz", 2, 1.5)
        val QUUX = Foo("quux", 3, 1.75)
      }

    I'm primarily worried here about the naming of the object and the case class. Since they're the same thing, I end up with Foo.Foo to get to the inner class. Would it be wise to name the case class something along the lines of FooCase instead? I'm not sure if the potential ambiguity might mess with the type system if I have to do anything with subtypes or inheritance.
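
    A common alternative shape (a sketch): give the instances their own abstract parent so the companion stays free of the Foo.Foo nesting.

      sealed abstract class Foo(val name: String, val value: Int, val other: Double)

      object Foo {
        case object BAR  extends Foo("bar", 1, 1.0)
        case object BAZ  extends Foo("baz", 2, 1.5)
        case object QUUX extends Foo("quux", 3, 1.75)
      }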

  • null values vs "empty" singleton for optional fields

    - by Uko
    First of all, I'm developing a parser for an XML-based format for 3D graphics called XGL. But this question can be applied to any situation where you have fields in your class that are optional, i.e. the value of the field can be missing. While I was taking a Scala course on Coursera, there was an interesting pattern where you create an abstract class with all the methods you need, and then create a normal, fully functional subclass and an "empty" singleton subclass that always returns true for the isEmpty method and throws exceptions for the other ones. So my question is: is it better to just assign null if the optional field's value is missing, or to make the hierarchy described above and assign the field an empty singleton implementation?
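
    The Scala-standard version of that "empty singleton" idea is Option itself; a sketch of how an optional field might look (the Material/texture names are made up, not from XGL):

      case class Material(name: String, texture: Option[String])

      val plain    = Material("steel", None)
      val textured = Material("wood", Some("oak.png"))

      textured.texture.getOrElse("no texture")   // "oak.png"
      plain.texture.getOrElse("no texture")      // "no texture"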

  • Using an actor model versus a producer-consumer model?

    - by hewhocutsdown
    I'm doing some early-stage research towards architecting a new software application. Concurrency and multithreading will likely play a significant part, so I've been reading up on the various topics. The producer-consumer model, at least how it is expressed in Java, has some surface similarities but appears to be deeply dissimilar to the actor model in use with languages such as Erlang and Scala. I'm having trouble finding any good comparative data, or specific reasons to use or avoid the one or the other. Is the actor model even possible with Java or C#, or do you have to use one of the languages built for the purpose? Is there a third way?

  • Recommended book on Actors concurrency model (patterns, pitfalls, etc.)?

    - by Larry OBrien
    The Actors concurrency model is clearly gaining favor. Is there a good book that presents the patterns and pitfalls of the model? I am thinking about something that would discuss, for instance, the problems of consistency and correctness in the context of hundreds or thousands of independent Actors. It would be okay if it were associated with a specific language (Erlang, I would imagine, since that seems universally regarded as the proven implementation of Actors), but I am hoping for something more than an introductory chapter or two. (FWIW, I'm actually most interested in Actors as they are implemented in Scala.)

  • Actor library / framework for C++

    - by Giorgio
    In the C++ project I am working on, we would like to use something like Scala actors and remote actors (see e.g. this tutorial). Being able to use remote actors (actors living in different processes, possibly on different machines, and communicating via TCP/IP) has higher priority for us because we have an application consisting of several processes deployed on different machines. Being able to use several actors living in the same process (possibly in different threads) is also interesting, but has lower priority for the moment. On Wikipedia I have found some links to actor libraries for C++ and I have started to look at Theron. Before I dive too deep into the details and build an extended example with Theron, I wanted to ask if anybody has experience with any of these libraries and which one they would recommend.

  • How can I install the Play! framework using typesafe-stack? [migrated]

    - by lhk
    I'd like to create a new project with the Play! framework. My system is Mint 12, 64-bit. Since the newest version of Play! is already bundled with the typesafe-stack, I thought installation would be easy. I added the typesafe repo, ran apt-get update, and installed typesafe-stack. With the command g8 typesafehub/play-scala I successfully created a new project in my home folder. Now the problems begin: I don't know how to access Play! with this installation. After creating the project, I tried to convert it into an Eclipse project, but there's no play command available in the terminal. How can I get a "standard" Play! installation on Linux? And what happens to the tools bundled in the typesafe stack - where do they go?

  • If there's no problem treating a statement as an expression, why was there a distinction in the first place in some programming languages?

    - by cdmckay
    Why do we have the distinction between statements and expressions in most programming languages? For example, in Java, assuming f and g return ints, this still won't compile because it's a statement and statements don't return values.

      // won't compile
      int i = if (pred) { f(x); } else { g(x); }

    But in Scala, it's very happy with treating if as an expression.

      // compiles fine
      val i: Int = if (pred) f(x) else g(x)

    So if there's no problem treating an if statement as an expression, why was there a distinction in the first place?

  • Choosing a new programming language to learn [on hold]

    - by Xelom
    I'm a Microsoft stack (ASP.NET, C#) developer. Mainly, I develop server-side software, Windows services, RESTful APIs, etc. My client-side interaction is really, really low. So aside from C# I want to learn a new language. Time is precious and I want to give my focus to a language which has a future. My language list is:

      - Scala (powerful usage at Twitter)
      - Go (getting popular, and channels are pretty awesome)
      - Erlang (stable server-side programs; used at WhatsApp)

    You can give advice on the above or you can give me a better option. My only exception is Objective-C; I don't want to get into that one. Thanks
