Search Results

Search found 972 results on 39 pages for 'scala 2 9'.


  • How do I create a partial function with generics in Scala?

    - by Matteo Caprari
    Hello. I'm trying to write a performance measurement library for Scala. My idea is to transparently 'mark' sections so that the execution time can be collected. Unfortunately I wasn't able to bend the compiler to my will. An admittedly contrived example of what I have in mind: // generate a timing function val myTimer = mkTimer('myTimer) // see how the timing function returns the right type depending on the // type of the function passed to it val act = actor { loop { receive { case 'Int => val calc = myTimer { (1 to 100000).sum } val result = calc + 10 // calc must be Int self reply (result) case 'String => val calc = myTimer { (1 to 100000).mkString } val result = calc + " String" // calc must be String self reply (result) } Now, this is the farthest I got: trait Timing { def time[T <: Any](name: Symbol)(op: => T) :T = { val start = System.nanoTime val result = op val elapsed = System.nanoTime - start println(name + ": " + elapsed) result } def mkTimer[T <: Any](name: Symbol) : (() => T) => () => T = { type c = () => T time(name)(_ : c) } } Using the time function directly works and the compiler correctly uses the return type of the anonymous function to type the 'time' function: val bigString = time('timerBigString) { (1 to 100000).mkString("-") } println (bigString) Great as it seems, this pattern has a number of shortcomings: it forces the user to reuse the same symbol at each invocation; it makes it more difficult to do more advanced stuff like predefined project-level timers; and it does not allow the library to initialize a data structure for 'timerBigString once. So here comes mkTimer, which would allow me to partially apply the time function and reuse it. I use mkTimer like this: val myTimer = mkTimer('aTimer) val myString = myTimer { (1 to 100000).mkString("-") } println (myString) But I get a compiler error: error: type mismatch; found : String required: () => Nothing (1 to 100000).mkString("-") I get the same error if I inline the currying: val timerBigString = time('timerBigString) _ val bigString = timerBigString { (1 to 100000).mkString("-") } println (bigString) This works if I do val timerBigString = time('timerBigString) (_: String), but this is not what I want. I'd like to defer typing of the partially applied function until application. I conclude that the compiler is deciding the return type of the partial function when I first create it, choosing "Nothing" because it can't make a better informed choice. So I guess what I'm looking for is a sort of late binding of the partially applied function. Is there any way to do this? Or maybe is there a completely different path I could follow? Well, thanks for reading this far -teo
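    A sketch of one approach that is often suggested for this kind of problem: instead of partially applying time, return a small object whose generic apply method is typed fresh at every call site, so the result type is picked per invocation rather than once at creation. The Timer class and demo names below are made up for illustration and only mirror the question.

      trait Timing {
        class Timer(name: Symbol) {
          // apply is generic, so T is inferred separately at every call site
          def apply[T](op: => T): T = {
            val start = System.nanoTime
            val result = op
            println(name + ": " + (System.nanoTime - start))
            result
          }
        }

        def mkTimer(name: Symbol): Timer = new Timer(name)
      }

      object TimerDemo extends App with Timing {
        val myTimer  = mkTimer('aTimer)
        val myString = myTimer { (1 to 100000).mkString("-") }  // inferred as String
        val mySum    = myTimer { (1 to 100000).sum }            // inferred as Int
        println(myString.length + mySum)
      }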

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding: One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even actually possible to express the specs in runnable form in a separate way that adds value? The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis, and type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be covered. Such a mistake could also arise while writing the test code, though, so I am not sure whether it's worth the effort. Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead of course is about constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't compared to the benefits, but given how statically typed functional programming covers a lot of the ground unit tests are intended for, it feels like it's a constant overhead one can simply reduce without penalty. From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: When you use such a programming style, to what extent do you use unit tests and why (what quality is it you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer and hence needing no unit test coverage?
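    For the x-versus-y example above, a property-based test is one way the "runnable spec" idea still pays off under a strong type system, since both coordinates share a type the checker cannot tell apart. A minimal sketch, assuming the standard ScalaCheck API; Point and translate are made up for illustration.

      import org.scalacheck.Prop.forAll

      case class Point(x: Int, y: Int)
      def translate(p: Point, dx: Int, dy: Int) = Point(p.x + dx, p.y + dy)

      // The compiler cannot catch swapping x and y here; this property can.
      val translateKeepsAxes = forAll { (x: Int, y: Int, dx: Int) =>
        translate(Point(x, y), dx, 0) == Point(x + dx, y)
      }
      // run with: translateKeepsAxes.check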

    Read the article

  • How do you encode Algebraic Data Types in a C#- or Java-like language?

    - by Jörg W Mittag
    There are some problems which are easily solved by Algebraic Data Types, for example a List type can be very succinctly expressed as: data ConsList a = Empty | ConsCell a (ConsList a) consmap f Empty = Empty consmap f (ConsCell a b) = ConsCell (f a) (consmap f b) l = ConsCell 1 (ConsCell 2 (ConsCell 3 Empty)) consmap (+1) l This particular example is in Haskell, but it would be similar in other languages with native support for Algebraic Data Types. It turns out that there is an obvious mapping to OO-style subtyping: the datatype becomes an abstract base class and every data constructor becomes a concrete subclass. Here's an example in Scala: sealed abstract class ConsList[+T] { def map[U](f: T => U): ConsList[U] } object Empty extends ConsList[Nothing] { override def map[U](f: Nothing => U) = this } final class ConsCell[T](first: T, rest: ConsList[T]) extends ConsList[T] { override def map[U](f: T => U) = new ConsCell(f(first), rest.map(f)) } val l = new ConsCell(1, new ConsCell(2, new ConsCell(3, Empty))) l.map(1 + _) The only thing needed beyond naive subclassing is a way to seal classes, i.e. a way to make it impossible to add subclasses to a hierarchy. How would you approach this problem in a language like C# or Java? The two stumbling blocks I found when trying to use Algebraic Data Types in C# were: (1) I couldn't figure out what the bottom type is called in C# (i.e. I couldn't figure out what to put into class Empty : ConsList< ??? >), and (2) I couldn't figure out a way to seal ConsList so that no subclasses can be added to the hierarchy. What would be the most idiomatic way to implement Algebraic Data Types in C# and/or Java? Or, if it isn't possible, what would be the idiomatic replacement?

    Read the article

  • OS choice for functional development

    - by Carsten König
    I'm mainly a .NET developer so I normally use Windows/Visual Studio (that means: I'm spoiled), but I'm enjoying Haskell and other (mostly functional) languages in my spare time. Now for Haskell the Windows support is OK (you can get the Haskell Platform), but lately I tried to get a basic Clojure/Scheme environment set up and it's just a pain on Windows. So I'm thinking about trying out another OS for better tooling and language support. Of course that leaves me with MacOS or some Linux distribution. I have never used MacOS before, and of course Linux would be cheaper (free), and I don't think I can dual-boot MacOS on normal PC hardware (can you?). PLUS: I don't have a clue about the tools you can use on those (to me) foreign OSs. To make it short: I want to explore more Haskell, Clojure, Scala, Scheme and of course need at least good tooling for JavaScript/HTML5/CSS. Support for .NET/Mono/F# would be great, but for this I will still have my Win7 boot. So I'd like to know: - what is your preferred OS/distribution (is Ubuntu viable?) - what editor/IDE are you using Thank you for your help! PS: I'm not sure if this is the right place for this question but I surely hope so - if not please let me know where I should move this to (StackOverflow doesn't seem to be the right place IMHO)

    Read the article

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use are becoming more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this: producer.foo(item => { updateItemInDb(item); insertLog(item) }) // calls the function passed as argument as an item is processed But I'm now wondering if I should use a more "OO" approach: interface IItemObserver { onNotify(Item) } class DBObserver : IItemObserver ... class LogObserver: IItemObserver ... producer.addObserver(new DBObserver) producer.addObserver(new LogObserver) producer.foo() //calls observer in a loop What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns are there only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this could be an example of it? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e. how would you implement all the functionalities in the pattern?) Just passing a new function, for example?
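    Regarding the EDIT at the end: one functional-style sketch is to keep the registered callbacks in a list and use the function value itself as the handle for removal. This only illustrates how add/remove can look without an IItemObserver interface; Producer and its members are hypothetical names.

      class Producer[A] {
        private var observers: List[A => Unit] = Nil

        // The function value itself acts as the subscription handle.
        def subscribe(f: A => Unit): A => Unit = { observers ::= f; f }
        def unsubscribe(handle: A => Unit): Unit =
          observers = observers.filterNot(_ eq handle)

        def publish(item: A): Unit = observers.foreach(_.apply(item))
      }

      val producer  = new Producer[String]
      val logHandle = producer.subscribe(item => println("log: " + item))
      producer.subscribe(item => println("db: " + item))
      producer.publish("item created")    // both callbacks fire
      producer.unsubscribe(logHandle)
      producer.publish("item changed")    // only the db callback fires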

    Read the article

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos with a dependency graph like this:

        M---->D
        ^     ^
        |     |
        +--+--+
           ^
           |
           S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages. If I make a breaking change to all three, and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only be successful if, when S is pushed, M and D have already been pushed. Otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough -- the dependencies would have to be built and published before the dependent job was even started. Making the jobs dependent in Jenkins isn't enough, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version. Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if there was a way that a build could go into a pending state if its dependencies weren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?
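    One workaround sometimes used when the projects can live in (or be pulled into) a single build is an sbt multi-project definition, where dependsOn makes sbt compile M and D before S, so ordering stops being a push-time concern. This is only a sketch of that option under sbt's standard multi-project syntax, not of the multi-repo Jenkins setup described above; the project names are hypothetical.

      // build.sbt (sketch): classpath dependencies give sbt the build order for free
      lazy val messages = project in file("messages")
      lazy val dal      = project in file("dal")
      lazy val service  = (project in file("service")).dependsOn(messages, dal)
      lazy val root     = (project in file(".")).aggregate(messages, dal, service)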

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try to do their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can instantiate explicitly an instance of HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?
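    For reference against the Django example: in Play 2 a controller action is essentially a function from the request to a Result, which is the kind of explicitness described above. A minimal sketch, assuming Play 2's standard MVC API; it says nothing about Lift, which takes a view-first approach.

      import play.api.mvc._

      object Application extends Controller {
        // Explicit: the block receives the request and must produce a Result.
        def index = Action { request =>
          Ok("Hello from an explicit action: " + request.path)
        }
      }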

    Read the article

  • Are there any good Java/JVM libraries for my Expression Tree architecture?

    - by Snuggy
    My team and I are developing an enterprise-level application and I have devised an architecture for it that's best described as an "Expression Tree". The basic idea is that the leaf nodes of the tree are very simple expressions (perhaps simple values or strings). Nodes closer to the trunk will get more and more complex, taking the simpler nodes as their inputs and returning more complex results for their parents. Looking at it the other way, the application performs some task, and for this it creates a root expression. The root expression divides its input into smaller units and creates child expressions, which when evaluated it can use to build its own result. The subdividing process continues down to the simplest leaf nodes. There are two very important aspects of this architecture: It must be possible to manipulate nodes of the tree after it is built. The nodes may be given new input values to work with, and any change in result for that node needs to be propagated back up the tree to the root node. The application must make best use of available processors and ultimately be scalable to other computers in a grid or in the cloud. Nodes in the tree will often be updating concurrently and notifying other interested nodes in the tree when they get a new value. Unfortunately, I'm not at liberty to discuss my actual application, but to aid understanding a little bit, you might imagine a kind of spreadsheet application being implemented with a similar architecture, where changes to cells in the table are propagated all over the place to other cells that need the result. The spreadsheet could get so massive that applying a multi-core, multi-computer distributed system to solve it would be of benefit. I've got my prototype "Expression Engine" working nicely on a single multi-core PC, but I've started to run into a few concurrency issues (as expected, because I haven't been taking too much care so far), so it's now time to start thinking about migrating the Engine to a more robust library, and that leads to a number of related questions: Is there any precedent for my "Expression Tree" architecture that I could research? What programming concepts should I consider? I realise this approach has many similarities to a functional programming style, and I'm already aware of the concepts of using futures and actors. Are there any others? Are there any languages or libraries that I should study? This question is inspired by my accidental discovery of Scala and the Akka library (which has good support for Actors, Futures, Distributed workloads etc.), and I'm wondering if there is anything else I should be looking at as well?
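    As a tiny taste of the actor direction mentioned at the end, here is a minimal sketch, assuming the Akka classic actor API, of a node that recomputes when a child reports a new value and pushes the change toward the root. The computation is a placeholder; a real engine would obviously do much more.

      import akka.actor.{Actor, ActorRef, ActorSystem, Props}

      case class ChildValue(value: Int)

      // A node recomputes on every child update and notifies its parent, if any.
      class NodeActor(parent: Option[ActorRef]) extends Actor {
        def receive = {
          case ChildValue(v) =>
            val recomputed = v + 1          // stand-in for the node's real expression
            parent.foreach(_ ! ChildValue(recomputed))
        }
      }

      object ExpressionTreeDemo extends App {
        val system = ActorSystem("expr-tree")   // left running for brevity
        val root   = system.actorOf(Props(new NodeActor(None)), "root")
        val leaf   = system.actorOf(Props(new NodeActor(Some(root))), "leaf")
        leaf ! ChildValue(41)                   // change propagates up asynchronously
      }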

    Read the article

  • What are the disadvantages to declaring Scala case classes?

    - by Graham Lea
    If you're writing code that's using lots of beautiful, immutable data structures, case classes appear to be a godsend, giving you all of the following for free with just one keyword: Everything immutable by default Getters automatically defined Decent toString() implementation Compliant equals() and hashCode() Companion object with unapply() method for matching But what are the disadvantages of defining an immutable data structure as a case class? What restrictions does it place on the class or its clients? Are there situations where you should prefer a non-case class?
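    One concrete restriction worth knowing, sketched below: the generated equals/hashCode only consider the constructor parameters, so any state added in the class body is silently ignored by comparisons (and case classes are awkward to extend with other case classes, so refining one by subclassing is restricted).

      case class Account(id: Long) {
        var balance: Double = 0.0        // body members are not part of equals/hashCode
      }

      val a = Account(1L); a.balance = 100.0
      val b = Account(1L)
      println(a == b)                    // true, even though the balances differ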

    Read the article

  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs one project for scala and one for android). I've come across a problem. Using this input file (arguments to) proguard: -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties) -outjar lib/scandroid.jar -libraryjars lib/android.jar -dontwarn -dontoptimize -dontobfuscate -dontskipnonpubliclibraryclasses -dontskipnonpubliclibraryclassmembers -keepattributes Exceptions,InnerClasses,Signature,Deprecated, SourceFile,LineNumberTable,*Annotation*,EnclosingMethod -keep public class org.scala.jeb.** { public protected *; } -keep public class org.xml.sax.EntityResolver { public protected *; } Proguard successfully builds scandroid.jar, however it appears to have included the generated R classes that the android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The android dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also the R*.class files). How can I modify the above proguard arguments to exclude the R*.class files from the scandroid.jar so the dalvik converter is happy? Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to cause it to complain about duplicate classes, and in addition proguard decided to exclude my scala class files too.

    Read the article

  • Scala isn't allowing me to execute a batch file whose path contains spaces. Same Java code does. What am I doing wrong?

    - by Geo
    Here's the code I have: var commandsBuffer = List[String]() commandsBuffer ::= "cmd.exe" commandsBuffer ::= "/c" commandsBuffer ::= '"'+vcVarsAll.getAbsolutePath+'"' commandsBuffer ::= "&&" otherCommands.foreach(c => commandsBuffer ::= c) val asArray = commandsBuffer.reverse.toArray val processOutput = processutils.Proc.executeCommand(asArray,true) return processOutput otherCommands is an Array[String], containing the following elements: vcbuild /rebuild path to a .sln file vcVarsAll contains the path to Visual Studio's vcvarsall.bat. Its path is C:\tools\microsoft visual studio 2005\vc\vcvarsall.bat. The error I receive is: 'c:\Tools\Microsoft' is not recognized as an internal or external command, operable program or batch file. The processutils.Proc.executeCommand has the following implementation: def executeCommand(params:Array[String],display:Boolean):(String,String) = { val process = java.lang.Runtime.getRuntime.exec(params) val outStream = process.getInputStream val errStream = process.getErrorStream ... } The same code, executed from Java/Groovy, works. What am I doing wrong?
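    Not a verified fix for this exact report, but one route that is often suggested for Windows commands with spaces is to hand cmd.exe everything after /c as a single argument, built with java.lang.ProcessBuilder, so the quoted path is not re-tokenised. A sketch under that assumption; the paths are the hypothetical ones from the question.

      val vcVarsPath  = """C:\tools\microsoft visual studio 2005\vc\vcvarsall.bat"""
      val fullCommand = "\"" + vcVarsPath + "\" && vcbuild /rebuild project.sln"

      // ProcessBuilder keeps each element a single token; cmd sees one /c argument.
      val pb = new ProcessBuilder("cmd.exe", "/c", fullCommand)
      pb.redirectErrorStream(true)
      val process = pb.start()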

    Read the article

  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below: abstract class Lattice[E] extends Set[E] { val minimum: E val maximum: E def meet(x: E, y: E): E def join(x: E, y: E): E def neg(x: E): E } class Calculus[E](val lat: Lattice[E]) { abstract class Expr case class Var(name: String) extends Expr {...} case class Val(value: E) extends Expr {...} case class Neg(e1: Expr) extends Expr {...} case class Cnj(e1: Expr, e2: Expr) extends Expr {...} case class Dsj(e1: Expr, e2: Expr) extends Expr {...} } So that I can create a different calculus instance for each lattice (the operations I will perform need the information of which are the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good. I can create my calculus instances, but the problem is that I cannot write functions in other classes that manipulate them. For example, I am trying to create a parser to read expressions from a file and return them; I was also trying to write a random expression generator to use in my tests with ScalaCheck. It turns out that every time a function generates an Expr object I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return of the function is not recognized as being of the same type as the objects created outside the function. Maybe my English is not clear enough, let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough). def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = { if (level > MAX_LEVEL) { val select = util.Random.nextInt(2) select match { case 0 => genRndVar(c) case 1 => genRndVal(c) } } else { val select = util.Random.nextInt(3) select match { case 0 => new c.Neg(genRndExpr(c, level+1)) case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1)) case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1)) } } } Now, if I try to compile the above code I get lots of errors: error: type mismatch; found : plg.mvfml.Calculus[E]#Expr required: c.Expr case 0 => new c.Neg(genRndExpr(c, level+1)) And the same happens if I try to do something like: val boolCalc = new Calculus(Bool) val e1: boolCalc.Expr = genRndExpr(boolCalc) Please note that the generator itself is not of concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot in the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance. After receiving an answer from Apocalisp and trying it: thanks a lot for the answer, but there are still some issues. The proposed solution was to change the signature of the function to: def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr I changed the signature for all the functions involved: genRndExpr, genRndVal and genRndVar. Everywhere I call these functions I now get the following error message: error: inferred type arguments [Nothing,C] do not conform to method genRndVar's type parameter bounds [E,C <: Calculus[E]] case 0 => genRndVar(c) Since the compiler seemed to be unable to figure out the right types, I changed all function calls to be like below: case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) After this, on the first 2 function calls (genRndVal and genRndVar) there were no compile errors, but on the following 3 calls (the recursive calls to genRndExpr), where the return of the function is used to build a new Expr object, I got the following error: error: type mismatch; found : C#Expr required: c.Expr case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) So, again, I'm stuck. Any help will be appreciated.
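    One way around the Calculus[E]#Expr mismatch, sketched under the simplifying assumption that the generator can live inside Calculus itself: there every expression it builds is this.Expr, so no type projection is needed at all. Lattice is trimmed to the two fields the sketch uses, and the case classes are trimmed as well.

      trait Lattice[E] { val minimum: E; val maximum: E }

      class Calculus[E](val lat: Lattice[E]) {
        sealed abstract class Expr
        case class Val(value: E) extends Expr
        case class Neg(e1: Expr) extends Expr
        case class Cnj(e1: Expr, e2: Expr) extends Expr

        private val rnd = new scala.util.Random

        // Every Expr built here is path-dependent on this particular calculus.
        def genRndExpr(level: Int, maxLevel: Int = 5): Expr =
          if (level >= maxLevel) Val(if (rnd.nextBoolean()) lat.minimum else lat.maximum)
          else rnd.nextInt(3) match {
            case 0 => Neg(genRndExpr(level + 1, maxLevel))
            case 1 => Cnj(genRndExpr(level + 1, maxLevel), genRndExpr(level + 1, maxLevel))
            case 2 => Val(lat.maximum)
          }
      }

      object BoolLattice extends Lattice[Boolean] { val minimum = false; val maximum = true }
      val boolCalc = new Calculus(BoolLattice)
      val e1: boolCalc.Expr = boolCalc.genRndExpr(0)   // typed against this calculus only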

    Read the article

  • Using Sub-Types And Return Types in Scala to Process a Generic Object Into a Specific One

    - by pr1001
    I think this is about covariance but I'm weak on the topic... I have a generic Event class used for things like database persistence, let's say like this: class Event( subject: Long, verb: String, directobject: Option[Long], indirectobject: Option[Long], timestamp: Long) { def getSubject = subject def getVerb = verb def getDirectObject = directobject def getIndirectObject = indirectobject def getTimestamp = timestamp } However, I have lots of different event verbs and I want to use pattern matching and such with these different event types, so I will create some corresponding case classes: trait EventCC case class Login(user: Long, timestamp: Long) extends EventCC case class Follow( follower: Long, followee: Long, timestamp: Long ) extends EventCC Now, the question is, how can I easily convert generic Events to the specific case classes? This is my first stab at it: def event2CC[T <: EventCC](event: Event): T = event.getVerb match { case "login" => Login(event.getSubject, event.getTimestamp) case "follow" => Follow( event.getSubject, event.getDirectObject.getOrElse(0), event.getTimestamp ) // ... } Unfortunately, this is wrong. <console>:11: error: type mismatch; found : Login required: T case "login" => Login(event.getSubject, event.getTimestamp) ^ <console>:12: error: type mismatch; found : Follow required: T case "follow" => Follow(event.getSubject, event.getDirectObject.getOrElse(0), event.getTimestamp) Could someone with greater type-fu than me explain 1) whether what I want to do is possible (or reasonable, for that matter), and 2) if so, how to fix event2CC? Thanks!
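    Since the verb is only known at runtime, the method cannot pick the caller-chosen type parameter T; one common workaround, sketched below, is to return the sealed supertype and let the call site pattern match. Event is trimmed here to just the fields the sketch needs.

      class Event(val subject: Long, val verb: String,
                  val directObject: Option[Long], val timestamp: Long)

      sealed trait EventCC
      case class Login(user: Long, timestamp: Long) extends EventCC
      case class Follow(follower: Long, followee: Long, timestamp: Long) extends EventCC

      def event2CC(event: Event): EventCC = event.verb match {
        case "login"  => Login(event.subject, event.timestamp)
        case "follow" => Follow(event.subject, event.directObject.getOrElse(0L), event.timestamp)
        case other    => sys.error("unhandled verb: " + other)
      }

      // The caller recovers the specific type by matching:
      def describe(e: Event): String = event2CC(e) match {
        case Login(user, _)      => "login by " + user
        case Follow(from, to, _) => from + " followed " + to
      }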

    Read the article

  • How do I break out of a loop in Scala?

    - by TiansHUo
    For Problem 4 of Project Euler: how do I break out of a loop? var largest=0 for(i<-999 to 1 by -1) { for (j<-i to 1 by -1) { val product=i*j if (largest>product) // I want to break out here else if(product.toString==product.toString.reverse) largest=largest max product } } And does anyone know how to turn nested for loops into tail recursion?
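    Since Scala 2.8 the standard library ships scala.util.control.Breaks, which covers exactly this; a sketch of the loop from the question using it follows. Rewriting the nested loops as a tail-recursive function is also possible, but that is a larger restructuring.

      import scala.util.control.Breaks.{break, breakable}

      var largest = 0
      for (i <- 999 to 1 by -1) {
        breakable {                       // break exits only this inner loop
          for (j <- i to 1 by -1) {
            val product = i * j
            if (largest > product) break  // no larger palindrome possible for this i
            else if (product.toString == product.toString.reverse)
              largest = largest max product
          }
        }
      }
      println(largest)                    // the largest palindromic product found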

    Read the article

  • How do you code up a pattern matching code block in Scala?

    - by egervari
    How do you code a function that takes in a block of code as a parameter that contains case statements? For instance, in my block of code, I don't want to do a match or a default case explicitly. I am looking for something like this: myApi { case Whatever() => // code for case 1 case SomethingElse() => // code for case 2 } And inside of my myApi(), it'll actually execute the code block and do the matches. Help?
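    A block of case clauses is a partial function literal in Scala, so an API can accept one directly. A minimal sketch of what myApi might look like under that reading; Msg and the message source are made up for illustration.

      sealed trait Msg
      case class Whatever() extends Msg
      case class SomethingElse() extends Msg

      def myApi(handler: PartialFunction[Msg, Unit]): Unit = {
        val msg: Msg = Whatever()        // stand-in for however the real API obtains a message
        if (handler.isDefinedAt(msg)) handler(msg)
      }

      myApi {
        case Whatever()      => println("code for case 1")
        case SomethingElse() => println("code for case 2")
      }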

    Read the article

  • Is there an implementation of rapid concurrent syntactical sugar in Scala? e.g. map-reduce

    - by TiansHUo
    Passing messages around with actors is great. But I would like to have even easier code. Examples (pseudo-code): val splicedList:List[List[Int]]=biglist.partition(100) val sum:Int=ActorPool.numberOfActors(5).getAllResults(splicedList,foldLeft(_+_)) where spliceIntoParts turns one big list into 100 small lists, the numberOfActors part creates a pool which uses 5 actors and receives new jobs after a job is finished, and getAllResults uses a method on a list. All this is done with message passing in the background. And maybe getFirstResult calculates the first result and stops all other threads (like cracking a password).
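    For the sum example specifically, Scala 2.9's parallel collections already give this kind of "split the work, gather the results" shorthand without writing an actor pool by hand; a minimal sketch is below. The early-exit getFirstResult idea is a different problem and is not covered by this.

      val bigList = (1 to 1000000).toList

      // .par splits the work over a fork/join pool and gathers the results
      val sum = bigList.par.map(_ * 2).sum
      println(sum)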

    Read the article

  • How can I take any function as input for my Scala wrapper method?

    - by pr1001
    Let's say I want to make a little wrapper along the lines of: def wrapper(f: (Any) => Any): Any = { println("Executing now") val res = f println("Execution finished") res } wrapper { println("2") } Does this make sense? My wrapper method is obviously wrong, but I think the spirit of what I want to do is possible. Am I right in thinking so? If so, what's the solution? Thanks!
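    A by-name parameter combined with a type parameter is the usual way to accept "any block" like this; a minimal sketch of that reading of wrapper:

      def wrapper[T](f: => T): T = {
        println("Executing now")
        val res = f                  // the block is evaluated here, not at the call site
        println("Execution finished")
        res
      }

      wrapper { println("2") }       // prints: Executing now, 2, Execution finished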

    Read the article
