Search Results

Search found 6679 results on 268 pages for 'cube processing'.


  • Big Data – Beginning Big Data – Day 1 of 21

    - by Pinal Dave
    What is Big Data? I want to learn Big Data. I have no clue where or how to start learning about it. Does Big Data really mean the data is big? What tools and software do I need to know to learn Big Data? I often receive questions like these. They are good questions, and honestly, when we search online it is hard to find authoritative and authentic answers. I have been working with Big Data and NoSQL for a while, and I have decided to discuss this subject here on the blog. Over the next 21 days we will understand what is so big about Big Data.
    Big Data – Big Thing! Big Data is becoming one of the most talked-about technology trends nowadays. The real challenge for big organizations is to get the maximum out of the data already available and to predict what kind of data to collect in the future. How to take existing data and make it meaningful enough to provide accurate insight into past data is one of the key discussion points in many executive meetings in organizations. With the explosion of data the challenge has gone to the next level, and Big Data is becoming a reality in many organizations.
    Big Data – A Rubik’s Cube I like to compare Big Data with a Rubik’s cube; I believe they have many similarities. Just like a Rubik’s cube, it has many different solutions. Let us visualize a Rubik’s cube solving challenge where many experts are participating. If you take five Rubik’s cubes, scramble them the same way, and give them to five different experts to solve, it is quite possible that all five will solve their cube in a matter of seconds, but if you pay close attention you will notice that even though the final outcome is the same, the route taken to solve the cube is not. Every expert will start at a different place and will try to solve it with different methods. Some will solve one color first and others will solve another color first. Even though they follow the same kind of algorithm to solve the puzzle, they will start and end at different places and their moves will differ on many occasions. It is nearly impossible for two experts to take exactly the same route.
    Big Market and Multiple Solutions Big Data is exactly like a Rubik’s cube – even though the goal of every organization and expert is the same, to get the maximum out of the data, the route and the starting point are different for each organization and expert. As organizations evaluate and architect Big Data solutions they are also learning the ways and opportunities related to Big Data. There is not a single solution to Big Data, nor is there a single vendor that can claim to know all about Big Data. Honestly, Big Data is too big a concept, and there are many players – different architectures, different vendors and different technologies.
    What is Next? In this 21-day series we will be exploring many essential topics related to Big Data. I do not claim that you will be a master of the subject after 21 days, but I do claim that I will be covering the following topics in easy-to-understand language: Architecture of Big Data; Big Data Management and Implementation; Different Technologies – Hadoop, MapReduce; Real World Conversations; Best Practices.
    Tomorrow In tomorrow’s blog post we will try to answer one of the very essential questions – What is Big Data?
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I develop high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields computed over millions of records, processes that handle huge files of transactions, and so on. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code deployed as a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), run them automatically overnight, and check the results. But this only covers the T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach I have refined over the years, but maybe I am just not reading enough articles. So, banking, telecom, and risk developers out there: how do you test your mission-critical apps that process millions of records at end of day, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but it is quite similar)? I hope this is not too open-ended a question. – luke
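    As a rough illustration of the overnight data-test approach described above, here is a minimal C# sketch of one such check driven from a test runner instead of from T-SQL. Everything in it (the connection string, the TEST schema objects, and the stored procedure name) is invented for the example; it is a sketch of the pattern, not a prescribed framework.

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class DerivedFieldTests
        {
            // Hypothetical connection string pointing at a disposable test database.
            private const string ConnectionString =
                "Server=localhost;Database=EtlTest;Integrated Security=true";

            [SetUp]
            public void SeedKnownData()
            {
                // Reset the input table to a small, known data set before every test.
                Execute("TRUNCATE TABLE TEST.Transactions;" +
                        " INSERT INTO TEST.Transactions (Id, Amount) VALUES (1, 100.0), (1, -40.0);");
            }

            [Test]
            public void NetExposure_IsSumOfTransactionAmounts()
            {
                Execute("EXEC TEST.CalculateNetExposure;");  // hypothetical procedure under test

                object result = ExecuteScalar("SELECT NetExposure FROM TEST.Results WHERE Id = 1;");
                Assert.AreEqual(60.0m, result);
            }

            private static void Execute(string sql)
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            private static object ExecuteScalar(string sql)
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    return cmd.ExecuteScalar();
                }
            }
        }

    The same shape can cover SSIS as well: the test seeds the source tables, invokes the package (for example via a SQL Agent job or dtexec), and then asserts on the destination tables.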

    Read the article

  • Performing a Depth First Search iteratively using async/parallel processing?

    - by Prabhu
    Here is a method that does a DFS search and returns a list of all items given a top-level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub-items is made one by one for each item in the stack. It would be nice if I could get the sub-items for multiple items in the stack at the same time, and populate my return list faster. How could I do this (either using async/await or TPL, or anything else) in a thread-safe manner?

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var topItem = await GetItemAsync(topItemId);
            Stack<Item> stack = new Stack<Item>();
            stack.Push(topItem);
            while (stack.Count > 0)
            {
                var item = stack.Pop();
                items.Add(item);
                var subItems = await GetSubItemsAsync(item.SubId);
                foreach (var subItem in subItems)
                {
                    stack.Push(subItem);
                }
            }
            return items;
        }

    I was thinking of something along these lines, but it's not coming together:

        var tasks = stack.Select(async item =>
        {
            items.Add(item);
            var subItems = await GetSubItemsAsync(item.SubId);
            foreach (var subItem in subItems)
            {
                stack.Push(subItem);
            }
        }).ToList();
        if (tasks.Any()) await Task.WhenAll(tasks);

    The language I'm using is C#.
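    One hedged way to attack this (a sketch, not a drop-in answer): keep all list handling on the calling thread, but fetch the children of the whole current frontier concurrently with Task.WhenAll, turning the traversal into a level-by-level (breadth-first) walk. This assumes GetItemAsync/GetSubItemsAsync are safe to call concurrently and return collections of Item as in the original, and that breadth-first ordering of the result is acceptable:

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var frontier = new List<Item> { await GetItemAsync(topItemId) };

            while (frontier.Count > 0)
            {
                // Only this method touches 'items', so no locking is needed here.
                items.AddRange(frontier);

                // Fetch the sub-items of every item in the current frontier concurrently.
                var fetches = frontier.Select(item => GetSubItemsAsync(item.SubId));
                var children = await Task.WhenAll(fetches);

                // The flattened results become the next frontier.
                frontier = children.SelectMany(c => c).ToList();
            }
            return items;
        }

    If strict depth-first ordering of the returned list matters, the concurrent fetches would have to be buffered and re-ordered before being appended, which removes much of the simplicity of this version.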

    Read the article

  • 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing?

    - by Arthur Wulf White
    I wish to add the ability to zoom in, zoom out, rotate and move the view in a top-down camera over a collection of points and lines on a large 2D map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is, how do I render a point A(Xp, Yp) given the following details: the offset of the camera POV from the origin of the map is (Xc, Yc), meaning the camera center is positioned on top of that point, so if there is a point at (Xc, Yc) it is rendered in the center of the screen; the rotation angle is alpha; and the scale is S. Read my answer first; I am thinking there is a more optimized solution, thanks. My question is how to include the following improvement: I read in the AS3 Bible book that, in regards to ShaderInput, "You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images." Meaning that if I am performing the same linear function on a lot of items, I can do it all at once if I use shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean: http://wonderfl.net/c/eFp0/
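    For reference, the per-point version of the transform being asked about is just: translate by the camera offset, rotate by the camera angle, scale, and re-center on the screen. A minimal sketch of that math, written in C# for consistency with the other code in this listing (the question itself is ActionScript, and the screen-size parameters are assumptions):

        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
        }

        static class Camera2D
        {
            // Maps world point A(xp, yp) to screen space for a camera centred on (xc, yc),
            // rotated by alpha (radians) and zoomed by scale s.
            public static Vec2 WorldToScreen(
                float xp, float yp,
                float xc, float yc,
                float alpha, float s,
                float screenW, float screenH)
            {
                // 1. Translate so the camera centre becomes the origin.
                float dx = xp - xc;
                float dy = yp - yc;

                // 2. Rotate the world by -alpha (the opposite of the camera's rotation).
                float cos = (float)Math.Cos(-alpha);
                float sin = (float)Math.Sin(-alpha);
                float rx = dx * cos - dy * sin;
                float ry = dx * sin + dy * cos;

                // 3. Scale, then move the origin to the middle of the screen.
                return new Vec2(rx * s + screenW / 2f, ry * s + screenH / 2f);
            }
        }

    The Pixel Bender idea quoted above amounts to packing many (x, y) pairs into an image-like buffer and letting the shader apply this same linear transform to every texel in one pass, instead of looping over the points in ActionScript.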

    Read the article

  • Bullet physics debug drawing not working

    - by Krishnabhadra
    Background: I am following on from this question, which isn't answered yet. Basically I have a cube and a UVSphere in my scene, with the UVSphere on top of the cube without touching it. Both were exported from Blender. When I run the app, the UVSphere circles around the cube three or four times and then jumps out of the scene. What I actually expected was for the sphere to fall onto the top of the cube.
    What this question is about: From the comment on the linked question I got to know about Bullet debug drawing, which helps in debugging by drawing the outlines of physics bodies that are normally invisible. I did some research on that and came up with the code given below. From whatever I have read, the code below should work, but it doesn't.
    My code – the Bullet initialization:

        -(void) initializeScene {
            /* Setup physics world */
            _physicsWorld = [[CC3PhysicsWorld alloc] init];
            [_physicsWorld setGravity:0 y:-9.8 z:0];

            /* Setting up debug draw */
            MyDebugDraw *draw = new MyDebugDraw;
            draw->setDebugMode(draw->getDebugMode() | btIDebugDraw::DBG_DrawWireframe);
            _physicsWorld._discreteDynamicsWorld->setDebugDrawer(draw);

            /* Setup camera and lamb */
            …………..

            // This simpleCube.pod contains the cube
            [self addContentFromPODFile: @"simpleCube.pod"];
            // This file contains the sphere
            [self addContentFromPODFile: @"SimpleSphere.pod"];
            [self createGLBuffers];

            CC3MeshNode* cubeNode = (CC3MeshNode*)[self getNodeNamed:@"Cube"];
            CC3MeshNode* sphereNode = (CC3MeshNode*)[self getNodeNamed:@"Sphere"];
            // both cubeNode and sphereNode are not nil from this point

            float *cVertexData = (float*)((CC3VertexArrayMesh*)cubeNode.mesh).vertexLocations.vertices;
            int cVertexCount = ((CC3VertexArrayMesh*)cubeNode.mesh).vertexLocations.vertexCount;
            btTriangleMesh* cTriangleMesh = new btTriangleMesh();

            int offset = 0;
            for (int i = 0; i < (cVertexCount / 3); i++) {
                unsigned int index1 = offset;
                unsigned int index2 = offset + 6;
                unsigned int index3 = offset + 12;
                cTriangleMesh->addTriangle(
                    btVector3(cVertexData[index1], cVertexData[index1+1], cVertexData[index1+2]),
                    btVector3(cVertexData[index2], cVertexData[index2+1], cVertexData[index2+2]),
                    btVector3(cVertexData[index3], cVertexData[index3+1], cVertexData[index3+2]));
                offset += 18;
            }
            [self releaseRedundantData];

            /* Create a triangle mesh shape from the vertices */
            btBvhTriangleMeshShape* cTriMeshShape = new btBvhTriangleMeshShape(cTriangleMesh, true);
            btCollisionShape *sphereShape = new btSphereShape(1);
            gTriMeshObject = [_physicsWorld createPhysicsObjectTrimesh:cubeNode shape:cTriMeshShape mass:0 restitution:1.0 position:cubeNode.location];
            sphereObject = [_physicsWorld createPhysicsObject:sphereNode shape:sphereShape mass:1 restitution:0.1 position:sphereNode.location];
            sphereObject.rigidBody->setDamping(0.1, 0.8);

            /* Enable debug drawing */
            _physicsWorld._discreteDynamicsWorld->debugDrawWorld();
        }

    And my btIDebugDraw implementation (MyDebugDraw.h):

        // MyDebugDraw.h
        class MyDebugDraw: public btIDebugDraw {
            int m_debugMode;
        public:
            virtual void drawLine(const btVector3& from, const btVector3& to,
                                  const btVector3& color);
            virtual void drawContactPoint(const btVector3& PointOnB,
                                          const btVector3& normalOnB, btScalar distance,
                                          int lifeTime, const btVector3& color);
            virtual void reportErrorWarning(const char* warningString);
            virtual void draw3dText(const btVector3& location, const char* textString);
            virtual void setDebugMode(int debugMode);
            virtual int getDebugMode() const;
        };

        void MyDebugDraw::drawLine(const btVector3& from, const btVector3& to,
                                   const btVector3& color){
            LogInfo(@"Works!!");
            glPushMatrix();
            glColor4f(color.getX(), color.getY(), color.getZ(), 1.0);
            const GLfloat line[] = {
                from.getX()*1, from.getY()*1, from.getZ()*1, // point A
                to.getX()*1, to.getY()*1, to.getZ()*1        // point B
            };
            glVertexPointer(3, GL_FLOAT, 0, &line);
            glPointSize(5.0f);
            glDrawArrays(GL_POINTS, 0, 2);
            glDrawArrays(GL_LINES, 0, 2);
            glPopMatrix();
        }

        void MyDebugDraw::drawContactPoint(const btVector3 &PointOnB,
                                           const btVector3 &normalOnB, btScalar distance,
                                           int lifeTime, const btVector3 &color){
        }
        void MyDebugDraw::reportErrorWarning(const char *warningString){
        }
        void MyDebugDraw::draw3dText(const btVector3 &location, const char *textString){
        }
        void MyDebugDraw::setDebugMode(int debugMode){
        }
        int MyDebugDraw::getDebugMode() const{
            return DBG_DrawWireframe;
        }

    My problem: the drawLine method is getting called, and I can see the cube and sphere in place. The sphere again does some circling around the cube before jumping off, but no debug lines get drawn.

    Read the article

  • OpenGL ES - texture map all faces of an 8 vertex cube?

    - by Feet
    Working through some OpenGL ES tutorials, using the Android emulator. I've gotten up to texture mapping and am having some trouble mapping to a cube. Is it possible to map a texture to all faces of a cube that has 8 vertices and 12 triangles for the 6 faces, as described below?

        // Use half as we are going for a 0,0,0 centre.
        width /= 2;
        height /= 2;
        depth /= 2;

        float vertices[] = {
            -width, -height,  depth, // 0
             width, -height,  depth, // 1
             width,  height,  depth, // 2
            -width,  height,  depth, // 3
            -width, -height, -depth, // 4
             width, -height, -depth, // 5
             width,  height, -depth, // 6
            -width,  height, -depth, // 7
        };

        short indices[] = {
            // Front
            0,1,2, 0,2,3,
            // Back
            5,4,7, 5,7,6,
            // Left
            4,0,3, 4,3,7,
            // Right
            1,5,6, 1,6,2,
            // Top
            3,2,6, 3,6,7,
            // Bottom
            4,5,1, 4,1,0,
        };

        float texCoords[] = {
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
            1.0f, 1.0f,   0.0f, 1.0f,   0.0f, 0.0f,   1.0f, 0.0f,
        };

    I have gotten the front and back faces working correctly; however, none of the other faces are showing the texture.
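    The usual explanation for this symptom is that with indexed drawing only one texture coordinate pair can be attached to each of the 8 shared vertices, so only the first 8 entries of texCoords are used; those 8 happen to give the front and back faces the four distinct corners of the texture, while the side faces reuse vertices whose UVs collapse onto a single texture edge. To give every face its own full 0..1 mapping, each face needs its own four vertices (24 in total). A hedged sketch of building such a mesh, written in C# for consistency with the other code in this listing (the question's code is Java/Android, and the helper and collection types here are invented):

        using System.Collections.Generic;

        static class CubeBuilder
        {
            // Appends one quad (two triangles) per call, duplicating corner positions so each
            // face carries its own full 0..1 UV mapping. 'corners' is the 8-entry xyz table from
            // the question; 'faceCorners' picks the four corners of one face in CCW order.
            public static void AppendFace(
                List<float> vertices, List<short> indices, List<float> texCoords,
                float[][] corners, int[] faceCorners)
            {
                short baseIndex = (short)(vertices.Count / 3);

                float[][] uvs = { new[] { 0f, 1f }, new[] { 1f, 1f }, new[] { 1f, 0f }, new[] { 0f, 0f } };
                for (int i = 0; i < 4; i++)
                {
                    vertices.AddRange(corners[faceCorners[i]]);  // position duplicated per face
                    texCoords.AddRange(uvs[i]);                  // UV unique to this face
                }

                indices.AddRange(new[]
                {
                    baseIndex, (short)(baseIndex + 1), (short)(baseIndex + 2),
                    baseIndex, (short)(baseIndex + 2), (short)(baseIndex + 3)
                });
            }
        }

    Calling AppendFace once per face yields 24 vertex positions, 36 indices and 24 UV pairs, which is the standard layout for a cube where every face shows the whole texture.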

    Read the article

  • xml to hashmap - php

    - by csU
    Hi, given an XML structure like this:

        <gesmes:Envelope>
          <gesmes:subject>Reference rates</gesmes:subject>
          <gesmes:Sender>
            <gesmes:name>European Central Bank</gesmes:name>
          </gesmes:Sender>
          <Cube>
            <Cube time="2010-03-26">
              <Cube currency="USD" rate="1.3353"/>
              <Cube currency="JPY" rate="124.00"/>
              <Cube currency="BGN" rate="1.9558"/>
              <Cube currency="CZK" rate="25.418"/>
              ...
              ...
            </Cube>
          </Cube>
        </gesmes:Envelope>

    how can I go about getting the values stored into a hashmap or similar structure in PHP? I have been trying to do this for the last few hours now but can't manage it :D It is homework, so I guess no full solutions please (though the actual assignment is to use the web services; I am just stuck with parsing it :D). Maybe someone could show me a brief example for a made-up XML file that I could apply to mine? Thanks

    Read the article

  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu Server 12.04 machine using the commands below:

        apt-get upgrade php
        apt-get install php5-curl php5-gd php5-mysql php5-pgsql

    However, I receive this error every time:

        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1.
        run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-34-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-server:
         linux-image-server depends on linux-image-3.2.0-33-generic; however:
          Package linux-image-3.2.0-33-generic is not configured yet.
        dpkg: error processing linux-image-server (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-server:
         linux-server depends on linux-image-server (= 3.2.0.33.36); however:
          Package linux-image-server is not configured yet.
        dpkg: error processing linux-server (--configure):
         dependency problems - leaving unconfigured
        Setting up libpq5 (9.1.10-0ubuntu12.04) ...
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because MaxReports has already been reached
        Setting up php5-curl (5.3.10-1ubuntu3.8) ...
        Setting up php5-pgsql (5.3.10-1ubuntu3.8) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports has already been reached
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         linux-image-3.2.0-33-generic
         linux-image-3.2.0-34-generic
         linux-image-server
         linux-server
         initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure what's wrong and why it can't process the linux-image files?

    Read the article

  • How do I install a Wimax usb driver?

    - by kakaz
    I am using a WiMAX USB modem in Ubuntu 9.04 without problems. I tried to install the same deb file on Ubuntu 10.04 to use the modem there, but it could not be installed and gives me the following error message:

        $ sudo dpkg -i green-packet-wimax-usb_i386.iso.deb
        (Reading database ... 206628 files and directories currently installed.)
        Preparing to replace green-packet-wimax-usb 1.12 (using green-packet-wimax-usb_i386.iso.deb) ...
        /var/lib/dpkg/info/green-packet-wimax-usb.prerm: 45: /etc/init.d/wimaxd: not found
        Removing any system startup links for /etc/init.d/wimaxd ...
        FATAL: Module mt7118_usb_os not found.
        Unpacking replacement green-packet-wimax-usb ...
        Setting up green-packet-wimax-usb (1.12) ...
        FATAL: Error inserting mt7118_usb_glue (/lib/modules/2.6.32-28-generic/kernel/drivers/net/mt7118_usb_glue.ko): Invalid module format
        dpkg: error processing green-packet-wimax-usb (--install):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for ureadahead ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for python-gmenu ...
        Rebuilding /usr/share/applications/desktop.en_US.utf8.cache...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Processing triggers for python-support ...
        Errors were encountered while processing:

    The error on line 9 (FATAL: Error inserting mt7118_usb_glue ... Invalid module format) gives me a clue that the mt7118_usb_glue.ko kernel object can't be inserted. So I think this may be due to its kernel dependencies. Can anybody tell me how I can install this kernel object on my new Ubuntu 10.04 kernel?

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: analyzing the requirements to find the Entry Points and their signatures; designing the functionality to be executed when those Entry Points get triggered; and implementing the functionality according to the design, aka coding. I presume you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it´s “design” not “coding”.
    And here is what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it? What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development.
    To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design?
    An example of functional design Let´s assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach I call “Informed TDD” read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is abot detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that´s also an example of the SRP. Function design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
    Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.
    Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).
    The Principle of Mutual Oblivion A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.
    By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it´s coming from neither cell cares about. Million years of evolution have led to this kind of division of labor. And million years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements needs to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from it´s environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occassionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle. Translating functional units into code Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That why Functional Programming likes them so much. It makes them composable. Which is the reason, nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case, where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the astrisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value. 
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]void filter_stop_words( string word, Action<string> onNoStopWord) { if (...check if not a stop word...) onNoStopWord(word); } If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keeping functional units mutually oblivious in cases where output is optional or produced asynchronically. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.class Filter { public void filter_stop_words( string word) { if (...check if not a stop word...) onNoStopWord(word); } public event Action<string> onNoStopWord; } When to use a continuation and when to use an event dependens on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road. Another example of 1D functional design Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:string WordWrap(string text, int maxLineLength) {...} That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than a the maximum line length it can be split in multiple parts each fitting in a line. Flow Design Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed? Words need to be assembled into lines Words need to be extracted from the input text The resulting lines need to be assembled into the output text Words too long to fit in a line need to be split Does sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment: The the first three bullet points turned into functional units with explicit data added. As the signature requires a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. 
    That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail. Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem.
    Implementation The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it´s easy to evolve as you´ll see.
    Flow Design - Increment 2 So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well tested code. How´s that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just “phase in” the remaining processing step: since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.
    Implementation - Increment 2 A trivial implementation, just to check the assumption that this works, does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin´s solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles, some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design. But don´t take my word for it. Try Flow Design on larger problems and compare for yourself (a rough sketch of this word-wrap flow in code follows below).
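    Since the original post´s implementation screenshots do not survive in this excerpt, here is a hedged reconstruction of what the word-wrap flow could look like as plain C# functions, following the design described above (the function names and the exact long-word handling are my guesses, not the author´s actual code):

        using System;
        using System.Collections.Generic;

        static class WordWrapper
        {
            public static string WordWrap(string text, int maxLineLength)
            {
                var words = Extract_words(text);
                var shortened = Split_long_words(words, maxLineLength);   // the "phased in" step
                var lines = Assemble_lines(shortened, maxLineLength);
                return Assemble_text(lines);
            }

            static string[] Extract_words(string text)
            {
                return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            }

            static IEnumerable<string> Split_long_words(IEnumerable<string> words, int maxLineLength)
            {
                foreach (var word in words)
                    for (int i = 0; i < word.Length; i += maxLineLength)
                        yield return word.Substring(i, Math.Min(maxLineLength, word.Length - i));
            }

            static IEnumerable<string> Assemble_lines(IEnumerable<string> words, int maxLineLength)
            {
                var line = "";
                foreach (var word in words)
                {
                    if (line.Length == 0) { line = word; }
                    else if (line.Length + 1 + word.Length <= maxLineLength) { line += " " + word; }
                    else { yield return line; line = word; }
                }
                if (line.Length > 0) yield return line;
            }

            static string Assemble_text(IEnumerable<string> lines)
            {
                return string.Join("\n", lines);
            }
        }

    Each function mirrors one functional unit from the design; the long-word step slots in between the existing upstream and downstream functions without either of them changing, which is the OCP point being made above.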
    So what´s the easier, more straightforward way to clean code? And keep in mind: you ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter.
    In closing I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary read my blog article here. For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.
    PS: If you like what you read, consider getting my ebook “The Incremental Architect´s Napkin”. It´s where I compile all the articles in this series for easier reading.
    [1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step.
    [2] Image taken from www.physioweb.org
    [3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no brainer.
    [4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary: Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition and send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average.
    Company Overview: Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest.
    Business Challenges: TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were to increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry; to meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance; and to streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities.
    Solution Deployed: TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.
    Business Results: TXI significantly lowered resource needs for payables processing through centralization and improved efficiency; enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts; tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average; and achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, along with a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals.
    “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.” – Terry Marshall, Vice President of Information Services, Texas Industries, Inc.
    Additional Information: TXI Customer Snapshot, Oracle WebCenter Content, Oracle WebCenter Capture, Oracle WebCenter Forms Recognition

    Read the article

  • How to handle multi-processing of libraries which already spawn sub-processes?

    - by exhuma
    I am having some trouble coming up with a good solution to limit sub-processes in a script which uses a multi-processed library, where the script itself is also multi-processed. Both the library and the script are modifiable by us. I believe the question is more about design than actual code, but for what it's worth, it's written in Python.
    The goal of the library is to hide implementation details of various internet routers. For that reason, the library has a "Proxy" factory method which takes the IP of a router as a parameter. The factory then probes the device using a set of possible proxies. Usually there is one proxy which immediately knows that it is able to send commands to this device. All others usually take some time to return (given a timeout). One thought was to simply query the device for an identifier and then select the proper proxy using that, but in order to do so you would already need to know how to query the device. Abstracting this knowledge is one of the main purposes of the library, so that becomes a little bit of a "circular requirement"/deadlock: to connect to a device, you need to know what proxy to use, and to know what proxy to create, you need to connect to a device. So probing the device is - as we can see - the best solution so far, apart from keeping a lookup table somewhere. The library currently kills all remaining processes once a valid proxy has been found. And yes, there is always only one good proxy per device. Currently there are about 12 proxies, so if one creates a proxy instance using the factory, 12 sub-processes are spawned. So far this has been really useful and has worked very well.
    But recently someone else wanted to use this library to "broadcast" a command to all devices. So he took the library and wrote his own multi-processed script. This obviously spawned 12 * n processes, where n is the number of IPs to which he broadcasted. This has given us two problems: the host on which the command was executed slowed down to a near halt, and aborting the script with CTRL+C ground the system to a total halt. Not even the hardware console responded anymore! This may be due to some Python strangeness which still needs to be investigated. Maybe related to http://bugs.python.org/issue8296
    The big underlying question is how to design a library which does multi-processing, so that other applications which use this library and want to be multi-processed themselves do not run into system limitations. My first thought was to require a pool to be passed to the library and execute all tasks in that pool. That way, the person using the library has control over the usage of system resources. But my gut tells me that there must be a better solution. Disclaimer: my experience with multiprocessing is fairly limited. I have implemented a few straightforward ones which did not require access control to resources, so I have no practical experience yet with semaphores or mutexes.
    p.s.: In the future we may have enough information to do this without the probing, but the database which would contain the proper information is not yet operational. Also, the design of multiprocessing a multiprocessed library intrigues me :)
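    For what it's worth, here is a sketch of the "pass in a pool" idea mentioned above, written in C# with tasks and a semaphore for consistency with the other examples in this listing (the question itself is Python and uses processes, so this only illustrates the shape of the design; all names are invented):

        using System;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        // The library never decides how much parallelism to use; the calling script owns one
        // shared limiter and hands it to every factory call.
        static class ProxyFactory
        {
            public static async Task<int> ProbeAsync(
                string routerIp,
                Func<string, Task<bool>>[] probes,   // one candidate probe per proxy type (hypothetical)
                SemaphoreSlim limiter)               // shared limiter, owned by the caller
            {
                var attempts = probes.Select(async (probe, i) =>
                {
                    await limiter.WaitAsync();       // respect the caller's global concurrency limit
                    try { return await probe(routerIp) ? i : -1; }
                    finally { limiter.Release(); }
                });

                var results = await Task.WhenAll(attempts);

                // Index of the first proxy type that answered, or -1 if none did.
                return results.Where(i => i >= 0).DefaultIfEmpty(-1).First();
            }
        }

    The calling script creates a single SemaphoreSlim(maxConcurrency) and passes it into every ProbeAsync call, so broadcasting to n routers never runs more than maxConcurrency probes at once regardless of the fan-out per device; the same shape works with a shared process pool instead of tasks.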

    Read the article

  • SSAS processing error: Client unable to establish connection; 08001; Encryption not supported on the client.; 08001

    - by Kevin Shyr
    After getting the cube to deploy and process successfully on Friday, I was baffled on Monday that the newly added dimension caused the cube processing to break. I followed my first instinct, discarded all my changes and reverted back to Friday's version, and had no luck. The error message (attached below) did not help, as I was looking for some kind of SQL service error. After examining the Windows server log and the SQL Server log, I just couldn't see anything wrong with it.

    After swearing for some time, and after going off to work on something else for a while, I came back to the solution and looked at the data source. Even though I know I have never changed the provider (the default setup gave me SQL Native Client), I decided to change it and give OLE DB a try. This simple change allows my cube to process successfully again. While I don't understand why the same settings that worked last week don't work this week, I don't have all the information to say with certainty that nothing has changed in the environment (firewall changes, server updates, etc.).

    SSAS processing error:

        <Batch>
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>DWH Sales Facts</DatabaseID>
                <CubeID>DWH Sales Facts</CubeID>
              </Object>
              <Type>ProcessFull</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    Processing Dimension 'Date' completed.

    Errors and Warnings from Response:

    • OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
    • Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DWH Sales Facts', Name of 'DWH Sales Facts'.
    • Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Currency', Name of 'Currency' was being processed.
    • Errors in the OLAP storage engine: An error occurred while the 'Currency Dim ID' attribute of the 'Currency' dimension from the 'DWH Sales Facts' database was being processed.
    • Internal error: The operation terminated unsuccessfully.
    • Server: The operation has been cancelled.

    Read the article

  • Windows AD: Is loopback processing absolutely necessary in order to apply a user policy to users logging into computers in the OU?

    - by Brett
    I've had our AD setup running on server 2008r2 and now 2012, and I swear, a user policy applied to an OU containing only computers actually does apply to users logging into those computers, without loopback processing enabled. Everything I read seems to say that is not how it should work, but it does. Is this normal behavior? Just tested again - created a policy with a drive map (which is a user policy), applied it to an OU containing my terminal server, forced a gpupdate, logged out/in, and sure enough, the drive is mapped. I did NOT turn on loopback processing.

    Read the article

  • How do you get an object associated with a Future Actor?

    - by Bruce Ferguson
    I would like to be able to get access to the object that is being returned from spawning a future:

        import scala.actors.Future
        import scala.actors.Futures._

        class Object1(i: Int) {
          def getAValue(): Int = { i }
        }

        object Test {
          def main(args: Array[String]) = {
            var tests = List[Future[Object1]]()
            for (i <- 0 until 10) {
              val test = future {
                val obj1 = new Object1(i)
                println("Processing " + i + "...")
                Thread.sleep(1000)
                println("Processed " + i)
                obj1
              }
              tests = tests ::: List(test)
            }
            val timeout = 1000 * 60 * 5 // wait up to 5 minutes
            val futureTests = awaitAll(timeout, tests: _*)
            futureTests.foreach(test => println("result: " + future()))
          }
        }

    The output from one run of this code is:

        Processing 0...
        Processing 1...
        Processing 2...
        Processing 3...
        Processed 0
        Processing 4...
        Processed 1
        Processing 5...
        Processed 2
        Processing 6...
        Processed 3
        Processing 7...
        Processed 4
        Processing 8...
        Processed 6
        Processing 9...
        Processed 5
        Processed 7
        Processed 8
        Processed 9

    followed by "result: <function0>" printed ten times.

    I've tried future().getClass(), and the output is:

        result: class scala.actors.FutureActor

    What I'm looking to be able to access is the obj1 objects. Thanks, Bruce

    Read the article

  • Where does form processing logic belong in a MVC web application?

    - by AdamTheHutt
    In a web-based application that uses the Model-View-Controller design pattern, the logic relating to processing form submissions seems to belong somewhere in between the Model layer and the Controller layer. This is especially true in the case of a complex form (i.e. where form processing goes well beyond simple CRUD operations). What's the best way to conceptualize this? Are forms simply a kind of glue between models and controllers? Or does form logic belong squarely in the M or C camp? EDIT: I understand the basic flow of information in an MVC application (see chills42's answer for a summary). My question is where the form processing logic belongs - in the controller, in the model, or somewhere else?
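    To make the question concrete, here is a minimal, framework-agnostic Python sketch of treating the form as its own object sitting between the controller and the model (all names here are purely illustrative):

        class RegistrationForm:
            """Owns the validation and processing logic that is too rich for the controller."""

            def __init__(self, data):
                self.data = data
                self.errors = {}

            def is_valid(self):
                if "@" not in self.data.get("email", ""):
                    self.errors["email"] = "invalid email address"
                return not self.errors

            def save(self, user_model):
                # translate cleaned form fields into model-layer calls
                return user_model.create(email=self.data["email"])

        def register_controller(request_post, user_model):
            # the controller only wires the pieces together
            form = RegistrationForm(request_post)
            if form.is_valid():
                user = form.save(user_model)
                return {"status": 302, "location": "/welcome", "user": user}
            return {"status": 400, "errors": form.errors}

    In this shape the form really is a kind of glue: it knows about request data and about the model's API, while the controller stays thin and the model stays focused on persistence and domain rules.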

    Read the article

  • Upgrading PHP from 5.1 to 5.2 on CentOS 5.4

    - by andufo
    I'm trying to upgrade PHP 5.1 to 5.2 on CentOS 5.4. I use:

        yum upgrade php

    The result is this (check out the last part):

        [root@mail httpd]# yum update php
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.raystedman.net
         * base: mirrors.serveraxis.net
         * centosplus: mirrors.tummy.com
         * contrib: mirror.raystedman.net
         * extras: mirror.raystedman.net
         * updates: mirrors.netdna.com
        Setting up Update Process
        Resolving Dependencies
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6-27.el5 for package: php-devel
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-cli = 5.2.10-1.el5.centos for package: php
        --> Processing Dependency: php-common = 5.2.10-1.el5.centos for package: php
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-cli.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-xml
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-pdo
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-gd
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-ldap
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mbstring
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-mysql
        --> Processing Dependency: php-common = 5.1.6-27.el5 for package: php-imap
        ---> Package php-common.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-devel.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Running transaction check
        --> Processing Dependency: php = 5.1.6 for package: php-eaccelerator
        ---> Package php-gd.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-imap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-ldap.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mbstring.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-mysql.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-pdo.x86_64 0:5.2.10-1.el5.centos set to be updated
        ---> Package php-xml.x86_64 0:5.2.10-1.el5.centos set to be updated
        --> Finished Dependency Resolution
        php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 from installed has depsolving problems
          --> Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
        Error: Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
        [root@mail httpd]#

    What are the consequences of using --skip-broken? Any recommendations?

    Read the article

  • Why Swift is 100 times slower than C in this image processing test?

    - by xiaobai
    Like many other developers I have been very excited about the new Swift language from Apple. Apple has boasted that its speed is faster than Objective-C and that it can be used to write operating systems. From what I have learned so far, it's a very type-safe language that gives precise control over exact data types (like integer length). So it does look like it has good potential for handling performance-critical tasks, like image processing, right? That's what I thought before I carried out a quick test. The result really surprised me.

    Here is a much simplified image alpha blending code snippet in C (test.c):

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        uint8_t pixels[640*480];
        uint8_t alpha[640*480];
        uint8_t blended[640*480];

        void blend(uint8_t* px, uint8_t* al, uint8_t* result, int size)
        {
            for(int i=0; i<size; i++) {
                result[i] = (uint8_t)(((uint16_t)px[i]) * al[i] / 255);
            }
        }

        int main(void)
        {
            memset(pixels, 128, 640*480);
            memset(alpha, 128, 640*480);
            memset(blended, 255, 640*480);

            // Test 10 frames
            for(int i=0; i<10; i++) {
                blend(pixels, alpha, blended, 640*480);
            }
            return 0;
        }

    I compiled it on my MacBook Air 2011 with the following command:

        gcc -O3 test.c -o test

    The 10-frame processing time is about 0.01s. In other words, it takes the C code 1ms to process one frame:

        $ time ./test
        real    0m0.010s
        user    0m0.006s
        sys     0m0.003s

    Then I have a Swift version of the same code (test.swift):

        let pixels = UInt8[](count: 640*480, repeatedValue: 128)
        let alpha = UInt8[](count: 640*480, repeatedValue: 128)
        let blended = UInt8[](count: 640*480, repeatedValue: 255)

        func blend(px: UInt8[], al: UInt8[], result: UInt8[], size: Int) {
            for(var i=0; i<size; i++) {
                var b = (UInt16)(px[i]) * (UInt16)(al[i])
                result[i] = (UInt8)(b/255)
            }
        }

        for i in 0..10 {
            blend(pixels, alpha, blended, 640*480)
        }

    The build command line is:

        xcrun swift -O3 test.swift -o test

    Here I use the same O3 optimization level to make the comparison hopefully fair. However, the resulting speed is 100 times slower:

        $ time ./test
        real    0m1.172s
        user    0m1.146s
        sys     0m0.006s

    In other words, it takes Swift ~120ms to process one frame that takes C just 1ms. I also verified that the memory initialization time in both test programs is very small compared to the time spent in the blend function. What happened?

    Read the article

  • Using "Show values as" option in Excel 2007 pivot table when source is SSAS cube?

    - by Brad
    I have an Excel 2007 pivot table showing "Year" across the top and "Month" down the side. What I am trying to do is represent the values as "% Difference" from the same month of the previous year. (Ex. If Jan-07 is $100,000 and Jan-08 is $120,000, I would like Jan-08 to show '20%'). However, every time I try to do this (using the "Show values as" tab of Value Field Settings) all of my numbers go to '#N/A'. Is there a way to do this using an Analysis Services cube as the data source? When I do this exact same thing using data on a different sheet as the data source for the pivot table, it works fine. Thanks in advance for any insight into this.

    Read the article

  • In MAYA 2009, is it possible to capture the cube rotate event?

    - by Rahul2047
    I need to call a function (Maya Python) based on the cube's rotationX. For that I have to capture the event programmatically. I tried using a while loop, but it gets stuck in the loop and nothing can be done during that time. I tried threading (Python), still the same. Can it be done this way or some other way? If yes, how? (Maya 2009 on Windows XP.)

    Some failed code references:

        import maya.cmds as cmds

        while (count < 90):
            lock = cmds.getAttr('pCube1.rotateX', lock=False)
            print lock
            count = count + 1

    And the plain Python threading attempt:

        #!/usr/bin/python
        import thread
        import time

        # Define a function for the thread
        def cubeRotateX(threadName, delay):
            count = 0
            while count < 5:
                time.sleep(delay)
                count += 1

        try:
            thread.start_new_thread(cubeRotateX, ("Thread-1", 2, ))
        except:
            print "Error: unable to start thread"

        while 1:
            pass
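    For reference, Maya's event-driven route here is a scriptJob attached to the attribute; a minimal Python sketch follows (written from general scriptJob behaviour, so it is worth verifying the details against the Maya 2009 documentation):

        import maya.cmds as cmds

        def on_rotate_changed():
            # runs every time pCube1.rotateX changes; no polling loop needed
            rot_x = cmds.getAttr('pCube1.rotateX')
            if rot_x >= 90:
                print 'rotateX reached %s degrees' % rot_x
                # call the real handler function here

        # killWithScene cleans the job up when the scene is closed
        job_id = cmds.scriptJob(attributeChange=['pCube1.rotateX', on_rotate_changed],
                                killWithScene=True)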

    Read the article

  • Blender to Collada to Assimp - Rigid (Non-skinned) Animation

    - by gareththegeek
    I am trying to get simple animations to work, exporting from Blender and importing into my application. My first attempt was as follows: open Blender at factory settings; select the default cube and insert a location keyframe; select another frame and move the cube; insert a second location keyframe; export to Collada. When I open the Collada file using Assimp it contains zero animations, even though in Blender the cube animates correctly. On my next attempt, I inserted a bone armature with a single bone, made it the parent of the cube, and animated the bone instead. Again the animation worked correctly in Blender. Assimp now lists one animation, but both keyframes have the position [0, 0, 0]. Does anyone have any suggestions as to how I can get animated (non-skinned) meshes from Blender into Assimp? My ultimate goal here is to export animated meshes from Blender, process them offline into my own model format, and load them into my SharpDX-based graphics engine.

    Read the article

  • Decal implementation

    - by dreta
    I had issues finding information about decals, so maybe this question will help others. The implementation is for a forward renderer. Could somebody confirm whether I got the decal implementation right?

    You define a cube of any dimensions that will define the projection volume in common space. You check for triangle intersection with the defined cube to receive the triangles that the projection will affect. You clip these triangles and save them. You then use matrix tricks to calculate UV coordinates for the saved triangles that will reference the texture you're projecting. To do this, you take the vectors representing the height, width and depth of the cube in common space, so that, for example, the bottom left corner is the origin. You put those in a matrix as the i, j, k unit vectors, set the translation for the cube, then you invert this matrix. You multiply the vertices of the saved triangles by this matrix, and that way you get their coordinates inside a 0-to-1-sized cube that you use as the UV coordinates. This way you have the original triangles you're projecting onto, and you have UV coordinates for them (the UV coordinates reference the texture you're projecting). Then you re-render the saved triangles onto the scene and they overwrite the area of projection with the projected image.

    Now the questions that I couldn't find answers for. Is the last point right? I've never done software clipping, but it seems error-prone enough, due to limited precision, that there will be some z-fighting occurring for the projected texture. Also, is the way of getting the UV coordinates correct?
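    To make the matrix trick concrete, here is a small numpy sketch of the mapping described above, assuming the cube is given as an origin (its bottom-left corner) plus width/height/depth vectors in common space:

        import numpy as np

        def decal_uvs(origin, width_vec, height_vec, depth_vec, vertices):
            # columns are the cube's basis vectors; the last column is its translation
            cube_to_world = np.identity(4)
            cube_to_world[:3, 0] = width_vec
            cube_to_world[:3, 1] = height_vec
            cube_to_world[:3, 2] = depth_vec
            cube_to_world[:3, 3] = origin

            world_to_cube = np.linalg.inv(cube_to_world)

            uvs = []
            for v in vertices:
                p = world_to_cube.dot(np.array([v[0], v[1], v[2], 1.0]))
                uvs.append((p[0], p[1]))   # u along width, v along height; p[2] is depth into the volume
            return uvs

        # a vertex in the middle of a unit cube at the origin maps to (0.5, 0.5)
        print(decal_uvs((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), [(0.5, 0.5, 0.25)]))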

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far?

    My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normal; however, I would like to render a cube. Also, this still doesn't work for texture coordinates.

    Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
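    As a small illustration of the duplicate-per-face layout (a plain Python sketch with placeholder coordinates for one face only), a cube with hard edges ends up with 4 vertices per face, 24 in total, while the index buffer still shares vertices within each face:

        def build_cube(faces):
            # faces maps a face name to (face_normal, [four corner positions])
            positions, normals, indices = [], [], []
            for normal, corners in faces.values():
                base = len(positions)
                for corner in corners:
                    positions.append(corner)
                    normals.append(normal)      # every corner of this face shares its normal
                # two triangles per face, indexing only this face's four vertices
                indices += [base, base + 1, base + 2, base, base + 2, base + 3]
            return positions, normals, indices

        faces = {
            "front": ((0, 0, 1), [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]),
            # ...the remaining five faces follow the same pattern...
        }
        positions, normals, indices = build_cube(faces)
        print(len(positions), len(indices))   # 4 and 6 for one face; 24 and 36 for all six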

    Read the article

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - Code updated below, but I still need help adjusting my math.

    I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, then when it moves forwards it should travel along that 30-degree heading). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

        glm::vec3 vel; // velocity vector

        void renderMovingCube(){
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube;

            // move cube
            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos*vel);

            // bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn

            // pass matrix to shader
            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]);

            movingCube.render(); // draw
            glUseProgram(0);
        }

    Keyboard input:

        void keyboard()
        {
            char BACKWARD = keys['S'];
            char FORWARD = keys['W'];
            char ROT_LEFT = keys['A'];
            char ROT_RIGHT = keys['D'];

            if (FORWARD) // W - move forwards
            {
                globalPos += vel;
                //globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) // S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) // A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) // D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which it does) but then move forwards in that direction.
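    For reference, the heading math on its own in a small Python sketch (the axis conventions here are an assumption and vary by engine; the key point is that a yaw about the Y axis should produce a forward vector in the XZ plane rather than the XY plane):

        import math

        def forward_vector(yaw_radians):
            # assumes yaw = 0 faces down +Z; swap or negate components to match your setup
            return (math.sin(yaw_radians), 0.0, math.cos(yaw_radians))

        def step(position, yaw_radians, speed):
            fx, fy, fz = forward_vector(yaw_radians)
            return (position[0] + fx * speed,
                    position[1] + fy * speed,
                    position[2] + fz * speed)

        # after turning 30 degrees, moving "forwards" advances along that 30-degree heading
        print(step((0.0, 0.0, 0.0), math.radians(30), 1.0))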

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >