Search Results

Search found 1266 results on 51 pages for 'shape'.

Page 26 of 51.

  • Is it possible to make jQuery keydown respond faster?

    - by Drew Paul
    I am writing a simple page with jQuery and HTML5 canvas where I move a shape on the canvas by pressing 'w' for up, 's' for down, 'a' for left, and 'd' for right. I have it all working, but I would like the shape to start moving at a constant speed as soon as a key is struck. Right now there is some kind of hold period before the movement starts. How can I get the movement to occur immediately? Here is the important part of my code:

    ```javascript
    var c = document.getElementById("myCanvas");
    var ctx = c.getContext("2d");

    // keypress movements
    var xtriggered = 0;
    var keys = {};
    var north = -10;
    var east = 10;
    var flipednorth = 0;

    $(document).ready(function(e) {
        $("input").keydown(function(event) {
            keys[event.which] = true;
            if (event.which == 13) {
                event.preventDefault();
            }
            if (event.which == 87) { north++; flipednorth--; } // press w for north
            if (event.which == 83) { north--; flipednorth++; } // press s for south
            if (event.which == 68) { east++; }                 // press d for east
            if (event.which == 65) { east--; }                 // press a for west

            var msg = 'x: ' + flipednorth * 5 + ' y: ' + east * 5;
            ctx.beginPath();
            ctx.arc(east * 6, flipednorth * 6, 40, 0, 2 * Math.PI);
            ctx.stroke();
            $('#soul2').html(msg);
            $('#soul3').html(event.which);
            $("input").css("background-color", "#FFFFCC");
        });
        $("input").keyup(function(event) {
            delete keys[event.which];
            $("input").css("background-color", "#D6D6FF");
        });
    });
    ```

    Please let me know if I shouldn't be posting code this lengthy.
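
    The usual cure (a hedged sketch, not from the original post): the "hold period" is the OS key-repeat delay, because keydown only re-fires at the repeat rate. Decouple movement from the events: record key state in the `keys` map and move the shape from a timer, so motion begins on the first event and stays at a constant speed:

    ```javascript
    // Sketch only: reuses c, ctx and keys from the snippet above.
    var x = 60, y = 60;
    var speed = 3; // pixels per tick -- an assumed value

    $(document).keydown(function (event) { keys[event.which] = true; });
    $(document).keyup(function (event) { delete keys[event.which]; });

    setInterval(function () {
        if (keys[87]) { y -= speed; } // w: up
        if (keys[83]) { y += speed; } // s: down
        if (keys[65]) { x -= speed; } // a: left
        if (keys[68]) { x += speed; } // d: right
        ctx.clearRect(0, 0, c.width, c.height); // erase the previous frame
        ctx.beginPath();
        ctx.arc(x, y, 40, 0, 2 * Math.PI);
        ctx.stroke();
    }, 16); // roughly 60 updates per second
    ```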

  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on Google, or by searching here. So, I'm asking you: how should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry.

    Details: To clarify what I'm talking about, I'll give a simple example. Say I want to implement a Heaviside step function, i.e. a function on the real axis which is 0 on the negative side, 1 on the positive side, and, from case to case, either 0, 0.5, or 1 at the point 0.

    Implementation

    Masking: The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector.

    ```python
    from numpy import *

    def step_mask(x, limit=+1):
        """Heaviside step function.

        y = 0 if x < 0
        y = 1 if x > 0
        See below for x == 0.

        Arguments:
        x      Evaluate the function at these points.
        limit  Which limit at x == 0?
               limit > 0:  y = 1
               limit == 0: y = 0.5
               limit < 0:  y = 0

        Return:
        The values corresponding to x.
        """
        b = broadcast(x, limit)
        out = zeros(b.shape)
        out[x > 0] = 1
        mask = (limit > 0) & (x == 0)
        out[mask] = 1
        mask = (limit == 0) & (x == 0)
        out[mask] = 0.5
        mask = (limit < 0) & (x == 0)
        out[mask] = 0
        return out
    ```

    List comprehension: The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions.

    ```python
    def step_comprehension(x, limit=+1):
        b = broadcast(x, limit)
        out = empty(b.shape)
        out.flat = [(1   if x_ > 0 else
                    (0   if x_ < 0 else
                    (1   if l_ > 0 else
                    (0.5 if l_ == 0 else
                     0))))
                    for x_, l_ in b]
        return out
    ```

    For loop: And finally, the most naive way is a for loop. It's probably the most readable option, but Python for loops are anything but fast, and hence a really bad idea in numerics.

    ```python
    def step_for(x, limit=+1):
        b = broadcast(x, limit)
        out = empty(b.shape)
        for i, (x_, l_) in enumerate(b):
            if x_ > 0:
                out[i] = 1
            elif x_ < 0:
                out[i] = 0
            elif l_ > 0:
                out[i] = 1
            elif l_ < 0:
                out[i] = 0
            else:
                out[i] = 0.5
        return out
    ```

    Test: First of all, a brief test to see if the output is correct.

    ```python
    >>> x = array([-1, -0.1, 0, 0.1, 1])
    >>> step_mask(x, +1)
    array([ 0.,  0.,  1.,  1.,  1.])
    >>> step_mask(x, 0)
    array([ 0. ,  0. ,  0.5,  1. ,  1. ])
    >>> step_mask(x, -1)
    array([ 0.,  0.,  0.,  1.,  1.])
    ```

    It is correct, and the other two functions give the same output.

    Performance: How about efficiency? These are the timings:

    ```python
    In [45]: xl = linspace(-2, 2, 500001)
    In [46]: %timeit step_mask(xl)
    10 loops, best of 3: 19.5 ms per loop
    In [47]: %timeit step_comprehension(xl)
    1 loops, best of 3: 1.17 s per loop
    In [48]: %timeit step_for(xl)
    1 loops, best of 3: 1.15 s per loop
    ```

    The masked version performs best, as expected. However, I'm surprised that the comprehension is on the same level as the for loop.

    Zero-rank arrays: But 0-rank arrays pose a problem. Sometimes you want to call the function with scalar input, and preferably without having to wrap every scalar in at least a 1-D array.

    ```python
    >>> step_mask(1)
    Traceback (most recent call last):
      File "<ipython-input-50-91c06aa4487b>", line 1, in <module>
        step_mask(1)
      File "script.py", line 22, in step_mask
        out[x>0] = 1
    IndexError: 0-d arrays can't be indexed.
    >>> step_for(1)
    Traceback (most recent call last):
      File "<ipython-input-51-4e0de4fcb197>", line 1, in <module>
        step_for(1)
      File "script.py", line 55, in step_for
        out[i] = 1
    IndexError: 0-d arrays can't be indexed.
    >>> step_comprehension(1)
    array(1.0)
    ```

    Only the list comprehension can handle 0-rank arrays; the other two versions would need special-case handling for them. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on inputs as arbitrary as possible; who knows which parameters I'll want to iterate over at some point.

    Question: What is the best way to implement a function like the one above? Is there a way to avoid "if scalar then ..." special cases? I'm not looking for a built-in Heaviside; it's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, weave & Co., and implementing directly in C. However, the performance of the masked version above is sufficient for the moment, and for the moment I would like to keep things as simple as possible.
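
    One pattern that meets both requirements (a hedged sketch, not from the original post): nested `np.where` calls keep the vectorized speed of the masked version while accepting scalars, 0-d arrays, and broadcast combinations alike, because `where` broadcasts its arguments internally and never indexes the output:

    ```python
    import numpy as np

    def step_where(x, limit=+1):
        # Value at x == 0, chosen by the sign of `limit` (broadcasts with x).
        at_zero = np.where(limit > 0, 1.0, np.where(limit == 0, 0.5, 0.0))
        # 1 on the positive side, 0 on the negative side, `at_zero` at 0.
        return np.where(x > 0, 1.0, np.where(x < 0, 0.0, at_zero))

    print(step_where(1))                        # scalar in -> array(1.0), no IndexError
    print(step_where(np.array([-1, 0, 1]), 0))  # -> [0.  0.5 1. ]
    ```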

  • Do Silverlight APIs exist for diagramming?

    - by nw
    Do any Silverlight APIs exist to facilitate development of a custom browser-based diagramming app? It seems a shame to build something from scratch with shape primitives (such as this example), but I've searched Google and can't find much.

  • C++ Filling a 1D array to represent an n-dimensional object based on a straight line segment

    - by Ben
    I'm struggling to find a good way to put this question, but here goes. I'm making a system that uses a 1D array implemented as `double * parts_ = new double[some_variable];`. I want to use this to hold coordinates for a particle system that can run in various dimensions. What I want is a generic fill algorithm that fills the array, in n dimensions, with a common increment in every direction, up to a variable size. Examples will serve best, I think. Consider the case where the number of particles stored by the array is 4:

    In 1D this produces 4 elements in the array, because each particle has one coordinate: {0, 25, 50, 75}

    In 2D this produces 8 elements in the array, because each particle has two coordinates: {0, 0, 0, 25, 25, 0, 25, 25}

    In 3D this produces 12 elements in the array, because each particle now has three coordinates: {0, 0, 0, 0, 0, 25, 0, 0, 50, ... }

    These examples are still not quite accurate, but they hopefully suffice. The way I would do this normally for two dimensions:

    ```cpp
    int i = 0;
    for (int x = 0; x < parts_size_ / dims_ / dims_ * 25; x += 25) {
        for (int y = 0; y < parts_size_ / dims_ / dims_ * 25; y += 25) {
            parts_[i]     = x;
            parts_[i + 1] = y;
            i += 2;
        }
    }
    ```

    How can I implement this for n dimensions, where 25 can be any number? The straight-line part is because it seems logical to me that a line is a regular shape in 1D, as a square is in 2D and a cube is in 3D. It seems to follow that similar shapes in this family exist for 4D and higher dimensions, reachable via a similar fill pattern. This is the shape I wish my array to represent.
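
    One generic approach (a hedged sketch under the question's assumptions; `fill_grid`, `side`, and `step` are illustrative names): treat the coordinate index as an odometer with `dims` digits, each running from 0 to `side - 1`, and emit `index[d] * step` for every digit on each tick. This reproduces the nested-loop pattern for any dimension count:

    ```cpp
    #include <cstddef>
    #include <vector>

    // Fill a flat array with a regular grid of particle coordinates in any
    // number of dimensions, carrying from the last axis into earlier ones.
    std::vector<double> fill_grid(std::size_t dims, std::size_t side, double step) {
        std::size_t total = 1;
        for (std::size_t d = 0; d < dims; ++d) total *= side;   // side^dims particles

        std::vector<double> parts;
        parts.reserve(total * dims);

        std::vector<std::size_t> index(dims, 0);                // current grid coordinate
        for (std::size_t n = 0; n < total; ++n) {
            for (std::size_t d = 0; d < dims; ++d)              // emit one particle
                parts.push_back(index[d] * step);
            for (std::size_t d = dims; d-- > 0; ) {             // advance the odometer
                if (++index[d] < side) break;                   // no carry needed
                index[d] = 0;                                   // wrap, carry into d-1
            }
        }
        return parts;
    }
    ```

    For `fill_grid(2, 2, 25.0)` this yields {0, 0, 0, 25, 25, 0, 25, 25}, matching the 2D example above.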

  • Set a rounded border on an Android TextView that already has a background color

    - by vaibhav
    I want a TextView to have a rounded border. This can be done by using a drawable, specifying a shape in the drawable, and then using the drawable as the background of the TextView: `android:background="@layout/border"` (also shown here). However, my TextView already has a background color (which is gray), so I'm unable to use the above method to set a rounded border. Is there another method that lets the background color of the TextView remain gray while also surrounding it with a rounded border?
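
    A common way out (hedged; this is not from the original post, and the file name and colors are assumptions): make the gray fill part of the same shape drawable as the border, so one background resource supplies both:

    ```xml
    <!-- res/drawable/gray_rounded_border.xml -- hypothetical file name -->
    <shape xmlns:android="http://schemas.android.com/apk/res/android"
           android:shape="rectangle">
        <solid android:color="#888888" />  <!-- assumed gray; substitute the actual color -->
        <stroke android:width="1dp" android:color="#000000" />
        <corners android:radius="6dp" />
    </shape>
    ```

    The TextView then drops its separate background color and references the drawable instead: `android:background="@drawable/gray_rounded_border"`.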

  • Change the background of an ImageView (the old image stays there!)

    - by nourdine
    How can I change the background of an ImageView from Java? I have an ImageView, and at a certain point I need to change the image that it displays (which is set in the styles). I tried to do it like this: `placeHolder.setImageDrawable(myDrawb);` but it looks like the old image remains there, and it is partially covered by the new one (which in my case has a different shape). Hope you guys can help! Cheers
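
    A likely explanation (hedged; not from the original post): `setImageDrawable()` replaces the ImageView's foreground `src`, while a background set through the style stays where it is and shows through wherever the new image has a different shape. Replacing or clearing the background explicitly is the usual fix; `R.id.place_holder` and `R.drawable.new_background` below are made-up names:

    ```java
    ImageView placeHolder = (ImageView) findViewById(R.id.place_holder); // hypothetical id
    placeHolder.setImageDrawable(myDrawb);                        // swap the foreground image
    placeHolder.setBackgroundResource(R.drawable.new_background); // replace the old background...
    // ...or clear it entirely:
    placeHolder.setBackgroundDrawable(null); // pre-API 16; use setBackground(null) on 16+
    ```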

  • Create a photo collage with a PHP script

    - by ToughPal
    Does anybody know of a PHP script or open-source method for creating a photo collage like http://www.shapecollage.com/ ? Just a script that takes a list of images and creates something like this example, http://www.shapecollage.com/collages/collage-popart.jpg - no need for all the shape options.
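
    For the "just scatter the photos" case, plain GD is enough; below is a hedged sketch (the input directory, canvas size, and output name are assumptions, and shapecollage-style packing into an outline would need a placement algorithm on top of this):

    ```php
    <?php
    // Crude GD-based collage: scatter rotated thumbnails onto a white canvas.
    $files  = glob('photos/*.jpg');                    // assumed input directory
    $canvas = imagecreatetruecolor(800, 600);
    imagefill($canvas, 0, 0, imagecolorallocate($canvas, 255, 255, 255));

    foreach ($files as $file) {
        $photo = imagecreatefromjpeg($file);
        $thumb = imagescale($photo, 160);              // 160px-wide thumbnail
        $thumb = imagerotate($thumb, rand(-25, 25), 0xFFFFFF); // slight random tilt
        imagecopy($canvas, $thumb,
                  rand(0, 800 - imagesx($thumb)),      // random position on the canvas
                  rand(0, 600 - imagesy($thumb)),
                  0, 0, imagesx($thumb), imagesy($thumb));
        imagedestroy($photo);
        imagedestroy($thumb);
    }
    imagejpeg($canvas, 'collage.jpg', 90);
    ```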

  • LINQ-to-SQL eagerly load entire object graph

    - by Paddy
    I have a need to load an entire LINQ-to-SQL object graph from a certain point downwards, loading all child collections and the objects within them, etc. This is going to be used to dump the object structure and data out to XML. Is there a way to do this without hand-writing a large, hard-coded set of DataLoadOptions to 'shape' my data?
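
    One possibility (a hedged sketch, not a confirmed answer from the post): generate the DataLoadOptions from the LINQ-to-SQL mapping metadata instead of hard-coding them. Only parent-to-child links are added, since LoadWith rejects cycles; the method name `EagerLoadAll` is illustrative:

    ```csharp
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq.Expressions;

    static DataLoadOptions EagerLoadAll(DataContext context)
    {
        var options = new DataLoadOptions();
        foreach (MetaTable table in context.Mapping.GetTables())
        {
            foreach (MetaAssociation assoc in table.RowType.Associations)
            {
                if (assoc.IsForeignKey) continue;   // skip child->parent back-references

                // Build e => e.ChildMember for the non-generic LoadWith overload.
                var entity = Expression.Parameter(table.RowType.Type, "e");
                var lambda = Expression.Lambda(
                    Expression.PropertyOrField(entity, assoc.ThisMember.Name), entity);
                options.LoadWith(lambda);           // eagerly load this association
            }
        }
        return options;
    }
    ```

    Usage would be `db.LoadOptions = EagerLoadAll(db);` before the first query; self-referencing schemas would still need pruning, since LINQ-to-SQL throws on cyclic LoadWith graphs.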

  • How to create a tags box like mixx & delicious?

    - by David
    I tried searching Google, but no one has talked about this. I want a CSS solution for creating a liquid tag box like the orange ones on this page: http://www.mixx.com/stories/10402914/haiti_us_gov_t_grants_matching_3_to_1_donations_to_worldvision_for_haiti - so that even if the word is long, the tag box will fit it. I want the same shape. Thanks
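
    The liquid behaviour usually comes down to `display: inline-block`, which makes the box grow to fit its word. A hedged sketch (the class name and colors are assumptions, not taken from mixx.com):

    ```css
    .tag {
        display: inline-block;   /* the box shrinks or grows to fit the word */
        padding: 2px 8px;
        margin: 2px;
        background: #f60;        /* assumed orange */
        color: #fff;
        border-radius: 3px;      /* rounded ends; corner images work for older browsers */
        white-space: nowrap;     /* keep each tag on one line */
    }
    ```

    The markup would then be plain inline elements, e.g. `<a class="tag" href="#">haiti</a>`.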

  • How to get thumbnails of uploaded images?

    - by Bhushan Firake
    I am developing an image gallery plugin in which the user can see thumbnails of the uploaded pics, and on hovering over the NEXT and PREVIOUS arrows can see the thumbnail of the respective images. Is it feasible to generate the thumbnails automatically through code after the pics have been uploaded by the admin? Please suggest available libraries for getting thumbnails where the size and shape of the uploaded pic can be modified. The plugin should be in either C# or jQuery.
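
    Server-side, System.Drawing can do this with no extra library. A hedged sketch (the method name, paths, and fixed box size are assumptions) that would run right after the upload is saved:

    ```csharp
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;

    static void SaveThumbnail(string sourcePath, string thumbPath, int width, int height)
    {
        using (var source = Image.FromFile(sourcePath))
        using (var thumb = new Bitmap(width, height))
        using (var g = Graphics.FromImage(thumb))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic; // smooth downscale
            g.DrawImage(source, 0, 0, width, height);   // resize/reshape into the box
            thumb.Save(thumbPath, ImageFormat.Jpeg);
        }
    }
    ```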

  • Reduce Triangles in DirectX 9

    - by Himadri
    Hello everyone, I have a 3D object with about 2000 or 3000 triangles. I want to reduce the number of triangles without affecting the shape of the object. For example, I have two triangles:

    (1, 1.5, 2) (1.5, 1.5, 2) (1.7, 2, 2)
    (1.5, 1.5, 2) (1.7, 2, 2) (2, 1.5, 2)

    In this case these two triangles are the same as a single triangle: (1, 1.5, 2) (2, 1.5, 2) (1.7, 2, 2). I don't want the manual method; is there any direct function or similar that will reduce my triangle list? Thank you.
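
    D3DX ships a mesh simplifier for exactly this, so no manual merging should be needed. A hedged sketch (error handling omitted; the helper name and target face count are illustrative):

    ```cpp
    #include <d3dx9.h>
    #include <vector>

    LPD3DXMESH SimplifyMesh(LPD3DXMESH mesh, DWORD targetFaces)
    {
        // The simplifier needs three adjacency entries per face.
        std::vector<DWORD> adjacency(mesh->GetNumFaces() * 3);
        mesh->GenerateAdjacency(0.0f, &adjacency[0]); // 0.0f = vertex-weld epsilon

        LPD3DXMESH simplified = NULL;
        D3DXSimplifyMesh(mesh, &adjacency[0],
                         NULL, NULL,          // default attribute and vertex weights
                         targetFaces,         // simplify down to about this many faces
                         D3DXMESHSIMP_FACE,   // MinValue counts faces, not vertices
                         &simplified);
        return simplified;                    // caller Release()s both meshes
    }
    ```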

  • How can I convert the string output into a data.frame?

    - by user2968058
    I have a data set called SIZEDIST$AVG.µm., and I have fitted this data with a Weibull curve. I have generated the quantiles using the quantile function, and now I want to access the output of this function at intervals of p = 0.01:

    ```r
    fwbl <- fitdist(SIZEDIST$AVG.µm., "weibull", start = list(shape = 0.8, scale = 1))
    fwbl
    quantwbl <- quantile(fwbl, probs = seq(.1, .99, .01))
    quantwbl
    str(quantwbl)
    ```

    Using str(quantwbl) I can visualize the output, but I can't convert it into a data.frame.
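
    Assuming `fitdist` comes from the fitdistrplus package (an assumption; the post doesn't say), `quantile()` on a fitdist object returns a list whose `quantiles` component holds the values, so it can be coerced directly. A hedged sketch:

    ```r
    # `quantiles` is the component name used by fitdistrplus;
    # check names(quantwbl) if your version differs.
    df <- data.frame(prob     = seq(.1, .99, .01),
                     quantile = unlist(quantwbl$quantiles))
    str(df)
    ```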

  • How to insert an image into a PolygonMorph?

    - by hanneswurstes
    I need to get a texture into a PolygonMorph, but these seem to require an InfiniteForm as color/filling. The InfiniteForm is no solution, as I need to rotate the PolygonMorph later on, and moving the PolygonMorph around also has side effects on the displayed texture. It would also be very useful to be able to scale the inserted texture. How would you do this without replacing the existing PolygonMorph (or at least while keeping the PolygonMorph's shape)?

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010, and many other applications are in the works.*

    This post walks you through how to create an OData feed, define a shape for the data, and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2.

    Create the Web Application

    File –› New –› Project, select "ASP.NET Empty Web Application".

    Add the Entity Data Model

    Right-click on the Web Application in the Solution Explorer and select "Add New Item..". Select "ADO.NET Entity Data Model" under "Data". Name the Model "Northwind" and click "Add". In the "Choose Model Contents" screen, select "Generate Model From Database" and click "Next". Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select "Products" in the "Choose your Database Objects" screen. Click "Finish". We are done creating our Entity Data Model; save the Northwind.edmx file created.

    Add the WCF Data Service

    Right-click on the Web Application in the Solution Explorer and select "Add New Item..". Select "WCF Data Service" from the list and call the service "DataService" (creative, huh?). Click "Add".

    Enable Access to the Data Service

    Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

    ```csharp
    public class DataService : DataService< /* TODO: put your data source class name here */ >
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            // TODO: set rules to indicate which entity sets and service operations
            // are visible, updatable, etc. Examples:
            // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
            // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
    ```

    Replace the comment that starts with "/* TODO:" with "NorthwindEntities" (the entity container name of the Model we created earlier). WCF Data Services is locked down by default, FTW! No data is exposed without you explicitly allowing it: you have to specify which entity sets you wish to expose, and with what rights, using SetEntitySetAccessRule. SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.

    ```csharp
    public class DataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
    ```

    We are done setting up our OData feed! Compile your project, then right-click on DataService.svc and select "View in Browser" to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off; you set this under Tools –› Internet Options –› Content tab. If you navigate to "Products", you should see the Products feed. Note also that URIs are case sensitive: Products works but products doesn't.

    Filtering our data

    OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

        /DataService.svc/Products?$filter=CategoryID eq 2

    At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select, and $inlinecount.

    Pre-filtering our data using Query Interceptors

    The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute, as shown below.

    ```csharp
    [QueryInterceptor("Products")]
    public Expression<Func<Product, bool>> OnReadProducts()
    {
        return o => o.Discontinued == false;
    }
    ```

    Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework:

    ```sql
    SELECT [Extent1].[ProductID] AS [ProductID],
           ...
           [Extent1].[Discontinued] AS [Discontinued]
    FROM [dbo].[Products] AS [Extent1]
    WHERE 0 = [Extent1].[Discontinued]
    ```

    Other examples of query/change interceptors can be seen here, including an example that filters data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2.

    Foot Notes

    * http://msdn.microsoft.com/en-us/data/aa937697.aspx

    † $format did not work for me. The way to get a JSON response is to include "Accept: application/json, text/javascript, */*" in the request header when making the request. This is easily done with most JavaScript libraries.
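
    These query options compose on a single URI. An illustrative request (not from the post) that filters, sorts, and pages the feed in one call:

    ```
    /DataService.svc/Products?$filter=CategoryID eq 2&$orderby=UnitPrice desc&$top=5&$inlinecount=allpages
    ```

    Here $inlinecount=allpages adds the total match count to the response, which clients can use to build pagers.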

  • Is custom development cheaper than acquiring an ERP?

    - by Luis Alberto Quilez
    The key is time. When we take on a custom development, we are thinking only about today's needs. We have a specific project and a defined functional scope, and we know the tools available to us today. We are the ones who know our company as it is today, and its processes, better than anyone, so development looks like a good option: licenses for development tools are cheap and the daily programming rate is affordable. And so we fall into the short-term trap and press ahead.

    It is quite possible that this development goes well, that we are proud of our work, and that we even proclaim to the four winds how much money we have saved. But the world does not stop, the business does not stop, and adaptation must be permanent. Our customers, internal and external, will have new demands, and our development will never be finished: we will have to integrate it with other areas, extend its functionality and scope, adapt it to new technologies, and allow the information to be analyzed, shared, and accessed from new devices. And we will watch first-hand as the development trap closes over our heads. It will never be finished; the technology we used will one day be obsolete; the pace of demands for functionality and integration will keep rising; and we will have no choice but to pour more and more resources into maintaining an in-house development that never stops consuming, that forces us to spend more and more every day, and that we cannot walk away from. Before long we have become a software development company inside our own company, with neither the financial resources to make it viable nor the human and investment capacity to respond to what the business demands.

    So let us assume, from the outset, that our company must last for many years, and let us run the cost analysis from that perspective when making the decision. We will then see that acquiring an ERP is much cheaper than custom development.

    Then there is integration. A production system requires the allocation of resources, which in turn require a development plan, training, or a payroll calculation; it also requires an accounting ledger, purchasing management, and cost allocation. And, of course, we only discover all of these points along the way, whereas in an integrated management system (an ERP) they are available from day one.

    Of course, a closed, inflexible ERP that does not let us differentiate our company is no use. We have to look for a technology partner who will walk with us, who takes on the investment in technology and keeps supplying versions and solutions that match the demands of the times, today and tomorrow, but who also lets us adapt workflows and innovate in our processes so that we can set our company apart from the competition, today and tomorrow.

    We will see that, having decided on a flexible and open ERP, the numbers work out, and in the long run acquiring an ERP is a much cheaper decision than opting for custom development.

    Luis Alberto Quilez

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I'm actually very excited about it. The term is Data Scientist, and since it's new, it's fairly undefined. I'll explain what I think it means, and why I'm excited about it.

    In general, I've found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly.

    I've divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.

    Persistence

    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it's just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it's a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.

    Processing

    Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I'm dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.

    Presentation

    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it's the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports; even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you're on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a "yes" or "no" answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.

    Why I'm excited

    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I've been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn't germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I'm not leaving Azure or distributed computing or Microsoft. In fact, I think I'm perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I'm happy to share what I learn along the way.

  • AndEngine; Box2D - high speed body overlapping, prismatic joints

    - by Visher
    I'm trying to make good suspension for my car game, but I'm running into some problems with it. At the beginning, I tried to build it from just one prismatic joint and one revolute joint per wheel, but surprisingly the prismatic joint that should move only along the Y axis also moves along the X axis if the car travels very fast, or even at low speeds if setContinuousPhysics is true. This causes the wheels to "shift back", moving them away from the axle. Now I've tried to add some bodies that will keep it in place: a suspension helper collides with the spring only, and the wheel doesn't collide with the spring, the helper, or the vehicle body. This is how I create those elements:

    ```java
    rect = new Rectangle(1100, 1350, 200, 50, getVertexBufferObjectManager());
    rect.setColor(Color.RED);
    scene.attachChild(rect);
    //rect.setRotation(90);

    Rectangle miniRect1 = new Rectangle(1102, 1355, 30, 50, getVertexBufferObjectManager());
    miniRect1.setColor(0, 0, 1, 0.5f);
    miniRect1.setVisible(true);
    scene.attachChild(miniRect1);

    Rectangle miniRect2 = new Rectangle(1268, 1355, 30, 50, getVertexBufferObjectManager());
    miniRect2.setColor(0, 0, 1, 0.5f);
    miniRect1.setVisible(true); // as in the original; probably meant miniRect2
    scene.attachChild(miniRect2);

    rectBody = PhysicsFactory.createBoxBody(physicsWorld, rect,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f));
    rectBody.setUserData("car");

    Body miniRect1Body = PhysicsFactory.createBoxBody(physicsWorld, miniRect1,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f));
    miniRect1Body.setUserData("suspension");

    Body miniRect2Body = PhysicsFactory.createBoxBody(physicsWorld, miniRect2,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 10.0f));
    miniRect2Body.setUserData("suspension");

    physicsWorld.registerPhysicsConnector(new PhysicsConnector(rect, rectBody, true, true));
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(miniRect1, miniRect1Body, true, true));
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(miniRect2, miniRect2Body, true, true));

    PrismaticJointDef miniRect1JointDef = new PrismaticJointDef();
    miniRect1JointDef.initialize(rectBody, miniRect1Body, miniRect1Body.getWorldCenter(), new Vector2(0.0f, 0.3f));
    miniRect1JointDef.collideConnected = false;
    miniRect1JointDef.enableMotor = true;
    miniRect1JointDef.maxMotorForce = 15;
    miniRect1JointDef.motorSpeed = 5;
    miniRect1JointDef.enableLimit = true;
    physicsWorld.createJoint(miniRect1JointDef);

    PrismaticJointDef miniRect2JointDef = new PrismaticJointDef();
    miniRect2JointDef.initialize(rectBody, miniRect2Body, miniRect2Body.getWorldCenter(), new Vector2(0.0f, 0.3f));
    miniRect2JointDef.collideConnected = false;
    miniRect2JointDef.enableMotor = true;
    miniRect2JointDef.maxMotorForce = 15;
    miniRect2JointDef.motorSpeed = 5;
    miniRect2JointDef.enableLimit = true;
    physicsWorld.createJoint(miniRect2JointDef);

    scene.attachChild(karoseriaSprite);

    Rectangle r1 = new Rectangle(1050, 1300, 52, 150, getVertexBufferObjectManager());
    r1.setColor(0, 1, 0, 0.5f);
    r1.setVisible(true);
    scene.attachChild(r1);
    Body r1body = PhysicsFactory.createBoxBody(physicsWorld, r1,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.001f, 0.01f));
    r1body.setUserData("suspensionHelper");
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(r1, r1body, true, true));
    WeldJointDef r1jointDef = new WeldJointDef();
    r1jointDef.initialize(r1body, rectBody, r1body.getWorldCenter());
    physicsWorld.createJoint(r1jointDef);

    Rectangle r2 = new Rectangle(1132, 1300, 136, 150, getVertexBufferObjectManager());
    r2.setColor(0, 1, 0, 0.5f);
    r2.setVisible(true);
    scene.attachChild(r2);
    Body r2body = PhysicsFactory.createBoxBody(physicsWorld, r2,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.001f, 0.01f));
    r2body.setUserData("suspensionHelper");
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(r2, r2body, true, true));
    WeldJointDef r2jointDef = new WeldJointDef();
    r2jointDef.initialize(r2body, rectBody, r2body.getWorldCenter());
    physicsWorld.createJoint(r2jointDef);

    Rectangle r3 = new Rectangle(1298, 1300, 50, 150, getVertexBufferObjectManager());
    r3.setColor(0, 1, 0, 0.5f);
    r3.setVisible(true);
    scene.attachChild(r3);
    Body r3body = PhysicsFactory.createBoxBody(physicsWorld, r3,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(1f, 0.01f, 0.01f));
    r3body.setUserData("suspensionHelper");
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(r3, r3body, true, true));
    WeldJointDef r3jointDef = new WeldJointDef();
    r3jointDef.initialize(r3body, rectBody, r3body.getWorldCenter());
    physicsWorld.createJoint(r3jointDef);

    MouseJointDef md = new MouseJointDef(); // unused in this excerpt

    Sprite wheel1 = new Sprite(
            miniRect1.getX() + miniRect1.getWidth() / 2 - wheelTexture.getWidth() / 2,
            miniRect1.getY() + miniRect1.getHeight() - wheelTexture.getHeight() / 2,
            wheelTexture, engine.getVertexBufferObjectManager());
    scene.attachChild(wheel1);
    Body wheel1body = PhysicsFactory.createCircleBody(physicsWorld, wheel1,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 5.0f));
    wheel1body.setUserData("wheel");
    Shape wheel1shape = wheel1body.getFixtureList().get(0).getShape();
    wheel1shape.setRadius(wheel1shape.getRadius() * (3.0f / 4.0f));
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(wheel1, wheel1body, true, true));

    Sprite wheel2 = new Sprite(
            miniRect2.getX() + miniRect2.getWidth() / 2 - wheelTexture.getWidth() / 2,
            miniRect2.getY() + miniRect2.getHeight() - wheelTexture.getHeight() / 2,
            wheelTexture, engine.getVertexBufferObjectManager());
    scene.attachChild(wheel2);
    Body wheel2body = PhysicsFactory.createCircleBody(physicsWorld, wheel2,
            BodyDef.BodyType.DynamicBody, PhysicsFactory.createFixtureDef(10.0f, 0.01f, 5.0f));
    wheel2body.setUserData("wheel");
    Shape wheel2shape = wheel2body.getFixtureList().get(0).getShape();
    wheel2shape.setRadius(wheel2shape.getRadius() * (3.0f / 4.0f));
    physicsWorld.registerPhysicsConnector(new PhysicsConnector(wheel2, wheel2body, true, true));

    RevoluteJointDef frontWheelRevoluteJointDef = new RevoluteJointDef();
    frontWheelRevoluteJointDef.initialize(wheel1body, miniRect1Body, wheel1body.getWorldCenter());
    frontWheelRevoluteJointDef.collideConnected = false;

    RevoluteJointDef rearWheelRevoluteJointDef = new RevoluteJointDef();
    rearWheelRevoluteJointDef.initialize(wheel2body, miniRect2Body, wheel2body.getWorldCenter());
    rearWheelRevoluteJointDef.collideConnected = false;
    rearWheelRevoluteJointDef.motorSpeed = 2050;
    rearWheelRevoluteJointDef.maxMotorTorque = 3580;

    physicsWorld.createJoint(frontWheelRevoluteJointDef);
    Joint j = physicsWorld.createJoint(rearWheelRevoluteJointDef);
    rearWheelRevoluteJoint = (RevoluteJoint) j;

    r1body.setBullet(true);
    r2body.setBullet(true);
    r3body.setBullet(true);
    miniRect1Body.setBullet(true);
    miniRect2Body.setBullet(true);
    rectBody.setBullet(true);
    ```

    At low speeds it's OK, but at high speed the vehicle can even flip around on flat ground. Is there a way to make this work better?
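
    One knob worth trying (a hedged suggestion, not from the post, and the constructor shown is an assumption about the AndEngine physics extension, so verify it against your version): prismatic joints drift sideways when the solver runs out of iterations at high velocities, so a fixed-step world with more velocity/position iterations often keeps the wheels on their axle:

    ```java
    // Assumed AndEngine API; the defaults are typically 8/8 iterations.
    final PhysicsWorld physicsWorld = new FixedStepPhysicsWorld(
            60,                                          // physics steps per second
            new Vector2(0, SensorManager.GRAVITY_EARTH), // gravity
            false,                                       // allowSleep
            16, 16);                                     // velocity/position iterations
    ```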

  • Big Data Matters with ODI12c

    - by Madhu Nair
    contributed by Mike Eisterer

    On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle's Data Integration portfolio of solutions, particularly Big Data integration.

    Why Big Data = Big Business

    Organizations are gaining greater insights and actionability through the increased storage, processing and analytical benefits offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As further data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations are in need of scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data, heterogeneously and in Big Data environments, through:

    • The ability for existing ODI and SQL developers to leverage new Big Data technologies.
    • A metadata-focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions.
    • Integration between many heterogeneous environments and technologies such as HDFS and Hive.
    • Generation of Hive Query Language.

    Working with Big Data using Knowledge Modules

    ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. Steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs specifically designed to integrate with Big Data technologies. The Big Data KMs include:

    • Check Knowledge Module
    • Reverse Engineer Knowledge Module
    • Hive Transform Knowledge Module
    • Hive Control Append Knowledge Module
    • File to Hive (LOAD DATA) Knowledge Module
    • File-Hive to Oracle (OLH-OSCH) Knowledge Module

    Nothing to beat an example

    To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as "CUSTID to custid", have been mapped over.

    Figure 1: Defining the Mapping

    Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs have been made available to the project through the KM import process. Generally, this is done prior to defining the mapping.

    Figure 2: Importing the Big Data Knowledge Modules

    Following the import, the KMs are available in the Designer Navigator.
    Figure 3: The Project View in Designer, Showing Installed IKMs

    Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the Target. In this case MOVIAPP_LOG_STAGE is the target of our mapping.

    Figure 4: Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target

    Alternative KMs may be selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and ready to run. We will see more in a future blog about running a mapping to load Hive.

    To complete this quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by defining, inside the KM, the process and steps which would normally have to be manually developed, tested and implemented. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS, and then performing the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4.

    Figure 5: Process and Steps Managed by the KM

    What's Next

    Big Data, the shape-shifting business challenge that it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon using the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop:

    • Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets
    • Generating Transformations using Hive Query Language
    • Loading Oracle from Hadoop Sources

    For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html. Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.
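
    For a feel of what an IKM File to Hive (LOAD DATA) step generates under the covers, here is an illustrative HiveQL fragment (the path and table names are assumptions, not output captured from ODI):

    ```sql
    -- Load the staged HDFS file into the Hive target table in one step.
    LOAD DATA INPATH '/user/odi/movieapp/movie_log.txt'
    OVERWRITE INTO TABLE moviapp_log_stage;
    ```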
