Search Results

Search found 321 results on 13 pages for 'scalar'.

Page 5 of 13

  • LINQ - Querying a list filtered via a Many-to-Many relationship

    - by user118190
    Please excuse the context of my question for I did not know how to exactly word it. To not complicate things further, here's my business requirement: "bring me back all the Employees where they belong in Department "X". So when I view this, it will display all of the Employees that belong to this Department. Here's my environment: Silverlight 3 with Entity Framework 1.0 and WCF Data Services 1.0. I am able to load and bind all kinds of lists (simple), no problem. I don't feel that my environment matters and that's why I feel it is a LINQ question more than the technologies. My question is for scenarios where I have 3 tables linked, i.e. entities (collections). For example, I have this in my EDM: Employee--EmployeeProject--Project. Here's the table design from the Database: Employee (table1) ------------- EmployeeID (PK) FirstName other Attributes ... EmployeeProject (table2) ------------- EmployeeProjectID (PK) EmployeeID (FK) ProjectID (FK) AssignedDate other Attributes ... Project (table3) ------------- ProjectID (PK) Name other Attributes ... Here's the EDM design from Entity Framework: ------------------------ Employee (entity1) ------------------------ (Scalar Properties) ------------------- EmployeeID (PK) FirstName other Attributes ... ------------------- (Navigation Properties) ------------------- EmployeeProjects ------------------------ EmployeeProject (entity2) ------------------------ (Scalar Properties) ------------------- EmployeeProjectID (PK) AssignedDate other Attributes ... ------------------- (Navigation Properties) ------------------- Employee Project ------------------------ Project (entity3) ------------------------ (Scalar Properties) ------------------- ProjectID (PK) Name other Attributes ... ------------------- (Navigation Properties) ------------------- EmployeeProjects So far, I have only been able to do: var filteredList = Context.Employees .Where(e => e.EmployeeProjects.Where(ep => ep.Project.Name == "ProjectX")) NOTE: I have updated the syntax of the query after John's post. As you can see, I can only query, the related entity (EmployeeProjects). All I want is being able to filter to Project from the Employee entity. Thanks for any advice.

    Read the article

  • How to cast a C struct to another struct type if their memory sizes are equal?

    - by Eonil
    I have 2 matrix structs that hold equal data but in different forms, like these: // Matrix type 1. typedef float Scalar; typedef struct { Scalar e[4]; } Vector; typedef struct { Vector e[4]; } Matrix; // Matrix type 2 (you may know this if you're an iPhone developer) struct CATransform3D { CGFloat m11, m12, m13, m14; CGFloat m21, m22, m23, m24; CGFloat m31, m32, m33, m34; CGFloat m41, m42, m43, m44; }; typedef struct CATransform3D CATransform3D; Their memory sizes are equal, so I believe there is a way to convert between these types without any pointer operations or copying, like this: // Implemented from external lib. CATransform3D CATransform3DMakeScale (CGFloat sx, CGFloat sy, CGFloat sz); Matrix m = (Matrix)CATransform3DMakeScale ( 1, 2, 3 ); Is this possible? Currently the compiler prints an "error: conversion to non-scalar type requested" message.

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it): You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports You can return persistent entities from a stored procedure and there are a couple ways to do that You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious You can reuse your named query result mapping other places (avoid duplication) Caching your query results is not at all obvious Testing to see if your cache is working is a pain NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going. The Stored Procedure NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored as are any result sets after the first. Other than that, it's nothing special. CREATE PROCEDURE GetPopularProducts @StartDate DATETIME, @MaxResults INT AS BEGIN SELECT [ProductId], [ProductName], [ImageUrl] FROM SomeTableWithJoinsEtc END The Result Class - PopularProduct You have two options to transport your query results to your view (or wherever is the final destination): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to the related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view. public class PopularProduct { public virtual int ProductId { get; set; } public virtual string ProductName { get; set; } public virtual string ImageUrl { get; set; } } The NHibernate Mappings (hbm) Next up, we need to let NHibernate know about the query and where the results will go. 
Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element which lets NHibernate know about our entity class. <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/> <resultset name="PopularProductResultSet"> <return-scalar column="ProductId" type="System.Int32"/> <return-scalar column="ProductName" type="System.String"/> <return-scalar column="ImageUrl" type="System.String"/> </resultset> </hibernate-mapping>  And now the PopularProductsMap: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal"> <query-param name="StartDate" type="System.DateTime" /> <query-param name="MaxResults" type="System.Int32" /> exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults </sql-query> </hibernate-mapping>  The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute. The Query Class – PopularProductsQuery So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated Query class, so here's what it looks like: public class PopularProductsQuery : IPopularProductsQuery { private static readonly IResultTransformer ResultTransformer; private readonly ISessionBuilder _sessionBuilder;   static PopularProductsQuery() { ResultTransformer = Transformers.AliasToBean<PopularProduct>(); }   public PopularProductsQuery(ISessionBuilder sessionBuilder) { _sessionBuilder = sessionBuilder; }   public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults) { var session = _sessionBuilder.GetSession(); var popularProducts = session .GetNamedQuery("GetPopularProducts") .SetCacheable(true) .SetCacheRegion("PopularProductsCacheRegion") .SetCacheMode(CacheMode.Normal) .SetReadOnly(true) .SetResultTransformer(ResultTransformer) .SetParameter("StartDate", startDate.Date) .SetParameter("MaxResults", maxResults) .List<PopularProduct>();   return popularProducts; } }  Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know based on your parameters whether to use its cached version or a fresh version. 
The Configuration – hibernate.cfg.xml and Web.config You need to explicitly enable second-level caching in your hibernate configuration: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> [...] <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property> <property name="cache.use_query_cache">true</property> <property name="cache.use_second_level_cache">true</property> [...] </session-factory> </hibernate-configuration> Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployment, we're using SysCache which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region: <syscache> <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" /> </syscache> Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy). The Payoff That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment. Testing Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation: a = [2,6] b = [1,4] c = [0,8] a . b . c = (2*6)+(1*4)+(0*8) = 12 + 4 + 0 = 16 What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we times a unit vector by to get a vector that has a scaled up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows: sqrt( ax * ax + ay * ay ) + sqrt( bx * bx + by * by ) + sqrt( cx * cx + cy * cy) sqrt( 2 * 2 + 6 * 6 ) + sqrt( 1 * 1 + 4 * 4 ) + sqrt( 0 * 0 + 8 * 8) sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64) sqrt( 40 ) + sqrt( 17 ) + sqrt( 64) 6.3 + 4.1 + 8 10.4 + 8 18.4 So I don't really get this diagram: Attempting with sensible numbers: a = [1,0] b = [4,3] a . b = (1*0) + (4*3) = 0 + 12 = 12 So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0] sqrt( x*x + y*y ) sqrt( 4*4 + 0*0 ) sqrt( 16 + 0 ) 4 So what is 12 describing?
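
    For reference, the standard dot product definition can be worked through on the second example above (a = [1,0], b = [4,3]); this is a textbook sketch added for clarity, not part of the original post:

        \[ \mathbf{a}\cdot\mathbf{b} = a_x b_x + a_y b_y = (1)(4) + (0)(3) = 4 \]
        \[ \mathbf{a}\cdot\mathbf{b} = \lVert\mathbf{a}\rVert \, \lVert\mathbf{b}\rVert \cos\theta \;\Rightarrow\; \cos\theta = \frac{4}{(1)(5)} = 0.8 \]

    Because a is a unit vector here, the value 4 is the length of the projection of b onto a, which is what diagrams of the dot product usually illustrate.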

    Read the article

  • Writing a program in C++ and I need help [migrated]

    - by compscinoob
    So I am a new to this. I am trying to write a program with a function double_product(vector< double a, vector< double b) that computes the scalar product of two vectors. The scalar product is $a_{0}b_{0}+a_{1}b_{1}+...a_{n-1}b_{n-1}$. Here is what I have. It is a mess, but I am trying! #include<iostream> #include<vector> using namespace std; class Scalar_product { public: Scalar_product(vector<double> a, vector<bouble> b); }; double scalar_product(vector<double> a, vector<double> b) { double product = 0; for (int i=0; i <=a.size()-1; i++) for (int i=0; i <=b.size()-1; i++) product = product + (a[i])*(b[i]); return product; } int main() { cout << product << endl; return 0; }
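
    To make the intended computation concrete, here is a minimal sketch of the algorithm in Python rather than C++ (an illustration only, not the asker's code): the key point is a single loop that pairs a[i] with b[i] and sums the products, instead of two nested loops.

        def scalar_product(a, b):
            # assumes len(a) == len(b); pair a[i] with b[i] and sum the products
            total = 0.0
            for x, y in zip(a, b):
                total += x * y
            return total

        print(scalar_product([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0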

    Read the article

  • In developing a soap client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense - for instance when multiple values are being returned: GetDetailsResponse Object ( Results Object ( [TotalResults] => 10 [NextPage] => 2 ) [Details] => Array ( [0] => Detail Object ( [Id] => 1 ) ) ) But some of the methods return a single scalar value or a single object or array wrapped in a response object: GetThingummyIdResponse Object ( [ThingummyId] => 42 ) In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers: $response->Details->Detail[0]->Contents->Item[5]->Id And if I unwrap them before passing them back I can strip out a layer from consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bugs me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is, am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unnecessary layers of indirection/abstraction worthwhile?
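
    To make the two styles concrete, here is a small illustrative sketch (Python with hypothetical names, since the original client is PHP): the first method returns the generated wrapper object, the second unwraps single scalar results before handing them back.

        class GetThingummyIdResponse:
            # stand-in for the wrapper object produced by the SOAP layer
            def __init__(self, thingummy_id):
                self.ThingummyId = thingummy_id

        class Proxy:
            def get_thingummy_id_wrapped(self):
                # style 1: hand the response object straight back to the caller
                return GetThingummyIdResponse(42)

            def get_thingummy_id(self):
                # style 2: unwrap the single scalar before returning it
                return self.get_thingummy_id_wrapped().ThingummyId

        p = Proxy()
        print(p.get_thingummy_id_wrapped().ThingummyId)  # 42, caller drills in
        print(p.get_thingummy_id())                      # 42, proxy unwraps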

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted on the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel’s functions and contains many functions that are not available in EXCEL. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let’s assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here’s what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features about XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn’t need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that’s one way to construct the SQL. Just copy the function call from the cell in the spreadsheet and paste it into SMS and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel. 
You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show 4 sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also Intellisense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel

    Read the article

  • Iterating Oracle collections of objects without exploding them

    - by Scott Bailey
    I'm using Oracle object data types to represent a timespan or period. And I've got to do a bunch of operations that involve working with collections of periods. Iterating over collections in SQL is significantly faster than in PL/SQL. CREATE TYPE PERIOD AS OBJECT ( beginning DATE, ending DATE, ... some member functions...); CREATE TYPE PERIOD_TABLE AS TABLE OF PERIOD; -- sample usage SELECT <<period object>>.contains(period2) FROM TABLE(period_table1) t The problem is that the TABLE() function explodes the objects into scalar values, and I really need the objects instead. I could use the scalar values to recreate the objects but this would incur the overhead of re-instantiating the objects. And the period is designed to be subclassed so there would be additional difficulty trying to figure out what to initialize it as. Is there another way to do this that doesn't destroy my objects?

    Read the article

  • Using Python tuples as vectors

    - by Etaoin
    I need to represent immutable vectors in Python ("vectors" as in linear algebra, not as in programming). The tuple seems like an obvious choice. The trouble is when I need to implement things like addition and scalar multiplication. If a and b are vectors, and c is a number, the best I can think of is this: tuple(map(lambda x,y: x + y, a, b)) # add vectors 'a' and 'b' tuple(map(lambda x: x * c, a)) # multiply vector 'a' by scalar 'c' which seems inelegant; there should be a clearer, simpler way to get this done -- not to mention avoiding the call to tuple, since map returns a list. Is there a better option?
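
    One commonly suggested direction (a sketch, not from the original post) is to use zip with generator expressions, which drops both the lambdas and the intermediate list that map would build:

        a = (1.0, 2.0, 3.0)
        b = (4.0, 5.0, 6.0)
        c = 2.0

        vec_add = tuple(x + y for x, y in zip(a, b))   # (5.0, 7.0, 9.0)
        scalar_mul = tuple(x * c for x in a)           # (2.0, 4.0, 6.0)
        dot = sum(x * y for x, y in zip(a, b))         # 32.0, the scalar product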

    Read the article

  • When does a query/subquery return a NULL and when no value at all?

    - by AspOnMyNet
    a) If a query/subquery doesn’t find any matching rows, then it either returns NULL or no value at all, thus not even a NULL value. Based on what criteria does a query/subquery return a NULL and when doesn’t it return any results, not even a NULL value? b) I assume a scalar subquery will always return NULL, when no matching rows are found? I assume most-outer scalar query also returns NULL if no rows are found? c) SELECT FirstName, LastName, YEAR(BirthDate) FROM Persons WHERE YEAR(BirthDate) IN (SELECT YearReleased FROM Albums); If subquery finds no results, is then a WHERE clause of an outer query translated into WHERE YEAR(BirthDate) IN (null); ? If instead WHERE clause is translated into WHERE YEAR(BirthDate) IN(); then shouldn’t that be an error condition, since how can YEAR(BirthDate) value be compared to nothing? thanx

    Read the article

  • Is this a valid benefit of using embedded SQL over stored procedures?

    - by George
    Here's an argument for SPs that I haven't heard. Flamers, be gentle with the down tick. Since there is overhead associated with each trip to the database server, I would suggest that a POSSIBLE reason for placing your SQL in SPs over embedded code is that you are more insulated to change without taking a performance hit. For example. Let's say you need to perform Query A that returns a scalar integer. Then, later, the requirements change and you decide that if the result of the scalar is x then, and only then, you need to perform another query. If you performed the first query in a SP, you could easily check the result of the first query and conditionally execute the 2nd SQL in the same SP. How would you do this efficiently in embedded SQL w/o performing a separate query or an unnecessary query? Here's an example: --This SP may return 1 or two queries. SELECT @CustCount = COUNT(*) FROM CUSTOMER IF @CustCount > 10 SELECT * FROM PRODUCT Can this/what is the best way to do this in embedded SQL?

    Read the article

  • NASM shift operators

    - by Hudson Worden
    How would you go about doing a bit shift in NASM on a register? I read the manual and it only seems to mention these operators >>, <<. When I try to use them NASM complains about the shift operator working on scalar values. Can you explain what a scalar value is and give an example of how to use >> and <<. Also, I thought there were shr or shl operators. If they do exist can you give an example of how to use them? Thank you for your time.

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL ? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some protype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail. Intro to RiverTrail Intel Labs JavaScript RiverTrail provides GPU accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA a (known as ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive For a video overview, see  http://www.youtube.com/watch?v=jueg6zB5XaM . ParallelArray the ParallelArray type is the central component of this API & is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size– e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas. // Create an empty Parallel Array var pa = new ParallelArray(); // pa0 = <>   // Create a ParallelArray out of a nested JS array. // Note that the inner arrays are also ParallelArrays var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4.5>>   // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>   // Create a ParallelArray from canvas.  This creates a PA with shape [w, h, 4], var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray   ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter that return a new ParallelArray. Other functions are scalar - reduce  returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data parallelization optimization (avoid global var manipulation, recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – E.g. Sum is associative, but Avg is not. map Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving & index free - can not inspect neighboring values. // Adding one to each element. 
var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.map(function inc(v) {     return v+1; }); //<2,3,4,5,6> combine Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine, can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive. var source = new ParallelArray([1,2,3,4,5]); var plusOne = source.combine(function inc(i) { return this.get(i)+1; }); reduce reduces the elements from an array to a single scalar result – e.g. Sum. // Calculate the sum of the elements var source = new ParallelArray([1,2,3,4,5]); var sum = source.reduce(function plus(a,b) { return a+b; }); scan Like reduce, but stores the intermediate results – return a ParallelArray whose ith elements is the results of using the elemental function to reduce the elements between 0 and I in the original ParallelArray. // do a partial sum var source = new ParallelArray([1,2,3,4,5]); var psum = source.scan(function plus(a,b) { return a+b; }); //<1, 3, 6, 10, 15> scatter a reordering function - specify for a certain source index where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result: var source = new ParallelArray([1,2,3,4,5]); var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1> // if there is a conflict use the max. use 33 as a default value. var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {return a>b?a:b; }); //<2, 33, 5, 3, 4> filter // filter out values that are not even var source = new ParallelArray([1,2,3,4,5]); var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4> Flatten used to collapse the outer dimensions of an array into a single dimension. pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>> pa.flatten(); // <1,2,3,4> Partition used to restore the original shape of the array. var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4> pa.partition(2); // <<1,2>,<3,4>> Get return value found at the indices or undefined if no such value exists. var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]) pa.get([1,1]); // 11 pa.get([1]); // <10,11,12,13,14>

    Read the article

  • What types of objects are useful in SQL CLR?

    - by Greg Low
    I've had a number of people over the years ask about whether or not a particular type of object is a good candidate for SQL CLR integration. The rules that I normally apply are as follows:
    Database Object    | Transact-SQL                 | Managed Code
    Scalar UDF         | Generally poor performance   | Good option when limited or no data-access
    Table-valued UDF   | Good option if data-related  | Good option when limited or no data-access
    Stored Procedure   | Good option                  | Good option when external access is required or limited data access
    DML...(read more)

    Read the article

  • How to make sure you see the truth with Management Studio

    - by fatherjack
    LiveJournal Tags: TSQL,How To,SSMS,Tips and Tricks Did you know that SQL Server Management Studio can mislead you with how your code is performing? I found a query that was using a scalar function to return a date and wanted to take the opportunity to remove it in favour of a table valued function that would be more efficient. The original function was simply returning the start date of the current financial year. The code we were using was: ALTER  FUNCTION...(read more)

    Read the article

  • SQL Server Functions: The Basics

    SQL Server's functions are a valuable addition to T-SQL when used wisely. Jeremiah Peshcka provides a complete and comprehensive guide to scalar functions and table-valued functions, and shows how and where they are best used.

    Read the article

  • Rotate triangle so that its tip points in the direction of the point on the screen that we last touched

    - by Sid
    OpenGL ES - Android. Hello all, I am unable to rotate the triangle accordingly in such a way that its tip always points to my finger. What i did : Constructed a triangle in by GL.GL_TRIANGLES. Added touch events to it. I can rotate the triangle along my Z-axis successfully. Even made the vector class for it. What i need : Each time when I touch the screen, I want to rotate the triangle to face the touch point. Need some help. Here's what i implemented. I wonder that where i am going wrong? My code : public class Graphic2DTriangle { private FloatBuffer vertexBuffer; private ByteBuffer indexBuffer; private float[] vertices = { -1.0f,-1.0f, 0.0f, 2.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f }; private byte[] indices = { 0, 1, 2 }; public Graphic2DTriangle() { ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); // Use native byte order vertexBuffer = vbb.asFloatBuffer(); // Convert byte buffer to float vertexBuffer.put(vertices); // Copy data into buffer vertexBuffer.position(0); // Rewind // Setup index-array buffer. Indices in byte. indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public void draw(GL10 gl) { gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } My SurfaceView class where i've done some Touch Events. public class BallThrowGLSurfaceView extends GLSurfaceView{ MySquareRender _renderObj; View _viewObj; float oldX,oldY,dX,dY; final float TOUCH_SCALE_FACTOR = 0.6f; Vector2 touchPos = new Vector2(); float angle=0; public BallThrowGLSurfaceView(Context context) { super(context); // TODO Auto-generated constructor stub _renderObj = new MySquareRender(context); this.setRenderer(_renderObj); this.setRenderMode(RENDERMODE_WHEN_DIRTY); } @Override public boolean onTouchEvent(MotionEvent event) { // TODO Auto-generated method stub touchPos.x = event.getX(); touchPos.y = event.getY(); Log.i("Co-ord", touchPos.x+"hh"+touchPos.y); switch(event.getAction()){ case MotionEvent.ACTION_MOVE : dX = touchPos.x - oldX; dY = touchPos.y - oldY; if(touchPos.y > getHeight()/2){ dX = dX*-1; } if(touchPos.x < getWidth()/2){ dY = dY*-1; } _renderObj.mAngle += (dX+dY) * TOUCH_SCALE_FACTOR; requestRender(); Log.i("AngleCo-ord", _renderObj.mAngle +"hh"); } oldX = touchPos.x; oldY = touchPos.y; Log.i("OldCo-ord", oldX+" hh "+oldY); return true; } } Last but not the least. My vector2 class. 
public class Vector2 { public static float TO_RADIANS = (1 / 180.0f) * (float) Math.PI; public static float TO_DEGREES = (1 / (float) Math.PI) * 180; public float x, y; public Vector2() { } public Vector2(float x, float y) { this.x = x; this.y = y; } public Vector2(Vector2 other) { this.x = other.x; this.y = other.y; } public Vector2 cpy() { return new Vector2(x, y); } public Vector2 set(float x, float y) { this.x = x; this.y = y; return this; } public Vector2 set(Vector2 other) { this.x = other.x; this.y = other.y; return this; } public Vector2 add(float x, float y) { this.x += x; this.y += y; return this; } public Vector2 add(Vector2 other) { this.x += other.x; this.y += other.y; return this; } public Vector2 sub(float x, float y) { this.x -= x; this.y -= y; return this; } public Vector2 sub(Vector2 other) { this.x -= other.x; this.y -= other.y; return this; } public Vector2 mul(float scalar) { this.x *= scalar; this.y *= scalar; return this; } public float len() { return FloatMath.sqrt(x * x + y * y); } public Vector2 nor() { float len = len(); if (len != 0) { this.x /= len; this.y /= len; } return this; } public float angle() { float angle = (float) Math.atan2(y, x) * TO_DEGREES; if (angle < 0) angle += 360; return angle; } public Vector2 rotate(float angle) { float rad = angle * TO_RADIANS; float cos = FloatMath.cos(rad); float sin = FloatMath.sin(rad); float newX = this.x * cos - this.y * sin; float newY = this.x * sin + this.y * cos; this.x = newX; this.y = newY; return this; } public float dist(Vector2 other) { float distX = this.x - other.x; float distY = this.y - other.y; return FloatMath.sqrt(distX * distX + distY * distY); } public float dist(float x, float y) { float distX = this.x - x; float distY = this.y - y; return FloatMath.sqrt(distX * distX + distY * distY); } public float distSquared(Vector2 other) { float distX = this.x - other.x; float distY = this.y - other.y; return distX * distX + distY * distY; } public float distSquared(float x, float y) { float distX = this.x - x; float distY = this.y - y; return distX * distX + distY * distY; } } PS : i am able to handle the touch events. I can rotate the triangle with the touch of my finger. But i want that ONE VERTEX of the triangle should point at my finger position respective of the position of my finger.
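
    The core calculation being asked about - aiming a heading at a target point - reduces to atan2 of the direction vector. A small standalone sketch (Python, hypothetical coordinates; it assumes screen-space y grows downward, so it is negated to get a counter-clockwise GL angle):

        import math

        def angle_to_target(obj_x, obj_y, touch_x, touch_y):
            # direction from the object's position to the touch point
            dx = touch_x - obj_x
            dy = touch_y - obj_y
            # negate dy: screen y grows downward, GL rotation is counter-clockwise
            return math.degrees(math.atan2(-dy, dx)) % 360.0

        print(angle_to_target(100.0, 100.0, 200.0, 100.0))  # 0.0, touch to the right
        print(angle_to_target(100.0, 100.0, 100.0, 0.0))    # 90.0, touch above

    The triangle would then be rotated by this angle, offset by whatever angle its tip makes in its un-rotated model coordinates.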

    Read the article

  • VFP Unit Matrix Multiply problem on the iPhone

    - by Ian Copland
    Hi. I'm trying to write a Matrix3x3 multiply using the Vector Floating Point on the iPhone, however i'm encountering some problems. This is my first attempt at writing any ARM assembly, so it could be a faily simple solution that i'm not seeing. I've currently got a small application running using a maths library that i've written. I'm investigating into the benifits using the Vector Floating Point Unit would provide so i've taken my matrix multiply and converted it to asm. Previously the application would run without a problem, however now my objects will all randomly disappear. This seems to be caused by the results from my matrix multiply becoming NAN at some point. Heres the code IMatrix3x3 operator*(IMatrix3x3 & _A, IMatrix3x3 & _B) { IMatrix3x3 C; //C++ code for the simulator #if TARGET_IPHONE_SIMULATOR == true C.A0 = _A.A0 * _B.A0 + _A.A1 * _B.B0 + _A.A2 * _B.C0; C.A1 = _A.A0 * _B.A1 + _A.A1 * _B.B1 + _A.A2 * _B.C1; C.A2 = _A.A0 * _B.A2 + _A.A1 * _B.B2 + _A.A2 * _B.C2; C.B0 = _A.B0 * _B.A0 + _A.B1 * _B.B0 + _A.B2 * _B.C0; C.B1 = _A.B0 * _B.A1 + _A.B1 * _B.B1 + _A.B2 * _B.C1; C.B2 = _A.B0 * _B.A2 + _A.B1 * _B.B2 + _A.B2 * _B.C2; C.C0 = _A.C0 * _B.A0 + _A.C1 * _B.B0 + _A.C2 * _B.C0; C.C1 = _A.C0 * _B.A1 + _A.C1 * _B.B1 + _A.C2 * _B.C1; C.C2 = _A.C0 * _B.A2 + _A.C1 * _B.B2 + _A.C2 * _B.C2; //VPU ARM asm for the device #else //create a pointer to the Matrices IMatrix3x3 * pA = &_A; IMatrix3x3 * pB = &_B; IMatrix3x3 * pC = &C; //asm code asm volatile( //turn on a vector depth of 3 "fmrx r0, fpscr \n\t" "bic r0, r0, #0x00370000 \n\t" "orr r0, r0, #0x00020000 \n\t" "fmxr fpscr, r0 \n\t" //load matrix B into the vector bank "fldmias %1, {s8-s16} \n\t" //load the first row of A into the scalar bank "fldmias %0!, {s0-s2} \n\t" //calulate C.A0, C.A1 and C.A2 "fmuls s17, s8, s0 \n\t" "fmacs s17, s11, s1 \n\t" "fmacs s17, s14, s2 \n\t" //save this into the output "fstmias %2!, {s17-s19} \n\t" //load the second row of A into the scalar bank "fldmias %0!, {s0-s2} \n\t" //calulate C.B0, C.B1 and C.B2 "fmuls s17, s8, s0 \n\t" "fmacs s17, s11, s1 \n\t" "fmacs s17, s14, s2 \n\t" //save this into the output "fstmias %2!, {s17-s19} \n\t" //load the third row of A into the scalar bank "fldmias %0!, {s0-s2} \n\t" //calulate C.C0, C.C1 and C.C2 "fmuls s17, s8, s0 \n\t" "fmacs s17, s11, s1 \n\t" "fmacs s17, s14, s2 \n\t" //save this into the output "fstmias %2!, {s17-s19} \n\t" //set the vector depth back to 1 "fmrx r0, fpscr \n\t" "bic r0, r0, #0x00370000 \n\t" "orr r0, r0, #0x00000000 \n\t" "fmxr fpscr, r0 \n\t" //pass the inputs and set the clobber list : "+r"(pA), "+r"(pB), "+r" (pC) : :"cc", "memory","s0", "s1", "s2", "s8", "s9", "s10", "s11", "s12", "s13", "s14", "s15", "s16", "s17", "s18", "s19" ); #endif return C; } As far as i can see that makes sence. While debugging i've managed to notice that if i were to say _A = C prior to the return and after the ASM, _A will not necessarily be equal to C which has only increased my confusion. I had thought it was possibly due to the pointers I'm giving to the VFPU being incrimented by lines such as "fldmias %0!, {s0-s2} \n\t" however my understanding of asm is not good enough to properly understand the problem, nor to see an alternative approach to that line of code. Anyway, I was hoping someone with a greater understanding than me would be able to see a solution, and any help would be greatly appreciated, thank you :-)

    Read the article

  • Picking good first estimates for Goldschmidt division

    - by Mads Elvheim
    I'm calculating fixedpoint reciprocals in Q22.10 with Goldschmidt division for use in my software rasterizer on ARM. This is done by just setting the nominator to 1, i.e the nominator becomes the scalar on the first iteration. To be honest, I'm kind of following the wikipedia algorithm blindly here. The article says that if the denominator is scaled in the half-open range (0.5, 1.0], a good first estimate can be based on the denominator alone: Let F be the estimated scalar and D be the denominator, then F = 2 - D. But when doing this, I lose a lot of precision. Say if I want to find the reciprocal of 512.00002f. In order to scale the number down, I lose 10 bits of precision in the fraction part, which is shifted out. So, my questions are: Is there a way to pick a better estimate which does not require normalization? Also, is it possible to pre-calculate the first estimates so the series converges faster? Right now, it converges after the 4th iteration on average. On ARM this is about ~50 cycles worst case, and that's not taking emulation of clz/bsr into account, nor memory lookups. Here is my testcase. Note: The software implementation of clz on line 13 is from my post here. You can replace it with an intrinsic if you want. #include <stdio.h> #include <stdint.h> const unsigned int BASE = 22ULL; static unsigned int divfp(unsigned int val, int* iter) { /* Nominator, denominator, estimate scalar and previous denominator */ unsigned long long N,D,F, DPREV; int bitpos; *iter = 1; D = val; /* Get the shift amount + is right-shift, - is left-shift. */ bitpos = 31 - clz(val) - BASE; /* Normalize into the half-range (0.5, 1.0] */ if(0 < bitpos) D >>= bitpos; else D <<= (-bitpos); /* (FNi / FDi) == (FN(i+1) / FD(i+1)) */ /* F = 2 - D */ F = (2ULL<<BASE) - D; /* N = F for the first iteration, because the nominator is simply 1. So don't waste a 64-bit UMULL on a multiply with 1 */ N = F; D = ((unsigned long long)D*F)>>BASE; while(1){ DPREV = D; F = (2<<(BASE)) - D; D = ((unsigned long long)D*F)>>BASE; /* Bail when we get the same value for two denominators in a row. This means that the error is too small to make any further progress. */ if(D == DPREV) break; N = ((unsigned long long)N*F)>>BASE; *iter = *iter + 1; } if(0 < bitpos) N >>= bitpos; else N <<= (-bitpos); return N; } int main(int argc, char* argv[]) { double fv, fa; int iter; unsigned int D, result; sscanf(argv[1], "%lf", &fv); D = fv*(double)(1<<BASE); result = divfp(D, &iter); fa = (double)result / (double)(1UL << BASE); printf("Value: %8.8lf 1/value: %8.8lf FP value: 0x%.8X\n", fv, fa, result); printf("iteration: %d\n",iter); return 0; }
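
    A quick floating-point sketch of the iteration described above (Python, illustration only; the Q22.10 fixed-point arithmetic and the normalization step from the question are left out):

        def goldschmidt_reciprocal(d, iterations=4):
            # assumes d has already been normalized into (0.5, 1.0]
            f = 2.0 - d          # first estimate F = 2 - D, as in the question
            n = f                # numerator starts at 1, so N = F after one step
            d = d * f
            for _ in range(iterations - 1):
                f = 2.0 - d
                n *= f
                d *= f           # D converges toward 1, N toward 1/D_original
            return n

        x = 0.7
        print(goldschmidt_reciprocal(x), 1.0 / x)  # both approximately 1.428571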

    Read the article

  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level: DECLARE @Profitability AS TABLE ( Cust INT NOT NULL ,Category VARCHAR(10) NOT NULL ,Income DECIMAL(10, 2) NOT NULL ,Expense DECIMAL(10, 2) NOT NULL ) ; INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ; INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ; INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ; INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ; INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ; INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ; INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ; INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ; SELECT Cust ,Profit = SUM(Income - Expense) ,Margin = SUM(Income - Expense) / SUM(Income) FROM @Profitability GROUP BY Cust SELECT Category ,Profit = SUM(Income - Expense) ,Margin = SUM(Income - Expense) / SUM(Income) FROM @Profitability GROUP BY Category SELECT Profit = SUM(Income - Expense) ,Margin = SUM(Income - Expense) / SUM(Income) FROM @Profitability Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar or table valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience the scalar and multi-statement table-valued UDFs perform very poorly. Also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics to keeping these kinds of formulae which need to be applied at different levels in sync and/or organized?

    Read the article

  • How should I use BIT in MS SQL 2005

    - by adopilot
    Regarding SQL performance: I have a scalar-valued function for checking a specific condition in the database. It returns a BIT value, True or False, and I do not know how I should fill the @BIT parameter. If I write set @bit = convert(bit,1) or set @bit = 1 or set @bit='true' the function will work either way, but I do not know which method is recommended for daily use. Another question: I have a table in my database with around 4 million records, and the daily insert is about 4K records into that table. Now I want to add a CONSTRAINT on that table with the scalar-valued function that I mentioned already, something like this ALTER TABLE fin_stavke ADD CONSTRAINT fin_stavke_knjizenje CHECK ( dbo.fn_ado_chk_fin(id)=convert(bit,1)) where field "id" is the primary key of table fin_stavke and dbo.fn_ado_chk_fin looks like create FUNCTION fn_ado_chk_fin ( @stavka_id int ) RETURNS bit AS BEGIN declare @bit bit if exists (select * from fin_stavke where id=@stavka_id and doc_id is null and protocol_id is null) begin set @bit=0 end else begin set @bit=1 end return @bit; END GO Will this type and method of checking constraint badly affect performance on my table and SQL at all? If there is also a better way to add this control on the table, please let me know.

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on par

    - by Ovidiu Pacurar
    Here is the situation: I have a table value function with a datetime parameter, let's say tdf(p_date), that filters about two million rows selecting those with column date smaller than p_date and computes some aggregate values on other columns. It works great but if p_date is a custom scalar value function (returning the end of day in my case) the execution plan is altered and the query goes from 1 sec to 1 minute execution time. A proof of concept table - 1K products, 2M rows: CREATE TABLE [dbo].[POC]( [Date] [datetime] NOT NULL, [idProduct] [int] NOT NULL, [Quantity] [int] NOT NULL ) ON [PRIMARY] The inline table value function: CREATE FUNCTION tdf (@p_date datetime) RETURNS TABLE AS RETURN ( SELECT idProduct, SUM(Quantity) AS TotalQuantity, max(Date) as LastDate FROM POC WHERE (Date < @p_date) GROUP BY idProduct ) The scalar value function: CREATE FUNCTION [dbo].[EndOfDay] (@date datetime) RETURNS datetime AS BEGIN DECLARE @res datetime SET @res=dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date), dateadd(ss, -datepart(ss, @date), dateadd(mi,- datepart(mi,@date), dateadd(hh, -datepart(hh, @date), @date)))))) RETURN @res END Query 1 - Working great SELECT * FROM [dbo].[tdf] (getdate()) The end of execution plan: Stream Aggregate Cost 13% <--- Clustered Index Scan Cost 86% Query 2 - Not so great SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate())) The end of execution plan: Stream Aggregate Cost 4% <--- Filter Cost 12% <--- Clustered Index Scan Cost 86%

    Read the article

  • How do I find, count, and display unique elements of an array using Perl?

    - by Luke
    I am a novice Perl programmer and would like some help. I have an array list that I am trying to split each element based on the pipe into two scalar elements. From there I would like to spike out only the lines that read ‘PJ RER Apts to Share’ as the first element. Then I want to print out the second element only once while counting each time the element appears. I wrote the piece of code below but can’t figure out where I am going wrong. It might be something small that I am just overlooking. Any help would be greatly appreciated. ## CODE ## my @data = ('PJ RER Apts to Share|PROVIDENCE', 'PJ RER Apts to Share|JOHNSTON', 'PJ RER Apts to Share|JOHNSTON', 'PJ RER Apts to Share|JOHNSTON', 'PJ RER Condo|WEST WARWICK', 'PJ RER Condo|WARWICK'); foreach my $line (@data) { $count = @data; chomp($line); @fields = split(/\|/,$line); if (($fields[0] =~ /PJ RER Apts to Share/g)){ @array2 = $fields[1]; my %seen; my @uniq = grep { ! $seen{$_}++ } @array2; my $count2 = scalar(@uniq); print "$array2[0] ($count2)","\n" } } print "$count","\n"; ## OUTPUT ## PROVIDENCE (1) JOHNSTON (1) JOHNSTON (1) JOHNSTON (1) 6
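
    The logic being attempted - keep only the 'PJ RER Apts to Share' rows, then count how many times each location appears and print each location once - can be sketched as follows (shown in Python purely to make the algorithm explicit; it is not the asker's Perl):

        from collections import Counter

        data = [
            'PJ RER Apts to Share|PROVIDENCE',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Condo|WEST WARWICK',
            'PJ RER Condo|WARWICK',
        ]

        counts = Counter(
            line.split('|')[1]
            for line in data
            if line.split('|')[0] == 'PJ RER Apts to Share'
        )
        for city, n in counts.items():
            print(f"{city} ({n})")  # PROVIDENCE (1), JOHNSTON (3)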

    Read the article

  • Ruby Doesn't Recognize Alias Method

    - by Jesse J
    I'm trying to debug someone else's code and having trouble figuring out what's wrong. When I run rake, one of the errors I get is: 2) Error: test_math(TestRubyUnits): NoMethodError: undefined method `unit_sin' for CMath:Module /home/user/ruby-units/lib/ruby_units/math.rb:21:in `sin' This is the function that calls the method: assert_equal Math.sin(pi), Math.sin("180 deg".unit) And this is what the class looks like: module Math alias unit_sin sin def sin(n) Unit === n ? unit_sin(n.to('radian').scalar) : unit_sin(n) end alias unit_cos cos def cos(n) Unit === n ? unit_cos(n.to('radian').scalar) : unit_cos(n) end ... module_function :unit_sin module_function :sin module_function :unit_cos module_function :cos ... end (The ellipsis means "more of the same"). As far as I can see, this is valid Ruby code. Is there something I'm missing here that's causing the error, or could the error be coming from something else? Update: I'm wondering if the problem has to do with namespaces. This code is attempting to extend CMath, so perhaps the alias and/or module_function isn't actually getting into CMath, or something like that....

    Read the article
