Search Results

Search found 1071 results on 43 pages for 'integers'.

  • CodePlex Daily Summary for Tuesday, July 29, 2014

    CodePlex Daily Summary for Tuesday, July 29, 2014

    Popular Releases

    - Artezio SharePoint 2013 Workflow Activities: SharePoint Designer 2013 custom workflow actions 1.0. Custom workflow actions that work with permissions: Add Role Assignment, Add Role Assignments, Delete Role Assignments, Get Role Definition Id, Get Role Definition Id By Role Type Id, Reset Role Inheritance.
    - VG-Ripper & PG-Ripper: PG-Ripper 1.4.32. New: support for 'ImgMega.com', 'ImgCandy.net', 'ImgPit.com' and 'Img.yt' links. Fixed: 'Radikal.ru', 'ImageTeam.org', 'ImgSee.com' and 'Img.yt' links.
    - Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: a simple web application to create, store and modify Todo tasks, each with a Task Name, Task Description, Severity, Target Date and Task Status. Built on the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.
    - Waterfox: Waterfox 31.0 Portable. New in Waterfox 31.0: support for Unicode 7.0 and experimental support for WebCL. New in Firefox 31.0: the search field on the new tab page, support of the Prefer:Safe HTTP header for parental control, mozilla::pkix as the default certificate verifier, blocking of malware in downloaded files, .ogg and .pdf files handled by Firefox if no application is specified, and removal of the CAPS infrastructure for specifying site-sp...
    - SuperSocket, an extensible socket server framework: SuperSocket 1.6.3. Fixed an exception when collecting the status of a stopped server, a bug that could cause an exception when sending data on an already-dropped connection, the missing log4net issue for a QuickStart project, and a warning in a QuickStart project.
    - Ynote Classic: Ynote Classic 2.8.5 Beta. Multiple carets and multiple selections, improved startup time, improved syntax highlighting, search improvements, a shell command, and improved stability.
    - Database Schema Reader: v1.3.4.0. Contents: DatabaseSchemaReader.dll (.NET 3.5 class library), DatabaseSchemaViewer.exe (UI to read and view database schemas, with options to generate SQL and code and to compare another schema), CopyToSQLite/CopyToSQLite.exe (UI to copy any database schema and data to SQLite or, if installed, SQL Server CE 4.0), and net4/DatabaseSchemaReader.dll (.NET 4.0 class library). This release: schema reading for MySQL reads UNSIGNED integers (available in the schema model's DatabaseColumn.DbDataType prope...
    - TEBookConverter: 1.2. Fixed: conversion could not start in some cases; the progress shown during conversion was truncated; stopping a conversion did not reset the program title.
    - Q2Cue \\: Q2Cue v. 0.85. Initial release.
    - CS-Script Source: Release v3.8.4. CSScript.Evaluator is migrated to Mono v3.3.0; added aggregating //css_ignore_ns from the imported scripts. Packages: cs-script.7z (CS-Script Suite: binaries, documentation, samples), cs-script.ExtensionPack.7z (additional binaries and samples), cs-scriptDocs.7z (documentation).
    - DotSpatial: DotSpatial 1.7. DotSpatial.Full includes all DotSpatial libraries, extensions and the DemoMap application; DotSpatial.Core includes only the core libraries (the entire list of changes is in the issue tracker). Main changes: improved overall stability, optimized memory and speed when loading and rendering shapefiles, fixed some memory leaks in rasters and shape layers, and a simplified plugin infrastructure with predefined implementations for all required components (IStatusControl, IDockManager, IHead...
    - NRepository: NRepository 2.2. NuGet packages are available from NuGet.org: NRepository.Core, NRepository.EntityFramework and NRepository.TestKit. Three versions of the Contoso University sample code were added: ContosoUniversityOriginal (the original Microsoft sample project for Entity Framework), ContosoUniversityUsingNRepository (this version simply repl...
    - InstagramSaver: 1.5. Added an option to wait between downloads to prevent a ban from the server; improved file downloading and file checking performance.
    - PVReportDeployer: PVReportDeployer 1.0.
    - AD4 Application Designer for flow based .NET applications: AD4.AppDesigner.23.20 (advanced rendering features). Option to render the names of flow pins within the flow chart (AlarmClockSample.10.ad4 used to test this extension): MainWindow extended by adding FlowChartShowFlowPinCaptionsComboBox, ConfigureUIControls and ManageFlowChartShowFlowPinCaptionsOptionFlow customized, the parameter FlowChartFlowPins replaced by FlowChartFlowPinCaptions in AD4.AppDesigner.cfg, and RenderFlowPinsCaptions customized to use the parameter. To do: some tutorials are unfinished b...
    - CRM 2013 One Click Navigation: CRM2013OneClickNavigation_1_0_0_3_managed.zip. Fixes on security privileges.
    - AutomatedLab: AutomatedLab 2.1.0.0. 2.1.0: support for external virtual switches, a new CaRoot role for installing Root Certificate Authorities, and root domain controllers are now installed in parallel. 2.0.0: support for Windows Server 2008 R2 and Windows 7, no longer limited to just one client and one server operating system, a rewrite of the code that handles VHDs, lots of bug fixing, and changed parameter definitions of some cmdlets in AutomatedLabDefinition (the sample scripts were adapted accordingly). Now the Forest...
    - BizTalk Deployment Utilities: 1.0.0.0. Draft version.
    - GroupMe Software Development Kit: GroupMe .NET SDK v1.0.1. Small update concerning the /users/me endpoint to get the currently authenticated user's info. Documentation is available here: http://dotnetcorner.ch/Projects/GroupMe-NET-SDK/Documentation
    - OneNote Tagging Kit: OneNoteTaggingKit Add-In 2.4.5316.33833. New in this release: the Find Pages dialog preserves the search string and tag filter selection on search scope changes. Other changes: dialog windows are restored when the action button is pressed while the window is minimized, code cleanup and simplification, code documentation, some reusable UI controls, tag styling made more pleasant (I hope), and improved tag tooltips on the Find Pages dialog, with dedicated tooltips for the page count and selection indicator. Closing open dial...

    New Projects

    - Cognitum ASP.NET Providers for Cassandra: an ASP.NET solution that uses the Cassandra database as the data source for custom Membership, Role, Profile and Session-State providers.
    - DG Mobile: a simple application for entering, viewing and reporting activities related to DG.
    - Dynamics AX Development tools: several AX development tools combined into one configurable package: TFS Workspace, DEV_Tools, and X++ Editor components.
    - E Drawing Library for WinRT: lets you create graphics-rich Windows Store apps easily, giving you all the power of Direct2D with simple classes and methods.
    - EFramework.DataAccess.
    - freetalkserver: a PHP server where users can register.
    - gicon: a project to help you generate an avatar/icon from a hash (MD5, SHA1) or a random hex string (such as a GUID).
    - hOOt - Full text search: a full-text search engine built from scratch using bitmap indexes.
    - MelodyUI - Library for HTML, CSS & JS Development: a set of classes and helpers for creating web applications; lightweight and without any dependency.
    - nRF24L01Plus .NETMF driver: an nRF24L01 driver for the .NET Micro Framework.
    - Nyan: beginning...
    - Prince Game Xna: a Prince of Persia .NET game in XNA Game Studio.
    - Smartsby: combines several small-business web apps' information into a single dashboard.
    - Super snippets: super snippets for Visual Studio.
    - TCP/IP Adapter BizTalk 2013: an updated version of the TCP/IP community adapter for BizTalk, for use with BizTalk 2013.
    - testMercurial.
    - Unit OF Work With Sql Helper: a simple data access object for SQL Server with the repository and Unit of Work patterns.
    - VisualRegEx: a prototype experimenting with regex parsing, graph layout, etc.
    - WDK Hardware Development Boards Add-On: scripts for working with Windows-compatible hardware development boards like the Sharks Cove.

    Read the article

  • Help to solve "Robbery Problem"

    - by peiska
    Hello, can anybody help me with this problem in C or Java? The problem is taken from here: http://acm.pku.edu.cn/JudgeOnline/problem?id=1104

    Inspector Robstop is very angry. Last night, a bank was robbed and the robber has not been caught. And this happened already for the third time this year, even though he did everything in his power to stop the robber: as quickly as possible, all roads leading out of the city were blocked, making it impossible for the robber to escape. Then, the inspector asked all the people in the city to watch out for the robber, but the only messages he got were of the form "We don't see him." But this time, he has had enough! Inspector Robstop decides to analyze how the robber could have escaped. To do that, he asks you to write a program which takes all the information the inspector could get about the robber in order to find out where the robber was at which time.

    Coincidentally, the city in which the bank was robbed has a rectangular shape. The roads leaving the city are blocked for a certain period of time t, and during that time, several observations of the form "The robber isn't in the rectangle Ri at time ti" are reported. Assuming that the robber can move at most one unit per time step, your program must try to find the exact position of the robber at each time step.

    Input

    The input contains the description of several robberies. The first line of each description consists of three numbers W, H, t (1 <= W, H, t <= 100) where W is the width and H the height of the city, and t is the time during which the city is locked. The next line contains a single integer n (0 <= n <= 100), the number of messages the inspector received. The next n lines (one for each of the messages) consist of five integers ti, Li, Ti, Ri, Bi each. The integer ti is the time at which the observation was made (1 <= ti <= t), and Li, Ti, Ri, Bi are the left, top, right and bottom respectively of the (rectangular) area which was observed (1 <= Li <= Ri <= W, 1 <= Ti <= Bi <= H; the point (1, 1) is the upper left corner, and (W, H) is the lower right corner of the city). The messages mean that the robber was not in the given rectangle at time ti. The input is terminated by a test case starting with W = H = t = 0. This case should not be processed.

    Output

    For each robbery, first output the line "Robbery #k:", where k is the number of the robbery. Then, there are three possibilities: if it is impossible that the robber is still in the city considering the messages, output the line "The robber has escaped." In all other cases, assume that the robber really is in the city. Output one line of the form "Time step τ: The robber has been at x,y." for each time step in which the exact location can be deduced (x and y are the column and row, respectively, of the robber at time step τ). Output these lines ordered by time τ. If nothing can be deduced, output the line "Nothing known." and hope that the inspector will not get even more angry. Output a blank line after each processed case.
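
    A standard way to attack this (a sketch of mine, not part of the original question) is constraint propagation over a W x H boolean grid, once forward in time and once backward: a cell is feasible at time k if no message rules it out at time k and some cell at distance at most one was feasible at the previous (respectively next) time step. A cell survives both passes exactly when some legal trajectory passes through it, so an empty intersection at any time step means the robber has escaped, and a singleton pins his position for that step. A compact C# version of the idea (the logic ports directly to C or Java):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class Robbery
      {
          static void Main()
          {
              int caseNo = 0;
              while (true)
              {
                  int[] head = ReadInts();               // W, H, t
                  int w = head[0], h = head[1], t = head[2];
                  if (w == 0 && h == 0 && t == 0) break;
                  int n = ReadInts()[0];
                  // blocked[k][x, y] == true: a message rules out (x, y) at time k.
                  var blocked = new bool[t + 2][,];
                  for (int k = 1; k <= t; k++) blocked[k] = new bool[w + 1, h + 1];
                  for (int i = 0; i < n; i++)
                  {
                      int[] m = ReadInts();              // ti, Li, Ti, Ri, Bi
                      for (int x = m[1]; x <= m[3]; x++)
                          for (int y = m[2]; y <= m[4]; y++)
                              blocked[m[0]][x, y] = true;
                  }
                  bool[][,] fwd = Sweep(w, h, t, blocked, reverse: false);
                  bool[][,] bwd = Sweep(w, h, t, blocked, reverse: true);
                  Console.WriteLine("Robbery #" + ++caseNo + ":");
                  bool escaped = false;
                  var deduced = new List<string>();
                  for (int k = 1; k <= t && !escaped; k++)
                  {
                      var cells = new List<int[]>();
                      for (int x = 1; x <= w; x++)
                          for (int y = 1; y <= h; y++)
                              if (fwd[k][x, y] && bwd[k][x, y]) cells.Add(new[] { x, y });
                      if (cells.Count == 0) escaped = true;  // no consistent trajectory at all
                      else if (cells.Count == 1)
                          deduced.Add("Time step " + k + ": The robber has been at "
                                      + cells[0][0] + "," + cells[0][1] + ".");
                  }
                  if (escaped) Console.WriteLine("The robber has escaped.");
                  else if (deduced.Count == 0) Console.WriteLine("Nothing known.");
                  else deduced.ForEach(Console.WriteLine);
                  Console.WriteLine();
              }
          }

          // One reachability sweep: per tick the robber stays put or moves one
          // unit horizontally or vertically, and never sits in a blocked cell.
          static bool[][,] Sweep(int w, int h, int t, bool[][,] blocked, bool reverse)
          {
              var ok = new bool[t + 2][,];
              for (int step = 0; step < t; step++)
              {
                  int k = reverse ? t - step : step + 1;
                  int prev = reverse ? k + 1 : k - 1;
                  ok[k] = new bool[w + 1, h + 1];
                  for (int x = 1; x <= w; x++)
                      for (int y = 1; y <= h; y++)
                          ok[k][x, y] = !blocked[k][x, y] &&
                              (step == 0 || Near(ok[prev], x, y, w, h));
              }
              return ok;
          }

          static bool Near(bool[,] p, int x, int y, int w, int h)
          {
              return p[x, y] || (x > 1 && p[x - 1, y]) || (x < w && p[x + 1, y])
                             || (y > 1 && p[x, y - 1]) || (y < h && p[x, y + 1]);
          }

          static int[] ReadInts()
          {
              return Console.ReadLine()
                            .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                            .Select(int.Parse).ToArray();
          }
      }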

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example

    We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing, which isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better? Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join

    These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null.

    The results show a huge variation, between the sublime and the gorblimey:
    - If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
    - If one table contains a primary key, and the other is a heap, and both are Table Variables, it took 46 ms.
    - If both Table Variables use a unique constraint, the query takes 36 ms.
    - If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!
    - If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    - If one table contains a primary key, and both are temporary tables, it took 56 ms.
    - If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables, but where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.

    So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis. If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the OPTION (RECOMPILE) hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin’s Skew

    In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, the join column of a 100,000-row table held a single value (1), to which were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in it, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions

    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if that is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead.

    Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
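
    For readers who want to reproduce the first experiment, here is a minimal C# harness of my own (not the rig Phil actually used; the connection string and row counts are illustrative, and it assumes a reachable SQL Server instance). It fills two table-variable heaps with random integers and times the join with and without the OPTION (RECOMPILE) hint discussed above:

      using System;
      using System.Data.SqlClient;
      using System.Diagnostics;

      class TableVariableTiming
      {
          // The two runs differ only in whether OPTION (RECOMPILE) is appended
          // to the join, which is the hint discussed in the article.
          const string Batch = @"
      DECLARE @a TABLE (n INT);
      DECLARE @b TABLE (n INT);
      INSERT INTO @a (n)
        SELECT TOP (@rows) ABS(CHECKSUM(NEWID())) % @rows
        FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;
      INSERT INTO @b (n)
        SELECT TOP (@rows) ABS(CHECKSUM(NEWID())) % @rows
        FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;
      SELECT COUNT(*) FROM @a a JOIN @b b ON a.n = b.n {0};";

          static void Main()
          {
              // Illustrative connection string; point it at any scratch database.
              using (var conn = new SqlConnection("Server=.;Database=tempdb;Integrated Security=true"))
              {
                  conn.Open();
                  foreach (var rows in new[] { 1000, 10000, 30000, 100000 })
                  foreach (var hint in new[] { "", "OPTION (RECOMPILE)" })
                  {
                      using (var cmd = new SqlCommand(string.Format(Batch, hint), conn))
                      {
                          cmd.CommandTimeout = 600; // the heap-vs-heap case can be very slow
                          cmd.Parameters.AddWithValue("@rows", rows);
                          var sw = Stopwatch.StartNew();
                          cmd.ExecuteScalar();
                          Console.WriteLine("{0,7} rows {1,-20} {2} ms", rows, hint, sw.ElapsedMilliseconds);
                      }
                  }
              }
          }
      }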

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the “Add Connection” link in the left pane you will notice that it is now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers

    Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the application object itself and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process.

    ObservableSource.Dump();

    will yield the following error:

    Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.

    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask: what’s the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity:

    using (var server = Server.Connect(...))
    {
        var a = server.CreateApplication("WhiteFish");
        var src = a
            .DefineObservable<int>(() => Observable.Range(0, 3))
            .Deploy("ObservableSource");

    If later we want to compose with the source, we have to fetch it and then we can bind something to

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about this entity: that it’s a source, that it’s an observable, that it produces a result with payload Int32, and that it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it’s much easier to make sense of what’s available.

    Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int; in the particular case of my example I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types

    Here’s an interesting scenario: what if someone created a source on a server but they forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t have the definition of. This is how we can create a source that returns an anonymous type:

    Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type.

    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
    var filter = from i in O1
                 where i > threshold
                 select i;
    filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it’s not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information

    So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.).

    Regards,
    The StreamInsight Team
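
    As a concrete (hypothetical) illustration of such a parameterized source, here is a sketch of mine showing how something like ThrottledSource could have been defined and deployed, following the same DefineObservable/Deploy pattern as the snippet earlier in the post; the endpoint address and application name are placeholders, not values from this post, and the exact overloads should be checked against the StreamInsight 2.1 documentation:

      using System;
      using System.Reactive.Linq;
      using System.ServiceModel;
      using Microsoft.ComplexEventProcessing;
      using Microsoft.ComplexEventProcessing.Linq;

      class DeployThrottledSource
      {
          static void Main()
          {
              // Placeholder endpoint; use your own StreamInsight service address.
              using (var server = Server.Connect(new EndpointAddress("http://localhost/StreamInsight/Default")))
              {
                  var app = server.Applications["WhiteFish"];

                  // A "cold", parameterized source definition: nothing runs until
                  // both ints are supplied. Viewed from LinqPad, the deployed entity
                  // would show up as Func<int, int, IQbservable<int>>.
                  app.DefineObservable((int start, int end) => Observable.Range(start, end - start + 1))
                     .Deploy("ThrottledSource");
              }
          }
      }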

    Read the article

  • CodePlex Daily Summary for Sunday, October 21, 2012

    CodePlex Daily Summary for Sunday, October 21, 2012

    Popular Releases

    - BlogEngine.NET: BlogEngine.NET 2.7 RC. This is a Release Candidate version for BlogEngine.NET 2.7; the most current, stable version is 2.6. To get started, be sure to check out the installation documentation, and if you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions...
    - Pulse: Pulse 0.6.3.0. Fixed a number of bugs that showed up since yesterday's update: the initial "Nature" wallbase.cc search would duplicate itself; changed provider settings would not take effect until Pulse was restarted (removing or adding a provider entirely did take effect); and another small issue with the regex for the wallbase.cc wallpapers that was tweaked yesterday.
    - Liberty: v3.4.0.0 Release 20th October 2012. Added: Halo 4 support (invincibility, ammo editing); Reach: a warning dialog now shows up when you first attempt to swap a weapon. Fixed: a few minor bugs.
    - MCEBuddy 2.x: MCEBuddy 2.3.3. MCEBuddy now supports PIPE (2.2.15 style) and the newer remote TCP communication, to solve problems with faulty Ceton network drivers and some load-related issues on older systems; when using LOCALHOST, MCEBuddy uses PIPE communication, otherwise TCP. UPnP is now disabled by default, since it interferes with some TV tuner cards (Ceton) that represent themselves as network devices (bad drivers), and as a security measure to avoid external connection...
    - Orchard Project: Orchard 1.6 RC. This is the Release Candidate version of Orchard 1.6; use it to prepare your current developments for the upcoming final release, and report problems. Release notes: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes. Please do not post questions as reviews; questions should be posted in the Discussions tab, where they will usually get promptly responded to...
    - Rawr: Rawr 5.0.1. This is the downloadable WPF version of Rawr (for the web-based version see http://elitistjerks.com/rawr.php). Version notes: http://rawr.codeplex.com/wikipage?title=VersionNotes. There is also a Rawr official addon (not yet updated for MoP) for in-game exporting and importing of character data, hosted on Curse; the addon does not perform calculations like Rawr, it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - Yahoo! UI Library: YUI Compressor for .Net, Version 2.1.1.0 - Sartha (BugFix). Reverted the embedding of the 2x assemblies.
    - Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012. The version-control-specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide, and both guides have been ported to the new document style; see http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details Documentatio...
    - D3 Loot Tracker: 1.5.5. Compatible with 1.05.
    - Write Once, Play Everywhere: MonoGame 3.0 (BETA). A beta release of the upcoming MonoGame 3.0, containing an installer for a binary release of MonoGame on Windows boxes for the Windows, Linux, Android and Windows 8 platforms. If you need to build for iOS or Mac you will need to get the source code at this time, as installers for those platforms are not available yet. The installer also installs a bunch of project templates for Visual Studio 2010, 2012 and MonoDevelop. For those of you wish...
    - Windawesome: Windawesome v1.4.1 x64. Fixed switching of applications across monitors; changed the window flashing API (fix your config files); added NetworkMonitorWidget (thanks to weiwen). This is the 64-bit version of the release; be sure to use it if you are on 64-bit Windows. Works with "Required DLLs v3".
    - Restful Objects for .NET: Restful Objects Server 1.0.1. A bug fix release fixing bug #55, a failure to conform to the Restful Objects spec (v1.0.0) for the Parameters property on an Action Representation. The easiest way to use Restful Objects for .NET is as NuGet packages (search the NuGet public gallery for 'restfulobjects'); it is only necessary to download the source if you wish to build and/or modify the framework yourself.
    - Extensions.js: Extensions.js 0.8.3.6 (Release). Provides type extensions to facilitate working with JavaScript objects in a style familiar to C# programmers.
    - PdfReport: PdfReport 1.2. Added navigation/nested properties support to the StronglyTypedList DataSource; moved the watermark location to the top layer; fixed a grouping issue in multi-column reports; fixed a typo (Pervious to Previous!); added more than 25 samples, downloadable from the "source code" tab: http://pdfreport.codeplex.com/SourceControl/BrowseLatest; added a NuGet package: http://nuget.org/packages/PdfReport/
    - Merge PDF: MergePDF 1.0 Released.
    - 40FINGERS DotNetNuke StyleHelper: 40FINGERS StyleHelper Skin Object 02.06.04. Bug fix for SuperUser detection: passing IfRole="SuperUsers" did not detect Host users; this has been corrected now and the code has been rewritten. New attribute ContentFalse: the content that gets injected when the conditions... Version 02.06.03 changed the IfQs behavior: IfQs can also test whether a query string parameter exists, so you can now pass a QS parameter without a value; where IfQS="ProductId:122" would test for a QS parameter ProductId with value 122, IfQS="ProductId" allows you t...
    - Display attachments (list view) SP 2010: Display attachments (in list view) 1.0.0. Displays attachments for list items in list view, with async loading of attachments using jQuery 1.8.2 and the SharePoint web service (/vtibin/Lists.asmx). Simple to use, simple to install; localized in English and Russian.
    - CODE Framework: 4.0.21017.0. See the change log in the Documentation section for details.
    - Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1. Added .NET 4.0 support to Magelia.Webstore.Client and StarterSite version 2.1.254.3; Scheduler import & export feature (Professional and Enterprise editions); UTC datetime and timezone support; .NET 4.5 and Visual Studio 2012 migration; global client refactoring; release of a NuGet package to help developers speed up development (http://nuget.org/packages/Magelia.Webstore.Client); optimization of the data update mechanism (a.k.a. "burst"); performance improvement of the d...
    - VFPX: FoxcodePlus. Visual Studio-like extensions to Visual FoxPro IntelliSense.

    New Projects

    - a new super fast css3 selector engine: kquery, a super fast and compatible CSS3 selector engine.
    - AcfunWP: Acfun for Windows Phone.
    - AdRotator v2: a highly customizable ad rotator component for the Windows Phone and Windows 8 platforms, to be used with Silverlight, XNA and MonoGame.
    - BackUpCostaRicaProject.
    - ClickOnceTest: project.
    - CloudClipboardSync: when it comes to communication between a user's devices, Dropbox and similar file synchronization services can be used, but only as a limited tran...
    - CodeplexTest: enter two numbers to get their sum.
    - cosuagwusumofnumbers: cosuagwu's sum of numbers.
    - Daf Yomi WP7 App: a Windows Phone 7.5 application that lets you listen to current Daf Yomi content from www.daf-yomi.com.
    - Doctor Reg.
    - felixsumofnumbers: task 1: get two numbers from the user and calculate the sum.
    - FoxOS: the fox in your OS.
    - Ganagro Lite: a Windows Forms application for handling grass-fed bull raising operations. Uses .NET 3.5 and SQL Server (2005 or later); written in C#, localized in Spanish.
    - GSISW8: a Windows Store application that provides basic data on natural persons and self-employed professionals, using the open GSIS web service.
    - JavaScript Calculator.
    - KRATOS: Kratos, the personification of power.
    - Logical Disk Indicator: a tool to monitor logical disk activity in the notification area. Visual Basic .NET and .NET 2.0.
    - Media Organiser: a tool that allows you not only to overview your media collection but also to reorganize it following specific rules you can define.
    - PolymorphGame: a university project created in XNA integrating the Farseer physics engine. Contains some bugs and the code is not of the cleanest; comments and critiques welcome!
    - qp.
    - RAIP (Resonance Assignment by Integer Linear Programming): in progress...
    - SanguoshaCardsCounter: built with Microsoft Visual Studio 2012 in C# on .NET Framework 4.5.
    - SimpleCalculatorProject: a simple calculator that adds two integers and displays the result.
    - SJKP.PdfConversion: a SharePoint 2010 Service Application framework containing the infrastructure for easy OCR processing of PDF files in lists. An OCR component is not included.
    - SoftwareTestingConcepts: a website giving information about software testing concepts.
    - SpeakToMe: a natural language processor that works by tokenizing the input based on known concepts and then matching the token structure against a set of rules.
    - Tododoo: a small hobby project, the simplest todo list possible.
    - TokenUtil: a command line program for requesting a token from a Security Token Service.
    - VS2012 MSHA file builder: Visual Studio 2012 Help Viewer MSHA file creation.

    Read the article

  • File Fix-it codegolf (GCJ 2010 1B-A)

    - by KirarinSnow
    Last year (2009), the Google Code Jam featured an interesting problem as the first problem in Round 1B: Decision Tree. As the problem seemed tailored for Lisp-like languages, we spontaneously had an exciting codegolf here on SO, in which a few languages managed to solve the problem in fewer characters than any Lisp variety, using quite a number of different techniques. This year's Round 1B Problem A (File Fix-it) also seems tailored for a particular family of languages, Unix shell scripts. So continuing the "1B-A tradition" would be appropriate. :p But which language will end up with the shortest code? Let us codegolf and see!

    Problem description (adapted from the official page):

    You are given T test cases. Each test case contains N lines that list the full path of all directories currently existing on your computer. For example:

    /home/awesome
    /home/awesome/wheeeeeee
    /home/awesome/wheeeeeee/codegolfrocks
    /home/thecakeisalie

    Next, you are given M lines that list the full path of directories you would like to create. They are in the same format as the previous examples. You can create a directory using the mkdir command, but you can only do so if the parent directory already exists. For example, to create the directories /pyonpyon/fumufumu/yeahyeah and /pyonpyon/fumufumu/yeahyeahyeah, you would need to use mkdir four times:

    mkdir /pyonpyon
    mkdir /pyonpyon/fumufumu
    mkdir /pyonpyon/fumufumu/yeahyeah
    mkdir /pyonpyon/fumufumu/yeahyeahyeah

    For each test case, return the number of times you have to call mkdir to create all the directories you would like to create.

    Input

    Input consists of a text file whose first line contains the integer T, the number of test cases. The rest of the file contains the test cases. Each test case begins with a line containing the integers N and M, separated by a space. The next N lines contain the path of each directory currently existing on your computer (not including the root directory /). Each path is a concatenation of one or more non-empty lowercase alphanumeric strings, each preceded by a single /. The following M lines contain the path of each directory you would like to create.

    Output

    For each case, print one line containing Case #X: Y, where X is the case number and Y is the solution.

    Limits

    1 <= T <= 100. 0 <= N <= 100. 1 <= M <= 100. Each path contains at most 100 characters. Every path appears only once in the list of directories already on your computer, or in the list of desired directories. However, a path may appear on both lists, as in example case #3 below. If a directory is in the list of directories already on your computer, its parent directory will also be listed, with the exception of the root directory /. The input file is at most 100,000 bytes long.

    Example

    Larger sample test cases may be downloaded here.

    Input:
    3
    0 2
    /home/sparkle/pyon
    /home/sparkle/cakes
    1 3
    /z
    /z/y
    /z/x
    /y/y
    2 1
    /moo
    /moo/wheeeee
    /moo

    Output:
    Case #1: 4
    Case #2: 4
    Case #3: 0

    Code Golf

    Please post your shortest code in any language that solves this problem. Input and output may be handled via stdin and stdout or by other files of your choice. Please include a disclaimer if your code has the potential to modify or delete existing files when executed. The winner will be the shortest solution (by byte count) in a language with an implementation existing prior to the start of Round 1B 2010.
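
    For reference while golfing, the underlying algorithm is tiny: every directory that must exist is a /-prefix of some listed path, so collect the prefixes of the existing paths into a set, add the prefixes of the desired paths, and the number of genuinely new entries is the answer. A straightforward (decidedly un-golfed) C# sketch of mine:

      using System;
      using System.Collections.Generic;

      class FileFixIt
      {
          static void Main()
          {
              int t = int.Parse(Console.ReadLine().Trim());
              for (int c = 1; c <= t; c++)
              {
                  var header = Console.ReadLine().Split(' ');
                  int n = int.Parse(header[0]), m = int.Parse(header[1]);
                  var dirs = new HashSet<string>();
                  for (int i = 0; i < n; i++) AddWithParents(dirs, Console.ReadLine().Trim());
                  int before = dirs.Count;
                  for (int i = 0; i < m; i++) AddWithParents(dirs, Console.ReadLine().Trim());
                  // Each genuinely new prefix costs exactly one mkdir call.
                  Console.WriteLine("Case #" + c + ": " + (dirs.Count - before));
              }
          }

          // Insert a path and every ancestor prefix ("/a", "/a/b", ...) into the set.
          static void AddWithParents(HashSet<string> set, string path)
          {
              for (int i = path.IndexOf('/', 1); ; i = path.IndexOf('/', i + 1))
              {
                  if (i < 0) { set.Add(path); return; }
                  set.Add(path.Substring(0, i));
              }
          }
      }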

    Read the article

  • array and array_view from amp.h

    - by Daniel Moth
This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it!

Overview

The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T, of N dimensions, specified at compile time (and you can later access the number of dimensions via the rank property). If N is not specified, it is assumed to be 1 (i.e. the single-dimensional case). They are rectangular (not jagged). The difference between them is that array is a container of data, whereas array_view is a wrapper of a container of data. So in that respect, array behaves like an STL container, whereas the closest thing an array_view behaves like is an STL iterator (albeit with random access and allowing you to view more than one element at a time!). The data in the array (whether provided at creation time or added later) resides on an accelerator (which is specified at creation time either explicitly by the developer, or set to the default accelerator at creation time by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or on another accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements which differ by one in the least significant dimension of the array_view are adjacent in memory. array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value (into the lambda you pass to the parallel_for_each call).

Creating array and array_view objects and relevant properties

You can create array_view objects from other array_view objects of the same rank and element type (shallow copy, also possible via the assignment operator) so they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type, e.g.   array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3 Note: Unlike the constructors above, which can be called anywhere, the ones in the rest of this section can only be called from CPU code. You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from other array_view objects, e.g.   array<float,2> a(b); // b can be another array or array_view of floats with rank=2 To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Note that instead of an explicit extent object, there are convenience overloads when N<=3 so you can specify 1, 2, or 3 integers (depending on the array's rank) and thus have the extent created for you under the covers. At any point, you can access the array's extent through the extent property. The exact same thing applies to array_view (extent as constructor parameters, incl. convenience overloads, and property). While passing only an extent object to create an array is enough (it means that the array will be written to later), it is not enough for the array_view case, which must always wrap over some other container (on which it relies for storage space and actual content). So in addition to the extent object (that describes the shape you'd like to be viewing/accessing that data through), to create an array_view from another container (e.g.
std::vector) you must pass in the container itself (which must expose .data() and .size() methods, e.g. like std::array does), e.g.   array_view<int,2> aaa(myExtent, myContainerOfInts); Similarly, you can create an array_view from a raw pointer to data plus an extent object. Back to the array case: to optionally initialize the array with data, you can pass an iterator pointing to the start (and optionally one pointing to the end of the source container), e.g.   array<double,1> a(5, myVector.begin(), myVector.end()); We saw that arrays are bound to an accelerator at creation time, so in case you don't want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property. Note that at the point of initializing an array with data, a synchronous copy of the data takes place to the accelerator, and to copy any data back we'll see that an explicit copy call is required. This does not happen with the array_view, where copying is on demand...

refresh and synchronize on array_view

Note that in the previous section on constructors, unlike the array case, there was no overload that accepted an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand copying that we cover next. The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happens: you access the data through the array_view on the CPU side (i.e. using indexing, which we cover below), or you explicitly call the array_view's synchronize method on the CPU (this also gets called in the array_view's destructor for you). Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes unless you call its refresh method. Finally, note that if you create an array_view of const T, then the data is copied to the accelerator on demand, but it does not get copied back, e.g.   array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU.

copy_to, data, and global copy/copy_async functions

Both array and array_view expose two copy_to overloads that allow copying them to another array, or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads). Also, both array and array_view expose a data method, to get a raw pointer to the underlying data of the array or array_view, e.g. float* f = myArr.data();. Note that for array_view, this only works when the rank is equal to 1, due to the data only being contiguous in one dimension as covered in the overview section.
Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays and array_views and iterators etc. Just browse IntelliSense or amp.h directly for the full set. Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data.

indexing into array and array_view plus projection

Reading or writing data elements of an array is only legal when the code executes on the same accelerator as where the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing as described next. To access (or set the value of) an element, you can index into it by passing it an index object via the subscript operator. Furthermore, if the rank is 3 or less, you can use the function ( ) operator to pass integer values instead of having to use an index object. e.g. array<float,2> arr(someExtent, someIterator); //or array_view<float,2> arr(someExtent, someContainer); index<2> idx(5,4); float f1 = arr[idx]; float f2 = arr(5,4); // f2 == f1 //and the reverse for assigning, e.g. arr(idx[0], 7) = 6.9; Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator, which results in a projection of the data, and (for both array and array_view) you get back an array_view of rank N-1 (or if the rank was 1, you get back just the element at that location).

Not Covered

In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog. Comments about this post by Daniel Moth welcome at the original blog.
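Pulling the pieces above together, here is a minimal consolidated sketch of the array_view workflow (written against the developer-preview API as described in this post, i.e. grid and restrict(direct3d); double_all and v are illustrative names):

    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    void double_all(std::vector<int>& v) {
        // Wrap the existing CPU data; nothing is copied yet.
        array_view<int, 1> av(static_cast<int>(v.size()), v);
        // The view is captured by value; its data is copied to the
        // accelerator on demand when the kernel touches it.
        parallel_for_each(av.grid, [=](index<1> idx) restrict(direct3d) {
            av[idx] = av[idx] * 2;
        });
        // Make the results visible through v again before returning
        // (the destructor would otherwise do this for us).
        av.synchronize();
    }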

    Read the article

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only: Addition (c = a + b) Subtraction (c = a - b) Division (c = a / b) Multiplication (c = a * b) Modulus (c = a % b) Minimum (c = min(a, b)) Maximum (c = max(a, b)) Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.) Jumps (goto, for, etc.) The bitwise operations I want to be able to support are: Or (c = a | b) And (c = a & b) Xor (c = a ^ b) Left Shift (c = a << b) Right Shift (c = a >>> b) (All integers are signed so this is a problem) Signed Shift (c = a >> b) One's Complement (a = ~b) (Already found a solution, see below) Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data and instruction memory are abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code: // Bitwise one's complement b = ~a; // Arithmetic one's complement b = -1 - a; I also remember the old shift hack when dividing by a power of two, so the bitwise shift can be expressed as: // Bitwise left shift b = a << 4; // Arithmetic left shift b = a * 16; // 2^4 = 16 // Signed right shift b = a >>> 4; // Arithmetic right shift b = a / 16; For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications: b = 1; switch (a) { case 15: b = b * 2; case 14: b = b * 2; // ... exploiting fallthrough (instruction memory is orders of magnitude larger) case 2: b = b * 2; case 1: b = b * 2; } Or a Set & Jump approach: switch (a) { case 15: b = 32768; break; case 14: b = 16384; break; // ... exploiting the fact that a jump is faster than one additional mul // at the cost of doubling the instruction memory footprint. case 2: b = 4; break; case 1: b = 2; break; }
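For the missing operators, one workable (if slow) approach is to peel off one bit at a time with division and modulus; a minimal sketch, assuming non-negative values (negative inputs would first need a two's-complement adjustment, which is not shown here):

    // AND/OR/XOR for non-negative 16-bit values using only
    // +, -, *, /, %, comparisons and jumps. Each pass extracts the
    // lowest remaining bit of both operands and recombines it with
    // the current bit weight p.
    int bitwise_op(int a, int b, int op) {    // op: 0 = AND, 1 = OR, 2 = XOR
        int result = 0;
        int p = 1;                            // current bit weight: 1, 2, 4, ...
        for (int i = 0; i < 15; i++) {        // 15 value bits in a signed 16-bit int
            int abit = (a / p) % 2;
            int bbit = (b / p) % 2;
            int bit;
            if (op == 0)      bit = (abit < bbit) ? abit : bbit;  // min == AND
            else if (op == 1) bit = (abit > bbit) ? abit : bbit;  // max == OR
            else              bit = (abit + bbit) % 2;            // sum mod 2 == XOR
            result = result + bit * p;
            p = p * 2;                        // p * 2 plays the role of p << 1
        }
        return result;
    }

Shifts then reduce to multiplying or dividing by the power-of-two table that the question already sketches.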

    Read the article

  • DoubleBuffering in Java

    - by DDP
    Hello there, I'm having some trouble implementing DoubleBuffer into my program. Before you faint from the wall of text, you should know that a lot of it is there just in case you need to know. The actual place where I think I'm having problems is in one method. I've recently looked up a tutorial on the gpwiki about double buffering, and decided to try and implement the code they had into the code I have that I'm trying to implement doublebuffer in. I get the following error: "java.lang.IllegalStateException: Component must have a valid peer". I don't know if it makes any difference if you know it or not, but the following is the code with the main method. This is just a Frame that displays the ChronosDisplay class inside it. I omitted irrelevant code with "..." public class CDM extends JFrame { public CDM(String str) { super("CD:M - "+str); try { ... ChronosDisplay theGame = new ChronosDisplay(str); ((Component)theGame).setFocusable(true); add(theGame); } catch(Exception e) { System.out.println("CDM ERROR: " +e); } } public static void main( String args[] ) { CDM run = new CDM("DP_Mini"); } } Here is the code where I think the problem resides (I think the problem is in the paint() method). This class is displayed in the CDM class public class ChronosDisplay extends Canvas implements Runnable { String mapName; public ChronosDisplay (String str) { mapName = str; new Thread(this).start(); setVisible(true); createBufferStrategy(2); } public void paint( Graphics window ) { BufferStrategy b = getBufferStrategy(); Graphics g = null; window.setColor(Color.white); try { g = b.getDrawGraphics(); paintMap(g); paintUnits(g); paintBullets(g); } finally { g.dispose(); } b.show(); Toolkit.getDefaultToolkit().sync(); } public void paintMap( Graphics window ) { TowerMap m = new TowerMap(); try { m = new TowerMap(mapName); for(int x=0; x<m.getRows()*50; x+=50) { for(int y = 0; y<m.getCols()*50; y+=50) { int tileType = m.getLocation(x/50,y/50); Image img; if(tileType == 0) { Tile0 t = new Tile0(x,y); t.draw(window); } ...// More similar if statements for other integers } catch(Exception e) ... } ...// Additional methods not shown here public void run() { try { while(true) { Thread.currentThread().sleep(20); repaint(); } } catch(Exception e) ... } } If you're curious (I doubt it matters), the draw() method in the Tile0 class is: public void draw( Graphics window ) { window.drawImage(img,getX(),getY(),50,50,null); } Any pointers, tips, or solutions are greatly appreciated. Thanks for your time! :D

    Read the article

  • How to have struct members accessible in different ways

    - by Paul J. Lucas
    I want to have a structure token that has start/end pairs for position, sentence, and paragraph information. I also want the members to be accessible in two different ways: as a start/end pair and individually. Given: struct token { struct start_end { int start; int end; }; start_end pos; start_end sent; start_end para; typedef start_end token::*start_end_ptr; }; I can write a function, say distance(), that computes the distance between any of the three start/end pairs like: int distance( token const &i, token const &j, token::start_end_ptr mbr ) { return (j.*mbr).start - (i.*mbr).end; } and call it like: token i, j; int d = distance( i, j, &token::pos ); that will return the distance of the pos pair. But I can also pass &token::sent or &token::para and it does what I want. Hence, the function is flexible. However, now I also want to write a function, say max(), that computes the maximum value of all the pos.start or all the pos.end or all the sent.start, etc. If I add: typedef int token::start_end::*int_ptr; I can write the function like: int max( list<token> const &l, token::int_ptr p ) { int m = numeric_limits<int>::min(); for ( list<token>::const_iterator i = l.begin(); i != l.end(); ++i ) { int n = (*i).pos.*p; // NOT WHAT I WANT: It hard-codes 'pos' if ( n > m ) m = n; } return m; } and call it like: list<token> l; l.push_back( i ); l.push_back( j ); int m = max( l, &token::start_end::start ); However, as indicated in the comment above, I do not want to hard-code pos. I want the flexibility of accessible the start or end of any of pos, sent, or para that will be passed as a parameter to max(). I've tried several things to get this to work (tried using unions, anonymous unions, etc.) but I can't come up with a data structure that allows the flexibility both ways while having each value stored only once. Any ideas how to organize the token struct so I can have what I want? Attempt at clarification Given struct of pairs of integers, I want to be able to "slice" the data in two distinct ways: By passing a pointer-to-member of a particular start/end pair so that the called function operates on any pair without knowing which pair. The caller decides which pair. By passing a pointer-to-member of a particular int (i.e., only one int of any pair) so that the called function operates on any int without knowing either which int or which pair said int is from. The caller decides which int of which pair. Another example for the latter would be to sum, say, all para.end or all sent.start. Also, and importantly: for #2 above, I'd ideally like to pass only a single pointer-to-member to reduce the burden on the caller. Hence, me trying to figure something out using unions.
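One approach that gives both "slices" without storing any value twice is to compose two pointers-to-member: one selecting the pair, one selecting the int inside the pair. A sketch along those lines, reusing the token struct from the question (max_field is a hypothetical name; the cost is that the caller passes two pointers rather than the single one wished for):

    #include <limits>
    #include <list>

    int max_field( std::list<token> const &l,
                   token::start_end_ptr pair,          // which pair: pos, sent or para
                   int token::start_end::*field ) {    // which int: start or end
        int m = std::numeric_limits<int>::min();
        for ( std::list<token>::const_iterator i = l.begin(); i != l.end(); ++i ) {
            int n = ((*i).*pair).*field;               // apply both pointers in turn
            if ( n > m ) m = n;
        }
        return m;
    }

    // usage: int m = max_field( l, &token::sent, &token::start_end::start );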

    Read the article

  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
Problem: We have an equal number of men and women. Each man has a preference score toward each woman, and each woman has one toward each man. Each of the men and women has certain interests, and based on the interests we calculate the preference scores. So initially we have an input file with x columns. The first column is the person (man/woman) id; ids are just the numbers 0..n (the first half are men and the next half women). The remaining x-1 columns hold the interests; these are integers too. Now, using this n by x-1 matrix, we have come up with an n by n/2 matrix. The new matrix has all men and women as its rows and scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know the id of the person each score belongs to after sorting. So here I wanted to use a hash table. Once we get the scores we need to make up pairs, for which we need to follow some rules. My trouble is with the second matrix of n by n/2, which needs to tell us which man/woman has how much preference for a woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the 2nd preferred, and so on for each man/woman. I hope to get good suggestions on the data structures I use. I prefer PHP or Perl. Thank you in advance. Hey guys, this is not homework. It is a slightly modified version of the stable marriage algorithm. I have a working solution; I am only working on optimizing my code. More info: it is very similar to the stable marriage problem, but here we need to calculate the scores based on the interests they share. So I have implemented it the way you see in the wiki page http://en.wikipedia.org/wiki/Stable_marriage_problem. My problem is not solving the problem; I solved it and can run it. I am just trying to have a better solution, so I am asking for suggestions on the type of data structure to use. Conceptually, I tried using an array of hashes, where the array index gives the person id and the hash in it maps id's => score's in sorted order. I initially start with an array of hashes; I then sort the hashes on values, but I could not store the sorted hashes back in an array. So I just stored the keys after sorting and used these to get the values from my initial unsorted hashes. Can we store the hashes after sorting? Can you suggest a better structure?
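The poster asks for PHP or Perl; purely to illustrate the shape of the data structure (not the language), here is the idea in C++ terms: keep, for each person, a list of (score, id) pairs and sort it descending, so the rank order and the partner ids travel through the sort together instead of living in separate hashes:

    #include <algorithm>
    #include <utility>
    #include <vector>

    // prefs[p] holds (score, candidate id) pairs for person p.
    typedef std::vector<std::pair<int,int> > PrefList;
    std::vector<PrefList> prefs;

    void sort_preferences() {
        for (std::size_t p = 0; p < prefs.size(); ++p)
            std::sort(prefs[p].rbegin(), prefs[p].rend()); // descending by score
    }

    // prefs[p][0].second is now p's first choice, prefs[p][1].second the next, ...

In Perl the equivalent would be an array of arrays of [score, id] pairs sorted with a custom comparator, which sidesteps the "can't store a sorted hash" problem entirely, since arrays keep their order.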

    Read the article

  • How to find the insertion point in an array using binary search?

    - by ????
    The basic idea of binary search in an array is simple, but it might return an "approximate" index if the search fails to find the exact item. (we might sometimes get back an index for which the value is larger or smaller than the searched value). For looking for the exact insertion point, it seems that after we got the approximate location, we might need to "scan" to left or right for the exact insertion location, so that, say, in Ruby, we can do arr.insert(exact_index, value) I have the following solution, but the handling for the part when begin_index >= end_index is a bit messy. I wonder if a more elegant solution can be used? (this solution doesn't care to scan for multiple matches if an exact match is found, so the index returned for an exact match may point to any index that correspond to the value... but I think if they are all integers, we can always search for a - 1 after we know an exact match is found, to find the left boundary, or search for a + 1 for the right boundary.) My solution: DEBUGGING = true def binary_search_helper(arr, a, begin_index, end_index) middle_index = (begin_index + end_index) / 2 puts "a = #{a}, arr[middle_index] = #{arr[middle_index]}, " + "begin_index = #{begin_index}, end_index = #{end_index}, " + "middle_index = #{middle_index}" if DEBUGGING if arr[middle_index] == a return middle_index elsif begin_index >= end_index index = [begin_index, end_index].min return index if a < arr[index] && index >= 0 #careful because -1 means end of array index = [begin_index, end_index].max return index if a < arr[index] && index >= 0 return index + 1 elsif a > arr[middle_index] return binary_search_helper(arr, a, middle_index + 1, end_index) else return binary_search_helper(arr, a, begin_index, middle_index - 1) end end # for [1,3,5,7,9], searching for 6 will return index for 7 for insertion # if exact match is found, then return that index def binary_search(arr, a) puts "\nSearching for #{a} in #{arr}" if DEBUGGING return 0 if arr.empty? 
result = binary_search_helper(arr, a, 0, arr.length - 1) puts "the result is #{result}, the index for value #{arr[result].inspect}" if DEBUGGING return result end arr = [1,3,5,7,9] b = 6 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9,11] b = 6 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9] b = 60 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9,11] b = 60 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9] b = -60 arr.insert(binary_search(arr, b), b) p arr arr = [1,3,5,7,9,11] b = -60 arr.insert(binary_search(arr, b), b) p arr arr = [1] b = -60 arr.insert(binary_search(arr, b), b) p arr arr = [1] b = 60 arr.insert(binary_search(arr, b), b) p arr arr = [] b = 60 arr.insert(binary_search(arr, b), b) p arr and result: Searching for 6 in [1, 3, 5, 7, 9] a = 6, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2 a = 6, arr[middle_index] = 7, begin_index = 3, end_index = 4, middle_index = 3 a = 6, arr[middle_index] = 5, begin_index = 3, end_index = 2, middle_index = 2 the result is 3, the index for value 7 [1, 3, 5, 6, 7, 9] Searching for 6 in [1, 3, 5, 7, 9, 11] a = 6, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2 a = 6, arr[middle_index] = 9, begin_index = 3, end_index = 5, middle_index = 4 a = 6, arr[middle_index] = 7, begin_index = 3, end_index = 3, middle_index = 3 the result is 3, the index for value 7 [1, 3, 5, 6, 7, 9, 11] Searching for 60 in [1, 3, 5, 7, 9] a = 60, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2 a = 60, arr[middle_index] = 7, begin_index = 3, end_index = 4, middle_index = 3 a = 60, arr[middle_index] = 9, begin_index = 4, end_index = 4, middle_index = 4 the result is 5, the index for value nil [1, 3, 5, 7, 9, 60] Searching for 60 in [1, 3, 5, 7, 9, 11] a = 60, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2 a = 60, arr[middle_index] = 9, begin_index = 3, end_index = 5, middle_index = 4 a = 60, arr[middle_index] = 11, begin_index = 5, end_index = 5, middle_index = 5 the result is 6, the index for value nil [1, 3, 5, 7, 9, 11, 60] Searching for -60 in [1, 3, 5, 7, 9] a = -60, arr[middle_index] = 5, begin_index = 0, end_index = 4, middle_index = 2 a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 1, middle_index = 0 a = -60, arr[middle_index] = 9, begin_index = 0, end_index = -1, middle_index = -1 the result is 0, the index for value 1 [-60, 1, 3, 5, 7, 9] Searching for -60 in [1, 3, 5, 7, 9, 11] a = -60, arr[middle_index] = 5, begin_index = 0, end_index = 5, middle_index = 2 a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 1, middle_index = 0 a = -60, arr[middle_index] = 11, begin_index = 0, end_index = -1, middle_index = -1 the result is 0, the index for value 1 [-60, 1, 3, 5, 7, 9, 11] Searching for -60 in [1] a = -60, arr[middle_index] = 1, begin_index = 0, end_index = 0, middle_index = 0 the result is 0, the index for value 1 [-60, 1] Searching for 60 in [1] a = 60, arr[middle_index] = 1, begin_index = 0, end_index = 0, middle_index = 0 the result is 1, the index for value nil [1, 60] Searching for 60 in [] [60]
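For comparison, the "insertion point" the poster is computing is exactly what a lower-bound search returns; a sketch of the same idea in C++, where the standard library already ships it as std::lower_bound:

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> arr;
        int vals[] = {1, 3, 5, 7, 9};
        arr.assign(vals, vals + 5);

        // First position where 6 can be inserted while keeping arr sorted.
        std::vector<int>::iterator it =
            std::lower_bound(arr.begin(), arr.end(), 6);
        arr.insert(it, 6);              // arr is now 1 3 5 6 7 9
        return 0;
    }

lower_bound also lands on the leftmost matching position when duplicates exist, which matches the "left boundary" discussion above (std::upper_bound gives the right boundary).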

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

The code for serial and accelerated

Consider a naïve (or brute force) serial implementation of matrix multiplication
 0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
 1: {
 2: for (int row = 0; row < M; row++)
 3: {
 4: for (int col = 0; col < N; col++)
 5: {
 6: float sum = 0.0f;
 7: for(int i = 0; i < W; i++)
 8: sum += vA[row * W + i] * vB[i * N + col];
 9: vC[row * N + col] = sum;
 10: }
 11: }
 12: }
We notice that each loop iteration is independent of the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included at the top of your file #include <amp.h>
 13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
 14: {
 15: concurrency::array_view<const float,2> a(M, W, vA);
 16: concurrency::array_view<const float,2> b(W, N, vB);
 17: concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
 18: concurrency::parallel_for_each(c.grid,
 19: [=](concurrency::index<2> idx) restrict(direct3d) {
 20: int row = idx[0]; int col = idx[1];
 21: float sum = 0.0f;
 22: for(int i = 0; i < W; i++)
 23: sum += a(row, i) * b(i, col);
 24: c[idx] = sum;
 25: });
 26: }
First a visual comparison, just for fun: The beginning and end are the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper.

Using array_view and extent

When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one: extent<2> e(M, W); array_view<const float, 2> a(e, vA); In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two-dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e.
with the integers M and W) – aren't you loving C++ AMP already? Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not being copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

The kernel dispatch

On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

The kernel itself

The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a,b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as: c(idx[0], idx[1])=sum  or  c(row, col)=sum instead of the simpler c[idx]=sum

Targeting a specific accelerator

Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on.
So there would be some code like this anywhere before line 18: vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators(); accelerator acc = accs[0]; …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become: concurrency::parallel_for_each(acc.default_view, c.grid, ...and the rest of your code remains the same… how simple is that? Comments about this post by Daniel Moth welcome at the original blog.
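To round the example off, a hedged usage sketch of the function above (the sizes and fill values are made up for illustration):

    // Multiply a 256x512 matrix A by a 512x128 matrix B into C
    // (M = 256, N = 128, W = 512, flattened row-major as in the post).
    std::vector<float> vA(256 * 512, 1.0f);
    std::vector<float> vB(512 * 128, 2.0f);
    std::vector<float> vC(256 * 128);
    MatrixMultiplySimple(vC, vA, vB, 256, 128, 512);
    // Each element of vC should now hold 512 * (1.0f * 2.0f) = 1024.0f.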

    Read the article

  • Passing Strings by Ref

    - by SGWellens
    Humbled yet again…DOH! No matter how much experience you acquire, no matter how smart you may be, no matter how hard you study, it is impossible to keep fully up to date on all the nuances of the technology we are exposed to. There will always be gaps in our knowledge: Little 'dead zones' of uncertainty. For me, this time, it was about passing string parameters to functions. I thought I knew this stuff cold. First, a little review... Value Types and Ref Integers and structs are value types (as opposed to reference types). When declared locally, their memory storage is on the stack; not on the heap. When passed to a function, the function gets a copy of the data and works on the copy. If a function needs to change a value type, you need to use the ref keyword.  Here's an example:     // ---- declaration -----------------     public struct MyStruct    {        public string StrTag;    }     // ---- functions -----------------------     void SetMyStruct(MyStruct myStruct)     // pass by value    {        myStruct.StrTag = "BBB";    }     void SetMyStruct(ref MyStruct myStruct)  // pass by ref    {        myStruct.StrTag = "CCC";    }     // ---- Usage -----------------------     protected void Button1_Click(object sender, EventArgs e)    {        MyStruct Data;        Data.StrTag = "AAA";         SetMyStruct(Data);        // Data.StrTag is still "AAA"         SetMyStruct(ref Data);        // Data.StrTag is now "CCC"    } No surprises here. All value types like ints, floats, datetimes, enums, structs, etc. work the same way. And now on to... Class Types and Ref     // ---- Declaration -----------------------------     public class MyClass    {        public string StrTag;    }     // ---- Functions ----------------------------     void SetMyClass(MyClass myClass)  // pass by 'value'    {        myClass.StrTag = "BBB";    }     void SetMyClass(ref MyClass myClass)   // pass by ref    {        myClass.StrTag = "CCC";    }     // ---- Usage ---------------------------------------     protected void Button2_Click(object sender, EventArgs e)    {        MyClass Data = new MyClass();        Data.StrTag = "AAA";         SetMyClass(Data);          // Data.StrTag is now "BBB"         SetMyClass(ref Data);        // Data.StrTag is now "CCC"    }  No surprises here either. Since Classes are reference types, you do not need the ref keyword to modify an object. What may seem a little strange is that with or without the ref keyword, the results are the same: The compiler knows what to do. So, why would you need to use the ref keyword when passing an object to a function? Because then you can change the reference itself…ie you can make it refer to a completely different object. Inside the function you can do: myClass = new MyClass() and the old object will be garbage collected and the new object will be returned to the caller. That ends the review. Now let's look at passing strings as parameters. The String Type and Ref Strings are reference types. So when you pass a String to a function, you do not need the ref keyword to change the string. Right? Wrong. Wrong, wrong, wrong. When I saw this, I was so surprised that I fell out of my chair. Getting up, I bumped my head on my desk (which really hurt). My bumping the desk caused a large speaker to fall off of a bookshelf and land squarely on my big toe. I was screaming in pain and hopping on one foot when I lost my balance and fell. I struck my head on the side of the desk (once again) and knocked myself out cold. 
When I woke up, I was in the hospital where, due to a database error (thanks Oracle), the doctors had put casts on both my hands. I'm typing this ever so slowly with just my ton..tong ..tongu…tongue. But I digress. Okay, the only true part of that story is that I was a bit surprised. Here is what happens passing a String to a function.     // ---- Functions ----------------------------     void SetMyString(String myString)   // pass by 'value'    {        myString = "BBB";    }     void SetMyString(ref String myString)  // pass by ref    {        myString = "CCC";    }     // ---- Usage ---------------------------------     protected void Button3_Click(object sender, EventArgs e)    {        String MyString = "AAA";         SetMyString(MyString);        // MyString is still "AAA"  What!!!!         SetMyString(ref MyString);        // MyString is now "CCC"    } What the heck. We should not have to use the ref keyword when passing a String because Strings are reference types. Why didn't the string change? What is going on?   I spent hours unsuccessfully researching this anomaly until, finally, I had a Eureka moment: This code: String MyString = "AAA"; is semantically equivalent to this code (note this code doesn't actually compile): String MyString = new String(); MyString = "AAA"; Key Point: In the function, the copy of the reference is pointed to a new object and THAT object is modified. The original reference and what it points to are unchanged. You can simulate this behavior by modifying the class example code to look like this:      void SetMyClass(MyClass myClass)  // call by 'value'    {        //myClass.StrTag = "BBB";        myClass = new MyClass();        myClass.StrTag = "BBB";    } Now when you call the SetMyClass function without using ref, the parameter is unchanged...just like the string example.  I hope someone finds this useful. Steve Wellens
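For readers more at home in C++, the same "Eureka" can be seen with a pointer passed by value: a C# reference parameter behaves like the pointer below, and ref like the reference-to-pointer (a hedged analogy for intuition, not a claim about CLR internals):

    #include <iostream>

    void ReseatCopy(const char* p)  { p = "BBB"; }  // rebinds a local copy: caller unaffected
    void ReseatRef(const char*& p)  { p = "CCC"; }  // rebinds through a reference: caller sees it

    int main() {
        const char* s = "AAA";
        ReseatCopy(s);
        std::cout << s << "\n";   // still AAA
        ReseatRef(s);
        std::cout << s << "\n";   // now CCC
        return 0;
    }

Assigning a new string literal to the parameter reseats the pointer, just as assigning "BBB" inside SetMyString reseats the copied reference.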

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features, and my theories (some of my theories are untested and are based on thought experiments):

1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of callee size, i.e. it is easy to consider inlining simple property fetches (getters / setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.

3) First-class continuations. There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other implementing the runtime to use continuation passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficiency (the locality of the stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct.

4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily. My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware.
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing the top of stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up and cache at runtime; nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
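To make the call-site-caching point in item 1 concrete, here is a minimal monomorphic inline cache sketch in C++ (all the type names are invented for illustration; real VMs also invalidate these caches when methods are added or recompiled, which is exactly where multi-methods hurt):

    #include <cstdio>

    struct TypeInfo { const char* name; };
    struct Object { const TypeInfo* type; };
    typedef void (*Method)(Object* self);

    // Stand-in for the real (expensive) method table walk.
    static void generic_method(Object* self) { std::printf("%s\n", self->type->name); }
    static Method full_lookup(const TypeInfo*, int /*selector*/) { return &generic_method; }

    // One CallSite lives at each dynamic call site in the generated code.
    struct CallSite { const TypeInfo* cached_type; Method cached_method; };

    void call(CallSite& cs, Object* recv, int selector) {
        if (recv->type != cs.cached_type) {        // miss: full dispatch, then remember it
            cs.cached_type = recv->type;
            cs.cached_method = full_lookup(recv->type, selector);
        }
        cs.cached_method(recv);                    // hit: one compare plus an indirect call
    }

With multi-method dispatch the cache key must cover every argument's type, not just the receiver's, which is why the fast path gets fatter and inlining gets harder.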

    Read the article

  • How to implement dynamic binary search for search and insert operations of n element (C or C++)

    - by iecut
The idea is to use multiple arrays, each of length 2^k, to store n elements, according to the binary representation of n. Each array is sorted, and different arrays are not ordered in any way relative to each other. In the above-mentioned data structure, SEARCH is carried out by a sequence of binary searches, one on each array. INSERT is carried out by a sequence of merges of arrays of the same length until an empty array is reached. More detail: let's suppose we have a vertical array of length 2^k, and to each node of that array there is attached a horizontal array of length 2^k. That is, to the first node of the vertical array a horizontal array of length 2^0=1 is connected, to the second node of the vertical array a horizontal array of length 2^1=2 is connected, and so on. So the first insert is carried out in the first horizontal array; on the second insert the first array becomes empty and the second horizontal array is full with 2 elements; on the third insert the 1st and 2nd horizontal arrays are filled, and so on. I implemented the normal binary search for search and insert as follows: int main() { int a[20]= {0}; int n, i, j, temp; int *beg, *end, *mid, target; printf(" enter the total integers you want to enter (make it less than 20):\n"); scanf("%d", &n); if (n >= 20) return 0; printf(" enter the integer array elements:\n" ); for(i = 0; i < n; i++) { scanf("%d", &a[i]); } // sort the loaded array (simple bubble sort) so binary search can work for(i = 0; i < n-1; i++) { for(j = 0; j < n-i-1; j++) { if (a[j+1] < a[j]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; } } } printf(" the sorted numbers are:"); for(i = 0; i < n; i++) { printf("%d ", a[i]); } // point to beginning and end of the array beg = &a[0]; end = &a[n]; // use n = one element past the loaded array! // mid should point somewhere in the middle of these addresses mid = beg += n/2; printf("\n enter the number to be searched:"); scanf("%d",&target); // binary search, there is an AND in the middle of while()!!! while((beg <= end) && (*mid != target)) { // is the target in lower or upper half? if (target < *mid) { end = mid - 1; // new end n = n/2; mid = beg += n/2; // new middle } else { beg = mid + 1; // new beginning n = n/2; mid = beg += n/2; // new middle } } // found the target? if (*mid == target) { printf("\n %d found!", target); } else { printf("\n %d not found!", target); } getchar(); // trap enter getchar(); // wait return 0; } Could anyone please suggest how to modify this program, or a new program, to implement the dynamic binary search that works as explained above?
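Rather than modifying the single-array program, the structure described at the top (one sorted array per set bit of n) wants its own insert/search pair; a minimal sketch of that scheme, sometimes called a Bentley-Saxe or binomial list:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // levels[k] is either empty or holds a sorted array of exactly 2^k elements.
    std::vector<std::vector<int> > levels;

    void insert(int x) {
        std::vector<int> carry(1, x);
        for (std::size_t k = 0; ; ++k) {
            if (k == levels.size()) levels.push_back(std::vector<int>());
            if (levels[k].empty()) { levels[k].swap(carry); return; }
            // Two full arrays of size 2^k merge into one of size 2^(k+1),
            // emptying this level -- exactly like binary addition with carry.
            std::vector<int> merged(levels[k].size() + carry.size());
            std::merge(levels[k].begin(), levels[k].end(),
                       carry.begin(), carry.end(), merged.begin());
            levels[k].clear();
            carry.swap(merged);
        }
    }

    bool search(int x) {
        // One binary search per non-empty level, as the description says.
        for (std::size_t k = 0; k < levels.size(); ++k)
            if (std::binary_search(levels[k].begin(), levels[k].end(), x))
                return true;
        return false;
    }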

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other. Choice/Method #1: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { SpriteBatch.Draw( MyLargeTexture, // One large 256 x 4096 texture new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(x, y, 32, 32), // Notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the first method is referencing one large texture many many times, each time using a small rectangle of this large texture to draw the appropriate tile image. Choice/Method #2: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { Texture2D tileTexture = map.GetTileTexture(x, y); // Getting a small 32 by 32 texture (different each iteration of the loop) SpriteBatch.Draw( tileTexture, new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // Notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32 Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the second method is drawing many small textures many times. The Question: Which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32by32 Texture - an image - which has to allocate memory for the R G B A pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory-load, then efficiency or whatever you experts believe.

    Read the article

  • Confusion on C++ Python extensions. Things like getting C++ values for python values.

    - by Matthew Mitchell
I wanted to convert some of my Python code to C++ for speed, but it's not as easy as simply making a C++ function and making a few function calls. I have no idea how to get a C++ integer from a Python integer object. I have an integer which is an attribute of an object that I want to use. I also have integers which are inside a list in the object which I need to use. I wanted to test making a C++ extension with this function: def setup_framebuffer(surface,flip=False): #Create texture if not done already if surface.texture is None: create_texture(surface) #Render child to parent if surface.frame_buffer is None: surface.frame_buffer = glGenFramebuffersEXT(1) glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, c_uint(int(surface.frame_buffer))) glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0) glPushAttrib(GL_VIEWPORT_BIT) glViewport(0,0,surface._scale[0],surface._scale[1]) glMatrixMode(GL_PROJECTION) glLoadIdentity() #Load the projection matrix if flip: gluOrtho2D(0,surface._scale[0],surface._scale[1],0) else: gluOrtho2D(0,surface._scale[0],0,surface._scale[1]) That function calls create_texture, so I will have to pass that function to the C++ function, which I will do with the third argument. This is what I have so far, while trying to follow the information in the Python documentation: #include <Python.h> #include <GL/gl.h> static PyMethodDef SpamMethods[] = { ... {"setup_framebuffer", setup_framebuffer, METH_VARARGS,"Loads a texture from a Surface object to the OpenGL framebuffer."}, ... {NULL, NULL, 0, NULL} /* Sentinel */ }; static PyObject * setup_framebuffer(PyObject *self, PyObject *args){ bool flip; PyObject *surface, *create_texture, *arglist, *result, *pyflip, *frame_buffer_id; if (!PyArg_ParseTuple(args, "OOO", &surface,&pyflip,&create_texture)){ return NULL; } if (PyObject_IsTrue(pyflip) == 1){ flip = true; }else{ flip = false; } Py_XINCREF(create_texture); //Create texture if not done already if(texture == NULL){ arglist = Py_BuildValue("(O)", surface); result = PyEval_CallObject(create_texture, arglist); Py_DECREF(arglist); if (result == NULL){ return NULL; } Py_DECREF(result); } Py_XDECREF(create_texture); //Render child to parent frame_buffer_id = PyObject_GetAttr(surface, Py_BuildValue("s","frame_buffer")); if(surface.frame_buffer == NULL){ glGenFramebuffersEXT(1,frame_buffer_id); } glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)); glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0); glPushAttrib(GL_VIEWPORT_BIT); glViewport(0,0,surface._scale[0],surface._scale[1]); glMatrixMode(GL_PROJECTION); glLoadIdentity(); //Load the projection matrix if (flip){ gluOrtho2D(0,surface._scale[0],surface._scale[1],0); }else{ gluOrtho2D(0,surface._scale[0],0,surface._scale[1]); } Py_INCREF(Py_None); return Py_None; } PyMODINIT_FUNC initcscalelib(void){ PyObject *module; module = Py_InitModule("cscalelib", SpamMethods); if (module == NULL){ return; } } int main(int argc, char *argv[]){ /* Pass argv[0] to the Python interpreter */ Py_SetProgramName(argv[0]); /* Initialize the Python interpreter. Required. */ Py_Initialize(); /* Add a static module */ initcscalelib(); }
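On the specific question of getting a C++ integer out of a Python integer object: with the Python 2-era C API used above, the usual pattern is PyObject_GetAttrString followed by PyInt_AsLong (PyLong_AsLong on Python 3). A hedged sketch of that piece, since the attribute name comes from the question but the surrounding error handling is mine:

    /* Pull surface.frame_buffer out as a C long. */
    PyObject *attr = PyObject_GetAttrString(surface, "frame_buffer");
    if (attr == NULL)
        return NULL;                       /* attribute missing: propagate the error */
    long fb = PyInt_AsLong(attr);          /* use PyLong_AsLong on Python 3 */
    Py_DECREF(attr);
    if (fb == -1 && PyErr_Occurred())
        return NULL;                       /* the attribute was not an integer */

Integers inside a list come out the same way: PyList_GetItem(list, i) hands back a borrowed reference to the element, which then goes through the same AsLong conversion. Note that a PyObject* cannot be dereferenced with C member syntax, so expressions like surface.frame_buffer in the listing above always need to become attribute calls like this one.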

    Read the article

  • [C++] Adding a string or char array to a byte vector

    - by xeross
I'm currently working on a class to create and read out packets sent through the network; so far I have it working with 16-bit and 8-bit integers (well, unsigned, but still). Now the problem is that I've tried numerous ways of copying a string over, but somehow the _buffer got mangled, it segfaulted, or the result was wrong. I'd appreciate it if someone could show me a working example. My current code can be seen below. Thanks, Xeross Main #include <iostream> #include <stdio.h> #include "Packet.h" using namespace std; int main(int argc, char** argv) { cout << "#################################" << endl; cout << "# Internal Use Only #" << endl; cout << "# Codename PACKETSTORM #" << endl; cout << "#################################" << endl; cout << endl; Packet packet = Packet(); packet.SetOpcode(0x1f4d); cout << "Current opcode is: " << packet.GetOpcode() << endl << endl; packet.add(uint8_t(5)) .add(uint16_t(4000)) .add(uint8_t(5)); for(uint8_t i=0; i<packet._buffer.size(); i++) printf("Byte %u = %x\n", i, packet._buffer[i]); uint8_t v1 = packet.readUint8(); uint16_t v2 = packet.readUint16(); uint8_t v3 = packet.readUint8(); // read in a defined order; printf arguments are evaluated in an unspecified order printf("\nReading them out: \n1 = %u\n2 = %u\n3 = %u\n", v1, v2, v3); return 0; } Packet.h #ifndef _PACKET_H_ #define _PACKET_H_ #include <iostream> #include <vector> #include <stdio.h> #include <stdint.h> #include <string.h> using namespace std; class Packet { public: Packet() : m_opcode(0), _buffer(0), _wpos(0), _rpos(0) {} Packet(uint16_t opcode) : m_opcode(opcode), _buffer(0), _wpos(0), _rpos(0) {} uint16_t GetOpcode() { return m_opcode; } void SetOpcode(uint16_t opcode) { m_opcode = opcode; } Packet& add(uint8_t value) { if(_buffer.size() < _wpos + 1) _buffer.resize(_wpos + 1); memcpy(&_buffer[_wpos], &value, 1); _wpos += 1; return *this; } Packet& add(uint16_t value) { if(_buffer.size() < _wpos + 2) _buffer.resize(_wpos + 2); memcpy(&_buffer[_wpos], &value, 2); _wpos += 2; return *this; } uint8_t readUint8() { uint8_t result = _buffer[_rpos]; _rpos += sizeof(uint8_t); return result; } uint16_t readUint16() { uint16_t result; memcpy(&result, &_buffer[_rpos], sizeof(uint16_t)); _rpos += sizeof(uint16_t); return result; } uint16_t m_opcode; std::vector<uint8_t> _buffer; protected: size_t _wpos; // Write position size_t _rpos; // Read position }; #endif // _PACKET_H_
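For the string part, one workable approach in the same style as the existing add overloads (add #include <string> to the header; the length prefix is my own convention so the reader knows how many bytes to pull back out, not part of the question's design):

    Packet& add(const std::string& value)
    {
        add(uint16_t(value.size()));                 // length prefix, see note above
        if(_buffer.size() < _wpos + value.size())
            _buffer.resize(_wpos + value.size());
        memcpy(&_buffer[_wpos], value.data(), value.size());
        _wpos += value.size();
        return *this;
    }

    std::string readString()
    {
        uint16_t len = readUint16();                 // read the length prefix back
        std::string result(reinterpret_cast<const char*>(&_buffer[_rpos]), len);
        _rpos += len;
        return result;
    }

A raw char array goes through the same memcpy once you know its length; the key point is that the buffer must carry the length somewhere, since the bytes themselves don't mark where the string ends.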

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering:

Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible?

Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically-allocated arrays?

I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence, I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)?

How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops?

Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode, even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of?

One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving method calls like strlen() out of the termination conditions of loops. Are there any optimizations like this one that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code?

Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.
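On the "small vector with a known upper bound" bullet, the cheap experiment is often just reserve(); the heavier one is a fixed-capacity buffer that lives with the object instead of on the heap. A minimal sketch of the latter (FixedVector is an illustrative name, not a library type; real small-vector classes handle non-trivial element types more carefully):

    #include <cassert>
    #include <cstddef>

    template <typename T, std::size_t MaxN>
    class FixedVector {
        T data_[MaxN];              // storage is inline: no allocator traffic at all
        std::size_t size_;
    public:
        FixedVector() : size_(0) {}
        void push_back(const T& v) { assert(size_ < MaxN); data_[size_++] = v; }
        T& operator[](std::size_t i)             { return data_[i]; }
        const T& operator[](std::size_t i) const { return data_[i]; }
        std::size_t size() const { return size_; }
    };

Profiling this against std::vector with an up-front reserve() answers the question for a given workload far better than folklore, since the vector's allocation may already be hoisted out of the hot path.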

    Read the article

  • C# : Console.Read() does not get the "right" input

    - by Daemonfire3002nd
    Hi there, I have the following code; the actual problem is in the "non-quoted" (i.e. not commented-out) part. I want to get the player count (max = 4), but when I ask via Console.Read() and enter any integer from 1 to 4, the value I get back is 48 + the digit. The only way I can get the "real" input is Console.ReadLine(), but that does not give me an integer – it returns a string, and I don't know how to convert a string of digits to an integer in C#, because I am new to the language and I've only found ToString(), not a ToNumber().

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace eve_calc_tool
        {
            class Program
            {
                int players;
                int units;
                int active_units;
                int inactive_units;
                int finished_units;
                int lastDiceNumber = 0;
                bool game_state;

                public static void Main(string[] args)
                {
                    int count_game = 0;
                    //Console.Title = "Mensch ärger dich nicht";
                    //Console.WriteLine("\tNeues Spiel wird");
                    //Console.WriteLine("\t...geladen");
                    //System.Threading.Thread.Sleep(5000);
                    //Console.Clear();
                    //Console.WriteLine("Neues Spiel wird gestartet, bitte haben sie etwas Geduld");
                    //Console.Title = "Spiel " + count_game.ToString();
                    //Console.Clear();
                    //string prevText = "Anzahl der Spieler: ";
                    //Console.WriteLine(prevText);
                    string read = Console.ReadLine();
                    /*Program game = new Program();
                    game.players = read;
                    game.setPlayers(game.players);
                    if (game.players > 0 && 5 > game.players)
                    {
                        game.firstRound();
                    }*/
                    string readagain = read;
                    Console.ReadLine();
                }

                /*
                bool setPlayers(int amount)
                {
                    players = amount;
                    if (players > 0)
                    {
                        return true;
                    }
                    else
                    {
                        return false;
                    }
                }

                bool createGame()
                {
                    inactive_units = units = getPlayers() * 4;
                    active_units = 0;
                    finished_units = 0;
                    game_state = true;
                    if (game_state == true)
                    {
                        return true;
                    }
                    else
                    {
                        return false;
                    }
                }

                int getPlayers()
                {
                    return players;
                }

                private static readonly Random random = new Random();
                private static readonly object syncLock = new object();

                public static int RandomNumber(int min, int max)
                {
                    lock (syncLock)
                    {
                        // synchronize
                        return random.Next(min, max);
                    }
                }

                int rollDice()
                {
                    lastDiceNumber = RandomNumber(1,6);
                    return lastDiceNumber;
                }

                int firstRound()
                {
                    int[] results = new int[getPlayers()];
                    for (int i = 0; i < getPlayers(); i++)
                    {
                        results[i] = rollDice();
                    }
                    Array.Sort(results);
                    return results[3];
                }
                */
            }
        }
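    For reference, a minimal editor's sketch (not the asker's code) of the two points involved: Console.Read() returns the raw character code – typing '1' yields 49, i.e. 48 + 1 – while int.Parse or int.TryParse converts a ReadLine() string into an int.

        using System;

        class ReadPlayers
        {
            static void Main()
            {
                // Console.Read() would give us a character code ('1' == 49),
                // which is the "48 + digit" effect described above.
                // Reading a whole line and parsing it avoids that:
                string line = Console.ReadLine();

                int players;
                if (int.TryParse(line, out players) && players > 0 && players < 5)
                {
                    Console.WriteLine("Players: " + players);
                }
                else
                {
                    Console.WriteLine("Please enter a number from 1 to 4.");
                }
            }
        }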

    Read the article

  • C++ class functions calling Fortran subroutine

    - by user2863626
    Okay, so I am trying to make my code work. It is a simple C++ program with a class CArray. This class has two members: the array size and the values. I want the main C++ program to create two instances of CArray. In the class CArray, I have a function AddArray( CArray ) that adds another array to the current array. The problem I am stuck with is that I want AddArray to do the addition of the two arrays in Fortran. I know, much more complicated, but that is what I need. I am having issues with linking the two inside the class code.

        #include <iostream>
        using namespace std;

        class CArray
        {
        public:
            CArray();
            ~CArray();

            int Size;
            int* Val;

            void SetSize( int );
            void SetValues();
            void GetArray();

            // Problem spot: extern "C" is not legal inside a class
            // definition, and member functions cannot have C linkage.
            extern "C"
            {
                void Add( int*, int*, int*, int* );
                void Subtract( int*, int*, int*, int* );
                void Muliply( int*, int*, int*, int* );
            }

            void AddArray( CArray );
            void SubtractArray( CArray );
            void MultiplyArray( CArray );
        };

    Also here is the CArray function file.

        #include "Array.h"
        #include <iostream>
        using namespace std;

        CArray::CArray()
        {
        }

        CArray::~CArray()
        {
        }

        void CArray::SetSize( int s )
        {
            Size = s;
            for ( int i=0; i<s; i++ )
            {
                Val = new int[Size];
            }
        }

        void CArray::SetValues()
        {
            for ( int i=0; i<Size; i++ )
            {
                cout << "Element " << i+1 << ": ";
                cin >> Val[i];
            }
        }

        void CArray::GetArray()
        {
            for ( int i=0; i<Size; i++ )
            {
                cout << Val[i] << " ";
            }
        }

        void CArray::AddArray( CArray a )
        {
            if ( Size == a.Size )
            {
                Add(&Val, &a.Val); // mismatched: Add expects four int* arguments
            }
            else
            {
                cout << "Array dimensions do not agree!" << endl;
            }
        }

        void CArray::SubtractArray( CArray a )
        {
            Subtract( &Val, &a, &Size, &a.Size ); // &Val is int**, &a is a CArray*
            GetArray();
        }

    Here is my Fortran code.

        module SubtractArrays
          use ico_c_binding   ! typo: should be iso_c_binding
          implicit none
        contains
          subroutine Subtract(a,b,s1,s2) bind(c,name='Subtract')
            integer s1,s2
            integer a(s1),b(s2)
            ! broken syntax below: missing then/end if and end do, and i is undeclared
            if ( s1.eq.s2 ) do i=1,s1
              a(i) = a(i) - b(i)
            end
            return
          end
        end

    If someone could just help me with setting me up to send arrays of integers from C++ classes to Fortran, I would greatly appreciate it! Thank you, Josh Derrick
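    For what it's worth, a minimal end-to-end sketch of the pattern – an editor's example, not the poster's code. The Fortran routine gets a bind(c) interface, and the C++ side declares it with extern "C" at namespace scope (never inside a class); every argument is passed by reference, which matches plain int pointers on the C++ side.

        ! subtract.f90 – a C-callable Fortran subroutine (sketch)
        module subtract_arrays
          use iso_c_binding
          implicit none
        contains
          subroutine subtract(a, b, s1, s2) bind(c, name='Subtract')
            integer(c_int) :: s1, s2          ! passed by reference from C++
            integer(c_int) :: a(s1), b(s2)    ! element-wise a := a - b
            integer :: i
            if (s1 == s2) then
              do i = 1, s1
                a(i) = a(i) - b(i)
              end do
            end if
          end subroutine subtract
        end module subtract_arrays

        // main.cpp – calling the Fortran routine from C++ (sketch)
        #include <iostream>

        // C-linkage declaration at namespace scope, not inside the class.
        extern "C" void Subtract(int* a, int* b, int* s1, int* s2);

        int main()
        {
            int a[3] = { 10, 20, 30 };
            int b[3] = {  1,  2,  3 };
            int n = 3;

            Subtract(a, b, &n, &n); // a becomes {9, 18, 27}

            for (int i = 0; i < n; ++i)
                std::cout << a[i] << ' ';
            std::cout << '\n';
        }

    Inside a member function the call would look like Subtract(Val, a.Val, &Size, &a.Size). With an assumed gfortran/g++ toolchain, something like "gfortran -c subtract.f90" followed by "g++ main.cpp subtract.o -lgfortran" links the two.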

    Read the article

  • How does java.util.Collections.contains() perform faster than a linear search?

    - by The111
    I've been fooling around with a bunch of different ways of searching collections, collections of collections, etc., doing lots of stupid little tests to verify my understanding. Here is one that boggles me (source code further below). In short, I generate N random integers and add them to a list. The list is NOT sorted. I then use contains() to look for a value in the list. I intentionally look for a value that I know won't be there, because I want to ensure that the entire list space is probed. I time this search. I then do another linear search manually, iterating through each element of the list and checking whether it matches my target. I also time this search. On average, the second search takes 33% longer than the first one. By my logic, the first search must also be linear, because the list is unsorted. The only possibility I could think of (which I immediately discard) is that Java is making a sorted copy of my list just for the search, but (1) I did not authorize that usage of memory space and (2) I would think that would result in MUCH more significant time savings with such a large N. So if both searches are linear, they should both take the same amount of time. Somehow the Collections class has optimized this search, but I can't figure out how. So... what am I missing?

        import java.util.*;

        public class ListSearch {
            public static void main(String[] args) {
                int N = 10000000; // number of ints to add to the list
                int high = 100;   // upper limit for random int generation
                List<Integer> ints;
                int target = -1;  // target will not be found, forces search of entire list space
                long start;
                long end;

                ints = new ArrayList<Integer>();
                start = System.currentTimeMillis();
                System.out.print("Generating new list... ");
                for (int i = 0; i < N; i++) {
                    ints.add(((int) (Math.random() * high)) + 1);
                }
                end = System.currentTimeMillis();
                System.out.println("took " + (end-start) + "ms.");

                start = System.currentTimeMillis();
                System.out.print("Searching list for target (method 1)... ");
                if (ints.contains(target)) {
                    // nothing
                }
                end = System.currentTimeMillis();
                System.out.println(" Took " + (end-start) + "ms.");
                System.out.println();

                ints = new ArrayList<Integer>();
                start = System.currentTimeMillis();
                System.out.print("Generating new list... ");
                for (int i = 0; i < N; i++) {
                    ints.add(((int) (Math.random() * high)) + 1);
                }
                end = System.currentTimeMillis();
                System.out.println("took " + (end-start) + "ms.");

                start = System.currentTimeMillis();
                System.out.print("Searching list for target (method 2)... ");
                for (Integer i : ints) {
                    if (i == target) {
                        // target is never present, so this branch never fires
                    }
                }
                end = System.currentTimeMillis();
                System.out.println(" Took " + (end-start) + "ms.");
            }
        }
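    For comparison, ArrayList.contains(o) is itself a plain linear scan: it delegates to indexOf(), which in the OpenJDK sources is essentially the loop below (a paraphrased sketch, not the literal source). It indexes the backing array directly rather than going through an Iterator, which – along with JIT warm-up favouring whichever method runs second – is a plausible source of the timing gap.

        // A paraphrased sketch of the loop behind ArrayList.contains()/indexOf();
        // not the literal OpenJDK source.
        public class ContainsSketch {
            static int indexOfSketch(Object[] elementData, int size, Object o) {
                if (o == null) {
                    for (int i = 0; i < size; i++)
                        if (elementData[i] == null)
                            return i;
                } else {
                    for (int i = 0; i < size; i++)
                        if (o.equals(elementData[i])) // equals(), no iterator
                            return i;
                }
                return -1; // not found
            }

            public static void main(String[] args) {
                Object[] data = { 1, 2, 3 };
                System.out.println(indexOfSketch(data, data.length, 3)); // prints 2
            }
        }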

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple 'seek' operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance.

    Measuring Seeks and Scans

    I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison – but there's no separate property that says 'singleton lookup' or 'range scan'.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers.

    In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats.

    Index Usage Stats

    The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us:

    - user_seeks – the number of times an Index Seek operator appears in an executed plan
    - user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan
    - user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan

    An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post.

    Index Operational Stats

    The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are:

    - range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure
    - singleton_lookup_count – the number of singleton lookups in a heap or index structure

    This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count.

    The Test Rig

    To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.
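    Before building the rig, it may help to see the shape of a raw query against the first of these DMVs – an editor's sketch using just the three QE counters described above, filtered to the current database; the rig below wraps the same idea in a stored procedure:

        -- Editor's sketch: read the three Query Executor counters for
        -- every index in the current database.
        SELECT
            [object]  = OBJECT_NAME(IUS.object_id),
            index_id  = IUS.index_id,
            seeks     = IUS.user_seeks,
            scans     = IUS.user_scans,
            lookups   = IUS.user_lookups
        FROM sys.dm_db_index_usage_stats AS IUS
        WHERE IUS.database_id = DB_ID();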
    Ok, first up we need a database:

        USE master;
        GO
        IF DB_ID('ScansAndSeeks') IS NOT NULL
            DROP DATABASE ScansAndSeeks;
        GO
        CREATE DATABASE ScansAndSeeks;
        GO
        USE ScansAndSeeks;
        GO
        ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE ScansAndSeeks SET
            AUTO_CLOSE OFF,
            AUTO_SHRINK OFF,
            AUTO_CREATE_STATISTICS OFF,
            AUTO_UPDATE_STATISTICS OFF,
            PARAMETERIZATION SIMPLE,
            READ_COMMITTED_SNAPSHOT OFF,
            RESTRICTED_USER;

    Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday.  The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It's just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests.

    The first stored procedure is called ResetTest:

        CREATE PROCEDURE dbo.ResetTest
            @Partitioned BIT = 'false'
        AS
        BEGIN
            SET NOCOUNT ON;

            IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
            BEGIN
                DROP TABLE dbo.Example;
            END;

            -- Test table is a heap
            -- Non-clustered primary key on 'key_col'
            CREATE TABLE dbo.Example
            (
                key_col INTEGER NOT NULL,
                data INTEGER NOT NULL,
                padding CHAR(100) NOT NULL DEFAULT SPACE(100),
                CONSTRAINT [PK dbo.Example key_col]
                    PRIMARY KEY NONCLUSTERED (key_col)
            );

            IF @Partitioned = 'true'
            BEGIN
                -- Enterprise, Trial, or Developer
                -- required for partitioning tests
                IF SERVERPROPERTY('EngineEdition') = 3
                BEGIN
                    EXECUTE ('
                        DROP TABLE dbo.Example;

                        IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'')
                            DROP PARTITION SCHEME PS;

                        IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = N''PF'')
                            DROP PARTITION FUNCTION PF;

                        CREATE PARTITION FUNCTION PF (INTEGER)
                        AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100);

                        CREATE PARTITION SCHEME PS
                        AS PARTITION PF ALL TO ([PRIMARY]);

                        CREATE TABLE dbo.Example
                        (
                            key_col INTEGER NOT NULL,
                            data INTEGER NOT NULL,
                            padding CHAR(100) NOT NULL DEFAULT SPACE(100),
                            CONSTRAINT [PK dbo.Example key_col]
                                PRIMARY KEY NONCLUSTERED (key_col)
                        )
                        ON PS (key_col);
                    ');
                END
                ELSE
                BEGIN
                    RAISERROR('Invalid SKU for partition test', 16, 1);
                    RETURN;
                END;
            END;

            -- Non-unique non-clustered index on the 'data' column
            CREATE NONCLUSTERED INDEX [IX dbo.Example data]
            ON dbo.Example (data);

            -- Add 100 rows
            INSERT dbo.Example WITH (TABLOCKX)
                (key_col, data)
            SELECT
                key_col = V.number,
                data = V.number
            FROM master.dbo.spt_values AS V
            WHERE
                V.[type] = N'P'
                AND V.number BETWEEN 1 AND 100;
        END;
        GO

    The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs:

        CREATE PROCEDURE dbo.ShowStats
            @Partitioned BIT = 'false'
        AS
        BEGIN
            -- Index Usage Stats DMV (QE)
            SELECT
                index_name = ISNULL(I.name, I.type_desc),
                scans = IUS.user_scans,
                seeks = IUS.user_seeks,
                lookups = IUS.user_lookups
            FROM sys.dm_db_index_usage_stats AS IUS
            JOIN sys.indexes AS I ON
                I.object_id = IUS.object_id
                AND I.index_id = IUS.index_id
            WHERE
                IUS.database_id = DB_ID(N'ScansAndSeeks')
                AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U')
            ORDER BY I.index_id;

            -- Index Operational Stats DMV (SE)
            IF @Partitioned = 'true'
                SELECT
                    index_name = ISNULL(I.name, I.type_desc),
                    partitions = COUNT(IOS.partition_number),
                    range_scans = SUM(IOS.range_scan_count),
                    single_lookups = SUM(IOS.singleton_lookup_count)
                FROM sys.dm_db_index_operational_stats
                    (
                    DB_ID(N'ScansAndSeeks'),
                    OBJECT_ID(N'dbo.Example', N'U'),
                    NULL,
                    NULL
                    ) AS IOS
                JOIN sys.indexes AS I ON
                    I.object_id = IOS.object_id
                    AND I.index_id = IOS.index_id
                GROUP BY
                    I.index_id, -- Key
                    I.name,
                    I.type_desc
                ORDER BY I.index_id;
            ELSE
                SELECT
                    index_name = ISNULL(I.name, I.type_desc),
                    range_scans = SUM(IOS.range_scan_count),
                    single_lookups = SUM(IOS.singleton_lookup_count)
                FROM sys.dm_db_index_operational_stats
                    (
                    DB_ID(N'ScansAndSeeks'),
                    OBJECT_ID(N'dbo.Example', N'U'),
                    NULL,
                    NULL
                    ) AS IOS
                JOIN sys.indexes AS I ON
                    I.object_id = IOS.object_id
                    AND I.index_id = IOS.index_id
                GROUP BY
                    I.index_id, -- Key
                    I.name,
                    I.type_desc
                ORDER BY I.index_id;
        END;

    The final stored procedure, RunTest, executes a query written against the example table:

        CREATE PROCEDURE dbo.RunTest
            @SQL VARCHAR(8000),
            @Partitioned BIT = 'false'
        AS
        BEGIN
            -- No execution plan yet
            SET STATISTICS XML OFF;

            -- Reset the test environment
            EXECUTE dbo.ResetTest @Partitioned;

            -- Previous call will throw an error if a partitioned
            -- test was requested, but SKU does not support it
            IF @@ERROR = 0
            BEGIN
                -- IO statistics and plan on
                SET STATISTICS XML, IO ON;

                -- Test statement
                EXECUTE (@SQL);

                -- Plan and IO statistics off
                SET STATISTICS XML, IO OFF;

                EXECUTE dbo.ShowStats @Partitioned;
            END;
        END;

    The Tests

    The first test is a simple scan of the heap table:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example';

    The top result set comes from the Index Usage Stats DMV, so it is the Query Executor's (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.

    Let's try a single-value equality seek on a unique index next:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32';

    This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.

    Now for a single-value seek on the non-unique non-clustered index:

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32';

    QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.

    The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand):

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33';

    The plan looks the same, and there's no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I've added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.

    Now let's rewrite the query using BETWEEN:

        EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33';

    Notice the seek operator only has one predicate now – it's just a single range scan from 32 to 33 in the index – as the SE output shows.

    For the next test, we will look up four values in the key_col column:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)';

    Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.

    On to a more complex example:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8';

    This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a 'unique index' – it finds a row in the heap or cluster from a unique RID or clustering key – so that's why lookups are always singleton lookups, not range scans.

    Our next example shows what happens when a query plan operator is not executed at all:

        EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0';

    The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.

    This next example is 2008 and above only, I'm afraid:

        EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true';

    This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition.

    The final example for today is another seek on a heap – try to work out the output of the query before running it!

        EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true';

    Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan.

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

< Previous Page | 38 39 40 41 42 43  | Next Page >