Search Results

Search found 65160 results on 2607 pages for 'compile time constant'.


  • How to integrate the .gdf with a specific exe for Games Explorer

    - by Kraemer
    Hello, I want to create an installer for a game, and after that have an icon placed in Games Explorer on Windows Vista and Windows 7. I created the GDF (game definitions file), then built the script for the project and obtained the .h, GDF and .rc files. But I can't compile the .rc file with Visual Studio 2010 into an executable that I could then use to create the installer. After I set the executable path, an error pops up: "Could not load file or assembly 'Microsoft.VisualStudio.HpcDebugger.Impl, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified." Any ideas what I'm doing wrong? I should mention that I've never worked with the GDF Editor or Visual Studio before. Any answer would be highly appreciated. Thanks!

    Read the article

  • Minimize useless tweaking of a numeric app

    - by Potatoswatter
    I'm developing a numeric application (a nonlinear optimizer) with a zillion knobs to tweak, and the number is rising. It's not my first foray into this domain, but this time there are even more variables in the code and I'm on a tight schedule. I don't want to waste time fiddling: days or even months can be lost adjusting variables, recompiling, and reprocessing benchmark datasets. The resulting data is viewed and trouble spots are checked. The overall quality of the solution is reported by the program, but the meaning of the report could change over time. (The numeric units for the report are one thing I'm trying to nail down.) One main problem is organizing result files so that each can be identified with the specific code changes that produced it. Note-taking can be a pain; is there software to help with this? Are there agreed best practices for making this kind of development cycle move forward reliably? The solver package converges to its optimal solution with mechanical determination, but I'm all too familiar with the way an excess of design decisions can mire development.
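
    A minimal sketch of the bookkeeping meant above, assuming the code lives in a Git repository: stamp each result file with the current commit and the knob values, so every benchmark run is traceable to the code that produced it. The RunLog name and the metadata format are illustrative assumptions, not an existing tool.

        // Sketch: write a small metadata file next to each benchmark result,
        // recording the exact code revision and knob settings for that run.
        // Assumes the working tree is a Git repository.
        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;

        public static class RunLog
        {
            public static void Stamp(string resultPath, IDictionary<string, double> knobs)
            {
                using (var w = new StreamWriter(resultPath + ".meta"))
                {
                    w.WriteLine("commit: " + GitHead());
                    w.WriteLine("time:   " + DateTime.UtcNow.ToString("o"));
                    foreach (var knob in knobs)
                        w.WriteLine(knob.Key + " = " + knob.Value);
                }
            }

            private static string GitHead()
            {
                // Ask git for the current revision; no library dependency needed.
                var psi = new ProcessStartInfo("git", "rev-parse HEAD")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    string head = p.StandardOutput.ReadToEnd().Trim();
                    p.WaitForExit();
                    return head;
                }
            }
        }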

    Read the article

  • Is it true that first versions of C compilers ran for dozens of minutes and required swapping floppy disks between stages?

    - by sharptooth
    Inspired by this question. I heard that some very, very early versions of C compilers for personal computers (I guess around 1980) resided on two or three floppy disks, so in order to compile a program one had to first insert the disk with the "first pass" and run it, then change to the disk with the "second pass" and run that, then do the same for the "third pass". Each pass ran for dozens of minutes, so the developer lost lots of time over even a typo. How realistic is that claim? What were the actual figures and details?

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing every article, I have picked a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2006

    Find Stored Procedure Related to Table in Database – Search in All Stored Procedure
    In 2006 I wrote a small script which helps users find all the Stored Procedures (SPs) related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies.

    2007

    SQL SERVER – Versions, CodeNames, Year of Release
    1993 – SQL Server 4.21 for Windows NT
    1995 – SQL Server 6.0, codenamed SQL95
    1996 – SQL Server 6.5, codenamed Hydra
    1999 – SQL Server 7.0, codenamed Sphinx
    1999 – SQL Server 7.0 OLAP, codenamed Plato
    2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
    2003 – SQL Server 2000 64-bit, codenamed Liberty
    2005 – SQL Server 2005, codenamed Yukon (version 9.0)
    2008 – SQL Server 2008, codenamed Katmai (version 10.0)
    2012 – SQL Server 2012, codenamed Denali (version 11.0)

    Search String in Stored Procedure
    Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view, or any other detail. I have written a script to do the same in SQL Server 2000 and SQL Server 2005. This blog post is worth bookmarking. There is an alternative way to do the same as well; here is the example.

    2008

    SQL SERVER – Refresh Database Using T-SQL
    NO! Some questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No!

    SQL SERVER – Delete Backup History – Cleanup Backup History
    SQL Server stores the history of every backup ever taken in the msdb database, and the older history is often no longer required. The following stored procedure can be executed with a parameter specifying how many days of history to keep. In the example, 30 is passed to keep a month of history.

    2009

    Stored Procedure are Compiled on First Run – SP taking Longer to Run First Time
    Are stored procedures pre-compiled? Why does a stored procedure take longer to run the first time? This is a very common question, often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been debated forever. There is a misconception that stored procedures are pre-compiled. They are not: they are compiled only during the first run, and for every subsequent run the compiled plan is reused. Read the entire article for an example and demonstration.

    Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes
    This is one of the most important performance tuning lessons on my blog. I suggest you spend time this weekend reading it, and let me know what you think about the concepts I have demonstrated in the four-part series. Part 1 | Part 2 | Part 3 | Part 4
    A Seek Predicate is the operation that describes the b-tree portion of the Seek. A Predicate is the operation that describes the additional filter using non-key columns. Based on that description, it is very clear that a Seek Predicate is better than a Predicate, as it searches indexes, whereas with a Predicate the search is on non-key columns – which implies that the search is on the data in the pages themselves.

    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers one of the most spectacular features of SQL Server – policy-based management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with advantages. It can help you implement various policies for reliable configuration of the system, and it provides additional administration assistance to DBAs, helping them effortlessly manage various SQL Server tasks across the enterprise.

    2010

    Recycle Error Log – Create New Log file without Server Restart
    Once I observed a DBA restarting SQL Server whenever he needed a new error log file. This was funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file: you can run sp_cycle_errorlog and achieve the same result.

    Get Database Backup History for a Single Database
    Simple but effective script!

    Reducing CXPACKET Wait Stats for High Transactional Database
    The subject is very complex and I have done my best to simplify it. In simpler words: when a parallel plan is created for a SQL query, there are multiple threads for a single query, each dealing with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait: the threads that finish first have to wait for the slower thread to finish, and the wait registered by a completed thread is the CXPACKET Wait Stat.

    Information Related to DATETIME and DATETIME2
    There is quite a lot of confusion around DATETIME and DATETIME2, and DATETIME2 is one of the more underutilized datatypes in SQL Server. In this blog post I wrote a follow-up to my earlier datetime series, where I clarify a few of the concepts related to datetime.
    Difference Between GETDATE and SYSDATETIME
    Difference Between DATETIME and DATETIME2 – WITH GETDATE
    Difference Between DATETIME and DATETIME2

    2011

    Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic function CUME_DIST(), which provides the cumulative distribution value. It is very difficult to explain in words, so I attempt a small example to explain the function. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experiments.

    Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(), which return the first and the last value from a list. It is very difficult to explain in words, so I attempt a brief example. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experiments.

    OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    “Don’t you think there is bug in your first example where FIRST_VALUE is remain same but the LAST_VALUE is changing every line. I think the LAST_VALUE should be the highest value in the windows or set of result.”

    Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    You can see that rows 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. Now, if these functions worked on maximum and minimum values, they should have given 77 and 80 respectively, instead of 78 and 77. Also, the FIRST_VALUE (78) is greater than the LAST_VALUE (77). Why? Explain in detail.

    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(), which access data from a subsequent row (LEAD) or a previous row (LAG) in the same result set without the use of a self-join. It is very difficult to explain in words, so I attempt a small example to explain these functions. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use it for experiments.

    A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available
    Our book went out of stock within 48 hours of arriving in stock! We got a call from the online store requesting more copies within 12 hours, but we had printed only as many as we had already sent them; there were no extra copies. We finally talked to the printer to get more copies, but due to festivals and holidays the copies could not be shipped to the online retailer for two days. We knew for sure that they were going to be out of the book for 48 hours. This is the story of how we overcame that situation!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Best practice settings Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting effect parameters in XNA. Or, in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios:

    - Each time Apply() is called, all effect parameters are transferred to the GPU, and therefore it has no real influence how often I set a parameter.
    - Each time Apply() is called, only the parameters that were set since the last Apply() are transferred, even if the value did not change, so Set operations that don't actually set a new value should be avoided.
    - Each time Apply() is called, only the parameters whose values actually changed are transferred, so caching Set operations is useless.
    - This whole question is moot because none of the mentioned ways has any noteworthy impact on game performance.

    So the final question: is it useful to implement some caching of Set operations, like this?

        private Matrix _world;
        public Matrix World
        {
            get { return _world; }
            set
            {
                if (value == _world) return; // skip redundant sets
                _effect.Parameters["xWorld"].SetValue(value);
                _world = value;
            }
        }

    Thanking you in anticipation

    Read the article

  • Building TrueCrypt on Ubuntu 13.10

    - by linuxubuntu
    With the whole NSA thing, people tried to rebuild binaries identical to the ones truecrypt.org provides, but didn't succeed. So some think the binaries might be compiled with back-doors which are not in the source code. So how do I compile TrueCrypt on the latest Ubuntu version (I'm using Ubuntu GNOME, but that shouldn't matter)? I tried some tutorials for previous Ubuntu versions, but they seem not to work any more.

    Edit: https://madiba.encs.concordia.ca/~x_decarn/truecrypt-binaries-analysis/ Now you might think "OK, we don't need to build it ourselves", but: to build it, he used closed-source software, and there are proofs of concept where a compromised compiler still puts back-doors into the binary, i.e. 1. the source contains no back-doors, 2. the binary is identical to the reference binary, 3. the binary still contains back-doors.

    Read the article

  • array and array_view from amp.h

    - by Daniel Moth
    This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it!

    Overview

    The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T, of N dimensions, specified at compile time (and you can later access the number of dimensions via the rank property). If N is not specified, it is assumed to be 1 (i.e. the single-dimensional case). They are rectangular (not jagged).

    The difference between them is that array is a container of data, whereas array_view is a wrapper over a container of data. So in that respect, array behaves like an STL container, whereas the closest thing an array_view behaves like is an STL iterator (albeit with random access and allowing you to view more than one element at a time!).

    The data in the array (whether provided at creation time or added later) resides on an accelerator (which is specified at creation time either explicitly by the developer, or set to the default accelerator at creation time by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or another accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements which differ by one in the least significant dimension of the array_view are adjacent in memory.

    array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value (into the lambda you pass to the parallel_for_each call).

    Creating array and array_view objects and relevant properties

    You can create array_view objects from other array_view objects of the same rank and element type (shallow copy, also possible via the assignment operator) so they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type, e.g.

      array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3

    Note: Unlike the constructors above, which can be called anywhere, the ones in the rest of this section can only be called from CPU code.

    You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from other array_view objects, e.g.

      array<float,2> a(b); // b can be another array or array_view of floats with rank=2

    To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Note that instead of an explicit extent object, there are convenience overloads when N<=3, so you can specify 1, 2, or 3 integers (depending on the array's rank) and have the extent created for you under the covers. At any point, you can access the array's extent through the extent property. The exact same thing applies to array_view (extent as constructor parameters, incl. convenience overloads, and the property).

    While passing only an extent object is enough to create an array (it means that the array will be written to later), it is not enough for the array_view case, which must always wrap some other container (on which it relies for storage space and actual content). So in addition to the extent object (which describes the shape you'd like to view/access the data through), to create an array_view from another container (e.g. std::vector) you must pass in the container itself (which must expose .data() and .size() methods, e.g. like std::array does), e.g.

      array_view<int,2> aaa(myExtent, myContainerOfInts);

    Similarly, you can create an array_view from a raw pointer to data plus an extent object.

    Back to the array case: to optionally initialize the array with data, you can pass an iterator pointing to the start of the source container (and optionally one pointing to its end), e.g.

      array<double,1> a(5, myVector.begin(), myVector.end());

    We saw that arrays are bound to an accelerator at creation time, so in case you don't want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property. Note that at the point of initializing an array with data, a synchronous copy of the data takes place to the accelerator, and to copy any data back we'll see that an explicit copy call is required. This does not happen with the array_view, where copying is on demand...

    refresh and synchronize on array_view

    Note that in the previous section on constructors, unlike the array case, there was no overload that accepted an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand copying, which we cover next.

    The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happens:

    - you access the data through the array_view on the CPU side, i.e. using the indexing that we cover below
    - you explicitly call the array_view's synchronize method on the CPU (this also gets called in the array_view's destructor for you)

    Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes unless you call its refresh method.

    Finally, note that if you create an array_view of const T, then the data is copied to the accelerator on demand, but it does not get copied back, e.g.

      array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU

    There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU.

    copy_to, data, and global copy/copy_async functions

    Both array and array_view expose two copy_to overloads that allow copying them to another array, or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads). Also, both array and array_view expose a data method to get a raw pointer to the underlying data of the array or array_view, e.g. float* f = myArr.data();. Note that for array_view, this only works when the rank is equal to 1, due to the data only being contiguous in one dimension, as covered in the overview section.

    Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays, array_views, iterators, etc. Just browse intellisense or amp.h directly for the full set. Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data.

    indexing into array and array_view plus projection

    Reading or writing data elements of an array is only legal when the code executes on the same accelerator the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing, as described next.

    To access (or set the value of) an element, you can index into it by passing an index object to the subscript operator. Furthermore, if the rank is 3 or less, you can use the function ( ) operator to pass integer values instead of having to use an index object, e.g.

      array<float,2> arr(someExtent, someIterator); // or array_view<float,2> arr(someExtent, someContainer);
      index<2> idx(5,4);
      float f1 = arr[idx];
      float f2 = arr(5,4); // f2 == f1
      // and the reverse for assigning, e.g.
      arr(idx[0], 7) = 6.9;

    Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator, which results in a projection of the data: (for both array and array_view) you get back an array_view of rank N-1 (or, if the rank was 1, just the element at that location).

    Not Covered

    In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog.

    Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • How to integrate game logic in game engines

    - by MahanGM
    Recently I've been working on a 2D game engine example in .NET with C#. My main problem is that I can't figure out how I should include the game logic within the game. Currently I have a base engine, which is a set of classes running sub-systems like Render, Sound, Input and Core functionality. There is an editor which helps the user add resources, build levels, write scripts and other stuff. I came up with the idea of using Reflection and CSharpCodeProvider from my editor to compile the written code; this way I can get an executable of my product too. This works quite well, but I would like to know what the usual solution and architecture for this is. My engine targets 2D platformers. The scripting language is C# right now, because I haven't found any other embeddable language I can use for now. The game needs compilation, and CSharpCodeProvider is the only way for me to do it in the meantime.
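
    For reference, a minimal sketch of the Reflection + CSharpCodeProvider idea described above, assuming the engine exposes a script interface for compiled code to implement. The IScript interface and the error handling here are illustrative assumptions, not a definitive design:

        // Compile designer-written C# at runtime and instantiate its script type.
        using System;
        using System.CodeDom.Compiler;
        using System.Reflection;
        using Microsoft.CSharp;

        public interface IScript
        {
            void OnUpdate(); // hypothetical hook the engine calls each frame
        }

        public static class ScriptCompiler
        {
            public static IScript Compile(string source)
            {
                using (var provider = new CSharpCodeProvider())
                {
                    var options = new CompilerParameters
                    {
                        GenerateInMemory = true, // keep the assembly off disk while testing in the editor
                        GenerateExecutable = false
                    };
                    options.ReferencedAssemblies.Add("System.dll");
                    // Reference the engine itself so scripts can see IScript and engine types.
                    options.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().Location);

                    CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                    if (results.Errors.HasErrors)
                        throw new InvalidOperationException(results.Errors[0].ToString());

                    // Instantiate the first type that implements IScript.
                    foreach (Type type in results.CompiledAssembly.GetTypes())
                        if (typeof(IScript).IsAssignableFrom(type) && !type.IsInterface)
                            return (IScript)Activator.CreateInstance(type);

                    throw new InvalidOperationException("No IScript implementation found.");
                }
            }
        }

    For a shipped build, the same provider can write the assembly to disk instead (GenerateExecutable = true with an OutputAssembly path), which matches the "get an executable of my product" goal above.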

    Read the article

  • Installing Disastry's PGP 2.6.3ia multi06 on 12.04 LTS

    - by user291787
    How can I install Disastry's version of PGP 2.6.3ia-multi06 on Ubuntu 12.04 LTS? His site with the source code is here: http://www.spywarewarrior.com/uiuc/disastry/263multi.htm He already compiled a Unix version of PGP, and it's in the Linux section of his downloads. How can I either copy and install the binary PGP file, or compile it from the source and install it? I have tried several different ways and get no error messages, but when I type pgp -h at the command line, Ubuntu tells me that pgp is not installed. (I already have TrueCrypt 7.1a and GnuPG 1.4.16 installed, but I still like the old PGP I have on Windows.) Thanks! traveler

    Read the article

  • How should Code Review be Carried Out?

    - by Graviton
    My previous question had to do with how to advance code review among developers. Here I am interested in how a code review session should be carried out, so that both the reviewer and the person being reviewed feel comfortable with it. I have done some code review before, but the experience sucked big time. My previous manager would come to us, on an ad hoc basis, and tell us to explain our code to him. Since he wasn't very familiar with the code base, I spent a huge amount of time explaining just the most basic structure of my code to him. This took a long time, and by the time we were done, we were both exhausted. Then he would raise issues with my code. Most of the issues he raised were cosmetic in nature (e.g., don't use region for this code block; change the variable name from xxx to yyy even though the latter makes even less sense; and so on). We did this for a few rounds, and the review sessions didn't bring us much benefit, so we stopped. What do you have to do to make code review a natural, enjoyable, thought-stimulating, bug-fixing and mutual-learning experience?

    Read the article

  • Where to find Hg/Git technical support?

    - by Rook
    Posting this as a kind of favour for a former colleague, so I don't know the exact circumstances, but I'll try to provide as much info as I can... A friend from my old place of employment (a maritime research institute; half government, half commercial funding) has asked me if I could find out who provides commercial technical support for the two major DVCSs of today, Git and Mercurial. They have been using version control for years (Subversion while I was there; I don't know what they're using now, probably the same), and they are now renewing their software licences (they have to submit a plan some time in advance for everything... then it goes "through the system"). Although they will be keeping Subversion as well, they would like to justify introducing a DVCS as an alternative system (most people root for Mercurial since it seems simpler; they are mostly engineers and physicists who are not that interested in checking Git repos for corruption or the finer workings of Git, but I believe either of the two could "pass"), but it has to have a price (which can be zero; no problem there) and some sort of official technical support. It is a pro forma matter, but it has to be specified; most of the people there are using one of the two already, but for it to be official, it has to be written down. So, I'm asking you: do you know where one could go for Git or Mercurial technical support (commercial is fine)? Technical forums and the like are out of the question. It has to work on the principle: I have a problem; I post a question with the details; I get an answer in a specified time. The answer can be "we cannot do that", but it has to be an official answer given in an agreed time. I'm sure by now most of you understand what I'm asking, but if not, post a comment or similar. Also, if you can think of any reasons that would justify introducing Git/Hg from a technical and administrative viewpoint, feel free to write them down as well.

    Read the article

  • Ubuntu 12.04 LTS, Dell D410 output to External Monitor/TV with docking

    - by Gck
    I am not able to see video output on my external monitor/TV after installing 12.04 LTS. I had the same issue with the 11.x versions. The issue does not happen on 10.04 LTS. This is my setup: a Dell Latitude D410 with the Intel Mobile 915GM/910GML Express controller, using the Dell docking station, connecting to my 42-inch TV with a DVI (from the dock) to HDMI (on the TV) cable. If I close the lid, I am sometimes able to see the video output on the TV, but the resolution makes the fonts so small I cannot even read them, and when I click the Displays option to change the resolution, the output goes off completely and is shown on the laptop screen instead. One time I got the output on both screens, but I have not been able to replicate that scenario even after trying the Fn+F8 key option on my laptop. Some time back, when I ran version 11.x of Ubuntu, I was able to get the output on both screens (basically mirrored output) with a large-font resolution on my TV using the monitor test tool as mentioned here: http://www.bigfatostrich.com/2011/05/solved-ubuntu-11-04-broken-external-display/ The problem with this approach is that I have to do it every time I restart the machine, and moreover the video output is present on both screens (mirrored), which is of no use. As I stated before, this does not happen with 10.04 LTS: there, my laptop screen turns off automatically and the output is shown on the TV/external display. Can anyone share any options I can try? How can we achieve the 10.04 LTS behaviour with respect to the external display on 12.04 LTS? Thank you very much for your help in advance.

    Read the article

  • Are there any resources on how to identify problems that could best be solved with templates?

    - by sap
    I decided to improve my knowledge of template metaprogramming. I know the syntax and the rules, and I have been playing with countless examples from online resources. I understand how powerful templates can be and how much compile-time optimization they can provide, but I still can't "think in templates": I can't seem to tell by myself whether a certain problem is best solved with templates, and if it is, how to adapt that problem to templates. Is there some kind of online resource or book that teaches how to identify problems that are best solved with templates, and how to adapt them?

    Read the article

  • Optimizing mathematics on arrays of floats in Ada 95 with GNAT

    - by mat_geek
    Consider the code below. It is supposed to process data at a fixed rate, in one-second batches; it is part of an overall system and can't take up too much time. When running over 100 lots of one second's worth of data, the program spends 35 seconds (or 35%) executing this function in a loop. The test loop is timed specifically with Ada.Real_Time. The data is pregenerated, so the majority of the execution time is definitely in this loop. How do I improve the code to get the processing time down to a minimum? The code will be running on an Intel Pentium-M, which is a P3 with SSE2.

        package FF is new Ada.Numerics.Generic_Elementary_Functions (Float);

        N : constant Integer := 820;
        type A is array (1 .. N) of Float;
        type A3 is array (1 .. 3) of A;

        procedure F (state : in out A3; result : out A3; l : in A; r : in A) is
           s : Float;
           t : Float;
        begin
           for i in 1 .. N loop
              t := l(i) + r(i);
              t := t / 2.0;
              state(1)(i) := t;
              state(2)(i) := t * 0.25 + state(2)(i) * 0.75;
              state(3)(i) := t * 1.0 / 64.0 + state(2)(i) * 63.0 / 64.0;
              for r in 1 .. 3 loop
                 s := state(r)(i);
                 t := FF."**"(s, 6.0) + 14.0;
                 if t > MAX then
                    t := MAX;
                 elsif t < MIN then
                    t := MIN;
                 end if;
                 result(r)(i) := FF.Log(t, 2.0);
              end loop;
           end loop;
        end;

    Pseudocode for testing:

        create two arrays of 80 random A3 arrays, called ls and rs
        init the state and result A3 arrays
        record the real time now, called last
        for i in 1 .. 100 loop
           for j in 1 .. 80 loop
              F(state, result, ls(j), rs(j));
           end loop;
        end loop;
        record the real time now, called curr
        output the duration between curr and last

    Read the article

  • Installing netcdf c++ interface on ubuntu 12.04.1 LTS

    - by iluvatar
    I am using code which employs the modern netcdf C++ interface (the netcdf namespace; the include file is called just netcdf, without .h or similar; the NcFile class, etc.) and have just switched to Ubuntu 12.04.1 LTS. I installed netcdf and libnetcdf6 with apt-get, but I still get the "old" headers in /usr/local/include (netcdf.h, netcdfcpp.h, etc.). In Ubuntu, the netcdf library version is 4.1.1, while on my own computer with Mac OS X (where I have the right netcdf include file) the version is 4.2.1.1. I cannot modify the source code I am using. I would like to know if there is a way to upgrade the netcdf library on Ubuntu to support the modern C++ interface, or, if I have to compile it manually, whether you think using src2pkg is a good idea. This is my first experience with Ubuntu. Thanks in advance.

    Read the article

  • Scripting for a C#, multiplayer game

    - by Vaughan Hilts
    I have a multiplayer game written in C#. We've recently been creating a lot of content, and we have been looking for a way to give our entities customization logic that the designers can hook into. I took a look at this post. With something like this in mind (using C# as the scripting language), I have a few questions. 1) Would one embed the script itself in the entity object before persisting it to disk? Is this okay? 2) Would I then compile once per script? It seems like a lot of overhead to store all these compiled assemblies just to execute them. Any general advice on how to do things is welcome, too. These entities are generated on the fly inside the editor and could be composed of a lot of different things.
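
    One common pattern that speaks to both questions, sketched below under stated assumptions: persist the script source text with the entity (not the compiled binary), and compile each distinct script only once, caching the assembly by a hash of its source so the many entities sharing a script share a single assembly. The class and method names here are illustrative:

        // Compile each distinct script once; entities carry only script text.
        using System;
        using System.CodeDom.Compiler;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Security.Cryptography;
        using System.Text;
        using Microsoft.CSharp;

        public static class ScriptCache
        {
            private static readonly Dictionary<string, Assembly> cache =
                new Dictionary<string, Assembly>();

            public static Assembly GetOrCompile(string source)
            {
                string key = Hash(source);
                Assembly assembly;
                if (cache.TryGetValue(key, out assembly))
                    return assembly; // already compiled: reuse the same assembly

                using (var provider = new CSharpCodeProvider())
                {
                    var options = new CompilerParameters { GenerateInMemory = true };
                    CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                    if (results.Errors.HasErrors)
                        throw new InvalidOperationException(results.Errors[0].ToString());
                    assembly = results.CompiledAssembly;
                }

                cache[key] = assembly;
                return assembly;
            }

            private static string Hash(string source)
            {
                // Key the cache on the script text itself.
                using (var sha1 = SHA1.Create())
                    return Convert.ToBase64String(sha1.ComputeHash(Encoding.UTF8.GetBytes(source)));
            }
        }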

    Read the article

  • How to build Gantt chart from a set of Redmine tickets without filling dates in all of them?

    - by Alexander Gladysh
    Redmine 1.1.1
    I've created a set of tickets for a new project. In each issue I filled in the Subject, Description and Estimated time fields. I also filled in the blocks/blocked-by dependencies in Related issues. But the Gantt chart for this project is empty (that is, it contains all the tasks, but does not contain any "bars" for them). I need a Gantt chart (or any other visual representation) to show to other project members, and I'd hate to type all that information again into OpenProj. Is there a way to get a serviceable Gantt chart out of Redmine?

    Update: In the answers below I read that to get a working Gantt chart I have to input the start date and due date manually for each issue. I believe this information should be inferred automatically from the start date of the first ticket (first in the dependency order), the estimated time of each ticket, the dependency graph, resource assignments and the working-hours calendar, just as happens in any minimally sane Gantt chart project management tool. Entering this information by hand, and keeping it up to date manually as the project evolves, is an insane waste of time. Is there a way to generate a Gantt chart from a set of Redmine tickets without filling in all this information manually? (Solutions involving data export + import into a sane tool, or involving existing plugins, are perfectly acceptable.)

    Read the article

  • Choosing between PHP and Java

    - by user996459
    I've recently started university, studying Computing and IT. My uni focuses on Java. My study will consist of mathematics, 'boring' IT-related stuff and several Java units such as:

    - Software development with Java
    - Object-oriented Java programming
    - Relational databases: theory and practice (using Java)
    - Developing concurrent distributed systems (using Java)
    - Software engineering with objects (using Java)

    I'm trying to determine whether I should focus on Java and self-study it in my free time, so that I can actually learn it and become a competent Java programmer by the time I graduate, or only do enough Java to get the degree and self-study PHP and related web technologies in my free time instead. The job market in my area appears to be balanced between the two, salary- and availability-wise. Regardless of which path I take, getting a job should not be a problem; however, Java does seem to pay almost insignificantly more. In terms of my interests and career expectations, I don't have anything specific planned. I very much enjoy writing code, but I don't really care what kind. So far I have equally enjoyed writing C, AutoIt, VB.NET, PHP and even Java. Basically, I'm happy as long as I get to type in code (be it low-level programming or web back-end scripting). So the question really is: would my uni and its Java focus profit me should I choose PHP? Or should I buy what my university is selling and stick to Java like a fly sticks to poop...? Apologies for the cryptic writing, still learning English.

    Read the article

  • I don't understand how...

    - by Hristo
    How can something print 3 times when it only goes through the printing code twice? I'm coding in C, and the code is in a SIGCHLD signal handler I created:

        void chld_signalHandler() {
            int pidadf = (int) getpid();
            printf("pidafdfaddf: %d\n", pidadf);
            while (1) {
                int termChildPID = waitpid(-1, NULL, WNOHANG);
                if (termChildPID == 0 || termChildPID == -1) {
                    break;
                }
                dll_node_t *temp = head;
                while (temp != NULL) {
                    printf("stuff\n");
                    if (temp->pid == termChildPID && temp->type == WORK) {
                        printf("inside if\n");
                        // read memory mapped file b/w WORKER and MAIN
                        // get statistics and write results to pipe
                        char resultString[256];
                        // printing TIME
                        int i;
                        for (i = 0; i < 24; i++) {
                            sprintf(resultString, "TIME; %d ; %d ; %d ; %s\n", i, 1, 2, temp->stats->mboxFileName);
                            fwrite(resultString, strlen(resultString), 1, pipeFD);
                        }
                        remove_node(temp);
                        break;
                    }
                    temp = temp->next;
                }
                printf("done printing from sigchld \n");
            }
            return;
        }

    The output for my MAIN process is this:

        MAIN PROCESS 16214 created WORKER PROCESS 16220 for file class.sp10.cs241.mbox
        pidafdfaddf: 16214
        stuff
        stuff
        inside if
        done printing from sigchld
        MAIN PROCESS 16214 created WORKER PROCESS 16221 for file class.sp10.cs225.mbox
        pidafdfaddf: 16214
        stuff
        stuff
        inside if
        done printing from sigchld

    and the output for the MONITOR process is this:

        MONITOR: pipe is open for reading
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs241.mbox
        MONITOR: end of readpipe

    (I've taken out repeating lines so I don't take up so much space.)

    Thanks, Hristo

    Read the article

  • Auto Save and Auto Load Game onto the Device's Storage Concept Question

    - by David Dimalanta
    I'm trying to make a simple app that will test saving and loading game state. Is it a good idea to make an app that auto-saves and auto-loads the game every time, without prompting, so that a new player can open the app one day and simply continue it the next? I tried making a sprite that moves, starting at the center. When I close and re-open the app, the sprite goes back to the center instead of the last coordinates where the sprite landed (e.g. at the top). The sequence of saving and loading I want goes like this:

    1. I open the app.
    2. The sprite starts at the center. The app displays the sprite's coordinates plus the number of times the sprite has moved.
    3. I exit the app, which automatically saves the game without notice.
    4. Finally, when I re-open it, it automatically loads the game, retaining the number of times the sprite has moved, its coordinates, and the area where the sprite landed.

    These steps are similar (apart from being a sprite-movement test app) to the sequence of saving and loading the game's level and record in Jewel Stackers for Android. And, by default, if there is no SD card in a tablet or phone that runs Android, does it automatically save/load to the internal storage or to the APK file itself? Is it also useful to use an auto-save and auto-load feature for protecting and fetching information (i.e. fastest time, the last coordinates where the sprite landed, etc.)?
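
    Framework aside, the pattern described above is just: serialize the few values that matter in the exit/pause hook, and read them back at startup, falling back to defaults on the first run. A minimal sketch in C#; the fields, file name, and Save/Load split are assumptions for illustration (on Android the file would go in the app's internal storage, not the APK, which is read-only):

        // Sketch of auto save/load: persist sprite state on exit, restore at startup.
        using System;
        using System.IO;

        public class SpriteState
        {
            public float X, Y;   // last position of the sprite
            public int Moves;    // how many times it has moved

            private const string SaveFile = "sprite.sav"; // illustrative path

            // Call this from the app's pause/exit hook.
            public void Save()
            {
                using (var w = new BinaryWriter(File.Open(SaveFile, FileMode.Create)))
                {
                    w.Write(X);
                    w.Write(Y);
                    w.Write(Moves);
                }
            }

            // Call this at startup; fall back to the center on the first run.
            public static SpriteState Load(float centerX, float centerY)
            {
                if (!File.Exists(SaveFile))
                    return new SpriteState { X = centerX, Y = centerY, Moves = 0 };

                using (var r = new BinaryReader(File.OpenRead(SaveFile)))
                {
                    return new SpriteState
                    {
                        X = r.ReadSingle(),
                        Y = r.ReadSingle(),
                        Moves = r.ReadInt32()
                    };
                }
            }
        }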

    Read the article

  • bash-completion for xelatex

    - by andreas-h
    I'm on Ubuntu 12.04. When using bash as my shell, I can just type latex my_do <TAB> to compile a file my_document.tex, as *bash_completion* does the auto-completion. However, this auto-completion does not work for the xelatex executable, so I would like to add the same auto-completion for xelatex as exists for latex. I found that the latex auto-completion comes from the file /etc/bash_completion, where I found the line

        complete -f -X '!*.@(?(la)tex|texi|dtx|ins|ltx)' tex latex slitex jadetex pdfjadetex pdftex pdflatex texi2dvi

    Now I could of course just add xelatex to that line and everything would be fine. However, I'm wondering if I could instead put a file in /etc/bash_completion.d, as this would leave the system file untouched. Unfortunately, I'm at a complete loss about the syntax, but maybe someone can help me here?

    Read the article

  • Has anyone else read "Programming video games for the Evil Genius"

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If anyone has read or is familiar with this book, I am wondering whether they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java and Python and got through okay. I've been going through this book for about a month now (SLOWLY). All I have to do is type the code exactly as in the book, BUT a lot of the code is not clearly explained. I do some research online, but I usually still have trouble answering my questions. Then I found Stack Overflow; it's been a ton of help. Right now I am trying to make a racing game straight out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors, and fixing them or getting someone to help me fix them, than I actually do learning to code. I REALLY want to learn how to do this, and I am ready and willing to put in the time, but I'm not sure whether my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it or not worth it? Should I move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • modifying openssl library code

    - by Nouar Ismail
    I have been asked to check whether it is possible to customize an encryption algorithm that the IPsec protocol uses in Ubuntu; does anyone have any suggestions on this point? I've read that the encryption operations happen in libcrypto in OpenSSL. I tried to compile and install OpenSSL from source, and everything went OK with the installation, but when I check the version installed on the system with "dpkg -s openssl", it doesn't seem to be the version I just installed. Maybe it was installed successfully, but the questions are: would it be the version the system uses for encryption operations? Would it overwrite the old version? And would my changes in the code have any effect? Any help please? Thank you in advance.

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation I often see is that companies with large databases have only one or two very large tables. They want to run DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request on the Connect site about this some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't much like this as a workaround, as clients often have a very large number of objects that they do want to check and only one or two that they don't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • How can I make an MMORPG appeal to casual players?

    - by Philipp
    I believe there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs but simply don't have the time for the endless grinding marathons that are part of the average MMORPG. MMORPGs are all about interaction between players, but when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game, and just stopping them after two hours would be really frustrating. It also creates pressure to use up the daily progress limit every day, because otherwise the player would feel like they are wasting something; that pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?

    Read the article
