Search Results

Search found 51569 results on 2063 pages for 'version number'.

Page 20/2063 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Is there such a thing as a randomly accessible pseudo-random number generator? (preferably open-source)

    - by lucid
    First off, is there such a thing as a random-access random number generator, where you could not only generate random numbers sequentially, as we're all used to (assuming rand100() always generates a value from 0-100): for (int i=0;i<5;i++) print rand100() output: 14 75 36 22 67 but also randomly access any value in that sequence, like: rand100(0) would output 14 as long as you didn't change the seed, rand100(3) would always output 22, rand100(4) would always output 67, and so on... I've actually found an open-source generator algorithm that does this, but you cannot change the seed. I know that pseudorandomness is a complex field; I wouldn't know how to alter it to add that functionality. Is there a seedable random-access random number generator, preferably open source? Or is there a better term for this that I can Google for more information? If not, part 2 of my question would be: is there any reliably random, open-source, conventional seedable pseudo-random number generator that I could port to multiple platforms/languages while retaining a consistent sequence of values on each platform for any given seed?

    Read the article
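
    For the question above, one standard answer is a counter-based (stateless) generator: rather than advancing hidden state, the i-th value is produced by mixing the seed and the index through an integer hash, so sequential and random access give identical results. A minimal sketch using the SplitMix64 finalizer as the mixer; the rand100 interface and seed handling are illustrative, not taken from any particular library:

        MASK64 = (1 << 64) - 1

        def _mix64(x):
            # SplitMix64-style finalizer: a well-tested 64-bit avalanche mix.
            x = (x + 0x9E3779B97F4A7C15) & MASK64
            x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
            x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & MASK64
            return x ^ (x >> 31)

        def rand100(index, seed=12345):
            """Return the index-th pseudo-random value in 0..100 for a given seed."""
            return _mix64(seed ^ _mix64(index)) % 101

        # Sequential use and random access give the same values for the same seed.
        print([rand100(i) for i in range(5)])
        print(rand100(3))   # always the same value, regardless of what was drawn before

    Counter-based generators such as Philox are built on the same idea and are what libraries typically offer when a reproducible, seekable stream is needed.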

  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ... ?

    - by HH
    It takes about a minute to reach the 3000th term on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcut except to calculate everything before the millionth number. How can you quickly calculate the millionth number in the series? Series definition: n_{i+1} = floor(3/2 * n_i) and n_0 = 2. Interestingly, only one site lists the series according to Google: this one. The Bash code below is too slow:

        #!/bin/bash
        function serie {
            # bc emits backslash-newline continuations for very large numbers; tr/sed strip them
            n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' )
            n=$( echo $n/1 | bc )   # dummy floor
        }
        n=2
        nth=1
        while [ true ]; #$nth -lt 500 ];
        do
            serie $n   # n gets its new value through the global variable
            echo $nth $n
            nth=$( echo $nth + 1 | bc )   # n++
        done

    Read the article
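
    For the question above, the bottleneck is not the recurrence but forking bc, tr and sed on every iteration; any language with built-in big integers can run the million steps directly. A sketch in Python (illustrative, not a drop-in replacement for the Bash script): it treats the millionth term as n_1000000 reached from n_0 = 2, which has roughly 176,000 decimal digits, and should finish in minutes on a modern machine rather than the days the bc-per-iteration loop would need.

        import sys

        # Python 3.11+ caps int -> str conversion length; lift it so the result can be printed.
        if hasattr(sys, "set_int_max_str_digits"):
            sys.set_int_max_str_digits(200_000)

        n = 2
        for _ in range(1_000_000):
            n = n * 3 // 2              # n_{i+1} = floor(3/2 * n_i), exact integer arithmetic
        print(len(str(n)), "decimal digits")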

  • How to make a random number generator in MATLAB that is based on percentages?

    - by Ben Fossen
    I am currently using the built-in random number generator. For example, nAsp = randi([512, 768],[1,1]); 512 is the lower bound and 768 is the upper bound; the random number generator chooses a number between these two values. What I want is to have two ranges for nAsp, but I want one of them to get called 25% of the time and the other 75% of the time. Then it gets plugged into the equation. Does anyone have any ideas how to do this, or is there a built-in function in MATLAB already? For example: nAsp = randi([512, 768],[1,1]); gets called 25% of the time, and nAsp = randi([690, 720],[1,1]); gets called 75% of the time.

    Read the article
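
    For the question above, the usual approach is to draw one uniform number and use it to pick the range: in MATLAB, if rand < 0.25, use randi([512 768]), otherwise randi([690 720]). The same pattern as a small Python sketch (range values taken from the question):

        import random

        def nasp():
            # 25% of the time draw from [512, 768], otherwise from [690, 720].
            if random.random() < 0.25:
                return random.randint(512, 768)
            return random.randint(690, 720)

        print([nasp() for _ in range(5)])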

  • New Version: ZFS RAID Calculator v7

    - by uwes
    A new version is available now: ZFS RAID Calculator v7 on the eSTEP portal. The tool calculates key capacity parameters such as the number of vdevs, number of spares, number of data drives, raw RAID capacity (TB), and usable capacity (TiB and TB) according to the different possible RAID types for a given ZS3 configuration. Updates included in v7: added an OpenOffice version compatible with MacOS; included obsolete drives as options for upgrade calculations; simplified the color scheme and tweaked the formulas for better compatibility. The spreadsheet can be downloaded from the eSTEP portal. URL: http://launch.oracle.com/ PIN: eSTEP_2011. The material can be found under the tab eSTEP Download.

    Read the article
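
    The announcement above does not reproduce the formulas, but the basic capacity arithmetic behind such a calculator is simple to sketch. The function below is an illustration only (not taken from the spreadsheet) and assumes RAID-Z1/2/3 spend 1/2/3 drives per vdev on parity, an n-way mirror keeps one drive's worth of data, and ZFS metadata/reservation overhead is ignored:

        # Parity drives consumed per vdev for each layout (assumption: standard ZFS behavior).
        PARITY = {"raidz1": 1, "raidz2": 2, "raidz3": 3, "stripe": 0}

        def capacity(vdevs, drives_per_vdev, drive_tb, layout):
            """Return (raw TB, usable TB, usable TiB), ignoring ZFS metadata/slop overhead."""
            raw_tb = vdevs * drives_per_vdev * drive_tb
            if layout == "mirror":
                data_drives = vdevs                          # one drive's worth of data per mirror vdev
            else:
                data_drives = vdevs * (drives_per_vdev - PARITY[layout])
            usable_tb = data_drives * drive_tb
            usable_tib = usable_tb * 1e12 / 2**40            # drives are sold in TB (10^12); TiB is 2^40 bytes
            return raw_tb, usable_tb, usable_tib

        print(capacity(vdevs=4, drives_per_vdev=11, drive_tb=4, layout="raidz2"))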

  • What "version naming convention" do you use?

    - by rjstelling
    Are different version naming conventions suited to different projects? What do you use and why? Personally, I prefer a build number in hexadecimal (e.g. 11BCF), which should be incremented very regularly, and then for customers a simple 3-digit version number, i.e. 1.1.3.

        1.2.3 (11BCF) <- Build number, should correspond with a revision in source control
        ^ ^ ^
        | | |
        | | +--- Minor bugs, spelling mistakes, etc.
        | +----- Minor features, major bug fixes, etc.
        +------- Major version, UX changes, file format changes, etc.

    Read the article
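
    A tiny sketch of the convention described above: a three-part customer-facing version plus a hexadecimal build number. The helper name is made up for illustration:

        def format_version(major, minor, patch, build):
            # Customer-facing version plus hex build number, e.g. "1.2.3 (11BCF)"
            return f"{major}.{minor}.{patch} ({build:X})"

        print(format_version(1, 2, 3, 0x11BCF))   # -> 1.2.3 (11BCF)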

  • Possibility Program for number of pieces

    - by Brad
    I would like to put a program together to calculate the number of 60' pieces that would be needed from a list of shorter pieces. For example, I sell rebar cut to length from our standard length of 60'-0". Now the lengths the customer requires are as follows: 343 pc @ 12.5', 35 pc @ 13', 10 pc @ 15', 63 pc @ 15.5', ... There are 56 total lengths ranging from 12.5' to 30.58'. The idea is to limit the amount of waste from each 60' piece. The input from the user would be: the number of different lengths, the length of stock to cut from, and the count required for each length. The result would be the number of prime (stock) pieces needed to fulfill the order. What well-known algorithms exist that could help me solve this problem?

    Read the article
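
    The question above is the classic cutting stock problem, a relative of bin packing; exact answers come from integer programming with column generation, but a first-fit decreasing heuristic already gives a good estimate of how many 60' bars an order needs. A minimal sketch (the demand figures are the ones quoted in the question):

        def bars_needed(stock_len, demands):
            """demands: list of (length, count). Greedy first-fit decreasing packing."""
            cuts = sorted((l for l, c in demands for _ in range(c)), reverse=True)
            remaining = []                              # leftover length on each opened bar
            for cut in cuts:
                for i, rem in enumerate(remaining):
                    if rem >= cut:
                        remaining[i] = rem - cut        # place cut on the first bar it fits
                        break
                else:
                    remaining.append(stock_len - cut)   # open a new 60' bar
            return len(remaining)

        demand = [(12.5, 343), (13.0, 35), (15.0, 10), (15.5, 63)]
        print(bars_needed(60.0, demand))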

  • Trying to find a duplicate version of PHP on my system. Where is it?

    - by macek
    I am having a helluva time trying to track down which PHP binary my Apache is using. locate bin/php returns this list: /usr/bin/php /usr/bin/php-cgi /usr/bin/php-config /usr/bin/phpize /usr/local/bin/php /usr/local/bin/php-cgi /usr/local/bin/php-config /usr/local/bin/php-shell.sh /usr/local/bin/phpize. Let's see the versions: /usr/bin/php -v shows 5.3.2, and /usr/local/bin/php -v shows 5.3.2. What about which? [macek ~]$ which php returns /usr/bin/php. The problem: phpinfo(), when executed by Apache, shows 5.2.11. Where is this phantom 5.2.11 on my system?

    Read the article
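
    A likely explanation for the question above: the version phpinfo() reports under Apache comes from whichever PHP Apache itself loads (a mod_php libphp*.so named in a LoadModule directive, or a separate CGI/FastCGI binary), not from the CLI php found first on PATH. A hedged sketch that scans common Apache config locations for the LoadModule line; the paths are assumptions and vary by distribution:

        import glob
        import re

        # Candidate Apache config locations; adjust for your system (assumption).
        patterns = ["/etc/httpd/conf/httpd.conf", "/etc/httpd/conf.d/*.conf",
                    "/etc/apache2/*.conf", "/usr/local/apache2/conf/*.conf"]

        for pattern in patterns:
            for path in glob.glob(pattern):
                with open(path, errors="ignore") as fh:
                    for line in fh:
                        if re.match(r"\s*LoadModule\s+php", line):
                            print(path + ": " + line.strip())   # this .so is likely the phantom 5.2.11

    The "Loaded Configuration File" row in the phpinfo() output usually points at the same install prefix.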

  • What is a good set-and-forget file version tracking / backup application for Windows?

    - by tomwoods
    When I make changes to files, I keep finding myself "saving as" and adding the current date to the file name. It slows me down, and it creates a bunch of files that clutter my folder. I would prefer to be able to right-click on a file in File Explorer and select an option to save different versions of this file, so that each time I save it, a copy is stored somewhere that I can access in the future if necessary. Is there any application that achieves this?

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background, then downsample recursively, abusing the hardware linear interpolation between textures until reaching a reasonably small resolution. This leads to fragments whose greyscale level depends on the number of white pixels within their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy, and whether it is even possible with non-power-of-two textures. The problem with these approaches is that the pipeline gets stalled, which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension: Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints at a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it, so that there may be a hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX; not sure if something similar is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article
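
    On the accuracy question raised above: box-filter downsampling a white-on-black rendering to 1x1 computes exactly the fraction of covered pixels when the arithmetic is exact, so the practical limit is the precision of the intermediate render targets (an 8-bit channel quantizes every averaging step to 256 levels). A small CPU-side sketch of the repeated 2x2 averaging, with illustrative names, that can be compared against the exact count:

        import random

        SIZE = 64                                   # power-of-two resolution for clean halving
        img = [[float(random.random() < 0.3) for _ in range(SIZE)] for _ in range(SIZE)]
        exact = sum(map(sum, img))                  # true number of "white" fragments

        while len(img) > 1:                         # repeated 2x2 box-filter downsampling
            half = len(img) // 2
            img = [[(img[2*y][2*x] + img[2*y][2*x+1] +
                     img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
                    for x in range(half)] for y in range(half)]

        estimated = img[0][0] * SIZE * SIZE         # average * pixel count
        print(exact, estimated)                     # identical in double precision; GPU precision differs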

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). OK, so all I need to do is to reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias, which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions and everything worked fine, except that this was ugly. After playing around a little bit to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed what to do by attributes in a declarative way, there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden away to find matching TestFixtures and Tests:

        public bool CanBuildFrom(Type type)
        {
            if (!(!type.IsAbstract || type.IsSealed))
            {
                return false;
            }
            return Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
                   Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true) ||
                   Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true) ||
                   Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true);
        }

    That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely by not using the concrete type and simply checking for the caught exception type by its string name. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all its consequences. Type equality is lost between versions, so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to isolate your business logic from the concrete versions. This is of course more work, but as NUnit shows it can be easy. Simplicity is therefore not just a nice thing to have but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model more complex than that will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

    Versioning Models

    1. Bug Fixing and New Isolated Features. When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface. Microsoft did this with the .NET Framework 3.0, which left the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundation. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly: each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0 and 3.5 did not have such obligations since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers' code with every release.

    2. New Breaking Features. There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways which caused subtly different behavior, although the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0, 3.0 and 3.5 applications will not automatically use the .NET 4.0 runtime when installed but will keep using the "old" one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers and 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running twice.

    3. New Non-Breaking Features. It really depends where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly, which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 10 because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

    Versioning software is hard and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.

    Read the article
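
    The NUnit trick described above, matching attributes by their full name string instead of by type identity, can be illustrated outside .NET as well. In the hypothetical Python sketch below, a runner discovers tests by a marker's qualified name, so markers produced by two different "versions" of a framework are both recognized:

        # Two "versions" of a test framework define their own marker decorator.
        def make_marker(version):
            def test(fn):
                fn._marker_name = "demo.framework.TestAttribute"   # the stable contract is the name
                fn._marker_version = version
                return fn
            return test

        test_v1 = make_marker("1.0")
        test_v2 = make_marker("2.0")

        class OldSuite:
            @test_v1
            def test_addition(self):
                assert 1 + 1 == 2

        class NewSuite:
            @test_v2
            def test_concat(self):
                assert "a" + "b" == "ab"

        def run(suite_cls):
            # The runner matches by the marker's name, not by which version created it.
            suite = suite_cls()
            for name in dir(suite):
                member = getattr(suite, name)
                if getattr(member, "_marker_name", None) == "demo.framework.TestAttribute":
                    member()
                    print(suite_cls.__name__ + "." + name, "passed")

        run(OldSuite)
        run(NewSuite)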

  • What is an effective git process for managing our central code library?

    - by Mathew Byrne
    Quick background: we're a small web agency (3-6 developers at any one time) developing small to medium-sized Symfony 1.4 sites. We've used git for a year now, but most of our developers have preferred Subversion and aren't used to a distributed model. For the past 6 months we've put a lot of development time into a central Symfony plugin that powers our custom CMS. This plugin includes a number of features, helpers, base classes etc. that we use to build custom functionality. This plugin is stored in git, but branches wildly as the plugin is used in various products and is pulled from/pushed to constantly. The repository is usually used as a submodule within a major project. The problems we're starting to see now are a large number of merge conflicts and backwards-incompatible changes brought into the repository by developers adding custom functionality in the context of their own project. I've read Vincent Driessen's excellent git branching model and successfully used it for projects in the past, but it doesn't seem to apply well to our particular situation; we have a number of projects concurrently using the same core plugin while developing new features for it. What we need is a strategy that provides the following: a methodology for developing major features within the code repository; a way of migrating those features into other projects; a way of versioning the core repository, and of tracking which version each major project uses; a plan for migrating bug fixes back to older versions; and a cleaner history that makes it easier to see where changes have come from. Any suggestions or discussion would be greatly appreciated.

    Read the article

  • How do you achieve a numeric versioning scheme with Git?

    - by Erlend
    My organization is considering moving from SVN to Git. One argument against moving is as follows: how do we do versioning? We have an SDK distribution based on the NetBeans Platform. As the SVN revisions are simple numbers, we can use them to extend the version numbers of our plugins and SDK builds. How do we handle this when we move to Git? Possible solutions: using the build number from Hudson (problem: you have to check Hudson to correlate that to an actual Git revision), or manually upping the version for nightly and stable builds (problem: learning curve, human error). If someone else has encountered a similar problem and solved it, we'd love to hear how.

    Read the article
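
    For the question above, a common substitute for SVN revision numbers is a commit count plus a short hash derived directly from Git. The sketch below shells out to two standard Git commands (run it inside a repository); the version format itself is just an example:

        import subprocess

        def git(*args):
            return subprocess.check_output(("git",) + args, text=True).strip()

        def build_version(base="1.4"):
            count = git("rev-list", "--count", "HEAD")   # monotonic, SVN-revision-like number
            sha = git("rev-parse", "--short", "HEAD")    # disambiguates rewritten history
            return "{}.{} ({})".format(base, count, sha)

        print(build_version())    # e.g. 1.4.1234 (a1b2c3d)

    When releases are tagged, git describe --tags produces a similar human-readable value (nearest tag, commits since that tag, abbreviated hash).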

  • How should I set up UDK with Git and CruiseControl?

    - by Martin Sojka
    For a new project in UDK, I'd like to set up a Git repository for version control and a CruiseControl.NET-based continuous integration solution. The good news is that the first part seems easy enough, and CruiseControl.NET can work off Git repositories. The bad news is that, according to my searches, nobody has ever tried to do this. Ideally, I'm looking for a step-by-step guide on how to set up such a development environment assuming more than one development computer, one central repository for the "master" branch, and one machine for building and packaging the binaries via CruiseControl.NET. Related: Version control system for game development with UDK? Options for UDK and version control repositories? CruiseControl.NET and Git

    Read the article

  • How does a cryptographically secure random number generator work?

    - by Byron Whitlock
    I understand how standard random number generators work. But when working with cryptography, the random numbers really have to be random. I know there are instruments that read cosmic white noise to help generate secure hashes, but your standard PC doesn't have this. How does a cryptographically secure random number generator get its values with no repeatable patterns?

    Read the article
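
    To complement the question above: a CSPRNG typically gathers entropy from hardware events into a pool and stretches it with a cryptographic primitive (modern systems use ChaCha20- or AES-based generators behind /dev/urandom and the Windows CNG APIs), so its output is unpredictable even though the algorithm itself is deterministic between reseeds. Application code normally just consumes the OS CSPRNG; in Python that is the secrets module or os.urandom:

        import os
        import secrets

        key = secrets.token_bytes(32)        # 256 bits straight from the OS CSPRNG
        print(key.hex())

        pin = secrets.randbelow(1_000_000)   # uniform integer in [0, 999999], no modulo bias
        print("{:06d}".format(pin))

        print(os.urandom(16).hex())          # lower-level equivalent of token_bytes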

  • How to best implement Version Control for Web Development?

    - by Adam Taylor
    Version control systems are obviously important in development projects, but their use in web development projects appears to be more complex, what with the requirement of having a web server to run all but the simplest of web applications. With that in mind, I have looked around and discovered a few different methods of using version control in web development projects: Provide each developer with a virtual machine which is a replica of the development server and have the developer run their working copy of the application in the virtual machine. Have each developer use a subdomain on the development server, e.g. john.project.com, and check out their working copy of the app to the directories the subdomain points to. Use the version control system to check out code, make a change, commit the code and then check it on the development server (which points to the head of the repository). I can see a drawback of 1 being the added time required to create the virtual machines and ensure that the virtual machines are kept in sync with the development server (also the need(?) to continuously change the developer's hosts file to point at the virtual machine rather than the development server). I can see 2 possibly being a problem if absolute URLs are used within the site, unless there is an easy way to update the configuration to use the new subdomains as well. 3 is the easiest to set up but is rather primitive, and it will presumably become quite tedious for a developer to keep checking in the code after every change. How have the users of Stack Overflow used version control with web development projects, and which method/workflow was most effective? Please also include extra methods I haven't thought of / read about.

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular, I'm using git, but I'm curious about how to deal with this for all version control systems. I'm developing a web application on a development server. I have defined the absolute path name to the web application (not the document root) in two places. On the production server, this path is different. I'm confused about how to deal with this. I could either: reconfigure the development server to share the same path as production, or edit the two occurrences each time production is updated. I don't like #1 because I'd rather keep the application flexible for any future changes. I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update. What is the best way to handle this? I thought of: Using custom keywords and variable expansion (such as setting the property $PATH$ in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit. Using post-update and pre-commit hooks. Possibly the likely solution for git, but every time I looked at the status, it would report the two files as being changed. Not really clean. Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers. Might as well just have the same path to begin with. Is there an easy way to deal with this? Am I overthinking it?

    Read the article
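
    One common way out of the dilemma above is option 3 combined with an environment variable, so nothing machine-specific needs to live in version control and the config file can sit wherever each server prefers. A small sketch; the variable name and default path are illustrative:

        import os

        # Each server exports APP_BASE_PATH (e.g. in the web server vhost or in a local
        # .env file listed in .gitignore); the code falls back to a sane default.
        APP_BASE_PATH = os.environ.get("APP_BASE_PATH", "/var/www/myapp")

        def template_path(name):
            return os.path.join(APP_BASE_PATH, "templates", name)

        print(template_path("index.html"))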

  • How to detect browser type and version from ADF Faces

    - by Frank Nimphius
    Sometimes ADF applications need to know about the user's browser type and version. For this, assuming you need this information in Java, you can use the Trinidad RequestContext object. You could also use the AdfFacesContext object for the same, but since the ADF Faces Agent class is marked as deprecated, using the equivalent Trinidad classes is the better choice. The source code below prints the user browser information to the Oracle JDeveloper message window.

        import org.apache.myfaces.trinidad.context.Agent;
        import org.apache.myfaces.trinidad.context.RequestContext;
        …
        RequestContext requestCtx = RequestContext.getCurrentInstance();
        Agent agent = requestCtx.getAgent();
        String version = agent.getAgentVersion();
        String browser = agent.getAgentName();
        String platform = agent.getPlatformName();
        String platformVersion = agent.getPlatformVersion();
        System.out.println("==================");
        System.out.println("Your browser information: ");
        System.out.println("Browser: "+browser);
        System.out.println("Browser Version : "+version);
        System.out.println("Browser Platform: "+platform);
        System.out.println("Browser Platform Version: "+platformVersion);
        System.out.println("==================");

    Read the article

  • JSR 355 Final Release, and moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 yes votes and 1 abstention (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC. JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9. The transition to a merged EC will happen after the 2012 EC Elections, as defined in Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules. In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process: The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013. Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC. For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one year term. All members elected in 2014 and subsequently will serve a two-year term. For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety. <end of Appendix> Also of note: the materials and minutes from the July EC meeting and the June EC meeting are now available; following the July EC meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion; the audio from that portion is now posted online. For Spec Leads there is also the recording of the EG Nominations call.

    Read the article
