Search Results

Search found 20275 results on 811 pages for 'general performance'.

Page 538 of 811

  • What is the R Language?

    - by TATWORTH
    I encountered the R Language recently in O'Reilly books, and while I knew from the context that it was a language for dealing with statistics, a web search for the support web site was at first futile. However, I have now located the web site: it is at http://www.r-project.org/. R is a free language available for a number of platforms including Windows, and CRAN mirrors are available at a number of locations worldwide. Here is the official description: "R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS."

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
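
    A minimal Python sketch of the "sync script" option described above, using sqlite3 stand-ins for the central MySQL master and one per-site database; the products/site_products tables and column names are invented for illustration, not taken from the actual system:

        import sqlite3

        def sync_site(master_path, site_path, site_id):
            """Pull catalog rows that changed on the master since this site's last sync."""
            master = sqlite3.connect(master_path)
            site = sqlite3.connect(site_path)
            site.execute("""CREATE TABLE IF NOT EXISTS products
                            (sku TEXT PRIMARY KEY, name TEXT, image_url TEXT, updated_at INTEGER)""")

            # The site's newest row tells us where the last sync stopped.
            (last_sync,) = site.execute(
                "SELECT COALESCE(MAX(updated_at), 0) FROM products").fetchone()

            # Only rows assigned to this site and touched since then are transferred.
            changed = master.execute(
                """SELECT p.sku, p.name, p.image_url, p.updated_at
                   FROM products p JOIN site_products sp ON sp.sku = p.sku
                   WHERE sp.site_id = ? AND p.updated_at > ?""",
                (site_id, last_sync)).fetchall()

            with site:  # one transaction per site: a failed sync never leaves a half-updated catalog
                site.executemany("INSERT OR REPLACE INTO products VALUES (?,?,?,?)", changed)

            master.close()
            site.close()
            return len(changed)

    Changing a t-shirt image then means updating one row (and its updated_at stamp) on the master; each site picks it up on its next sync run. The trade-offs asked about (bandwidth, lag between master and sites, conflict handling) all live in how often and how robustly this loop runs.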

    Read the article

  • Consultations with ATG Development at OpenWorld 2014

    - by Steven Chan (Oracle Development)
    Our OpenWorld 2014 San Francisco conference is about six weeks away. We have a great lineup of sessions this year. Our EBS Applications Technology track sessions are listed here, and we'll have a more-detailed article about those soon. One of the advantages of attending OpenWorld is that you can meet face-to-face with senior staff in ATG Development. You can use these meetings to discuss your questions, requirements, plans, and deployment architectures with us. There are several options for doing this:
    - At general sessions: collar the speaker of your choice after his or her presentation.
    - At the Meet The Experts sessions: these are first-come, first-served round-table discussions.
    - Setting up private meetings via your Oracle account manager.
    The last option is best if you have lots of in-depth questions or confidential details about your implementation that cannot be discussed in front of other customers. Many of this blog's experts, including me, will be attending OpenWorld this year. If you'd like to meet with us privately, please contact your Oracle account manager to arrange that as soon as possible. My calendar, in particular, is already starting to fill up. It is often completely full by the time OpenWorld starts. See you there!

    Read the article

  • Is Akka a good solution for a concurrent pipeline/workflow problem?

    - by herpylderp
    Disclaimer: I am brand new to Akka and the concept of Actors/Event-Driven Architectures in general. I have to implement a fairly complex problem where users can configure a "concurrent pipeline":
    - Pipeline: consists of 1+ Stages; all Stages execute sequentially
    - Stage: consists of 1+ Tasks; all Tasks execute in parallel
    - Task: essentially a Java Runnable
    As you can see above, a Task is a Runnable that does some unit of work. Tasks are organized into Stages, which execute their Tasks in parallel. Stages are organized into the Pipeline, which executes its Stages sequentially. Hence if a user specifies the following Pipeline:
    CrossTheRoadSafelyPipeline
      Stage 1: Look Left
        Task 1: Turn your head to the left and look for cars
        Task 2: Listen for cars
      Stage 2: Look right
        Task 1: Turn your head to the right and look for cars
        Task 2: Listen for cars
    then Stage 1 will execute, and then Stage 2 will execute. However, while each Stage is executing, its individual Tasks execute in parallel, at the same time. In reality, Pipelines will become very complicated, with hundreds of Stages and dozens of Tasks per Stage (again, executing at the same time). To implement this Pipeline I can only think of a few solutions: ESB/Apache Camel, Guava EventBus, Java 5 concurrency (ExecutorService, etc.), or Actors/Akka. Camel doesn't seem right because its core competency is integration, not synchronization and orchestration across worker threads. Guava is great, but this doesn't really feel like a publisher/subscriber type of problem. And Java 5 concurrency just feels too low-level and painful. So I ask: is Akka a strong candidate for this type of problem? If so, how? If not, then why, and what is a good candidate?
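
    For the structure alone (Stages in sequence, Tasks in parallel), here is a minimal sketch using a plain thread pool, i.e. the "Java 5 concurrency"-style option, written in Python for brevity; the task functions are invented. An Akka version would typically use a supervising actor per Stage that fans the Tasks out to child actors and waits for all replies before the Pipeline actor moves on to the next Stage:

        from concurrent.futures import ThreadPoolExecutor

        # Invented task functions; each plays the role of a Java Runnable.
        def look_for_cars(direction):
            return f"looked {direction}"

        def listen_for_cars(direction):
            return f"listened {direction}"

        pipeline = [                                                              # a Pipeline is an ordered list of Stages
            [lambda: look_for_cars("left"),  lambda: listen_for_cars("left")],    # Stage 1
            [lambda: look_for_cars("right"), lambda: listen_for_cars("right")],   # Stage 2
        ]

        def run_pipeline(stages, max_workers=8):
            results = []
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                for stage in stages:                                 # Stages run one after another
                    futures = [pool.submit(task) for task in stage]  # Tasks in a Stage run in parallel
                    results.append([f.result() for f in futures])    # block until the whole Stage is done
            return results

        print(run_pipeline(pipeline))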

    Read the article

  • Run a VirtualBox VM in a second X-Server with Graphic support

    - by Scindix
    I'm starting a VirtualBox VM (Windows 7) in a second X server (Ubuntu 14.04), and I'm using the following xinit script (/path/.vboximage):
        optirun VBoxManage startvm <vm name> & exec tinywm
    I noticed that while running VirtualBox normally under GNOME (Unity, to be precise ;-) ) I get full graphics support. But when I run it on a second X server there seem to be some problems: Windows Aero doesn't seem to work, and Chrome WebGL demos run with poorer performance. I'm not a big Windows expert, so I don't know how I could check which graphics card (specification) is being used, but it is very obvious that something has changed when running the VM in the extra X server. Also, when I try to replace tinywm with Compiz I get the Unity frame around the VM, which also seems to have no graphics acceleration (no transparency effects). So it seems that the second X server doesn't have graphics acceleration at all. I have an NVidia 525m and an Intel HD3000, which are both capable of advanced graphics. I'm starting the above script with:
        startx /path/.vboximage - :1
    How could I fix that?

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is about much more than optimizing project performance and eliminating project failure on new projects, capital programs, etc. A very common use case we see is small-scale, frequent and recurring projects based on ongoing operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration of work orders from Oracle's Enterprise Asset Management (eAM) system into Primavera P6 project schedules. So now that your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in the Primavera tools to assign work orders, maximize resource utilization and reuse templates for typical O&M operations in Primavera, and share that back to the operations teams using eAM for maintenance. Also, large-scale maintenance operations on major assets in the asset lifecycle include phase-outs, shutdowns and turnarounds, which are classic maintenance projects (as opposed to building something new) for which Oracle Primavera together with Oracle E-Business Suite provides full coverage to optimize your ALM processes. Read more about these new capabilities from Oracle in the ERP space in the Oracle eAM data sheet.

    Read the article

  • Should Android and iPhone UI be different?

    - by Phonon
    I'm not completely new to developing apps, but I'm at a point where I'm trying to develop something and deploy it on several mobile platforms. To concentrate on only the two major ones, suppose I'm developing an app for Android and iPhone and designing the UI and the general user interaction architecture. Both platforms give guidelines as to how their UIs should work. For example, most iPhone apps have the Navigation Bar (the one that says Testing 1 and has a Back button) and an Icon Bar for navigating a program, while Android uses an Options Menu fetched via a Menu button, and "back" navigation is handled with the physical Back button on the device. I've seen many apps that try to force the same UI on every platform, for example custom-building an iPhone-style Icon Bar and putting it in their Android apps, but it just doesn't quite look right to me and it feels like it somewhat violates the UI design guidelines. Are there any good design patterns for implementing something sufficiently similar on both platforms, yet still platform-specific enough that the user would not feel out of their comfort zone? What do people usually do in these situations?

    Read the article

  • Software for a online collaborative bi/tri lingual dictionary [closed]

    - by user537488
    I am looking for software that I can host on popular, general shared web hosting services (online software like WordPress, MediaWiki, Drupal, etc.) and that can do the following:
    - allow users to create accounts
    - allow users or anonymous visitors to add words to the dictionary (English as the base language, plus other languages)
    - an easy way to import all the words from an English dictionary
    - users should be able to write that language's equivalent of each English word
    - every word should have its own address and page, like www.namesomething.com/word/en/software, containing the word "software" and the other-language word for it
    - search should be fast and should find near matches
    - it should be able to list related words: if the user is looking at "software", other words starting with "s" like "softcopy" should appear alphabetically on that page
    - anyone should be able to comment on a word; comments are not shown on the main page but on a separate page, similar to the talk page in a wiki
    - anyone should be able to contribute
    - a clean interface, unlike a wiki (MediaWiki and all the others), just for the words
    I tried MediaWiki and other wiki software, but they felt overloaded and unclean. I am looking for an interface similar to oed.com but clean and minimal, as we are not going to have that much information: just words in English and their equivalents in the other language. Here we are talking about a language which has not yet been on the Internet. It should be collaborative.

    Read the article

  • Oracle E-Business Products New Search Helpers for Guided Resolution of Customer Issues

    - by user793044
    Oracle E-Business Proactive Support has created many new guided resolution documents that you may find helpful in resolving issues in your EBS applications. These new documents are called "Search Helpers" and they guide you through your issue to a solution. They are meant to be an easy and fast method to finding a relevant, complete solution. Hundreds of notes and service requests were reviewed and the best solutions to these known issues were selected. For some issues, notes were updated to better clarify the solution. In other cases, if a note with a solution did not already exist, one was created. You start the process by selecting the scenario you have encountered. You may have received an error message, or there may be a particular area of the application in which you have encountered an issue. Based on your selection of the issue, the Search Helper will present one or more additional possible symptoms. When you have selected from both of these two sections, you are then presented with one or more articles known to have fully solved this issue in the past. Several EBS products have produced Search Helpers documents. Take a look at Doc ID 1501724.1 for an index of the current EBS Search Helpers. Here is an example of a Search Helper from the Receivables Transactions area: after selecting the Functional Area of "Entering / Updating Transactions", a list of Known Symptoms is presented, and when "Transaction numbers are not in sequence" is selected, a solution link is provided for Document ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. The EBS applications that currently have published Search Helpers are: Advanced Pricing, Applications Technology, Configurator, General Ledger, Human Capital Management, Inventory Management, Order Management, Payables, Process Manufacturing, Purchasing, Receivables, Shipping, and Value Chain Planning.

    Read the article

  • MySQL documentation writer for MEM and Replication wanted!

    - by stefanhinz
    As MySQL is thriving and growing, we're looking for an experienced technical writer located in the UK or Ireland to join the MySQL documentation team. For this job, we need the best and most dedicated people around. You will be part of a geographically distributed documentation team responsible for the technical documentation of all MySQL products. Team members are expected to work independently, requiring discipline and excellent time-management skills as well as the technical facilities and experience to communicate across the Internet. Candidates should be prepared to work intensively with our engineers and support personnel. The overall team is highly distributed across different geographies and time zones. Our source format is DocBook XML. We're not just writing documentation, but also handling publication. This means you should be familiar with DocBook, and willing to learn our publication infrastructure. Your areas of responsibility would initially be MySQL Enterprise Monitor, and MySQL Replication. This means you should be familiar with MySQL in general, and preferably also with the MySQL Enterprise offerings. A MySQL certification will be considered an advantage. Other qualifications you should have:
    - Native English speaker
    - 5 or more years previous experience in writing software documentation
    - Familiarity with distributed working environments and versioning systems such as SVN
    - Comfortable with working on multiple operating systems, particularly Windows, Mac OS X, and Linux
    - Ability to administer own workstations and test environment
    - Excellent written and oral communication skills
    - Ability to provide (online) samples of your work, e.g. books or articles
    If you're interested, contact me under [email protected]. For reference, the job offer can be viewed here.

    Read the article

  • Why won't my graphics work in Ubuntu 12.04 LTS?

    - by user170974
    I'm very new to Ubuntu and to Linux in general, and took the leap and formatted my PC to Ubuntu 12.04 LTS very recently :) I seem to be having some trouble getting my graphics card to run properly. I looked over what information I could find but I still cannot get it up and running, and figured this was a good place to ask for help. The information I can find on my graphics is as follows. The terminal command lspci outputs:
        01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS880M [Mobility Radeon HD 4225/4250]
        02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Madison [Mobility Radeon HD 5650/5750 / 6530M/6550M]
    I tried using a mixture of the following links:
    - How do I fix my installation of ATI Catalyst Video Driver in 12.04 LTS?
    - What is the correct way to install ATI Catalyst Video Drivers (fglrx)?
    - Ubuntu Precise Installation Guide
    But it does not seem to work, since running fglrxinfo in a terminal gives:
        display: :0.0  screen: 0
        OpenGL vendor string: VMware, Inc.
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x301)
        OpenGL version string: 1.4 (2.1 Mesa 9.0.3)
    What am I doing wrong here? All help appreciated ;) Edit: I have tried the legacy driver from www2.ati.com/drivers/linux/amd-driver-installer-catalyst-13-4-linux-x86.x86_64.zip, and I also tried the guide at https://launchpad.net/~makson96/+archive/fglrx, which caused the system to crash (black screen, no boot). Neither seemed to work. I did however reinstall Ubuntu 12.04 LTS and re-tried both with no success. Reinstalling Ubuntu did however fix the broken dependencies problems, etc.

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.
    Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.
    Example queries:
    - Simple: for a given XYZ postcode, what is the average house price for a 3-bedroom house?
    - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, and features)?
    In addition to the average price, the application should support other property info (maximum or minimum price, etc.) and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not force the search through a primary key or a few fixed fields. In other words, queries can be: What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)?
    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore it further.
    My findings so far (I could be wrong on the following, so please correct me if you think so):
    - I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues.
    - DB design: as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes about a property or other entities (users) to evolve quickly, so maintenance would be an issue, and with multiple users executing queries at the same time, performance would be a bottleneck.
    - Other options like graph databases (http://www.tinkerpop.com/) seem a bit complex (they are good, but using tools meant for generic purposes feels like doing assembly programming to solve my problem).
    - Big-data-related solutions are for analysing data from multiple unrelated domains.
    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for property listing or similar portals.)
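
    For the "simple" queries at least, a flat sales-history table and an ordinary SQL aggregate are enough; the schema below is invented purely to make the example runnable:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE sales
                        (postcode TEXT, bedrooms INTEGER, price INTEGER, sold_on TEXT)""")
        conn.executemany("INSERT INTO sales VALUES (?,?,?,?)", [
            ("XYZ", 3, 250000, "2013-06-01"),
            ("XYZ", 3, 265000, "2013-06-20"),
            ("XYZ", 2, 180000, "2013-06-11"),
        ])

        # Average price for a 3-bedroom house in postcode XYZ
        (avg_price,) = conn.execute(
            "SELECT AVG(price) FROM sales WHERE postcode = ? AND bedrooms = ?",
            ("XYZ", 3)).fetchone()
        print(avg_price)   # 257500.0

    The hard part is not this query but the evolving set of house attributes; that is usually where people reach for an entity-attribute-value table, a document store, or a wide table with a column per insight, each with its own query-performance trade-offs.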

    Read the article

  • Blogger Blog Takes Ages to Load after Custom Domain Redirection

    - by abhisek
    I recently bought a custom domain for a Blogger blog (technabled.com) that I have had for some time now. I followed the instructions in Blogger's documentation and added A records and a CNAME record with my DNS provider. But now some strange problems are cropping up. If I connect to my broadband network and then ping technabled.com, it times out. Then, if I visit the web page, which takes almost one and a half minutes to load, and then ping technabled.com, it shows the expected result. This is not just me; I asked some of the regular readers, who reported the same issue. As a result, I am losing a lot of visits. What is stranger is that subsequent visits to the blog are faster. I have checked with a few online services to test the performance. WebPageTest seems to say the same thing: http://www.webpagetest.org/result/110117_1N_7PE/ (please see the First View / Repeat View times). Also, the PageSpeed score is not that bad, so I am ruling out other possibilities. I am at a loss as to what I should do to find a solution. Help is much appreciated. :)

    Read the article

  • What will be the better way for data retrieval on application that needs to handle limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would have made it really long. I am interested in knowing better ways of retrieving data in an application that needs to handle a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically or manually on a daily basis, based on user preference, the actual version update happens only once every few months or so, since it is made by another authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains (n number of groups) x (days in a week) x (n number of schedules). The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings. Although I have used SQLite so far, I want to know how to handle the update on the database. Should I show a progress dialog saying that the application is updating, and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to be made while the user is searching, which is also done using the database; this will cause a "database already open" exception (at least, I have faced this problem before). Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Also, since I make a lot of use of adapters in my app to create lists, will that degrade performance?
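
    A minimal sketch of the "update only if the remote version is newer" logic using the standard library; the XML shape (a version attribute on the root, one element per schedule entry) and the table names are assumptions, not the actual feed:

        import sqlite3
        import xml.etree.ElementTree as ET

        def update_if_newer(xml_text, db_path="schedule.db"):
            root = ET.fromstring(xml_text)
            remote_version = int(root.get("version"))

            conn = sqlite3.connect(db_path)
            conn.execute("CREATE TABLE IF NOT EXISTS meta(key TEXT PRIMARY KEY, value INTEGER)")
            conn.execute("CREATE TABLE IF NOT EXISTS schedule(grp TEXT, day TEXT, entry TEXT)")
            row = conn.execute("SELECT value FROM meta WHERE key='version'").fetchone()
            local_version = row[0] if row else -1

            if remote_version <= local_version:
                conn.close()
                return False                      # local copy is current; nothing touches the database

            with conn:                            # one transaction, so a search never sees a half-written schedule
                conn.execute("DELETE FROM schedule")
                conn.executemany("INSERT INTO schedule VALUES (?,?,?)",
                                 [(e.get("group"), e.get("day"), e.text) for e in root])
                conn.execute("INSERT OR REPLACE INTO meta VALUES ('version', ?)", (remote_version,))
            conn.close()
            return True

    With roughly 100 strings the whole rewrite takes a few milliseconds, so a progress dialog is hardly needed; keeping the swap inside a single transaction (or writing to a temporary table and swapping it in) helps avoid the "database already open"/locked conflicts with an in-progress search.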

    Read the article

  • Microsoft Access as a Weapon of War

    - by Damon
    A while ago (probably a decade ago, actually) I saw a report on a tracking system maintained by a U.S. Army artillery control unit. This system was capable of maintaining a bearing on various units in the field to help avoid friendly fire. I consider the U.S. Army to be the most technologically advanced fighting force on Earth, but to my terror I saw something on the title bar of an application displayed on a laptop behind one of the soldiers they were interviewing: Tracking.mdb. Oh yes. The Microsoft Office suite had made it onto the battlefield. My hope is that it was just running as a front-end for a more proficient database (no offense, Access people), or that the soldier was tracking something else, like KP duty or fantasy football scores. But I could also see the corporate equivalent of a pointy-haired boss walking into a cube and asking someone who had piddled with Access to build a database for HR forms. Except this pointy-haired boss would have been a general, the cube would have been a tank, and the HR forms would have been targets that, if something went amiss, would have been hit by a 500 lb artillery round. Hope that soldier could write a good query :)

    Read the article

  • Is there a NVIDIA driver (for a 7-series card) that will actually work for 12.10?

    - by DS13
    I see many similar topics on this, but I've tried all their suggestions and nothing has worked. ISSUE: I do a clean install of Ubuntu 12.10. It boots fine with the "nouveau" graphics driver, but graphics are very slow and choppy. The three other driver options in Ubuntu (official NVIDIA drivers) all result in a variation of the black screen on boot up. There is no access to a command line or GUI in any way whatsoever (I tried every option recommended out there, but the system is unusable at this stage). I can only reinstall and try different drivers, and I only ever get one shot at it. QUESTIONS: Does anyone know of an NVIDIA driver that will actually work with an NVIDIA GeForce 7350LE, or a 7-series card in general? This is my second computer, and I'm just trying to get a working install of Ubuntu on it. I don't want to put much money into it, as I have seen Ubuntu run great on much older/less capable machines. I've got a decent Intel processor (2.3GHz), 2GB of RAM, a 320GB hard drive, 32-bit architecture, and there is no other OS installed. It appears as if the graphics card is holding me back. Should I just buy a cheap graphics card (non-NVIDIA) to put in as a replacement? TRIED SO FAR:
    - all drivers available in Ubuntu (all fail)
    - manual install of some different NVIDIA drivers (all fail)
    - installing the generic kernel; the NVIDIA driver doesn't work in 12.10 (no difference)
    - every method suggested to at least get a command line after switching to an NVIDIA driver (all fail)

    Read the article

  • Hobbyist transitioning to earn money on paid work?

    - by Chelonian
    I got into hobbyist Python programming some years ago on a whim, having never programmed before other than BASIC way back when, and little by little have cobbled together what is, in my opinion, a nice little desktop application that I might try to get out there in some fashion someday. It's roughly 15,000 logical lines of code, and includes use of Python, wxPython, SQLite, and a number of other libraries; it works on Windows and Linux (maybe Mac, untested), and I've gotten some good feedback about the application's virtues from non-programmer friends. I've also done a small application for data collection for animal behavior experiments, and an ad hoc tool to help generate a web page, and I've authored some tutorials. I consider my Python skills to be appreciably limited and my SQL skills to be very limited, but I'm not totally out to sea either (e.g. I did FizzBuzz in a few minutes, wrote a "Monty Hall Dilemma" simulator in some minutes, etc.). I also put a strong premium on quality user experience; that is, the look and feel matters much to me and the software looks quite good, I feel. I know no other programming languages yet. I also know the basics of HTML/CSS (not considering them programming languages), have created an artist's web page (that was described by a friend as "incredibly slick"... it's really not, though), and have a scientific background. I'm curious: aside from directly selling my software, what's roughly possible, if anything, in terms of earning either side money on gigs, or actually getting hired at some level in the software industry, for someone with this general skill set?

    Read the article

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop. Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop to Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS. Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access to data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.

    Read the article

  • Oracle Day 2012

    - by Mark Hesse
    As a keynote speaker at this year's Oracle Day 2012, "Your Vision, Engineered", I had the honor and pleasure of speaking to a crowd of about 150 attendees about our recently released, fourth-generation Exadata X3 In-Memory Machine in a presentation entitled "Oracle Exadata X3 - Transforming Data Management". The general theme of the thirty-minute talk was how to improve performance, lower costs, and build the foundation for your cloud service platform using Exadata. Since its introduction in 2008, I've watched first-hand as Exadata has evolved from a data warehouse-only system to an OLTP and DW in-memory database machine capable of storing hundreds of terabytes of compressed user data in flash and main memory. Many of my Exadata customers are now purchasing additional systems as they continue to standardize Oracle 11g deployments on the best database platform available.

    Read the article

  • Randomly and uniquely iterating over a range

    - by Synetech
    Say you have a range of values (or anything else) and you want to iterate over the range and stop at some indeterminate point. Because the stopping value could be anywhere in the range, iterating sequentially is no good: it causes the early values to be accessed more often than later values (which is bad for things that wear out), and it reduces performance since it must traverse extra values. Randomly iterating is better because it will (on average) increase the hit rate so that fewer values have to be accessed before finding the right one, and it also distributes the accesses more evenly (again, on average). The problem is that the standard method of randomly jumping around will result in values being accessed multiple times, and has no automatic way of determining when each value has been checked and thus the whole range has been exhausted. One simplified and contrived solution could be to make a list of each value, pick one at random, then remove it. Each time through the loop, you pick one from the set of remaining items. Unfortunately this only works for small lists. As a (forced) example, say you are creating a game where the program tries to guess what number you picked and shows how many guesses it took. The range is between 0-255 and instead of asking Is it 0? Is it 1? Is it 2?…, you have it guess randomly. You could create a list of 256 numbers, pick randomly and remove it. But what if the range was between 0 and 2^32? You can't really create a 4-billion item list. I've seen a couple of implementations of RNGs that are supposed to provide a uniform distribution, but none that are also supposed to be unique, i.e., no repeated values. So is there a practical way to randomly, and uniquely, iterate over a range?
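
    One practical trick: walk an index i from 0 to n-1 and visit (a*i + c) mod n, where a is chosen coprime to n. That mapping is a bijection, so every value in the range is visited exactly once, in a scrambled (though not cryptographically random) order, with nothing stored but two integers; for stronger mixing, a Feistel network or format-preserving encryption over the index can be swapped in. A minimal Python sketch of the idea:

        import math
        import random

        def unique_scrambled_range(n, seed=None):
            """Yield every integer in [0, n) exactly once, in a scrambled order,
            without materializing a list of n items."""
            rnd = random.Random(seed)
            c = rnd.randrange(n)                  # random offset
            a = rnd.randrange(n) | 1              # random multiplier, forced odd
            while math.gcd(a, n) != 1:            # must be coprime to n for a bijection
                a = rnd.randrange(n) | 1
            for i in range(n):
                yield (a * i + c) % n

        # Example: guess a secret in [0, 256) without ever repeating a guess
        secret = 177
        for guesses, candidate in enumerate(unique_scrambled_range(256, seed=42), start=1):
            if candidate == secret:
                print(f"found {candidate} after {guesses} guesses")
                break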

    Read the article

  • Have you Been Missing the 'About This Record' Functionality on the Customer Form...?

    - by MargaretW
    Do you have fond memories of the 'Help -> About This Record' functionality that used to be available in the old Customer form - when it was a form, and not a Java HTML screen? Back in Release 11i, we had the ability to identify when the customer record had last been updated and by whom. When some forms were replaced by Java HTML screens, you could identify some of this information via the 'About this Page' hyperlink at the bottom left-hand corner of the HTML page. You could enable this via the FND: Diagnostics profile option, but many customers found this had an adverse effect on performance and additionally was not user-friendly. Our customers tell us that this feature was widely used to identify owner/update information in many business processes, including auditing, customer entry/update, research and testing. There have been various efforts to restore this feature by customising Java pages, but this was not fully successful in some cases. Oracle Support is happy to announce that this functionality has now been included in the Customer screens in Release 12.2 onwards. You will be able to query the record history at customer level, at site level, at site address levels and for all tabs relating to the customer. Simply click on the 'Record History' icon, available in the Record History column on a summary screen, or via the same icon on the individual detail screen, to display the following information: Last Updated Date, Last Updated By, Creation Date, Created By, and Last Update Login.

    Read the article

  • Oracle Solaris 11.1 available today

    - by user12611852
    Today Oracle is pleased to announce the availability of Oracle Solaris 11.1.
    - Download Solaris 11.1
    - Order the Solaris 11.1 media kit
    - Existing customers can quickly and simply update using the network-based repository
    Highlights include:
    - 8x faster database startup and shutdown, and online resizing of the database SGA, with a new optimized shared memory interface between the database and Oracle Solaris 11.1
    - Up to 20% throughput increases for Oracle Real Application Clusters by offloading lock management into the Oracle Solaris kernel
    - Expanded support for Software Defined Networks (SDN), with Edge Virtual Bridging enhancements to maximize network resource utilization and manage bandwidth in cloud environments
    - 4x faster Solaris Zone updates with parallel operations to shorten maintenance windows
    - A new built-in memory predictor that monitors application memory use and provides optimized memory page sizes and resource location to speed overall application performance
    Learn more and share these valuable tools with your customers to enable them to move to Oracle Solaris 11.1 quickly. Many customers wait for the first update; now is the time to encourage them to install Oracle Solaris 11.1. Further reading:
    - Oracle Solaris 11.1 Data Sheet
    - What's New in Oracle Solaris 11.1
    - Oracle Solaris 11.1 FAQs
    - Oracle Solaris 11.1 Customer Presentation
    Oracle Solaris 11.1 is recommended for all SPARC T4 systems and will soon be available preinstalled.

    Read the article

  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight Team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available through Connect to SQL CTP customers since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor, V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significant new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team

    Read the article

  • Mechanics of reasoning during programming interviews

    - by user129506
    This is not the usual "I don't want to write code during an interview". In this question the assumptions are:
    - I need to write code during an interview (think about the level of rewriting quicksort or mergesort from scratch)
    - I know how the algorithm works or I have a basic idea of how I should start working from there, i.e. I don't remember the algorithm by heart
    I noticed that even on a whiteboard, I always end up writing buggy code or code that doesn't compile. If there's a typo, whatever, I can usually live with that... but when there's a crash due to some uncaught particular case I end up losing confidence in my skills. I realize that perhaps interviewers might want to look at how I write code and/or how I solve problems rather than proof-compiling my whiteboard code, but I'd like to ask how I should approach the above problem in mental terms, i.e. what mental steps should I follow when writing code for an interview given the two bullet points above. There must be a single, agreed series of steps I should follow to avoid getting stuck or caught up in particular exception cases (limit cases) that might end up wasting my time and energy rather than letting me focus on the overall algorithm for the general case. I hope I made my point clear.

    Read the article

  • Pure functional programming and game state

    - by Fu86
    Is there a common technique to handle state (in general) in a functional programming language? There are solutions in every (functional) programming language to handle global state, but I want to avoid that as far as I can. In a pure functional manner, all state is passed as function parameters. So I need to pass the whole game state (a gigantic hashmap with the world, players, positions, score, assets, enemies, ...) as a parameter to every function that wants to manipulate the world on a given input or trigger. The function itself picks the relevant information from the game-state blob, does something with it, manipulates the game state and returns the new game state. But this looks like a poor man's solution to the problem. If I put the whole game state into all functions, there is no benefit for me compared with global variables or the imperative approach. I could instead pass just the relevant information into the functions and return the actions to be taken for the given input, and have one single function apply all the actions to the game state. But most functions need a lot of "relevant" information: move() needs the object position, the velocity, the map for collision, the positions of all enemies, current health, ... so this approach does not seem to work either. So my question is: how do I handle the massive amount of state in a functional programming language, especially for game development?
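
    A minimal sketch of the "state in, new state out" style in Python (the question is language-agnostic; an immutable record plus pure update functions is the core idea, and all names here are invented). Each function receives only the state and returns a fresh copy; the game loop is just a fold of the frame's events over the state:

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)
        class Player:
            x: int
            y: int
            health: int

        @dataclass(frozen=True)
        class GameState:
            player: Player
            score: int

        def move_player(state, dx, dy):
            p = state.player
            return replace(state, player=replace(p, x=p.x + dx, y=p.y + dy))

        def add_score(state, points):
            return replace(state, score=state.score + points)

        def step(state, events):
            for apply_event in events:            # thread the state through every update for this tick
                state = apply_event(state)
            return state

        s0 = GameState(Player(0, 0, 100), score=0)
        s1 = step(s0, [lambda s: move_player(s, 1, 0), lambda s: add_score(s, 10)])
        print(s0, s1, sep="\n")                   # s0 is untouched; s1 is the new world

    Passing the whole record around stays manageable because updates go through small helpers like move_player rather than poking fields directly; lenses, a State monad, or functional reactive programming are the usual next steps when that plumbing gets noisy.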

    Read the article
