Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Parallelism implies concurrency but not the other way round right?

    - by Cedric Martin
    I often read that parallelism and concurrency are different things, and very often the answerers/commenters go as far as writing that they're two entirely different things. In my view they're related, but I'd like some clarification on that. For example, if I'm on a multi-core CPU and manage to divide the computation into x smaller computations (say using fork/join), each running in its own thread, I'll have a program that is both doing parallel computation (because supposedly at any point in time several threads are going to run on several cores) and concurrent, right? Whereas if I'm simply using, say, Java and dealing with UI events and repaints on the Event Dispatch Thread, plus running the only thread I created myself, I'll have a program that is concurrent (EDT + GC thread + my main thread, etc.) but not parallel. I'd like to know whether I'm getting this right and whether parallelism (on a single but multi-core system) always implies concurrency. Also, are multi-threaded programs running on a multi-core CPU, but where the different threads are doing totally different computations, considered to be using "parallelism"?
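
    To make the first scenario concrete, here is a minimal Java fork/join sketch of the kind of split-into-smaller-computations program described above (the array-summing workload and all class names are invented purely for illustration): the pool can run the sub-tasks on several cores at once, which is parallel and therefore also concurrent.

      import java.util.concurrent.ForkJoinPool;
      import java.util.concurrent.RecursiveTask;

      // Splits one computation into smaller tasks; on a multi-core CPU the
      // ForkJoinPool can run the sub-tasks on several cores at the same time.
      class SumTask extends RecursiveTask<Long> {
          private static final int THRESHOLD = 10_000;
          private final long[] data;
          private final int from, to;

          SumTask(long[] data, int from, int to) {
              this.data = data; this.from = from; this.to = to;
          }

          @Override
          protected Long compute() {
              if (to - from <= THRESHOLD) {              // small enough: just compute it
                  long sum = 0;
                  for (int i = from; i < to; i++) sum += data[i];
                  return sum;
              }
              int mid = (from + to) / 2;                 // otherwise split in two
              SumTask left = new SumTask(data, from, mid);
              left.fork();                               // run the left half asynchronously
              long rightSum = new SumTask(data, mid, to).compute();
              return rightSum + left.join();
          }
      }

      public class ForkJoinDemo {
          public static void main(String[] args) {
              long[] data = new long[1_000_000];
              java.util.Arrays.fill(data, 1L);
              long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
              System.out.println(total);                 // prints 1000000
          }
      }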

    Read the article

  • How to use Google Analytics to track a development and production versions of the same site on different servers?

    - by Abe
    I have a website with two versions, one for production and one for development (for testing new features). All of the code is under version control and the websites are on separate servers. Currently, I have the same Google Analytics tracking code on both sites. Since the code is under version control, it would be ideal to have an "if on the production server, use this code; else if on the development server, use that code" clause. But I suspect that Google makes it easier to do something like this. I see that there are many ways to configure a GA tracking code, e.g. "a single domain" vs. "multiple top-level domains", but it is not clear to me how to set this up. Also, if tracking code configured for a single domain has been on the development server, have I been picking up traffic to both sites, or does GA just ignore the second domain that I haven't registered?

    Read the article

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how Tuxedo performs load balancing. This is often asked by customers that see an imbalance in the number of requests handled by servers offering a specific service. First of all, let me say that Tuxedo really does load or request optimization instead of load balancing. What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time. Simple round-robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests, but the question I ask is, "to what benefit?" Instead, Tuxedo scans the queues (which may or may not correspond to servers, based upon SSSQ - Single Server Single Queue - or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed. The scan is always performed in the same order, and during the scan, if a queue is empty the request is immediately placed on that queue and request routing is done. However, should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it, where work is the sum of all the requests queued, weighted by their "load" value as defined in the UBBCONFIG file. What this means is that under light loads, only the first few queues (servers) process all the requests, as an empty queue is often found before reaching the end of the scan. Thus the first few servers in the queue handle most of the requests. While this sounds non-optimal, in fact it capitalizes on the underlying operating system and hardware behavior to produce the best possible performance. Round-robin scheduling would spread the requests across all the available servers and thus require all of them to be in memory, and they would likely not share much in the way of hardware or memory caches. Tuxedo's system maximizes the various caches and thus optimizes overall performance. Hopefully this makes sense and explains why you may see a few servers handling most of the requests. Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed. Next post I'll try to cover how this applies to servers in a clustered (MP) environment, because the load balancing there is a little more complicated. Regards, Todd Little, Oracle Tuxedo Chief Architect
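
    As a rough illustrative sketch of the scan described above (this is not Tuxedo code; the class and field names are invented): queues are visited in a fixed order, the first empty queue wins immediately, and if every queue is busy the one with the least total queued work is chosen, where each waiting request contributes its UBBCONFIG load value.

      import java.util.List;

      // Illustrative model of the queue scan described in the post above.
      class RequestRouter {
          static class ServerQueue {
              final String name;
              final List<Integer> queuedLoads;   // "load" value of each request waiting on this queue
              ServerQueue(String name, List<Integer> queuedLoads) {
                  this.name = name;
                  this.queuedLoads = queuedLoads;
              }
              boolean isEmpty() { return queuedLoads.isEmpty(); }
              int queuedWork() {                 // sum of queued requests weighted by their load value
                  return queuedLoads.stream().mapToInt(Integer::intValue).sum();
              }
          }

          // Queues are always scanned in the same order; an empty queue ends the scan
          // immediately, otherwise the queue with the least queued work is chosen.
          static ServerQueue chooseQueue(List<ServerQueue> queues) {
              ServerQueue leastBusy = null;
              for (ServerQueue q : queues) {
                  if (q.isEmpty()) return q;
                  if (leastBusy == null || q.queuedWork() < leastBusy.queuedWork()) leastBusy = q;
              }
              return leastBusy;
          }
      }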

    Read the article

  • Gerrit, git and reviewing whole branch

    - by liori
    I'm now learning Gerrit (which is the first code review tool I have used). Gerrit requires a reviewed change to consist of a single commit. My feature branch has about 10 commits. The Gerrit-preferred way is to squash those 10 commits into a single one. However, this way, if the commit is merged into the target branch, the internal history of that feature branch will be lost. For example, I won't be able to use git-bisect to bisect into those commits. Am I right? I am a little bit worried about this state of things. What is the rationale for this choice? Is there any way of doing this in Gerrit without losing history?

    Read the article

  • Max Degree of Parallelism Server-Side Setting

    - by Tara Kizer
    Recently I opened a case with Microsoft PSS to help us through a severe performance problem on a new system. As part of that case, the PSS engineer checked our “max degree of parallelism” server-side setting. It is our standard to use 4 on our production systems that have 16 CPUs (2 sockets, quad-core, hyper-threaded). The PSS engineer had me run the below query to get Microsoft’s recommended value of the “max degree of parallelism” server-side setting for our 16-CPU system:

      select case
               when cpu_count / hyperthread_ratio > 8 then 8
               else cpu_count / hyperthread_ratio
             end as optimal_maxdop_setting
      from sys.dm_os_sys_info;

    The query returned 2. I made the change using sp_configure, and it did not resolve our issue. We have decided to leave it in place for now. Do you agree with this query? What are your thoughts on this? If you decide to change your setting to reflect the output of this query, please test it first to ensure there are no negative side effects.
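
    For reference, a small sketch of the arithmetic behind that result (the hyperthread_ratio value of 8 is an assumption inferred from the post, since 16 logical CPUs divided by it must yield the reported 2):

      public class MaxDopEstimate {
          // Mirrors the CASE expression in the query above: cap cpu_count / hyperthread_ratio at 8.
          static int optimalMaxDop(int cpuCount, int hyperthreadRatio) {
              return Math.min(cpuCount / hyperthreadRatio, 8);
          }

          public static void main(String[] args) {
              // cpu_count = 16 from the post; hyperthread_ratio = 8 is an assumption,
              // inferred because the query is reported to have returned 2.
              System.out.println(optimalMaxDop(16, 8));   // prints 2
          }
      }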

    Read the article

  • How should I make searching a relational database more efficient?

    - by Travis J
    This is in the scope of a web application. I have a database which has a few nested relations. There is a feature which depicts the history of a large chain of relations; it is essentially a data analysis feature. The issue is that in order to search, a large object graph must be loaded, and the loading time for this object graph is too long to be viable. The problem is that without loading the whole graph, searching from a single string is nearly impossible: in order to search, explicit fields must be specified and the search data supplied. Is there a design pattern for exposing the data in a way which facilitates a single-string search instead of having to explicitly define parameters?

    Read the article

  • What is the *correct* term for a program that makes use of multiple hardware processor cores?

    - by Ryan Thompson
    I want to say that my program is capable of splitting some work across multiple CPU cores on a single system. What is the simple term for this? It's not multi-threaded, because that doesn't automatically imply that the threads run in parallel. It's not multi-process, because multiprocessing seems to be a property of a computer system, not a program. "capable of parallel operation" seems too wordy, and with all the confusion of terminology, I'm not even sure if it's accurate. So is there a simple term for this?

    Read the article

  • A big flat text file or a HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have nice (but not very "wordy") documentation. So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged high-end graphical desktop environments. However, while I have tried to keep things nicely formatted (as much as is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that sometimes I'm not writing as much as I want in order to avoid expanding the text too much. So I thought I could use another project of mine, QuHelp, which is basically a help site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With this I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation. However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to functions (I'll need to put more detail on them than what currently exists in the readme.txt file). It also loses the wide availability that it currently has (even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable from everywhere, including from popular editors such as Vim and Emacs - I once had someone tell me that he likes LIL's documentation because it is readable without leaving his editor). So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site I mentioned above? There is also the option to do both, but I'm not sure if I'll be able to always keep them in sync or if it is worth the effort. After asking around in IRC I've got mixed answers: some liked the wide availability of the single text file, others said that it looks as bad as a man page (personally I don't mind that - I can read man pages just fine - but other people might have issues reading them). What do you think?

    Read the article

  • Xorg.conf (nvidia) Second Monitor getting settings of first

    - by HennyH
    I've been spending the weekend (and some time before that) trying to set up my Korean QHD270 and Benq G2222HDL monitors with Ubuntu 13.10. With the nouveau drivers installed, both monitors function perfectly fine. After installing the nvidia drivers, the Benq works but the QHD270 does not. Now, after days of struggling, I managed to get the QHD270 to work by following a mixture of blogs, particularly this one and learnitwithme. Unfortunately, my G2222HDL now does not work. I fixed the QHD270 by supplying a custom EDID; my xorg.conf looks like so (excluding keyboard and mouse):

      Section "ServerLayout"
          Identifier     "Layout0"
          Screen         "Default Screen" 0 0
          InputDevice    "Keyboard0" "CoreKeyboard"
          InputDevice    "Mouse0" "CorePointer"
      EndSection

      Section "Monitor"
          Identifier     "Configured Monitor"
      EndSection

      Section "Device"
          Identifier     "Configured Video Device"
          Driver         "nvidia"
          Option         "CustomEDID" "DFP:/etc/X11/edid-shimian.bin"
      EndSection

      Section "Screen"
          Identifier     "Default Screen"
          Device         "Configured Video Device"
          Monitor        "Configured Monitor"
      EndSection

    Now, I tried defining a new Device, Monitor and Screen, and then in ServerLayout adding Screen "Second Screen" RightOf "Default Screen", but after doing so neither monitor worked. Hoping to fix the issue using a GUI-based tool, I opened up NVIDIA X Server Settings, which shows my current layout. It seems that something is being output to the monitor, as suggested by my screenshot. Any help would be greatly appreciated. Output of xrandr:

      Screen 0: minimum 8 x 8, current 5120 x 1440, maximum 16384 x 16384
      DVI-I-0 disconnected (normal left inverted right x axis y axis)
      DVI-I-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
         2560x1440      60.0*+
      HDMI-0 disconnected (normal left inverted right x axis y axis)
      DP-0 disconnected (normal left inverted right x axis y axis)
      DVI-D-0 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 597mm x 336mm
         2560x1440      60.0*+
      DP-1 disconnected (normal left inverted right x axis y axis)

    And an extract from my log file (perhaps this is relevant?):

      [ 7.862] (--) NVIDIA(0): Valid display device(s) on GeForce GTX 680 at PCI:2:0:0
      [ 7.862] (--) NVIDIA(0): CRT-0
      [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0) (boot, connected)
      [ 7.862] (--) NVIDIA(0): DFP-1
      [ 7.862] (--) NVIDIA(0): DFP-2
      [ 7.862] (--) NVIDIA(0): DFP-3
      [ 7.862] (--) NVIDIA(0): DFP-4
      [ 7.862] (--) NVIDIA(0): CRT-0: 400.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): 330.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): Internal Dual Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-1: 165.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-1: Internal Single Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-2: 165.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-2: Internal Single Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-3: 330.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-3: Internal Single Link TMDS
      [ 7.862] (--) NVIDIA(0): DFP-4: 960.0 MHz maximum pixel clock
      [ 7.862] (--) NVIDIA(0): DFP-4: Internal DisplayPort

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored. Encoding table information As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: Flags specifying various properties of the class, including visibility. The name of the type. The namespace of the type. What type this type extends. The field list of this type. The method list of this type. How is all this data actually represented? Offset & RID encoding Most assemblies don't need to use a 4 byte value to specify heap offsets and RIDs everywhere, however we can't hard-code every offset and RID to be 2 bytes long as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used. Coded tokens Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0: TypeDef 0x1: TypeRef 0x2: TypeSpec We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern: 0 3 2 1 0000 0011 0010 0001 The first two bits specify the table - TypeRef; the other bits specify the RID. Because we've used the first two bits, we've got to shift everything along two bits: 000000 1100 1000 This gives us a RID of 0xc8. 
If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables. Lists The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDefs and MethodDefs belonging to that type. If we were to specify every FieldDef and MethodDef individually, then each TypeDef would be very large and of variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item: if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred from the field list and method list RIDs of the next row in the TypeDef table. Going back to the TypeDef If we have a look back at the definition of a TypeDef, we end up with the following representation for each row: Flags - always 4 bytes. Name - a #Strings heap offset. Namespace - a #Strings heap offset. Extends - a TypeDefOrRef coded token. FieldList - a single RID to the FieldDef table. MethodList - a single RID to the MethodDef table. So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
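
    As a quick illustration of the coded-token arithmetic from the example above (a sketch written in Java purely for illustration, with invented names; it is not part of any .NET API): the low two bits of the 2-byte value 0x0321 select the TypeRef table, and shifting them away leaves a RID of 0xC8.

      // Decodes a 2-byte TypeDefOrRef coded token: the low 2 bits select the table,
      // the remaining 14 bits are the RID.
      public class CodedTokenDemo {
          private static final String[] TYPE_DEF_OR_REF = { "TypeDef", "TypeRef", "TypeSpec" };

          public static void main(String[] args) {
              int codedToken = 0x0321;            // 0000 0011 0010 0001
              int tableTag = codedToken & 0x3;    // low 2 bits -> 0x1 = TypeRef
              int rid = codedToken >>> 2;         // remaining bits -> 0xC8
              System.out.printf("%s, RID 0x%X%n", TYPE_DEF_OR_REF[tableTag], rid);
              // prints: TypeRef, RID 0xC8
          }
      }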

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or, simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service, and EM12c R4 provides a significant update to it. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard); Additional Storage Options for Snap Clone (includes support for the database feature CloneDB); Improved Rapid Start Kits; Extensible Metering and Chargeback; and Miscellaneous Enhancements. 1. Comprehensive Database Service Catalog Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service, and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: present a collection of standardized database service definitions; define standardized pools of hardware and software for provisioning; role-based access to cater to different classes of users; automated procedures to provision the predefined database definitions; set up chargeback plans based on service tiers and database configuration sizes; etc. Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration: standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites); the standby databases can be single instance, RAC, or RAC One Node databases; multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software; the standby databases can be in either mount or read only mode (the latter requires the Active Data Guard option); all database versions from 10g to 12c are supported (as certified with EM 12c); all 3 protection modes can be used - maximum availability, performance, security; and log apply can be set to sync or async along with the required apply lag. The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (EM 12cR4):

      Primary   Standby [1 or more]
      SI        -
      SI        SI
      RAC       -
      RAC       SI
      RAC       RAC
      RON       -
      RON       RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
These mostly have been asked by customers like you. These are: Custom naming of DB Services Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces Every custom name is validated for uniqueness in EM 'Create like' of Service Templates Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels. Profile viewer View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template Cleanup automation - for failed and successful requests Single emcli command to cleanup all remnant artifacts of a failed request Cleanup can be performed on a per request bases or by the entire pool As an extension, you can also delete successful requests Improved delete user workflow Allows administrators to reassign cloud resources to another user or delete all of them Support for multiple tablespaces for schema as a service In addition to multiple schemas, user can also specify multiple tablespaces per request I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple! Combining likelihoods Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model. Facet Based Scores As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions. 
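
    As a point of reference for the discussion above, the naive combinations the post argues against, a plain average or a fixed weighted average of the per-facet likelihoods, would amount to something like this sketch (not RTD code; the method names and weights are invented):

      import java.util.Map;

      // Naive ways to collapse per-facet likelihoods into one number, for contrast
      // with the combined likelihood model described in the post.
      class NaiveCombiners {
          // Treats every facet as equally important.
          static double average(Map<String, Double> facetLikelihoods) {
              return facetLikelihoods.values().stream()
                      .mapToDouble(Double::doubleValue)
                      .average()
                      .orElse(0.0);
          }

          // Requires the relative importance of each facet to be fixed up front,
          // and never adapts as customer behaviour changes.
          static double weightedAverage(Map<String, Double> facetLikelihoods,
                                        Map<String, Double> weights) {
              double weightedSum = 0.0, totalWeight = 0.0;
              for (Map.Entry<String, Double> e : facetLikelihoods.entrySet()) {
                  double w = weights.getOrDefault(e.getKey(), 0.0);
                  weightedSum += w * e.getValue();
                  totalWeight += w;
              }
              return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
          }
      }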
A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing. One Offer To Predict Them All In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance, based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product are also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices; based solely on the output of the facet based models defined earlier. The Switcheroo In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below. // set session attribute to contain facet based scores. // (this is the only input for the combined model) session().setAnalyticalScores(choice.getAnalyticalScores); // predict likelihood of acceptance for All Offers choice. CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers"); Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted"); // clear session attribute of facet based scores. session().setAnalyticalScores(null); // return likelihood. return la; This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab. Recording Events In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function. 
// set session attribute to contain facet based scores // (this is the only input for the combined model). session().setAnalyticalScores(choice.getAnalyticalScores); // record input event against All Offers choice. CombinedLikelihood.getChoice("AllOffers").recordEvent(event); // force learn at this moment using the Internal Dock entry point. Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis()); // clear session attribute of facet based scores. session().setAnalyticalScores(null); In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values. Reporting Results After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the that fact our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types; as evident from the report on the predictiveness of customer age for offer category acceptance. Conclusion Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD; and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models; so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • Making a game "resize-safe"

    - by CPP_Person
    It's one thing to get the graphics aligned perfectly; it's another to do this for every single resolution without taking too much time and/or making the code unreadable due to its size. Games like Battlefield 3 and Minecraft seem to manage this. But what do they do to keep things from stretching or going off the screen? I don't know any algorithms for this, and I'd like some help on the topic. I've always programmed games that only handle a single resolution, so any help would be appreciated.
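
    One commonly used approach, offered here only as a hedged sketch rather than as what those particular games actually do, is to author the game against a fixed virtual resolution and scale it to the real screen, letterboxing to preserve the aspect ratio so nothing stretches or goes off screen. All names and numbers below are illustrative.

      // Maps a fixed virtual resolution onto an arbitrary real screen size,
      // letterboxing so the aspect ratio is preserved.
      public class VirtualViewport {
          static final float VIRTUAL_WIDTH = 1280f;
          static final float VIRTUAL_HEIGHT = 720f;

          final float scale;      // applied to every draw call
          final float offsetX;    // black bars left/right
          final float offsetY;    // black bars top/bottom

          VirtualViewport(float screenWidth, float screenHeight) {
              scale = Math.min(screenWidth / VIRTUAL_WIDTH, screenHeight / VIRTUAL_HEIGHT);
              offsetX = (screenWidth - VIRTUAL_WIDTH * scale) / 2f;
              offsetY = (screenHeight - VIRTUAL_HEIGHT * scale) / 2f;
          }

          // Converts a position authored in virtual coordinates to screen pixels.
          float toScreenX(float virtualX) { return offsetX + virtualX * scale; }
          float toScreenY(float virtualY) { return offsetY + virtualY * scale; }

          public static void main(String[] args) {
              VirtualViewport vp = new VirtualViewport(1920, 1200);   // a 16:10 screen
              System.out.println(vp.toScreenX(1280));                 // right edge of the playfield: 1920.0
              System.out.println(vp.toScreenY(0));                    // top letterbox bar height: 60.0
          }
      }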

    Read the article

  • How to Use Windows 8's Storage Spaces to Mirror & Combine Drives

    - by Chris Hoffman
    “Storage Spaces” is a new feature in Windows 8 that can combine multiple hard drives into a single virtual drive. It can mirror data across multiple drives for redundancy or combine multiple physical drives into a single pool of storage. You can even create pools of storage larger than the amount of physical storage space you have available. When the physical storage fills up, you can plug in another drive and take advantage of it with no additional configuration required. Storage Spaces is similar to RAID or LVM on Linux.

    Read the article

  • Combine 3D objects in XNA 4

    - by Christoph
    Currently I am writing on my thesis for university, the theme I am working on is 3D Visualization of hierarchical structures using cone trees. I want to do is to draw a cone and arrange a number of spheres at the bottom of the cone. The spheres should be arranged according to the radius and the number of spheres correctly. As you can imagine I need a lot of these cone/sphere combinations. First Attempt I was able to find some tutorials that helped with drawing cones and spheres. Cone public Cone(GraphicsDevice device, float height, int tessellation, string name, List<Sphere> children) { //prepare children and calculate the children spacing and radius of the cone if (children == null || children.Count == 0) { throw new ArgumentNullException("children"); } this.Height = height; this.Name = name; this.Children = children; //create the cone if (tessellation < 3) { throw new ArgumentOutOfRangeException("tessellation"); } //Create a ring of triangels around the outside of the cones bottom for (int i = 0; i < tessellation; i++) { Vector3 normal = this.GetCircleVector(i, tessellation); // add the vertices for the top of the cone base.AddVertex(Vector3.Up * height, normal); //add the bottom circle base.AddVertex(normal * this.radius + Vector3.Down * height, normal); //Add indices base.AddIndex(i * 2); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 2) % (tessellation * 2)); base.AddIndex(i * 2 + 1); base.AddIndex((i * 2 + 3) % (tessellation * 2)); base.AddIndex((i * 2 + 2) % (tessellation * 2)); } //create flate triangle to seal the bottom this.CreateCap(tessellation, height, this.Radius, Vector3.Down); base.InitializePrimitive(device); } Sphere public void Initialize(GraphicsDevice device, Vector3 qi) { int verticalSegments = this.Tesselation; int horizontalSegments = this.Tesselation * 2; //single vertex on the bottom base.AddVertex((qi * this.Radius) + this.lowering, Vector3.Down); for (int i = 0; i < verticalSegments; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = (float)Math.Cos(latitude); //Create a singe ring of latitudes for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex(normal * this.Radius, normal); } } // Finish with a single vertex at the top of the sphere. AddVertex((qi * this.Radius) + this.lowering, Vector3.Up); // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(CurrentVertex - 1); base.AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(CurrentVertex - 2 - i); } base.InitializePrimitive(device); } The tricky part now is to arrange the spheres at the bottom of the cone. I tried is to draw just the cone and then draw the spheres. I need a lot of these cones, so it would be pretty hard to calculate all the positions correctly. Second Attempt So the second try was to generate a object that builds all vertices of the cone and all of the spheres at once. So I was hoping to render a cone with all its spheres arranged correctly. After a short debug I found out that the cone is created and the first sphere, when it turn of the second sphere I am running into an OutOfBoundsException of ushort.MaxValue. Cone and Spheres public ConeWithSpheres(GraphicsDevice device, float height, float coneDiameter, float sphereDiameter, int coneTessellation, int sphereTessellation, int numberOfSpheres) { if (coneTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the cone. The number must be greater or equal to 3", coneTessellation)); } if (sphereTessellation < 3) { throw new ArgumentException(string.Format("{0} is to small for the tessellation of the sphere. The number must be greater or equal to 3", sphereTessellation)); } //set properties this.Height = height; this.ConeDiameter = coneDiameter; this.SphereDiameter = sphereDiameter; this.NumberOfChildren = numberOfSpheres; //end set properties //generate the cone this.GenerateCone(device, coneTessellation); //generate the spheres //vector that defines the Y position of the sphere on the cones bottom Vector3 lowering = new Vector3(0, 0.888f, 0); this.GenerateSpheres(device, sphereTessellation, numberOfSpheres, lowering); } // ------ GENERATE CONE ------ private void GenerateCone(GraphicsDevice device, int coneTessellation) { int doubleTessellation = coneTessellation * 2; //Create a ring of triangels around the outside of the cones bottom for (int index = 0; index < coneTessellation; index++) { Vector3 normal = this.GetCircleVector(index, coneTessellation); //add the vertices for the top of the cone base.AddVertex(Vector3.Up * this.Height, normal); //add the bottom of the cone base.AddVertex(normal * this.ConeRadius + Vector3.Down * this.Height, normal); //add indices base.AddIndex(index * 2); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 2) % doubleTessellation); base.AddIndex(index * 2 + 1); base.AddIndex((index * 2 + 3) % doubleTessellation); base.AddIndex((index * 2 + 2) % doubleTessellation); } //create flate triangle to seal the bottom this.CreateCap(coneTessellation, this.Height, this.ConeRadius, Vector3.Down); base.InitializePrimitive(device); } // ------ GENERATE SPHERES ------ private void GenerateSpheres(GraphicsDevice device, int sphereTessellation, int numberOfSpheres, Vector3 lowering) { int verticalSegments = sphereTessellation; int horizontalSegments = sphereTessellation * 2; for (int childCount = 1; childCount < numberOfSpheres; childCount++) { //single vertex at the bottom of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Down); for (int verticalSegmentsCount = 0; verticalSegmentsCount < verticalSegments; verticalSegmentsCount++) { float latitude = ((verticalSegmentsCount + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude); float dxz = 
(float)Math.Cos(latitude); //create a single ring of latitudes for (int horizontalSegmentsCount = 0; horizontalSegmentsCount < horizontalSegments; horizontalSegmentsCount++) { float longitude = horizontalSegmentsCount * MathHelper.TwoPi / horizontalSegments; float dx = (float)Math.Cos(longitude) * dxz; float dz = (float)Math.Sin(longitude) * dxz; Vector3 normal = new Vector3(dx, dy, dz); base.AddVertex((normal * this.SphereRadius) + lowering, normal); } } //finish with a single vertex at the top of the sphere base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Up); //create a fan connecting the bottom vertex to the bottom latitude ring for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(0); base.AddIndex(1 + (i + 1) % horizontalSegments); base.AddIndex(1 + i); } //Fill the sphere body with triangles joining each pair of latitude rings for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; base.AddIndex(1 + i * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); base.AddIndex(1 + i * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + nextJ); base.AddIndex(1 + nextI * horizontalSegments + j); } } //create a fan connecting the top vertiex to the top latitude for (int i = 0; i < horizontalSegments; i++) { base.AddIndex(this.CurrentVertex - 1); base.AddIndex(this.CurrentVertex - 2 - (i + 1) % horizontalSegments); base.AddIndex(this.CurrentVertex - 2 - i); } base.InitializePrimitive(device); } } Any ideas how I could fix this?

    Read the article

  • When following SRP, how should I deal with validating and saving entities?

    - by Kristof Claes
    I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3, and that I have a UsersController with a Create action like this:

      public class UsersController : Controller
      {
          public ActionResult Create(CreateUserViewModel viewModel)
          {
          }
      }

    In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle, an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this:

      public class UsersController : Controller
      {
          private ICreateUserValidator validator;
          private IUserService service;

          public UsersController(ICreateUserValidator validator, IUserService service)
          {
              this.validator = validator;
              this.service = service;
          }

          public ActionResult Create(CreateUserViewModel viewModel)
          {
              ValidationResult result = validator.IsValid(viewModel);
              if (result.IsValid)
              {
                  service.CreateUser(viewModel);
                  return RedirectToAction("Index");
              }
              else
              {
                  foreach (var errorMessage in result.ErrorMessages)
                  {
                      ModelState.AddModelError(String.Empty, errorMessage);
                  }
                  return View(viewModel);
              }
          }
      }

    That makes some sense to me, but I'm not at all sure that this is the right way to handle things like this. It is, for example, entirely possible to pass an invalid instance of CreateUserViewModel to the IUserService class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my ICreateUserValidator checks the database to see if there already is another user with the same name... Another option is to let the IUserService take care of the validation, like this:

      public class UserService : IUserService
      {
          private ICreateUserValidator validator;

          public UserService(ICreateUserValidator validator)
          {
              this.validator = validator;
          }

          public ValidationResult CreateUser(CreateUserViewModel viewModel)
          {
              var result = validator.IsValid(viewModel);
              if (result.IsValid)
              {
                  // Save the user
              }
              return result;
          }
      }

    But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of our customers have asked whether there are any problems with installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (let the user beware), as long as the user understands and accepts the issues that can occur, they can do this. Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product, but they certainly do not endorse doing so.
    Figure: For Project you can choose to keep the old stuff
    That being the case, I would have preferred that they put a "(NOT RECOMMENDED)" after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version's installer to remove the older versions. Of course, this does not apply in reverse: there are no forward compatibility packs for Office.
    There are a number of negative behaviours (or bugs) that can occur in this configuration:
    There is only one MS Project. In Windows a file extension can only be associated with a single program. In this case, MPP files can be associated with only one version of winproj.exe. The executables are in different folders, so if a user double-clicks a Project file on the desktop, in File Explorer, or in an Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file. There are problems associated with this situation and, in some cases, workarounds. For example, the user double-clicks a Project 2010 file and Project 2007 launches but is unable to open the file because it is a newer version. The workaround is for the user to launch Project 2010 from the Start menu and then open the file. If the file is attached to an email they will need to drag the file to the desktop first.
    All your linked MS Project files need to be of the same version. A number of problems occur when people use Microsoft's Object Linking and Embedding (OLE) technology. The three common uses of OLE are:
    - inserted projects, where a master project contains sub-projects and each sub-project resides in its own MPP file
    - shared resource pools, where multiple MPP files share a common resource pool kept in a single MPP file
    - cross-project links, where a task or milestone in one MPP file has a predecessor/successor relationship with a task or milestone in a different MPP file
    What I have seen happen is that if you are running a version of Project that is not associated with the MPP extension and you then try to activate an OLE link, Project tries to launch the other version of Project. Things get very confused because different MPP files are being controlled by different versions of Project running at the same time. I haven't tried this in a while so I can't give you exact symptoms, but I suspect that if Project 2010 is involved the symptoms will be different than in a Project 2003/2007 scenario. I've noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007. -Anonymous
    The recommendation is either not to use these features if you have to have multiple versions of Project installed, or to use only a single version of Project. You may also get unexpected negative behaviours when using shared resource pools even when you are not running multiple versions, as I have found that they can get broken very easily.
    If you need these things then it is probably best to use Project Server, as it was created to solve many of these specific issues. Note: I would not even allow multiple people to access a network copy of a Project file because of the way Windows locks files in write mode; I have seen users' files get write-locked to the point where the only resolution is to reboot the server.
    Changing the default version to run for an extension: so what if you want to change the default association from Project 2007 to Project 2010?
    Figure: "Control Panel | Folder Options | Change the file associated with a file extension"
    Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change, clicking "Change program… | Browse…" and then selecting the .exe you want to use on the file system.
    Figure: You will need to select the exact version of "winproj.exe" that you want to run
    Conclusion: although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

  • JavaOne - Java SE Embedded Booth - Digi - Home Health Hub (HHH)

    - by David Clack
    Hi All, another exciting platform we will have in the booth at JavaOne is the Digi Home Health Hub (HHH). http://www.digi.com/products/wireless-wired-embedded-solutions/single-board-computers/idigi-telehealth-application-kit#overview This is a Freescale reference design built by Digi. The system is powered by a Freescale i.MX28 ARM SoC, and what really excites me is that it has every wireless protocol you could ever want on a single motherboard: Ethernet, 802.11b/g/n Wi-Fi, Bluetooth, ZigBee, a configurable sub-GHz radio and NFC, plus USB, audio and an LCD/touch screen option. I've been experimenting with lots of wireless-capable healthcare products in the last few months, including some Bluetooth pulse/oximetry meters, and we have been looking at how the actual healthcare wireless protocols work. Steve Popovich, Vice President at Digi International, will be giving a talk at the Java Embedded @ JavaOne conference in the Hotel Nikko, right next door to the JavaOne show in the Hilton. If you are registered for JavaOne you can come over to Java Embedded @ JavaOne for $100. Come see us in booth 5605. See you there, Dave

    Read the article

  • What's the best way to create animations when doing Android development?

    - by Adam Smith
    I'm trying to create my first Android game and, together with the designer who will do the drawings and another programmer, I'm trying to figure out the best way to create animations (a character moving, etc.). At first, the designer said that she could draw objects/characters and animate them with Flash so she didn't have to draw every single frame of an action. The other programmer and I don't know Flash very well, so I suggested extracting all the images from the Flash animation and displaying them one after the other when the animation starts. He said that would use too many CPU resources, and I tend to agree, but I don't really see how we're supposed to make smooth animations without it being too hard on the hardware and, if possible, without having the designer draw every single frame in Adobe Illustrator. Can an experienced Android game developer help me balance this out so we can move on to other parts of the game? I have no idea what the best way to create animations is.
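    For what it's worth, frame-by-frame playback of pre-extracted images is exactly what Android's AnimationDrawable does, and for a handful of short, small animations it is usually cheap enough before you need a sprite-sheet or OpenGL approach. A minimal sketch is below; it assumes the frames have already been exported from Flash as drawable resources named walk_1 to walk_3 and that the layout contains an ImageView with the id character - those resource names and the layout are placeholders, not taken from the question.

    import android.app.Activity;
    import android.graphics.drawable.AnimationDrawable;
    import android.os.Bundle;
    import android.widget.ImageView;

    public class GameActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main); // placeholder layout containing the ImageView

            // Build a frame-by-frame animation from the images exported out of Flash.
            final AnimationDrawable walk = new AnimationDrawable();
            walk.addFrame(getResources().getDrawable(R.drawable.walk_1), 80); // 80 ms per frame
            walk.addFrame(getResources().getDrawable(R.drawable.walk_2), 80);
            walk.addFrame(getResources().getDrawable(R.drawable.walk_3), 80);
            walk.setOneShot(false); // loop while the character keeps moving

            ImageView character = (ImageView) findViewById(R.id.character);
            character.setImageDrawable(walk);
            // Start after the view is attached, otherwise the animation may not run.
            character.post(new Runnable() {
                public void run() {
                    walk.start();
                }
            });
        }
    }

    For a full game loop you would more likely blit frames from a single sprite sheet onto a SurfaceView or an OpenGL texture, but the trade-off is the same: reuse a small set of frames rather than having the designer draw every frame of every action.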

    Read the article

  • Box2D physics editor for complex bodies

    - by Paul Manta
    Is there any editor out there that would allow me to define complex entities, with joints connecting their multiple bodies, instead of regular single-body entities? For example, an editor that would allow me to 'define' a car as having a main body with two circles as wheels, connected through joints. Clarification: I realize I haven't been clear enough about what I need. I'd like to make my engine data-driven, so all entities (and therefore their Box2D bodies) should be defined externally, not in code. I'm looking for a program like Code 'N' Web's PhysicsEditor, except that it only handles single-body entities, with no joints or anything like that. Like PhysicsEditor, the program should be configurable so that I can save the data in whatever format I want. Does anyone know of any such software?
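    To make the requirement concrete, below is roughly what a joint-aware editor would have to describe for the car example, written by hand against JBox2D (the Java port of Box2D). This is only an illustrative sketch: the class name, dimensions, densities and anchor points are invented, and in a data-driven engine these values would come from the editor's exported file rather than being hard-coded like this.

    import org.jbox2d.collision.shapes.CircleShape;
    import org.jbox2d.collision.shapes.PolygonShape;
    import org.jbox2d.common.Vec2;
    import org.jbox2d.dynamics.Body;
    import org.jbox2d.dynamics.BodyDef;
    import org.jbox2d.dynamics.BodyType;
    import org.jbox2d.dynamics.World;
    import org.jbox2d.dynamics.joints.RevoluteJointDef;

    public class CarFactory {
        // Builds the "car" entity from the question: a chassis plus two wheels on revolute joints.
        public static Body buildCar(World world, float x, float y) {
            // Chassis: a dynamic box body.
            BodyDef chassisDef = new BodyDef();
            chassisDef.type = BodyType.DYNAMIC;
            chassisDef.position.set(x, y);
            Body chassis = world.createBody(chassisDef);
            PolygonShape box = new PolygonShape();
            box.setAsBox(1.5f, 0.4f); // half-width and half-height in metres (made-up values)
            chassis.createFixture(box, 1.0f);

            // Two wheels: dynamic circle bodies, each joined to the chassis by a revolute joint.
            for (float offset : new float[] { -1.0f, 1.0f }) {
                BodyDef wheelDef = new BodyDef();
                wheelDef.type = BodyType.DYNAMIC;
                wheelDef.position.set(x + offset, y - 0.5f);
                Body wheel = world.createBody(wheelDef);
                CircleShape circle = new CircleShape();
                circle.m_radius = 0.4f;
                wheel.createFixture(circle, 1.0f);

                RevoluteJointDef axle = new RevoluteJointDef();
                axle.initialize(chassis, wheel, wheel.getWorldCenter());
                world.createJoint(axle);
            }
            return chassis;
        }
    }

    Whatever tool you end up with only needs to serialise this much: the body shapes and fixtures, plus the joint types and anchors; that is the data to look for in its export format.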

    Read the article

  • Invisible boundary on Synergy client

    - by Jayson Rowe
    I have a desktop system (named 'desktop') with 2 monitors attached (one 1680x1050 and another 1920x1080), running a Synergy server. I also have a small ITX machine (named 'tiny') to my right that has a single monitor running at 1280x1024. The dual-monitor desktop is the Synergy "server" and the single-monitor system is the "client". Synergy works fine with one exception: if I move the mouse to the client, I can't move it below the top two-thirds of the screen - it just stops. If I grab a mouse attached to the client, it moves all the way down, but as soon as I touch the mouse attached to the server, it jumps back to the top of the screen. Is there an issue with my config?

    section: screens
        desktop:
        tiny:
    end

    section: links
        desktop:
            right = tiny
        tiny:
            left = desktop
    end

    Thanks in advance for any suggestions.

    +----------------------------+-----------------------------+
    |                            |                             |
    |                            |                             |
    |                            |                             +-----------------+
    |                            |                             |                 |
    |     desktop 1680x1050      |     desktop 1920x1080       |                 |
    |                            |                             |                 |
    |                            |                             | tiny 1280x1024  |
    |                            |                             |+---------------+|
    |                            |                             |XXXXXXXXXXXXXXXXX|
    +----------------------------+-----------------------------+-----------------+

    Read the article

  • Maximum number of controllers Unity3D can handle

    - by N0xus
    I've been trying to find out the maximum number of Xbox controllers Unity3D can handle in a single instance. I know that through networking Unity can support as many players as your hardware can handle, but I want to avoid networking as much as possible. So, on a single computer and on a single screen (think Bomberman or Super Smash Bros.), how many Xbox controllers can Unity3D support? I have done work in XNA and remember it only being capable of supporting 4, but for the life of me I can't find any information that tells me how many Unity can support.

    Read the article

  • Is it a bad idea to run SELinux and AppArmor at the same time?

    - by jgbelacqua
    My corporate policy says that Linux boxes must be secured with SELinux (so that a security auditor can check the 'yes, we're extremely secure!' checkbox for each server). I had hoped to take advantage of Ubuntu's awesome default AppArmor security. Is it unwise to run both AppArmor and SELinux? (If so, can this bad idea be mitigated with some AppArmor and/or SELinux tweaks?) Update 1/28 -- Kees Cook has pointed out in his answer the dead simple reason why it's a bad idea to run both: the Linux kernel says you can't [1]. [1: More precisely, the Linux Security Modules (LSM) interface framework is designed around a single running implementation and does not support more than one.] Update 1/27 -- I've accepted the answer from kenny.r, though I would be happier with some more technical reasons why this would fail, or examples of actual conflicts it would cause.

    Read the article

  • SQL SERVER – Retrieving Random Rows from Table Using NEWID()

    - by pinaldave
    I have previously written about how to get random rows from SQL Server:
    SQL SERVER – Generate A Single Random Number for Range of Rows of Any Table – Very Interesting Question from Reader
    SQL SERVER – Random Number Generator Script – SQL Query
    However, I have not blogged about the following trick before. Let me share it here as well. You can retrieve random rows using the following methods:

    USE AdventureWorks2012
    GO
    -- Method 1
    SELECT TOP 100 *
    FROM Sales.SalesOrderDetail
    ORDER BY NEWID()
    GO
    -- Method 2
    SELECT TOP 100 *
    FROM Sales.SalesOrderDetail
    ORDER BY CHECKSUM(NEWID())
    GO

    You will notice that using NEWID() in the ORDER BY clause returns random rows in the result set. How many of you knew this trick? You can run the above script multiple times and it will give different random rows every single time. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is conditional return type ever a good idea?

    - by qegal
    So I have a method that's something like this:
    -(BOOL)isSingleValueRecord
    And another method like this:
    -(Type)typeOfSingleValueRecord
    And it occurred to me that I could combine them into something like this:
    -(id)isSingleValueRecord
    And have the implementation be something like this:

    -(id)isSingleValueRecord {
        // If it is a single-value record
        if (self.recordValue == 0) {
            // Do some stuff to determine the type, then return it
            return [self typeOfSingleValueRecord];
        }
        // If it's not a single-value record
        else {
            // Return "NO"
            return [NSNumber numberWithBool:NO];
        }
    }

    So combining the two methods makes it more efficient but makes the readability go down. In my gut, I feel like I should go with the two-method version, but is that really right? Is there any case where I should go with the combined version?
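    One way to see the readability cost is from the caller's side: a method whose return type depends on a runtime condition forces every caller to type-check and cast before it can use the result. A rough Java analogue of the combined version is sketched below; the record and type names are invented for illustration, not taken from the question.

    // Java analogue of the combined method: the return type depends on a runtime condition.
    public class Record {
        private final int recordValue;
        private final String type;

        public Record(int recordValue, String type) {
            this.recordValue = recordValue;
            this.type = type;
        }

        // Combined version: returns either a String (the type) or a Boolean (false).
        public Object isSingleValueRecord() {
            if (recordValue == 0) {
                return type;          // "single value" case: hand back the type
            }
            return Boolean.FALSE;     // otherwise: hand back "no"
        }

        public static void main(String[] args) {
            Record r = new Record(0, "integer");
            Object result = r.isSingleValueRecord();
            // Every caller now has to do this dance, which is the readability cost:
            if (result instanceof String) {
                System.out.println("Single-value record of type " + result);
            } else {
                System.out.println("Not a single-value record");
            }
        }
    }

    With the two-method version those instanceof checks disappear and the compiler can verify the types for you, which usually outweighs the saving of one extra method call.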

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >