Search Results

Search found 4999 results on 200 pages for 'derived instances'.


  • Is there a distributed project management software like Redmine?

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

    - Issue & time tracking, milestones
    - Gantt charts, calendar
    - git integration, maybe some automatic linking of commits and issues
    - Wiki (preferably with MathJax support)
    - Forum, news, notifications
    - Multiple projects

    However, I am looking for a solution that does not require a permanently accessible server; i.e., as in git, each user should have their own copy, which can be easily synchronized with the others. It should also be possible to not have a copy of every project on every machine. Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, git being used anyway), nor does it include all of Redmine's features. So, can you recommend a distributed project management software? If your suggestion is software that usually runs on a server, please include a description of the distribution method (e.g. whether simply putting the data in a git repository would do the trick), and if it's e.g. trac, please mention the plugins required to cover the features mentioned.

    Read the article

  • Troubleshooting EBS Discovery Issues - Part 2

    - by Kenneth E.
    Part 1 of "Troubleshooting EBS Discovery Issues", which was posted on May 17th, focused on the diagnostics associated with the initial discovery of EBS instances (e.g., Forms servers, APPL_TOPs, databases, etc.). Part 2 focuses on verifying parameters of the Change Management features, also called "Pack Diagnostics", specifically for Customization Manager, Patch Manager, Setup Manager, Automated Cloning, and User Monitoring. As stated in the first post, there can be numerous reasons that Discovery fails - credentials, file-level permissions, patch levels - just to name a few.

    The Discovery Wizard can be accessed from the EBS homepage:

    - From the home page, click "Pack Diagnostics"
    - Click "Create" to define the diagnostic process
    - Provide the required inputs: Name, Module (i.e., Customization Manager, Patch Manager, Setup Manager, Automated Cloning, or User Monitoring), Show Details (typically "All"), and Category (typically check both Generic and User Specific). Add the appropriate targets.
    - Once the diagnostic process has completed, view the results. Click on "Succeeded" or "Failed" in the Status column.
    - Expand the entire tree and click on each "Succeeded/Failed" status to see the results of each test within that task.

    (Sample output verifying the o/s user name and required patches, and additional sample output showing a failed test, accompany the original post.) Complete descriptions of, as well as recommended corrective actions for, all of the diagnostic tests that are run in EM 12c are found in this spreadsheet. Additional information can be found in the Application Management Pack User Guide.

    Read the article

  • Why do you hate Java? Is it the language or the framework? [closed]

    - by zneak
    According to you all, Java is the third most-hated language here. The two other most-hated languages are PHP and VBScript. (It's quite funny how they stand together on the podium.) I'd like to make it known that the question mostly addresses people who don't like Java. I assume here a number of subjective opinions as facts because they're usually considered true among people who don't like Java, and I don't want to be convinced otherwise here. If you're a Java enthusiast, you might find this question frustrating.

    It's never been made clear if people hate Java itself, or if they hate it because of the framework, or if it's a mixture of the two. On one side you have the language, where you have:

    - the "everything should be an object" philosophy, even in instances where it should obviously be something else (event handlers, I'm pointing at you);
    - checked exceptions;
    - the idea that all logic should be presented as methods and properties is a big no-no;
    - the fact that "closures" created by anonymous types only include final variables and arguments, but will allow write access to any member of the parent class;
    - a few more.

    On the other side, you have the JDK, with:

    - its load of inconsistencies and overengineering;
    - monolithic class hierarchies;
    - meaningless base exceptions like IOException (though other frameworks have similar exception hierarchies);
    - sluggish responsiveness even with Swing;
    - a few more.

    My question is, do you think that, if either one (Java or the JDK) was taken alone, and the other was dropped in favor of something else, the new combination would be better? For instance, if you could use the C# syntax with the JDK (adapting get*/set* methods into properties, and interfaces with only one method into delegates), or the Java syntax with the .NET Framework (doing the inverse transformations), would things get better in your opinion?

    Read the article

  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

    - Issue & time tracking, milestones
    - Gantt charts, calendar
    - git integration, maybe some automatic linking of commits and issues
    - Wiki (preferably with MathJax support)
    - Forum, news, notifications
    - Multiple projects

    However, I am looking for a solution that does not require a permanently accessible server; i.e., as in git, each user should have their own copy, which can be easily synchronized with the others. It should also be possible to not have a copy of every project on every machine. Since trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, git being used anyway), nor does it include all of Redmine's features.

    After checking http://www.wikimatrix.org for wikis with an integrated tracking system and RCS support, and filtering out seemingly stale projects, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki issue I tend toward the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am however not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as developed. Now, before I invest too much time when someone else might already have tried this, I basically have two questions:

    - Are there crucial features of project management software like Redmine that Foswiki does not provide, even with all the extensions available?
    - How do I set Foswiki up to use git instead of the Perl RcsLite?

    Read the article

  • How are events in games handled?

    - by Alex
    In many games that I have played, I have seen events being triggered, such as when you walk into a certain land area while holding a specific object, it will trigger a special creature to spawn. I was wondering, how do games deal with events such as this? Not in a specific game, but in general among games. The first thought I had was that each place has a hard-coded set of events that it will call when something happens there. However, that would be too inefficient to maintain, as anything new added would require modification of every part of the game that could potentially cause the event to be called. Next up, I had the idea of maybe how GUI programming works. In all of the GUI programming I've done, you create a component and a callback function, or a listener. Then, when the user interacts with the button, the callback function is called, allowing you to do something with it. So, I was thinking that in terms of a game, when a land area gets loaded the game loops over a list of all events, creating instances of them and calling public methods to bind them to the current scene. The events themselves then handle what scene it is, and if it is a scene that pertains to the event, they call the public method of the scene to bind the event to an action. Then, when the action takes place, the scene would call all events that are bound to that action. However, I'm sure that's not how games operate either, as that would require a lot of creating of events all the time. So how do video games handle events? Are either of those methods correct, or is it something completely different?
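    (The listener idea described above is essentially the publish/subscribe, or observer, pattern. Here is a minimal sketch in C# - every name is hypothetical, and real engines layer pooling, event queues and priorities on top of this basic shape:

        using System;
        using System.Collections.Generic;

        // A very small event bus: scenes publish named events, listeners subscribe.
        public class EventBus
        {
            private readonly Dictionary<string, List<Action<object>>> handlers =
                new Dictionary<string, List<Action<object>>>();

            public void Subscribe(string eventName, Action<object> handler)
            {
                List<Action<object>> list;
                if (!handlers.TryGetValue(eventName, out list))
                {
                    list = new List<Action<object>>();
                    handlers[eventName] = list;
                }
                list.Add(handler);
            }

            public void Publish(string eventName, object payload)
            {
                List<Action<object>> list;
                if (handlers.TryGetValue(eventName, out list))
                {
                    // Copy first so a handler may safely unsubscribe during dispatch.
                    foreach (var handler in list.ToArray())
                        handler(payload);
                }
            }
        }

    A trigger area would subscribe a spawn handler when its scene loads and unsubscribe on unload, so no per-area hard-coding is needed; each area only knows the event names it cares about.)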

    Read the article

  • Software center is broken

    - by Colin
    When I started installing the Humble Indie Bundle 5 games the software center stopped working and now I get this error. Packages cannot be installed or removed, click here to repair. Which fails and gives these results. installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 255502 files and directories currently installed.) Unpacking libqtcore4:i386 (from .../libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb (--unpack): conffile './etc/xdg/Trolltech.conf' is not in sync with other instances of the same package No apport report written because MaxReports is reached already Errors were encountered while processing: /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb Error in function: dpkg: dependency problems prevent configuration of libqtgui4:i386: libqtgui4:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqtgui4:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-sql:i386: libqt4-sql:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-sql:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of ia32-libs-multiarch:i386: ia32-libs-multiarch:i386 depends on libqt4-sql; however: Package libqt4-sql:i386 is not configured yet. ia32-libs-multiarch:i386 depends on libqtcore4; however: Package libqtcore4:i386 is not installed. ia32-libs-multiarch:i386 depends on libqtgui4; however: Package libqtgui4:i386 is not configured yet. dpkg: error processing ia32-libs-multiarch:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-declarative:i386: libqt4-declarative:i386 depends on libqt4-sql (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-sql:i386 is not configured yet. libqt4-declarative:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-declarative:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-declarative:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-svg:i386: libqt4-svg:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-svg:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-svg:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-network:i386: libqt4-network:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. 
dpkg: error processing libqt4-network:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-sql-mysql:i386: libqt4-sql-mysql:i386 depends on libqt4-sql (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-sql:i386 is not configured yet. libqt4-sql-mysql:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-sql-mysql:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-script:i386: libqt4-script:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-script:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-dbus:i386: libqt4-dbus:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-dbus:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-opengl:i386: libqt4-opengl:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-opengl:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-opengl:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqtwebkit4:i386: libqtwebkit4:i386 depends on libqt4-network (>= 4:4.8.0~); however: Package libqt4-network:i386 is not configured yet. libqtwebkit4:i386 depends on libqtcore4 (>= 4:4.8.0~); however: Package libqtcore4:i386 is not installed. libqtwebkit4:i386 depends on libqtgui4 (>= 4:4.8.0); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqtwebkit4:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-designer:i386: libqt4-designer:i386 depends on libqt4-script (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-script:i386 is not configured yet. libqt4-designer:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-designer:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-designer:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of lonesurvivor-bin:i386: lonesurvivor-bin:i386 depends on ia32-libs-multiarch; however: Package ia32-libs-multiarch:i386 is not configured yet. dpkg: error processing lonesurvivor-bin:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of lonesurvivor: lonesurvivor depends on lonesurvivor-bin (= 1.11d-0ubuntu5); however: Package lonesurvivor-bin is not installed. Package lonesurvivor-bin:i386 is not configured yet. dpkg: error processing lonesurvivor (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-scripttools:i386: libqt4-scripttools:i386 depends on libqt4-script (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-script:i386 is not configured yet. libqt4-scripttools:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. 
libqt4-scripttools:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-scripttools:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-qt3support:i386: libqt4-qt3support:i386 depends on libqt4-designer (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-designer:i386 is not configured yet. libqt4-qt3support:i386 depends on libqt4-network (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-network:i386 is not configured yet. libqt4-qt3support:i386 depends on libqt4-sql (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-sql:i386 is not configured yet. libqt4-qt3support:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-qt3support:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-qt3support:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-xml:i386: libqt4-xml:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-xml:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-test:i386: libqt4-test:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-test:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-xmlpatterns:i386: libqt4-xmlpatterns:i386 depends on libqt4-network (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-network:i386 is not configured yet. libqt4-xmlpatterns:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-xmlpatterns:i386 (--configure): dependency problems - leaving unconfigured I couldn't get the apt-get update or upgrade to work either so I shut off the repositories and updated / upgraded one at a time without any problems. But that didn't fix the Software Center. Help would be greatly appreciated. ADDED UPDATE I've tried to install aptitude using dpkg but can't. I have also tried sudo apt-get dist-upgrade sudo apt-get autoremove sudo dpkg --configure -a --force-all and the -f options. Though I believe this is where my problem is originating: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libqtcore4:i386 The following NEW packages will be installed: libqtcore4:i386 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/2,061 kB of archives. After this operation, 9,041 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 255526 files and directories currently installed.) Unpacking libqtcore4:i386 (from .../libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb (--unpack): conffile './etc/xdg/Trolltech.conf' is not in sync with other instances of the same package Errors were encountered while processing: /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I hope this helps narrow it down some. 
SECOND UPDATE sudo apt-get --reinstall install software-center -f The following packages have unmet dependencies: ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). sudo apt-get -f install doesn't work either. Complete output of terminal for step 5 can be found here, http://paste.ubuntu.com/1066192/

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly: for non-destructible/non-constructible voxel-style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If a generated mesh per chunk is the way to go, what kind of algorithm might I want to use, given that only the outer layer ever needs rendering?

    I'm using 3D Perlin noise for terrain generation (for overhangs/caves/etc.). The layout is fantastic, but even for around 20k visible faces, it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only having the top faces of my cube-like terrain rendered, but with 20k quad instances it's still pretty sluggish (30fps on my machine).

    My goal is for the world to be made of quite small cubes. Where multiple games (e.g. Minecraft) have the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 in width/length and 9 high. Along with a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads. So, I have been trying to look into changing my method from instancing to mesh generation on a chunk-by-chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it, since I don't know if it's the better route for my situation or not.

    I'm also starting to doubt my need of using 3D Perlin noise for terrain generation, since I won't want the kind of depth it seems best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too. Thanks, Mythics
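    (The standard chunk-meshing answer to the question above is to emit geometry only for faces that border empty space, then optionally merge coplanar faces - "greedy meshing". A minimal sketch of the visibility test in C#, with the XNA vertex/index building elided and all names hypothetical:

        // chunk[x, y, z] == true means the cell is solid.
        static bool IsSolid(bool[,,] chunk, int x, int y, int z)
        {
            if (x < 0 || y < 0 || z < 0 ||
                x >= chunk.GetLength(0) || y >= chunk.GetLength(1) || z >= chunk.GetLength(2))
                return false; // treat out-of-bounds as air, or sample the neighbouring chunk
            return chunk[x, y, z];
        }

        static int CountExposedFaces(bool[,,] chunk)
        {
            int[][] dirs =
            {
                new[] { 1, 0, 0 }, new[] { -1, 0, 0 },
                new[] { 0, 1, 0 }, new[] { 0, -1, 0 },
                new[] { 0, 0, 1 }, new[] { 0, 0, -1 },
            };
            int faces = 0;
            for (int x = 0; x < chunk.GetLength(0); x++)
                for (int y = 0; y < chunk.GetLength(1); y++)
                    for (int z = 0; z < chunk.GetLength(2); z++)
                    {
                        if (!chunk[x, y, z]) continue;
                        foreach (var d in dirs)
                            if (!IsSolid(chunk, x + d[0], y + d[1], z + d[2]))
                                faces++; // here you would append 4 vertices / 6 indices to the chunk's buffers
                    }
            return faces;
        }

    One static vertex buffer per chunk built this way is typically far cheaper to draw than tens of thousands of instanced cubes, since the GPU sees a handful of draw calls over pre-merged geometry.)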

    Read the article

  • Using Juju with a private OpenStack cloud deployment?

    - by user76054
    I'm seeing a number of problems trying to use Juju with our internally deployed OpenStack cloud. Most of this appears to be centered around DNS host resolution, as well as the need to deal with our company's internal HTTP proxies.

    Our OpenStack deployment relies upon an unroutable 172.16.0.0/12 block of addresses for VLAN allocation to each project (tenant) hosted on our internal cloud. Users have the option of assigning one or more floating addresses to instances, allocated from a block of routable addresses on our internal company LAN. Currently, OpenStack doesn't register instance names with anything other than the dnsmasq service running on the cloud controller. As such, there's no way to resolve this address through our internal DNS hierarchy (this issue has already been reported as Bug #945505). So even though I can bootstrap my Juju server node, I can't connect to it with the Juju client, since it can't resolve the local (private) network name. I am able to ssh to the node, once I've assigned it an internally routable (i.e. floating) address. Which leads to the next issue.

    Next, to install software on an instance running in our cloud, it must have our internal proxy address defined - either in the apt.conf file or via environment variables. Unfortunately, when bootstrapping the server node, there's no provision to pass this info into an instance via the Juju environment.yaml file (if this is even the best way to handle this issue). As a result, the bootstrap node is unable to install the required packages.

    I'm assuming (dangerous, I know) that the way I've deployed OpenStack in our internal environment is probably not unique. Has anyone else encountered these issues? And more importantly, are workarounds available? Regards, Ross
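    (For reference, the apt proxy setting mentioned above uses the standard Acquire options; a sketch with a placeholder host, since the real proxy address is site-specific:

        Acquire::http::Proxy "http://proxy.internal.example:3128/";
        Acquire::https::Proxy "http://proxy.internal.example:3128/";

    This would be placed in a file such as /etc/apt/apt.conf.d/95proxies, or exported as the http_proxy/https_proxy environment variables, before apt-get can fetch packages.)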

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.

    Thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The various customer needs that serve as guides to this new trend are often very different, but are almost always united by two main objectives, both with characteristics of innovation and economy:

    - Elasticity of infrastructure, both hardware and software
    - Investments related to the progressive needs of the current infrastructure

    A concrete use case that I worked on recently demanded the fulfillment of two basic requirements of economy and innovation. The client had the need to manage a variety of data caches that can process complex queries and parallel computational operations, maintaining the caches in a consistent state on the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow him to manage the likely load peaks during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid immobilizing investments in owned hardware/software architectures. So, to respond to this need, a solution based on Cloud technologies and architectures already offered by the market was requested.

    Coherence can already address the requirement of large caches spread between different nodes in the cluster, providing further technology for search and parallel computing, with simultaneous use of all hardware infrastructure resources. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, the need to make the infrastructure resilient - possibly based on nodes temporarily housed in Cloud architectures - is satisfied. There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence:

    - Active - Passive
    - Hub and Spoke
    - Active - Active
    - Multi Master
    - Centralized Replication

    Whereas the architecture of this particular project consists of two sites (Site 1 and Site Cloud), between which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive configuration (Hub and Spoke). If, however, the requirement should change over time, it will be particularly easy to change this into an Active-Active configuration.

    Although very simple, the small sample in this post, inspired by the specific project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart, on two different domain contexts, one of them hosted at home (on-premise) and the other one hosted by any cloud provider on the network (or just the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, so that a simple client can insert data only into Site 1. On both sites a listener will be subscribed, listening for changes to specific objects within the various caches.
    To implement these features, you need 4 simple classes:

    - CachedResponse.java: the POJO class that will be inserted into the cache, which fulfills the task of containing useful information about the hypothetical link navigation
    - ResponseSimulatorHelper.java: a link simulator, which has the task of randomly creating objects of type CachedResponse to be added into the caches
    - CacheCommands.java: the model of our example, responsible for receiving instructions from the controller and performing basic operations against the cache, such as inserting, deleting, updating and listening to objects within the cache
    - Shell.java: our controller, which gives the commands to be executed within the caches of the two Sites

    So, in summary, we execute the java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will randomly create new instances of "CachedResponse" objects. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1.

    Now, we create the two configurations for the two respective sites/clusters, Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with "read and write" features, using the cache store class for "push replication", a functionality offered by the "incubator" project of Oracle Coherence. For Site Cloud we likewise define a "distributed" cache type, with the TCP proxy feature enabled, so it can receive updates from Site 1.

    - Coherence Cache Config XML file for the "storage node" on Site 1: site1-prod-cache-config.xml
    - Coherence Cache Config XML file for the "storage node" on Site Cloud: site2-prod-cache-config.xml

    For the two "Shell" clients, which will connect respectively to the two clusters, we have provided two simple access configurations:

    - Coherence Cache Config XML file for the Shell on Site 1: site1-shell-prod-cache-config.xml
    - Coherence Cache Config XML file for the Shell on Site Cloud: site2-shell-prod-cache-config.xml

    Now, we just have to put everything together and run our tests.
    To start at least one "storage" node (which holds the data) for Site Cloud, we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    To start at least one "storage" node (which holds the data) for Site 1, we can again run the standard class provided by Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    Then, we start the first client "Shell" for Site Cloud, launching the java class it.javac.Shell with these parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    Finally, we start the second client "Shell" for Site 1, launching a new instance of the class it.javac.Shell with the following parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    And now, let's execute some tests to validate and better understand our configuration.

    TEST 1
    The purpose of this test is to load objects into the Site 1 cache and see how many objects are cached on Site Cloud. Within the "Shell" launched with parameters to access Site 1, write and run the command:

        load test/100

    Within the "Shell" launched with parameters to access Site Cloud, write and run the command:

        size passive-cache

    Expected result: if all is OK, the first "Shell" has loaded 100 objects into a cache named "test"; consequently, the "push replication" functionality has updated Site Cloud by sending the 100 objects to the second cluster, where they will have been put into a corresponding cache, which we named "passive-cache".

    TEST 2
    The purpose of this test is to listen for delete and add events happening on Site 1 that are replicated within the cache on Site Cloud.
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • Evolution of an Application: how to manage and improve the core engine?

    - by Phil Carter
    The web application I work on has been live for a year now, but it's time for it to evolve, and one of the ways in which it is evolving is into a multi-brand application - in this case, several different companies using the application, with different templates/content and some slight business logic changes between them. The problem I'm facing is implementing a best practice across the site where there are differences in business logic for each brand. These will mostly be very superficial, like using an alternative mailing list provider or capturing some extra data in a form. I don't want to have if(brand === x) { ... } else { ... } all over the site, especially as most of what needs to be changed can be handled by extending the existing class. I've thought of several methods that could be used to instantiate the correct class, but I'm just not sure which is going to be best, especially as some seem to lead to duplication of more code than should be necessary. Here's what I've considered:

    1) Use a static loader similar to Zend_Loader, which takes the class being requested, has knowledge of the brand, and can then return the correct object: $class = App_Loader::getObject('User', $brand);

    2) Factory classes. We use these in the application already for Products, but we could utilise them here also to provide a transparent interface to the class.

    3) Routing the page request to a specific brand controller. This, however, seems like it would duplicate a lot of code/logic.

    Is there a pattern or something else I should be considering to solve this problem? And how do you manage a growing project that has multiple custom instances in production?

    Update: This is a PHP application, so the decisions on which class to load are made per request. There could be upwards of 100+ different 'brands' running.
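    (As an illustration of option 2: the factory usually reduces to a registry keyed on the brand identifier, so the brand check lives in exactly one place. The sketch below is in C# purely for illustration - the question itself is about PHP, and all names are hypothetical - but the shape translates directly:

        using System;
        using System.Collections.Generic;

        // Base behaviour; brand-specific subclasses override only what differs.
        public class MailingListService
        {
            public virtual void Subscribe(string email) { /* default provider */ }
        }

        public class BrandXMailingListService : MailingListService
        {
            public override void Subscribe(string email) { /* alternative provider */ }
        }

        // The factory keeps the brand decision in one place.
        public static class BrandFactory
        {
            private static readonly Dictionary<string, Func<MailingListService>> registry =
                new Dictionary<string, Func<MailingListService>>
                {
                    { "default", () => new MailingListService() },
                    { "brandX",  () => new BrandXMailingListService() },
                };

            public static MailingListService MailingListFor(string brand)
            {
                Func<MailingListService> ctor;
                return registry.TryGetValue(brand, out ctor) ? ctor() : registry["default"]();
            }
        }

    Adding a brand then means one registry entry plus any overriding subclasses, rather than edits at every call site.)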

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The row number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation.

    Properties

    Property         Data Type   Description
    Seed             Int32       The first row number, or seed value.
    Increment        Int32       The value added to the previous row number to make the next row number.
    OutputVariable   String      The name of the variable into which the final row number is written post-execution. (Optional)

    The three properties have been configured to support expressions, or they can be set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of the property is incorrectly set when the properties are created; see the Troubleshooting section below for details on how to fix this.

    Installation

    The component is provided as an MSI file, which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Number Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Row Number Transformation for SQL Server 2005
    - Row Number Transformation for SQL Server 2008
    - Row Number Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)

    SQL Server 2005
    - Version 1.2.0.34 - Updated installer. (25 Jun 2008)
    - Version 1.2.0.7 - SQL Server 2005 RTM refresh. SP1 compatibility testing.
      Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006)
    - Version 1.0.0.0 - Public release for SQL Server 2005 IDW 15 June CTP. (29 Aug 2005)

    Code Sample

    The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package.

        Package package = new Package();
        package.Name = "Data Generator & Row Number";

        // Add the Data Flow Task
        Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = taskExecutable as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add Data Generator Source
        IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "Data Generator";
        componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceSource = componentSource.Instantiate();
        instanceSource.ProvideComponentProperties();
        instanceSource.SetComponentProperty("RowCount", 10000);

        // Add Row Number Tx
        IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New();
        componentRowNumber.Name = "Row Number Transformation";
        componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate();
        instanceRowNumber.ProvideComponentProperties();
        instanceRowNumber.SetComponentProperty("Increment", 10);

        // Connect the two components together
        IDTSPath100 path = dataFlowTask.PathCollection.New();
        path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]);

        #if DEBUG
        // Save package to disk, DEBUG only
        new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

        package.Execute();

        foreach (DtsError error in package.Errors)
        {
            Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
            Console.WriteLine(" SubComponent : {0}", error.SubComponent);
            Console.WriteLine(" Description : {0}", error.Description);
        }

        package.Dispose();

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012.

    If you get an error when you try and use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.
    Once installation is complete, you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Please also make sure you have installed a minimum of SP1 for SQL Server 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (see http://support.microsoft.com/kb/916940). If you get an error Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'. when trying to open the user interface, it implies that your development machine has not had SP1 applied.

    Very occasionally we get a problem to do with the properties not being created with the correct data type. Since there is no way to programmatically define the data type of a pipeline component property, SSIS can only infer it. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it as a decimal. This is often highlighted when you use a property expression against the property and get an error similar to Cannot convert System.Int32 to System.Decimal. Unfortunately this is beyond our control, and there appears to be no pattern as to when this happens. If you do have more information, we would be happy to hear it.

    To fix this issue you can manually edit the package file. In Visual Studio, right-click the package file in the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name, or by the component name. You can then change the incorrect property data types highlighted below from Decimal to Int32.

        <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" … >
            <properties>
                <property id="38" name="UserComponentTypeName" …>
                <property id="41" name="Seed" dataType="System.Int32" ...>10</property>
                <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>
                ...

    If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • How to optimise AndEngine's PathModifier (with singleton or pool)?

    - by Casla
    I am trying to build a game where the character finds and follows a new path when a new destination is issued by the player, kinda like how units in RTS games work. This is done on a TMX map, and I am using the A* path finder utilities in AndEngine to do this. David helped me on that: How can I change the path a sprite is following in real time?

    At the moment, every time a new path is issued, I have to abandon the existing PathModifier and Path instances and create new ones, and from what I have read so far, creating new objects when you could re-use existing ones is a big waste for mobile applications. This is how I have coded it at the moment:

        private void loadPathFound() {
            if (mAStarPath != null) {
                modifierPath = new org.andengine.entity.modifier.PathModifier.Path(mAStarPath.getLength());
                /* replace the first node in the path with the player's current position */
                modifierPath.to(player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_X]-12,
                        player.convertLocalToSceneCoordinates(12, 31)[Constants.VERTEX_INDEX_Y]-31);
                for (int i=1; i<mAStarPath.getLength(); i++) {
                    modifierPath.to(mAStarPath.getX(i)*TILE_WIDTH, mAStarPath.getY(i)*TILE_HEIGHT);
                    /* passing in a duration dependent on the length of the path, so that the animation has a constant duration for every step */
                    player.registerEntityModifier(new PathModifier(modifierPath.getLength()/100, modifierPath, null, mIPathModifierListener));
                }
            }
        }

    The ideal implementation would be to always have just one PathModifier object and just reset the destination of the path. But I don't know how you can apply the singleton pattern to AndEngine's PathModifier; there is no method to reset the attributes of the Path or the PathModifier. So, without rewriting the PathModifier and the Path class, or using reflection, is there any other way to implement a singleton PathModifier? Thanks for your help.
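    (On the pooling side of the question: the usual shape is a pool whose obtain method reuses a recycled instance instead of allocating, which only works for types that expose some way to reset their state. A minimal generic sketch, written in C# for illustration with hypothetical names - AndEngine itself is Java:

        using System;
        using System.Collections.Generic;

        // A tiny object pool: Obtain() reuses a recycled instance when one exists.
        public class ObjectPool<T>
        {
            private readonly Stack<T> items = new Stack<T>();
            private readonly Func<T> create;
            private readonly Action<T> reset;

            public ObjectPool(Func<T> create, Action<T> reset)
            {
                this.create = create;
                this.reset = reset;
            }

            public T Obtain()
            {
                return items.Count > 0 ? items.Pop() : create();
            }

            public void Recycle(T item)
            {
                reset(item); // clear per-use state before the next Obtain()
                items.Push(item);
            }
        }

    This is exactly the catch the question runs into: if Path and PathModifier offer no reset hook, a pool cannot rehydrate them without modifying those classes.)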

    Read the article

  • Am I missing a pattern?

    - by Ryan Pedersen
    I have a class that is a singleton, and off of the singleton are properties that hold the instances of all the performance counters in my application. public interface IPerformanceCounters { IPerformanceCounter AccountServiceCallRate { get; } IPerformanceCounter AccountServiceCallDuration { get; } Above is an incomplete snippet of the interface for the class "PerformanceCounters" that is the singleton. I really don't like the plural part of the name, and thought about changing it to "PerformanceCounterCollection", but stopped because it really isn't a collection. I also thought about "PerformanceCounterFactory", but it isn't really a factory either. After failing with these two names and a couple more that aren't worth mentioning, I thought that I might be missing a pattern. Is there a name that makes sense, or a change that I could make towards a standardized pattern, that would help me put some polish on this object and get rid of the plural name? I understand that I might be splitting hairs here, but that is why I thought that the "Programmers" exchange was the place for this kind of thing. If it is not... I am sorry and I will not make that mistake again. Thanks!
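    (One hedged reading: an object whose job is to be a well-known access point that hands back shared instances matches what Fowler's Patterns of Enterprise Application Architecture calls a Registry, which would suggest a name like PerformanceCounterRegistry. A minimal sketch of that interpretation, with hypothetical member names:

        // Reading the singleton as a Registry: a well-known object that other
        // code uses to find shared services - here, performance counters.
        public interface IPerformanceCounter
        {
            void Increment();
        }

        public sealed class PerformanceCounterRegistry
        {
            private static readonly PerformanceCounterRegistry instance =
                new PerformanceCounterRegistry();

            public static PerformanceCounterRegistry Instance
            {
                get { return instance; }
            }

            private PerformanceCounterRegistry()
            {
                // Wire up concrete counters here (implementations elided).
                AccountServiceCallRate = new NoOpPerformanceCounter();
                AccountServiceCallDuration = new NoOpPerformanceCounter();
            }

            public IPerformanceCounter AccountServiceCallRate { get; private set; }
            public IPerformanceCounter AccountServiceCallDuration { get; private set; }
        }

        // Placeholder implementation so the sketch compiles.
        internal sealed class NoOpPerformanceCounter : IPerformanceCounter
        {
            public void Increment() { }
        }

    Whether that name fits depends on whether the object merely exposes the counters or also owns their lifecycle.)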

    Read the article

  • Why does my browser take me to Scour.com? (redirect virus)

    - by Paula DiTallo
    The "scour" or Rootkit.Win32.TDSS virus has a long history which can be found here: http://en.wikipedia.org/wiki/Scour Here is the primary symptom: after searching for something in your web browser using google, one of the results that you click on redirects you to scour.com. If you've executed ClamWin, Malwarebytes, McAfee, Norton, etc. to find and isolate the virus without any luck--this isn't really a surprise, since this virus attaches to existing system drivers. I only know of one reliable package that will remove this without ill effects--like adding new spyware. This package is called TDSSKiller. I have seen multiple websites that claim to have this software available, but the one that I know is reliable is located here: http://support.kaspersky.com/viruses/solutions?qid=208280684 Once you go to Kaspersky's tech support site, the TDSSKiller zip file is available for downloading. When you execute this software, you will be able to "cure" or repair the infected driver. Remember to jot down the name of the driver for future reference--should you need to reinstall the driver from a "same-as" working computer, or your install disk if the repair is ineffective. The driver that happened to get infected on my computer was the tcpip.sys driver. This caused my win sockets to loose their ip addresses. In most other instances, less critical drivers such as HDAudBus.sys are infected. In my case, I was not through correcting my computer problems until I corrected the broken WinSock issue and loaded an earlier version of the tcpip.sys driver from: C:\WINDOWS\ServicePackFiles\i386 which I placed in: C:\WINDOWS\system32\drivers Don't forget to reboot your computer after your repair! Once you download TDSSKiller and cure/repair your infected driver(s), the redirect on google searches should disappear .

    Read the article

  • C#/.NET Little Wonders: Use Cast() and OfType() to Change Sequence Type

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    We've seen how the Select() extension method lets you project a sequence from one type to a new type, which is handy for getting just parts of items, or building new items. But what happens when the items in the sequence are already the type you want, but the sequence itself is typed to an interface or super-type instead of the sub-type you need? For example, you may have a sequence of Rectangle stored in an IEnumerable<Shape> and want to consider it an IEnumerable<Rectangle> sequence instead. Today we'll look at two handy extension methods, Cast<TResult>() and OfType<TResult>(), which help you with this task.

    Cast<TResult>() - Attempt to cast all items to type TResult

    So, the first thing we can do would be to attempt to create a sequence of TResult from every item in the source sequence. Typically we'd do this if we had an IEnumerable<T> where we knew that every item was actually a TResult, where TResult inherits/implements T. For example, assume the typical Shape example classes:

        // abstract base class
        public abstract class Shape { }

        // a basic rectangle
        public class Rectangle : Shape
        {
            public int Width { get; set; }
            public int Height { get; set; }
        }

    And let's assume we have a sequence of Shape where every Shape is a Rectangle:

        var shapes = new List<Shape>
        {
            new Rectangle { Width = 3, Height = 5 },
            new Rectangle { Width = 10, Height = 13 },
            // ...
        };

    To get the sequence of Shape as a sequence of Rectangle, of course, we could use a Select() clause, such as:

        // select each Shape, cast it to Rectangle
        var rectangles = shapes
            .Select(s => (Rectangle)s)
            .ToList();

    But that's a bit verbose, and fortunately there is already a facility built in and ready to use in the form of the Cast<TResult>() extension method:

        // cast each item to Rectangle and store in a List<Rectangle>
        var rectangles = shapes
            .Cast<Rectangle>()
            .ToList();

    However, we should note that if anything in the list cannot be cast to a Rectangle, you will get an InvalidCastException thrown at runtime. Thus, if our Shape sequence had a Circle in it, the call to Cast<Rectangle>() would have failed. As such, you should only do this when you are reasonably sure of what the sequence actually contains (or are willing to handle an exception if you're wrong).

    Another handy use of Cast<TResult>() is using it to convert an IEnumerable to an IEnumerable<T>. If you look at the signature, you'll see that the Cast<TResult>() extension method actually extends the older, object-based IEnumerable interface instead of the newer, generic IEnumerable<T>. This is your gateway method for being able to use LINQ on older, non-generic sequences. For example, consider the following:

        // the older, non-generic collections are a sequence of object
        var shapes = new ArrayList
        {
            new Rectangle { Width = 3, Height = 13 },
            new Rectangle { Width = 10, Height = 20 },
            // ...
        };

    Since this is an older, object-based collection, we cannot use the LINQ extension methods on it directly.
For example, if I wanted to query the Shape sequence for only those Rectangles whose Width is > 5, I can’t do this: var bigRectangles = shapes.Where(r => r.Width > 5); (compiler error: Where() operates on IEnumerable<T>, not IEnumerable). However, I can use Cast<Rectangle>() to treat my ArrayList as an IEnumerable<Rectangle> and then do the query! var bigRectangles = shapes.Cast<Rectangle>().Where(r => r.Width > 5); Or, if you prefer, in LINQ query expression syntax: var bigRectangles = from s in shapes.Cast<Rectangle>() where s.Width > 5 select s; One quick warning: Cast<TResult>() only attempts to cast; it won’t perform a cast conversion.  That is, consider this: var intList = new List<int> { 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 }; var asLong = intList.Cast<long>().ToList(); (casting ints to longs, this should work, right?). Will the code above work?  No, you’ll get an InvalidCastException. Remember that Cast<TResult>() is an extension of IEnumerable, thus it is a sequence of object, which means that it will box every int as an object as it enumerates over it, and there is no cast conversion from object to long, and thus the cast fails.  In other words, a cast from int to long will succeed because there is a conversion from int to long.  But a cast from int to object to long will not, because you can only unbox an item by casting it to its exact type. For more information on why cast-converting boxed values doesn’t work, see this post on The Dangers of Casting Boxed Values (here). OfType<TResult>() – Filter sequence to only items of type TResult. So, we’ve seen how we can use Cast<TResult>() to change the type of our sequence, when we expect all the items of the sequence to be of a specific type.  But what do we do when a sequence contains many different types, and we are only concerned with a subset of a given type? For example, what if a sequence of Shape contains Rectangle and Circle instances, and we just want to select all of the Rectangle instances?  Well, let’s say we had this sequence of Shape: var shapes = new List<Shape> { new Rectangle { Width = 3, Height = 5 }, new Rectangle { Width = 10, Height = 13 }, new Circle { Radius = 10 }, new Square { Side = 13 }, /* ... */ }; Well, we could get the rectangles using Where(), like: var onlyRectangles = shapes.Where(s => s is Rectangle).ToList(); But fortunately, an easier way has already been written for us in the form of the OfType<T>() extension method, which returns only a sequence of the shapes that are Rectangles: var onlyRectangles = shapes.OfType<Rectangle>().ToList(); Now that we have a sequence of only the Rectangles in the original sequence, we can also use this to chain other queries that depend on Rectangles, such as selecting only Rectangles and then filtering to only those more than 5 units wide: var onlyBigRectangles = shapes.OfType<Rectangle>().Where(r => r.Width > 5).ToList(); The OfType<Rectangle>() will filter the sequence to only the items that are of type Rectangle (or a subclass of it), and that results in an IEnumerable<Rectangle>, so we can then apply the other LINQ extension methods to query that list further. Just as Cast<TResult>() is an extension method on IEnumerable (and not IEnumerable<T>), the same is true for OfType<T>().  This means that you can use OfType<TResult>() on object-based collections as well.
For example, given an ArrayList containing Shapes, as below: var shapes = new ArrayList { new Rectangle { Width = 3, Height = 5 }, new Rectangle { Width = 10, Height = 13 }, new Circle { Radius = 10 }, new Square { Side = 13 }, /* ... */ }; We can use OfType<Rectangle>() to filter the sequence to only Rectangle items (and subclasses), and then chain other LINQ expressions, since we will then have an IEnumerable<Rectangle>: var onlyBigRectangles = shapes.OfType<Rectangle>().Where(r => r.Width > 5).ToList(); Summary: So now we’ve seen two different ways to get a sequence of a superclass or interface down to a more specific sequence of a subclass or implementation.  The Cast<TResult>() method casts every item in the source sequence to type TResult, and the OfType<TResult>() method selects only those items in the source sequence that are of type TResult. You can use these to downcast sequences, or to adapt older types and sequences that only implement IEnumerable (such as DataTable, ArrayList, etc.). Technorati Tags: C#, CSharp, .NET, LINQ, Little Wonders, OfType, Cast, IEnumerable<T>
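    For reference, here is a compact, self-contained version of the examples above (a sketch reconstructed from the post; Square is omitted for brevity):

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;

        public abstract class Shape { }
        public class Rectangle : Shape { public int Width { get; set; } public int Height { get; set; } }
        public class Circle : Shape { public double Radius { get; set; } }

        public static class Demo
        {
            public static void Main()
            {
                // Cast<T>(): safe only when every item really is a Rectangle.
                var shapes = new List<Shape>
                {
                    new Rectangle { Width = 3, Height = 5 },
                    new Rectangle { Width = 10, Height = 13 },
                };
                List<Rectangle> rectangles = shapes.Cast<Rectangle>().ToList();
                Console.WriteLine(rectangles.Count); // prints: 2

                // OfType<T>(): mixed, non-generic sequence; keep only the Rectangles.
                var mixed = new ArrayList
                {
                    new Rectangle { Width = 3, Height = 5 },
                    new Circle { Radius = 10 },
                    new Rectangle { Width = 10, Height = 13 },
                };
                foreach (var r in mixed.OfType<Rectangle>().Where(s => s.Width > 5))
                {
                    Console.WriteLine(r.Width + " x " + r.Height); // prints: 10 x 13
                }
            }
        }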

    Read the article

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location url for a new resource: public HttpResponseMessage Post(User model) { var response = Request.CreateResponse(HttpStatusCode.Created, model); var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" }); response.Headers.Location = new Uri(link); return response; } That code uses a previously defined route “DefaultApi”, which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don’t initialize the HttpConfiguration and Request instances associated with the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration, and a few properties you need to add to the HttpRequestMessage. The following code illustrates what’s needed: var controller = new UserController(); controller.Configuration = new HttpConfiguration(); var route = controller.Configuration.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } } ); controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/"); controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration); controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData); The HttpRouteData instance should be initialized with the route values you will use in the controller method (“id” and “controller” in this example). Once you have correctly set up all those properties, you shouldn’t have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!
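    Putting it all together, here is a minimal sketch of a complete test built on the setup above (the xUnit attribute and assertion, the User type's Id property, and the exact expected URL are assumptions; adapt them to your project):

        using System.Net.Http;
        using System.Web.Http;
        using System.Web.Http.Hosting;
        using System.Web.Http.Routing;
        using Xunit;

        public class UserControllerTests
        {
            [Fact]
            public void Post_SetsLocationHeaderForCreatedUser()
            {
                // Arrange: wire up the routing state that UrlHelper depends on.
                var controller = new UserController();
                controller.Configuration = new HttpConfiguration();
                var route = controller.Configuration.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional });
                var routeData = new HttpRouteData(route,
                    new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } });

                controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
                controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
                controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

                // Act (User and its Id property are assumed from the Post method above).
                var response = controller.Post(new User { Id = 1 });

                // Assert: the link should follow the api/{controller}/{id} route template.
                Assert.Equal("http://localhost:9091/api/Users/1", response.Headers.Location.ToString());
            }
        }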

    Read the article

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that vested interest in API design I have been keenly interested in how I can improve it. One aspect that has come up a couple of times (not from users of my API, but in discussions about the topic that I have observed) is that one should know just by looking at the code calling the API what it is doing. For example, see this discussion on GitHub for the discourse repo; it goes something like: foo.update_pinned(true, true); Just by looking at the code (without knowing the parameter names, documentation, etc.) one cannot guess what it is going to do: what does the 2nd argument mean? The suggested improvement is to have something like: foo.pin() foo.unpin() foo.pin_globally() And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However, I believe there can be instances where methods to set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what it is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out, which personally I would always do anyway if I am unfamiliar with an API.) For example, I expose one method, SetVisibility(bool, string, bool), on a FalconPeer, and I acknowledge that just looking at the line: falconPeer.SetVisibility(true, "aerw3", true); you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting to set the others; by exposing only the one method that sets all aspects of "visibility", I force them to think about each one. Furthermore, when the user wants to change one aspect they almost always want to change another aspect too, and can now do so in one call.
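    To illustrate the trade-off concretely, here is a hypothetical C# sketch (the parameter names are my guesses at the three "visibility" aspects; the real FalconPeer API may differ). Named arguments keep the single-method design while making each aspect readable at the call site:

        using System;

        public class FalconPeer
        {
            public bool AcceptJoinRequests { get; private set; }
            public string JoinPassword { get; private set; }
            public bool ReplyToDiscoveryRequests { get; private set; }

            // One method forces callers to consider every aspect of visibility at once.
            public void SetVisibility(bool acceptJoinRequests, string joinPassword, bool replyToDiscoveryRequests)
            {
                AcceptJoinRequests = acceptJoinRequests;
                JoinPassword = joinPassword;
                ReplyToDiscoveryRequests = replyToDiscoveryRequests;
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var falconPeer = new FalconPeer();

                // Opaque at the call site:
                falconPeer.SetVisibility(true, "aerw3", true);

                // Self-describing, yet still one call that covers all three aspects:
                falconPeer.SetVisibility(
                    acceptJoinRequests: true,
                    joinPassword: "aerw3",
                    replyToDiscoveryRequests: true);

                Console.WriteLine(falconPeer.AcceptJoinRequests);
            }
        }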

    Read the article

  • Test a simple multi-player (up to four players) Android game on a single developer machine

    - by Kush
    I'm working on a multi-player Android game (it is so simple that it doesn't use any game engine). The game is based on Java sockets. Four devices connect to the game server, and a new thread manages each session. The game server will serve many such sessions (having 4 players each). What I'm worried about is the testing of this game. I know it is possible to run multiple Android emulators, but my development laptop is very limited in capabilities (3 GB RAM, 2 GHz Intel Core2Duo and on-board graphics). And I'm already using Ubuntu to develop the game so that I have more user memory available than I'd have with Windows. Hence, the laptop will burn to death on running 4 emulator instances. I don't have access to any Android device, nor do I have another machine with a higher configuration. And I still have to develop and test this game. P.S.: I'm a CS student and currently don't work anywhere, and this game is a college project, so if there are any paid solutions, I cannot afford them. What can I do to test the app seamlessly? The ability to test with even only 4 clients (i.e. only 1 session) would suffice; it's alright if I can't simulate a real environment with some 10-20 active game sessions (having 4 players each).

    Read the article

  • Java web app, with plugin framework and ability to connect to source for updates

    - by lessthancommon
    I've searched all around for some good sources, but either I have been searching for the wrong keywords or I'm just missing something. I'm looking to redevelop a web app I've been using for some time now. Many parts are out of date, and we're constantly throwing in little hacks to attempt to give it new life. So what I'd like to do is re-engineer it from the ground up, built on some sort of plug-in framework. Before I continue: I'm more or less an intermediate Java programmer, and in some ways I'm hoping to use this project as a big learning experience. I've read a lot about OSGi, and it seems to be the most complete framework. Ideally, the end result would be a web app of which I can run one instance as my hosting environment, while other instances connect to it to grab new and updated plug-ins. Eventually I'll want to lock down these plug-ins based on some undecided criteria of who can get them (basically some will simply be updates, while others will provide new functionality and should be "purchased" through an external system). But that will probably be handled in a later phase. There should be an administration view for managing bundles in a hot environment (I'm looking to avoid having to restart the server for an update). I know all these things are possible; I'm just trying to find some good resources for reference. All the OSGi tutorials I'm finding seem to be too simplistic. If anyone here can guide me in the right direction on any or all of the items I'm looking for, it would be much appreciated. Also, this is my first post, so I'll take any comments/criticisms about the content of my post. Thanks!

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need to use several different Oracle instances for their applications?

    Read the article

  • Lenovo ThinkPad L520 slows down when AC power adapter is plugged in

    - by Aamir
    I have a new laptop, a Lenovo ThinkPad L520 (7859-5BG), Core i5-2520M (2.5 GHz) with 4 GB RAM. Having installed Ubuntu 11.10 32-bit, while browsing with Chrome on GNOME Classic (no effects) I noticed 173% CPU usage by the Chrome browser process, and the system gradually became very, very slow. At this stage, as I removed the power adapter, the system suddenly got faster (and the lagging behavior stopped) and CPU usage dropped to 48%! Observation 1: I was browsing with Chrome when my system seemed to be seriously lagging, so I killed Chrome to see if it would get any faster, but there was no difference. Notice that CPU usage was a bit strange here: it showed no high activity, but as soon as I clicked on applications in the GNOME panel, it would shoot CPU usage to 70, 80, 90, or 143%, etc., depending on how quickly I clicked back and forth. At this instant I removed the AC adapter of my laptop, and suddenly the system was fine. So I clicked on the GNOME panel again, and noticed that the same kind of clicks in the application menu now took only 7%, 12%, or 13% at most. Observation 2: At other times, with the AC adapter plugged in, top indicates four instances of Chromium taking 90%, 60%, 47% and 2% (for example), and then once I take out the AC adapter the same processes suddenly take less CPU. Intermediate conclusions: What does this indicate? I cannot figure out any "other" process in top that is suddenly being triggered; it's the same process that hogs my CPU once AC power is plugged in! NOTE: the problem is now CONFIRMED, as I can reproduce it whenever the power adapter is plugged in! Can anyone tell me what exactly this indicates? What is wrong? Is it some bug with power management, or what?

    Read the article

  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some Background: Our company is growing very quickly; in 3 years we've tripled in size, and there are no signs of stopping any time soon. Our marketing department has expanded, and our IT requirements have as well. When I first arrived, everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized. The Problem at Hand: Management would like to track, per developer, who is generating the most issues at the QA stage (post unit testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per URL or per "page"), yet that's how management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue. [UPDATED] The Actual Questions: How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of individuals and rather focus on the team? Some other questions: Is this too granular a level of reporting? Is this considered 'blame culture'? If you were the developer working in this environment, what would you define as a measurable goal for this year to track your progress, with a bonus as the reward for achieving it?

    Read the article

  • Economical DNS hosting separate from local registrar for country-specific TLDs - email or web hosting not required

    - by Eric Nguyen
    Our company owns many country-specific top-level domains (ccTLDs; e.g. .sg, .my). We will purchase more for other countries across South East Asia. These domains are associated with our websites hosted on Amazon EC2. The DNS records are currently hosted on a dedicated server that will shut down tomorrow. (The name servers are set to those of a web hosting company.) Therefore, I will need to host the DNS records somewhere else. Hosting the DNS records with the local registrar costs SGD18 a year per domain, in addition to the domain price (which is already very expensive, but we have no choice). It would be convenient to host the DNS records for all the country-specific TLDs we have using a single service, separate from the local registrars from which we bought the domains. A few searches turned up examples like Amazon Route 53, dnsmadeeasy.com, and the like. However, since I'm only concerned about the country-specific TLDs, not .com: 1) Is it really economical to host DNS records of all domains in 1 single place as described above? (Have the relevant countries and/or the local registrars done something to keep their monopoly and always charge ridiculous prices for their country-specific TLDs?) 2) I would imagine I will need to tell the local registrars to update the name servers to those of the DNS hosting service provider, e.g. dnsmadeeasy.com here. Am I correct about how it works here? 3) Will I be able to point the domains themselves to IP addresses I desire (the EC2 instances where my websites are), or will I only be able to do so with subdomains? 4) Are there any drawbacks that I should know about here? Background about our needs: We need the websites associated with the country TLDs to be up and running at all times. Also, we'll need to be able to add/edit A and CNAME records. We use Google Apps for Business for internal email, so I will need to be able to add/edit MX records and TXT records.

    Read the article

  • Firefox dependencies problem on 12.10

    - by theshu
    I did a fresh install of Ubuntu 12.10 today and am now getting the following error: You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: firefox-globalmenu : Depends: firefox (= 17.0.1+build1-0ubuntu0.12.10.1) but 16.0.1+build1-0ubuntu1 is installed E: Unmet dependencies. Try using -f. When I run sudo apt-get -f install, I get the following after I type "y": (Reading database ... 179765 files and directories currently installed.) Preparing to replace firefox 16.0.1+build1-0ubuntu1 (using .../firefox_17.0.1+build1-0ubuntu0.12.10.1_i386.deb) ... Unpacking replacement firefox ... dpkg-deb (subprocess): decompressing archive member: internal gzip read error: '<fd:4>: incorrect data check' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/firefox_17.0.1+build1-0ubuntu0.12.10.1_i386.deb (--unpack): cannot copy extracted data for './etc/apparmor.d/usr.bin.firefox' to '/etc/apparmor.d/usr.bin.firefox.dpkg-new': unexpected end of file or stream Please restart all running instances of firefox, or you will experience problems. Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for man-db ... Errors were encountered while processing: /var/cache/apt/archives/firefox_17.0.1+build1-0ubuntu0.12.10.1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) This error is preventing me from installing the Synaptic package manager, and is also preventing me from installing anything with the Software Center or Update Manager. I tried uninstalling Firefox and also upgrading Firefox, to no avail. Open to suggestions.

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have a configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the configuration of AlgoB, so technically I could have (and in practice I have) separate MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the available solutions are: 1) I pass the MainConfig to AlgoA and AlgoB. This way, AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained. 2) I copy the MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the printLevel information repeated three times. 3) I create a third configuration class, PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass to AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig (see the sketch below). Have you ever encountered this situation? How did you solve it? Which option is stylistically clearest to a new reader of the code?
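    As a minimal C# sketch of the third option (the names besides MainConfig, PrintingConfig, AlgoAConfig and printLevel are hypothetical), each algorithm receives only its own config plus the shared printing config:

        using System;

        public class PrintingConfig { public int PrintLevel { get; set; } }
        public class AlgoAConfig { public int Iterations { get; set; } }   // hypothetical parameter
        public class AlgoBConfig { public double Tolerance { get; set; } } // hypothetical parameter

        public class MainConfig
        {
            public PrintingConfig Printing { get; } = new PrintingConfig();
            public AlgoAConfig AlgoA { get; } = new AlgoAConfig();
            public AlgoBConfig AlgoB { get; } = new AlgoBConfig();
        }

        public class AlgoA
        {
            private readonly AlgoAConfig config;
            private readonly PrintingConfig printing;

            // AlgoA never sees AlgoB's configuration, only its own plus the shared part.
            public AlgoA(AlgoAConfig config, PrintingConfig printing)
            {
                this.config = config;
                this.printing = printing;
            }

            public void Run()
            {
                if (printing.PrintLevel > 0)
                    Console.WriteLine("AlgoA running " + config.Iterations + " iterations");
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var main = new MainConfig();
                main.Printing.PrintLevel = 1;
                main.AlgoA.Iterations = 10;
                new AlgoA(main.AlgoA, main.Printing).Run();
            }
        }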

    Read the article

< Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >