Search Results

Search found 5648 results on 226 pages for 'collection'.

Page 82/226

  • Securing WebSocket applications on Glassfish

    - by Pavel Bucek
    Today we are going to cover deploying secured WebSocket applications on Glassfish and accessing these services using the WebSocket Client API.

    WebSocket server application setup

    Our server endpoint might look as simple as this:

        @ServerEndpoint("/echo")
        public class EchoEndpoint {

            @OnMessage
            public String echo(String message) {
                return message + " (from your server)";
            }
        }

    Everything else must be configured at the container level. We can start by enabling SSL, which will require a web.xml to be added to your project. For starters, it might look as follows:

        <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee">
            <security-constraint>
                <web-resource-collection>
                    <web-resource-name>Protected resource</web-resource-name>
                    <url-pattern>/*</url-pattern>
                    <http-method>GET</http-method>
                </web-resource-collection>
                <!-- https -->
                <user-data-constraint>
                    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
                </user-data-constraint>
            </security-constraint>
        </web-app>

    This is a minimal web.xml for this task - web-resource-collection just defines the URL pattern and HTTP method(s) we want to put a constraint on, and user-data-constraint defines that constraint, which in our case is transport-guarantee. More information about these properties and security settings for web applications can be found in the Oracle Java EE 7 Tutorial. I have a simple webpage attached as well, so I can test my endpoint right away. You can find it (along with the complete project) in the Tyrus workspace: [webpage] [whole project]. After deploying this application to Glassfish Application Server, you should be able to hit it using your favorite browser. The URL where my application resides is https://localhost:8181/sample-echo-https/ (it may be different, depending on other configuration). My browser warns me about an untrusted certificate (I use what a freshly built Glassfish provides - self-signed certificates) and after adding an exception for this site, I can see my webpage and I am able to securely connect to wss://localhost:8181/sample-echo-https/echo.

    WebSocket client

    The already mentioned demo application also contains a test client, but its execution is skipped for a normal build. The reason for this is that Glassfish uses these self-signed "random" untrusted certificates and you are (in most cases) not able to connect to these services without additional settings. Creating a test WebSocket client is actually quite similar to the server side; the only difference is that you have to create a client container somewhere and invoke connect with some additional info. The Java API for WebSocket allows you to use the annotated or the programmatic way to construct endpoints. The server side shows the annotated case, so let's see how the programmatic approach will look.

        final WebSocketContainer client = ContainerProvider.getWebSocketContainer();
        client.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig endpointConfig) {
                try {
                    // register message handler - will just print out the
                    // received message on standard output.
                    session.addMessageHandler(new MessageHandler.Whole<String>() {
                        @Override
                        public void onMessage(String message) {
                            System.out.println("### Received: " + message);
                        }
                    });
                    // send a message
                    session.getBasicRemote().sendText("Do or do not, there is no try.");
                } catch (IOException e) {
                    // do nothing
                }
            }
        }, ClientEndpointConfig.Builder.create().build(),
           URI.create("wss://localhost:8181/sample-echo-https/echo"));

    This client should work with a secured endpoint that has a valid certificate signed by some trusted certificate authority (you can try that with wss://echo.websocket.org). Accessing our Glassfish instance will require some additional settings. You can tell Java which certificates you trust by adding the -Djavax.net.ssl.trustStore property (and a few others in case you are using the linked sample). The complete command line when you are testing your service might need to look somewhat like:

        mvn clean test -Djavax.net.ssl.trustStore=$AS_MAIN/domains/domain1/config/cacerts.jks \
            -Djavax.net.ssl.trustStorePassword=changeit -Dtyrus.test.host=localhost \
            -DskipTests=false

    where AS_MAIN points to your Glassfish instance. Note: you might need to set up the keyStore and trustStore per client instead of per JVM; there is a way to do it, but it is a Tyrus proprietary feature: http://tyrus.java.net/documentation/1.2.1/user-guide.html#d0e1128. And that's it! Now nobody is able to "hear" what you are sending to or receiving from your WebSocket endpoint. There is always room for improvement, so the next step you might want to take is to introduce some authentication mechanism (like HTTP Basic or Digest). This topic is more about container configuration so I'm not going to go into details, but there is one thing worth mentioning: to access services which require authorization, you might need to put this additional information into the HTTP headers of the first (Upgrade) request (there is not (yet) any direct support even for these fundamental mechanisms; users need to register a Configurator and add headers in the beforeRequest method invocation). I filed a related feature request as TYRUS-228; feel free to comment/vote if you need this functionality.
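
    For contrast with the programmatic client above, here is a minimal sketch of what the same client could look like using the annotated API. The class name and the one-second wait are illustrative additions, not part of the original sample; the wss:// URL and the trust store settings are the ones discussed above.

        import java.io.IOException;
        import java.net.URI;
        import javax.websocket.ClientEndpoint;
        import javax.websocket.ContainerProvider;
        import javax.websocket.OnMessage;
        import javax.websocket.OnOpen;
        import javax.websocket.Session;
        import javax.websocket.WebSocketContainer;

        // Hypothetical annotated equivalent of the programmatic client above.
        @ClientEndpoint
        public class EchoClientEndpoint {

            @OnOpen
            public void onOpen(Session session) throws IOException {
                // send a message as soon as the connection is established
                session.getBasicRemote().sendText("Do or do not, there is no try.");
            }

            @OnMessage
            public void onMessage(String message) {
                System.out.println("### Received: " + message);
            }

            public static void main(String[] args) throws Exception {
                WebSocketContainer container = ContainerProvider.getWebSocketContainer();
                // same wss:// URL as in the post; the trustStore system properties still apply
                container.connectToServer(EchoClientEndpoint.class,
                        URI.create("wss://localhost:8181/sample-echo-https/echo"));
                Thread.sleep(1000); // crude wait for the echo before the JVM exits
            }
        }

    This would be run the same way as the programmatic variant, i.e. with the javax.net.ssl.trustStore properties pointing at the Glassfish cacerts.jks.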

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors that the List interface does not declare it should throw. The question is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not be in violation of LSP? If this does indeed violate LSP and thus is not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on, see below.

    Footnote: Use Cases

    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing all its queue in the event of a crash. If we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).
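
    One way to illustrate the trade-off discussed above is to wrap the subsystem's checked failure in an unchecked exception, so the List signatures stay intact while the circumstance-dependent error still surfaces to callers that know about the persistent backing store. A minimal sketch follows; the Db-related names are stand-ins for the hypothetical API in the question, and this is only one possible option, not the author's chosen answer.

        import java.util.AbstractList;

        // Hypothetical: stands in for the post's Db.getTable().getRow(i).asString() API.
        class DbException extends Exception {
            DbException(String msg) { super(msg); }
        }

        // Unchecked wrapper, analogous in spirit to java.io.UncheckedIOException:
        // the List method signatures are unchanged, but callers that know about the
        // persistent backing store can still catch the failure explicitly.
        class UncheckedDbException extends RuntimeException {
            UncheckedDbException(DbException cause) { super(cause); }
        }

        class DbBackedList extends AbstractList<String> {
            @Override
            public String get(int index) {
                try {
                    return readRowFromDb(index);          // may fail for external reasons
                } catch (DbException e) {
                    throw new UncheckedDbException(e);    // circumstance error, not a new precondition
                }
            }

            @Override
            public int size() {
                return 0; // elided in this sketch
            }

            // Placeholder for the real persistence call.
            private String readRowFromDb(int index) throws DbException {
                throw new DbException("connection lost while reading row " + index);
            }
        }

    This mirrors the OutOfMemoryError analogy in the question: the exception is unchecked and undeclared, and only callers aware of the backing store need to catch it.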

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found. After looking at discovery activities across a very wide range of industries, question types, business needs, and problem-solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context. We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes. Modes describe *how* people go about realizing insights. (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity. When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities to which specific types of information visualizations better enable these scenarios for the types of data users will analyze.

    The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs. So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics. Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into. We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes.

    Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sensemaking needs and activity in a more specific, consistent, and complete fashion. In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals. For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]’ clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide. For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.
    The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can we innovate, by expanding our product portfolio to meet unmet needs?”. Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis. From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’? And how do these compare to each other?”. Translation also happens from the perspective of analysis to the perspective of data, in terms of availability, quality, completeness, format, volume, etc.

    By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects. This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one. (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of Subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain. As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics.

    The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject. Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange). This axis is shown explicitly above the Spectrum. Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity. This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity.

    Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets. For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.
    Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.

    The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible. Accordingly, business analytics as a domain is structured around the fundamental assumption that sensemaking depends on analytical transformation of data. Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects. The end goal of ‘laddering’ successive transformations is to enable sensemaking from the business perspective, rather than the analytical perspective. Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods.

    Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (a Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business. More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders. These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer. The fulfillment process will involve other types of Entities, such as the products or services the business provides. The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included. All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain). We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • Customizing / overriding ComboBox (2 replies)

    Hi, I would like to override a Windows.Forms.ComboBox to have a MyObjectCollection instead of an ObjectCollection as the Items property. I've tried writing this code, but it seems that items are stored somewhere else (not in the Items property). I can add items to the collection, but when I try to select an item from the combobox (at runtime) an exception tells me that there is no such element in Items (literally it tel...

    Read the article

  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information. So please feel free to download them from here. The slides contain one new interesting piece of information which led to a discussion I've had in the past days with a very large customer regarding their upgrades - and internally on the mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2. Why are we creating dictionary statistics during upgrade? I believe this forced dictionary statistics creation got introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh and non-stale statistics. In Oracle 9i that would actually have led to strange behaviour in some databases - so in Oracle 9i this was strongly discouraged. The upgrade scripts got hardcoded to create these stats. But during tests we had the following findings: It's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins. But very close to the upgrade. From Oracle 10g onwards you'd just say:

        $ execute DBMS_STATS.GATHER_DICTIONARY_STATS;

    This is important to make sure you have fresh dictionary statistics during the upgrade, for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics might slow down the whole upgrade by a factor of 2x-3x. And it would also be a great idea to create fresh dictionary statistics again post upgrade if you did suppress the stats creation during the upgrade process. Suppress? Yes, you could set this underscore parameter in the init.ora:

        _optim_dict_stats_at_db_cr_upg=FALSE

    to suppress the forced dictionary statistics collection during an upgrade. We strongly believe that (a) people use the default statistics creation process, which will create dictionary statistics by default, and (b) create fresh stats on the dictionary before the upgrade. Therefore we consider it safe, once you have followed our advice, to use the underscore parameter during upgrade. And we've taken out that forced statistics collection during upgrade in the next release of the database. Please note: If you are using the DBUA for the upgrade, it will remove underscore parameters for the upgrade run to improve performance - which is generally a good idea. So you'll have to start the DBUA with this call:

        $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE

    -Mike
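
    If you drive the pre-upgrade run from a script rather than typing it in SQL*Plus, a minimal JDBC sketch of the same DBMS_STATS call might look like the following. The connection URL and credentials are placeholders, not taken from the post; gathering dictionary statistics requires a suitably privileged account.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class GatherDictionaryStats {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details - adjust to your environment.
                String url = "jdbc:oracle:thin:@localhost:1521:orcl";
                try (Connection conn = DriverManager.getConnection(url, "system", "placeholder_password");
                     CallableStatement call = conn.prepareCall("{call DBMS_STATS.GATHER_DICTIONARY_STATS}")) {
                    // Same procedure the post invokes from SQL*Plus, run shortly before the downtime window.
                    call.execute();
                    System.out.println("Dictionary statistics gathered.");
                }
            }
        }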

    Read the article

  • Adding complexity to remove duplicate code

    - by Phil
    I have several classes that all inherit from a generic base class. The base class contains a collection of several objects of type T. Each child class needs to be able to calculate interpolated values from the collection of objects, but since the child classes use different types, the calculation varies a tiny bit from class to class. So far I have copy/pasted my code from class to class and made minor modifications to each. But now I am trying to remove the duplicated code and replace it with one generic interpolation method in my base class. However that is proving to be very difficult, and all the solutions I have thought of seem way too complex. I am starting to think the DRY principle does not apply as much in this kind of situation, but that sounds like blasphemy. How much complexity is too much when trying to remove code duplication?

    EDIT: The best solution I can come up with goes something like this:

    Base class:

        protected T GetInterpolated(int frame)
        {
            var index = SortedFrames.BinarySearch(frame);
            if (index >= 0)
                return Data[index];

            index = ~index;
            if (index == 0)
                return Data[index];
            if (index >= Data.Count)
                return Data[Data.Count - 1];

            return GetInterpolatedItem(frame, Data[index - 1], Data[index]);
        }

        protected abstract T GetInterpolatedItem(int frame, T lower, T upper);

    Child class A:

        public IGpsCoordinate GetInterpolatedCoord(int frame)
        {
            ReadData();
            return GetInterpolated(frame);
        }

        protected override IGpsCoordinate GetInterpolatedItem(int frame, IGpsCoordinate lower, IGpsCoordinate upper)
        {
            double ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var x = GetInterpolatedValue(lower.X, upper.X, ratio);
            var y = GetInterpolatedValue(lower.Y, upper.Y, ratio);
            var z = GetInterpolatedValue(lower.Z, upper.Z, ratio);
            return new GpsCoordinate(frame, x, y, z);
        }

    Child class B:

        public double GetMph(int frame)
        {
            ReadData();
            return GetInterpolated(frame).MilesPerHour;
        }

        protected override ISpeed GetInterpolatedItem(int frame, ISpeed lower, ISpeed upper)
        {
            var ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var mph = GetInterpolatedValue(lower.MilesPerHour, upper.MilesPerHour, ratio);
            return new Speed(frame, mph);
        }

    Read the article

  • Oracle ADF PMs at UKOUG Conference 2012

    - by Grant Ronald
    Next week we'll have a (what is a collection of PMs called, a flock? a gaggle?) of ADF Product Managers attending the UKOUG conference in Birmingham. Myself, Frank Nimphius, Chris Muir, Susan Duncan, Frederic Desbiens and Duncan Mills will all be attending. We'll be covering a range of sessions, and if you have any questions you'd like to ask about Oracle tools development, technical questions, migration, Forms to ADF, futures, mobile, anything at all, then come up and say hi.

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself and getting some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS. Through it I also did a shameless plug for my employer, our services and the type of hosting we recommend. So, wouldn't me running on discountasp.net be an issue? Actually? NO. Ok, enough rambling. Let's get some details here. It is a Software as a Service model. Through it we get Source Control, Version Control, Work Item Tracking and such. What about Build? If your need includes Build Management and such, you may need to look at some other options. But this is still a great offering for those that are moving from SourceSafe, or organizations who have 3 to 5 developers on staff and do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration. The signup process was seamless! Very easy to follow, complete and transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server Control Panel login. Once there, here is what I saw after the initial setup process of naming my Team Project Collection: So, moving on ... once I clicked the area to get my server info, I got the following: Then it was a matter of getting the first user in there: Then on to connecting Visual Studio to my hosted TFS. With the server information and the user account created, I will configure those options in Visual Studio. Using Team Explorer, I am adding a new server configuration. Once this is provided, click OK; I will be challenged for a username and password. Provide them and you will land on the following screen. Then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no Projects (I already have 2 going). Click Connect, and you will be back in Team Explorer. My next post on the topic will be on Creating your First Team Project and uploading a Project Template to the server.

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effect system follows the common (e.g. COLLADA, DirectX) effect framework abstraction: Effects have Techniques, Techniques have Passes, Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.), or to an in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, each effect instance object with its own parameter settings, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
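
    To make the distinction concrete, here is a minimal sketch of one possible reading of the COLLADA wording ("a material instantiates an effect, fills its parameters with values, & selects a technique"). The class and field names are illustrative assumptions, not taken from COLLADA or the DirectX FX framework.

        import java.util.List;
        import java.util.Map;

        // One possible object model for the Effect -> Technique -> Pass abstraction.
        class Pass {
            Map<String, String> renderStates;   // e.g. "blend" -> "alpha"
            String vertexShader;
            String fragmentShader;
        }

        class Technique {
            String name;                        // e.g. "highQuality", "night"
            List<Pass> passes;                  // ordered pipeline description
        }

        class Effect {
            String name;
            List<Technique> techniques;         // alternative ways to render the effect
            Map<String, Object> defaultParams;  // declared parameters with defaults
        }

        // A Material is then an *instance* of an Effect: it binds concrete parameter
        // values (textures, colors) and picks which Technique to use.
        class Material {
            Effect effect;
            String selectedTechnique;
            Map<String, Object> parameterValues; // e.g. "diffuseMap" -> "wood.png"
        }

    Under this reading a material is essentially a configured instance of an effect; whether you keep a separate Material class or simply store parameterised Effect instances in a collection is one way of framing the question above.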

    Read the article

  • Running Objects – Associations and Relationships

    - by edurdias
    After the introduction to Running Objects with the tutorial Movie Database in 2 Minutes (available here), I would like to demonstrate how Running Objects interprets associations, where we will cover: Direct Association – A reference to another complex object. Aggregation – A collection of another complex object. For those coming from a database perspective, by demonstrating these associations we will also exemplify the underlying relationships such as 1 to Many and Many to Many relationships...(read more)
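
    As a quick illustration of the two association kinds listed above, here is a minimal sketch using the movie-database theme of the referenced tutorial. The class names are assumed for the example and are not taken from the tutorial itself.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical movie-database classes illustrating the two association kinds.
        class Director {
            String name;
        }

        class Actor {
            String name;
        }

        class Movie {
            String title;

            // Direct association: a reference to another complex object
            // (1 to Many when seen from the Director side: one director, many movies).
            Director director;

            // Aggregation: a collection of another complex object
            // (Many to Many once actors appear in many movies).
            List<Actor> cast = new ArrayList<>();
        }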

    Read the article

  • How do I display a "mucus spreading" effect in a 2D environment?

    - by nathan
    Here is an example of such mucus spreading. The substance is spread around the source (in this example, the source would be the main alien building). The game is StarCraft; the purple substance is called creep. How would this kind of substance spreading be achieved in a top-down 2D environment? By recalculating the substance progression and regenerating the effect on the fly each frame, or rather by using a large collection of tiles, or something else?
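
    One tile-based reading of the question, as a minimal sketch: keep a boolean grid of creep tiles and, on each update, let the substance claim neighbouring tiles that lie within a radius of the source. This is only an assumed illustration of the "large collection of tiles" option, not how StarCraft actually implements creep.

        // Minimal tile-grid creep spread: each update, any tile adjacent to creep
        // and within range of the source becomes creep as well.
        public class CreepGrid {
            private final boolean[][] creep;
            private final int sourceX, sourceY, maxRadius;

            public CreepGrid(int width, int height, int sourceX, int sourceY, int maxRadius) {
                this.creep = new boolean[width][height];
                this.sourceX = sourceX;
                this.sourceY = sourceY;
                this.maxRadius = maxRadius;
                creep[sourceX][sourceY] = true; // the alien building's tile
            }

            public void update() {
                boolean[][] next = new boolean[creep.length][creep[0].length];
                for (int x = 0; x < creep.length; x++) {
                    for (int y = 0; y < creep[0].length; y++) {
                        next[x][y] = creep[x][y] || (nearCreep(x, y) && inRange(x, y));
                    }
                }
                for (int x = 0; x < creep.length; x++) {
                    System.arraycopy(next[x], 0, creep[x], 0, creep[x].length);
                }
            }

            private boolean nearCreep(int x, int y) {
                for (int dx = -1; dx <= 1; dx++) {
                    for (int dy = -1; dy <= 1; dy++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && ny >= 0 && nx < creep.length && ny < creep[0].length && creep[nx][ny]) {
                            return true;
                        }
                    }
                }
                return false;
            }

            private boolean inRange(int x, int y) {
                int dx = x - sourceX, dy = y - sourceY;
                return dx * dx + dy * dy <= maxRadius * maxRadius;
            }

            public boolean isCreep(int x, int y) {
                return creep[x][y];
            }
        }

    Rendering then reduces to drawing a creep texture on every tile for which isCreep(x, y) is true, so nothing has to be recalculated per frame beyond the occasional update() tick.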

    Read the article

  • Business layer access to the data layer

    - by rerun
    I have a Business layer (BL) and a Data layer (DL). I have an object o with a collection of child objects of type C. I would like to provide semantics like the following: o.Children.Add("info"). In the BL I would like to have a static class that all business layer classes use to get a reference to the current data layer instance. Is there any issue with that, or must I use the factory pattern to limit creation to a class in the BL that knows the DL instance?
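
    Here is a minimal sketch of the static-accessor approach described in the question (essentially a service locator); all names are hypothetical.

        // Hypothetical data layer contract.
        interface DataLayer {
            void addChild(String parentId, String info);
        }

        // Static class that all BL classes use to reach the current data layer instance.
        final class DataLayerLocator {
            private static DataLayer current;

            private DataLayerLocator() {}

            static void register(DataLayer dataLayer) {
                current = dataLayer;
            }

            static DataLayer current() {
                if (current == null) {
                    throw new IllegalStateException("Data layer not registered");
                }
                return current;
            }
        }

        // Business-layer collection that forwards to the data layer,
        // enabling the o.Children.Add("info")-style call from the question.
        class Children {
            private final String parentId;

            Children(String parentId) {
                this.parentId = parentId;
            }

            void add(String info) {
                DataLayerLocator.current().addChild(parentId, info);
            }
        }

    The usual trade-off versus the factory approach is visibility of dependencies: a static locator is convenient but hides which classes touch the data layer, whereas a factory or an injected reference keeps that explicit and easier to test.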

    Read the article

  • NHibernate Pitfalls Index

    - by Ricardo Peres
    These are the posts on NHibernate pitfalls I’ve written so far. This post will be updated whenever there are more.
    -- The SaveOrUpdate Event
    -- Collection Restrictions
    -- Specifying Event Listeners in XML Configuration
    -- Many to Many and Inverse
    -- Bags and Join
    -- Lazy Properties in Non-Lazy Entities
    -- Adding to a Bag Causes Loading
    -- Flushing Changes
    -- Private Setter on Id Property

    Read the article

  • What Would You Select?

    Software development is a collection of trade-offs: performance for speed to market, quick & dirty vs. maintainable, on and on. Most tend to sacrifice user experience at some level for time to market; others do not consider maintainability, reliability....(read more)

    Read the article

  • How to implement explosion in OpenGL?

    - by Chan
    I'm relatively new to OpenGL and I'm clueless about how to implement an explosion. Could anyone give me some ideas on how to start? Suppose the explosion occurs at location (x, y, z). I'm thinking of randomly generating a collection of vectors with (x, y, z) as the origin, then drawing a particle (glutSolidCube) that moves along each vector for some period of time; say, after 1000 updates it disappears. Is this approach feasible? A minimal example would be greatly appreciated.
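
    The approach described is essentially a small particle system. Below is a minimal, rendering-agnostic sketch of the bookkeeping; the spot where glutSolidCube would be drawn is left as a comment, and all the numbers are arbitrary choices for the example.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Simple particle system: particles start at the explosion origin, move along
        // random velocity vectors, and disappear after a fixed number of updates.
        public class Explosion {
            static class Particle {
                double x, y, z;    // current position
                double vx, vy, vz; // velocity per update
                int life;          // remaining updates
            }

            private final List<Particle> particles = new ArrayList<>();
            private final Random random = new Random();

            public Explosion(double ox, double oy, double oz, int count) {
                for (int i = 0; i < count; i++) {
                    Particle p = new Particle();
                    p.x = ox; p.y = oy; p.z = oz;
                    p.vx = random.nextDouble() - 0.5; // random direction component
                    p.vy = random.nextDouble() - 0.5;
                    p.vz = random.nextDouble() - 0.5;
                    p.life = 1000;                    // as in the question: gone after 1000 updates
                    particles.add(p);
                }
            }

            public void update() {
                particles.removeIf(p -> --p.life <= 0);
                for (Particle p : particles) {
                    p.x += p.vx;
                    p.y += p.vy;
                    p.z += p.vz;
                    // In the OpenGL display loop you would translate to (p.x, p.y, p.z)
                    // and draw the particle here, e.g. with glutSolidCube.
                }
            }

            public boolean finished() {
                return particles.isEmpty();
            }
        }

    Each frame of the display loop would call update() and draw the surviving particles at their current positions until finished() returns true.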

    Read the article

  • Game ideas for a platformer

    - by user5925
    I have created a platformer which currently has the features listed below. I would greatly appreciate any further ideas which I could implement! (I don't play a lot of games, which is why I require help.)
    -- Walking/jumping/movement
    -- player can shoot lasers
    -- enemies also walk, fly, and shoot lasers
    -- water (you can swim in this)
    -- mud (slows you down on contact, and stops you from jumping)
    -- ladders
    -- damage when falling from a large height, unless falling into water
    -- moving platforms
    -- springboards (jumping on them shoots you into the air)
    -- growing platforms (allow you to reach new places)
    -- key and door system
    -- gem and coin collection system

    Read the article

  • Find directories that DON'T contain a file but YES another one

    - by muixca
    I have quite a large music collection and would like to find the directories in which I still have unprocessed compressed files (*.rar). Hence I am looking for a command that lists directories in which I do NOT have *.flac or *.mp3 but YES *.rar present. Working off examples found in this post: Find directories that DON'T contain a file, I tried:

        comm -3 \
            <(find ~/Music/ -iname "*.rar" -not -iname "*.flac" -not -iname "*.mp3" -printf '%h\n' | sort -u) \
            <(find ~/Music/ -maxdepth 5 -mindepth 2 -type d | sort) \
            | sed 's/^.*Music\///'

    but it doesn't work.

    Read the article
