Search Results

Search found 4567 results on 183 pages for 'gae models'.

Page 75/183 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • Google Cloud Messaging (GCM) for turn-based mobile multiplayer server?

    - by Chris
    I'm designing a multiplayer turn-based game for Android (over 3G). I'm thinking the clients will send data to a central server over a socket or HTTP, and receive data via GCM push messaging. I'd like to know if anyone has practical experience with GCM for pushing 'real-time' turn data to game clients. What kind of performance and limitations does it have? I'm also considering using a RESTful approach with GAE or Amazon EC2. Any advice about these approaches is appreciated.
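    For reference, a minimal sketch of the server-side push, assuming the GCM HTTP endpoint and a server API key from the Google APIs console (the key value, function name and payload fields below are illustrative, not taken from the question):

    import json
    import urllib2

    GCM_URL = 'https://android.googleapis.com/gcm/send'
    GCM_API_KEY = 'your-server-api-key'  # placeholder

    def push_turn(registration_id, turn_payload):
        # Send one turn's data to a single device as a GCM 'data' message.
        body = json.dumps({
            'registration_ids': [registration_id],
            'data': turn_payload,  # e.g. {'move': 'e2e4', 'turn': '12'}
        })
        request = urllib2.Request(GCM_URL, body, {
            'Authorization': 'key=' + GCM_API_KEY,
            'Content-Type': 'application/json',
        })
        response = urllib2.urlopen(request)
        return json.loads(response.read())  # per-message success/failure info

    Note that GCM delivery is best-effort with no latency guarantee, so turn data is usually also exposed via a pull endpoint (the RESTful GAE/EC2 option) and the push is treated as a hint to fetch.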

    Read the article

  • What is the relationship between the business logic layer and the data access layer?

    - by Matt Fenwick
    I'm working on an MVC-ish app (I'm not very experienced with MVC, hence the "-ish"). My model and data access layer are hard to test because they're very tightly coupled, so I'm trying to uncouple them. What is the nature of the relationship between them? Should just the model know about the DAL? Should just the DAL know about the model? Or should both the model and the DAL be listeners of the other? In my specific case, it's:
    - a web application
    - the model is client-side (JavaScript)
    - the data is accessed from the back-end using Ajax
    - persistence/back-end is currently PHP/MySQL, but may have to switch to Python/GoogleDataStore on the GAE
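    One common answer, sketched below in Python purely as a language-agnostic illustration (the class names are invented for the example, not taken from the question): the model depends only on an abstract data-access interface, the DAL implements that interface, and neither knows the other's internals, so a fake store can be injected in tests.

    class TaskStore(object):
        """Abstract data-access interface the model is written against."""
        def load_all(self):
            raise NotImplementedError

    class FakeTaskStore(TaskStore):
        """In-memory stand-in used in unit tests; no Ajax, no PHP, no GAE."""
        def __init__(self, tasks):
            self._tasks = tasks
        def load_all(self):
            return list(self._tasks)

    class TaskListModel(object):
        """The model keeps domain state/behaviour and only sees the interface."""
        def __init__(self, store):
            self._store = store
            self.tasks = []
        def refresh(self):
            self.tasks = self._store.load_all()

    # Unit-test wiring; the real app would pass an Ajax-backed store instead.
    model = TaskListModel(FakeTaskStore(['water plants', 'file taxes']))
    model.refresh()
    assert model.tasks == ['water plants', 'file taxes']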

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3d Web Player. I have developed a project and successfully deployed it in a web app. It works with absolutely no problem on my PC. This app is to be installed on two identical machines. I have installed it on both and it only works properly on one. The issue is that on one computer it fails to properly load the models and textures, so the game runs but instead of the models I can only see black rectangles on a blue background. It has the same problem in all browsers and I get no errors either from the player or from JavaScript. The only difference between these computers is that the one that has the problem is running Windows 8.1 and the other one Windows 8. Could this be the cause of the issue? It works fine on my computer with Windows 8.1; however, both of the other computers have specs that are significantly lower than mine. I have already searched everywhere and it seems that it usually has to do with the individual game, however I think it may have to do with the computer itself because it runs properly on the others. The specs of the computers I'm installing the app on are as follows: Intel Celeron 1.40 GHz, 2GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention, I'm running Unity Web Player 4.3.5 and the version on the other two computers is 4.5.0.

    Read the article

  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library followed a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum: The MVVM pattern uses the command pattern for handling user-interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons. Many implementations of CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command-handler responsible for executing the command. This is great, clearly explained in many sources and I am good with it. However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers and I am finding myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this where the naming/terminology also overlap. How have you approached distinguishing your commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exist, in this library.)
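    One vocabulary that helps keep the two apart, sketched here in Python as a language-agnostic illustration (all class names are invented for the example, not part of the poster's library): reserve the ...Command / ...CommandHandler suffixes for domain messages, and describe the UI-bound objects as actions (or UI commands) whose only job is to dispatch a domain command.

    class CreateOrderCommand(object):
        """Domain command: an immutable message describing work to be done."""
        def __init__(self, customer_id):
            self.customer_id = customer_id

    class CreateOrderCommandHandler(object):
        """Domain command handler: executes the command against the write model."""
        def __init__(self, repository):
            self._repository = repository
        def handle(self, command):
            self._repository.add_order(command.customer_id)

    class CommandDispatcher(object):
        """Routes each domain command to its registered handler."""
        def __init__(self):
            self._handlers = {}
        def register(self, command_type, handler):
            self._handlers[command_type] = handler
        def send(self, command):
            self._handlers[type(command)].handle(command)

    class InMemoryOrderRepository(object):
        def __init__(self):
            self.orders = []
        def add_order(self, customer_id):
            self.orders.append({'customer': customer_id})

    class CreateNewOrderAction(object):
        """UI-side 'command' (the role ICommand/RelayCommand plays in WPF):
        it only turns a button click into a domain command and dispatches it."""
        def __init__(self, dispatcher, current_customer_id):
            self._dispatcher = dispatcher
            self._customer_id = current_customer_id
        def execute(self):
            self._dispatcher.send(CreateOrderCommand(self._customer_id))

    # Wiring: the view model exposes the action; the domain never sees the UI.
    repository = InMemoryOrderRepository()
    dispatcher = CommandDispatcher()
    dispatcher.register(CreateOrderCommand, CreateOrderCommandHandler(repository))
    CreateNewOrderAction(dispatcher, current_customer_id=42).execute()
    assert repository.orders == [{'customer': 42}]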

    Read the article

  • Simple Architecture Verification

    - by Jean Carlos Suárez Marranzini
    I just made an architecture for an application whose job is scoring, saving and loading tennis games. The architecture has 2 kinds of elements: components & layers. Components are standalone elements that can be consumed by other components or by layers; they might also consume functionality from the model/bottom layer. Layers are software components whose functionality rests on previous layers (except for the model layer).
    Layers:
    - Models: Data and its behavior.
    - Controllers: A layer that allows interaction between the views and the models.
    - Views: The presentation layer for interacting with the user.
    Components:
    - Persistence: Makes sure the game data can be stored away for later retrieval.
    - Time Machine: Records changes in the game through time so it's possible to navigate the game back and forth.
    - Settings: Contains the settings that determine how some of the game logic will apply.
    - Game Engine: Contains all the game logic, which it applies to the game data to determine the path the game should take.
    This is an image of the architecture (I don't have enough rep to post images): http://i49.tinypic.com/35lt5a9.png
    The requirements which this architecture should satisfy are the following:
    - Save & load games.
    - Move through game history and see how the scoreboard changes as the game evolves.
    - Tie-breaks must be properly managed.
    - Games must be classified by hit-type.
    - Every point can be modified.
    - Match name and player names must be stored.
    - Game logic must be configurable by the user.
    I would really appreciate any kind of advice or comments on this architecture, to see if it is well built and makes sense as a whole. I took the idea from this link: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
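    On the "move through game history" requirement, a minimal sketch of how the Time Machine component could work, assuming it stores immutable snapshots of the game data (the state layout and names below are invented for the example):

    import copy

    class TimeMachine(object):
        def __init__(self, initial_state):
            self._history = [copy.deepcopy(initial_state)]
            self._position = 0

        def record(self, state):
            """Store a snapshot after the Game Engine applies a point."""
            del self._history[self._position + 1:]   # discard any 'redo' tail
            self._history.append(copy.deepcopy(state))
            self._position += 1

        def back(self):
            self._position = max(0, self._position - 1)
            return self._history[self._position]

        def forward(self):
            self._position = min(len(self._history) - 1, self._position + 1)
            return self._history[self._position]

    # Usage: after every scored point the Controller calls record(); the View's
    # "previous point" button simply renders whatever back() returns.
    machine = TimeMachine({'sets': [0, 0], 'games': [0, 0], 'points': ['0', '0']})
    machine.record({'sets': [0, 0], 'games': [0, 0], 'points': ['15', '0']})
    print(machine.back())   # -> the snapshot before the last point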

    Read the article

  • ArchBeat Link-o-Rama for December 7, 2012

    - by Bob Rhubart
    From XaaS to Java EE – Which damn cloud is right for me in 2012? | Markus Eisele
    Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.
    WebLogic Server Domain Browser App (Android)
    My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.
    Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox | The Old Toxophilist
    "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. BTW: I had to look it up: a toxophilist is one who collects bows and arrows.
    Thought for the Day
    "All models are wrong; some models are useful." — George Box
    Source: SoftwareQuotes.com

    Read the article

  • Creating, using and managing XML component dictionaries quick tutorials

    - by drrwebber
    XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset. These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components. The quick tutorials are aimed at showing the 'how to' of the basic capabilities to jump-start the use of XML dictionaries with the CAM Editor. The collection of dictionary tutorial videos runs for a total of approximately 20 minutes; each video can also be reviewed individually. Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schema, SQL database table schema, or simple Excel / Open Office spreadsheets with tables of components listed. Also included are tips and functions relating to the use of NIEM exchange development, IEPD and EIEM techniques. These videos should be viewed in conjunction with reviewing the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview. The approach is aligned with OASIS and Core Components Technical Specification (CCTS) standards specifications for XML components and dictionaries. Dictionary collections can be stored locally on the file system or local network, collaboratively on the web or in a cloud deployment, or shared and managed securely using the Oracle Enterprise Repository (OER) tool. Also included are techniques relating to the use of the NIEM approach for developing XML exchange schema and IEPD packages, including generating reuse scores, wantlists, and cross-reference spreadsheets. Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to determine duplicate components, conflicting component definitions, missing component descriptions and so on. This ensures high-quality dictionary component specifications. Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary component sets. For a complete guide to using the CAM Editor see the main YouTube video tutorials website and the CAM Editor website.

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
    I have been working in the 2D world for a long time, from manual blitting in Windows to SDL to Python (pygame, pyopengl) and a bunch in between. Needless to say I have been programming for a while. So a while ago I started to program in OpenGL via C++ on my Mac. I then got fairly deep into it after a while (3D models with skeleton structure and terrain development). After a long time of tinkering, I stopped because the heavy work yielded only a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went on a search for a stable game engine with some features to grow on:
    - Licence requirement: anything other than GPL (LGPL will do)
    - OS requirement: Mac & Windows
    - Shader: GLSL or Cg (GLSL preferred due to experience)
    - Models: any model structure with rigging (bone) support & animation
    I am looking at http://www.ogre3d.org/ currently and am starting to meddle around with some examples. However I am a little reluctant to spend a lot of time on it only to hit another dead end. So instead of falling down a spiraling black pit, I am posting my question to you guys to lead me in the right direction based on my requirements. How was your experience with the engine you recommend? Is it well documented? Does it have well-documented examples? Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Is there a framework for describing object oriented communication standards/protocols?

    - by martin
    Currently I'm dealing with the development of specifications for communication standards/protocols for B2B integration based on object-oriented models. For example, in the healthcare domain there is HL7v3 with its HDF. Now I'm asking whether there is a more generic framework that describes how a specification for a communication standard should be developed. For B2B integration I want to describe a communication standard based on UML models for a broad domain. My thought was to divide the domain into subdomains and derive message types from the resulting model. There is already a given framework, but I want to compare it to another framework. My idea is to compare them using a generic framework; it should describe several levels. Does anybody know such a framework? I have searched for a while on Google Scholar, but haven't found an appropriate framework yet. The only thing I have found is ebXML, but I think it is not exactly what I need.

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I've only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to make a replacement for their HR software. I used SQL Server, Entity Framework, WPF, along with MVVM and Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (People, Salary, Vehicles/Assets, Statistics, etc.). I use integrated authentication, and due to the initial HR build, I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard given their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm part way through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and Miscellaneous Utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.
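    As one illustration of the "parent folders per data domain" option (a hypothetical layout, not a prescription; the domain names are the poster's, everything else is invented):

    CorporateDashboard/
        People/                (one folder, namespace, and optionally one EF model per domain)
            Models/
            Repositories/
            ViewModels/
            Views/
        Vehicles/
            Models/
            Repositories/
            ViewModels/
            Views/
        Common/
            Infrastructure/    (unit of work, base view model, relay command)
            Resources/         (shared XAML resources)

    Slicing by domain first and by layer second keeps each feature's files together; whether to split the EF model per domain then becomes a per-domain decision rather than a global one.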

    Read the article

  • How to access dev server in Ubuntu VirtualBox guest on Windows 7 host?

    - by Curyous
    I'm running a Google App Engine dev server on Ubuntu 11.10 Desktop in a VirtualBox VM, on a Windows 7 host. According to this question I have the following setup:
    - The VM networking is set to use a host-only network adapter.
    - Internet connection sharing (ICS) is enabled in Windows.
    - For ICS, the Windows VirtualBox network port and the Ubuntu wired connection have fixed IPs.
    - The Ubuntu VM can access the internet.
    - I can ping the guest from the host.
    On the host, if I put the guest IP address in Chrome's address bar, it says it cannot connect. What do I need to do from here to access the GAE dev server that is running on localhost:8080?
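    A likely culprit, offered as a sketch rather than a confirmed diagnosis: the dev server binds only to localhost inside the VM, so even with working routing nothing is listening on the guest's external address. Starting it bound to all interfaces usually fixes this; the exact flag depends on the SDK version (check dev_appserver.py --help):

    # Older Python SDKs:
    python dev_appserver.py --address=0.0.0.0 --port=8080 /path/to/your/app
    # Newer SDKs renamed the flag:
    dev_appserver.py --host=0.0.0.0 --port=8080 /path/to/your/app

    After that, http://<guest-ip>:8080 from the Windows host should reach it.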

    Read the article

  • How do I draw a 2d plane and rotate the camera (to be a board) in a 3d XNA game?

    - by Mech0z
    I am trying to create a simple board game, but the 3D part of this is really killing me. From what I can gather I have created a plane, but it never moves even though I turn the camera. That partially makes sense, since I only apply the camera to the 3D model, but in my head it makes no sense at all: if I turn the camera it should affect ALL my models? With this code the camera only "cares" about the 3D cylinder; the plane is just completely still.

    private void OnDraw(object sender, GameTimerEventArgs e)
    {
        SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

        // The camera (view/projection) is applied only to the cylinder's BasicEffect here.
        foreach (ModelMesh mesh in cylinderModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                //effect.World = Matrix.CreateRotationX((float)e.TotalTime.TotalSeconds * 2);
                effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
                effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }

        //cameraPosition.Z -= 5.0f;
        // The plane is drawn with _effect, whose View/Projection are never set above.
        _effect.World = Matrix.CreateRotationZ((MathHelper.ToRadians(((float)e.TotalTime.Milliseconds / 2) % 360)));
        foreach (EffectPass pass in _effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            SharedGraphicsDeviceManager.Current.GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, _vertices, 0, 1, VertexPositionColor.VertexDeclaration);
        }
    }

    Is there a way to get the camera to affect all models?

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "models". Each "model" corresponds to one "view". The controller (one file) creates a Model object corresponding to each view (the view is a CGI argument) and then retrieves the view from the module it has just created. Indeed, this seems like a bad thing, but can you argue a bit more about why? My first idea was that since the "Model" object depends upon the "view", the "model" is actually a view. But also the fact that ALL the CGI parameters are passed to the Model causes the "Model" not truly to become a view but to lose all value, since it is only tied to the current implementation of the web app. In other words, the "Model" stays a model but loses its "comprehensiveness" (the "Model" is not easily understandable). I am quite new to project analysis, so please do not be too harsh. Why is this bad? I have made a prototype with the main structures I have understood of this web application, made as short as possible.

    # Model.pm
    package Model;
    import {
        # this requires an attribute called "view"
        # and this requires an argument which is the cgi params
    }
    ...

    # View1.pm
    package View1;
    ...

    # Model1.pm
    package ModelView1;
    base Model;
    use View1;
    sub new {
        my $class = shift;
        my $arg = shift;
        Model::DoSomething($arg);
        $self->view = new View1($arg);
        ...
    }

    # controller.cgi
    my $model = 0;
    ...
    $model = new Model1( cgi_param => params() );  # there are several models here
    ...
    print $model->get_view()->get_html();

    Read the article

  • The Evolution of Computer Keyboards

    - by Jason Fitzpatrick
    While the basic shape of keyboards has remained largely unchanged over the last thirty years, the guts have undergone several transformations. Read on to explore the history of the computer keyboard. ComputerWorld delves into the history of the modern keyboard, including the heavy influence of IBM's extensive keyboard research on early keyboards: As far as direct influences on the modern computer keyboard, IBM's Selectric typewriter was one of the biggest. IBM released the first model of its iconic electromechanical typewriter in 1961, a time when being able to type fast and accurately was a highly sought-after skill. Dag Spicer, senior curator at the Computer History Museum, notes that as the Selectric models rose to prominence, admins grew to love the feel of the keyboard because of IBM's dogged focus on making the ergonomics comfortable. "IBM's probably done more than anyone to find [keyboard] ergonomics that work for everyone," Spicer says. So when the PC hit the scene a decade or two later, the Selectric was largely viewed as the baseline to design keyboards for those newfangled computers you could put in your office or home. Hit up the link below to continue reading about how the Selectric influenced keyboards throughout the 1980s and what replaced the crisp clacking of early IBM-styled models.

    Read the article

  • JavaScript and callback nesting

    - by Jake King
    A lot of JavaScript libraries (notably jQuery) use chaining, which allows the reduction of this:

    var foo = $(".foo");
    foo.stop();
    foo.show();
    foo.animate({ top: 0 });

    to this:

    $(".foo").stop().show().animate({ top: 0 });

    With proper formatting, I think this is quite a nice syntactic capability. However, I often see a pattern which I don't particularly like, but which appears to be a necessary evil in non-blocking models. This is the ever-present nesting of callback functions:

    $(".foo").animate({
        top: 0,
    }, {
        callback: function () {
            $.ajax({
                url: 'ajax.php',
            }, {
                callback: function () {
                    ...
                }
            });
        }
    });

    And it never ends. Even though I love the ease non-blocking models provide, I hate the odd nesting of function literals it forces upon the programmer. I'm interested in writing a small JS library as an exercise, and I'd love to find a better way to do this, but I don't know how it could be done without feeling hacky. Are there any projects out there that have resolved this problem before? And if not, what are the alternatives to this ugly, meaningless code structure?

    Read the article

  • Do the benefits of Resin/Quercus outweigh the overhead?

    - by Craige
    Lately, I've been looking more and more into Resin + Quercus as a technology to develop an application of mine. The reason I started looking into it was that this application has high reporting needs, a lot of which cannot (or realistically, should not) be created in real-time. Java would offer a nice backend to queue and generate reports. Also, with Quercus I would be able to develop my data models in Hibernate, and use them "from PHP", thus effectively stretching these models across front and back-end. This same concept would also apply to any front/back-end common business logic, which could be developed in Java libraries. Now, the downside is that whichever front-end (PHP) MVC framework I choose (my goal was Symfony 2), it is unlikely to work without some heavy modification, if it can work at all. Quercus is a pretty close implementation of PHP, and is supposed to be compatible with PHP 5.3, so namespaces and closures SHOULDN'T be a problem, but when I tried to run an existing Symfony 1.4 app, I failed miserably. So, my question to you is, do you think the benefits of Resin + Quercus outweigh the overhead of using a not-so-perfect/stable implementation of PHP? If this were your application, and your goal was an end-product rather than educational purposes, what would you decide?

    Read the article

  • What gaming keyboard(s) will work with Ubuntu?

    - by belacqua
    I've been looking at gaming keyboards to use on an Ubuntu system. Microsoft has a few popular ones (e.g., Sidewinder X4, X6), but the programmable function keys appear to be unusable without the Windows software. (Though here's a post from someone who has a more recent project that uses usbmon and xdotool to add functions to some keys.) Another choice in my budget is the Cyborg V.05. It seems about right for my needs, but I would be depressed having a bunch of useless, nonprogrammable keys on it. Logitech has some models (e.g., the Logitech G110), though again I expect that the extensive macro capabilities (which I don't need) would be lost under Linux. There's a project called g15tools which has some code to work with older Logitech gaming models, but I don't know what the current status is; the last entry there was in March 2010. There are also a number of very old posts around the internet with regard to the Logitech G11 and G15. Compatibility with the current keyboards, Ubuntu version, and Linux kernel is suspect. I'm in the U.S., and so it appears that few of the Roccat keyboards are available, and they're over-priced. Support might be OK for these, though -- there's a short Phoronix article about Roccat improving their Linux support, and there's also a project and webpage for "Using Roccat Hardware with Linux". Honestly, the only feature I have to have is good backlighting for the keys, and if it's not wired (which is fine), the wireless capability should function. I could probably live with dead function keys, as long as they weren't in places that would interfere with things like Unity/compiz shortcuts. Any experience or suggestions? I've not seen much to inspire confidence with programmable/macro keys. There is a thread (with no solutions) on the Sidewinder X4 on ubuntuforums here. I'm also considering the Logitech Illuminated Keyboard as a possibility, even though it's not specifically a gaming keyboard. It is backlit, and it's supposed to be a nice keyboard.

    Read the article

  • MVC and delegation

    - by timjver
    I am a beginning iOS programmer and use Model-View-Controller as a design pattern: my model doesn't know anything about my view (in order to make it compatible with any view), and my view doesn't know anything about my model, so they interact via my controller. A very usual way for a view to interact with the controller is through delegation: when the user interacts with the app, my view will notify my controller, which can call some methods of my model and update my view, if necessary. However, would it make sense to also make my controller the delegate of my model? I'm not convinced this is the way to go. It could be handy for my model to notify my controller of some process being finished, for example, or to ask for extra input from the user if it doesn't have enough information to complete the task. The downside of this, though, is that my controller would be the delegate of both my view and my model, so there wouldn't really be a proper way to notify my model of changes in my view, and vice versa. (Correct me if I'm wrong.) Conclusion: I don't really think it's a good idea for my controller to be the delegate of my model; just being the delegate of my view would be fine. Is this how most MVC implementations handle it? Or is there a way to have the controller be the delegate of both the view and the model, with proper communication between them? Like I said, I'm a beginner, so I want to do this stuff the right way immediately, rather than spending loads of hours on models that won't work anyway. :)
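    For concreteness, a language-agnostic sketch (written in Python; all names are invented for the example) of one common wiring: the controller is the view's delegate, while the model stays delegate-free and notifies interested parties through a simple observer callback instead.

    class PetView(object):
        def __init__(self):
            self.delegate = None          # the controller, assigned from outside
        def button_tapped(self):
            if self.delegate:
                self.delegate.view_did_tap_button(self)

    class PetModel(object):
        def __init__(self):
            self._observers = []
            self.count = 0
        def add_observer(self, callback):
            self._observers.append(callback)
        def increment(self):
            self.count += 1
            for notify in self._observers:
                notify(self)

    class PetController(object):
        def __init__(self, view, model):
            self.view, self.model = view, model
            view.delegate = self                        # controller is the view's delegate
            model.add_observer(self.model_did_change)   # ...and observes the model
        def view_did_tap_button(self, view):
            self.model.increment()
        def model_did_change(self, model):
            print('update view: count is now %d' % model.count)

    controller = PetController(PetView(), PetModel())
    controller.view.button_tapped()   # -> "update view: count is now 1"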

    Read the article

  • What technology(s)would be suitable for the front end part of a Java web game?

    - by James.Elsey
    As asked in a previous question, I'm looking to create a small MMO that will be deployed onto GAE. I'm confused about what technologies I could use for the user interface. I've considered the following:
    - JSP pages - I've got experience with JSP/JSTL and I would find this easy to work with, but it would require the user to "submit" the page each time they perform an action, so it may become a little clumsy for players.
    - Applet - I could create an applet that sits on the front end and communicates with the back-end game engine, however I'm not sure how good this method would be and I have not used applets since university.
    What other options do I have? I don't have any experience in Flash/Flex so there would be a big learning curve there. Are there any other Java-based options I may be able to use? My game will be text-based; I may use some images, but I'm not intending to have any animations/graphics etc. Thanks

    Read the article

  • MVVM and service pattern

    - by alfa-alfa
    I'm building a WPF application using the MVVM pattern. Right now, my viewmodels call the service layer to retrieve models (how is not relevant to the viewmodel) and convert them to viewmodels. I'm using constructor injection to pass the required services to the viewmodel. It's easily testable and works well for viewmodels with few dependencies, but as soon as I try to create viewmodels for complex models, I have a constructor with a LOT of services injected into it (one to retrieve each dependency, plus a list of all available values to bind to an ItemsSource, for example). I'm wondering how to handle multiple services like that and still have a viewmodel that I can unit test easily. I'm thinking of a few solutions:
    - Creating a services singleton (IServices) containing all the available services as interfaces. Example: Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way, I don't have a huge constructor with a ton of service parameters in it.
    - Creating a facade for the services used by the viewmodel and passing this object to the ctor of my viewmodel. But then I'll have to create a facade for each of my complex viewmodels, and it might be a bit much...
    What do you think is the "right" way to implement this kind of architecture?
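    To make the second option concrete, a small sketch (Python used purely as neutral pseudocode; the service and viewmodel names are invented): one tiny facade per complex viewmodel keeps the constructor at a single dependency that is still trivial to fake in tests.

    class OrderEditorServices(object):
        """Facade grouping exactly the services this one viewmodel needs."""
        def __init__(self, order_service, customer_service, lookup_service):
            self.orders = order_service
            self.customers = customer_service
            self.lookups = lookup_service

    class OrderEditorViewModel(object):
        def __init__(self, services):
            self._services = services
            self.available_statuses = services.lookups.all_statuses()
        def load(self, order_id):
            self.order = self._services.orders.get(order_id)
            self.customer = self._services.customers.get(self.order['customer_id'])

    # Test doubles: one fake facade instead of N constructor parameters.
    class FakeLookupService(object):
        def all_statuses(self):
            return ['Open', 'Shipped']

    class FakeOrderService(object):
        def get(self, order_id):
            return {'id': order_id, 'customer_id': 7}

    class FakeCustomerService(object):
        def get(self, customer_id):
            return {'id': customer_id, 'name': 'Ada'}

    vm = OrderEditorViewModel(OrderEditorServices(FakeOrderService(),
                                                  FakeCustomerService(),
                                                  FakeLookupService()))
    vm.load(42)
    assert vm.customer['name'] == 'Ada'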

    Read the article

  • model association or controller?

    - by andybritton
    I'm trying to create a Rails app that allows users to submit information about their pets. I've come to a point where my knowledge is limited and I don't know enough about what to do or how to do it, so I'm hoping this will be relatively easy to answer. At the moment I have a model called Pet; this model currently stores basic information like name, picture, etc., but it also holds more specific data like type, breed, date of birth, etc. What I would like to be able to do is create a page that can match various records without them having to be manually categorized, if that makes sense, so a user's pet could be matched to other pets with the same breed, age, etc. I've read about nested models, as I understand this information could be submitted to 2 models in one form, but I am not sure whether this could be done directly in a separate controller which would only be visible to users with pets in these matched "groups", if that makes sense. So in essence: is it best practice to use 1 table to store all the information and just use a controller to match pets based on rows having the same values, or would it be far simpler to have a form with a nested model and link 2 tables together? The main feature needs to be matching without a user having to create a group or categorize pets, so the second model would need to add ids to an array instead of just creating more and more rows.

    Read the article

  • .NET Math Libraries

    - by JoshReuben
    .NET Mathematical Libraries

    .NET Builder for MATLAB (The MathWorks Inc.) - http://www.mathworks.com/products/netbuilder/
    MATLAB Builder NE generates MATLAB-based .NET and COM components with royalty-free deployment. It creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them.

    .NET/Link for Mathematica - www.wolfram.com
    A product that integrates Mathematica and Microsoft's .NET platform in both directions:
    - Call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language.
    - Use and control the Mathematica kernel from a .NET program.
    - Turns Mathematica into a scripting shell to leverage the computational services of Mathematica.
    - Write custom front ends for Mathematica or use Mathematica as a computational engine for another program.
    - Comes with full source code.
    Leverages MathLink - Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs. .NET/Link abstracts the low-level details of the MathLink C API.

    Extreme Optimization - http://www.extremeoptimization.com/
    A collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. Download the trial of version 4.0 to try it out.
    - Multi-core ready - full support for Task Parallel Library features including cancellation.
    - Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), equation solvers.
    Mathematics (leverages parallelism using .NET 4.0's Task Parallel Library):
    - Basic math: complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation.
    - Solving equations: solve equations in one variable, or solve systems of linear or nonlinear equations.
    - Curve fitting: linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials.
    - Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming.
    - Numerical integration: compute integrals over finite or infinite intervals, over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODEs).
    - Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit).
    - BigInteger, BigRational, and BigFloat: perform operations with arbitrary precision.
    Vector and Matrix Library:
    - Real and complex vectors and matrices. Single and double precision for elements.
    - Structured matrix types: including triangular, symmetrical and band matrices. Sparse matrices.
    - Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition.
    - Portability and performance: calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit).
    Statistics:
    - Data manipulation: sort and filter data, process missing values, remove outliers, etc. Supports .NET data binding.
    - Statistical models: simple, multiple, nonlinear, logistic, Poisson regression. Generalized Linear Models. One- and two-way ANOVA.
    - Hypothesis tests: 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests, such as the Anderson-Darling test for normality, one- and two-sample Kolmogorov-Smirnov test, and Levene's test for homogeneity of variances.
    - Multivariate statistics: k-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions.
    - Statistical distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions.
    - Random numbers: random variates from any distribution, 4 high-quality random number generators, low discrepancy sequences, shufflers.
    New in version 4.0 (November, 2010):
    - Support for .NET Framework Version 4.0 and Visual Studio 2010
    - TPL-parallelized, multicore ready
    - Sparse linear program solver - can solve problems with more than 1 million variables
    - Mixed integer linear programming using a branch and bound algorithm
    - Special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral
    - Full set of window functions for FFTs
    Pricing:
    - Single Developer License: $999 (update subscription $399)
    - Team License (3 developers): $1999 (update subscription $799)
    - Department License (8 developers): $3999 (update subscription $1599)
    - Site License (unlimited developers in one physical location): $7999 (update subscription $3199)

    NMath - http://www.centerspace.net
    .NET math and statistics libraries: matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, cluster analysis. Built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage).
    Pricing:
    - Single Developer License: $1295 (update subscription $388)
    - Team License (5 developers): $5180 (update subscription $1554)

    DotNumerics - http://www.dotnumerics.com/NumericalLibraries/Default.aspx (free)
    DotNumerics is a website dedicated to numerical computing for .NET that includes a C# Numerical Library for .NET containing algorithms for Linear Algebra, Differential Equations and Optimization problems. The Linear Algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively.
    - Linear Algebra (CSLapack, CSBlas and CSEispack): systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems.
    - Differential Equations: initial-value problems for nonstiff and stiff ordinary differential equations (ODEs) (explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton).
    - Optimization: unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).

    Math.NET Numerics - http://numerics.mathdotnet.com/ (free)
    An open source numerical library - includes special functions, linear algebra, probability models, random numbers, interpolation, integral transforms. A merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation it will also support native hardware optimization.
    - Constants & special functions
    - Complex type support
    - Real and complex, dense and sparse linear algebra (with LU, QR, eigenvalue, ... decompositions)
    - Non-uniform probability distributions, multivariate distributions, sample generation
    - Alternative uniform random number generators
    - Descriptive statistics, including order statistics
    - Various interpolation methods, including barycentric approaches and splines
    - Numerical function integration (quadrature) routines
    - Integral transforms, like the Fourier transform (FFT) with arbitrary length support, and Hartley
    - Spectral-space aware sequence manipulation (signal processing)
    - Combinatorics, polynomials, quaternions, basic number theory
    - Parallelized where appropriate, to leverage multi-core and multi-processor systems
    - Fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW)
    - Provides a native facade for F# developers

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought-leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:
    - More easily absorb shocks due to various forms of financial-economic stress
    - Improve risk management and governance
    - Enhance regulatory reporting and transparency
    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short and long term impacts on the banking industry. Some of the key features of Basel III include:
    - A stronger capital base
    - More stringent capital standards and higher capital requirements
    - Introduction of capital buffers
    - Additional risk coverage
    - Enhanced quantification of counterparty credit risk
    - Credit valuation adjustments
    - Wrong way risk
    - Asset Value Correlation Multiplier for large financial institutions
    - Liquidity management and monitoring
    - Introduction of leverage ratio
    - Even more rigorous data requirements
    To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: Data, Models and Compliance.
    Data Challenges
    - Data quality - All standard dimensions of Data Quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome and automation has become the norm.
    - Data lineage - Data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation out-dated quickly.
    - Data dictionaries - A strong and clean business glossary is needed with proper identification of business owners for the data.
    - Data integrity - A strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
    - Data relevance/coverage - Data must be relevant to all portfolios and storage devices must allow for sufficient data retention. Coverage of both on and off balance sheet exposures is critical.
    Model Challenges
    - Model development - Requires highly trained resources with both quantitative and subject matter expertise.
    - Model validation - All Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
    - Model documentation - All models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
    - Risk and finance integration - This integration is necessary for Basel as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management – and they need to somehow be equal. This is tricky at best from an implementation perspective.
    Compliance Challenges
    - Rules interpretation - Some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
    - Gap identification and remediation - Internal identification and remediation of gaps ensures smoother Basel compliance and audit processes. However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
    - Qualification readiness - Providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.
    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider:
    - How will Basel III play into their Risk Appetite?
    - How will they create project plans for Basel III when they haven't yet finished implementing Basel II?
    - How will new regulations impact capital structure, including profitability and capital distributions to shareholders?
    After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:
    - Increases consumer confidence, which in turn supports banking activity
    - Ensures that adequate funding is available for individuals and companies
    - Puts regulators at ease, allowing bankers to focus on banking
    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations within your bank.
    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span from enterprise risk management, credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update about Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.
    From Traditional IT to Clouds
    One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for external (public) cloud and parts that will be required to stay inside the firewalls, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day-to-day life, where self-service and flexibility are paramount. This is what is driving IT to transform itself toward "a Global Service Provider", or for some "IT 'is' the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day, there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models: what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the Cloud Provider, and what is left to the user.
    IaaS, PaaS, SaaS: what's provided and what needs to be done
    First of all, the Cloud Provider has to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. Providing pure IaaS leaves a lot to cover for the end-user, which is why the end-user targeted by this Cloud Service is IT people. If you want to bring more value to developers, you need to provide them with a development platform ready to use, which is what PaaS stands for, by providing not only the processing power, storage and OS, but also the Database and Middleware platform. SaaS is the last mile of the Cloud, providing an application ready to use by business users; the remaining part for the end-users is configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of Cloud Services:
    - Security: covering all aspects, not only user management but also data flows and data privacy
    - Charge back: measuring what is used and by whom
    - Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, through the Database and Middleware for PaaS, to a full Business Application for SaaS
    - Scalability: the ability to evolve ALL the components of the Cloud Provider stack as needed
    - Availability: the ability to cover "always on" requirements
    - Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
    - Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
    - Management: providing monitoring, configuration and self-service up to the end-users
    Oracle Strategy and Clouds
    For CIOs to succeed in their Private Cloud implementation means covering all those aspects for each component life-cycle they select to build their Cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency. That's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we are using to build Clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically when talking about Cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings in terms of implementation project timeline alone: with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all those aspects, in regard to your particular context, face2face on November 28th.

    Read the article

  • how to dispose a incoming email and then send some words back using googe-app-engine..

    - by zjm1126
    from google.appengine.api import mail

    I read the docs:

    mail.send_mail(sender="[email protected]",
                   to="Albert Johnson <[email protected]>",
                   subject="Your account has been approved",
                   body="""
    Dear Albert:

    Your example.com account has been approved.  You can now visit
    http://www.example.com/ and sign in using your Google Account to
    access new features.

    Please let us know if you have any questions.

    The example.com Team
    """)

    and I know how to send an email using GAE, but how do I handle an incoming email and then do something with it? Thanks
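    For the incoming direction, App Engine can route mail sent to [email protected] to your app. A minimal sketch, assuming the Python runtime with the webapp framework (the reply address and log text are placeholders; the reply's sender must be an address the app is allowed to send from, such as an app admin): enable the inbound mail service in app.yaml, map the /_ah/mail/ URLs to a handler, and subclass InboundMailHandler.

    # app.yaml (fragment)
    inbound_services:
    - mail

    handlers:
    - url: /_ah/mail/.+
      script: handle_incoming_email.py

    # handle_incoming_email.py
    import logging
    from google.appengine.api import mail
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
    from google.appengine.ext.webapp.util import run_wsgi_app

    class ReplyHandler(InboundMailHandler):
        def receive(self, message):
            # message is an InboundEmailMessage; bodies() yields (content_type, payload)
            for content_type, body in message.bodies('text/plain'):
                logging.info("Received mail from %s: %s", message.sender, body.decode())
            # ...do something with it, then send a few words back to the sender
            mail.send_mail(sender="[email protected]",  # placeholder
                           to=message.sender,
                           subject="Re: " + message.subject,
                           body="Thanks, your message was received.")

    application = webapp.WSGIApplication([ReplyHandler.mapping()], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()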

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >