Search Results

Search found 16050 results on 642 pages for 'linq to objects'.


  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it’s at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development the data binding story hasn’t been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it’s certainly feasible to do it from scratch (many of us have done it this way for years), it’s definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps and some even integrate directly with server frameworks like Node.js. Here’s a quick list of a few of the available libraries that support data binding (if you like any others please add a comment and I’ll try to keep the list updated):
    - AngularJS: MVC framework for data binding (although it closely follows the MVVM pattern).
    - Backbone.js: MVC framework with support for models, key/value binding, custom events, and more.
    - Derby: Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates.
    - Ember: Provides support for templates that automatically update as data changes.
    - JsViews: Data binding framework that provides “interactive data-driven views built on top of JsRender templates”.
    - jQXB Expression Binder: Lightweight jQuery plugin that supports bi-directional data binding.
    - KnockoutJS: MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS check out John Papa’s course on Pluralsight.
    - Meteor: End-to-end framework that uses Node.js on the server and provides support for data binding on the client.
    - Simpli5: JavaScript framework that provides support for two-way data binding.
    - WinRT with HTML5/JavaScript: If you’re building Windows 8 applications using HTML5 and JavaScript, there’s built-in support for data binding in the WinJS library.
    I won’t have time to write about each of these frameworks, but in the next post I’m going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way – in my opinion – to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I’m writing up the next post, feel free to visit the AngularJS developer guide if you’d like additional details about the API and want to get started using it.

    Read the article

  • First round playing with Memcached

    - by Shaun
    To be honest, I had not been very interested in caching before joining a project that uses multi-site deployment, handles high connection and concurrency loads, and is very sensitive to the user experience. That means we must cache the output data for better performance. After searching the Internet I finally settled on Memcached. What is Memcached? I think the description on its main site gives us a very good and simple explanation. Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages. The original Memcached was built on *nix systems and is widely used in the PHP world. Although it’s not a problem to use Memcached installed on a *nix system, there are fortunately some Windows versions available. Since we are WISC (Windows – IIS – SQL Server – C#, the opposite of LAMP) it would be much easier for us to use Memcached on Windows rather than *nix. I’m using the Memcached Win X64 version provided by NorthScale. There are also an x86 version and versions for other operating systems.
    Install Memcached
    Unpack the Memcached files to a folder on the machine where you want it installed. We can see that there are only 3 files, and the main file is “memcached.exe”. Memcached runs on the server as a service. To install the service, open a command window, navigate to the folder which contains “memcached.exe” – let’s say “C:\Memcached\” – and then type “memcached.exe -d install”. If you are using Windows Vista or Windows 7, execute the command in the administrator role: right-click the command item in the start menu and choose “Run as Administrator”, otherwise Memcached will not be installed successfully. Once installed successfully, we can type “memcached.exe -d start” to launch the service. Now it’s ready to be used. The default port of Memcached is 11211 but you can change it through a command argument. You can find the help by typing “memcached -h”.
    Using Memcached
    Memcached has many good, ready-to-use providers for various programming languages. After comparing and reviewing, I chose Memcached Providers. It’s built on top of another 3rd party Memcached client named enyim.com Memcached Client. Memcached Providers makes it very simple to set/get cached objects through the Memcached servers, and it is easy to configure through the application configuration file (aka web.config and app.config). Let’s create a console application for the demonstration and add the 3 DLL files from the Memcached Providers package to the project references. Then we need to add the configuration for the Memcached server. Create an App.config file and first add the sections at the top of it. Here we need three sections: one for Memcached Providers, one for the enyim.com Memcached client, and one for log4net.
        <configSections>
          <section name="cacheProvider"
                   type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders"
                   allowDefinition="MachineToApplication"
                   restartOnExternalChanges="true"/>
          <sectionGroup name="enyim.com">
            <section name="memcached"
                     type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/>
          </sectionGroup>
          <section name="log4net"
                   type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
        </configSections>
    Then we will add the configuration for the 3 of them in the App.config file. The Memcached server information is defined under the enyim.com section, since that component is responsible for connecting to the Memcached servers. Assuming I installed Memcached on two servers with the default port, the configuration would look like this.
        <enyim.com>
          <memcached>
            <servers>
              <!-- put your own server(s) here-->
              <add address="192.168.0.149" port="11211"/>
              <add address="10.10.20.67" port="11211"/>
            </servers>
            <socketPool minPoolSize="10" maxPoolSize="100"
                        connectionTimeout="00:00:10" deadTimeout="00:02:00"/>
          </memcached>
        </enyim.com>
    Memcached supports multi-server deployment, which means you can install Memcached on as many servers as you need. The Memcached protocol is responsible for routing the cached objects to the proper server, so it’s very easy to scale out your system with Memcached. Next, define the Memcached Providers configuration. The defaultExpireTime indicates how long an object cached in Memcached lives before it expires; the default value is 2000 ms.
        <cacheProvider defaultProvider="MemcachedCacheProvider">
          <providers>
            <add name="MemcachedCacheProvider"
                 type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders"
                 keySuffix="_MySuffix_"
                 defaultExpireTime="2000"/>
          </providers>
        </cacheProvider>
    The last configuration is for log4net.
        <log4net>
          <!-- Define some output appenders -->
          <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
            </layout>
          </appender>
          <!--<threshold value="OFF" />-->
          <!-- Setup the root category, add the appenders and set the default priority -->
          <root>
            <priority value="WARN"/>
            <appender-ref ref="ConsoleAppender">
              <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="WARN"/>
                <levelMax value="FATAL"/>
              </filter>
            </appender-ref>
          </root>
        </log4net>
    Get, Set and Remove the Cached Objects
    Once we finish the configuration it is very simple to consume the Memcached servers. Memcached Providers gives us a static class named DistCache that can be used to operate on the Memcached servers. Get<T> retrieves a cached object from the Memcached servers, returning null or the default value on failure; Add adds an object with a unique key into the Memcached servers. Assume we have an operation that retrieves an email address from a name and that it is time-consuming – this is the kind of operation that should be cached. The method looks like this; I used Thread.Sleep to simulate the long-running operation.
        static string GetEmailByNameSlowly(string name)
        {
            Thread.Sleep(2000);
            return name + "@ethos.com.cn";
        }
    Then, in the real retrieval method, we first check whether the name/email pair has been looked up previously and cached.
    If so, we just return it from Memcached; otherwise we invoke the slow method to retrieve it and then cache it.
        static string GetEmailByName(string name)
        {
            var email = DistCache.Get<string>(name);
            if (string.IsNullOrEmpty(email))
            {
                Console.WriteLine("==> The name/email is not in memcached so it needs slow loading. (name = {0})==>", name);
                email = GetEmailByNameSlowly(name);
                DistCache.Add(name, email);
            }
            else
            {
                Console.WriteLine("==> The name/email was already in memcached. (name = {0})==>", name);
            }
            return email;
        }
    Finally, let’s finish the calling method and execute.
        static void Main(string[] args)
        {
            var name = string.Empty;
            while (name != "q")
            {
                Console.Write("==> Please enter the name to find the email: ");
                name = Console.ReadLine();

                var email = GetEmailByName(name);
                Console.WriteLine("==> The email of {0} is {1}.", name, email);
            }
        }
    The first time I entered “ziyanxu” it took about 2 seconds to get the email, since nothing was cached. But the next time I entered “ziyanxu” it returned very quickly from Memcached.
    Summary
    In this post I explained a bit about why we need caching, what Memcached is, and how to use it from a C# application. The example is fairly simple but hopefully demonstrates how to use it. Memcached is very easy and simple to use, since it gives you full control over what, when and how to cache objects. And when using Memcached you don’t need to think about the individual cache servers: Memcached behaves like one huge object pool in front of you. The next steps I’m thinking about now are: What kind of data should be cached, and how should the keys be determined? How can the cache be implemented as a layer on top of the business layer so that the application will not notice that the cache is there? How can the cache be implemented with AOP so that the business logic doesn’t need to consider caching at all? I will investigate these in the future and share my thoughts and results.
    Hope this helps, Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Quick 2D sight area calculation algorithm?

    - by Rogach
    I have a matrix of tiles, and on some of those tiles there are objects. I want to calculate which tiles are visible to the player and which are not, and I need to do it quite efficiently (so it computes fast enough even when I have a big matrix (100x100) and lots of objects). I tried to do it with Bresenham's algorithm, but it was slow. Also, it gave me some errors:
        ----XXX-         ----X**-      ----XXX-
        -@------         -@------      -@------
        ----XXX-         ----X**-      ----XXX-
        (raw version)    (Bresenham)   (correct, since tunnel walls are still visible at distance)
    (@ is the player, X is an obstacle, * is invisible, - is visible) I'm sure this can be done - after all, we have NetHack, Zangband, and they all dealt with this problem somehow :) What algorithm can you recommend for this? EDIT: Definition of visible (in my opinion): a tile is visible when at least a part (e.g. a corner) of the tile can be connected to the center of the player's tile with a straight line which does not intersect any of the obstacles.
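    For reference, that EDIT definition can be prototyped directly. The C# sketch below (my own naming and a brute-force sampling approach, offered only as a baseline - roguelikes typically use something like recursive shadowcasting for speed) marks a tile visible if a ray from the player tile's center to any of its four corners crosses no obstacle:
        using System;

        static class SightSketch
        {
            // Visible if any corner of (tx,ty) can be reached from the
            // player tile's center without crossing a blocked tile.
            public static bool TileVisible(bool[,] blocked, int px, int py, int tx, int ty)
            {
                double cx = px + 0.5, cy = py + 0.5;
                for (int dx = 0; dx <= 1; dx++)
                    for (int dy = 0; dy <= 1; dy++)
                        if (ClearLine(blocked, cx, cy, tx + dx, ty + dy, tx, ty))
                            return true;
                return false;
            }

            // Naive segment sampling: steps along the line and fails when a
            // sample lands in a blocked tile other than the target itself.
            static bool ClearLine(bool[,] blocked, double x0, double y0,
                                  double x1, double y1, int tx, int ty)
            {
                int steps = (int)(4 * Math.Max(Math.Abs(x1 - x0), Math.Abs(y1 - y0))) + 1;
                for (int i = 1; i < steps; i++)
                {
                    double t = (double)i / steps;
                    int gx = (int)(x0 + (x1 - x0) * t);
                    int gy = (int)(y0 + (y1 - y0) * t);
                    if (gx == tx && gy == ty) continue; // the target tile itself
                    if (blocked[gx, gy]) return false;
                }
                return true;
            }
        }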

    Read the article

  • ASP.NET MVC Tabular Display Template

    The ASP.NET MVC 2 templates feature is a pretty nice way to quickly scaffold objects at runtime. Be sure to read Brad Wilson's fantastic series on this topic starting at ASP.NET MVC 2 Templates, Part 1: Introduction. As great as this feature is, there is one template that's conspicuously missing: ASP.NET MVC does not include a template for displaying a list of objects in a tabular format. Earlier today, ScottGu forwarded an email from Daniel Manes (what?! no blog! ;) with a question on how to accomplish...
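    One way such a tabular template can be built - a hypothetical sketch of the general technique, not the article's actual code - is a helper that reflects over the element type's properties and emits one column per property:
        using System;
        using System.Collections.Generic;
        using System.Reflection;
        using System.Text;
        using System.Web;
        using System.Web.Mvc;

        public static class TableDisplayExtensions
        {
            // Renders IEnumerable<T> as a table, one column per public property.
            public static MvcHtmlString DisplayTable<T>(this HtmlHelper helper, IEnumerable<T> items)
            {
                PropertyInfo[] props = typeof(T).GetProperties();
                var sb = new StringBuilder("<table><tr>");
                foreach (var p in props)
                    sb.AppendFormat("<th>{0}</th>", HttpUtility.HtmlEncode(p.Name));
                sb.Append("</tr>");
                foreach (T item in items)
                {
                    sb.Append("<tr>");
                    foreach (var p in props)
                        sb.AppendFormat("<td>{0}</td>",
                            HttpUtility.HtmlEncode(Convert.ToString(p.GetValue(item, null))));
                    sb.Append("</tr>");
                }
                sb.Append("</table>");
                return MvcHtmlString.Create(sb.ToString());
            }
        }
    In a view this would be invoked as <%= Html.DisplayTable(Model.Products) %>. A real display template would additionally honor ModelMetadata (display names, formatting, ScaffoldColumn) rather than raw reflection.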

    Read the article

  • Unstructured Data - The future of Data Administration

    Some have claimed that there is a problem with the way data is currently managed using the relational paradigm, due to the rise of unstructured data in modern business. PCMag.com defines unstructured data as data that does not reside in a fixed location, and further explains that unstructured data refers to data in free text form that is not bound to any specific structure. With the rise of unstructured data in the form of emails, spreadsheets, images and documents, the critics have a right to argue that the relational paradigm is not as effective as the object-oriented data paradigm in managing this type of data. The relational paradigm relies heavily on structure and relationships in and between items of data. This type of paradigm works best in a relational database management system like Microsoft SQL Server, MySQL, or Oracle, because data is forced to conform to a structure in the form of tables, and relations can be derived from the existence of one or more tables. These critics also claim that database administrators have not kept up with reality, because their primary focus in data administration deals with structured data and the relational paradigm. The relational paradigm was developed in the 1970s as a way to improve data management when compared to standard flat files. Little has changed since then, and modern database administrators need to know more than just how to handle structured data. That is why critics claim that today's data professionals do not have the proper skills to store and maintain data for modern systems when compared to the skills of system designers, programmers, software engineers, and data designers, due to the industry trend of object-oriented design and development. I think that they are wrong. I do not disagree that the industry is moving toward an object-oriented approach to development, with the potential to use more of an object-oriented approach to data. However, I think that it is business itself that is limiting database administrators from changing how data is stored, because of the potential costs and impact that might occur by altering any part of stored data. Furthermore, database administrators, like all technology workers, are constantly trying to improve their technical skills in order to excel at their jobs, so I think that accusing data professionals is not just when the root cause of the lack of innovation is controlled by business - and it is business that will suffer for its inability to keep up with technology. One way for database professionals to better prepare for the future of database management is to start working with data in the form of objects, so that they can extract the information stored within objects and use it in relation to the data stored using the relational paradigm. Furthermore, I think the use of pattern matching will increase with the increased use of unstructured data, because objects can be selected, filtered and altered based on the existence of a pattern found within an object.

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end-user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: “How do I…?” “Now what?” “Why isn’t this working?” “Why do I have to redo the work I just did?” It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier. How do I…? While developing an ETL application, have you ever asked yourself: “How do I set up the connection to my SQL Server database?”, “How do I import my table definitions from Access?”, etc.? An easy answer might be “read the manual”, but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it. Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user’s work will help users move forward in ETL application development. Why isn’t this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design→compile→error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle. Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects means that the application development effort should decrease over time as the library acquires more objects. I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • What Functional features are worth a little OOP confusion for the benefits they bring?

    - by bonomo
    After learning functional programming in Haskell and F#, the OOP paradigm seems ass-backwards with classes, interfaces, objects. Which aspects of FP can I bring to work that my co-workers can understand? Are any FP styles worth talking to my boss about retraining my team for, so that we can use them? Possible aspects of FP:
    - Immutability
    - Partial Application and Currying
    - First Class Functions (function pointers / Functional Objects / Strategy Pattern)
    - Lazy Evaluation (and Monads)
    - Pure Functions (no side effects)
    - Expressions (vs. Statements - each line of code produces a value instead of, or in addition to, causing side effects)
    - Recursion
    - Pattern Matching
    Is it a free-for-all where we can do whatever the programming language supports to the limit that language supports it? Or is there a better guideline?
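    For what it's worth, several of the aspects listed above already translate directly into mainstream C#. A small illustrative sketch (my own, with hypothetical names):
        using System;
        using System.Linq;

        class FpSampler
        {
            static void Main()
            {
                // First-class functions: a function is just a value.
                Func<int, int, int> add = (a, b) => a + b;

                // Currying / partial application, hand-rolled with closures.
                Func<int, Func<int, int>> curriedAdd = a => b => a + b;
                Func<int, int> addFive = curriedAdd(5);
                Console.WriteLine(addFive(3));                 // 8

                // Expressions + lazy evaluation: the query runs only when enumerated.
                var squares = Enumerable.Range(1, 5).Select(x => x * x);
                Console.WriteLine(string.Join(",", squares));  // 1,4,9,16,25
            }
        }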

    Read the article

  • Oracle Coherence & Oracle Service Bus: REST API Integration

    - by Nino Guarnacci
    This post aims to highlight one of the features found in Oracle Coherence which allows it to be easily added and integrated into a wide variety of projects: the REST API exposed by the Coherence nodes, through which you can interact with the whole in-memory data grid. Oracle Coherence and Oracle Service Bus are natively integrated through a feature found in the Oracle Service Bus which allows you to use the Coherence grid cache during the configuration phase of a business service. This feature allows you to use an intermediate cache layer to retrieve the answers from previous invocations of the same service, without necessarily having to invoke the real business service again. Directly from the web console of Oracle Service Bus, you can decide the eviction policies of the objects/answers and define the discriminating parameters that identify their uniqueness. The Coherence REST APIs, however, allow you to integrate both products for other needs, enabling the realization of new architecture designs. Consider a Coherence node as a simple service which interoperates through standard services, in particular REST (with JSON and XML) - think of Coherence as a company-wide shared service, a centralized "map and reduce" implementation which you can access through a huge variety of protocols (transports and envelopes). An amazing step forward for those who still imagine connectors and code. This type of integration does not require writing custom code or complex implementations to be self-supported. The added value is made unique by the incredible value of both products independently, and still more by their simple and robust integration. As already mentioned, this scenario uncovers a hidden new door behind the columns of these two products. The door leads to new ideas and perspectives for enterprise architectures that increasingly wink at next-generation applications: simple and dynamic, perhaps toward mobile and web 2.0. Below is a small and simple demo, useful to demonstrate how easy it is to integrate these two products using the Coherence REST API. This demo is also intended to help you imagine new enterprise architectures using this approach. The idea is to create a centralized alerting system, fed easily from any of the company's applications, regardless of the technology with which they were built. Then use a standard representation protocol - RSS, via a service exposed by the service bus - so you can browse and search only the alerts that interest you, by category, author, title, date, etc. The steps needed to implement this system are very simple and very few. They are listed below and described so they can be easily replicated within your environment. I would remind you that the demo is only meant to demonstrate how easy it is to integrate Oracle Coherence and the Oracle Service Bus, and to stimulate your imagination toward new technological approaches.
    1) Install the two products. In this demo I used (if necessary, consult the installation guides of the 2 products):
    - Oracle Service Bus ver. 11.1.1.5.0 http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html
    - Oracle Coherence ver. 3.7.1 http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html
    2) Because we chose to create a centralized alerting system, we need to define a structure type containing some alerting attributes, useful to preserve and organize the information of the various alerts sent by the different applications.
    Here a Java class named Alert was built, containing the canonical properties of an alert: Title, Description, System, Time, Severity. 3) We then need to create two configuration files for the Coherence node, in order to save the Alert objects within the grid through the REST/HTTP protocol (besides the native APIs for Java, C++, C, .Net). Here are the two minimal configuration files for Coherence: coherence-rest-config.xml and resty-server-config.xml. This minimal configuration lets me use a distributed cache named "alerts" that can also be accessed via HTTP/REST on host "localhost" over port "8080"; objects are of type "oracle.cohsb.Alert". 4) Below is a simple Java class that represents the alert message type. 5) At this point we just need to start up our Coherence node, able to listen on the HTTP protocol to manage the "alerts" cache, which will receive incoming XML or JSON objects of type Alert. Remember to include in the classpath of the Coherence node the Alert Java class and the required Coherence libraries and configuration files. Then just run the Coherence node class "com.tangosol.net.DefaultCacheServer"; I advise you to set the following parameters: -Dtangosol.coherence.log.level=9 -Dtangosol.coherence.log=stdout -Dtangosol.coherence.cacheconfig=[PATH_TO_THE_FILE]\resty-server-config.xml 6) Let's create a procedure to test our Coherence configuration and insert some custom alerts into our cache. The technology with which you implement this functionality hardly matters - JavaScript, Python, Ruby, Scala, C++, Java... - because the protocol used to communicate with Coherence is simply HTTP with JSON or XML. For this little demo I chose Java: a method to send/put the alert to the cache, a method to query and view the content of the cache, and finally the main method that executes them. No special library is added to the classpath for our class (the JSON structure is statically defined); when executed, it asks for some information such as title, description, ... in order to compose and send an alert to the cache, and then it performs a query against the same cache. A good exercise at this point would be to create the same procedure using other technologies, such as a simple HTML page containing some JavaScript code, or using Python, Ruby, and so on. 7) Now we are ready to start configuring the Oracle Service Bus in order to integrate the two products. First we integrate the internal alerting system of Oracle Service Bus with our centralized alerting system based on the Coherence node. This ensures that from monitoring, or directly from within our proxy message flows, we can throw alerts and save them directly into the Coherence node. To do this I chose JMS, natively present inside Oracle WebLogic / Service Bus. Access the Oracle WebLogic Administration console and create and configure a new JMS connection factory and a new JMS destination (queue). Now we create a new resource of type "alert destination" within our Oracle Service Bus project. The new "alert destination" resource should be configured using the newly created JMS connection factory and JMS destination.
    Finally, in order to pull the alert messages enqueued in our JMS destination and send them to our Coherence node, we just need to create a new business service and proxy service within our Oracle Service Bus project. Our business service is responsible for sending a message to our Coherence REST service, using PUT as the method action. Our proxy service then collects all messages enqueued on the destination and executes an XQuery transformation on those messages, translating them into valid XML Alert objects that can be sent to our Coherence service through the newly created business service. The message flow pipeline contains the XQuery transformation. Incredibly, we have just done a basic first integration between the native alerting system of Oracle Service Bus and our centralized alerting system, simply by configuring our Coherence node, without developing anything. It's time to test it. To do this I create a proxy service that generates an alert using our "alert destination" whenever the proxy is invoked. After a few invocations of our proxy, which generates fake alerts, we can open an Internet browser and type the URL http://localhost:8080/alerts/ to see what has been inserted into the Coherence node. 8) We are ready for the final step. We will create a new message flow that can be used to search alerts and display the results in a standard way. For this I chose the standard RSS representation, in order to display a formatted result on a huge variety of devices, such as readers for the iPhone and Android. The query can be defined already at request time, returning only the feed items related to our needs. To do this we need to create a new business service, a new proxy service, and finally a new XQuery transformation that takes care of translating the collection of alerts returned by our Coherence node into a nicely formatted, standard RSS document. So we start right from this resource (the XQuery), which has the task of transforming a collection of alerts (XML) returned from the Coherence node into a well-formatted RSS 2.0 feed; then our new business service, which searches the alerts on our Coherence node using the REST API; and finally our last resource, the proxy service, which will be exposed as an RSS feed to various mobile devices and traditional web readers, and in which we intercept any search query and transform the result returned by the business service into an RSS 2.0 feed. The message flow includes the transformation phase (Alert TO Feed Items). Finally, some little tricks to follow during the routing to the business service: check for any queries present in the URL requesting a subset of alerts, and set the HTTP header "Accept" to get an XML answer instead of JSON. In our little demo we also statically added some Coherence parameters to the request: sort=time:desc;start=0;count=100 - I would like Coherence to sort the results by date, starting from the first up to a maximum of 100. Done!! Just incredible: our centralized alerting system is ready.
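    Since the whole pipeline speaks plain HTTP, you can exercise it from any language. As an illustration only - the post's own clients are Java - here is a hypothetical C# equivalent of the step-6 test client (the key name, JSON property casing and values are my assumptions):
        using System;
        using System.Net;

        class AlertRestClient
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // PUT one alert into the "alerts" cache under the key "alert1".
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    string json = "{\"title\":\"Javascript error\"," +
                                  "\"description\":\"Unhandled exception on page X\"," +
                                  "\"system\":\"Web Browser\"," +
                                  "\"time\":\"2012-05-01T10:00:00\"," +
                                  "\"severity\":\"Fatal\"}";
                    client.UploadString("http://localhost:8080/alerts/alert1", "PUT", json);

                    // Query the whole cache back as JSON.
                    client.Headers[HttpRequestHeader.Accept] = "application/json";
                    Console.WriteLine(client.DownloadString("http://localhost:8080/alerts/"));
                }
            }
        }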
    It inherits all the qualities and capabilities of the two products involved, Oracle Coherence & Oracle Service Bus: RASP (Reliability, Availability, Scalability, Performance). Now try to use your mobile device, or a normal Internet browser, to access the RSS just published. Some URLs you may test:
    - Search for the last 100 alerts: http://localhost:7001/alarms
    - Search for alerts that do not have time set to null (time is not null): http://localhost:7001/alarms?q=time+is+not+null
    - Search for alerts whose system property is "Web Browser" (system = 'Web Browser'): http://localhost:7001/alarms?q=system+%3D+%27Web+Browser%27
    - Search for alerts whose system property is "Web Browser", severity is "Fatal" and title contains the word "Javascript" (system = 'Web Browser' and severity = 'Fatal' and title like '%Javascript%'): http://localhost:8080/alerts?q=system+%3D+%27Web+Browser%27+AND+severity+%3D+%27Fatal%27+AND+title+LIKE+%27%25Javascript%25%27
    To compose more complex queries for your needs, I suggest you read the chapter in the Coherence documentation on the Cohl language (Coherence Query Language): http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/api_cq.htm. Some useful links:
    - Oracle Coherence REST API Documentation: http://download.oracle.com/docs/cd/E24290_01/coh.371/e22839/rest_intro.htm
    - Oracle Service Bus Documentation: http://download.oracle.com/docs/cd/E21764_01/soa.htm#osb
    - REST explanation on Wikipedia: http://en.wikipedia.org/wiki/Representational_state_transfer
    The whole material for this demo can be downloaded at http://blogs.oracle.com/slc/resource/cosb/coh-sb-demo.zip
    Author: Nino Guarnacci.

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer, in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformation” Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this:
        SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60
    There are three types of replication:
    Transactional Replication - This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So, for example, when a user executes an UPDATE statement on one or multiple records in the Publisher database, this transaction (not the data itself) is copied to the Distribution Database and is then also executed on the Subscriber. When the Distribution Agent is set to run continuously, this process runs all the time and transactions on the Publisher are replicated in small batches (near real-time); when it runs at scheduled intervals it executes larger batches of transactions, but the idea is the same.
    Snapshot Replication - This type of replication makes an initial copy of the database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated Publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files stored in the snapshot folder, which is a normal folder on the file system. When all the schemas are ready, the data itself is copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s).
    Merge Replication - Merge replication is typically used in server-to-client environments, for example when Subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, as with mobile devices that need to synchronize once in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further.
    Replication Services in a BI environment
    Transactional Replication can be very useful in BI environments. In my opinion you never want to see users run custom (SSRS) reports or PowerPivot solutions directly against your production database; it can slow down the system and cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don’t need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be, or whether it really works as well for this purpose as I expect, but I want to try this out soon!

    Read the article

  • OpenGL ES 2.0: Using VBOs?

    - by Bunkai.Satori
    OpenGL VBOs (vertex buffer objects) have been developed to improve the performance of OpenGL (OpenGL ES 2.0 in my case). The logic is that, with the help of VBOs, the data does not need to be copied from client memory to the graphics card on a per-frame basis. However, as I see it, the game scene changes continuously: positions of objects change, their scaling and rotation change, they get animated, they explode, they get spawned or disappear. In such a highly dynamic environment as a computer game scene, what is the point of using VBOs, if the VBOs would need to be reconstructed on a per-frame basis anyway? Can you please help me understand how to practically take benefit of VBOs in computer games? Can there be more vertex-based VBOs (say, one per object) or must there always be exactly one VBO present for each draw cycle?
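    A key point the question touches on: per-object motion does not require touching the VBO at all, because position, rotation and scale travel to the shader through uniforms while the vertex data stays on the GPU. Below is a hypothetical C# sketch using the OpenTK bindings (treat the names as approximations of the glGenBuffers/glBufferData/glBufferSubData entry points, which are the same in ES 2.0; attribute setup and shader code are omitted). The pattern is one VBO per mesh - not one per frame, and not one single VBO per draw cycle:
        using System;
        using OpenTK;
        using OpenTK.Graphics.OpenGL;

        static class VboUsage
        {
            // Called once at load time: upload static mesh data.
            public static int CreateStaticVbo(float[] vertices)
            {
                int vbo;
                GL.GenBuffers(1, out vbo);
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.BufferData(BufferTarget.ArrayBuffer,
                              (IntPtr)(vertices.Length * sizeof(float)),
                              vertices, BufferUsageHint.StaticDraw);
                return vbo;
            }

            // Called every frame: no vertex copy needed; movement, rotation
            // and scale ride in on the model-view matrix uniform instead.
            public static void DrawFrame(int vbo, int mvLocation, Matrix4 modelView, int vertexCount)
            {
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.UniformMatrix4(mvLocation, false, ref modelView);
                GL.DrawArrays(PrimitiveType.Triangles, 0, vertexCount);
            }

            // Only for meshes whose vertices really change (e.g. CPU-skinned
            // or procedural geometry): overwrite the buffer in place rather
            // than recreating it.
            public static void UpdateDynamicVbo(int vbo, float[] vertices)
            {
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.BufferSubData(BufferTarget.ArrayBuffer, IntPtr.Zero,
                                 (IntPtr)(vertices.Length * sizeof(float)), vertices);
            }
        }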

    Read the article

  • Rotate to a set degree then reverse and repeat in Unity

    - by Ryan
    And thank you for your time. I'm making my first project in Unity, a simple game where touching objects adds points to the player's score. I'd like the objects to have a pleasant back-and-forth swaying animation on the Z axis: nodding to the right 30 degrees, then to the left 30 degrees, on and on. Here's what I've got...
        public class Rotator : MonoBehaviour
        {
            void Update()
            {
                transform.Rotate(new Vector3(0, 0, 12) * Time.deltaTime);
            }
        }
    This gives me a nice slow rotation. But I am clueless how to tell Unity to stop at +30 degrees, reverse to -30 degrees, rotate again to +30, stop and repeat, etc. I'd really appreciate any help. Maybe there is a thread like this that I was not able to find? I assume it will involve some kind of 'if then' condition? Thank you, Ryan
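    One common approach (an illustrative sketch, not the only way) is to stop accumulating rotation with Rotate and instead set the absolute angle each frame from a sine wave, which glides smoothly between the two limits:
        using UnityEngine;

        public class Swayer : MonoBehaviour
        {
            public float maxAngle = 30f; // swing limit in degrees
            public float speed = 2f;     // sway frequency fed to the sine

            void Update()
            {
                // Mathf.Sin moves smoothly between -1 and 1, so the Z angle
                // sways between -maxAngle and +maxAngle forever.
                float z = maxAngle * Mathf.Sin(Time.time * speed);
                transform.rotation = Quaternion.Euler(0f, 0f, z);
            }
        }
    If a constant angular speed with a hard reversal at the limits is preferred, Mathf.PingPong(Time.time * degreesPerSecond, 60f) - 30f yields the same ±30 range with linear motion instead of an eased one.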

    Read the article

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, “Automating Deployments with SQL Compare command line”, I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I’m looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change. There are many reasons why up-to-date documentation is valuable: for example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risks of making incorrect decisions when making changes. Documentation is very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need it most! SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate’s SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date. The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source-controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you’re using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don’t already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won’t cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate’s SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control. The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
$serverName = "server\instance" $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list $userName = "username" $password = "password" # Path to SQLDoc.exe $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe" $arguments = @( "/server:$($serverName)", "/database:$($databaseName)", "/username:$($userName)", "/password:$($password)", "/filetype:html", "/outputfolder:.", # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line "/name:$databaseName Report", "/copyrightauthor:$([Environment]::UserName)" ) write-host $arguments & $SQLDocPath $arguments There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line please see the SQL Doc command line documentation If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.  Conclusion Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be run on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.

    Read the article

  • Tuple in C# 4.0

    - by Jalpesh P. Vadgama
    The C# 4.0 language includes a new feature called Tuple. Tuple provides us a way of grouping elements of different data types, which is useful in lots of practical situations - for example, storing the coordinates of points on a graph. In C# 4.0 we create a Tuple with the Create method. The Create method offers 8 overloads, as follows, so you can group a maximum of 8 data types with a Tuple:
    - Create(T1) - represents a tuple of size 1
    - Create(T1,T2) - represents a tuple of size 2
    - Create(T1,T2,T3) - represents a tuple of size 3
    - Create(T1,T2,T3,T4) - represents a tuple of size 4
    - Create(T1,T2,T3,T4,T5) - represents a tuple of size 5
    - Create(T1,T2,T3,T4,T5,T6) - represents a tuple of size 6
    - Create(T1,T2,T3,T4,T5,T6,T7) - represents a tuple of size 7
    - Create(T1,T2,T3,T4,T5,T6,T7,T8) - represents a tuple of size 8
    Following is some example code for Tuple.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                    Console.WriteLine(tuple);

                    var t = System.Tuple.Create<int, string>(1, "Jalpesh");
                    Console.WriteLine(t);
                }
            }
        }
    The output is as expected. You can also access the values inside a Tuple with the ItemN properties, where N is the position of the particular item in the tuple. Following is an example.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama");
                    Console.WriteLine(tuple.Item1);
                    Console.WriteLine(tuple.Item2);
                    Console.WriteLine(tuple.Item3);
                }
            }
        }
    Here you can see I have printed the items with Item1, Item2 and Item3. We can even create a nested tuple; following is the code for a nested tuple.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace TupleExample
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var tuple = System.Tuple.Create(1, "Jalpesh", new Tuple<string, string>("P", "Vadgama"));
                    Console.WriteLine(tuple.Item1);
                    Console.WriteLine(tuple.Item2);
                    Console.WriteLine(tuple.Item3);
                }
            }
        }
    As you can see, the possibilities are unlimited - we can do lots of things with Tuple. Hope you liked it. Stay tuned for more. Till then, Happy Programming!!

    Read the article

  • The Evolution Of C#

    - by Paulo Morgado
    The first release of C# (C# 1.0) was all about building a new language for managed code that appealed, mostly, to C++ and Java programmers. The second release (C# 2.0) was mostly about adding what there wasn’t time to build into the 1.0 release; the main feature of this release was generics. The third release (C# 3.0) was all about reducing the impedance mismatch between general-purpose programming languages and databases. To achieve this goal, several functional programming features were added to the language, and LINQ was born. Going forward, new trends are showing up in the industry, and modern programming languages need to be more:
    Declarative - With imperative languages, although having an eye on the what, programs need to focus on the how. This leads to over-specification of the solution to the problem at hand, making it next to impossible for the execution engine to be smart about the execution of the program and optimize it to run more efficiently (given the hardware available, for example). Declarative languages, on the other hand, focus only on the what and leave the how to the execution engine. LINQ made C# more declarative by using higher-level constructs like orderby and group by that give the execution engine a much better chance of optimizing the execution (by parallelizing it, for example).
    Concurrent - Concurrency is hard, needs to be thought about, and is very hard to shoehorn into a programming language. Parallel.For (from the parallel extensions) looks like a parallel for because enough expressiveness has been built into C# 3.0 to allow this without having to commit to specific language syntax.
    Dynamic - There has been lots of debate on which are the better programming languages: static or dynamic. The fact is that both have good qualities, and users of both types of languages want to have it all.
    All these trends require a paradigm shift. C# is, in many ways, already a multi-paradigm language. It’s still very object-oriented (class-oriented, as some might say) but it can be argued that C# 3.0 has become a functional programming language, because it has all the cornerstones of what a functional programming language needs; moving forward, it will have even more. Besides the influence of these trends, there was a decision of co-evolution of the C# and Visual Basic programming languages. Since their inception, there has been some effort to position C# and Visual Basic against each other and to try to explain what should be done with each language or what kind of programmers use one or the other. Each language should be chosen based on the past experience and familiarity of the developer/team/project/company, not on particular features. In the past, every time a feature was added to one language, the users of the other wanted that feature too. Going forward, when a feature is added to one language, the other will work hard to add the same feature. This doesn’t mean that XML literals will be added to C# (because almost the same can be achieved with LINQ to XML), but Visual Basic will have auto-implemented properties. Most of these features require or are built on top of features of the .NET Framework, and the focus for C# 4.0 was on dynamic programming: not just dynamic types, but being able to talk with anything that isn’t a .NET class. Also introduced in C# 4.0 are co-variance and contra-variance for generic interfaces and delegates. Stay tuned for more on the new C# 4.0 features.
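    To make those two C# 4.0 additions concrete, here is a small illustrative sketch (mine, not from the post) of generic covariance and the dynamic keyword:
        using System;
        using System.Collections.Generic;

        class Evolution
        {
            static void Main()
            {
                // Covariance (C# 4.0): IEnumerable<out T> lets a sequence of a
                // derived type stand in where the base type is expected.
                IEnumerable<string> names = new List<string> { "Ana", "Bo" };
                IEnumerable<object> objects = names;   // legal in C# 4.0

                // dynamic (C# 4.0): member lookup deferred to run time, which
                // is what lets C# talk to COM, IronPython and other worlds
                // that aren't .NET classes.
                dynamic n = 40;
                Console.WriteLine(n + 2);              // resolved at run time: 42
            }
        }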

    Read the article

  • Silverlight Cream for February 22, 2011 -- #1050

    - by Dave Campbell
    In this Issue: Robby Ingebretsen, Victor Gaudioso, Andrea Boschin(-2-), Rudi Grobler(-2-), Michael Crump, Deborah Kurata, Dennis Delimarsky, Pete Vickers, Yochay Kiriaty, Peter Kuhn, WindowsPhoneGeek, and Jesse Liberty(-2-).
    Above the Fold:
    - Silverlight: "Silverlight Simple MVVM Printing" Deborah Kurata
    - WP7: "Creating theme friendly UI in WP7 using OpacityMask" WindowsPhoneGeek
    - Tools: "KAXAML v1.8" Robby Ingebretsen
    Shoutouts: Peter Foot posted Silverlight for Windows Phone Toolkit–Feb 2011. Rudi Grobler posts his top requested features for WP7, Silverlight, and WCF: vNext ... see you in Seattle, Rudi!
    From SilverlightCream.com:
    - KAXAML v1.8: Robby Ingebretsen just posted KAXAML v1.8, which now supports .NET 4.0, WPF, and Silverlight 4 ... go grab it.
    - Learn how to use Blend to create a Data Store, Add Properties to it, etc...: Victor Gaudioso has 3 new Silverlight and/or Expression Blend video tutorials up... first is this one on creating a Data Store and adding properties to it; next is: Send async messages across UserControls or even applications; followed by the latest: Create a Sketchflow Animation using the Sketchflow Animation Panel.
    - A base class for threaded Application Services: Andrea Boschin continues his IApplicationServices series with this one on a base class he created to develop Application Services that run a thread.
    - Windows Phone 7 - Part #6: Taking advantage of the phone: Andrea Boschin also has part 6 of his series at SilverlightShow on WP7... this one covers a bunch of items: Capabilities, Launchers/Choosers, and gestures... plus the source for a fun game.
    - {homebrew} Skype for WP7: Rudi Grobler posted about the availability of (some features of) Skype for WP7. The XDA guys have working contacts and the ability to chat going, plus they're looking for people to join in... follow Rudi's link, and let them know you're up for it!
    - Simple menu for your WP7 application: Rudi Grobler has another post up about a very simple menu control for WP7 that he produced that is also very easy to use.
    - Attaching a Command to the WP7 Application Bar: Michael Crump shows how to bind the application bar to a RelayCommand with the use of MVVM Light in 7 easy steps :)
    - Silverlight Simple MVVM Printing: Deborah Kurata continues her MVVM series with this one on printing what your user sees on the page... but doing so within the MVVM pattern.
    - Enhancing the general Zune experience on Windows Phone 7 with Zune web API: Dennis Delimarsky apparently likes the Zune as much as I do, and has ratted out tons of information about the Zune API for use in WP7 apps... and lots of code...
    - Validating input forms in Windows Phone 7: Pete Vickers takes a great detailed spin through validation on the WP7... the rules have changed, but Pete explains with some code examples.
    - Windows Phone Shake Gestures Library: Yochay Kiriaty discusses shake gestures for the WP7 device and then describes the "Windows Phone Shake Gesture Library" that detects shake gestures in 3D space... and after a great description has the link for downloading.
    - What difference does a sprite sheet make?: Peter Kuhn is writing a series at SilverlightShow on XNA for Silverlight devs that I've highlighted. An outshoot of that is this discussion of the use of sprite sheets and game development.
    - Creating theme friendly UI in WP7 using OpacityMask: WindowsPhoneGeek has a new post up today on using opacity masks in WP7 to enable using one set of icons for either the dark or light theme... too cool, you'll wanna check this out!
    - Linq to XML: Jesse Liberty continues with LINQ with regard to WP7 with this post on LINQ to XML... and why XML? crap... I was just saving/loading XML today! :)
    - Lambda–Not as weird as it sounds: Jesse Liberty then jumps into lambda expressions... maybe it's a chance for me to learn WTF the lambdas really do that I use all the time!
    Stay in the 'Light!

    Read the article

  • Silverlight Cream for February 10, 2011 -- #1045

    - by Dave Campbell
    In this Issue: Mark Monster, Jaime Rodriguez, Mark Hopkins, WindowsPhoneGeek, David Anson, Jesse Liberty, Jeremy Likness, Martin Krüger(-2-), Beth Massi, Joost van Schaik, Laurent Bugnion, and Arik Poznanski.
    Above the Fold:
    - Silverlight: "Parsing the Visual Tree with LINQ" Jeremy Likness
    - WP7: "Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively" David Anson
    - Lightswitch: "How to Send Automated Appointments from a LightSwitch Application" Beth Massi
    Shoutouts: Be sure to visit SilverlightShow... check out their top hits last week: SilverlightShow for Jan 31 - Feb 06, 2011. Jaime Rodriguez has a post up that all the WP7 folks will be interested in: FAQ about copy paste functionality in upcoming release.
    From SilverlightCream.com:
    - Make use of WCF FaultContracts in Silverlight clients: Mark Monster takes a shot at answering the "The remote server returned an error: NotFound" while-connecting-to-a-WCF-service problem we all see.
    - Communication between HTML in WebBrowser and Silverlight app: Jaime Rodriguez responds to questions he received about communication between HTML and Silverlight with this post about the bi-directional communication between the control and HTML.
    - WP7 - Real Apps, Real Code: Mark Hopkins has a post up about some WP7 starter kits that you can get all the source for, and you can actually download the app from the Marketplace first to see if it interests you!
    - WP7 AboutPrompt in depth: WindowsPhoneGeek has this cool post up about the AboutPrompt from the Coding4Fun toolkit in detail... great diagrams showing where all the elements are, and code examples with images.
    - Silverlight-ready PNG encoder implementation shows one way to use .NET IEnumerables effectively: David Anson describes why he took it upon himself to write his own PNG encoder for Silverlight... and we all thank him for doing so and providing us with the code!
    - Navigation 101–Cancelling Navigation: Jesse Liberty's latest WP7 From Scratch episode is up (number 32), and he's talking about navigation and how to cancel it if you need to.
    - Parsing the Visual Tree with LINQ: Jeremy Likness demonstrates using LINQ to rat out information in the visual tree of your XAML. To quote Jeremy: "you can easily check for intersections between elements and find any type of element no matter how deep within the tree it is".
    - SpriteAnimationBehavior: Martin Krüger has a couple more fun things in the Expression Gallery that I haven't discussed. First up is a behavior that animates up to 999 images and lets you control the FramesPerSecond... great demo on the Expression Gallery to play with.
    - Second alternative: Storyboard should not start before the Silverlight application is loaded: Martin Krüger's latest is a way to programmatically wait for the Loaded event so that you know you can let your animations fly.
    - How to Send Automated Appointments from a LightSwitch Application: Beth Massi's latest LightSwitch post follows up her Outlook automation one with sending appointments using the standard iCalendar format... all the code included, of course.
    - The case for the Bindable Application Bar for Windows Phone 7: Joost van Schaik posts about a bindable application bar for your WP7 apps... grab the code and don't leave home without it :)
    - MVVM Light V4 preview (BL0014) release notes: Laurent Bugnion posted an update to MVVM Light to CodePlex a couple days ago. This is an early preview of what he plans on having in version 4, so check out the post for what's new and fun.
    - Search Digg on Windows Phone 7: Arik Poznanski followed up his RSS post from last week with this one on searching Digg on WP7... he discusses and provides a utility class for doing it.
    Stay in the 'Light!

    Read the article

  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it's just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out! This tool lets you do the following:

    * Backup SQL Database (and SQL Server, to a limited extent)
    * Backup Azure Tables
    * Restore SQL backups into another SQL environment
    * Restore Azure Tables in Azure Storage, or a SQL environment
    * Manage and schedule database maintenance scripts
    * Drop database schema containers (with preview) for SaaS environments
    * Receive alerts (SMTP) when operations complete or fail

    That's it at a high level, but you need to see the flexibility around these features. For example, you can select a specific backup strategy for Azure Tables, allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. And you can set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account.

    Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time, so you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And given the number of options available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database).

    You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition lets you see how many objects you have in each schema, back up specific schemas, and even drop all objects in a given schema. For example, the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen showing which objects will be deleted.

    As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and gives you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • How to implement an Undo and Redo feature in AS3

    - by Swati Singh
    I am going to create an application in which I have to implement an undo and redo feature. There will be multiple objects located on the stage, and the user can customize the position of the objects. When the user clicks Undo, the objects go back to their default positions, and after clicking Redo they move to the new positions again. So my question is: how can I implement this feature in my application? Is there any library or third-party class for this? Can someone help me? Thanks in advance.
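    For what it's worth, the usual way to build this is the command pattern: wrap each user action in an object that knows how to apply and revert itself, and keep an undo stack and a redo stack. AS3 has no built-in undo framework, so the sketch below is written in C++ (the language used elsewhere on this page); all names are illustrative, and the shape translates directly to an AS3 interface plus two Arrays used as stacks.

        #include <memory>
        #include <stack>

        // Command pattern sketch for undo/redo: each action can apply and revert itself.
        struct Command {
            virtual ~Command() = default;
            virtual void apply() = 0;
            virtual void revert() = 0;
        };

        // Hypothetical move action: remembers old and new positions of one object.
        struct MoveCommand : Command {
            double& x; double& y;
            double oldX, oldY, newX, newY;
            MoveCommand(double& x_, double& y_, double nx, double ny)
                : x(x_), y(y_), oldX(x_), oldY(y_), newX(nx), newY(ny) {}
            void apply() override  { x = newX; y = newY; }
            void revert() override { x = oldX; y = oldY; }
        };

        class History {
            std::stack<std::unique_ptr<Command>> undoStack, redoStack;
        public:
            void execute(std::unique_ptr<Command> cmd) {
                cmd->apply();
                undoStack.push(std::move(cmd));
                redoStack = {};                    // a fresh action clears redo history
            }
            void undo() {
                if (undoStack.empty()) return;
                undoStack.top()->revert();
                redoStack.push(std::move(undoStack.top()));
                undoStack.pop();
            }
            void redo() {
                if (redoStack.empty()) return;
                redoStack.top()->apply();
                undoStack.push(std::move(redoStack.top()));
                redoStack.pop();
            }
        };

    The Memento pattern is the main alternative, worth considering when snapshotting an object's whole state is cheaper than recording individual actions.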

    Read the article

  • Item 2, Scott Meyers Effective C++ question

    - by user619818
    In Item 2 on page 16 ("Prefer consts, enums, and inlines to #defines"), Scott says: "Also, though good compilers won't set aside storage for const objects of integer types". I don't understand this. If I define a const object, e.g. const int myval = 5; then surely the compiler must set aside some memory (of int size) to store the value 5? Or is const data stored in some special way? This is more a question of computer storage, I suppose. Basically, how does the computer store const objects so that no storage is set aside?
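    What Meyers is describing is that for an integral constant whose value is known at compile time, the compiler can substitute the value directly into the instructions that use it, exactly as it would for the literal 5; it only has to give the object a memory location if something demands its address. A small illustration (whether storage is actually elided is a compiler decision, not a guarantee of the standard):

        #include <cstdio>

        int main() {
            const int myval = 5;      // value known at compile time
            int arr[myval];           // usable where a constant expression is required
            arr[0] = myval;           // the compiler may emit the literal 5 directly
            std::printf("%d\n", arr[0]);

            const int addressed = 7;
            const int* p = &addressed;   // taking the address forces the compiler
            std::printf("%d\n", *p);     //   to give 'addressed' a real location
            return 0;
        }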

    Read the article

  • Liskov Substitution Principle and the Oft Forgot Third Wheel

    - by Stacy Vicknair
    Liskov Substitution Principle (LSP) is a principle of object oriented programming that many might be familiar with from the SOLID principles mnemonic from Uncle Bob Martin. The principle highlights the relationship between a type and its subtypes, and, according to Wikipedia, is defined by Barbara Liskov and Jeanette Wing as the following principle:

    Let φ(x) be a property provable about objects x of type T. Then φ(y) should be provable for objects y of type S where S is a subtype of T.

    Rectangles gonna rectangulate

    The iconic example of this principle is illustrated with the relationship between a rectangle and a square. Let's say we have a class named Rectangle that has a property to set its width and a property to set its height.

        Public Class Rectangle
            Overridable Property Width As Integer
            Overridable Property Height As Integer
        End Class

    We all hear at some point that inheritance models an "IS A" relationship, and by gosh we all know a square IS A rectangle. So let's make a Square class that inherits from Rectangle. However, squares maintain the same length on every side, so let's override and add that behavior.

        Public Class Square
            Inherits Rectangle

            Private _sideLength As Integer

            Public Overrides Property Width As Integer
                Get
                    Return _sideLength
                End Get
                Set(value As Integer)
                    _sideLength = value
                End Set
            End Property

            Public Overrides Property Height As Integer
                Get
                    Return _sideLength
                End Get
                Set(value As Integer)
                    _sideLength = value
                End Set
            End Property
        End Class

    Now, say we had the following test:

        Public Sub SetHeight_DoesNotAffectWidth(rectangle As Rectangle)
            'arrange
            Dim expectedWidth = 4
            rectangle.Width = 4

            'act
            rectangle.Height = 7

            'assert
            Assert.AreEqual(expectedWidth, rectangle.Width)
        End Sub

    If we pass in a rectangle, this test passes just fine. What if we pass in a square? Setting Height to 7 also sets Width to 7, and the assertion fails. This is where we see the violation of Liskov's principle! A square might "IS A" to a rectangle, but we have different expectations of how a rectangle should behave than of how a square should!

    Great expectations

    Here's where we pat ourselves on the back, take a victory lap around the office, and tell everyone how we understand LSP like a boss. And all is good… until we start trying to apply it to our work. If I can't even change functionality on a simple setter without breaking the expectations on a parent class, what can I do with subtyping? Did Liskov just tell me to never touch subtyping again?

    The short answer: NO, SHE DIDN'T. When I first learned LSP, and from those I've talked with as well, I overlooked a very important but not appropriately stressed quality of the principle: our expectations. Our inclination is to want a logical catch-all, where we can easily apply this principle, wipe our hands, drop the mic, and exit stage left. That's not the case, because in every programming scenario our expectations of the parent class or type will be different. We have to set reasonable expectations on the behaviors that we expect out of the parent, then make sure that those expectations are met by the child. Any expectations not explicitly made of the parent aren't expected of the child either, and don't register as a violation of LSP that prevents implementation.
    You can see the flexibility mentioned in the Wikipedia article itself:

    "A typical example that violates LSP is a Square class that derives from a Rectangle class, assuming getter and setter methods exist for both width and height. The Square class always assumes that the width is equal with the height. If a Square object is used in a context where a Rectangle is expected, unexpected behavior may occur because the dimensions of a Square cannot (or rather should not) be modified independently. This problem cannot be easily fixed: if we can modify the setter methods in the Square class so that they preserve the Square invariant (i.e., keep the dimensions equal), then these methods will weaken (violate) the postconditions for the Rectangle setters, which state that dimensions can be modified independently. Violations of LSP, like this one, may or may not be a problem in practice, depending on the postconditions or invariants that are actually expected by the code that uses classes violating LSP. Mutability is a key issue here. If Square and Rectangle had only getter methods (i.e., they were immutable objects), then no violation of LSP could occur."

    What this means is that the situation above with a rectangle and a square can be acceptable if, in our application, we do not have the expectation that setting the width leaves the height unaffected, or vice versa.

    Conclusion – the oft forgot third wheel

    Liskov Substitution Principle is meant to act as guidance and to warn us against unexpected behaviors. Objects can be stateful, and as a result we can end up with unexpected situations if we don't code carefully. Specifically, when subclassing, make sure that the subclass meets the expectations held of its parent. Don't take LSP to mean that you can never deviate from the behaviors of the parent; rather, understand that LSP highlights the importance not only of the parent and the child class, but also of the expectations WE set for the parent class, and the necessity of meeting those expectations in order to help prevent sticky situations.

    Code examples are available in both VB and C#.
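    To make the mutability point concrete, here is a sketch of the immutable variant (written in C++ for brevity; the idea is language-neutral and not from the original post). With no setters, the only expectations a caller can hold of Rectangle are ones Square also meets, so the substitution is safe.

        #include <iostream>

        // Immutable variant: with no setters there is no "setting height leaves
        // width alone" expectation to break, so Square substitutes safely.
        class Rectangle {
        public:
            Rectangle(int w, int h) : width_(w), height_(h) {}
            virtual ~Rectangle() = default;
            int width() const  { return width_; }
            int height() const { return height_; }
            int area() const   { return width_ * height_; }
        private:
            int width_, height_;
        };

        class Square : public Rectangle {
        public:
            explicit Square(int side) : Rectangle(side, side) {}
        };

        int main() {
            Square s(4);
            const Rectangle& r = s;          // every Rectangle guarantee still holds
            std::cout << r.area() << '\n';   // 16
        }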

    Read the article

  • OpenGL migration from SFML to GLUT: vertex arrays and display lists are not displayed

    - by user3714670
    Due to using quad-buffered stereo 3D (which I have not included yet), I need to migrate my OpenGL program from an SFML window to a GLUT window. With SFML my vertex arrays and display lists were properly displayed; now with GLUT my window is blank white (or another color, depending on how I clear it). Here is the code that initialises the window:

        int type;
        int stereoMode = 0;
        if (stereoMode == 0)
            type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH;
        else
            type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO;
        glutInitDisplayMode(type);

        int argc = 0;
        char *argv = "";
        glewExperimental = GL_TRUE;
        glutInit(&argc, &argv);

        bool fullscreen = false;
        glutInitWindowSize(width, height);
        int win = glutCreateWindow(title.c_str());
        glutSetWindow(win);
        assert(win != 0);

        if (fullscreen) {
            glutFullScreen();
            width = glutGet(GLUT_SCREEN_WIDTH);
            height = glutGet(GLUT_SCREEN_HEIGHT);
        }

        GLenum err = glewInit();
        if (GLEW_OK != err) {
            fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
        }
        glutDisplayFunc(loop_function);

    This is the only code I had to change so far. Here is the code I used with SFML to display my objects in the loop; if I change the value passed to glClearColor, the window's background does change color:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glClearColor(255.0f, 255.0f, 255.0f, 0.0f);
        glLoadIdentity();

        sf::Time elapsed_time = clock.getElapsedTime();
        clock.restart();
        camera->animate(elapsed_time.asMilliseconds());
        camera->look();

        for (auto i = objects->cbegin(); i != objects->cend(); ++i)
            (*i)->draw(camera);

        glutSwapBuffers();

    Are there any other changes I should have made when switching to GLUT? It would be great if someone could enlighten me on the subject. In addition, I found that adding too many objects (which were handled fine with SFML) makes OpenGL report error 1285: out of memory. Maybe this is related.

    EDIT: Here is the code I use to draw each object, in case the problem lies there:

        GLuint LightID = glGetUniformLocation(this->shaderProgram, "LightPosition_worldspace");
        if (LightID == -1) cout << "LightID not found ..." << endl;
        GLuint MaterialAmbientID = glGetUniformLocation(this->shaderProgram, "MaterialAmbient");
        if (MaterialAmbientID == -1) cout << "MaterialAmbientID not found ..." << endl;
        GLuint MaterialSpecularID = glGetUniformLocation(this->shaderProgram, "MaterialSpecular");
        if (MaterialSpecularID == -1) cout << "MaterialSpecularID not found ..." << endl;

        glm::vec3 lightPos = glm::vec3(0, 150, 150);
        glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z);
        glUniform3f(MaterialAmbientID, MaterialAmbient.x, MaterialAmbient.y, MaterialAmbient.z);
        glUniform3f(MaterialSpecularID, MaterialSpecular.x, MaterialSpecular.y, MaterialSpecular.z);

        // Get a handle for our "myTextureSampler" uniform
        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if (TextureID == -1) cout << "TextureID not found ..." << endl;
        glActiveTexture(GL_TEXTURE0);
        sf::Texture::bind(texture);
        glUniform1i(TextureID, 0);

        // 2nd attribute buffer: UVs
        GLuint vertexUVID = glGetAttribLocation(shaderProgram, "color");
        if (vertexUVID == -1) cout << "vertexUVID not found ..." << endl;
        glEnableVertexAttribArray(vertexUVID);
        glBindBuffer(GL_ARRAY_BUFFER, color_array_buffer);
        glVertexAttribPointer(vertexUVID, 2, GL_FLOAT, GL_FALSE, 0, 0);

        GLuint vertexNormal_modelspaceID = glGetAttribLocation(shaderProgram, "normal");
        if (vertexNormal_modelspaceID == -1) cout << "vertexNormal_modelspaceID not found ..." << endl;
        glEnableVertexAttribArray(vertexNormal_modelspaceID);
        glBindBuffer(GL_ARRAY_BUFFER, normal_array_buffer);
        glVertexAttribPointer(vertexNormal_modelspaceID, 3, GL_FLOAT, GL_FALSE, 0, 0);

        GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
        if (posAttrib == -1) cout << "posAttrib not found ..." << endl;
        glEnableVertexAttribArray(posAttrib);
        glBindBuffer(GL_ARRAY_BUFFER, position_array_buffer);
        glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elements_array_buffer);
        glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);

        GLuint error;
        while ((error = glGetError()) != GL_NO_ERROR) {
            cerr << "OpenGL error: " << error << endl;
        }
        disableShaders();
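    For comparison, here is a minimal freeglut skeleton that reliably clears the window; everything in it is illustrative, not taken from the code above. Two things worth checking against that code: glutMainLoop() must actually be called or the display callback never runs, and glClearColor takes floats clamped to [0, 1], so 255.0f simply saturates to 1.0 (white).

        #include <GL/glut.h>

        static void display() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene here ...
            glutSwapBuffers();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);   // pass the real argc/argv from main
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
            glutInitWindowSize(800, 600);
            glutCreateWindow("migrated app");

            // GL state setup belongs after context creation, before the loop.
            glClearColor(0.2f, 0.2f, 0.2f, 1.0f);

            glutDisplayFunc(display);
            glutMainLoop();          // without this, nothing is ever rendered
            return 0;
        }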

    Read the article

  • concurrency::accelerator

    - by Daniel Moth
    Overview

    An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class. For many scenarios you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples of where you do need to deal with accelerator objects are if you need to pick your own accelerator (based on your specific criteria), or if you need to use more than one accelerator from your app.

    Construction and operator usage

    You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup. Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device if you know it (i.e. the "Device Instance Path" property for the device in Device Manager), e.g.

        accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc");

    There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor (and there are corresponding constants for those on the accelerator class itself, so you don't have to hardcode them every time). Examples are the following:

    * accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don't pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc;
    * accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref);
    * accelerator::direct3d_warp represents a target that I will cover in future blog posts. Example: accelerator acc(accelerator::direct3d_warp);
    * accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for the staging arrays technique that I'll cover separately. Example: accelerator acc(accelerator::cpu_accelerator);

    You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply assigning it to another accelerator instance (via the overloaded = operator). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator objects to determine whether they refer to the same underlying device.

    Querying accelerator characteristics

    Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting.
    Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or to check that the default one is good enough for your specific workload):

        void inspect_accelerator(concurrency::accelerator acc)
        {
            std::wcout << "New accelerator: " << acc.description << std::endl;
            std::wcout << "is_debug = " << acc.is_debug << std::endl;
            std::wcout << "is_emulated = " << acc.is_emulated << std::endl;
            std::wcout << "dedicated_memory = " << acc.dedicated_memory << std::endl;
            std::wcout << "device_path = " << acc.device_path << std::endl;
            std::wcout << "has_display = " << acc.has_display << std::endl;
            std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl;
        }

    accelerator_view

    In my next blog post I'll cover a related class: accelerator_view. Suffice to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method that creates an accelerator_view object for you (by also accepting a queuing_mode enum value of deferred or immediate that we'll also explore in the next blog post).

    Comments about this post by Daniel Moth welcome at the original blog.
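    As a concrete illustration of that kind of check, here is a small sketch of my own (not from the post): it enumerates the system's accelerators via accelerator::get_all() and picks the non-emulated one with the most dedicated memory, falling back to the runtime's default.

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        // Illustrative heuristic: prefer the real (non-emulated) device with the
        // most dedicated memory; 'best' starts out as the default accelerator.
        accelerator pick_accelerator()
        {
            accelerator best;
            size_t best_memory = 0;
            for (const accelerator& acc : accelerator::get_all()) {
                if (!acc.is_emulated && acc.dedicated_memory > best_memory) {
                    best_memory = acc.dedicated_memory;
                    best = acc;
                }
            }
            return best;
        }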

    Read the article

  • Dynamic Memory Allocation and Memory Management

    - by Bunkai.Satori
    In an average game there are hundreds or maybe thousands of objects in the scene. Is it completely correct to allocate memory for all objects, including gun shots (bullets), dynamically via the default new()? Should I create a memory pool for dynamic allocation, or is there no need to bother with this? What if the target platform is a mobile device? Is there a need for a memory manager in a mobile game? Thank you. Language used: C++; currently developed under Windows, but planned to be ported later.
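    One common answer, offered here as a sketch rather than a prescription: route short-lived, high-churn objects such as bullets through a fixed-size object pool, so the slots are allocated once up front and gameplay never touches the general-purpose allocator. The class below is illustrative C++ (the Bullet type in the usage comment is hypothetical); a production pool would add thread safety and a growth or overflow policy.

        #include <cstddef>
        #include <new>       // placement new
        #include <utility>
        #include <vector>

        // Fixed-size object pool: preallocates N slots for objects of type T and
        // recycles them through a free list, avoiding per-object heap allocations.
        template <typename T, std::size_t N>
        class ObjectPool {
        public:
            ObjectPool() {
                free_.reserve(N);
                for (std::size_t i = 0; i < N; ++i)
                    free_.push_back(&storage_[i]);
            }

            template <typename... Args>
            T* acquire(Args&&... args) {
                if (free_.empty()) return nullptr;   // pool exhausted; caller decides policy
                void* slot = free_.back();
                free_.pop_back();
                return new (slot) T(std::forward<Args>(args)...);  // construct in place
            }

            void release(T* obj) {
                obj->~T();                           // destroy in place
                free_.push_back(obj);
            }

        private:
            // Raw, suitably aligned storage; objects are constructed on demand.
            struct alignas(T) Slot { unsigned char bytes[sizeof(T)]; };
            Slot storage_[N];
            std::vector<void*> free_;
        };

        // Usage sketch (Bullet is a hypothetical game type):
        //   ObjectPool<Bullet, 512> bullets;
        //   Bullet* b = bullets.acquire(position, velocity);
        //   ...
        //   bullets.release(b);

    On mobile platforms the same reasoning applies more strongly: memory is tighter and fragmentation hurts more, so pooling the hot, uniform-sized objects is usually worth doing before writing any general-purpose memory manager.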

    Read the article

< Previous Page | 242 243 244 245 246 247 248 249 250 251 252 253  | Next Page >