Search Results

Search found 829 results on 34 pages for 'scalable'.

Page 4/34 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • I'm creating my own scalable, rapid prototyping web server. How should I design it?

    - by Mike Willliams
    I'm going to create my own web server that focuses on scalability, rapid prototyping and the use of JavaScript as the server's scripting language, much like node.js. It will use a Model-View-Controller design pattern so a web application can support more concurrent users just by adding hardware -- and not having to redesign the software. Basically, I'm aiming to produce a framework that allows for fast and easy development of cloud applications without the need to write lots of boilerplate code. I've got some questions about this... How hard will it be to put MySQL in the cloud? How could I go about implementing this and make the resulting product free? Will I have to write my own engine or modify an existing one, and if I do, what should I watch out for? To make this scalable I need to go from one server to hundreds of servers, which creates the requirement for load balancing across the servers: how should I do this? If I balance based on the workload per server, I would need a gateway to handle all the incoming requests. Is it the right idea to have all the servers check into the gateway and update their status? By having the servers run through a gateway, if the gateway dies all the incoming requests are ignored. I'm thinking that if all the servers maintain a list of each other, or at least a few of them, I could rebuild the list of servers and establish a new gateway. Is it worth it? Or should I have a backup gateway that could switch in? Should I let the user choose? How should I pick which server handles the database and which handles the page serving? Should I spread the database so that queries are performed on multiple servers, which would theoretically improve performance? The servers would need to mirror the database at least once so that if a server goes down the database isn't corrupted. This brings up another question: should I broadcast SQL queries so that all the servers can take a bit of the workload? If I do it that way, wouldn't a query clog up the network so that other queries couldn't be performed? What are my alternatives? Finally, is there a free solution already out there that might need a little modification that suits my needs?
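    A minimal sketch of the multi-process piece of this idea, using Node's built-in cluster module; the port number and handler logic are placeholders for illustration, not part of the original question, and a real design would still put a load balancer in front of many such boxes:

        // Sketch: the primary process forks one worker per CPU core so a single
        // box handles more concurrent requests; crashed workers are restarted.
        // Port 8080 is an arbitrary placeholder.
        import cluster from "node:cluster";
        import http from "node:http";
        import os from "node:os";

        if (cluster.isPrimary) { // isMaster on older Node versions
          for (let i = 0; i < os.cpus().length; i++) {
            cluster.fork(); // one worker per core
          }
          cluster.on("exit", () => cluster.fork()); // replace dead workers
        } else {
          http
            .createServer((_req, res) => {
              // a real framework would dispatch to MVC controllers here
              res.writeHead(200, { "Content-Type": "text/plain" });
              res.end(`handled by worker ${process.pid}\n`);
            })
            .listen(8080);
        }

    The same split (stateless workers behind a balancer) is what lets the hardware-only scaling described above work: workers can be added or removed without code changes.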

    Read the article

  • How can I replace email alerts for system events with something more scalable?

    - by Dave Forgac
    I have a number of systems and services that send email alerts when some sort of event takes place. This works fine for a small number of systems, but as the number of alerts grows the important messages become less visible among the informational notices. Email filtering can only be effective to a point. What sort of solution can I use in place of emails that will allow me to send arbitrary alerts from various services and that will scale easily as the number of services grows?

    Read the article

  • Is this way of using Excel 2007 Pivot table for BI scalable?

    - by Sim
    Hi all, Background: we need to consolidate sales data across the country to do analysis. Our Internet connection/IT expertise/IT investment is not that strong, so a full BI solution is out of the question. I tried several SaaS BI solutions (GoodData, ZohoReports) and while they're good, they don't seem to fully support what we need. We're looking at about 2 million records for every 2 months. My current approach: our (10) sites currently gather data from all their branches and consolidate it into 1 Excel file with a pivot table and embedded source data. At HQ, I will ask the 10 sites to send back those Excel files periodically. We will import those Excel files into our MSSQL server. There will be a master Excel file that will have the same pivot table (as the ones from the site Excel files), with the MSSQL server as its data source. More details: for testing, I currently use MSSQL 2008 Express on my laptop. So far, I have imported our transactions for the past 2 months and there are 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). DB size is ~600 MB. The master Excel file, not including the source data, is just < 10 MB. Including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?). I tried using the pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP). So my questions are: If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with 15 million rows in 1 table in SQL Express? Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.) Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any light-weight/budget/SaaS BI that you can recommend? Thanks

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web-application and have a couple of quick questions. From what I learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web-application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance, Amazon's EC2. From my understanding, I can see a couple of components: There is a load balancer that first receives requests and then decides where to route each request. The request is then handled by a server replica that processes it, updates the database (if required) and sends back the response to the client. If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache. A black box handles database replication. Specifically, I am trying to do the following: setting up a load balancer (my homework revealed that HAProxy is one such load balancer); setting up replication so that databases can be synchronized; using memcached; configuring Apache to work with multiple web servers; partitioning the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage). Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers with a simple load balancer and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
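    To make the caching component above concrete, here is a small cache-aside sketch. A plain in-process Map stands in for memcached purely for illustration (a real deployment would call a memcached client instead), and the loadUserFromDb helper and TTL value are invented for the example:

        // Cache-aside pattern: check the cache first, fall back to the database,
        // then populate the cache so the next identical request skips the DB.
        // The Map is a stand-in for memcached; loadUserFromDb is hypothetical.
        type User = { id: string; name: string };

        const cache = new Map<string, { value: User; expiresAt: number }>();
        const TTL_MS = 60_000; // invented TTL for the sketch

        async function loadUserFromDb(id: string): Promise<User> {
          // placeholder for a real query against the replicated database
          return { id, name: `user-${id}` };
        }

        async function getUser(id: string): Promise<User> {
          const hit = cache.get(id);
          if (hit && hit.expiresAt > Date.now()) {
            return hit.value; // served from cache
          }
          const user = await loadUserFromDb(id); // cache miss: go to the DB
          cache.set(id, { value: user, expiresAt: Date.now() + TTL_MS });
          return user;
        }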

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographical variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, etc. If you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database that you target a query at, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for a fast-response, scalable consumer profile store? Is there a shootout of these someplace?

    Read the article

  • Spring: Using "Lookup method injection" for my ThreadFactory doesn't look scalable.

    - by Michael Bavin
    Hi, We're building a ThreadFactory so that every time a singleton controller needs a new thread, it gets a new instance. Lookup method injection looks like a good fit, but what if we have multiple thread types? Like: public abstract class ThreadManager { public abstract Thread createThreadA(); public abstract Thread createThreadB(); } and config: <bean id="threadManager" class="bla.ThreadManager" singleton="true"> <lookup-method name="createThreadA" bean="threadA" /> <lookup-method name="createThreadB" bean="threadB" /> </bean> <bean id="threadA" class="bla.ThreadA"> <bean id="threadB" class="bla.ThreadB"> and usage: threadManager.createThreadA(); I don't want to create an abstract "create" method for every new thread class. Is it possible to make this generic, like: threadManager.createThread(ThreadA.class); Thank you

    Read the article

  • Any good tutorials or resources for learning how to design a scalable and "component" based game 'framework'?

    - by CodeJustin.com
    In short, I'm creating a 2D mmorpg and, unlike my last "mmo" I started developing, I want to make sure that this one will scale well and work well when I want to add new in-game features or modify existing ones. With my last attempt, an avatar chat, within the first few thousand lines of code and just getting basic features added into the game, I saw my code quality dropping, and my ability to add new features or modify old ones got worse as I added more features in. It turned into one big mess that somehow ran, lol. This time I really need to buckle down and find a design that will give me a game framework where it is easy to add and remove features (things like playing mini-games within my world, a mail system, a buddy list, or a new public area with interactive items). I'm thinking that maybe a component-based approach MIGHT be what I'm looking for, but I'm really not sure. I have read documents on mmorpg design and 2D game engine architecture, but nothing really explained a way of designing a game framework that will basically let me "plug in" new features into the main game and use the resources of the main game without changing much within my 'main game code'. Hope someone understands what I mean; any help is appreciated.
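    One common reading of "component based" is an entity-component design, where game objects are plain containers of pluggable components. The sketch below is a generic illustration of that idea; all class names are invented and it is not tied to any particular engine:

        // Entity-component sketch: features (mail system, buddy list, mini-games)
        // become components that can be attached or removed without touching the
        // core loop. Every name here is illustrative only.
        interface Component {
          update(dt: number): void;
        }

        class Entity {
          private components = new Map<string, Component>();

          add(name: string, c: Component): this {
            this.components.set(name, c);
            return this;
          }
          remove(name: string): void {
            this.components.delete(name);
          }
          update(dt: number): void {
            for (const c of this.components.values()) c.update(dt);
          }
        }

        class Position implements Component {
          constructor(public x = 0, public y = 0) {}
          update(_dt: number): void { /* e.g. apply velocity here */ }
        }

        class BuddyList implements Component {
          private buddies: string[] = [];
          addBuddy(name: string): void { this.buddies.push(name); }
          update(_dt: number): void { /* e.g. sync with the server */ }
        }

        // A player is just an entity with whatever features are plugged in.
        const player = new Entity()
          .add("position", new Position())
          .add("buddies", new BuddyList());
        player.update(16); // one frame tick

    New features then live in their own component classes instead of being woven into the main game code.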

    Read the article

  • Is there any way that an export-to-Excel function can be scalable?

    - by MusiGenesis
    Summary: ASP.Net website with a couple hundred users. Data is exported to Excel files which can be relatively large (~5 MB). In the pilot phase (just a few users), we are already seeing occasional errors on the server in the exporting method. Here's the stack trace: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.IO.MemoryStream.set_Capacity(Int32 value) at System.IO.MemoryStream.EnsureCapacity(Int32 value) at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.TrackingMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.SparseMemoryStream.WriteAndCollapseBlocks(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.SparseMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.CompressEmulationStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Packaging.CompressStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Zip.ProgressiveCrcCalculatingStream.Write(Byte[] buffer, Int32 offset, Int32 count) at MS.Internal.IO.Zip.ZipIOModeEnforcingStream.Write(Byte[] buffer, Int32 offset, Int32 count) at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder) at System.IO.StreamWriter.Write(String value) at System.Xml.XmlTextEncoder.Write(String text) at System.Xml.XmlTextWriter.WriteString(String text) at System.Xml.XmlText.WriteTo(XmlWriter w) at System.Xml.XmlAttribute.WriteContentTo(XmlWriter w) at System.Xml.XmlAttribute.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteContentTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteContentTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlElement.WriteContentTo(XmlWriter w) at System.Xml.XmlElement.WriteTo(XmlWriter w) at System.Xml.XmlDocument.WriteContentTo(XmlWriter xw) at System.Xml.XmlDocument.WriteTo(XmlWriter w) at System.Xml.XmlDocument.Save(Stream outStream) at OfficeOpenXml.ExcelWorksheet.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelWorksheet.cs:line 605 at OfficeOpenXml.ExcelWorkbook.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelWorkbook.cs:line 439 at OfficeOpenXml.ExcelPackage.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelPackage.cs:line 348 at Framework.Exporting.Business.ExcelExport.BuildReport(HttpContext context) at WebUserControl.BtnXLS_Click(Object sender, EventArgs e) in C:\TEMP\XXXXXXXXXX\XXXXXXXXXX\OneList\UserControls\TicketReportExporter.ascx.cs:line 108 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context) at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.XXXXXXXXXXX_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\XXXX\cdf32a52\d1a5eabd\App_Web_enxdwlks.1.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Even aside from this particular problem, exporting to Excel generally requires the instantiation of huge Excel objects on the server for each request, which I've always assumed disqualifies Excel for "serious" work on a highly loaded server. Is there any general way to export to Excel in a "light-weight" manner? Would simply streaming the data into a CSV file work for this?

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data: 500M entries now, and 200K new records every day. The total size is around 15 GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large): select * from T where timestamp > X and timestamp < Y and additionFilters. I want this operation to be cheap. Currently my table is hosted in Postgres 7 on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.

    Read the article

  • What's the most scalable way to handle somewhat large file uploads in a Python webapp?

    - by Jason Baker
    We have a web application that takes file uploads for some parts. The file uploads aren't terribly big (mostly word documents and such), but they're much larger than your typical web request and they tend to tie up our threaded servers (zope 2 servers running behind an Apache proxy). I'm mostly in the brainstorming phase right now and trying to figure out a general technique to use. Some ideas I have are: Using a python asynchronous server like tornado or diesel or gunicorn. Writing something in twisted to handle it. Just using nginx to handle the actual file uploads. It's surprisingly difficult to find information on which approach I should be taking. I'm sure there are plenty of details that would be needed to make an actual decision, but I'm more worried about figuring out how to make this decision than anything else. Can anyone give me some advice about how to proceed with this?

    Read the article

  • Is nginx / node.js / postgres a very scalable architecture?

    - by Luc
    I have an app running with: one instance of nginx as the frontend (serving static files), a cluster of node.js applications for the backend (using the cluster and expressjs modules), and one instance of Postgres as the DB. Is this architecture sufficient if the application needs scalability (this is only for HTTP / REST requests) for 500 requests per second (each request only fetches data from the DB; the data could be several KB, with no big computation needed after the fetch) and 20,000 users connected at the same time? Where could the bottlenecks be?
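    One of the first bottlenecks in this kind of stack is usually the number of open Postgres connections, so the node.js backend normally goes through a connection pool. A small sketch with the node-postgres (pg) package follows; the pool size, connection settings and query are placeholder values for illustration, not measurements for this particular app:

        // Connection pooling keeps the number of open Postgres connections bounded
        // even with thousands of concurrent HTTP clients. The settings below are
        // illustrative, not tuned recommendations.
        import { Pool } from "pg";

        const pool = new Pool({
          host: "localhost",
          database: "appdb",        // placeholder database name
          max: 20,                  // at most 20 simultaneous DB connections
          idleTimeoutMillis: 30_000,
        });

        export async function fetchItems(userId: number): Promise<unknown[]> {
          // each call borrows a connection from the pool and returns it afterwards
          const result = await pool.query("SELECT * FROM items WHERE user_id = $1", [userId]);
          return result.rows;
        }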

    Read the article

  • custom icon in "Places" menu

    - by Gurnarok
    Hi, I'm pretty new to Ubuntu, but I know the basics. I read "How do I change the folder icons in the "Places" menu?" but it didn't help; I tried option #2 since I want to specify the icon for a single directory (for now). So what should I do? I have the folder image as an SVG in: /usr/share/icons/hicolor/scalable/apps/ /usr/share/icons/hicolor/scalable/places/ The image I want to use: and where I want it:

    Read the article

  • How to create a PPA for C++ program?

    - by piotr
    My question is: how do I make a PPA package from a C++/gtkmm project created with NetBeans? I have created the target file structure (*.desktop file, icon file, UI glade files). The binary goes to /opt/extras.ubuntu.com/myagenda/bin/myagenda. There is also a folder of glade files that must go to /opt/extras.ubuntu.com/myagenda/bin/myagenda/ui. The desktop file goes to /usr/share/applications/myagenda.desktop. The icon goes to /usr/share/icons/hicolor/scalable/apps/myagenda.svg. As you can see, there is a really small number of files. Now, how do I manage all this to create a package on a PPA that knows where and how to put these files in their target locations?
    +-- opt
    |   +-- extras.ubuntu.com
    |       +-- myagenda
    |           +-- bin
    |           |   +-- myagenda
    |           +-- ui
    |               +-- item_btn_delete.png
    |               +-- item_btn_edit.png
    |               +-- myagenda.png
    |               +-- myagenda.svg
    |               +-- reminder.png
    |               +-- ui.glade
    +-- usr
        +-- share
            +-- applications
            |   +-- myagenda.desktop
            +-- icons
                +-- hicolor
                    +-- scalable
                        +-- apps
                            +-- myagenda.svg
    Update: I created an install file in the debian directory with these targets:
    data/myagenda /opt/extras.ubuntu/com/myagenda/bin
    data/ui/* /opt/extras.ubuntu/com/myagenda/ui
    data/myagenda.desktop /usr/share/applications
    data/myagenda.svg /usr/share/icons/hicolor/scalable/apps
    After dpkg-buildpackage it builds, but for the amd64 architecture. Now I'm trying to change that to i386.

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.
    What's in a Property Graph? In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail --- a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.
    Key-Value Stores for Property Graphs Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example: Vertex["name"] = "Mary", Vertex["age"] = 28, Vertex["ID"] = 12345, and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.
    Oracle NoSQL Database as a Scalable Graph Database While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple. Major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices /Personnel/Vertex/1 and /Personnel/Vertex/2 may be stored on different servers, but /Personnel/Vertex/1-/name and /Personnel/Vertex/1-/age will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator which allows us to store a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).
    Fork It! The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
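    To make the major/minor key idea concrete, here is a toy sketch of how a vertex's properties all hang off one major key. It is written in TypeScript rather than the Java Blueprints implementation the post describes, and the in-memory Map is only a stand-in for the real store:

        // Toy model of the key structure described above: the major key identifies
        // the vertex, and each property lives under a minor key of that major key.
        // In Oracle NoSQL Database all minor keys of a major key land on the same
        // storage node; a Map keyed by major key mimics that co-location here.
        type PropertyMap = Map<string, string>;

        const store = new Map<string, PropertyMap>(); // major key -> its minor keys

        function putVertexProperty(vertexId: string, prop: string, value: string): void {
          const majorKey = `/Personnel/Vertex/${vertexId}`;
          if (!store.has(majorKey)) store.set(majorKey, new Map());
          store.get(majorKey)!.set(prop, value); // e.g. minor key "name"
        }

        function getVertex(vertexId: string): PropertyMap | undefined {
          // one lookup by major key returns every property of the vertex,
          // mirroring the single-storage-node read described in the post
          return store.get(`/Personnel/Vertex/${vertexId}`);
        }

        putVertexProperty("1", "name", "Mary");
        putVertexProperty("1", "age", "28");
        console.log(getVertex("1")); // Map { 'name' => 'Mary', 'age' => '28' }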

    Read the article

  • Microsoft TypeScript: A Typed Superset of JavaScript

    - by shiju
    JavaScript is gradually becoming a ubiquitous programming language for the web, and its popularity is increasing day by day. Earlier, JavaScript was just a language for the browser. But now, we can write JavaScript apps for the browser, the server and mobile. With the advent of Node.js, you can build scalable, high performance apps on the server with JavaScript. But many developers, especially developers who are working with statically typed languages, dislike JavaScript due to its lack of structure and the maintainability problems of JavaScript. Microsoft TypeScript tries to solve some of these problems of JavaScript when we are building scalable JavaScript apps.
    Microsoft TypeScript TypeScript is Microsoft's solution for writing scalable JavaScript programs with the help of Static Types, Interfaces, Modules and Classes along with greater tooling support. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. This is more productive for developers who are coming from statically typed languages. You can write scalable JavaScript apps in TypeScript in a more productive and more maintainable manner, and later compile to plain JavaScript which will run on any browser and any OS. TypeScript works with browser-based JavaScript apps and with JavaScript apps that follow the CommonJS specification. You can use TypeScript for building HTML5 apps, Node.js apps and WinRT apps. TypeScript provides better tooling support with Visual Studio, Sublime Text, Vi and Emacs. Microsoft has open sourced its TypeScript language on CodePlex at http://typescript.codeplex.com/
    Install TypeScript You can install the TypeScript compiler as a Node.js package via NPM, or you can install it as a Visual Studio 2012 plug-in which gives you better tooling support within the Visual Studio IDE. Since TypeScript is distributed as a Node.js package, it can be installed on other OSes such as Linux and MacOS. The following command installs the TypeScript compiler via an npm package for Node.js: npm install -g typescript TypeScript also provides a Visual Studio 2012 plug-in as an MSI file which installs TypeScript and provides great tooling support within Visual Studio, letting developers write TypeScript apps with greater productivity and better maintainability. You can download the Visual Studio plug-in from here
    Building JavaScript apps with TypeScript You can write typed versions of JavaScript programs with TypeScript and then compile them to plain JavaScript code. The beauty of TypeScript is that it is already JavaScript: normal JavaScript programs are valid TypeScript programs, which means that you can write normal JavaScript code and use the typed version of JavaScript whenever you want. TypeScript files use the extension .ts and are compiled with a compiler named tsc.
    The following is a sample program written in TypeScript.
    greeter.ts:
        class Greeter {
            greeting: string;
            constructor (message: string) {
                this.greeting = message;
            }
            greet() {
                return "Hello, " + this.greeting;
            }
        }
        var greeter = new Greeter("world");
        var button = document.createElement('button')
        button.innerText = "Say Hello"
        button.onclick = function() {
            alert(greeter.greet())
        }
        document.body.appendChild(button)
    The above program is compiled with the TypeScript compiler, as shown in the picture below. The TypeScript compiler generates a JavaScript file after compiling the TypeScript program. If your TypeScript program has any references to other TypeScript files, it will automatically generate JavaScript files for each referenced file. The following code block shows the compiled plain JavaScript for the above greeter.ts.
    greeter.js:
        var Greeter = (function () {
            function Greeter(message) {
                this.greeting = message;
            }
            Greeter.prototype.greet = function () {
                return "Hello, " + this.greeting;
            };
            return Greeter;
        })();
        var greeter = new Greeter("world");
        var button = document.createElement('button');
        button.innerText = "Say Hello";
        button.onclick = function () {
            alert(greeter.greet());
        };
        document.body.appendChild(button);
    Tooling Support with Visual Studio TypeScript provides a plug-in for Visual Studio which offers excellent support for writing TypeScript programs within the Visual Studio IDE. The following screenshot shows the Visual Studio template for TypeScript apps. The following are a few screenshots of the Visual Studio IDE for TypeScript apps.
    Summary TypeScript is Microsoft's solution for writing scalable JavaScript apps, and it solves a lot of the problems involved in larger JavaScript apps. I hope that this solution will attract a lot of developers who are really looking to write maintainable, structured code in JavaScript without losing any productivity. TypeScript lets developers write JavaScript apps with the help of Static Types, Interfaces, Modules and Classes while also providing better productivity.
    I am a passionate developer on Node.js and will definitely try to use TypeScript for building Node.js apps on the Windows Azure cloud. I am really excited about writing Node.js apps using TypeScript from my favorite development IDE, Visual Studio. You can follow me on twitter at @shijucv
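    As a follow-up to the Greeter example earlier in this post, the snippet below illustrates the kind of static checking the article describes: an interface plus type annotations that the compiler verifies before emitting plain JavaScript. The interface and function names are made up for illustration:

        // Hypothetical example of TypeScript's static types and interfaces.
        // The compiler rejects the commented-out call at compile time, while the
        // emitted JavaScript stays plain and framework-free.
        interface Customer {
          id: number;
          name: string;
        }

        function describe(customer: Customer): string {
          return `${customer.id}: ${customer.name}`;
        }

        const ok = describe({ id: 1, name: "Mary" }); // type-checks
        // const bad = describe({ id: "1" });         // compile-time error: wrong shape
        console.log(ok);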

    Read the article

  • Webkit iPhone App: How to dynamically change the user zoom (or scale, pinch & zoom) in the viewport

    - by Samuel Michelot
    I use JQTouch for an iPhone app. JQTouch disables by default the possibility to pinch & zoom the page. For one page (containing a big image), I need to enable the pinch & zoom feature. This is easy: var viewport = $("head meta[name=viewport]"); viewport.attr('content', 'width=320, initial-scale=1, maximum-scale=10.0, minimum-scale=1, user-scalable=1'); But after the user has played with pinch & zoom, I need to dynamically reset the zoom (scale) to the default. I tried to reset the viewport: viewport.attr('content', 'width=320, initial-scale=1.0, maximum-scale=1.0, user-scalable=0;'); After calling the above code, it's not possible to zoom anymore (because of user-scalable=0;), but it doesn't change the current scale back to the default. I am looking for something like setScale(1), or a way to change an attribute like current-scale=1. Any idea?
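    A plain-DOM version of the workaround people usually suggest is sketched below: it rewrites the viewport meta tag to momentarily pin the scale to 1 and then restores the zoomable settings. Whether this actually resets the current pinch-zoom level varies by iOS version, so treat it as an assumption to test rather than a guaranteed fix:

        // Sketch: force the scale back to 1 by temporarily clamping min/max scale,
        // then re-enable pinch & zoom. Behaviour differs across Mobile Safari
        // versions, so this needs testing on the target devices.
        function resetViewportZoom(): void {
          const viewport = document.querySelector<HTMLMetaElement>('meta[name="viewport"]');
          if (!viewport) return;

          // clamp: scale pinned to 1 and zooming disabled forces a re-layout at 1x
          viewport.setAttribute(
            "content",
            "width=320, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=0"
          );

          // restore the zoomable settings on the next tick
          window.setTimeout(() => {
            viewport.setAttribute(
              "content",
              "width=320, initial-scale=1.0, minimum-scale=1.0, maximum-scale=10.0, user-scalable=1"
            );
          }, 0);
        }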

    Read the article

  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little more memory-hungry than other Java processes. If the XSLT scalable option is not set and, at the same time, your RTF template is not well optimized, you are definitely going to hit an Out of memory exception while working with large volumes of data. If the memory requirement is not too bad, you can set the OPP heap size using the following SQL queries.
    Check the current OPP JVM heap size using the following SQL query:
    SQL> select DEVELOPER_PARAMETERS from FND_CP_SERVICES where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    DEVELOPER_PARAMETERS
    -----------------------------------------------------
    J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m
    Set the JVM heap size using the following SQL query:
    SQL> update FND_CP_SERVICES set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m' where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    SQL> commit;
    You need to restart the Concurrent Manager to make the change effective. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.

    Read the article

  • Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21

    - by Pinal Dave
    In yesterday's blog post we learned how to become a Data Scientist for Big Data. In this article we will go over various learning resources related to Big Data. In this series we have covered many of the most essential details about Big Data. At the beginning of this series, I encouraged readers to send me questions. One of the most popular questions is: "I want to learn more about Big Data. Where can I learn it?" This is indeed a great question, as there are plenty of resources out there for learning about Big Data and it is indeed difficult to select just one. Hence I decided to list here a few of the most important resources related to Big Data.
    Learn from Pluralsight Pluralsight is a global leader in high-quality online training for hardcore developers. It has fantastic Big Data courses, and I started to learn about Big Data with the help of Pluralsight. Here are a few of the courses which are directly related to Big Data: Big Data: The Big Picture; Big Data Analytics with Tableau; NoSQL: The Big Picture; Understanding NoSQL; Data Analysis Fundamentals with Tableau. I encourage all of you to start with these video courses as they are fantastic fundamentals for learning Big Data.
    Learn from Apache The resources at Apache are the single most authentic learning resources. If you want to learn the fundamentals and go deep into every aspect of Big Data, I believe you must understand the various projects in Apache's library. I am pretty impressed with the documentation and I personally reference it every single day when I work with Big Data. I strongly encourage all of you to bookmark the following links for authentic Big Data learning. Hadoop - The Apache Hadoop® project develops open-source software for reliable, scalable, distributed computing. Ambari: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heat maps, and the ability to view MapReduce, Pig and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner. Avro: A data serialization system. Cassandra: A scalable multi-master database with no single points of failure. Chukwa: A data collection system for managing large distributed systems. HBase: A scalable, distributed database that supports structured data storage for large tables. Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying. Mahout: A scalable machine learning and data mining library. Pig: A high-level data-flow language and execution framework for parallel computation. ZooKeeper: A high-performance coordination service for distributed applications.
    Learn from Vendors One of the biggest issues with learning Big Data is setting up the environment. Every Big Data vendor has different environment requirements and there are lots of things required to set up a Big Data framework. Many users do not start with Big Data because they are afraid of the resources required to set up the framework as well as the time commitment. Here Hortonworks has created a fantastic learning environment. They have created a Sandbox with everything one needs to learn Big Data and have also provided excellent tutorials along with it.
    The Sandbox comes with a dozen hands-on tutorials that will guide you through the basics of Hadoop, and it contains the Hortonworks Data Platform. I think Hortonworks did a fantastic job building this Sandbox and tutorial. Though there are plenty of different Big Data vendors, I have decided to list only Hortonworks due to their unique setup. Please leave a comment if there are any other such platforms for learning Big Data. I will include them over here as well.
    Learn from Books There are indeed a few good books out there which one can refer to for learning Big Data. Here are a few good books which I have read. I will update the list as I learn more. Ethics of Big Data: Balancing Risk and Innovation; Big Data for Dummies; Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions. If you search on Amazon there are millions of books, but I think the above three are a great set and they will give you great ideas about Big Data. Once you go through the above books, you will have a clear idea about the next step you should follow in this series. You will be capable enough to make the right decision for yourself.
    Tomorrow In tomorrow's blog post we will wrap up this series on Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Test case as a function or test case as a class

    - by GodMan
    I am having a design problem in test automation. Requirements: need to test different servers (using a unix console, not a GUI) through the automation framework. Tests which I'm going to run: Unit, System, Integration. Question: While designing a test case, I am thinking that a test case should be part of a test suite (where the test suite is a class), just as we have in Python's pyunit framework. But should we keep test cases as functions for a scalable automation framework, or should we keep test cases as separate classes (each having their own setup, run and teardown methods)? From an automation perspective, which idea is more scalable and maintainable: having a test case as a class, or as a function?
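    A framework-agnostic way to compare the two shapes is sketched below (in TypeScript rather than pyunit, purely to illustrate the structure): the class form bundles setup/run/teardown per test, while the function form leaves the suite to manage shared fixtures. All names are invented:

        // Option 1: each test case is its own class with setup/run/teardown.
        // This scales well when cases need very different fixtures.
        abstract class TestCase {
          setUp(): void {}
          abstract run(): void;
          tearDown(): void {}
          execute(): void {
            this.setUp();
            try {
              this.run();
            } finally {
              this.tearDown();
            }
          }
        }

        class LoginOverConsoleTest extends TestCase {
          setUp(): void { /* open the unix console session */ }
          run(): void { /* issue commands, assert on the output */ }
          tearDown(): void { /* close the session */ }
        }

        // Option 2: test cases are plain functions collected by a suite class,
        // which owns any shared setup/teardown instead.
        type TestFn = () => void;
        class TestSuite {
          private tests: TestFn[] = [];
          add(t: TestFn): void { this.tests.push(t); }
          runAll(): void {
            for (const t of this.tests) t();
          }
        }

        const suite = new TestSuite();
        suite.add(() => { /* unit test body */ });
        suite.add(() => { /* integration test body */ });
        suite.runAll();
        new LoginOverConsoleTest().execute();

    The class form tends to be easier to maintain once tests need per-case fixtures; the function form stays lighter as long as fixtures are shared across the suite.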

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is.
    What is NewSQL? NewSQL stands for new scalable and high performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database but a label for vendors who offer emerging data products with relational database properties (like ACID, transactions, etc.) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The NewSQL term was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: "NewSQL" is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as 'ScalableSQL' to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term 'NewSQL' in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL.
    Categories of NewSQL There are three major categories of NewSQL: New Architecture - In this framework each node owns a subset of the data, and queries are split into smaller queries sent to the nodes to process the data. E.g. NuoDB, Clustrix, VoltDB. MySQL Engines - Highly optimized storage engines for SQL with the interface of MySQL are examples of this category. E.g. InnoDB, Akiban. Transparent Sharding - These systems automatically split the database across multiple nodes. E.g. Scalearc.
    Summary In simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL.
    Tomorrow In tomorrow's blog post we will discuss the role of cloud computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
