Search Results

Search found 1068 results on 43 pages for 'strategies'.

Page 10 of 43

  • What are the strategies to become a good open source developer?

    - by u3050
    I always hear that getting involved with open source projects is good for your career, and that the more (good) open source you release, the closer you will be to landing your dream job before you've even had an interview. I am an expert in Java and am trying to become fluent in Scala. I keep thinking about getting involved in open source development in Java/Scala, but the following confusions stop me from doing so:
    - How and where do I start with open source projects on GitHub and similar sites?
    - How do I find active, busy open source projects?
    - How do I find an area in such projects where improvement or enhancement is needed?
    It all looks too complex at first analysis, and it's pretty hard to find such opportunities. What are the common strategies to follow if I want to become a hobbyist, free-time open source developer? People who have experience in open source development, please share your learnings and expertise. Thanks in advance.

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and to deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project.

    Our goals:
    - Super easy deploys (we want to push a button to update production)
    - Automated deploys to test environments (using Jenkins)
    - Ease of maintenance (we have an app to write and don't want to spend our time fiddling with production issues)
    - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
    - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts. Based on our goals and what I've taken to be some forming best practices, we've started forming a plan using Asgard, rolling our package into a JAR and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt that was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple.

    What I'm wondering is: what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two deployment strategies, so I thought I'd ask around:
    - Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike?
    - If you considered Elastic Beanstalk but didn't use it, what do you use instead, and why?
    - What are the advantages and disadvantages of an Elastic Beanstalk-based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other?

    Read the article

  • What strategies have you followed to keep your programming skills fresh during a long break?

    - by TRoh
    After being away from development for more than a year, I find it challenging to rejoin the workforce, and I can feel the rustiness. I wonder what you have done either to keep your skills fresh during such periods or to regain the skills you might have forgotten. I understand coding is a great way to become more competent, but how do you start getting more involved in it while you are not working as a developer?

    Read the article

  • What are the best strategies for selling Android apps?

    - by Rob S.
    I'm a young developer hoping to sell the apps I made for Android soon. My applications are basically 99% finished, so I'm investigating the best marketing strategy to use to sell them. I'm sure the brilliant minds here can give me some great advice. I'm particularly interested in your thoughts on the following points (especially from experienced Android developers):
    - Is it more profitable to offer an app for free with ads or to sell it ad-free for a price? Perhaps a combination of a free ad-supported version and a paid ad-free version?
    - If you give away an app for free with ads on it, is it ethical to decline bending over backwards to support it?
    - How much does piracy actually affect potential sales? Should any effort be put towards preventing it?
    - Can you still make a profit off your application if you make it open source? Could you perhaps make more of a profit from the attention you would get by doing so?
    - Is Google's Android Marketplace really the best place to release Android apps?
    - Is it worthwhile to maintain a developer blog or website to keep users updated on your development progress and software releases?
    Any other suggestions for maximizing profit while keeping users happy and coming back for more would also be greatly appreciated. While I appreciate general tips and tricks, I'd ask that, if possible, you go the extra step and show how they specifically apply to selling Android apps. Marketing statistics, developer retrospectives, and any additional experience you can share from your time selling Android apps are what I would love to see most. Thank you very much in advance for your time. I truly appreciate all the responses I receive.

    Read the article

  • Legitimate SEO Strategies - Which to Follow and Not to Follow?

    There are various SEO techniques that can help your business gain recognition on the internet. These techniques fall into two main categories: legitimate search engine optimization and illegitimate search engine optimization. Legitimate techniques can move you toward a top spot with nothing put at risk, whereas illegitimate SEO carries many risks.

    Read the article

  • Continuous Integration: what are the strategies for managing binary content?

    - by sebas
    We are currently testing various configurations combining feature branching and CI with feature toggling. I can see several viable options out there for the code, but I also know that CI relies entirely on the ability to merge code. So I wonder: how do you manage CI with binary data, like art assets? I can also see another problem: all the code can be tested before committing, and I can even validate the data before committing, but how can I test the art?! Should I use a different methodology for art content?

    Read the article

  • What are some strategies for maintaining a common database schema with a team of developers and no DBA?

    - by Mahmoud Abdelkader
    I'm curious how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA. What I mean, basically, is: if someone wants to make a change to the database, what are some strategies for doing that? (For example, I've created a 'Car' model and now I want to apply the appropriate DDL to the database.) We're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way as to create the schema from the ORM, but we recently ditched this because:
    - We couldn't track changes using the ORM
    - The state of the ORM wasn't in sync with the database (e.g. lots of differences, primarily related to indexes and unique constraints)
    - There was no way to audit database changes unless the developer documented the change via email to the team

    Our solution to this problem was basically to have a "gatekeeper" individual who checks every change to the database and applies all accepted database changes to an accepted_db_changes.sql file; developers who need to make any database changes put their requests into a proposed_db_changes.sql file. We check this file in and, when it's updated, we all apply the changes to our personal databases on our development machines. We don't create indexes or constraints on the models; they are applied explicitly on the database. I would like to know what some strategies to maintain database schemas are, and whether ours seems reasonable. Thanks!
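    To make the gatekeeper workflow above concrete, here is a minimal sketch (all file, table and column names are hypothetical, and the one-statement-per-';' convention is an assumption) of a Python helper each developer could run to apply only the accepted changes they haven't run yet:

        # apply_db_changes.py - hypothetical helper for the gatekeeper workflow
        # described above. Assumes accepted_db_changes.sql holds ';'-terminated
        # statements and a local bookkeeping table records what was applied.
        import sqlite3  # stand-in for the team's real database driver

        def apply_accepted_changes(conn, path="accepted_db_changes.sql"):
            conn.execute("CREATE TABLE IF NOT EXISTS applied_changes (n INTEGER PRIMARY KEY)")
            done = {n for (n,) in conn.execute("SELECT n FROM applied_changes")}
            statements = [s.strip() for s in open(path).read().split(";") if s.strip()]
            for n, stmt in enumerate(statements):
                if n in done:
                    continue                      # already applied on this machine
                conn.execute(stmt)                # apply the accepted DDL change
                conn.execute("INSERT INTO applied_changes (n) VALUES (?)", (n,))
            conn.commit()

        if __name__ == "__main__":
            apply_accepted_changes(sqlite3.connect("dev.db"))

    Keeping an applied_changes ledger per development database is what lets everyone stay in sync without a DBA: the script is idempotent, so rerunning it after every update of the checked-in file is safe.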

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that simplifies access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. MyData is defined similarly for the Azure SDK and the Enzo Azure API, except that the two versions inherit from different classes: with the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes used with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow.
    And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. The Enzo Azure API also makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls with appropriate range filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …) and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but performance tests show that the Enzo Azure API delivers the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data, the Enzo Azure API seems to deliver faster. For example, the code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).
    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with execution time averaged over 5 runs. The Azure SDK performed a sequential fetch while the Enzo Azure API used the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit my network bandwidth limit (3.56 Mbps), so the fetch strategy's results are significantly below what they could be with more bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods beyond the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and of List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specializing in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • What free tools or strategies can help debug a multi-threading corruption bug?

    - by WilliamKF
    I have a client-server application with multi-threading. The server side is failing with a std::list getting corrupted, resulting in a SEGV. I suspect there is some kind of cross-thread timing issue where two threads update the std::list at the same time and corrupt it. Please suggest free tools to track this down, or strategies that might be helpful.
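    On the strategy side, the usual fix once such a race is confirmed is to serialize every access to the shared container behind one mutex. A minimal sketch of that pattern (shown in Python for brevity; the class and method names are hypothetical):

        # Guarded shared list: every operation takes the same lock, so two
        # threads can no longer interleave their updates and corrupt the list.
        import threading

        class GuardedList:
            def __init__(self):
                self._lock = threading.Lock()
                self._items = []

            def append(self, item):
                with self._lock:            # writers serialize here
                    self._items.append(item)

            def snapshot(self):
                with self._lock:            # copy under the lock, iterate outside it
                    return list(self._items)

    In C++ the equivalent is a std::mutex taken via std::lock_guard around every use of the std::list; on the tools side, Valgrind's free Helgrind and DRD checkers are built to flag exactly this kind of unsynchronized access.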

    Read the article

  • Common strategies to deal with rounding errors in currency-intensive software?

    - by Max
    What is your advice on:
    - compensation for accumulated error in bulk math operations on collections of Money objects; how is this implemented in your production code? (things like variable rounding, etc.)
    - the theory behind rounding in accountancy
    - any literature on the topic

    I am currently reading Fowler. He mentions a Money type but says nothing about strategies. Older posts on money rounding (here, and here) do not provide the detail and formality I need. Thanks for the help.
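    As one concrete illustration of the kind of strategy being asked about, here is a minimal Python sketch (function names hypothetical) of two staples: round-half-to-even when converting fractional amounts to cents, and remainder distribution in the spirit of the allocate operation on Fowler's Money type, so bulk splits never create or destroy a cent:

        # Work in integer cents so bulk operations cannot accumulate error.
        from decimal import Decimal, ROUND_HALF_EVEN

        def to_cents(amount: str) -> int:
            """Round half to even ("banker's rounding"), a common accountancy default."""
            return int(Decimal(amount).quantize(Decimal("0.01"),
                                                rounding=ROUND_HALF_EVEN) * 100)

        def allocate(amount_cents: int, ratios: list) -> list:
            """Split by ratios; leftover cents go one by one to the first shares."""
            total = sum(ratios)
            shares = [amount_cents * r // total for r in ratios]
            for i in range(amount_cents - sum(shares)):
                shares[i] += 1            # distribute the remainder, never drop it
            return shares

        assert to_cents("2.675") == 268                   # 2.675 rounds to the even 2.68
        assert allocate(100, [1, 1, 1]) == [34, 33, 33]   # sums back to exactly 100

    The point of allocate is exactly the compensation asked about: rounding error is not left to accumulate across the collection but is handed out explicitly.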

    Read the article

  • Strategies for avoiding SQL in your Controllers... or how many methods should I have in my Models?

    - by Keith Palmer
    So, a situation I run into reasonably often is one where my models start to either:
    - grow into monsters with tons and tons of methods, OR
    - allow you to pass pieces of SQL to them, so that they are flexible enough to not require a million different methods

    For example, say we have a "widget" model. We start with some basic methods:
    - get($id)
    - insert($record)
    - update($id, $record)
    - delete($id)
    - getList() // get a list of widgets

    That's all fine and dandy, but then we need some reporting:
    - listCreatedBetween($start_date, $end_date)
    - listPurchasedBetween($start_date, $end_date)
    - listOfPending()

    And then the reporting starts to get complex:
    - listPendingCreatedBetween($start_date, $end_date)
    - listForCustomer($customer_id)
    - listPendingCreatedBetweenForCustomer($customer_id, $start_date, $end_date)

    You can see where this is growing... eventually we will have so many specific query requirements that I either need to implement tons and tons of methods, or some sort of "query" object that I can pass to a single ->query(query $query) method... or just bite the bullet and start doing something like this:

        list = MyModel->query("start_date > X AND end_date < Y AND pending = 1 AND customer_id = Z")

    There's a certain appeal to having one method like that instead of 50 million more specific methods, but it feels "wrong" sometimes to stuff a pile of what's basically SQL into the controller. Is there a "right" way to handle situations like this? Does it seem acceptable to stuff queries like that into a generic ->query() method? Are there better strategies?
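    For the "query object" middle road mentioned above, here is a minimal sketch of the pattern (in Python rather than the post's PHP, with hypothetical names): the controller states criteria, and only the model layer assembles SQL, keeping values as bind parameters:

        # Minimal query object: criteria in, parameterized SQL out, so raw SQL
        # never has to live in the controller.
        class Query:
            def __init__(self):
                self._clauses, self._params = [], []

            def where(self, column, op, value):
                assert op in ("=", "<", ">", "<=", ">="), "whitelist operators"
                self._clauses.append("%s %s ?" % (column, op))
                self._params.append(value)
                return self                       # chainable

            def to_sql(self, table):
                where = " AND ".join(self._clauses) or "1=1"
                return "SELECT * FROM %s WHERE %s" % (table, where), self._params

        # The complex reporting case, without a dedicated model method for it:
        q = (Query().where("start_date", ">", "2010-01-01")
                    .where("pending", "=", 1)
                    .where("customer_id", "=", 42))
        sql, params = q.to_sql("widgets")

    In this shape the controller never concatenates SQL; column names should be whitelisted the same way the operators are before anything like this goes near production.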

    Read the article

  • Strategies for deciding when to use properties and when to use internal variables in internal classes?

    - by Edward Tanguay
    In almost all of my classes, I have a mixture of properties and internal class variables. I have always chosen one or the other by the rule "property if you need it externally, class variable if not". But there are many other issues that make me rethink this often, e.g.:
    - At some point I want to use an internal variable from outside the class, so I have to refactor it into a property, which makes me wonder why I don't just make all my internal variables properties in case I have to access them externally anyway.
    - Since most of these classes are internal classes that aren't exposed in an API, it doesn't really matter whether the internal variables are accessible from outside the class or not.
    - But then, since C# doesn't allow you to instantiate e.g. a List<string> property in its definition, these properties have to be initialized in every possible constructor, so for those I would rather have internal variables, just to keep things cleaner in that they are all initialized in one place.
    - C# code reads more cleanly if constructor/method parameters are camel case and you assign them to Pascal-case properties, instead of the ambiguity of seeing "templateIdCode" and having to look around to see whether it is a local variable, method parameter or internal class variable; e.g. when you see "TemplateIdCode = templateIdCode" it is clear that a parameter is being assigned to a class property. This would be an argument for always using only properties on internal classes.

    For example:

        public class TextFile
        {
            private string templateIdCode;
            private string absoluteTemplatePathAndFileName;
            private string absoluteOutputDirectory;
            private List<string> listItems = new List<string>();

            public string Content { get; set; }
            public List<string> ReportItems { get; set; }

            public TextFile(string templateIdCode)
            {
                this.templateIdCode = templateIdCode;
                ReportItems = new List<string>();
                Initialize();
            }
            ...

    When creating internal (non-API) classes, what are your strategies for deciding whether to create an internal class variable or a property?

    Read the article

  • What are good strategies for organizing a single-class-per-query service layer?

    - by KallDrexx
    Right now my ASP.NET MVC application is structured as Controller - Services - Repositories. The services consist of aggregate-root classes that contain methods, where each method is a specific operation, such as retrieving a list of projects, adding a new project, or searching for a project. The problem is that my service classes are becoming really fat with a lot of methods. Right now I separate methods into categories with #region tags, but this is quickly getting out of control, and I can see it becoming hard to determine what functionality already exists and where modifications should go. Since the methods in the service classes are isolated and don't really interact with each other, they could easily be more stand-alone. After reading some articles, such as this one, I am thinking of following the single-query-per-class model, as it seems like a more organized solution: instead of figuring out which class and which method you need to call to perform an operation, you just have to figure out the class. My only reservation with the single-query-per-class method is that I need some way to organize the 50+ classes I will end up with. Does anyone have suggestions for strategies to organize this type of pattern? See the sketch below for one possibility.
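    One way to keep 50+ single-operation classes navigable, sketched here in Python for brevity with hypothetical names: group them into one package per aggregate and give every class the same single entry point, so locating the class is the whole lookup:

        # services/projects/get_project_list.py (hypothetical layout:
        # one package per aggregate, one module per operation)
        from abc import ABC, abstractmethod

        class Operation(ABC):
            """Common shape shared by every single-operation service class."""
            @abstractmethod
            def execute(self, **criteria):
                ...

        class GetProjectList(Operation):
            def __init__(self, repository):
                self._repository = repository

            def execute(self, **criteria):
                # exactly one use case lives here, replacing one #region entry
                return self._repository.find_projects(**criteria)

    With a uniform execute() signature, the folder tree itself replaces the #region index: the class name says what the operation does, and nothing else lives in the file.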

    Read the article

  • What are preferred strategies for cross-browser tables with multiple styles in CSS?

    - by jitendra
    What are preferred strategies for cross-browser tables with multiple styles in CSS? In my default CSS, what should I predefine for <table>, td, th, thead, tbody and tfoot? I am working on a project with many tables in different color schemes and with different kinds of alignment: in some tables I will need to align cell data horizontally to the right, sometimes to the left, and the same goes for vertical alignment (top, bottom and middle). Some tables will have a thin border on rows, some a thick one (and the same for column borders). Sometimes I want to give a different background color to a particular row or column, or to multiple rows or columns. So my questions are:
    - What defaults should I keep in my CSS for all tables, and how do I handle tables with different styles across multiple pages using IDs and classes?
    - I want to do every presentational thing with CSS. How do I create ID and class names for everything using semantic naming?
    - Which table-related tags can be useful?
    - How do I control the styling of a whole table from one CSS class?

    Read the article

  • What general strategies does an FPS multiplayer server use to update its clients?

    - by Hooray Im Helping
    A friend and I were discussing how an FPS server updates the clients connected to it. We watched a video of a guy cheating in Battlefield: Bad Company 2, saw how it highlighted the positions of enemies on the screen, and it got us thinking. His contention was that the server only updates the client with information that is immediately relevant to it, i.e. the server won't send information about enemy players if they are too far away from the client or out of the client's line of sight, for reasons of efficiency. He was unsure, though; he brought up the example of someone hiding behind a rock, unable to see anyone. If the player were suddenly to pop up with three players in his line of sight, there would be, say, a 50ms delay before they were rendered on his screen while the server transmitted the necessary information. My contention was the opposite: the server sends the client all the information about every player and lets the client sort out what is allowed and what isn't. I figured it would actually be computationally cheaper for the server to send everything and let the client do the heavy lifting, so to speak. I also figured this is how cheat programs work: they intercept the server packets, get the locations of enemies, then show them on the client's view. So the question: what are some general policies or strategies a modern first-person-shooter server employs to keep its clients updated?
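    The first contention, server-side filtering, is usually discussed under the name "interest management". A toy sketch of the idea (in Python, with the world and line-of-sight helpers left hypothetical): the server runs a cheap relevance test per client before an enemy ever enters that client's update packet, which is what would defeat the highlighting cheat described above:

        # Toy interest management: only nearby enemies that pass a
        # line-of-sight test are included in a client's update packet.
        import math

        VIEW_RADIUS = 100.0

        def is_relevant(world, observer, target):
            dx, dy = target.x - observer.x, target.y - observer.y
            if math.hypot(dx, dy) > VIEW_RADIUS:
                return False                              # too far away to matter
            return world.line_of_sight(observer, target)  # hypothetical occlusion test

        def build_update(world, client, players):
            return [p.state() for p in players
                    if p is not client.player
                    and is_relevant(world, client.player, p)]

    The trade-off the two contentions frame is real: filtering costs the server occlusion checks and risks the pop-up latency described, while sending everything is cheap for the server but hands wallhack-style cheats their data for free.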

    Read the article

  • Which Single Source Publishing tools and strategies are available?

    - by Another Registered User
    I'm about to write roughly 1000 pages of documentation for a huge programming framework. The goal is to bring this documentation onto a web platform, so that online users can search through it and read it online. At the same time, the text has to be made public in PDF format for download. And at the same time, the whole thing needs to go into a printed book as well (print on demand; they want one giant PDF file with the whole book). As for the PDF files: the whole content is divided into several chapters, and every chapter will be available as a standalone PDF ebook. Finally, all chapters will be available in one huge printed book. Is LaTeX capable of something like that? Can it be used for single-source publishing? Or would I have to look at other technologies like DocBook?

    Read the article

  • What strategies are available to minimise bad blocks on an encrypted partition?

    - by David Andreoletti
    Let me explain my backup strategy and the problem I am facing.

    My current backup strategy: open the encrypted container and run Carbon Copy Cloner on it at least once a week; rotate the backup disks.

    Problem: I have a TrueCrypt partition on my first external hard disk. I recently found out that some files on this encrypted partition cannot be read due to bad blocks (reported by Antonio Diaz's GNU ddrescue). My backup strategy is ineffective in this scenario because the bad blocks are only discovered during backup.

    Possible strategies:
    - Strategy #0: put the encrypted partition on a RAID 1 array with 2 disks. Is this a suitable strategy?
    - Strategy #1: can you think of any other?

    Environment: Mac OS X 10.8; external 2.5" hard disk (SATA); no RAID.

    Read the article

  • What are optimal strategies for running MapReduce and other applications on the same server?

    - by user45532
    I have two applications that I need to run continuously to process data:
    1.) an app that processes and aggregates information from sources
    2.) a MapReduce workflow that processes the above info

    I've thought about either getting VPS hosting or getting my own inexpensive server and using Xen to split its resources. Getting a quad-core box with 2 GB of RAM seems a lot cheaper than the grid options I've seen at Slicehost, Rackspace and others...

    Read the article

  • Java Generics, JPA 2, J2EE, JSF 2, GWT, Ajax, Oracle's Java Strategies, Flex, iPhone, Agile ALM, Gra

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals

    Bangalore, April 9, 2010: The GIDS.Java Conference and Workshops has announced its complete program of over 50 sessions on the present and future of the Java language and VM, how they are evolving to meet the community's ever-changing needs, and some of the cutting-edge tools, technologies and techniques used for building robust enterprise Java applications today. The GIDS.Java track at the Great Indian Developer Summit takes place 22 and 23 April 2010 at the Indian Institute of Science in Bangalore. As one of the longest-running independent developer conferences in India, GIDS.Java at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 22 and 23 April 2010, GIDS.Java offers a multi-track conference, workshops, an expo show floor, and networking opportunities.

    The first keynote at GIDS.Java, "Pointy Haired Bosses and Pragmatic Programmers", is led by Dr. Venkat Subramaniam. He speaks about how each of us has a professional responsibility to be objective and make decisions that will help us and our teams be productive and deliver results. Venkat will pick on some fallacies, lay down facts, and discuss how to stay professional and objective in our daily efforts. The second keynote of the day explains the practical features that make the Cloud so interesting, and why everyone should start using it in their everyday life. Simone Brunozzi, Amazon Web Services Technology Evangelist, will present technical examples and business details, all mixed with a lot of Italian humor, to ensure the audience enjoys this talk without a single line of code. The third keynote of the day gives an exciting overview of directions in the Java space for Oracle, featuring concrete signs of Oracle's heavy investment, a clear and concise strategy overview, and deep dives into some of the most interesting pieces of technology being developed in the Java Platform Group today, such as Java EE, JDK 7, JavaFX, and exciting new visual tools. Featuring demos by Java evangelism team star Simon Ritter, this talk takes you top to bottom in Java technology.

    Featured talks at GIDS.Java include:
    - Good, Bad, and Ugly of Java Generics, Venkat Subramaniam
    - Pure Java Ajax: An Overview of GWT 2.0, Marty Hall
    - How JPA 2.0 Makes a Good Thing Even Better, Mike Keith
    - Building Enterprise RIAs with Adobe Flex and Java, Sujit Reddy G
    - Integrated Ajax Support in JSF 2.0, Marty Hall
    - Design Patterns in Java and Groovy, Venkat Subramaniam
    - A Gentle Introduction to iPhone and Obj-C for Java Developers, Matthew McCullough
    - Cloud Computing: Azure for Java Developers, Janakiram MSV
    - Ajax Support in the Prototype JavaScript Library, Marty Hall
    - First Steps to IT Heaven Through the Cloud. Part III: .Java, Simone Brunozzi
    - Building Web 2.0 User Interfaces for Web Service Models using JSF, Frank Nimphius and Jobinesh P
    - Acceptance Test Driven Development, John Tobin and Mohammed Mohsinali
    - Architecting Your Java Applications for the Cloud, Praveen Srivatsa
    - Effective Java, Venkat Subramaniam
    - The Amazing Groovy Weight-loss Plan, Scott Davis
    - Enterprise Modeling - from Conceptual Planning to Technical Blueprints, J Sripad
    - Java Collections Renaissance, Donald Raab and Vlad Zakharov
    - Power 7 and IBM J9VM, Himanshu Goyal
    - A Whistle-stop Tour of Maven 3.0, Matthew McCullough
    - Mass Volume Opportunities for Java Developers, Jouko Nuottila
    - Emerging Technology Complex Event Processing, Duvvuri Srinivas
    - Agile ALM for Distributed Development, Karthi Swaminathan
    - Dim Sum Grails - A Sampler of Practical Non Database-Driven Grails Applications, Scott Davis
    - Diagnosing Performance Bottlenecks in J2EE, Deepak Kaul
    - Business Driven Identity Management, Suneet Agera
    - Combining Java EE with OSGi using Eclipse Gemini, Mike Keith
    - Workshop: Essence of Functional Programming, Venkat Subramaniam
    - Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte
    - Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough
    - Workshop: Building Your First Amazon App, Simone Brunozzi
    - Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis
    - Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P
    - Workshop: User Experience Evaluation Model Walkthrough, Sanna Häiväläinen
    - Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh
    - Workshop: Monetizing your Apps with the PayPal X Payments Platform, Khurram Khan and Praveen Alavilli

    Sponsors of Great Indian Developer Summit 2010 include: platinum sponsors Microsoft, Oracle, Forum Nokia and Adobe; gold sponsors Intel and SAP; silver sponsors Quest Software, PayPal, Telerik and AMT.

    About Great Indian Developer Summit: Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advice from been-there-done-it veterans, creators and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to be reckoned with. It features three co-located conferences (GIDS.NET, GIDS.Web, GIDS.Java) and an exclusive day of in-depth tutorials, GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms; practical tutorials that dive deep into technical skills and best practices; inspirational keynote presentations; an Expo Hall featuring dozens of the latest projects and product activities; engaging networking events; and interaction with the best and brightest speakers from around the world. For further information on GIDS 2010, please visit the summit on the web at http://www.developersummit.com/

    A Saltmarch Media Press Release
    E: [email protected]
    Ph: +91 80 4005 1000

    Read the article
