Search Results

Search found 10023 results on 401 pages for 'manage processes'.


  • Windows Azure v1.7 Spring Release Today - New Management Dashboard

    - by ToStringTheory
    Today, Microsoft is publicly releasing a new version of Azure. The web conference at http://www.meetwindowsazure.com will be airing at 1 PM PST. They have already released an update to the Service Dashboard, which can be accessed at http://manage.windowsazure.com. I have gathered some images of the new dashboard here and removed any PII from them. Let me know what you think!
    Images: You should be able to click any of the images for a full-resolution version.
    Tutorial: The first thing you get after signing in is the tutorial.
    Landing: After the tutorial completes, you get a screen with the services that are active on your account on the left, and a list of ALL services (db/blob/SQL Azure) on the right. I like the quick access to services across any of my subscriptions.
    Service Information: These are images from a running web site with several roles. I love how easy they have made many of the features.
    SQL Azure: They have provided some great quick functionality for looking at your DB information.
    Storage: Here is the basic information they give you for any storage accounts you have.
    Adding Services: Super quick and easy to add services with the new UI.
    Conclusion: I am EXCITED! As you may have seen in the left side of my blog, I am an MCPD in Azure Development, and I must say that I am excited to see Microsoft moving forward with the technology and not letting it stagnate. After fighting the old Azure dashboard as much as I have, I like the friendliness and fluidity of this one. The important thing to note about ALL of the images above: this is HTML, not Silverlight. The responsiveness was FAST on all of the actions I completed, and I believe this is a big step forward for Azure... So, what do you think?

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server that all the websites connect to, as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter.
    CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. That question has been answered numerous times. The question is about the pros and cons, for a deployment like this, of managing all the websites centrally (one server) vs trying to keep them all in sync if they each have their own DB (multiple servers).
    REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management system integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
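    For the "keep smaller databases in sync" option, here is a minimal sketch of what such a sync job might look like (Python, with SQLite standing in for MySQL; the products/site_products tables and columns are made-up names for illustration):

        # Hypothetical sketch of a "push a subset of the master catalog to each site DB" job.
        # SQLite stands in for MySQL here; table and column names are made up for illustration,
        # and the master DB is assumed to already contain products and site_products tables.
        import sqlite3

        def sync_site(master_path, site_path, site_id):
            master = sqlite3.connect(master_path)
            site = sqlite3.connect(site_path)
            try:
                # Pull only the products assigned to this site from the central catalog.
                rows = master.execute(
                    "SELECT p.sku, p.name, p.price, p.image_url "
                    "FROM products p JOIN site_products sp ON sp.sku = p.sku "
                    "WHERE sp.site_id = ?", (site_id,)).fetchall()
                site.execute("CREATE TABLE IF NOT EXISTS products "
                             "(sku TEXT PRIMARY KEY, name TEXT, price REAL, image_url TEXT)")
                # Simplest possible strategy: replace the whole subset on every run,
                # so an image change made once in the master propagates everywhere.
                with site:
                    site.execute("DELETE FROM products")
                    site.executemany("INSERT INTO products VALUES (?, ?, ?, ?)", rows)
            finally:
                master.close()
                site.close()

        # A cron job (or scheduled task) would loop over all site databases:
        # for site_id, path in [(1, "shirts-funny.db"), (2, "shirts-band.db")]:
        #     sync_site("master.db", path, site_id)

    Bandwidth for a full-refresh job like this grows with catalog size times the number of sites; the usual next step is an incremental sync keyed on a last_updated column, which is where most of the headaches (deletes, clock skew, partial failures) tend to appear.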

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live, we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we keep running into is having to alter or add to that script to add/fix data for the deployment. For example, if we add a new non-nullable column to an existing table, we need to populate that column with data as part of that change. Other times we may want to create new records in transactional tables (e.g. assigning specific users to a new security access level). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example, if we are inserting default data we don't want to do that again a second time)? Or is there a better way to manage these scripts?
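    One pattern worth comparing against: Visual Studio database projects do support pre- and post-deployment scripts, but the "run exactly once" bookkeeping is usually handled with a small tracking table, which is the same idea dedicated migration tools use. A hedged sketch of that pattern (SQLite and the script names are stand-ins, and the target tables are assumed to already exist):

        # Illustration of the usual "run one-off data scripts exactly once" bookkeeping:
        # a tracking table records which scripts have been applied, so re-running the
        # deployment skips them. SQLite, the script names, and the target tables
        # (Widget, UserAccess) are stand-ins for this sketch.
        import sqlite3

        ONE_OFF_SCRIPTS = {
            "0001_backfill_new_column": "UPDATE Widget SET Colour = 'unknown' WHERE Colour IS NULL",
            "0002_seed_security_access": "INSERT INTO UserAccess (UserId, Role) VALUES (42, 'Approver')",
        }

        def apply_one_off_scripts(db_path):
            con = sqlite3.connect(db_path)
            with con:
                con.execute("CREATE TABLE IF NOT EXISTS AppliedScripts (Name TEXT PRIMARY KEY)")
                done = {row[0] for row in con.execute("SELECT Name FROM AppliedScripts")}
                for name, sql in sorted(ONE_OFF_SCRIPTS.items()):
                    if name in done:
                        continue          # already ran in a previous deployment
                    con.execute(sql)
                    con.execute("INSERT INTO AppliedScripts (Name) VALUES (?)", (name,))
            con.close()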

    Read the article

  • Quantifying the Value Derived from Your PeopleSoft Implementation

    - by Mark Rosenberg
    As product strategists, we often receive the question, "What's the value of implementing your PeopleSoft software?" Prospective customers and existing customers alike are compelled to justify the cost of new tools, business process changes, and the business impact associated with adopting the new tools. In response to this question, we have been working with many of our customers and implementation partners during the past year to obtain metrics that demonstrate the value obtained from an investment in PeopleSoft applications. The great news is that as a result of our quest to identify value achieved, many of our customers began to monitor their businesses differently and more aggressively than in the past, and a number of them informed us that they have some great achievements to share. For this month, I'll start by pointing out that we have collaborated with one of our implementation partners, Huron Consulting Group, Inc., to articulate the levers for extracting value from implementing the PeopleSoft Grants solution. Typically, education and research institutions, healthcare organizations, and non-profit organizations are the types of enterprises that seek to facilitate and automate research administration business processes with the PeopleSoft Grants solution. If you are interested in understanding the ways in which you can look for value from an implementation, please consider registering for the webcast scheduled for Friday, December 14th at 1pm Central Time in which you'll get to see and hear from our team, Huron Consulting, and one of our leading customers. In the months ahead, we'll plan to post more information about the value customers have measured and reported to us from their implementations and upgrades. If you have a great story about return on investment and want to share it, please contact either [email protected]  or [email protected]. We'd love to hear from you.

    Read the article

  • Good books or tutorials on building projects without an IDE?

    - by CodexArcanum
    While I know that building software without an IDE is certainly possible, I don't actually know much about doing it well. I've been using graphical tools like Visual Studio or Code::Blocks so long that I'm pretty well lost without them. And that really stinks when I want to change environments or languages. I couldn't really do anything in D until someone made a Visual Studio plugin, and now that I'm trying to do more development on Mac, I can't use D again because the XCode plugins don't work. I'm sick of being lost when I see a makefile and having no idea what I'm supposed to do with a folder full of source files. People can't be compiling them one by one using the console and then linking them one by one; you'd spend more time typing file names than code. So what are the automation and productivity tools of the non-IDE user? How do you manage a project when you're writing all the code in emacs or vim or nano or whatever? I would love it if there were a book or an online guide that spells some of this out.

    Read the article

  • How can I design my classes to include calendar events stored in a database?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a Calendar class and a CalendarCell class. Here are the two classes' properties and method declarations:

        class Calendar {
            private $month;
            private $monthName;
            private $year;
            private $calendarCellList = array();
            private $translator;
            public function __construct($month, $year, $translator) {}
            public function getCalendarCellList() {}
            public function getMonth() {}
            public function getMonthName() {}
            public function getNextMonth() {}
            public function getNextYear() {}
            public function getPreviousMonth() {}
            public function getPreviousYear() {}
            public function getYear() {}
            private function calculateDaysPreviousMonth() {}
            private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
            private function isCurrentDay(\DateTime $dateTime) {}
            private function isDifferentMonth(\DateTime $dateTime) {}
        }

        class CalendarCell {
            private $day;
            private $month;
            private $dayNameAbbreviation;
            private $numericDayOfTheWeek;
            private $isCurrentDay;
            private $isDifferentMonth;
            private $translator;
            public function __construct(array $parameters) {}
            public function getDay() {}
            public function getMonth() {}
            public function getDayNameAbbreviation() {}
            public function isCurrentDay() {}
            public function isDifferentMonth() {}
        }

    Each calendar day can include many calendar events (such as appointments or schedules) stored in a database. My question is: what is the best way to manage these calendar events in my classes? I'm thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a DB (because I would also have to inject at least a repository service) just to create and display a calendar... so maybe it would be better to extend CalendarCell (for instance as CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very appreciated!
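    One way out of the coupling, independent of any framework: have the calendar depend on a tiny "event provider" interface and inject it, so a DB-backed repository and an in-memory stub are interchangeable. A sketch of the pattern (written in Python only for brevity; the names are illustrative and not part of the original code):

        # Pattern sketch: the calendar asks an injected provider for events, so it can
        # be reused with no database at all. Names here are hypothetical.
        from abc import ABC, abstractmethod
        from datetime import date, timedelta

        class CalendarEventProvider(ABC):
            """Anything that can answer 'which events fall on this day?'."""
            @abstractmethod
            def events_on(self, day):
                ...

        class InMemoryEventProvider(CalendarEventProvider):
            def __init__(self, events_by_day):
                self._events = events_by_day
            def events_on(self, day):
                return self._events.get(day, [])

        class CalendarCell:
            def __init__(self, day, events):
                self.day = day
                self.events = events      # stays an empty list when no provider is given

        def build_month(year, month, provider=None):
            cells, day = [], date(year, month, 1)
            while day.month == month:
                events = provider.events_on(day) if provider else []
                cells.append(CalendarCell(day, events))
                day += timedelta(days=1)
            return cells

        # No database needed to reuse the calendar...
        plain = build_month(2024, 5)
        # ...while the real app injects a repository-backed provider instead:
        busy = build_month(2024, 5, InMemoryEventProvider({date(2024, 5, 17): ["Dentist 10:00"]}))

    In Symfony2 terms, the DB-backed implementation of that interface would wrap your Doctrine repository and be wired in through the service container, while other coders can pass nothing (or a stub) and still render a calendar.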

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:
    - Making changes on a development domain and committing the code frequently
    - Archiving the entire site after a successful deployment from the development domain
    - Having automatic daily database and user-content backups
    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
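    For the "automatic daily database and user-content backups" item, a hedged sketch of a nightly job (the paths, database name, and mysqldump/tar invocations are placeholders to adapt; schedule it from cron):

        # Sketch of a nightly backup job: dump the database, tar the user content,
        # keep the last N archives. Paths and credentials are placeholders.
        import datetime, pathlib, subprocess, time

        BACKUP_DIR = pathlib.Path("/var/backups/mysite")   # placeholder path
        KEEP_DAYS = 14

        def nightly_backup():
            BACKUP_DIR.mkdir(parents=True, exist_ok=True)
            stamp = datetime.date.today().isoformat()

            # 1. Compressed database dump; --single-transaction avoids locking InnoDB tables.
            with open(BACKUP_DIR / f"db-{stamp}.sql.gz", "wb") as out:
                dump = subprocess.Popen(["mysqldump", "--single-transaction", "mydb"],
                                        stdout=subprocess.PIPE)
                subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
                dump.wait()

            # 2. User-uploaded content only; the code itself stays in git and is not re-archived.
            subprocess.run(["tar", "-czf", str(BACKUP_DIR / f"content-{stamp}.tar.gz"),
                            "/srv/mysite/uploads"], check=True)

            # 3. Simple rotation: drop anything older than KEEP_DAYS.
            cutoff = time.time() - KEEP_DAYS * 86400
            for f in BACKUP_DIR.iterdir():
                if f.stat().st_mtime < cutoff:
                    f.unlink()

        if __name__ == "__main__":
            nightly_backup()      # e.g. "17 3 * * * /usr/bin/python3 backup.py" in crontab

    With something like this in place, the git repository can go back to holding only code, which keeps its history small.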

    Read the article

  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller add-ons for a unified communication server. For versioning and distributed development we use Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries:
        Main Solution
            Applications
                App 1
                App 2
                App 3
            Externals
            Libraries
                Lib 1
                Lib 2
            Tools
    The "Applications" path contains all main applications. Those do not depend on each other, but they depend on the Libraries and Externals projects. The "Externals" path contains some external DLLs referenced in our Applications and Libraries. The Libraries path contains commonly used libs (UI templates, helper classes, etc.). They do not depend on each other and they are referenced in the Libraries and the Tools projects. The Tools path contains some helper programs like setup helpers, update web services, etc. Now, there are some major points why I'd like to change this structure:
    - We can't use server builds.
    - It's uncomfortable to manage TFS scrum management (sprints, impediments, etc.) with a solution structure like that.
    - Every developer always has access to all projects in the solution.
    - A complete build lasts too long if one accidentally hits [F6] in Visual Studio...
    What would you change in this solution? How would you break those projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each application, library, and tool. But how can I ensure that e.g. App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

    Read the article

  • RESTful applications logic and cross resource operations

    - by Gaz_Edge
    I have a RESTful API that allows my users to receive enquiries about their business, e.g. 'I would like to book service x on date y. Is this available?'. The API saves this information as a resource at the following URI:
    users/{userId}/enquiries/{enquiryId}
    The information shown when this resource is retrieved is the standard sort of thing you'd expect from an enquiry - email, first_name, last_name, address, message. The API also allows customers to be created for a user. A customer has a login and password and also a profile. The following URIs expose these two resources:
    PUT users/{userId}/customers/{customerId}
    PUT users/{userId}/customers/{customerId}/profile
    The problem I am having is that I would like to allow users to create a customer from an enquiry. For example, the user is able to offer their service on the date requested and will then want to set up a customer with login details etc. to allow them to manage the rest of the process. The obvious answer would be to use a URI like users/{userId}/enquiries/{enquiryId}/convert-to-client. The problem with this is that it somewhat goes against a lot of what I've been reading about how to implement REST (specifically from the book RESTful Web Services, which suggests that URIs should point to resources, not to operations on resources). The other option would be to get the client application (i.e. the code that calls the API) to handle some of this application logic. This doesn't quite feel right to me. I have designed the client app to be fairly dumb: it knows just enough to display the results from the API, and it does not contain any application logic. It would be great to hear others' views on the best way of setting this up. Am I wrong to have no application logic in the client app? How would I perform this operation purely in the REST API?
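    One resource-oriented compromise is to keep the conversion logic in the API but model it as creating a customer resource from an enquiry: the client POSTs to the customers collection and names the source enquiry in the body, and the server copies the data across. A minimal sketch (in-memory dicts stand in for storage; the field names are assumptions):

        # Minimal sketch of keeping the conversion logic in the API: the client just
        # POSTs to the customers collection naming the source enquiry, and the server
        # copies the enquiry data into the new customer/profile resources.
        import secrets

        # In-memory stand-ins for the persistence layer; field names are illustrative.
        enquiries = {("u1", "e1"): {"email": "ann@example.com", "first_name": "Ann",
                                    "last_name": "Lee", "message": "Is service x available on date y?"}}
        customers, profiles = {}, {}

        def post_customer(user_id, body):
            """Server-side handler for POST /users/{user_id}/customers with {"from_enquiry": id}."""
            enquiry = enquiries[(user_id, body["from_enquiry"])]
            customer_id = secrets.token_hex(4)
            customers[(user_id, customer_id)] = {"login": enquiry["email"],
                                                 "password": secrets.token_urlsafe(12)}
            profiles[(user_id, customer_id)] = {k: enquiry[k]
                                                for k in ("email", "first_name", "last_name")}
            # 201 Created, with Location pointing at the new resource rather than at an operation.
            return 201, {"Location": f"/users/{user_id}/customers/{customer_id}"}

        print(post_customer("u1", {"from_enquiry": "e1"}))

    This keeps every URI pointing at a resource, and the "dumb" client still only needs a single call; an alternative with the same flavour is POST users/{userId}/enquiries/{enquiryId}/conversions, where each conversion is itself a resource.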

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is much more than optimizing project performance and eliminating project failure on new projects, capital programs, etc. A very common use case that we see is small-scale, frequent and recurring projects based on ongoing operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration, so that work orders from Oracle's Enterprise Asset Management (eAM) system populate Primavera P6 project schedules. So now that your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in Primavera tools to assign work orders, maximize resource utilization and reuse templates for typical O&M operations in Primavera, and share that back to the operations teams using eAM for maintenance. Also, large-scale maintenance operations related to large assets in the asset lifecycle will include phase-outs, shutdowns and turnarounds, which are classic maintenance projects (as opposed to building something new) for which Oracle Primavera with Oracle E-Business Suite provides full coverage, optimizing the ALM processes in your business. Read more about these new capabilities from Oracle in the ERP space in the Oracle eAM data sheet.

    Read the article

  • How to model the components of a non Information System?

    - by Adel C Kod
    So I am working on a project that's related to kernel code (specifically, to the TCP/IP stack of the kernel). I need to build some models to describe the functionality and components of my system. Initially I thought about a class diagram; it can describe the general architecture of my system, but it doesn't make sense since my code is VERY structured (written in standard C). I also thought about DFDs: they'd describe the processes of my system and how the data flows. But they contain something which doesn't really fit in: data stores. I have no databases here (at all). For the functionality, other team members suggested using activity and sequence diagrams, which is kinda okay with me, but what about the system components? So basically my question is: I want to describe the components of my system; what do you suggest as a meaningful diagram to follow? (Again, this is a research, low-level, systems-oriented project with almost no user interface at all.)

    Read the article

  • No MAU required on a T4

    - by jsavit
    Cryptic background: One of the powerful features of the T-series servers is hardware crypto acceleration, which dramatically speeds up the compute-intensive algorithms needed to encrypt and decrypt data. Previously, administrators setting up logical domains on older T-series servers had to explicitly assign crypto resources (called "MAU" for historical reasons, from the T1 chip that had "modular arithmetic units") to domains that had a significant crypto workload (say, an SSL-based web server). This could be an administrative burden, as you had to choose which domains got the crypto units, and issue the appropriate ldm set-mau N mydomain commands.
    The T4 changes things: The T4 is fast. Really fast. Its clock rate and out-of-order (OOO) execution provide the single-thread performance that T-series machines previously did not have. If you have any preconceptions about T-series performance, or SPARC in general, based on the older servers (which, it must be said, were absolutely outstanding for multi-threaded applications), those assumptions are now obsolete. The T4 provides outstanding performance for all kinds of workloads, as illustrated at https://blogs.oracle.com/bestperf. While we all focused on this (did I mention the T4 is fast?), another feature of the T4 went largely unnoticed: the T4 servers have crypto acceleration "just built in", so administrators no longer have to assign crypto accelerator units to domains - it "just happens". This is way, way better, since you have crypto everywhere by default without having to manage it like a discrete and limited resource. It's a feature of the processor, like doing an integer add. With T4, there is no management necessary; you just have HW crypto everywhere, all the time, seamlessly. This change hasn't been widely advertised, and some administrators have wondered why they were unable to assign a MAU to a domain as they did with T2 and T3 machines. The answer is that there is no longer any separate MAU, so you don't have to take any action at all - just leave the default of 0.
    Summary: Besides being much faster than its predecessors, the T4 also integrates hardware crypto acceleration so it's seamlessly available to applications, whether domains are being used or not. Administrators no longer have to control how crypto units are allocated - it "just happens".

    Read the article

  • Deploying my first website!

    - by test
    I have built a data-driven website - an ASP.NET website using the Entity Framework. In my solution I have 4 projects: the web application (PresentationLayer) and 3 class libraries - Data Layer, Business Layer and Common Layer. In one of these libraries, the Common Layer, I have my model (MyModel.edmx). I have always tested my application on Cassini, the ASP.NET Development Server. I have never touched IIS in my life. I bought a domain and hosting on GoDaddy. My logic tells me to grab my four folders (one for each layer) and simply move them to the root folder. But I know I'm wrong, since then the home page would be mywebsite.org/presentationlayer/default.aspx, and second of all I start getting a bunch of errors where files do not load or are not found. I also know that I need to manage the web.config, but I don't have any experience with where to start. I'm not sure if this is a problem, but I also have a web service included in my presentation layer.

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see:
    - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
    - C++ type safety (when using modern C++) makes everything easier to get "correct";
    - it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code;
    - potential gain in execution speed (but I don't think it will really be perceptible).
    However, I think there might be some drawbacks, and I'm not sure of the real impact as I haven't tried it yet:
    - it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write) (compilation time shouldn't be a problem for this case);
    - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure that can be easily checked at runtime in C++, and the language will not check it for you;
    - libraries in C++ are harder to manage than in scripting languages;
    - I lack experience and foresight, so maybe I'm missing advantages and drawbacks.
    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?

    Read the article

  • Necessary Infrastructure for large project with many components communicating through IPCs

    - by jluzwick
    I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how the other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the violating application? More specifically, which infrastructure elements are necessary for such a large project, and which are optional but very helpful? One example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log that spans numerous components, looking for messages that might point to the culprit program, along with a "trail" of what happened before the issue occurred. I'm thinking of something similar to Android's alogcat tool. These infrastructure elements should be language-agnostic. While these elements should be understood by all engineers on the team in question, which elements should be understood in great detail by the technical system engineers, and what should the individual software engineers be responsible for adding to their tools to allow such infrastructures to take hold? Please feel free to ask for clarification if something does not make sense, as I understand this question is very broad and needs some refinement. I will refine as necessary from the answers and comments I receive. Thanks for any help!
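    To make the common-logging idea concrete, one minimal version is an agreed line format that every component emits, carrying a component name and a correlation id that is passed along with each IPC call; a single merged log can then be filtered down to one request's trail. A sketch using Python's standard logging module (the format and transport are assumptions, not a prescription):

        # Sketch of the "one agreed log format across components" idea: each process
        # tags every line with its component name and a correlation id carried through
        # IPC calls, so a merged log can be filtered down to one request's trail.
        import logging, uuid

        FORMAT = "%(asctime)s %(levelname)s comp=%(component)s corr=%(corr)s %(message)s"
        logging.basicConfig(level=logging.INFO, format=FORMAT)

        def component_logger(name, corr_id):
            # LoggerAdapter injects the fields the shared format expects into every record.
            return logging.LoggerAdapter(logging.getLogger(name),
                                         {"component": name, "corr": corr_id})

        def handle_request():
            corr = uuid.uuid4().hex[:8]          # generated at the entry point, passed over IPC
            ui = component_logger("ui", corr)
            billing = component_logger("billing", corr)
            ui.info("received order")
            billing.error("payment gateway timed out")   # grep corr=<id> reconstructs the trail

        handle_request()

    In a real multi-process setup, each component would ship these lines to a shared sink (syslog, a file per host, or a log aggregator); the important part is that the format and the correlation id are agreed on by everyone.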

    Read the article

  • Have You Been Missing the 'About This Record' Functionality on the Customer Form...?

    - by MargaretW
    Do you have fond memories of the 'Help -> About This Record' functionality that used to be available in the old Customer form - when it was a form, and not a Java HTML screen? Back in Release 11i, we had the ability to identify when the customer record had last been updated and by whom. When some forms were replaced by Java HTML screens, you could identify some of this information via the 'About this Page' hyperlink at the bottom left-hand corner of the HTML page. You could enable this via the FND: Diagnostics profile option, but many customers found this had an adverse effect on performance and, additionally, was not user-friendly. Our customers tell us that this feature was widely used to identify owner/update information in many business processes, including auditing, customer entry/update, research and testing. There have been various efforts to restore this feature by customising Java pages, but this was not fully successful in some cases. Oracle Support is happy to announce that this functionality has now been included in the Customer screens from Release 12.2 onwards. You will be able to query the record history at customer level, at site level, at site address level and for all tabs relating to the customer. Simply click on the 'Record History' icon, available in the Record History column on a summary screen, or the same icon on an individual detail screen, to display the following information: Last Updated Date, Last Updated By, Creation Date, Created By, Last Update Login.

    Read the article

  • Is writing recursive functions hard for you?

    - by null
    I'm taking a computer science course now. I feel it is so hard compared to my polytechnic IT course. I can't understand recursive methods well. How do you manage to get your brain to understand them? When the recursive method returns, my brain cannot trace it nicely. Is there a better way to understand recursion? How did you start learning it? Is it normal to feel that it is very hard at the beginning? Look at the example below - I really don't know how I could write this if it appeared in an exam.

        public class Permute {
            public static void main(String[] args) {
                new Permute().printPerms(3);
            }

            boolean[] used;
            int max;

            public void printPerms(int size) {
                used = new boolean[size];
                max = size;
                for (int i = 0; i < size; i++) {
                    used[i] = false;
                }
                perms(size, "");
            }

            public void perms(int remaining, String res) {
                if (remaining == 0) {
                    // Base case: every position is filled, print the finished permutation.
                    System.out.println(res);
                } else {
                    for (int i = 0; i < max; i++) {
                        if (!(used[i])) {
                            used[i] = true;                      // choose i for this position
                            perms(remaining - 1, res + " " + i); // recurse to fill the rest
                            used[i] = false;                     // un-choose so other branches can reuse i
                        }
                    }
                }
            }
        }

    Read the article

  • Oracle Solaris 11.1 available today

    - by user12611852
    Today Oracle is pleased to announce availability of Oracle Solaris 11.1.
    - Download Solaris 11.1
    - Order Solaris 11.1 media kit
    - Existing customers can quickly and simply update using the network based repository
    Highlights include:
    - 8x faster database startup and shutdown and online resizing of the database SGA with a new optimized shared memory interface between the database and Oracle Solaris 11.1
    - Up to 20% throughput increases for Oracle Real Application Clusters by offloading lock management into the Oracle Solaris kernel
    - Expanded support for Software Defined Networks (SDN) with Edge Virtual Bridging enhancements to maximize network resource utilization and manage bandwidth in cloud environments
    - 4x faster Solaris Zone updates with parallel operations shorten maintenance windows
    - New built-in memory predictor monitors application memory use and provides optimized memory page sizes and resource location to speed overall application performance
    Learn more and share these valuable tools with your customers to enable them to move to Oracle Solaris 11.1 quickly. Many customers wait for the first update - now is the time to encourage them to install Oracle Solaris 11.1.
    - Oracle Solaris 11.1 Data Sheet
    - What's New in Oracle Solaris 11.1
    - Oracle Solaris 11.1 FAQs
    - Oracle Solaris 11.1 Customer Presentation
    Oracle Solaris 11.1 is recommended for all SPARC T4 Systems and will soon be available preinstalled.

    Read the article

  • SOA: Simplifying Cloud, Mobile, and On-premise Integration–Webcast October 24th 2013

    - by JuergenKress
    The proliferation of mobile devices, the data explosion, and cloud enablement have caused a dramatic shift in IT. Organizations need to rethink their application infrastructures to accommodate increased processing speeds and heightened security and availability concerns for their applications, all while lowering total cost of ownership. Traditional infrastructures may not be sufficient to accommodate the diversity and complexity of integrations in this new era. Many of today's IT organizations rely on a Service-Oriented Architecture (SOA) backbone to keep their businesses running. SOA adoption and acceptance across industries have led to platform maturity at the application layer level. However, we are at the start of an era where there is a new modus operandi for organizations to thrive and deliver continuously on competitive differentiation. This change is a result of market globalization, the explosion in the number of mobile devices, unparalleled growth in voluminous data, and innovation that crosses organizational boundaries. Social, mobile, and cloud are terms that are revolutionizing the way organizations operate. Oracle SOA Suite is a hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). Oracle SOA transforms complex application integration into agile and reusable service-based connectivity by mediating, routing, and managing interactions between services and applications in the enterprise and in the cloud. Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets. Join us on this webcast to find out how you can optimize the use of Oracle SOA Suite, simplifying integration, and what the next generation of SOA has to offer you.
    Agenda:
    - What's new in Oracle SOA
    - Simplifying integration
    - Application Integration and SOA
    - Cloud integration with SOA
    - Mobile Integration leveraging Oracle SOA Suite
    - Oracle Delivers on Next Generation SOA
    - Customer Examples
    - Summary and Q&A
    Webcast: Thursday October 24th, 2013, 10am CET (8am UTC / 11am EEST). Details at the Registration Page.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cloud integration, mobile integration, training, webcast, middleware, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK about how one can attain beautiful results using just one (rather tiny) texture that's being sent to the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and give a snapshot of the technique here. I see that every RGBA channel contains different grayscale information.. and that info could be used to inside a shader to obtain a colour blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map in an already cluttered texture? It makes sense to have a half-space normal map by using only RG from an RGB texture, but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details!)!? Here's the texture being used. Click to see in full size.
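    For what it's worth, one plausible way such a packed texture gets decoded - an assumption about the general technique, not a claim about what the UDK demo actually does - is that the grayscale channels act as blend weights between tiling detail layers, while a tangent-space normal stored in two channels has its third component rebuilt from the unit-length constraint. The arithmetic, written out in Python:

        # One plausible decode of a packed RGBA "mask" texture (an assumption, not the
        # demo's confirmed setup): R/G/B act as blend weights between tiling detail
        # colours, and a half-space normal stored as two channels gets Z rebuilt.
        import math

        def blend_layers(mask_rgb, layer_a, layer_b, layer_c):
            r, g, b = mask_rgb
            return tuple(r * a + g * bb + b * c for a, bb, c in zip(layer_a, layer_b, layer_c))

        def unpack_normal(x01, y01):
            # Channels are stored in [0,1]; remap to [-1,1] and derive Z for a
            # half-space (Z >= 0) normal, which is why only two channels are needed.
            x, y = 2.0 * x01 - 1.0, 2.0 * y01 - 1.0
            z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
            return (x, y, z)

        print(blend_layers((0.7, 0.2, 0.1), (0.8, 0.1, 0.1), (0.1, 0.6, 0.1), (0.2, 0.2, 0.9)))
        print(unpack_normal(0.5, 0.75))   # near-flat normal tilted toward +Y

    The same weight idea extends to the fourth channel (e.g. a specular or ambient-occlusion mask), which is how a single small texture can drive several shading terms at once.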

    Read the article

  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, what a lot of them teach is how to work with interfaces, and they all keep the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically, I'm interested in best practices for the following:
    - How to separate the model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know is how to alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and data binding, and similarly in Java I would use a PropertyChangeListener.
    - How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this so that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.
    Books I've read so far are:
    - Programming in Objective-C (4th Edition)
    - Beginning iOS 5 Development: Exploring the iOS SDK
    - The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers
    - Learn Objective-C on the Mac: For OS X and iOS
    I've also purchased the following updated books but not yet read them:
    - The Core iOS 6 Developer's Cookbook (4th Edition)
    - Programming in Objective-C (5th Edition)
    I come from a Java and C# background with 15 years of experience. I understand that many of the ways I would do things in these languages may not fit the Objective-C way of developing applications. Would someone be able to point me to a book on this topic containing this specific subject matter?

    Read the article

  • Android - big game universe

    - by user1641923
    I am new to Android development, though I have much experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of individual asteroids/comets which the player can interact with, and also a non-interactive background. I am looking for the least complicated approach to creating such a universe. My current ideas are:
    - Simply create bitmaps with space-scenery backgrounds so that they can be tiled and repeated seamlessly, construct my 2D universe out of these tiles, and then place interactive objects (planets, other spaceships) on it.
    - Use vector graphics. I would have a solid-colour background, some random background objects, and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage?
    Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory for the whole game? I am interested in technical details regarding each of the ideas, and a suggestion as to which one I should go with.
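    For idea 1, the pseudo-infinite wrap comes down to modular arithmetic over the tile grid: each frame you work out which tiles overlap the camera and wrap their indices, so a handful of bitmaps covers an endless background and they never all need to be in memory at once. A small sketch of that math (sizes are illustrative; on Android the actual draw call would be something like Canvas.drawBitmap):

        # Sketch of the wrap-around tiling math behind option 1: only the tiles that
        # overlap the camera are drawn each frame, and tile indices wrap with modulo
        # so the universe has no visible border. Sizes are illustrative.
        TILE = 512          # tile bitmap size in pixels
        WORLD_TILES = 64    # the world is 64x64 tiles, then repeats seamlessly

        def visible_tiles(cam_x, cam_y, view_w, view_h):
            """Yield (tile_index_x, tile_index_y, screen_x, screen_y) for one frame."""
            first_col, first_row = int(cam_x // TILE), int(cam_y // TILE)
            cols = view_w // TILE + 2
            rows = view_h // TILE + 2
            for row in range(first_row, first_row + rows):
                for col in range(first_col, first_col + cols):
                    yield (col % WORLD_TILES, row % WORLD_TILES,   # wrapped index into the tile set
                           col * TILE - cam_x, row * TILE - cam_y) # where to draw it on screen

        # Even with a huge camera offset, only about (view / TILE)^2 bitmaps are touched:
        print(len(list(visible_tiles(cam_x=10_000_000, cam_y=3_500_000, view_w=1920, view_h=1080))))

    The interactive objects (planets, asteroids) can then live in world coordinates and be culled against the same camera rectangle, independent of how the background is drawn.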

    Read the article

  • Deal Registration Moves to Oracle Partner Store (OPS)- The Four Action Items for Partners

    - by Richard Lefebvre
    In November 2012, Oracle's partner deal registration process will move to the Oracle Partner Store (OPS). During this time, OPS will become the single source for partners to register deals, obtain deal status, and place orders. What will partners need to do?
    1. Request an OPS Account - If your company is new to OPS, the first thing you need to do is request an account (if your company already has an OPS account, go to step 2). It's important to have the person who will be managing your OPS account make this request as soon as possible. They will be set up as your company's primary administrator.
    2. Set Up Users in OPS - Setup of users can start immediately, and will be handled by the primary OPS administrator at your company. The process is simple, but all existing users of Global PRM (Partner Relationship Management) deal registration will need to be set up in OPS before November 14, 2012.
    3. Review/Action Any Registrations Pending Submission in PRM - Prior to November 14, 2012, all pending registrations should be submitted in the existing PRM system. It is important that this step is complete so registrations will not need to be re-entered when the system is moved to OPS on November 17, 2012. Registrations pending submission are easily identified on the registration listing screen with either "Incomplete" or "Returned to Partner" in the status column.
    4. Attend Training - Oracle will offer multiple VAD and VAR training sessions beginning October 29, 2012. It is recommended that all users attend one of these important sessions.
    Detailed instructions on each of these tasks can be found on the OPS Information Page. OPS will offer several enhancements to the deal registration process, including:
    - Simplified Registration Form
    - Easier Product Selection
    - Expanded Browser Support
    - Shared Registration Visibility Between VAD and VAR
    - Pre-set Customer Selection From Partner Ordering Base
    Best Regards,
    Titina Ott
    Vice President, Worldwide A&C Systems And Business Processes

    Read the article

  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is commonly used for this kind of project. However, studying Haskell made me consider using the FP paradigm for modelling a system of components. Let me elaborate: let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.), and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied to each component are the same (a Galerkin method, for example). If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse), and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into data parts and the functions that act upon the data in order to get the solution for the PDE. This approach seems simpler to me, assuming that linear operations on data are trivial and that the parameters are constant. What if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used. I know that Haskell has monads for that. To conclude: would implementing the FP approach actually be simpler, less time-consuming and easier to manage (e.g. adding a different type of component or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.

    Read the article

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production]. At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find how the process is accomplished elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc., but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources detailing specific workflows or attempt to explain (generically) how a change could/should be tracked and moved through the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

    Read the article
