Search Results

Search found 10556 results on 423 pages for 'practical approach'.

Page 56/423 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • How I Work: A Cloud Developer's Workstation

    - by BuckWoody
    I've written here a little about how I work during the day, including things like using a stand-up desk (still doing that, by the way). Inspired by a Twitter conversation yesterday, I thought I might explain how I set up my computing environment. First, a couple of important points. I work in Cloud Computing, specifically (but not limited to) Windows Azure. Windows Azure has features to run a Virtual Machine (IaaS), run code without having to control a Virtual Machine (PaaS) and use databases, video streaming, Hadoop and more (a kind of SaaS for tech pros). As such, my designs run the gamut of on-premises, VM's in the Cloud, and software that I write for a platform. I focus on data primarily, meaning that I design a lot of systems that use an RDBMS (like SQL Server or Windows Azure Databases) or a NoSQL approach (MongoDB on Azure or large-scale Key-Value Pairs in Table storage) and even Hadoop and R, and also Cloud Numerics in F#. All that being said, those things inform my choices below.

    Hardware

    I have a Lenovo X220 tablet/laptop which I like a great deal - it's a light, tough, extremely fast system. When I travel, that's the system I take. It has 8GB of RAM and an SSD drive. I sometimes use it to develop or work at a client's site, on the road, or in the living room when I'm not in my home office. My main system is a GateWay DX430017 - I've maxed it out on RAM, and I have two 1TB drives in it. It's not only my workstation for work; I leave it on all the time and it streams our videos, music and books. I have about 3,400 e-books, and I've just started using Calibre to stream the library. I run Windows 8 on it so I can set up Hyper-V images, since Windows Azure allows me to move regular Hyper-V disks back and forth to the Cloud. That's where all my "servers" are when I have to use an IaaS approach. The reason I use a desktop-style system rather than a laptop-only approach is that a good part of my job is setting up architectures to solve really big, complex problems. That means I have to simulate entire networks on-premises, along with the Hybrid Cloud approach I use a lot. I need a lot of disk space and memory for that, and I use two huge monitors on my stand-up desk. I could probably use 10 monitors if I had the room for them. Also, since it's our home system as well, I leave it on all the time and it doesn't travel.

    Software

    For the software on my systems, it's important to keep in mind that I not only write code, but I design databases, teach, present, and create Linux and other environments.

    Windows 8 - While the jury is still out for me on the new interface, the context-sensitive search, integrated everything, and speed make it just hands-down the right choice. I've evaluated a server OS, Linux, even an Apple, but I'm just not as efficient on those as I am with Windows 8.

    Visual Studio Ultimate - I develop primarily in .NET (C# and F# mostly), I use Team Foundation Server in the cloud, and I'm asked to do everything from UI to Services, so I need everything.

    Windows Azure SDK, Windows Azure Training Kit - I need the first to set up my Azure PaaS coding, and the second has all the info I need for PaaS, IaaS and SaaS. This is primarily how I get paid. :)

    SQL Server Developer Edition - While I might install Oracle, MySQL and Postgres on my VM's, the "outside" environment is SQL Server for an RDBMS. I install the Developer Edition because it has the same features as Enterprise Edition, and comes with all the client tools and documentation.

    Microsoft Office - Even if I didn't work here, this is what I would use. I've just grown too accustomed to doing business this way to change, so my advice is always "use what works", and this does. The parts I use are:

    OneNote (and a Math Add-in) - I do almost everything - and I mean everything - in OneNote. I can code, do high-end math, present, design, collaborate and more. All my notebooks are on my SkyDrive, so I can use them from any system, anywhere. If you take the time to learn this program, you'll be hooked.

    Excel with PowerPivot - Don't make that face. Excel is the world's database, and every Data Scientist I know - even the ones where I teach at the University of Washington - knows it, uses it, and loves it.

    Outlook - Primary communications, CRM and contact tool. I have all of my social media hooked up to it, so when I get an e-mail from you, I see everything, see all the history we've had on e-mail, find you on a map and more.

    Lync - I was fine with LiveMeeting, although it has its moments. For me, the Lync client is tres-awesome. I use it throughout my day, present on it, stay in contact with colleagues and the folks on the dev team (who wish I didn't have it) and more.

    PowerPoint - Once again, don't make that face. Whenever I see someone complaining about PowerPoint, I have 100% of the time found they don't know how to use it. If you suck at presenting or creating content, don't blame PowerPoint. Works great on my machine. :)

    Zoomit - On Windows 7 (and 8 as well) there's a built-in magnifier, but I install Zoomit out of habit. It enlarges the screen. If you don't use one of these tools (or their equivalent on some other OS) then you're presenting/teaching wrong, and you should stop presenting/teaching until you get them and learn how to show people what you can see on your tiny, tiny monitor. :)

    Cygwin - Unix for Windows. OK, that's not true, but it's mostly that. I grew up on mainframes and Unix (IBM and HP, thank you) and I can't imagine life without sed, awk, grep, vim, and bash. I also tend to take a lot of the "Science", "Development" and "Database" packages in it as well.

    PuTTY - Speaking of Unix, when I need to connect to my Linux VM's in Windows Azure, I want to do it securely. This is the tool for that.

    Notepad++ - Somewhere between torturing myself in vim and luxuriating in OneNote is Notepad++. Everyone has a favorite text editor; this one is mine. Too many features to name, and it's free.

    Browsers - I install Chrome, Firefox and of course IE. I know it's in vogue to rant on IE, but I tend to think for myself a great deal, and I've had few (none) problems with it. The others I have for the haterz that make sites that won't run in IE.

    Visio - I've used a lot of design packages, but none have the extreme metadata edit capabilities of Visio. I don't use it all the time - it can be rather heavy - but what it does, it does really well. I also present this way when I'm not using PowerPoint: I just bring up Visio and diagram away as I'm chatting with clients. Depending on what we're covering, this can be the right tool for that.

    Tweetdeck - The AIR one, not that new disaster they came out with. I live on social media, since you, dear readers, are my cube-mates. When I get tired of you all, I close Tweetdeck. When I need help or someone needs help from me, or if I want to see a picture of a cat while I'm coding, I bring it up. It's up most all day and night.

    Windows Media Player - I listen to Trance or Classical when I code, and I find music managers overbearing and extra. I just use what comes in the box, and it works great for me.

    R - F# and Cloud Numerics now allow me to load in R libraries (yay!) and I use this for statistical work on big data loads.

    Microsoft Math - One of the most amazing, free, rich, amazing, awesome, amazing calculators out there. I get the 64-bit version for quick math conversions, plots and formula-checks.

    Python - I know, right? Who knew that the scientific community loved Python so much? But they do. I use 2.7; not as much runs with 3+. I also use IronPython in Visual Studio, or I edit in Notepad++.

    Camstudio recorder / Windows PSR - In much of my training, and all of my teaching at the UW, I need to show a process on a screen. Camstudio records screen and voice, and it's free. If I need to make static training, I use the Windows PSR tool that's built right in. It's ostensibly for problem duplication, but I use it to record for training.

    OK - your turn. Post a link to your blog entry below, and tell me how you set your system up.

    Read the article

  • Procedural Generation of tile-based 2d World

    - by Matthias
    I am writing a 2D game that uses tile-based top-down graphics to build the world (i.e. the ground plane). Built manually, this works fine. Now I want to generate the ground plane procedurally at run time. In other words: I want to place the tiles (their textures) randomly on the fly. Of course I cannot create an endless ground plane, so I need to restrict how far from the player character (on which the camera focuses) I procedurally generate the ground floor. My approach would be like this: I have a 2D grid that stores all tiles of the floor at their correct x/y coordinates within the game world. When the player moves the character, and therefore also the camera, I constantly check whether there are empty locations in my x/y map within a maximum distance from the character, i.e. cells in my virtual grid that have no tile set. In such a case I place a new tile there. Therefore the player would always see the ground plane without gaps or empty spots. I guess that would work, but I am not sure whether it is the best approach. Is there a better alternative, maybe even a best practice for my case?
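    For illustration, here is a minimal C# sketch of the approach the question describes - a sparse grid keyed by cell coordinates, filled in around the character. All names are illustrative, and Random stands in for whatever tileset/noise logic would actually pick the texture:

    using System;
    using System.Collections.Generic;

    public struct Cell
    {
        public int X, Y;
        public Cell(int x, int y) { X = x; Y = y; }
    }

    public class GroundPlane
    {
        private readonly Dictionary<Cell, int> tiles = new Dictionary<Cell, int>();
        private readonly Random random = new Random();
        private readonly int textureCount;

        public GroundPlane(int textureCount) { this.textureCount = textureCount; }

        // Call whenever the character crosses a tile boundary: fills every
        // empty cell within 'radius' tiles of the character with a texture id.
        public void EnsureFilled(Cell character, int radius)
        {
            for (int x = character.X - radius; x <= character.X + radius; x++)
                for (int y = character.Y - radius; y <= character.Y + radius; y++)
                {
                    var cell = new Cell(x, y);
                    if (!tiles.ContainsKey(cell))
                        tiles[cell] = random.Next(textureCount);
                }
        }
    }

    Deriving the random pick from the cell coordinates (instead of a shared Random) would make the world deterministic, so revisited areas look the same even if far-away cells are evicted to save memory.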

    Read the article

  • CodeCritics.com: A no nonsense place for coders to critique code and raise awareness of standards and "good coding standards" [closed]

    - by Visionary Software Solutions
    StackOverflow has been a boon for increasing programming knowledge by allowing developers to ask for help and knowledge related to programming. Oftentimes these questions boil down to:

    - This code is broken, fix it
    - I don't know how to do this
    - Is this the best approach? (a hard question to answer on StackExchange, but democratic)

    Oftentimes, however, these questions are discussed at a very high level. "I use web services with a proxy client to ..." But, as Grady Booch is fond of saying, "the Truth is raw, naked, running code". Those high-level descriptions can be accomplished in any number of ways. Programming is an art, and there are an infinite number of different ways to do things. But some are better than others. A site devoted to Q&A can help increase knowledge... a site devoted to the critique of code can help elevate standards and result in higher-quality knowledge. By upvoting the most elegant ways to solve a short, concise problem statement, or just looking at a piece of code and saying "this is ugly, how can we fix it?", we can increase community participation in discussions about the substantive details of an approach: "Is my commenting clear?" "Is this 3 nested for-loops with a continue that breaks in a special case a good way of building an object?" "Does this extremely generic and polymorphic inheritance hierarchy have issues?" Code is an art/craft and science/engineering artifact. Doesn't it deserve the same type of review treatment as a painting and an experiment? For praising those that provide that moment of zen when looking at exceptionally good code that makes you believe in a better tomorrow, and panning those whose offal is so offensive that were you to meet them on the job you'd say "YOU! GET OUT!!!" Hence, CodeCritics: a collaborative critiquing platform in the style of StackOverflow, focused solely on critiquing code, that can act as collaborative code review and assist in the discovery of Design Patterns.

    Read the article

  • Keep Your Eye on the Ball

    - by [email protected]
    With the FIFA World Cup 2010 in South Africa almost a week underway, soccer fans all around the world are talking about at least two things: that typical vuvuzela sound, and the new Jabulani ball - which, they say, moves unpredictably and is difficult to handle, with the altitude of the World Cup stadiums somehow also a contributing factor. (Picture taken from http://www.flickr.com/photos/warrenski/4143923059/ under a Creative Commons license.)

    Although FIFA states that it hasn't received any official complaints, the end users don't seem to be very happy with this new ball. This brings me to a comparison with IT management and testing. When you're in a situation where you're introducing a new product - in IT terms, introducing a new application - you would like to test all possible scenarios that your end users could be using and experiencing. However, that's a very time- and resource-intensive process to go through for every application change or update. It's like getting ready for the big game with no game plan.

    That's why a new approach has been developed, one that's based on the 80/20 rule: testing 80% of the application will cost about 20% of the effort. The remaining 20% of your application will not be tested before deployment, but monitored with a real user monitoring solution immediately after deployment. These tools track all user experiences, including error messages and the performance and availability metrics from an end user perspective. Should any anomaly occur, you would be able to repair it quickly so you and your end users can get back into the game. These real user sessions can easily be converted into testing scripts, so the 80% of application testing can be complemented with the remaining 20%.

    The Oracle Enterprise Manager 11g group of products offers both the real user monitoring solution, with Oracle Real User Experience Insight, and the required testing solution, with Oracle Application Testing Suite. Visit our Oracle Enterprise Manager 11g resource center and find out how its Business-Driven IT Management approach will help you keep your eye on your business ball. Happy World Cup.

    Read the article

  • TDD/Tests too much an overhead/maintenance burden?

    - by MeshMan
    So you've heard it many times from those who do not truly understand the value of testing. Just to start things out, I'm a follower of Agile and Testing... I recently had a discussion about performing TDD on a product re-write where the current team does not practice unit testing on any level, and has probably never heard of the dependency injection technique or test patterns/design etc. (we won't even get on to clean code). Now, I am fully responsible for the rewrite of this product, and I'm told that attempting it in the fashion of TDD will merely make it a maintenance nightmare and impossible for the team to maintain. Furthermore, as it's a front-end application (not web-based), adding tests is pointless: as the business drives changes (by changes they mean improvements, of course), the tests will become out of date, and other developers who come onto the project in the future will not maintain them, so they become more of a burden to fix, etc. I can understand that TDD in a team that does not currently hold any testing experience doesn't sound good, but my argument in this case is that I can teach my practice to those around me, and furthermore, I know that TDD makes BETTER software. Even if I were to produce the software using TDD and throw all the tests away on handing it over to a maintenance team, it would surely be a better approach than not using TDD at all from the start? I've been shot down as I've mentioned doing TDD on most projects, for a team that has never heard of it. The thought of "interfaces" and strange-looking DI constructors scares them off... Can anyone please help me with what is normally a very short conversation of trying to sell TDD and my approach to people? I usually have a very short window of argument before falling at the knees of the company/team.

    Read the article

  • Alternative to Game State System?

    - by Ricket
    As far as I can tell, most games have some sort of "game state system" which switches between the different game states; these might be things like "Intro", "MainMenu", "CharacterSelect", "Loading", and "Game". On the one hand, it totally makes sense to separate these into a state system. After all, they are disparate and would otherwise need to be in a large switch statement, which is obviously messy; and they certainly are well represented by a state system. But at the same time, I look at the "Game" state and wonder if there's something wrong about this state system approach. Because it's like the elephant in the room; it's HUGE and obvious but nobody questions the game state system approach. It seems silly to me that "Game" is put on the same level as "Main Menu". Yet there isn't a way to break up the "Game" state. Is a game state system the best way to go? Is there some different, better technique to managing, well, the "game state"? Is it okay to have an intro state which draws a movie and listens for enter, and then a loading state which loops on the resource manager, and then the game state which does practically everything? Doesn't this seem sort of unbalanced to you, too? Am I missing something?
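    For illustration, here is a minimal C# sketch of one commonly suggested alternative (all names illustrative): a stack of states instead of a flat switch. With a stack, "Game" can push sub-states such as a pause menu or loading screen on top of itself, so it no longer has to sit awkwardly at the same level as "MainMenu":

    using System.Collections.Generic;

    public interface IGameState
    {
        void Update(float dt);
        void Draw();
    }

    public class StateStack
    {
        private readonly Stack<IGameState> states = new Stack<IGameState>();

        public void Push(IGameState state) { states.Push(state); }
        public void Pop() { states.Pop(); }

        public void Update(float dt)
        {
            if (states.Count > 0)
                states.Peek().Update(dt);  // only the top state runs
        }
    }

    The same idea extends inward: the "Game" state itself can own its own little stack or set of sub-systems, which addresses the feeling that one state "does practically everything".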

    Read the article

  • E-Business Integration with SSO using AccessGate

    - by user774220
    Moving away from the legacy Oracle SSO, Oracle E-Business Suite (EBS) came up with EBS AccessGate as the way forward to provide Single Sign On with Oracle Access Manager (OAM). As opposed to an AccessGate in OAM terminology, EBS AccessGate has no specific connection with OAM with respect to configuration. Instead, EBS AccessGate uses the header variables sent from the SSO system to create the native user session, like any other SSO-enabled web application.

    E-Business Suite Integration with Oracle Access Manager

    It is a known fact that E-Business Suite requires Oracle Internet Directory (OID) as the user repository to enable Single Sign On. This is due to the fact that E-Business Suite needs to be registered with OID for Single Sign On. Additionally, E-Business Suite uses "orclguid" in OID to map the Single Sign On user to the corresponding local user profile. During authentication, EBS AccessGate expects the SSO system to return the orclguid and the EBS username (stored as a user attribute in the SSO user store) in two header variables, USER_ORCLGUID and USER_NAME respectively. A diagram in the original post depicts the authentication flow once the SSO system returns the EBS username and orclguid after successful authentication.

    Topic to brainstorm: EBS AccessGate as a generic SSO enablement solution for E-Business Suite

    Even though EBS AccessGate is suggested as an integration approach between OAM and Oracle E-Business Suite, this section attempts to look at EBS AccessGate as a generic solution approach to provide SSO to Oracle E-Business Suite using any Web SSO solution. From the above points, the only dependency on the SSO system is that it should be able to return the corresponding orclguid from the OID which is configured with the E-Business Suite. This can be achieved by a variety of approaches:

    - By using the same OID referred to by E-Business Suite as the Single Sign On user store.
    - If the SSO system is using a different user store, then:
      - Use DIP or OIM to synch orclguid from the E-Business Suite OID to the SSO user store
      - Use OVD to provide an LDAP view where orclguid from the E-Business Suite OID is part of the user entity in the user store referred to by the SSO system

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx

    For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions. Also, it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

    vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To be able to do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

    C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a PowerShell script:

    & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command will consume a publish.xml file where the connection string and the deployment options are specified. You must be familiar with it if you have done some deployments from Visual Studio. If not, please refer to the above-mentioned article by Jakob Ehn.

    It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx

    Read the article

  • Technical development decision for my newly established software company

    - by test test
    I have a new software company where I am planning to develop a CRM system. I have settled on the technological approach I am going to use:

    - I will use an open-source Java-based CRM engine.
    - I will use a third-party reporting tool named JasperReports to provide reporting capabilities for the CRM.
    - I will develop the interface, and any customization which the customer might ask for, using the ASP.NET MVC framework, since my knowledge and experience are based on ASP.NET.
    - And I will use the CRM API to integrate my ASP.NET web application with the Java-based CRM.

    I have developed a simple demo which integrates these three main components (CRM engine, ASP.NET application and the reporting tool) and they worked well. But I am afraid of the following risk that I might face if I go with the above approach: I would have to hire developers with different skills and experience:

    - Developers with Java skills, to be able to modify the Java-based CRM and write plug-ins - when needed - to extend the CRM's capabilities.
    - Other developers with ASP.NET skills, to be able to build the application, such as application forms, the portal from where users will be able to start the CRM processes, searching capabilities, etc.

    So might the above point raise some risks when I start hiring a new team and start building the CRM application, or am I on the right track at this early stage?

    Read the article

  • Architecting Python application consisting of many small scripts

    - by Duke Dougal
    I am building an application which, at the moment, consists of many small Python scripts. Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script; typically the script will do a small unit of processing (for example, parse the email and store some database fields), then an item will be placed on the next queue for further processing, until eventually the email has finished going through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should implement this live. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts but one large application?

    Specific questions:

    - How should I run each of these scripts - as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this?
    - Or is the idea of many small scripts not a good one? Would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution of functionality for each queue?
    - What is the current preferred approach to daemonising Python scripts?

    Broadly, I would welcome any comments or opinions on any aspect of this. Thanks.

    Read the article

  • Value of the HTML5 lang attribute

    - by user359650
    I'm working on a website which will offer localized content following the language+region approach as described on this W3.org page (e.g. fr-CA for Canadian French content, and fr-FR for "French French" content). As we consider the content for each language+region to be unique, it is crucial to us that search engines properly identify and serve the content accordingly. By looking it up on the Internet (e.g. this question), it appears that most people recommend the use of an ISO 639 language code in the HTML lang attribute to describe the content language. Following this recommendation, we would end up using <html lang="fr">, which wouldn't enable the differentiation between the aforementioned language+region combinations. When reviewing the HTML4 specification, it seems that using language+region as a language code would be perfectly OK, as the en-US example is given as one possible value. However, I couldn't find any confirmation of this in the HTML5 specification, which doesn't seem to provide any example as to the possible allowed values. From there I tried to get a de facto answer by looking at what the web giants are doing. I looked at what Facebook are doing: they offer Canadian French and French French versions of their websites with (slightly) different content, whilst the HTML lang value remains the same:

    fr-CA
    URL: http://fr-ca.facebook.com
    HTML lang attribute: <html lang="fr">
    Translation of the word 'email': courriel

    fr-FR
    URL: http://fr-fr.facebook.com/
    HTML lang attribute: <html lang="fr">
    Translation of the word 'email': Adresse électronique

    Q: What is the recommended/standard way of describing content that was localized using the language+region approach in HTML5?

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

    - Poll repository for changes
    - Pull from repository
    - Install dependencies
    - Run tests

    In a Dockerized workflow, it would be something like this:

    - Poll repository for changes
    - Pull from repository
    - Build docker image
    - Run docker image as container
    - Run tests
    - Kill docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing a different kind of service (for example, a background service that only communicates with a database), a different approach would be required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation and do something like this:

    - Poll repository for changes
    - Pull from repository
    - Build docker image
    - Run docker image as container
    - Open SSH session into container
    - Run tests
    - Kill docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container, and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
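    One option worth noting (a sketch; the image name and test script are made up) is to make the test run the container's own command, so the "run tests" step needs neither sockets nor SSH:

    docker build -t myapp-test .
    docker run --rm myapp-test ./run_tests.sh

    Here docker run exits with the exit code of ./run_tests.sh, so the CI server can read pass/fail directly without ever entering the container, and isolation is preserved.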

    Read the article

  • Ubuntu One Sync as a File Backup Solution?

    - by Jeff
    I was hoping to utilize Ubuntu One, and in particular the syncing feature within Ubuntu One, to provide offsite backup for some of my files. My intention was to mark any of my folders that have important files as 'folders to synchronize' with Ubuntu One. It works great in that whenever an important file is placed in the folder, the file is copied up to Ubuntu One (hence creating a backup). However, if any of these important files are lost or accidentally deleted from my computer, then due to the synchronization it is also immediately deleted from Ubuntu One. This approach does not work very well as a backup. On one hand I really like the automatic way in which the sync feature will upload any of my important files to Ubuntu One, but on the other hand if I lose the file on my computer it will likely be taken off the cloud as well (via synchronization). What approach are others taking to back up their important files to Ubuntu One? I didn't want to have to manually upload my important files to Ubuntu One and remember to upload other important files as they are created on my computer. Your thoughts and suggestions are greatly appreciated.

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game, where there will be a matchmaking system that will put you against another human player. This will be the only game mode available: a 1vs1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here and I already know that I cannot trust the client and have to make all verifications server side. I intend to have a server (I need one for the matchmaking anyway) and I intend to make some verifications server side, but if I want to check everything server side my server has to keep track of the state of all current games and check every action, and I don't have the money/infrastructure to support that server. My idea is to make clients check and verify some of the actions made by their opponent*, and if they find some illegal action, notify the possible cheating to the server and have the server verify it. This will still require my server to keep track of all current games, but it will save resources by only checking things that cannot be checked client side (like card order in the deck) and only checking other things when they are actually reported as wrong.

    *(only those they can check without enabling cheating themselves! For example: they can't check whether a played card was actually in the opponent's hand, because that would require them to know all cards in that hand)

    Summing up, my questions are:

    - Is this a viable approach?
    - Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it?
    - Do you know any game that has successfully or unsuccessfully tried a similar approach?

    Thanks all for reading and answering
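    To make the idea concrete, here is a minimal C# sketch of the "verify only when reported" server piece (all types are illustrative stubs, not from the question):

    using System.Collections.Generic;

    public class Move { /* e.g. which card was played, by whom */ }

    public class GameState
    {
        public static GameState Initial() { return new GameState(); }
    }

    public interface IRulesEngine
    {
        bool IsLegal(GameState state, Move move);
        GameState Apply(GameState state, Move move);
    }

    public class GameSession
    {
        private readonly List<Move> history = new List<Move>();

        public void RecordMove(Move move) { history.Add(move); }  // cheap: store only

        // Run only when a client reports its opponent, so the server pays the
        // full validation cost just for the (hopefully few) suspicious games.
        public bool ReplayIsLegal(IRulesEngine rules)
        {
            var state = GameState.Initial();
            foreach (var move in history)
            {
                if (!rules.IsLegal(state, move)) return false;  // cheat confirmed
                state = rules.Apply(state, move);
            }
            return true;
        }
    }

    In normal play the server only appends to the move log; the replay loop and full rules check run on demand, which is the resource saving the question is after.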

    Read the article

  • ArchBeat Link-o-Rama for November 30, 2012

    - by Bob Rhubart
    Oracle SOA Database Adapter Polling in a Cluster: A Handy Logical Delete Pattern | Carlo Arteaga
    "Using the SOA database adapter usually becomes easier when the adapter is simply viewed and treated as a gateway between the Oracle SOA composite world and the database world," says Carlo Arteaga. "When viewing the adapter in this light one should come to understand that the adapter is not the ultimate all-in-one solution for database access and database logic needs."

    OIM 11g: Multi-thread approach for writing custom scheduled job | Saravanan V S
    Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    When Premature Optimization Isn't | Dustin Marx
    "Perhaps the most common situations in which I have seen developers make bad decisions under the pretense of 'avoiding premature optimization' is making bad architecture or design choices," says Dustin Marx.

    Protecting Intranet and Extranet Applications with a Single OAM 11g Deployment | Brian Eidelman
    Oracle Fusion Middleware A-Team member Brian Eidelman's post, part of the Oracle Access Manager Academy series, explores issues and solutions around setting up a single OAM deployment to protect both intranet and extranet apps.

    Thought for the Day
    "Never make a technical decision based upon the politics of the situation, and never make a political decision based upon technical issues." — Geoffrey James
    Source: SoftwareQuotes.com

    Read the article

  • Functional Languages that compile to Android's Dalvik VM?

    - by Berin Loritsch
    I have a software problem that fits the functional approach to programming, but the target market will be on the Android OS. I ask because there are functional languages that compile to Java's VM, but Dalvik bytecode != Java bytecode. Alternatively, do you know if the dx utility can intelligently convert the .class files generated from functional languages like Scala?

    Edit: In order to add a bit more helpfulness to the community, and also to help me choose better, can I refine the question a bit?

    - Have you used any alternate languages with Dalvik? Which ones?
    - What are some "gotchas" (problems) that I might run into?
    - Is performance acceptable? By that, I mean the application still feels responsive to the user.

    I've never done mobile phone development, but I grew up on constrained devices and I'm under no illusion that there is a cost to using non-standard languages with the platform. I just need to know if the cost is such that I should shoe-horn my approach into the default language (i.e. apply functional principles in the OOP language).

    Read the article

  • DDD Model Design and Repository Persistence Performance Considerations

    - by agarhy
    So I have been reading about DDD for some time and trying to figure out the best approach on several issues. I tend to agree that I should design my model in a persistence-agnostic manner, and that repositories should load and persist my models in valid states. But are these approaches realistic in practice? I mean, it's normal for a model to hold a reference to a collection of another type. Persisting that model should mean persisting the entire collection. Fine. But do I really need to load the entire collection every time I load the model? Probably not. So I can have specialized repositories: some that load maybe a subset of the object graph via DTOs, and others that load the entire object graph. But when do I use which? If I have DTOs, what's stopping client code from directly calling them and completely bypassing the model? I can have mappers and factories to create my models from DTOs maybe? But depending on the design of my models that might not always work, or it might not allow my models to be created in a valid state. What's the correct approach here?
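    One commonly suggested split, as a hedged C# sketch (all names illustrative): keep the repository for whole, always-valid aggregates, and expose lightweight projections through a separate read-only query interface, so DTOs can never be used to mutate the model:

    using System;

    public class Order { /* aggregate root enforcing its invariants */ }

    public class OrderSummaryDto
    {
        public Guid Id;
        public decimal Total;   // display-only projection, no behavior
    }

    public interface IOrderRepository
    {
        Order Load(Guid id);    // full object graph, valid state guaranteed
        void Save(Order order);
    }

    public interface IOrderQueries
    {
        OrderSummaryDto GetSummary(Guid id);  // bypasses the model, for reads only
    }

    Under this split the "when do I use which" question reduces to: writes always go through the aggregate repository, while reads may take the cheap DTO path; since only repositories construct aggregates, DTOs can't sneak invalid state past the model.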

    Read the article

  • Separation of development responsibilities in a new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members to ensure we get the least amount of overlap of work, and so help make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months - 1 year (although not all developers are likely to stay on and some might filter off towards the end). Our team is going to be small so this will help out a bit I believe. The team will essentially consist of:

    - 3 x developers (all different levels, i.e. more senior, intermediate and junior)
    - 1 x project manager / product owner / tester
    - An external company responsible for doing our design work

    General project/development decisions so far have included:

    - Develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company)
    - Use MVVM architecture
    - Use Ninject and DI where possible
    - Attempt to use TDD as much as possible to drive development
    - Keep our controllers as skinny as possible
    - Keep our views as simple as possible

    During our discussions two approaches have been broached as to how to separate the workload given our objectives outlined above.

    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

    View prototypes (**Graphic designer**)
        |
        - Mockups
        |
    Views (Razor and view helpers etc) & Javascript (**Developer 1**)
        |
        - View models (Integration point)
        |
    Controllers and Application logic (**Developer 2**)
        |
        - Models (Integration point)
        |
    Domain model and persistence (**Developer 3**)

    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task (story) from view - controller - model.

    QUESTION: For those who have worked in small teams developing MVC projects, how have you managed the workload distribution in this situation? I can't imagine the junior would be responsible for building parts of the underlying architecture, so would giving them responsibility for the view make sense, considering we are trying to keep it simple?

    Read the article

  • How can I support objects larger than a single tile in a 2D tile engine?

    - by Yheeky
    I'm currently working on a 2D engine containing an isometric tile map. It's running quite well, but I'm not sure whether I've chosen the best approach for that kind of engine. To give you an idea of what I'm thinking about right now, let's have a look at a basic object model for a tile map and its objects:

    public class TileMap
    {
        public List<MapRow> Rows = new List<MapRow>();
        public int MapWidth = 50;
        public int MapHeight = 50;
    }

    public class MapRow
    {
        public List<MapCell> Columns = new List<MapCell>();
    }

    public class MapCell
    {
        public int TileID { get; set; }
    }

    With those objects it's only possible to assign a tile to a single MapCell. What I want my engine to support is groups of MapCells, since I would like to add objects to my tile map (e.g. a house with a size of 2x2 tiles). How should I do that? Should I edit my MapCell object so that it has a reference to other related tiles, and how can I find an object when clicking on a single MapCell? Or should I take another approach, using a global container with all objects in it?
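    One possible direction, sketched in C# against the classes above (MapObject and Occupant are made-up names): let every cell covered by a large object hold a reference to one shared object record, so a click on any covered cell resolves to the whole thing.

    public class MapObject
    {
        public int ObjectID { get; set; }
        public int Width { get; set; }    // in tiles, e.g. 2 for a 2x2 house
        public int Height { get; set; }
    }

    // Extended cell: all cells covered by a big object share the same reference.
    public class MapCell
    {
        public int TileID { get; set; }
        public MapObject Occupant { get; set; }   // null for plain ground
    }

    Placement assigns the single MapObject instance to all Width x Height cells it covers, so hit-testing stays a cell lookup followed by reading Occupant - no global search. A separate global list of objects can still coexist with this for iteration and save games; the two views just need to be kept in sync.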

    Read the article

  • Combo/Input LOV displaying non-reference key value

    - by [email protected]
    It's a very common LOV use-case that we want to display a non-key value in the LOV but store the key value in the DB. I had to do the same in a sample application I was building. During the implementation of this, I realized that there are multiple ways to achieve it. I am going to describe each of these below.

    Example: Let's take our classic HR schema. I have 2 tables, Employee and Department, where Dno is the foreign key attribute in Employee that references the Department table. I want to create a LOV for Department such that the list always displays Dname instead of Dno. However, when I update it, it should update the reference key Dno. To achieve this I had 3 alternatives.

    1) Approach 1: Create a composite VO and add the attributes from Department into Employee using a join. Refer to the blog http://andrejusb.blogspot.com/2009/11/defining-lov-on-reference-attribute-in.html

    Positives:
    1. Easy to implement and use.
    2. We can use this attribute directly in queries defined on the new attribute, i.e. if I have to display it inside a query panel.

    Negative: We have to create an additional join on the VO.

    Ex:
    SELECT Employees.EMPLOYEE_ID,
           Employees.FIRST_NAME,
           Employees.LAST_NAME,
           Employees.EMAIL,
           Employees.PHONE_NUMBER,
           Department.Dno,
           Department.Dname
    FROM EMPLOYEES Employees, Department Department
    WHERE Employees.Dno = Department.Dno

    2) Approach 2:

    Read the article

  • XNA and C# vs the 360's in order processor

    - by Richard Fabian
    Having read this rant, I feel that they're probably right (seeing as the Xenon and the CELL BE are both highly sensitive to branchy and cache-ignorant code), but I've heard that people are making games fine with C# too. So, is it true that it's impossible to do anything professional with XNA, or was the poster just missing what makes XNA the killer API for game development? By professional, I don't mean in the sense that you can make money from it, but more in the sense that the more professional games have physics models and animation systems that seem outside the reach of a language with such definite barriers to the intrinsics. If I wanted to write a collision system or fluid dynamics engine for my game, I don't feel like C# offers me the chance to do that, as the runtime code generator and the lack of intrinsics get in the way of performance. However, many people seem to be fine working within these constraints, making their successful games but seemingly avoiding any of the problems by omission. I've not noticed any XNA games do anything complex other than what's already provided by the libraries, or handled by shaders. Is this avoidance of the more complex game dynamics because of the limitations of C#, or just people concentrating on getting it done? In all honesty, I can't believe that AI War can maintain that many units of AI without some data-oriented approach, or some dirty C# hacks to make it run better than the standard approach, and I guess that's partly my question too: how have people hacked C# so it's able to do what games coders do naturally with C++?
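    As an aside, the kind of "data-oriented approach" the question alludes to is expressible in plain C#. A sketch (names illustrative), relying on the fact that arrays of value types are stored contiguously in .NET:

    // An array of structs keeps the hot data contiguous in memory, so the
    // update loop streams through cache lines instead of chasing object
    // references - one of the usual mitigations for in-order CPUs.
    public struct UnitMotion
    {
        public float X, Y, VX, VY;
    }

    public class MotionSystem
    {
        private readonly UnitMotion[] units = new UnitMotion[10000];

        public void Integrate(float dt)
        {
            for (int i = 0; i < units.Length; i++)
            {
                units[i].X += units[i].VX * dt;  // sequential, branch-free
                units[i].Y += units[i].VY * dt;
            }
        }
    }

    This avoids per-object allocations and GC pressure as well, which matters on the 360's runtime; it is not a claim about how AI War actually does it.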

    Read the article

  • .NET access to the GPU for compute purposes

    - by Daniel Moth
    In the distant past I talked about GPGPU and Microsoft's then approach of DirectCompute. Since then, of course, we now have C++ AMP coming out with Visual Studio 11, so there is an easier mainstream way for developers to access the GPU for compute purposes, using C++. The question occasionally arises of how a .NET developer can access the GPU for compute purposes from their C# (or VB) code. The answer is by interoping from the managed code to a native DLL, and in the native DLL using C++ AMP. As a long-term .NET developer myself, I can tell you this is straightforward. Sure, there could have been a managed wrapper for C++ AMP, but honestly that is the reason we have interop - it doesn't make much sense to invest resources to solve a problem that is already solved (most developer customers would prefer investments in other areas of Visual Studio!). Besides, interoping from C# to C++ is much easier than interoping to some of the other older approaches of GPGPU programming ;-)

    To help you get started with the interop approach, Igor Ostrovsky has previously shared the "Hello World" version of interoping from C# to C++ AMP in his blog post: How to use C++ AMP from C#

    ...we then were asked specifically about how to interop from C# to C++ AMP in a Metro style application on Windows 8, so Igor delivered again with this post: How to use C++ AMP from C# using WinRT

    Have fun! Comments about this post by Daniel Moth welcome at the original blog.
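    As a sketch of what the managed side of that interop typically looks like (the DLL and export names here are made up - the real walk-through is in Igor's posts):

    using System;
    using System.Runtime.InteropServices;

    class Program
    {
        // "NativeAmp.dll" would be a native DLL whose square_array export is
        // implemented with C++ AMP; the array is marshaled as a float pointer.
        [DllImport("NativeAmp.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern void square_array(float[] data, int length);

        static void Main()
        {
            var data = new[] { 1.0f, 2.0f, 3.0f, 4.0f };
            square_array(data, data.Length);   // GPU work happens in the native DLL
            Console.WriteLine(string.Join(", ", data));
        }
    }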

    Read the article

  • Finding the right back-end developer

    - by John Watson
    I am creating a website for mobile phone tests. Users can post their own tests and combine them with the existing rating of each product. I do only front-end development and I have no idea about back-end - PHP, SQL, etc. I am not sure I should operate the website without this knowledge, but my first thought is to get a professional to whom I would give my website, so that he can do the rest. The only thing is that I need to update it regularly and post my own tests. I don't know how that works and how I should approach this. My understanding is that, after I have finished the website (written in HTML, CSS, JavaScript/jQuery), I would go and find a PHP programmer and ask him to put it online, make it secure, make sure that the open-source facility works (users post their own tests) and that it runs smoothly with the host/server I've chosen. Could you tell me if my approach makes sense (is that the way to do it)? What should I consider when searching for the right back-end developer, concerning price-performance, trust, etc.?

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The #OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops. Consequently, I am often asked whether OUM supports a workshop-type approach. In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and the potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation. Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks. If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop. For example, a "Requirements Gathering Workshop" is quite likely to include Gathering Business Requirements, Gathering Solution Requirements and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, but OUM also provides in-depth guidance on how to maximize the value of your workshops. I strongly encourage you to read the "Workshop Techniques Handbook" included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective. Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating it. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives, prioritizing, and other more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implementation focus areas. If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that it is necessary to provide a test bed where the client is able to get an image - or better said, a feeling - for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and the data at any time, and the client is able to view it whenever it suits her/him best.

    Limited or lack of online connectivity

    But what is going to happen when your client is not able to be online - no matter for what reasons? Here are some of the more obvious ones:

    - No internet connection (permanently or temporarily)
    - Expensive connection, i.e. mobile data package, stay at a hotel, etc.
    - Presentation devices at an exhibition, i.e. using tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, like in Third World countries
    - Providing a catalogue on CD or USB pen drive

    Anyway, it doesn't matter really. We should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure, we can do that. Eventually, you might think: why don't they simply use 3G-enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but of how to deliver a solution for it.

    Possible solutions... or not?

    We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device on which the offline site should be available. Here, it is clearly expressed that we have to address this on an Apple iPad - well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done:

    - Replication of source files and database: The above mentioned web site is running on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    - Saving pages for offline usage: This is actually a quite tedious job, but still practicable for small web sites.
    - Tool-based approach to 'harvest' the web site: There are quite some tools in the wild that could handle this job, namely wget, httrack, web copier, etc.
    - Screenshots bundled as PDF document: Not really... ;-)
    - Creating a screencast or video: Simply navigate through your website and record your desktop session.
    Actually, we are using this kind of approach to track down difficult problems, in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original website is dynamic and data-driven, built on ASP.NET (a screenshot appears in the original article). As we have to put the result onto iPads, we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site, you might have to check which of the applications produces the best results for you. My usual choice is to use wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence we opted for HTTrack. HTTrack comes in different flavours: as a console application, but also as either a GUI (WinHTTrack on Windows) or a Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack:

    HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online.

    And there is extensive documentation for all options and switches online. The general recommendation is to go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available.

    A very handsome feature of HTTrack is the error/warning log after completing the download. It contains detailed information about errors that appeared on the pages and in the links within the pages that have been processed:

    Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
    Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
    Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily only images were missing. Being directly on the web server, we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks.
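    For reference, a basic HTTrack console invocation for a mirror like the one described above looks roughly like this (the output path and the filter pattern are illustrative; see the Users Guide for the full switch list):

    httrack "http://www.resortwork.co.uk/" -O ./offline-mirror "+www.resortwork.co.uk/*" -v

    Here -O sets the local output directory, the '+' pattern keeps the crawl on the site itself, and -v prints progress; the fine-tuning mentioned above is mostly a matter of adding further '+' and '-' filters for external sites.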
    From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interaction like search, pagination and partly for navigation. This is the main field of improvement for the offline experience. Of course, same as for standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a look at the source code, we also found out that HTTrack actually modifies the generated HTML output. On several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason - even nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During our intensive root cause analysis of a couple of HTML/CSS problems that needed some extra attention, it is very helpful to be familiar with an editor that allows search and replace over multiple files, like sed - the stream editor for filtering and transforming text - on Linux, or my personal favourite, Notepad++, on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that was addressed to ASP.NET files instead of their generated HTML counterparts, like so (note the -i switch, so that sed edits the files in place):

    grep -lr -e '\.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target, and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too: just use the 'Replace in files' feature and you are settled, especially in combination with Regular Expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, and searching and replacing some strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Besides IE, for reasons already mentioned above, the results were identical. And last but not least, we had to check the web site on tablets. Please continue reading in the following articles:

    - Taking web sites offline for demonstration on Galaxy Tablet
    - Taking web sites offline for demonstration on iPad

    Read the article
