Search Results

Search found 9975 results on 399 pages for 'enterprise architecture'.

Page 92/399

  • When modern computers boot, what initial setup of RAM do they execute, and how does it exactly work?

    - by user272840
    I know the title reeks of confusion, and some of you might assume I am just wondering about how the computer boots in general, but I'm not. Let me sort this out for you now:
    1. Onboard firmware is how almost all modern computer devices work, with or without EFI/UEFI (even without "onboard firmware", older computers still employed bank switching or similar methods with snap-in firmware, cartridges, etc.).
    2. On startup there are no "programs" running in the traditional sense yet, i.e. no kernel, OS, or user applications; every instruction, including the very first one, is located through the Instruction Pointer, I am guessing. How is the IP/PC/etc. set to point at the address of a BIOS/firmware instruction in the first place, and how do the BIOS instructions map themselves into memory prior to startup?
    3. Aside from MMIO, the BIOS uses certain RAM addresses to hold instructions. The big question is: how does the BIOS do this?
    Conclusion: I am assuming that with the very first instruction there is an initial hardware setup for the BIOS prior to complete OS boot-up. What I want to know is whether it is hardware-engineered to always work this way, whether there is a step in this boot method or a gap of information I am missing, and how this all works from the very first instruction onward, including the RAM data itself.

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a Firewall/Router server in front of a Web Application server and a Database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean: We're now wondering about the specs of our two new machines (the Web App and Firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our Firewall/Router server as we're pretty sure it won't be taxed too heavily, but we are interested in our Web App server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory: Quad-Core Intel Xeon X3323 2.5GHz (2x3M cache), 1333MHz FSB, 16GB DDR2 667MHz. But when I was looking for a cheap second-hand machine for our Firewall/Router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the Web App server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs: 2x Dual-Core AMD Opteron 2218 2.6GHz (2MB cache), 1000MHz HT, 16GB DDR2 667MHz. Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • Difference between “system-on-chip” and “CPU”

    - by Tim
    Very confused. Some websites have this line: iPhone 5s CPU: Apple A7. Other websites say: iPhone 5s System-on-chip: Apple A7, CPU: 1.3 GHz 64-bit dual core. Other sources say: iPhone 5s System-on-chip: Apple A7, CPU: 1.3 GHz 64-bit dual-core Apple A7. Wikipedia says: "The Apple A7 is a 64-bit system on a chip (SoC) designed by Apple Inc. It first appeared in the iPhone 5S, which was introduced on September 10, 2013. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A6. While not the first 64-bit ARM CPU, it is the first to ship in a consumer smartphone or tablet computer." There are two sentences here: "The Apple A7 is a 64-bit system on a chip (SoC)" and "While not the first 64-bit ARM CPU". Wikipedia also says "The A7 features an Apple-designed 64-bit 1.3–1.4 GHz ARMv8-A dual-core CPU, called Cyclone". So is a system on a chip also a CPU? Very confused.

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:
    - User opens page to view the web camera image.
    - A JavaScript script polls a URL on the server (appended with a unique timestamp) every 1000ms.
    - An FTP connection is enabled for the camera's FTP user.
    - The web camera opens an FTP connection to the server, begins taking photos, and sends each photo to the FTP server.
    - On an image URL request: the server reads the latest image uploaded via FTP for that camera from the hard drive, and deletes any older images from the server.
    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that, instead of having the files read from the local disk, the web servers would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues POST requests of the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:
    - User opens page to view the web camera image.
    - A JavaScript script polls a URL on the server (appended with a unique timestamp) every 1000ms.
    - The camera is sent an HTTP request to start posting images to a provided URL.
    - The web camera begins taking photos and sends POST requests to the server as fast as it can.
    - On an image URL request: the server reads the latest image from Redis and tells Redis to delete the older image.
    My questions are:
    - Are there any greater overheads to transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent potentially DOS'ing our own servers due to web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix cause posting the image to become too slow?
    Thanks in advance, Alan
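    The question's stack is PHP/Nginx, but the "latest image per camera in Redis" idea is easy to sketch. Below is a minimal illustration in Java using the Jedis client; the key layout (camera:<id>:latest) and the 60-second expiry are assumptions made for the example, not details from the question.

```java
import redis.clients.jedis.Jedis;

// Minimal sketch: store and serve the latest frame per camera in Redis.
// Key naming (camera:<id>:latest) and the Jedis client are assumptions,
// not part of the original question, which uses a PHP/Nginx stack.
public class LatestFrameStore {
    private final Jedis jedis;

    public LatestFrameStore(String host, int port) {
        this.jedis = new Jedis(host, port);
    }

    private byte[] key(String cameraId) {
        return ("camera:" + cameraId + ":latest").getBytes();
    }

    // Called when the camera POSTs a new frame: overwrite the previous one,
    // so there is nothing left over to clean up.
    public void storeFrame(String cameraId, byte[] jpegBytes) {
        jedis.set(key(cameraId), jpegBytes);
        jedis.expire(key(cameraId), 60);   // drop stale frames if a camera goes offline
    }

    // Called on each browser poll: read the latest frame straight from memory.
    public byte[] latestFrame(String cameraId) {
        return jedis.get(key(cameraId));
    }
}
```

    Because each upload overwrites the previous value under the same key, there is nothing to delete explicitly and memory stays bounded at roughly one frame per camera.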

    Read the article

  • Install correct libraries depending on 64/32 bit

    - by Rich
    I am using Bash to install a customised version of JBoss, and one of the things I would like to do is install the correct version of the Apache Portable Runtime (APR), which is a native binary. This script could be run on both 32-bit and 64-bit versions of RHEL. What are my options for identifying which version of the APR to install? I think we only have 32-bit and x64-based systems here, but I would still like to identify ia64 systems so that the script can refuse to install on that type of machine. I am aware of using uname -m or grepping /proc/cpuinfo to find out, but I was wondering which approach others would recommend.

    Read the article

  • Equivalent to CDN but for dynamic content?

    - by ajsie
    So I know that for serving static elements (CSS, JS, images, videos, etc.) you should use a CDN, since its nodes are spread out throughout the world. But how could I spread out my Apache servers? Is there an equivalent to a CDN but for dynamic pages, or is it the traditional LAMP way? If so, I guess my best option is to find an international hosting provider that hosts in different countries, so the content will be served from the country nearest the client machine. Any suggestions for such hosting providers? Or is it best practice to contact different hosting providers in different countries that are unrelated to each other? What is the right way to go?

    Read the article

  • Unable to find class 'com.sun.facelets.FaceletViewHandler'

    - by Rafal
    Hi All, I have Richfaces application which I deploy to Glassfish v3. For many weeks (almost) everything works fine, but suddenly today a got following error. I have jsf-facelets-1.1.14.jar dependency in my pom.xml. I have no idea how to fix that. Help!! Source Document: jndi:/server/swmind.rcp.web/WEB-INF/faces-config.xml Cause: Unable to find class 'com.sun.facelets.FaceletViewHandler' at com.sun.faces.config.processor.AbstractConfigProcessor.createInstance(AbstractConfigProcessor.java:275) at com.sun.faces.config.processor.ApplicationConfigProcessor.setViewHandler(ApplicationConfigProcessor.java:527) at com.sun.faces.config.processor.ApplicationConfigProcessor.processViewHandlers(ApplicationConfigProcessor.java:847) at com.sun.faces.config.processor.ApplicationConfigProcessor.process(ApplicationConfigProcessor.java:331) at com.sun.faces.config.processor.AbstractConfigProcessor.invokeNext(AbstractConfigProcessor.java:114) at com.sun.faces.config.processor.LifecycleConfigProcessor.process(LifecycleConfigProcessor.java:116) at com.sun.faces.config.processor.AbstractConfigProcessor.invokeNext(AbstractConfigProcessor.java:114) at com.sun.faces.config.processor.FactoryConfigProcessor.process(FactoryConfigProcessor.java:223) at com.sun.faces.config.ConfigManager.initialize(ConfigManager.java:335) at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:223) at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4591) at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:535) at org.apache.catalina.core.StandardContext.start(StandardContext.java:5193) at com.sun.enterprise.web.WebModule.start(WebModule.java:499) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:928) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:912) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:694) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1933) at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1605) at com.sun.enterprise.web.WebApplication.start(WebApplication.java:90) at org.glassfish.internal.data.EngineRef.start(EngineRef.java:126) at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:241) at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:236) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:339) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at 
com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.ClassNotFoundException: com.sun.facelets.FaceletViewHandler at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at org.glassfish.web.loader.WebappClassLoader.findClass(WebappClassLoader.java:949) at org.glassfish.web.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1420) at com.sun.faces.util.Util.loadClass(Util.java:203) at com.sun.faces.config.processor.AbstractConfigProcessor.loadClass(AbstractConfigProcessor.java:313) at com.sun.faces.config.processor.AbstractConfigProcessor.createInstance(AbstractConfigProcessor.java:240) ... 50 more

    Read the article

  • Cloned Cached item has memory issues

    - by ioWint
    Hi there, we have values stored in the cache of the Enterprise Library Caching Block. The accessors of the cached item, which is a List, modify the values, and we didn't want the cached items to be affected. Hence we first returned a new List(IEnumerator of the CachedItem). This made sure accessors adding and removing items had little effect on the original cached item. But we found that all the instances of the List we returned to accessors were ALIVE! An object relational graph showed a relationship between this list and the EnterpriseLibrary.CacheItem. So we changed the return value to be a newly cloned List. For this we used a LINQ query such as (from item in Data select new DataClass(item)).ToList(). Even when you do as above, the ORG shows there is a relationship between this list and the CacheItem. Can't we do anything to create a CLONE of the List item which is present in the Enterprise Library cache, which DOESN'T have ANY relationship with the CACHE?!
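    The question is about .NET and the Enterprise Library, but the underlying idea is language-agnostic: return a deep copy whose element objects are duplicated, so the copy's object graph no longer points back at the cache entry. A minimal concept sketch, written here in Java with a hypothetical DataClass copy constructor:

```java
import java.util.ArrayList;
import java.util.List;

// Concept sketch only (the question concerns .NET / Enterprise Library):
// a deep copy shares no element references with the cached list, so the
// object graph of the copy no longer points back at the cache entry.
// DataClass and its copy constructor are hypothetical stand-ins.
class DataClass {
    private final String value;

    DataClass(String value) { this.value = value; }

    // Copy constructor: duplicate the element instead of referencing it.
    DataClass(DataClass other) { this.value = other.value; }
}

final class CacheView {
    static List<DataClass> deepCopy(List<DataClass> cached) {
        List<DataClass> copy = new ArrayList<>(cached.size());
        for (DataClass item : cached) {
            copy.add(new DataClass(item));   // new element objects
        }
        return copy;                          // new list, new elements
    }
}
```

    If a memory profiler still shows a path from the copy back to the cache item, it usually means some field inside the elements (a parent reference or a shared sub-object) was copied by reference rather than duplicated.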

    Read the article

  • Regarding Creating Business application using Silverlight

    - by vijay
    Hi, we are in the conceptual phase of creating a relatively medium-sized enterprise business product application using Silverlight 4.0, Entity Framework and WCF. 1. Is it advisable to use Silverlight 4.0 for this enterprise business application development, or should we go for ASP.NET MVC / ASP.NET? 2. We have planned to use a REST-based WCF service. How complex would it be to write the information back to the REST WCF service? I appreciate and welcome your advice/suggestions. If you need any further details do let me know, I will be happy to share. Thanks in advance.

    Read the article

  • Does Anyone Still Prefer N-Tier Architecture After Having *Shipped* an MVC Application?

    - by Jim G.
    Other SO threads have asked people if they prefer N-Tier or MVC architecture. I'm not looking to continue that debate on this thread. I'm looking for something more specific. My Question: Does Anyone Still Prefer N-Tier Architecture After Having Shipped an MVC Application? Reason for My Question: Before I shipped an MVC web application, I wasn't convinced that it was superior to N-Tier Architecture. Specifically, if better unit testing was the only obvious benefit of MVC, then I saw no reason to switch gears and adopt a new architecture. But after having shipped an MVC application, I can see many benefits (which have been enumerated on other threads).

    Read the article

  • Enterprise IPv6 Migration - End of proxypac? Start of Point-to-Point? +10K users

    - by Yohann
    Let's start with a diagram: we can see a "typical" IPv4 company network with:
    - Internet access through a proxy
    - Access to "other companies" through a dedicated proxy
    - Direct access to local resources
    All computers have a proxy.pac file that indicates which proxy to use or whether to connect directly. Computers have access only to a local DNS (no name resolution for google.com, for example). By the way, the company does not respect RFC1918 internally and uses public addresses (historical reasons)! The explicit use of an internet proxy is what makes this workable. What if we were to migrate to IPv6?
    Step 1: IPv6 internet access. Internet access over IPv6 is easy: just connect the proxy to the Internet in both IPv4 and IPv6. There is nothing to do in the internal network.
    Step 2: IPv6 AND IPv4 in the internal network. And why not a fully IPv6 network directly? Because there are always old servers that are not IPv6-compatible.
    Option 1: The same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution. But is it the best? I think the transition to IPv6 is an opportunity to stop bothering with this proxy.pac!
    Option 2: A new architecture with transparent proxies, without a proxy.pac, and with a recursive DNS. Oh yes! In this new architecture we have:
    - The explicit Internet proxy becomes a transparent Internet proxy
    - The local DNS becomes a normal recursive DNS, authoritative for local domains
    - No proxy.pac
    - The explicit company proxy becomes a transparent company proxy
    - Routing: internal routers redirect the IP of appx.ext.example.com to the company proxy; the default gateway is the transparent Internet proxy.
    Questions:
    - What do you think of this IPv6 architecture?
    - This architecture will reveal the IP addresses of our internal network, but it is protected by firewalls. Is this a real problem?
    - Should we keep the explicit use of a proxy?
    - How would you plan this migration scenario?
    - And you, how do you do it in your company?
    Thanks! Feel free to edit my post to make it better.

    Read the article

  • My Upcoming Talk at South Florida's ITPalooza 2012 - NuGet for Open Source and Enterprise Environments

    - by Sam Abraham
    I am very excited to be speaking at IT Palooza next week. As this event’s audience will span professionals working in different facets of Information Technology, I chose to speak on NuGet, an essential tool for any Microsoft Stack developer, as the topic can be of value to managers, architects, IT personnel, as well as developers. For more information on ITPalooza, please visit: http://itpalooza.e2mktg.com/ To register please visit: http://www.fladotnet.com/Reg.aspx?EventID=627   Below are the abstract and speaker bio: Leveraging NuGet for Open Source and Enterprise Environments NuGet is an open source package management system for .NET and Visual Studio that makes it easy to add, update, or remove external libraries in a .Net Project. In this session, we will be covering how NuGet makes open source libraries easily discoverable and usable. We will then move to demonstrate "NuGet for the Enterprise" as we setup a local library repository and configure NuGet to ensure external library versioning is consistent among project developers. Speakers: Sam Abraham is a Microsoft Certified Professional, Microsoft Certified Technology Specialist (MCTS ASP.Net 3.5, 4.0 and Silverlight 4) and Certified ScrumMaster (CSM) striving to leverage proven technology solutions to produce cost-effective, quality software that meets customer needs, timelines and budgets. He is currently a member of the Software Engineering Team at SISCO, the leader in maritime security solutions with customers including Princess, Carnival, and Royal Caribbean Cruise Lines as well as the US Coast Guard. A strong believer in learning through sharing and the value of community fellowship, Sam has been actively involved in the local community as leader of the West Palm Beach Developers' Group, volunteer board member at the International Association for All IT Architects South Florida Chapter (IASA), and former volunteer at the South Florida Chapter of the Project Management Institute (PMI).

    Read the article

  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes
    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization
    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM; RAM is divided into two technologies, dynamic and static.
    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.
    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.
    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM, and is thus more dense and less expensive. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.
    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs, we have PROMs, which are:
    - Written only once
    - Non-volatile
    - Written after fabrication
    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely:
    - EPROM
    - EEPROM
    - Flash memory
    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories:
    - Hard failure: a permanent physical defect such that the memory cell or cells cannot reliably store data
    - Soft error: a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)
    Most modern main memory systems include logic for both detecting and correcting errors. Error detection works as follows:
    - When data is to be written into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors
    The error checking provides one of three possible results:
    - No errors are detected: the fetched data bits are sent out
    - An error is detected and it is possible to correct it: the data bits plus error-correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected but it is not possible to correct it: this condition is reported
    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code; refer to the textbook (pg. 188-189). (A small illustrative sketch follows at the end of these notes.)
    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including:
    - SDRAM (Synchronous DRAM)
    - DDR-DRAM
    - RDRAM
    SDRAM (Synchronous DRAM)
    - SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states
    - SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access
    - In burst mode a series of data bits can be clocked out rapidly after the first bit has been accessed
    - SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism
    - SDRAM performs best when it is transferring large blocks of data serially
    - There is now an enhanced version of SDRAM, known as double data rate SDRAM or DDR-SDRAM, that overcomes the once-per-cycle limitation of SDRAM
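    The notes mention needing to work a Hamming code example by hand. As a companion, here is a minimal Hamming(7,4) encode/decode sketch in Java, not taken from the textbook, illustrating the flow described under Error Correction: store check bits with the data, recompute them on read, and correct a single flipped bit from the syndrome.

```java
// Minimal Hamming(7,4) sketch (single-error-correcting), illustrating the
// "store a code with the data, recompute on read, correct if possible" flow.
// Illustrative only; the bit layout p1 p2 d1 p3 d2 d3 d4 is the standard one.
public class Hamming74 {

    // Encode 4 data bits (d[0]..d[3]) into a 7-bit codeword,
    // laid out as positions 1..7 = p1 p2 d1 p3 d2 d3 d4.
    static int[] encode(int[] d) {
        int p1 = d[0] ^ d[1] ^ d[3];
        int p2 = d[0] ^ d[2] ^ d[3];
        int p3 = d[1] ^ d[2] ^ d[3];
        return new int[] { p1, p2, d[0], p3, d[1], d[2], d[3] };
    }

    // Recompute the check bits on read; a non-zero syndrome is the
    // 1-based position of the single flipped bit, which is corrected.
    static int[] decode(int[] c) {
        int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
        int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
        int s4 = c[3] ^ c[4] ^ c[5] ^ c[6];
        int syndrome = s4 * 4 + s2 * 2 + s1;
        if (syndrome != 0) {
            c[syndrome - 1] ^= 1;                      // fix the single-bit error
        }
        return new int[] { c[2], c[4], c[5], c[6] };    // the 4 data bits
    }

    public static void main(String[] args) {
        int[] data = { 1, 0, 1, 1 };
        int[] word = encode(data);
        word[4] ^= 1;                                   // simulate a soft error
        int[] recovered = decode(word);
        System.out.println(java.util.Arrays.toString(recovered)); // [1, 0, 1, 1]
    }
}
```

    Running it flips one bit of the stored codeword and still prints the original data bits, i.e. the single-bit soft error is detected and corrected.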

    Read the article

  • Building Enterprise Smartphone App – Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.
    Application Development Considerations
    Now we get to the actual building of your solutions. What are the skills and resources that will be needed in order to develop a smartphone application in the enterprise?
    Language Knowledge
    One of the first things you need to consider when deciding on a platform is which language you either have the largest in-house skill base for or can most easily acquire. If you already have developers who know Java or C#, you may want to use Android or Windows Phone. You should also take into consideration the market availability of developers: if your key developer leaves, how easy is it to find a knowledgeable replacement? A second consideration when it comes to programming languages is the qualities exposed by the languages of a particular platform. How well do that development language and its associated frameworks support things like security and access to the features of the smartphone hardware? This will play into your overall cost of ownership if you have to create this infrastructure on your own.
    Manage Limited Resources
    Everything is limited on a smartphone: battery, memory, processing power, network bandwidth. When developing your applications you will have to keep your footprint as small as possible in every way. This means not running unnecessary processes in the background that will drain the battery, and not pulling more data over the airwaves than you have to. You also want to keep your on-device data in as compact a format as possible.
    Mobile Design Patterns
    There are a number of design patterns that have either come to life because of smartphone development or have been adapted for this use. The main pattern in the Windows Phone environment is MVVM (Model-View-ViewModel). This is great for overall application structure and separation of concerns; the fun part is trying to keep that separation as pure as possible. Many of the other patterns may or may not have strict definitions, but some that you need to be concerned with are push notification, asynchronous communication and offline data storage. Real estate is limited on smartphones and even tablets. You are also limited in the type of controls that can be represented in the UI. This means rethinking how you modularize your application. Typing is also much harder to do, so you want to reduce it as much as possible. This leads to UI patterns. While not what we would traditionally think of as design patterns, the guidance each platform has for UI design is critical to the success of your application. If users find the application difficult to navigate they will not use it.
    Development Process
    Because of the differences in development tools required, test devices, and certification and deployment processes, your teams will need to learn new ways of working together. This will include the need to integrate service contracts of back-end systems with mobile applications. You will also want to make sure that you present consistency across different access points to corporate data. Your web site may have more functionality than your smartphone application, but it should have a consistent core set of functionality. This all requires greater communication between sub-teams of your developers.
    Testing Process
    Testing of smartphone apps has a lot more to do with what happens when you lose connectivity or when the user navigates away from your application. There are a lot more opportunities for the user or the device to perform disruptive acts. This should be your main testing concentration aside from the main business requirements. You will need to do things like setting the phone to airplane mode and seeing what the application does, in order to weed out any gaps in how you handle communication interruptions.
    Need For Outside Experts
    Since this is a development area that is new to most companies, the need for experts is a lot greater, whether these are consultants, vendor representatives or just development community forums. You will need to establish expert contacts. Nothing is more dangerous for your project timelines than a lack of knowledge. Make sure you know who to call to avoid lengthy delays in your project because of knowledge gaps.
    Security
    Security has to be a major concern for enterprise applications. You aren't dealing with just someone's game standings; you are dealing with a company's intellectual property and competitive advantage. As such you need to start by limiting access to the application itself. Once the user is in the app you need to ensure that the data is secure at all times. This includes both local storage and across the wire, which means that if a platform doesn't natively support encryption for these functions you will need to find alternatives to secure your data. You also need to keep secret (encryption) keys obfuscated, or locked away outside of the application; otherwise people can disassemble the code and break your encryption.
    Offline Capabilities
    As we discussed earlier, one of your biggest concerns is not having connectivity. Because of this, a good portion of your code may be dedicated to handling loss of connection and reconnection situations. What do you do if you lose the network? Back up all your transactions and store any supporting data so that operations can continue offline. In order to support this you will need to determine the available flat-file or local database capabilities of the platform. Any failed transactions will need to support a retry mechanism, whether automatic or user initiated (a small retry-queue sketch follows this excerpt). This also includes your services, since they will need to be able to roll back partially completed transactions. Whatever you do, don't ignore this area when you are designing your system.
    Deployment
    Each platform has different deployment capabilities. Some are more suited to enterprise situations than others. Apple's approach is probably the most mature at the moment; prior to the current generation of smartphone platforms it would have been Windows CE. Windows Phone 7 has the limitation that the app has to be distributed through the same network as public-facing applications. You mark them as private, which means that they are only accessible by a direct URL. Unfortunately this does not make them undiscoverable (although it is very difficult). This will change with Windows Phone 8, where companies will be able to certify their own applications and distribute them. Given this, Windows Phone applications need to be more diligent with application access in order to keep them restricted to the company's employees. My understanding of the Android deployment schemes is that they are much less standardized than either iOS or Windows Phone. Someone would have to confirm or deny that for me though, since I have not yet put the time into researching this platform further. Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement to install and keep applications updated. At one extreme the user just goes to a website to do the install, and in other cases they may need to download files and perform steps to install them.
    Future
    Bluetooth: today we use Bluetooth for keyboards, mice and headsets. In the future it could be used to interrogate car computers, manufacturing systems or possibly retail machines by service techs. This would open smartphones to greater use, almost as a Star Trek tricorder: you would get all your data as well as being able to use it as a universal remote for just about any device or machine.
    Better corporation-controlled deployment: at least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is currently not available with Windows Phone 7 (Mango). We currently have to run the apps through the Marketplace certification process and use a targeted distribution method.
    Platform-independent approaches: HTML5 and JavaScript with web services have become a popular topic lately, not only for creating flexible web sites but also for creating cross-platform mobile applications. I'm not yet convinced that this lowest-common-denominator approach is viable in most cases, but it does have its place and seems to be growing. Be sure to keep an eye on it.
    Summary
    From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies. They are not cheap to build and should be approached cautiously. Understand the factors I have outlined in this series, do your due diligence and see if there is a portion of your business that can benefit from the mobile experience.
    del.icio.us Tags: Architecture, Smartphones, Windows Phone, iOS, Android
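    The retry mechanism mentioned under "Offline Capabilities" is easy to sketch. Below is a minimal, platform-neutral illustration in Java; the names OfflineQueue and Transaction are assumptions for the example, not from the original post. Failed sends are queued locally and re-attempted when connectivity returns or when the user triggers a sync.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of the offline pattern described above: failed transactions
// are queued locally and retried when connectivity returns. The type names
// here are illustrative assumptions, not from the original post.
public class OfflineQueue {

    public interface Transaction {
        boolean send() throws Exception;    // true if the server accepted it
    }

    private final Deque<Transaction> pending = new ArrayDeque<>();

    // Try to send immediately; if that fails, keep it for a later retry.
    public void submit(Transaction tx) {
        if (!trySend(tx)) {
            pending.addLast(tx);
        }
    }

    // Called when the network comes back, or from a user-initiated "sync".
    public void retryPending() {
        int n = pending.size();
        for (int i = 0; i < n; i++) {
            Transaction tx = pending.pollFirst();
            if (!trySend(tx)) {
                pending.addLast(tx);         // still failing, keep it queued
            }
        }
    }

    private boolean trySend(Transaction tx) {
        try {
            return tx.send();
        } catch (Exception e) {
            return false;                     // treat any failure as "offline"
        }
    }
}
```

    A real implementation would persist the queue to local storage (a flat file or local database, as the post suggests) so that pending transactions survive an application restart.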

    Read the article

  • Can a partition table be edited from a LiveUSB of another architecture?

    - by Eliran Malka
    My purpose is to re-partition a dual-boot machine (running Ubuntu 13.04 / Windows 7). The current table is as follows:
    -----------------------------------------------------------
    |          |       extended partition       |             |
    | windows  |--------------------------------|  recovery   |
    | (NTFS)   |   swap   |      filesystem     |  (NTFS)     |
    |          |  (swap)  |        (ext4)       |             |
    -----------------------------------------------------------
    I want to create an additional ext4 partition under the extended partition, and mount those two (the one I created and the 'filesystem' partition) as root and home (/ and /home), so that the new layout will be:
    -----------------------------------------------------------
    |          |       extended partition       |             |
    | windows  |--------------------------------|  recovery   |
    | (NTFS)   |   swap   |   root   |   home   |  (NTFS)     |
    |          |  (swap)  |  (ext4)  |  (ext4)  |             |
    -----------------------------------------------------------
    As the installations on the system and on my Live USB differ in architecture, I want to know: is it safe to use a 64-bit GParted from a Live USB for partitioning a 32-bit installation?

    Read the article

  • In Japan? Get the First Opportunity On The New Oracle Enterprise Manager 12c Partner Specialization Exam

    - by Get_Specialized!
    The Oracle Enterprise Manager 12c Essentials Implementation Exam is now available in beta for Oracle Partners. If you're a Partner in Japan, or a Partner attending Oracle OpenWorld 2012 in Tokyo April 4th-6th, take advantage of the opportunity to take the exam, at no charge, onsite as part of the Oracle PartnerNetwork TestFest: http://www.oracle.com/partners/campaign/events/oowtokyo-testfest-1527771-ja.html Just ask to take the exam with this number (1Z1-457). Don't happen to be in Japan next week? No problem: Partners in other parts of the world can apply for one of the limited free beta exam vouchers by sending a request to this email address: [email protected] In your email be sure to include your name, business email address, company name and the name of the exam - Oracle Enterprise Manager 12c Essentials Implementation Exam (1Z1-457).

    Read the article

  • Why Do Spreadsheets Not Work in an Enterprise Planning Environment?

    - by Mike.Hallett(at)Oracle-BI&EPM
    “Around 93% of managers gather or analyze information in spreadsheets and 54% spend more time gathering information than analyzing it....” Find answers in this whitepaper; some extracts below:
    “Traditional budgeting and planning is a straitjacketed and hierarchical exercise.... how many businesses have planning and reporting processes that are smart, agile and aligned? The networked economy challenges the fundamentals of business organization; for example, where does the front office stop and the back office start? Is it still meaningful to plan for customer, channel, or product profitability, or is transaction profitability the only measure that counts?
    “Although conceptually the idea of enterprise business planning is relatively straightforward, it has proven to be elusive, because of over-reliance on spreadsheet-bound processes, a lack of control over data quality/management, limited use of advanced planning tools and the cultural impediments that afflict many planning processes.
    “In the absence of specialist tools, businesses tend to opt for ‘broad brush’ assumptions in financial plans which merely approximate the more granular assumptions used in operational plans.
    “Most businesses are familiar with the relationship between risk and reward but, in assessing potential opportunities and developing business plans, rarely acknowledge risks and probability in a formal way.
    Get your customer to see how they do against the “Enterprise Business Planning Checklist”: get them to read the whitepaper.

    Read the article

  • ASP.NET MVC Portable Areas - Can they communicate and be used as a plugin-like architecture?

    - by Beton
    I'll get straight to the point: I was wondering whether there is a common pattern for using portable areas as components of a plugin-like architecture. Example: we've got 3 plugins (portable areas) packaged and distributed via a NuGet feed. Each of them follows the standard MVC structure (it has its own Models, Views and Controllers); let's say a login form, a header and a footer. What I was wondering is whether there is a way to make them communicate. For example: when a user logs on, the login plugin executes its own logic, logs the user in, and then updates the header plugin, which changes its state accordingly. Thanks in advance.

    Read the article

  • Arguments for using WCF/OData as an access layer instead of EF/L2S/NHibernate directly

    - by Carl Hörberg
    We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or NHibernate as the access layer and then throw ASP.NET MVC on top of it; for normal CRUD operations we query the ISession/DataContext directly, but for more advanced functions/side effects we put them in some kind of service layer. Now, I was thinking about publishing the data through OData (WCF Data Services) and querying that from the controllers (or even from jQuery, when a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose? Do I gain anything except higher complexity and latency? Better separation of concerns (or is that just an illusion)?

    Read the article

  • WCF Data Services implementation strategies.

    - by Nix
    Microsoft has done a savvy job of not outlining the actual place for Data Services in the wonderful world of SOA/web development. So my question is simple: are WCF Data Services designed to be used by clients only, or has anyone ever heard of someone using them on the server side? A simple scenario: a general layered architecture using BO business objects (parentheses indicate what is passed between layers): (XML) WCF Service - (BO) Business Logic - (BO) DAO - Entity Framework. Or, using Data Services, where DS BOs are modeled business entities to be used in the data service: (XML) WCF Service - (BO) Business Logic - (BO) WCF Data Service - (DS BO) Server. I can't see a use for the latter, unless there are going to be a lot of cases where people access your data via your Data Service layer as well as the Service layer. Thoughts, anyone? I have not seen any mention of using DS from within a Service layer....

    Read the article

  • Expose url to webservice

    - by Patrick Peters
    In our project we want to query a document management system for a specific document or movie. The DMS returns a URL with the document location (for example: http://mydomain.myserver1.share/mypdf.pdf or http://mydomain.myserver2.share/mymovie.avi). We want to expose the document to internet users and intranet users. The requested file can be large (large video files). Our architecture is like this:
    Request goes: webapp1 - webapp2 - webapp3 - dms
    Response goes: dms - webapp3 - webapp2 - webapp1
    webapp1 could be on the internet. I have been thinking about how we can obfuscate the real URL from the DMS, for security reasons. I have seen implementations in other webapps where the PDF URL was obfuscated by creating a temp file for the requested document that is specific to the session and user, so other users cannot easily guess the document names of other users. My question: is there a pattern that deals with exposing vulnerable company/user data to the public? Our development is in C# 3.5.
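    One pattern that fits the "session- and user-specific temp file" idea without writing files is to hand the browser an opaque, expiring token and resolve it server-side. The question's stack is C# 3.5, so the following Java sketch is purely illustrative; the class names, the 10-minute lifetime and a /documents/<token> route are assumptions, not details from the question.

```java
import java.security.SecureRandom;
import java.time.Duration;
import java.time.Instant;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Concept sketch of one common pattern (the project uses C#, so this Java
// version is illustrative only): hand the browser an opaque, expiring token
// instead of the real DMS URL, and resolve it server-side on request.
public class DocumentTokenMap {

    private record Entry(String realUrl, String userId, Instant expiresAt) {}

    private final Map<String, Entry> tokens = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Issue a token bound to the requesting user, valid for a short window.
    public String issueToken(String realDmsUrl, String userId) {
        byte[] bytes = new byte[24];
        random.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        tokens.put(token, new Entry(realDmsUrl, userId,
                Instant.now().plus(Duration.ofMinutes(10))));
        return token;   // expose e.g. /documents/<token> instead of the DMS URL
    }

    // Resolve the token back to the DMS URL; null means "do not serve it".
    public String resolve(String token, String userId) {
        Entry e = tokens.get(token);
        if (e == null || Instant.now().isAfter(e.expiresAt()) || !e.userId().equals(userId)) {
            return null;
        }
        return e.realUrl();
    }
}
```

    The public-facing app (webapp1) then streams the document bytes itself after resolving the token, so the internal DMS host names never reach the browser.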

    Read the article

  • JBoss Seam - Jetty - Virtualhosting

    - by Walter White
    Hi all, I am trying to cut back on the memory usage of my server and would like to optimize the architecture. I currently deploy 2 separate web applications to Jetty 6.1.22 that correspond to different virtual hosts. They have pretty much the same application stack, except that one has fewer components and is styled differently (content, images, CSS, etc.). If I change my design over to EJB / an EAR with 2 embedded WARs, will that lower the memory consumption? Will that give me a single instance of JBoss Seam, Quartz, and all of my components? They must each use a different datasource. Thanks, Walter

    Read the article

  • Java EE application layout

    - by arthurdent510
    I've been tasked with writing a new webapp in Java EE, which will be a first for me. I'm familiar with the Java language, but all of my previous projects have been in .NET. So I've downloaded NetBeans, I'll probably be using Hibernate, and I've started digging in. My question is: what's the best way to lay out a Java EE app? Should all data access go through NetBeans? I was told to use web services as much as possible so it's easy to pull data in and out of the system, and I've read through tutorials that create web services in a separate web application, as opposed to inside the Java EE enterprise application. Is that really the way it's supposed to work? Just trying to wrap my head around this as compared to what I'm used to in .NET. Thanks!
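    To make the "expose data through web services" layering concrete, here is a minimal sketch of a JAX-RS resource of the kind a Java EE container (GlassFish, for instance) can host inside the enterprise application itself. The Customer type, the in-memory store and the URL path are illustrative assumptions, not part of the original question; a real app would back this with JPA/Hibernate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// Minimal JAX-RS resource: GET /customers/1 returns a customer as JSON.
// The Customer type and the in-memory map are stand-ins for a real
// persistence layer (JPA/Hibernate), purely for illustration.
@Path("/customers")
public class CustomerResource {

    @XmlRootElement
    public static class Customer {
        public long id;
        public String name;

        public Customer() { }                       // needed for marshalling
        Customer(long id, String name) { this.id = id; this.name = name; }
    }

    private static final Map<Long, Customer> STORE = new ConcurrentHashMap<>();
    static {
        STORE.put(1L, new Customer(1L, "First customer"));
    }

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Customer find(@PathParam("id") long id) {
        return STORE.get(id);
    }
}
```

    Activating JAX-RS in a Java EE 6 application additionally needs an Application subclass annotated with @ApplicationPath (or the equivalent web.xml entry); the point of the sketch is simply that the service layer can live inside the same enterprise application rather than in a separate one.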

    Read the article
