Search Results

Search found 10438 results on 418 pages for 'power architecture'.


  • Can I run my MacBook in clamshell mode without being connected to power?

    - by kch
    Hi, At home I run my MacBook in clamshell mode (closed lid, external display). This works fine when you're connected to the power adapter, but it doesn't work when running on battery. That's how it's supposed to be, and Apple has a kb entry on the issue. But it's also lame. You can prevent the machine from sleeping when closed by running InsomniaX, but then it'll assume the built-in display is still active, so you end up with a two-display setup when you really only want the external one. This is obviously less than ideal. So, is there any workaround, hack, utility, or black magic that I can use to make it run in clamshell mode while strictly on battery power? Also, bonus points for a solution that makes the AC status not affect the machine state at all. (Like, you know, it does normally, when not in clamshell.)

    Read the article

  • How do I make a partition usable in windows 7 after power loss?

    - by user1306322
    A few days ago I was installing some software and the power went down. When I rebooted, the partition to which the software was being installed was not accessible. Disk Manager shows that it's there, but doesn't show its type or whether it's healthy, and gives me an error when trying to read its properties. The problem seems to be common after power loss; people recommend solving it by assigning a letter to the partition via the DiskPart utility, but the partition isn't listed in my case. I can access the partition with bootable OSs (like bootable Ubuntu or WinXP) and all the files are there, but another installation of Windows 7 gives me the same results as the original. I could just copy all the data to another disk if there was enough space, but unfortunately the partition I'm having problems with is 1.1 TB. How do I regain access to the partition in my original Windows 7 installation without losing any data?
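    For reference, the letter-assignment approach people usually recommend looks like the DiskPart session below (the volume number is illustrative; as noted above, it only helps when the volume actually appears in the list):

        diskpart
        DISKPART> list volume
        DISKPART> select volume 3
        DISKPART> assign letter=E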

    Read the article

  • Computer only booting after POWER ON/OFF 10 times or more?

    - by Jan Gressmann
    Hi fellow geeks, recently my computer started to behave like an old car and won't start up anymore unless I flip the power switch repeatedly. What happens when I power it on:
    - The CPU fan spins briefly and very slowly, then stops
    - Same with the GPU fan
    - No BIOS beeps or HDD activity
    - The screen stays black
    After turning it on and off like 10 times, it'll eventually boot normally and run smoothly without any problems whatsoever. But I'm worried it might eventually die completely. Does anyone know the most common cause of this? Maybe I should just leave the computer powered on? :)

    Read the article

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be: 2^6, 3^4, and 4^3. My first attempt to solve this was to brute force all combinations of A and X that make "sense". However, this becomes too slow when used for very large numbers in a big range, since these solutions are used as part of much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered whether using logarithms would be better, or whether there is a closed-form expression for this. I am open to any optimizations or alternatives for finding the solutions.
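    The logarithm idea can remove the inner scan entirely: for each base, the qualifying exponents form a contiguous range that can be computed directly from logs rather than stepped through one by one. A minimal sketch in C# (the idea ports straight back to the Python above; the exact integer check at the end guards against floating-point error):

        using System;

        class MonomialSearch
        {
            // Prints every a^x with a > 1, x > 2 and min <= a^x <= max.
            static void FindMonomials(long min, long max)
            {
                // The largest base worth trying satisfies a^3 <= max.
                long maxBase = (long)Math.Ceiling(Math.Pow(max, 1.0 / 3.0)) + 1;
                for (long a = 2; a <= maxBase; a++)
                {
                    // Candidate exponent range computed from logarithms.
                    int xLo = Math.Max(3, (int)Math.Floor(Math.Log(min) / Math.Log(a)));
                    int xHi = (int)Math.Ceiling(Math.Log(max) / Math.Log(a));
                    for (int x = xLo; x <= xHi; x++)
                    {
                        long value = IntPow(a, x);
                        if (value >= min && value <= max)   // exact check guards float error
                            Console.WriteLine("Found {0}^{1} = {2}", a, x, value);
                    }
                }
            }

            static long IntPow(long b, int e)
            {
                long r = 1;
                for (int i = 0; i < e; i++) r *= b;
                return r;
            }

            static void Main() { FindMonomials(50, 100); }
        }

    For the 50..100 example this prints 2^6 = 64, 3^4 = 81 and 4^3 = 64, matching the brute-force version while doing only a handful of exponentiations per base.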

    Read the article

  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of Programmed I/O
    - Describe the principles of Interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices
    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into three categories:
    - Human readable, e.g. video display
    - Machine readable, e.g. magnetic disk
    - Communications, e.g. wifi card

    I/O Modules
    An I/O module has two major functions:
    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links

    Module Functions
    The major functions or requirements for an I/O module fall into the following categories:
    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection
    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following:
    - Command decoding
    - Data
    - Status reporting
    - Address recognition
    The I/O module must also be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure
    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:
    - Programmed I/O
    - Interrupt-driven I/O
    - Direct memory access (DMA)

    Programmed I/O
    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module performs the requested action and then sets the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

    I/O Commands
    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command. There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
    - Control: used to activate a peripheral and tell it what to do
    - Test: used to test various status conditions associated with an I/O module and its peripherals
    - Read: causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write: causes the I/O module to take an item of data from the data bus and subsequently transmit it to the peripheral
    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions
    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute those instructions. Typically there will be many I/O devices connected through I/O modules to the system. Each device is given a unique identifier or address; when the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible:
    - Memory-mapped I/O
    - Isolated I/O
    (For a detailed explanation, read page 245 of the textbook.) The advantage of memory-mapped I/O over isolated I/O is the large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O
    Interrupt-driven I/O works as follows:
    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
    - The processor executes the data transfer and then resumes its former processing

    Interrupt Processing
    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it saves the information needed to resume the current program at the point of interrupt; the minimum information required is the status of the processor and the location of the next instruction to be executed
    - The processor loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    - The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values are restored from the stack

    Design Issues
    Two design issues arise in implementing interrupt I/O:
    - Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    - If multiple interrupts have occurred, how does the processor decide which one to process?
    For device recognition, four general categories of techniques are in common use:
    - Multiple interrupt lines
    - Software poll
    - Daisy chain
    - Bus arbitration
    (For a detailed explanation of these approaches, read page 250 of the textbook.) Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
    - The I/O transfer rate is limited by the speed with which the processor can test and service a device
    - The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

    Direct Memory Access
    When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function
    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor; it needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending it the following information:
    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register
    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, so the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors

    Characteristics of I/O Channels
    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:
    - A selector channel controls multiple high-speed devices
    - A multiplexor channel can handle I/O with multiple devices at the same time

    The External Interface: FireWire and InfiniBand

    Types of Interfaces
    One major characteristic of the interface is whether it is serial or parallel:
    - Parallel interface: there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    - Serial interface: there is only one line used to transmit data, and bits must be transmitted one at a time
    With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:
    - The I/O module sends a control signal requesting permission to send data
    - The peripheral acknowledges the request
    - The I/O module transfers data
    - The peripheral acknowledges receipt of the data
    For a detailed explanation of FireWire and InfiniBand technology, read pages 264 – 270 of the textbook.
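    As a concrete footnote to the notes above, here is a small illustrative sketch (in C#, with invented names; real hardware uses device registers, not objects) of the command block the processor hands to a DMA module, with the three I/O techniques contrasted in the comments:

        // Hypothetical sketch contrasting the three I/O techniques:
        //
        // Programmed I/O:      issue command, then busy-wait on the status register,
        //                      then move each word yourself (CPU busy throughout).
        // Interrupt-driven:    issue command, do other work, move the data inside an
        //                      interrupt handler each time the device is ready.
        // DMA:                 fill in a command block like the one below, do other
        //                      work, and take a single interrupt when the whole
        //                      block has been transferred.

        struct DmaCommand
        {
            public bool IsRead;          // read or write request (control line)
            public int DeviceAddress;    // which I/O device is involved (data lines)
            public long StartAddress;    // starting memory location (address register)
            public int WordCount;        // words to transfer (data count register)
        }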

    Read the article

  • Are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture?

    - by snowflake
    I consider a self-described / auto-descriptive service a good thing in a SOA architecture, since (almost) everything you need to know to call the service is present in the service contract (such as a WSDL). An example of a non-self-described service, for me, is the Facebook Query Language (FQL, http://wiki.developers.facebook.com/index.php/FQL), or any web service exchanging an XML flow in a single String parameter and then parsing the XML and processing it. The latter seem more technically decoupled, since technically you can switch implementations without technical impact on the caller, handling compatibility between implementations/versions at a business level. On the other side, having no strong interface (it is diluted into the service and its versions) makes the service tightly coupled to the existing implementation (more difficulty interchanging the service and ensuring perfect compatibility). This question is related to http://stackoverflow.com/questions/2503071/how-to-implement-loose-coupling-with-a-soa-architecture So, are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture? What are the impacts regarding ESBs? Any pointer will be appreciated.

    Read the article

  • Is N-Tier Architecture only the physical separation of code or is there something more to it?

    - by Starx
    Is n-tier architecture only the physical separation of code, or is there something more to it? What sort of coding do we put in the Presentation Layer, Application Layer, Business Logic Layer, User Interface Logic, Data Access Layer, and Data Access Objects? Can all the layers mentioned above give a fully functional n-tier architecture? For example: whenever a user clicks a button to load some content via AJAX, we write code to fetch a particular HTML output and then update the element. So does this JavaScript code also lie on a different tier? Because if n-tier architecture is really about physical separation of code, then I think it's better to separate the JavaScript code as well.

    Read the article

  • MVVM: How to handle interaction between nested ViewModels?

    - by Dan Bryant
    I've been experimenting with the oft-mentioned MVVM pattern and I've been having a hard time defining clear boundaries in some cases. In my application, I have a dialog that allows me to create a Connection to a Controller. There is a ViewModel class for the dialog, which is simple enough. However, the dialog also hosts an additional control (chosen by a ContentTemplateSelector), which varies depending on the particular type of Controller that's being connected. This control has its own ViewModel. The issue I'm encountering is that, when I close the dialog by pressing OK, I need to actually create the requested connection, which requires information captured in the inner Controller-specific ViewModel class. It's tempting to simply have all of the Controller-specific ViewModel classes implement a common interface that constructs the connection, but should the inner ViewModel really be in charge of this construction? My general question is: are there any generally accepted design patterns for how ViewModels should interact with each other, particularly when a 'parent' VM needs help from a 'child' VM in order to know what to do?
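    The common-interface idea mentioned in the question might look like this minimal sketch (all names invented for illustration; whether construction belongs in the child ViewModel at all is exactly the open design question):

        // Hypothetical types, invented for illustration.
        public interface IConnection { }

        // Contract implemented by each Controller-specific child ViewModel.
        public interface IConnectionBuilder
        {
            IConnection CreateConnection();   // built from the child VM's captured settings
        }

        public class ConnectionDialogViewModel
        {
            // Set to whichever child VM the ContentTemplateSelector chose.
            public IConnectionBuilder ControllerViewModel { get; set; }

            public void OnOkPressed()
            {
                // The dialog depends only on the contract, not on any concrete child VM.
                IConnection connection = ControllerViewModel.CreateConnection();
                // ... hand the connection back to whoever opened the dialog
            }
        }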

    Read the article

  • Difference between "machine hardware" and "hardware platform"

    - by Adil
    My Linux machine reports "uname -a" output as below:

        [root@tom i386]# uname -a
        Linux tom 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:34:33 EDT 2009 i686 i686 i386 GNU/Linux

    As per the man page of uname, the entries "i686 i686 i386" denote:
    - machine hardware name (i686)
    - processor type (i686)
    - hardware platform (i386)

    Additional info:

        [root@tom i386]# cat /proc/cpuinfo
        <snip>
        vendor_id   : GenuineIntel
        cpu family  : 6
        model       : 15
        model name  : Intel(R) Xeon(R) CPU 5148 @ 2.33GHz
        stepping    : 6
        cpu MHz     : 2328.038
        cache size  : 4096 KB
        </snip>

    Read the article

  • ServiceLocator not initialized in Tests project

    - by Carl Bussema
    When attempting to write a test related to my new Tasks (MVC3, S#arp 2.0), I get this error when I try to run the test:

        MyProject.Tests.MyProject.Tasks.CategoryTasksTests.CanConfirmDeleteReadiness:
        SetUp : System.NullReferenceException : ServiceLocator has not been initialized;
        I was trying to retrieve SharpArch.NHibernate.ISessionFactoryKeyProvider
        ----> System.NullReferenceException : Object reference not set to an instance of an object.
        at SharpArch.Domain.SafeServiceLocator`1.GetService()
        at SharpArch.NHibernate.SessionFactoryKeyHelper.GetKeyFrom(Object anObject)
        at SharpArch.NHibernate.NHibernateRepositoryWithTypedId`2.get_Session()
        at SharpArch.NHibernate.NHibernateRepositoryWithTypedId`2.Save(T entity)
        at MyProject.Tests.MyProject.Tasks.CategoryTasksTests.Setup() in C:\code\MyProject\Solutions\MyProject.Tests\MyProject.Tasks\CategoryTasksTests.cs:line 36
        --NullReferenceException
        at Microsoft.Practices.ServiceLocation.ServiceLocator.get_Current()
        at SharpArch.Domain.SafeServiceLocator`1.GetService()

    Other tests which do not involve the new class (e.g., generate/confirm database mappings) run correctly. My ServiceLocatorInitializer is as follows:

        public class ServiceLocatorInitializer
        {
            public static void Init()
            {
                IWindsorContainer container = new WindsorContainer();

                container.Register(
                    Component
                        .For(typeof(DefaultSessionFactoryKeyProvider))
                        .ImplementedBy(typeof(DefaultSessionFactoryKeyProvider))
                        .Named("sessionFactoryKeyProvider"));

                container.Register(
                    Component
                        .For(typeof(IEntityDuplicateChecker))
                        .ImplementedBy(typeof(EntityDuplicateChecker))
                        .Named("entityDuplicateChecker"));

                ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
            }
        }
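    One likely missing piece, sketched here as an assumption rather than a confirmed fix: nothing shown above calls ServiceLocatorInitializer.Init() before the repository is used, and the exception says exactly that the locator was never initialized. In NUnit (which S#arp templates use) the call would go in the fixture's SetUp:

        using NUnit.Framework;

        [TestFixture]
        public class CategoryTasksTests
        {
            [SetUp]
            public void Setup()
            {
                // Assumption: SafeServiceLocator throws the NullReferenceException
                // above precisely because Init() was never called in this test run.
                ServiceLocatorInitializer.Init();
                // ... existing NHibernate/test setup continues here
            }
        }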

    Read the article

  • Reporting a WCF application's status to F5's Big IP products

    - by ng5000
    In a Windows Server 2003 environment with a self-hosted .NET 3.5/WCF application, how can the application report its status to a BigIP Local Traffic Manager? Example: one of my services errors out. My custom WCF application hosting software (written because Windows Server 2008 is not yet available and I'm using WCF TCP bindings) detects this and wants to report the application as down until it can recover the errant service. It needs to report itself as down to the BigIP LTM so that it is no longer sent client-originated requests.
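    One common pattern (an assumption about the setup, not something stated in the question): point a BigIP HTTP health monitor at a status endpoint the application hosts itself, and have the application answer with a failure status while it is recovering; the LTM then pulls the node from the pool. A minimal self-hosted sketch using HttpListener, with an illustrative port and path:

        using System;
        using System.Net;
        using System.Text;

        class HealthEndpoint
        {
            static volatile bool healthy = true;   // flipped by the hosting software

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/health/");  // polled by the LTM monitor
                listener.Start();
                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    // The BigIP monitor can match on the status code or the body text.
                    ctx.Response.StatusCode = healthy ? 200 : 503;
                    byte[] body = Encoding.UTF8.GetBytes(healthy ? "UP" : "DOWN");
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }
        }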

    Read the article

  • Component based game engine design

    - by a_m0d
    I have been looking at game engine design (specifically focused on 2D game engines, but also applicable to 3D games), and am interested in some information on how to go about it. I have heard that many engines are moving to a component-based design nowadays rather than the traditional deep object hierarchy. Do you know of any good links with information on how these sorts of designs are often implemented? I have seen "Evolve Your Hierarchy", but I can't really find many more with detailed information (most of them just seem to say "use components rather than a hierarchy", but I have found that it takes a bit of effort to switch my thinking between the two models). Any good links or information on this would be appreciated, and even books, although links and detailed answers here would be preferred.
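    For concreteness, the component approach usually boils down to something like this minimal sketch (names are illustrative, not from any particular engine): instead of GameObject -> MovingObject -> Ship inheritance chains, an entity is just a bag of components.

        using System.Collections.Generic;

        // One behaviour slice: position, sprite, physics, AI, etc. each become one of these.
        abstract class Component
        {
            public Entity Owner;
            public abstract void Update(float dt);
        }

        // An entity is a flat collection of components, not a node in a deep hierarchy.
        class Entity
        {
            private readonly List<Component> components = new List<Component>();

            public void Add(Component c) { c.Owner = this; components.Add(c); }

            // Lets one component find a sibling, e.g. physics looking up position.
            public T Get<T>() where T : Component
            {
                foreach (Component c in components)
                    if (c is T) return (T)c;
                return null;
            }

            public void Update(float dt)
            {
                foreach (Component c in components) c.Update(dt);
            }
        }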

    Read the article

  • Modularizing web applications

    - by Matt
    Hey all, I was wondering how big companies tend to modularize components on their pages. Facebook is a good example:
    - There's a team working on Search that has its own CSS, JavaScript, HTML, etc.
    - There's a team working on the news feed that has its own CSS, JavaScript, HTML, etc.
    - ... and the list goes on
    They cannot all be aware of what everyone else is naming their div tags and whatnot, so what is the controller(?) doing to hook all these components into the final page? Note: this doesn't just apply to Facebook; any company that has separate teams working on separate components has some logic that helps them out. EDIT: Thanks all for the responses. Unfortunately I still haven't really found what I'm looking for: when you check out the source code (granted, it's minified), the divs have UIDs. My guess is that there is a compilation process that runs through and makes each of the components unique, renaming divs and CSS rules. Any ideas? Thanks all! Matt Mueller
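    The guess about a compilation pass is plausible; a build step can namespace each component's selectors and markup so teams cannot collide. A purely speculative toy sketch of the idea (not Facebook's actual pipeline; it assumes one class per attribute and ignores many CSS edge cases):

        using System.Text.RegularExpressions;

        static class ComponentNamespacer
        {
            // Prefixes every CSS class selector with the component's name,
            // e.g. ".item" in component "newsfeed" becomes ".newsfeed_item".
            public static string NamespaceCss(string css, string component)
            {
                return Regex.Replace(css, @"\.([A-Za-z][\w-]*)",
                    m => "." + component + "_" + m.Groups[1].Value);
            }

            // Applies the same renaming to class attributes in the component's HTML.
            public static string NamespaceHtml(string html, string component)
            {
                return Regex.Replace(html, "class=\"([^\"]+)\"",
                    m => "class=\"" + component + "_" + m.Groups[1].Value + "\"");
            }
        }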

    Read the article

  • ASP.NET MVC 2 matches correct area route but generates URL to the first registered area instead.

    - by Sandor Drieënhuizen
    I'm working on a S#arpArchitecture 1.5 project, which uses ASP.NET MVC 2. I've been trying to get areas to work properly but I ran into a problem: the ASP.NET MVC 2 routing engine matches the correct route for my area but then generates a URL that belongs to the first registered area instead. Here's my request URL:

        /Framework/Authentication/LogOn?ReturnUrl=%2fDefault.aspx

    I'm using the Route Tester from Phil Haack and it shows:

        Matched Route: Framework/{controller}/{action}/{id}
        Generated URL: /Data/Authentication/LogOn?ReturnUrl=%2FDefault.aspx
        using the route "Data/{controller}/{action}/{id}"

    That's clearly wrong; the URL should point to the Framework area, not the Data area. This is how I register my routes, nothing special there IMO:

        private static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            AreaRegistration.RegisterAllAreas();

            routes.MapRoute(
                "default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }

    The area registration classes all look like this. Again, nothing special:

        public class FrameworkAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Framework"; }
            }

            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "Framework_default",
                    "Framework/{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }
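    One hedged observation, an educated guess rather than a confirmed fix: in MVC 2, URL generation selects among area routes using the "area" route value, and a request that originates outside any area (like the redirect from /Default.aspx) supplies none, so the first registered area's route can win. Making the area explicit wherever the URL is generated sidesteps that:

        <%-- Hypothetical view code: pass the area explicitly so URL generation
             cannot fall back to the first registered area's route. --%>
        <%= Html.ActionLink("Log on", "LogOn", "Authentication",
                            new { area = "Framework" }, null) %>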

    Read the article

  • The Importance of a Security Assessment - by Michael Terra, Oracle

    - by Darin Pendergraft
    Today's blog was written by Michael Terra, who was the Subject Matter Expert for the recently announced Oracle Online Security Assessment. You can take the online assessment here: Take the Online Assessment

    Over the past decade, IT Security has become a recognized and respected business discipline. Several factors have contributed to IT Security becoming a core business and organizational enabler, including, but not limited to, increased external threats and increased regulatory pressure. Security is also viewed as a key enabler for strategic corporate activities such as mergers and acquisitions.

    Now, the challenge for senior security professionals is to develop an ongoing dialogue within their organizations about the importance of information security and how it can impact their organization's strategic objectives/mission. Conducting regular "Security Assessments" across the IT and physical infrastructure has become increasingly important. Security standards and frameworks, such as the international standard ISO 27001, are increasingly being adopted by organizations and their business partners as proof of their security posture, and "Security Assessments" are a great way to ensure continued alignment to these frameworks.

    Oracle offers a number of different security assessments covering a broad range of technologies. Some of these are short engagements conducted for free with our strategic customers and partners. Others are longer-term paid engagements delivered by Oracle Consulting Services or one of our partners. The goal of a security assessment (also known as a security audit or security review) is to ensure that necessary security controls are integrated into the design and implementation of a project, application or technology. A properly completed security assessment should provide documentation outlining any security gaps that exist in an infrastructure and the associated risks for those gaps. With that knowledge, an organization can choose to either mitigate, transfer, avoid or accept the risk.

    One example of an Oracle offering is a Security Readiness Assessment: a practical security architecture review focused on aligning an organization's enterprise security architecture to their business principles and strategic objectives. The service will establish a multi-phase security architecture roadmap focused on supporting new and existing business initiatives.

    Offering Overview
    The Security Readiness Assessment will:
    - Define an organization's current security posture and provide a roadmap to a desired future-state architecture by mapping security solutions to business goals
    - Incorporate commonly accepted security architecture concepts to streamline an organization's security vision from strategy to implementation
    - Define the people, process and technology implications of the desired future-state architecture
    The objective is to deliver cohesive, best-practice security architectures spanning multiple domains that are unique and specific to the context of your organization.

    Offering Details
    The Oracle Security Readiness Assessment is a multi-stage process with a dedicated Oracle Security team supporting your organization. During the course of this free engagement, the team will focus on the following:
    - Review your current business operating model and supporting IT security structures and processes
    - Partner with your organization to establish a future-state security architecture leveraging Oracle's reference architectures, capability maps, and best practices
    - Provide guidance and recommendations on governance practices for the rollout and adoption of your future-state security architecture
    - Create an initial business case for the adoption of the future-state security architecture

    If you are interested in finding out more, ask your Sales Consultant or Account Manager for details.

    Read the article

  • [Principles] Concrete Type or Interface for method return type?

    - by SDReyes
    In general terms, what's the better choice for a method's return type: a concrete type or an interface? In most cases, I tend to use concrete types as the return type for methods, because I believe that a concrete type is more flexible for further use and exposes more functionality. The dark side of this: coupling. The angelic one: a concrete type contains, per se, the interface you were going to return initially, plus extra functionality. What's your rule of thumb? Is there any programming principle for this? BONUS: This is an example of what I mean: http://stackoverflow.com/questions/491375/readonlycollection-or-ienumerable-for-exposing-member-collections
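    The trade-off in miniature (an illustrative sketch; the names are invented):

        using System.Collections.Generic;

        class Customer { }

        class CustomerRepository
        {
            private readonly List<Customer> customers = new List<Customer>();

            // Interface return type: callers are decoupled from the concrete
            // collection, so the implementation can change without breaking them.
            public IEnumerable<Customer> GetCustomersAsSequence()
            {
                return customers;
            }

            // Concrete return type: callers gain indexing, Count, Add, etc.,
            // but are now coupled to List<T> forever.
            public List<Customer> GetCustomersAsList()
            {
                return customers;
            }
        }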

    Read the article

  • .NET opening and closing database connections

    - by Dan
    I designed the data access portion of our framework so that every time a business object needs to interact with the database, it has to open a connection, invoke the data access layer (to execute the query), and then close the connection. If it needs to run in a transaction, it opens the connection, begins the transaction, invokes the data access layer (to execute the query), then commits the transaction, closes the transaction, and finally closes the connection. I did it this way with the mindset of "open late and close early"... but what if I need to call other BOs to submit data in a single transaction? Is there a better way to handle opening and closing connections as well as working with transactions? I'm a rookie at architecting applications, so I hope I'm not doing this wrong... any help is appreciated.
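    One option that fits the "multiple BOs in a single transaction" case (a sketch, not necessarily the best fit for this particular framework) is System.Transactions.TransactionScope: each business object keeps opening and closing its own connection, and every connection opened inside the scope enlists in one ambient transaction. The saveCustomer/saveOrder delegates below are stand-ins for real business-object calls:

        using System;
        using System.Transactions;

        static class TransactionExample
        {
            // Each delegate stands in for a call into a different business object,
            // each of which opens and closes its own connection internally.
            static void SaveAll(Action saveCustomer, Action saveOrder)
            {
                using (var scope = new TransactionScope())
                {
                    saveCustomer();      // connection enlists in the ambient transaction
                    saveOrder();         // different connection, same transaction
                    scope.Complete();    // commit; omitting this causes rollback
                }
            }
        }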

    Read the article

  • Client-Server Networking Between PHP Client and Java Server

    - by Muhammad Yasir
    Hi there, I have a university project which is already 99% complete. It consists of two parts: a website (PHP) and a desktop application (Java). People have their accounts on the website and they wish to query different information regarding their accounts. They send an SMS which is received by the desktop application, which queries the website's database (MySQL) and sends the reply accordingly. This part is working superbly. The problem is that sometimes the website wishes to instruct the desktop application to send a specific SMS to a particular number. Apparently there seems to be no way other than putting all the load on the DB server... This is how I made it work: the website puts SMS jobs in a specific table, and the Java application polls this table again and again; if it finds a job, it executes it. Even this part is working correctly, but unfortunately my university does not accept polling the DB like this. :( The other approach I could think of is a client-server one. I tried making a Java server and a PHP client, so that whenever an SMS is to be sent, the website opens a socket connection to the desktop application and sends two strings (cell # and SMS message). Unfortunately I am unable to do this. I was able to make a Java server which works fine when connected to by a Java client; similarly, my PHP client connects correctly to a PHP server. But when I try to cross them, they start hating each other... PHP shows no error, but Java throws a StreamCorruptedException when it tries to read the header of the input stream. Could someone please tell me what I can try to make the PHP client and Java server work together? Or if the said purpose can be achieved by other means, how? Regards, Yasir

    Read the article

  • Thrift, .NET, Cassandra - Is this the right combination?

    - by Vadi
    I've been evaluating a technology stack for developing a social-network-based application. Below is the stack I think could be well suited for this type of application:
    GUI -- ASP.NET MVC, Flash (Flex).
    Business Services -- Thrift-based services. One of the advantages of using Thrift is to solve the scaling problems that will come in the future when the user base increases rapidly. All the business logic can be exposed as services using REST, JSON, etc. This also allows us to go with C++ or Erlang based services when the situation demands.
    Database -- MySQL, Cassandra. MySQL can be used for storing the data which needs to be persisted. Cassandra will be used for storing global identifiers to the persisted data. Since Cassandra is also very good at scaling by introducing more nodes, this will leverage the Thrift-based services as well. And there is also native support between Cassandra and Thrift.
    Cache Server -- Memcached. Any requests from the Business Services will only talk to Memcached if non-dirty data is required; otherwise there will be some background jobs that invalidate the cache from the database.
    The questions are: Is Thrift, the open-sourced one, production-ready? Is it the right stack for the services layer to choose when the application (GUI) is primarily developed in ASP.NET and the DB is MySQL? Are there any other caveats that anyone here has experienced? One of the main objectives behind this stack is to easily scale up with more nodes; it also lets us use Linux boxes, which will reduce our cost significantly. Thoughts please ..

    Read the article

  • Structuring projects & dependencies of large winforms applications in C#

    - by Benjol
    UPDATE: This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list. I have yet to try that though. How do you structure a very large application? Multiple smallish projects/assemblies in one big solution? A few big projects? One solution per project? And how do you manage dependencies in the case where you don't have one solution. Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself). I'm currently working on an application which has upward of 80 dlls, each in its own solution. Managing the dependencies is almost a full time job. There is a custom in-house 'source control' with added functionality for copying dependency dlls all over the place. Seems like a sub-optimum solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear. (Context: winforms, not web) EDIT: (If you think this is a different question, leave me a comment) It seems to me that there are interdependencies between: Project/Solution structure for an application Folder/File structure Branch structure for source control (if you use branching) But I have great difficulty separating these out to consider them individually, if that is even possible. I have asked another related question here.

    Read the article

  • Why is IoC / DI not common in Python?

    - by tux21b
    In Java, IoC / DI is a very common practice which is extensively used in web applications, nearly all available frameworks, and Java EE. On the other hand, there are also lots of big Python web applications, but besides Zope (which I've heard should be really horrible to code) IoC doesn't seem to be very common in the Python world. (Please name some examples if you think that I'm wrong.) There are of course several clones of popular Java IoC frameworks available for Python, springpython for example. But none of them seems to get used in practice. At least, I've never stumbled upon a Django or sqlalchemy+<insert your favorite wsgi toolkit here> based web application which uses something like that. In my opinion IoC has reasonable advantages and would make it easy to replace the django-default-user-model, for example, but extensive usage of interface classes and IoC in Python looks a bit odd and not »pythonic«. But maybe someone has a better explanation why IoC isn't widely used in Python.

    Read the article

  • realtime logging

    - by Ion Todirel
    I have an application which has a loop, part of a "Scheduler", which runs at all times and is the heart of the application. Pretty much like a game loop, just that my application is a WPF application and it's not a game. Naturally the application does logging at many points, but the Scheduler does some sensitive monitoring, and sometimes it's impossible to tell from the logs alone what may have gone wrong (and by wrong I don't mean exceptions) or what the current status is. Because the Scheduler's inner loop runs at short intervals, you can't do file I/O-based logging (or use the Event Viewer) in there. First, you need to watch it in real time, and secondly the log file would grow in size very fast. So I was thinking of ways to show this data to the user in real time; some things I considered:
    1. Display the data in real time in the UI
    2. Use AllocConsole/WriteConsole to display this information in a console
    3. Use a different console application which would display this information, communicating between the Scheduler and the console app using pipes or other IPC techniques
    4. Use Windows' Performance Monitor and somehow feed it with this information
    5. ETW
    Displaying in the UI (1) would have its issues. First, it doesn't integrate with the UI I had in mind for my application, and I don't want to complicate the UI just for this; these diagnostics would only be needed rarely. Secondly, there is going to be some non-trivial data protection, as the Scheduler has its own thread. A separate console window would probably work, but I'm still worried it might be too heavyweight. Allocating my own console (2), as this is a Windows app, would probably be better than a different console application (3), as I don't need to worry about IPC and non-blocking communication. However, a user could close the console I allocated, which would be problematic; with a separate process you don't have to worry about that. Assuming there is an API for Performance Monitor (4), it wouldn't be integrated too well with my app or apparent to the users. Using ETW (5) also doesn't solve anything by itself; I still need to display the information somehow. What do others think? Are there other ways I missed?
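    For option 2, the P/Invoke involved is small; a minimal sketch (error handling omitted; note Console must not have been touched before AllocConsole runs, or its cached handles point nowhere):

        using System;
        using System.Runtime.InteropServices;

        static class DebugConsole
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool AllocConsole();

            public static void Open()
            {
                // Attaches a console window to this GUI process; after this,
                // Console.WriteLine output goes to the new console.
                AllocConsole();
                Console.WriteLine("Scheduler diagnostics console ready");
            }
        }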

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that in a well-structured application you have a presentation layer, a business layer and a data layer. This is what I heard for more than 5 years. When I started working I discovered that this is true, but sometimes it's better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in a layered application. According to that article you should have:
    - UI Layer and Presentation Layer (Model View Pattern)
    - Service Layer (WCF)
    - Business Layer
    - Data Access Layer
    The Service Layer is, to me, one of the best ideas I've ever heard since I started working, since your UI is then completely "disconnected" from the Business and Data Layers. Now, when I went deeper into the provided source code, I began to have some questions. Can you help me answer them?
    Question #0: is this a good enterprise application template in your opinion?
    Question #1: where should I host the service layer? Should it be a Windows Service or something else?
    Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
    Question #3: if you agree with me on Question 2, which kind of binding would you use?
    Looking forward to hearing from you. Have a nice weekend! Marco
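    On Questions 2 and 3, one commonly cited option for .NET-to-.NET intranet calls (an assumption about the deployment, not a recommendation from the article) is netTcpBinding, which uses binary encoding instead of text SOAP. The two can coexist on one host; a self-contained sketch with illustrative addresses and a stub contract:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        interface IMyService
        {
            [OperationContract]
            string Ping();
        }

        class MyService : IMyService
        {
            public string Ping() { return "pong"; }
        }

        class Hosting
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(MyService),
                    new Uri("http://localhost:8000/MyService"));

                // Interoperable endpoint for external callers.
                host.AddServiceEndpoint(typeof(IMyService), new WSHttpBinding(), "");

                // Faster binary endpoint for .NET callers inside the network.
                host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(),
                    "net.tcp://localhost:8001/MyService");

                host.Open();
                Console.WriteLine("Host running; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }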

    Read the article

  • Database per application VS One big database for all applications

    - by Jorge Vargas
    Hello, I'm designing a few applications that will share 2 or 3 database tables; all of the other tables will be independent of each app. The shared tables contain mostly user information, and there might occur the case where other tables need to be shared, but that's my instinct speaking. I'm leaning toward the one-database-for-all-applications solution because I want referential integrity, and I won't have to keep the same information up to date in each of the databases, but I'll probably end up with a database of 100+ tables where only groups of ten tables will have related information. The database-per-application approach helps me keep everything more organized, but I don't know a way to keep the related tables in all the databases up to date. So, the basic question is: which of the two approaches do you recommend? Thanks, Jorge Vargas.

    Read the article

  • Logging strategy vs. performance

    - by vtortola
    Hi, I'm developing a web application that has to support lots of simultaneous requests, and I'd like to keep it fast enough. I now have to implement a logging strategy; I'm going to use log4net, but... what and how should I log? I mean:
    - How does logging impact performance?
    - Is it possible/recommendable to log using async calls?
    - Is it better to use a text file or a database?
    - Is it possible to make it conditional? For example, log to the database by default, and if that fails, switch to a text file.
    - What about multithreading? Should I care about synchronization when I use log4net, or is it thread-safe out of the box?
    In the requirements it says that the application should cache a couple of things per request, and I'm afraid of the performance impact of that. Cheers.
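    On the performance question, the standard log4net advice is to guard expensive log statements with the level-check properties, so the message is never even built when that level is disabled in configuration; a small sketch:

        using log4net;

        class OrderProcessor
        {
            private static readonly ILog log = LogManager.GetLogger(typeof(OrderProcessor));

            public void Process(int orderId)
            {
                // The guard avoids the string concatenation cost entirely
                // when Debug is turned off in the log4net configuration.
                if (log.IsDebugEnabled)
                    log.Debug("Processing order " + orderId);
            }
        }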

    Read the article
