Search Results

Search found 22716 results on 909 pages for 'network architecture'.


  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer which renders everything. I'm currently aiming for the second, for the following reasons:
    - The list can be sorted so that each shader is only bound once. Otherwise each object would have to bind the shader itself, because it can't be sure the shader is already active.
    - The objects can be sorted and grouped.
    - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
    - Easier to manage rendering code.
    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.
    First idea (sketched below): the renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight, though I could sort the list of pointers every so often.
    Second idea: the entire list of entities is passed to the renderer on each render call. The renderer then sorts the list (each call, or maybe once?) and takes what it wants. That's a lot of passing and/or sorting, however.
    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
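
    For illustration, a minimal C# sketch of the first idea (all names are invented): components self-register with a central renderer, which keeps its draw list sorted by shader so each shader is bound only once per frame.

        using System.Collections.Generic;

        // Sketch of the "first idea": components self-register with a central
        // renderer that keeps its draw list sorted by shader.
        class Renderer
        {
            readonly List<RenderComponent> _components = new List<RenderComponent>();
            bool _dirty;

            public void Register(RenderComponent c) { _components.Add(c); _dirty = true; }
            public void Unregister(RenderComponent c) { _components.Remove(c); }

            public void RenderFrame()
            {
                if (_dirty) // re-sort only when registrations changed, not every frame
                {
                    _components.Sort((a, b) => a.ShaderId.CompareTo(b.ShaderId));
                    _dirty = false;
                }

                int boundShader = -1;
                foreach (var c in _components)
                {
                    if (c.ShaderId != boundShader) // one bind per shader group
                    {
                        BindShader(c.ShaderId);
                        boundShader = c.ShaderId;
                    }
                    c.Draw();
                }
            }

            void BindShader(int shaderId) { /* API-specific GL/DX call */ }
        }

        class RenderComponent
        {
            public readonly int ShaderId;
            public RenderComponent(Renderer renderer, int shaderId)
            {
                ShaderId = shaderId;
                renderer.Register(this); // register on creation, per the first idea
            }
            public void Draw() { /* issue the draw call for this component */ }
        }

    Sorting lazily on a dirty flag is one answer to the "each call, or maybe once?" question: you pay for sorting only when the set of renderables changes.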

  • Help me to choose between two software architectures

    - by alex
    // stupid title, but I could not think of anything smarter
    I have this code (see below; sorry for the long listing, but it's very simple):

        namespace Option1
        {
            class AuxClass1
            {
                string _field1;
                public string Field1
                {
                    get { return _field1; }
                    set { _field1 = value; }
                }
                // more fields; maybe many fields, maybe several properties
                public void Method1() { /* some action */ }
                public void Method2() { /* some action 2 */ }
            }

            class MainClass
            {
                AuxClass1 _auxClass;
                public AuxClass1 AuxClass
                {
                    get { return _auxClass; }
                    set { _auxClass = value; }
                }
                public MainClass() { _auxClass = new AuxClass1(); }
            }
        }

        namespace Option2
        {
            class AuxClass1
            {
                string _field1;
                public string Field1
                {
                    get { return _field1; }
                    set { _field1 = value; }
                }
                // more fields; maybe many fields, maybe several properties
                public void Method1() { /* some action */ }
                public void Method2() { /* some action 2 */ }
            }

            class MainClass
            {
                AuxClass1 _auxClass;
                public string Field1
                {
                    get { return _auxClass.Field1; }
                    set { _auxClass.Field1 = value; }
                }
                public void Method1() { _auxClass.Method1(); }
                public void Method2() { _auxClass.Method2(); }
                public MainClass() { _auxClass = new AuxClass1(); }
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                // Option1: callers reach through MainClass to the aux object
                Option1.MainClass mainClass1 = new Option1.MainClass();
                mainClass1.AuxClass.Field1 = "string1";
                mainClass1.AuxClass.Method1();
                mainClass1.AuxClass.Method2();

                // Option2: MainClass forwards calls to the aux object
                Option2.MainClass mainClass2 = new Option2.MainClass();
                mainClass2.Field1 = "string2";
                mainClass2.Method1();
                mainClass2.Method2();

                Console.ReadKey();
            }
        }

    Which option (Option1 or Option2) do you prefer? In which cases should I use Option1 or Option2? Is there a special name for Option1 or Option2 (composition, aggregation)?

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem
    Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea
    The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine it could unfold into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed.
    In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time that propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed; a newspaper section or front page does not need to be updated each second - minutes, or sometimes even longer, is sufficient. Other fragments would be dynamic and uncached: typically too many articles are viewed for them all to be cached (the cache would overflow), though some individual articles may be cached if they are extremely popular.

    Functional notes
    The folding mechanism would have to be smart enough to judge when it is more profitable to fold a dynamic cached fragment and propagate its expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of their parent fragments, they could trigger a refolding process that would then include the updated data.
    A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging in the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast.
    The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The question
    My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative I should consider? A cool templating engine, maybe?
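
    For concreteness, a minimal C# sketch of the fragment idea, with invented names (Fragment, Fold): folding collapses a subtree into markup while the earliest expiry below propagates to the parent, so a cache knows when the folded text goes stale.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Sketch of the fragment tree described above.
        abstract class Fragment
        {
            public readonly List<Fragment> Children = new List<Fragment>();
            public DateTime Expiry = DateTime.MaxValue; // MaxValue means purely static

            public abstract string RenderSelf(string childMarkup);

            public string Fold()
            {
                string childMarkup = string.Concat(Children.Select(c => c.Fold()));
                foreach (var c in Children)
                    if (c.Expiry < Expiry) Expiry = c.Expiry; // propagate expiry upward
                return RenderSelf(childMarkup);
            }
        }

        class StaticFragment : Fragment // static scaffolding around its children
        {
            readonly string _open, _close;
            public StaticFragment(string open, string close) { _open = open; _close = close; }
            public override string RenderSelf(string childMarkup)
            {
                return _open + childMarkup + _close;
            }
        }

        class DynamicFragment : Fragment // pulls data from a model or other source
        {
            readonly Func<string> _pull;
            public DynamicFragment(Func<string> pull, TimeSpan ttl)
            {
                _pull = pull;
                Expiry = DateTime.UtcNow + ttl;
            }
            public override string RenderSelf(string childMarkup) { return _pull(); }
        }

    Folding the root would yield one cached string plus an Expiry saying when the earliest dynamic descendant needs refreshing; fragments that should stay uncached would be kept out of the fold and attached at resolve time instead.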

  • Architecture for a template-building, WYSIWYG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options.

    I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer is things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data.

    It seems, then, that the process will go something like this: I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a Handlebars template to the Ember app. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's made.

    When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data. Is my thinking roughly accurate about this, or is there a simpler way?
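
    For what it's worth, a small sketch of the two compile passes, assuming the Handlebars.Net and DotLiquid libraries on the back end. One caveat this surfaces: Liquid and Handlebars share the {{ }} delimiters, so the server-side expressions have to be escaped through the first pass (the backslash-escaped mustache below is an assumption worth verifying against your Handlebars build).

        using System;
        using DotLiquid;
        using HandlebarsDotNet;

        class TwoPassPublish
        {
            static void Main()
            {
                // One template, two kinds of dynamic segments:
                // {{headline}} is Handlebars (campaign data the user edited);
                // \{{ reader_name }} is an escaped mustache, so the Handlebars
                // pass emits it literally as {{ reader_name }} for the Liquid pass.
                const string source =
                    "<h1>{{headline}}</h1><p>Hello \\{{ reader_name }}!</p>";

                // Pass 1 (publish time): bake in the user's campaign data.
                var toLiquid = Handlebars.Compile(source);
                string liquidSource = toLiquid(new { headline = "Spring Sale" });
                // liquidSource == "<h1>Spring Sale</h1><p>Hello {{ reader_name }}!</p>"

                // Pass 2 (view time): resolve the server-side data per consumer.
                Template template = Template.Parse(liquidSource);
                string html = template.Render(
                    Hash.FromAnonymousObject(new { reader_name = "Ada" }));

                Console.WriteLine(html); // <h1>Spring Sale</h1><p>Hello Ada!</p>
            }
        }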

  • Bringing in New Architecture During Maintenance on Legacy Systems

    - by Mike L.
    I have been tasked with adding some new features to a legacy ASP.NET MVC2 project. The codebase is a disaster, and I want to write these new features with some thought behind the implementation rather than just throwing them into the mess. I would like to introduce things like dependency injection and the orchestrator pattern, but just to the code that I am going to write; I don't have enough time to refactor the entire system. Is it OK not to be consistent with the rest of the codebase and to add new features following different design principles? Should I skip the new patterns and just get the features implemented? I feel like it might be confusing to the next person to see parts of the system using a design that other parts are not following.

  • Suggestions needed on an architecture for a multi-client, customisable web application

    - by ValidfroM
    Our product is a web-based course management system (ASP.NET, SQL Server). We have 10+ clients, and in future we may get more. Currently, if one of our customers needs extra functionality or customised business logic, we change the db schema and code to meet the needs (we have only one branch of the code base and one database schema). So that one client's change won't affect the others, we use a client flag, defined in a web config file; the extra fields and business logic are only applied to that particular customer's system:

        if (clientId == "ABC")
        {
            // Do ABC-specific stuff
        }
        else
        {
            // Normal route
        }

    One of our senior colleagues said that this way a small company like us can save the resources of supporting multiple code bases. But what I feel is that this strategy makes our code and database ever harder to maintain. Has anyone here crossed a similar situation? How do you handle it?
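
    One common alternative to scattered client flags is to hide per-client behaviour behind an interface and resolve the right implementation once. A hedged C# sketch, with invented names (IEnrollmentPolicy and friends stand in for whatever piece of business logic actually varies):

        using System;
        using System.Collections.Generic;

        // Per-client business logic behind one interface, chosen once
        // instead of if (clientId == ...) checks scattered everywhere.
        interface IEnrollmentPolicy
        {
            void Enroll(string studentId, string courseId);
        }

        class DefaultEnrollmentPolicy : IEnrollmentPolicy
        {
            public void Enroll(string studentId, string courseId)
            {
                // normal route
            }
        }

        class AbcEnrollmentPolicy : IEnrollmentPolicy
        {
            public void Enroll(string studentId, string courseId)
            {
                // ABC-specific stuff, kept in one place
            }
        }

        static class PolicyFactory
        {
            static readonly Dictionary<string, Func<IEnrollmentPolicy>> Registry =
                new Dictionary<string, Func<IEnrollmentPolicy>>
                {
                    { "ABC", () => new AbcEnrollmentPolicy() },
                };

            public static IEnrollmentPolicy For(string clientId)
            {
                Func<IEnrollmentPolicy> make;
                return Registry.TryGetValue(clientId, out make)
                    ? make()
                    : new DefaultEnrollmentPolicy();
            }
        }

    Call sites would then do something like PolicyFactory.For(ConfigurationManager.AppSettings["ClientId"]).Enroll(...), so the client flag is read in one place and the branches live in named classes rather than in every method.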

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am thinking about the best way to organize my code. The project is basically going to be a general management system capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wiki, forum, project management system, etc., each of them being just a separate 'module'. You can read more about it in my blog post here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kind of sucks).

    For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage it. You then add on modules as you need them; the first module I will be creating is a simple blog, to test the approach before I move on to the big module, which is a project management system.

    Now I am trying to think of the best way to structure this so that it is easy for users to add in their own modules, but easy for me to update the core system without worrying about users modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When users want to add their own modules, they would just add a new project (or multiple projects). The thing is, I am not sure it is even possible to use multiple projects, each with its own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript and image files, which would be in a separate project.

    Is it possible to have one web application run off controllers, Razor view templates, CSS, JavaScript and image files that are stored in multiple projects? Is there a better way to structure this to allow users to easily add in modules without having to modify the core code?
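
    One starting point worth knowing about is ASP.NET MVC Areas: each module ships its own AreaRegistration, controllers and views, and the host application picks them up via AreaRegistration.RegisterAllAreas() in Application_Start. A sketch, with "Blog" as a hypothetical module name:

        using System.Web.Mvc;

        // Lives in the module's own project/folder; the host discovers it
        // at startup through AreaRegistration.RegisterAllAreas().
        public class BlogAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Blog"; }
            }

            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "Blog_default",
                    "Blog/{controller}/{action}/{id}",
                    new { controller = "Posts", action = "Index", id = UrlParameter.Optional });
            }
        }

    Caveat: getting Razor views, CSS and scripts served out of a separate class library typically needs extra tooling on top of this (RazorGenerator and embedded-resource virtual path providers are common options), so treat Areas as the routing/controller half of the answer rather than the whole of it.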

  • Modular Web App Network Architecture

    - by nairware
    Assuming that I am dealing with dedicated physical servers or VPSs, is it conceivable, and does it make sense, to have distinct servers set up with the following roles to host a web application?
    - Reverse proxy
    - Web server
    - Application server
    - Database server
    Specific points of interest: I am confused about how to even separate the web and application servers, though my understanding is that such 3-tier architectures are feasible. It is unclear to me whether the app server would sit directly between the web and database servers, or whether the web server could also interact with the database directly. The app server could either do the computational heavy lifting on behalf of the web server, or it could do the heavy lifting plus control all of the business logic (thus denying the web server direct database access). I am also unsure what role the reverse proxy (e.g. nginx) could and should fulfill as a web server, given the above setup. I know that nginx has web server features, but I do not know if it makes sense for the reverse proxy to be its own VPS, given that the web server, in theory, would be separate from the app server.

  • Architecture: am I doing things right?

    - by Jeremy D
    I'm trying to use a ~classic layered architecture with .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy:
    - Inconsistent naming
    - Unneeded views (views referencing other views, select * views, etc.)
    - Aggregated columns
    - Potatoes and carrots in the same table
    - etc.
    So I ended up fully isolating my database structure from my domain model: EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges, and I'm starting to ask myself if I'm doing things right.
    My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising, and the classes it contains are starting to accumulate a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager-loading relationship properties):

        DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends")
        // ...which has to be translated to...
        IFooContext.Bars.Include(...).Include(...).Where(...)

    Some frameworks also break the isolation (DevExpress grids need either XPO or an IQueryable for filtering and paging large data sets). I'm starting to ask myself:
    - Is the isolation of the EF auto-generated entities an unneeded cost? Should I allow frameworks to hit IQueryable? Slow slope to hell? (It's really hard to isolate the DevExpress framework - any successful experience?)
    - Is the high volatility of my domain model normal? Did you have similar difficulties? Any advice based on experience?
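
    For reference, a minimal sketch of the isolation being described, with invented names (BarEntity, CustomerEntity are the EF side; Bar, Customer the domain side): a dedicated mapper turns EF entities into domain objects after the query has materialized, so EF types never cross the layer boundary.

        using System.Collections.Generic;
        using System.Linq;

        class BarEntity { public int Id; public string Name; public List<CustomerEntity> Customers = new List<CustomerEntity>(); }
        class CustomerEntity { public int Id; public string Name; }

        class Bar { public int Id; public string Name; public List<Customer> Customers = new List<Customer>(); }
        class Customer { public int Id; public string Name; }

        static class BarMapper
        {
            // Runs in memory, after the EF query (with its Includes) has executed.
            public static Bar ToDomain(BarEntity e)
            {
                return new Bar
                {
                    Id = e.Id,
                    Name = e.Name,
                    Customers = e.Customers
                        .Select(c => new Customer { Id = c.Id, Name = c.Name })
                        .ToList()
                };
            }
        }

    The repository would run IFooContext.Bars.Include(...) as in the snippet above, materialize the results, and map them with BarMapper.ToDomain before anything reaches the presentation layer.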

  • C# class architecture for REST services

    - by user15370
    Hi. I am integrating with a set of REST services exposed by our partner. The unit of integration is at the project level, meaning that for each project created on our partner's side of the fence they will expose a unique set of REST services. To be more clear, assume there are two projects - project1 and project2. The REST services available to access the project data would then be:

        /project1/search/getstuff?etc...
        /project1/analysis/getstuff?etc...
        /project1/cluster/getstuff?etc...
        /project2/search/getstuff?etc...
        /project2/analysis/getstuff?etc...
        /project2/cluster/getstuff?etc...

    My task is to wrap these services in a C# class to be used by our app developer. I want to make it simple for the app developer and am thinking of providing something like the following class:

        class ProjectClient
        {
            SearchClient _searchClient;
            AnalysisClient _analysisClient;
            ClusterClient _clusterClient;

            public string Project { get; set; }

            public ProjectClient(string project)
            {
                Project = project;
            }
        }

    SearchClient, AnalysisClient and ClusterClient are my classes supporting the respective services shown above. The problem with this approach is that ProjectClient will need to provide a public method for each of the APIs exposed by SearchClient, etc.:

        public void SearchGetStuff()
        {
            _searchClient.GetStuff();
        }

    Any suggestions how I can architect this better?
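
    One option that avoids mirroring every method is to expose the per-service clients as read-only properties rooted at the project. A sketch (the constructor signatures and stub bodies are assumptions for illustration):

        using System;

        class SearchClient   { public SearchClient(Uri baseUri) { } public void GetStuff() { /* GET {base}search/getstuff */ } }
        class AnalysisClient { public AnalysisClient(Uri baseUri) { } public void GetStuff() { /* GET {base}analysis/getstuff */ } }
        class ClusterClient  { public ClusterClient(Uri baseUri) { } public void GetStuff() { /* GET {base}cluster/getstuff */ } }

        class ProjectClient
        {
            public string Project { get; private set; }
            public SearchClient Search { get; private set; }
            public AnalysisClient Analysis { get; private set; }
            public ClusterClient Cluster { get; private set; }

            public ProjectClient(Uri serviceRoot, string project)
            {
                Project = project;
                var baseUri = new Uri(serviceRoot, project + "/"); // e.g. /project1/
                Search = new SearchClient(baseUri);
                Analysis = new AnalysisClient(baseUri);
                Cluster = new ClusterClient(baseUri);
            }
        }

        // Usage:
        // var client = new ProjectClient(new Uri("https://partner.example/"), "project1");
        // client.Search.GetStuff();

    New partner endpoints then only touch the relevant sub-client; ProjectClient never needs another forwarding method.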

  • Software architecture for two similar classes which require different input parameters for the same method

    - by I Like to Code
    I am writing code to simulate a supply chain. The supply chain can be simulated in either an intermediate-stocking or a cross-docking configuration, so I wrote two simulator objects, IstockSimulator and XdockSimulator. Since the two objects share certain behaviors (e.g. making shipments, demand arriving), I wrote an abstract simulator object AbstractSimulator which is a parent class of the two simulator objects.
    The abstract simulator object has a method runSimulation() which takes an input parameter of class SimulationParameters. Up till now, the simulation parameters only contain fields that are common to both simulator objects, such as randomSeed, simulationStartPeriod and simulationEndPeriod. However, I now want to include fields that are specific to the type of simulation being run, i.e. an IstockSimulationParameters class for an intermediate-stocking simulation, and an XdockSimulationParameters class for a cross-docking simulation.
    My current idea is to take the method runSimulation() out of the AbstractSimulator class, and instead put a runSimulation(IstockSimulationParameters) method in the IstockSimulator class and a runSimulation(XdockSimulationParameters) method in the XdockSimulator class. I am worried, however, that this approach will lead to code duplication. What should I do?
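
    Generics are one way out. A C# sketch (the ReorderPoint/DockDoorCount fields are invented for illustration): the base class keeps the shared run loop, while each subclass is typed to its own parameter class, so there is no duplication and no downcasting.

        using System;

        abstract class SimulationParameters
        {
            public int RandomSeed;
            public int SimulationStartPeriod;
            public int SimulationEndPeriod;
        }

        class IstockSimulationParameters : SimulationParameters
        {
            public int ReorderPoint; // hypothetical intermediate-stocking field
        }

        class XdockSimulationParameters : SimulationParameters
        {
            public int DockDoorCount; // hypothetical cross-docking field
        }

        abstract class AbstractSimulator<TParams> where TParams : SimulationParameters
        {
            public void RunSimulation(TParams p)
            {
                // shared behaviour: seeding, the period loop, shipments, demand...
                var rng = new Random(p.RandomSeed);
                for (int t = p.SimulationStartPeriod; t <= p.SimulationEndPeriod; t++)
                    Step(t, p, rng);
            }

            // only the configuration-specific part is abstract
            protected abstract void Step(int period, TParams p, Random rng);
        }

        class IstockSimulator : AbstractSimulator<IstockSimulationParameters>
        {
            protected override void Step(int period, IstockSimulationParameters p, Random rng)
            {
                // intermediate-stocking logic using p.ReorderPoint
            }
        }

        class XdockSimulator : AbstractSimulator<XdockSimulationParameters>
        {
            protected override void Step(int period, XdockSimulationParameters p, Random rng)
            {
                // cross-docking logic using p.DockDoorCount
            }
        }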

  • How to prevent Network Manager from auto-creating network connection profiles with "available to everyone" by default

    - by airtonix
    We have several laptops at work which use Ubuntu 11.10 64-bit. Our wifi access point requires WPA2-EAP authentication (backed by an LDAP server), and staff members use these laptops for presentations using the guest account.
    By default, when you have a wifi card, Network Manager displays the available wireless access points, so the logical course of action for a novice user is to single-left-click the easy-to-use option in the Network Manager drop-down list. At this point the staff member (who is logged in with the guest account) expects to just be able to connect and enter any authentication details if required. But because they are using the guest account, they won't ever have admin permissions (nor do I want them to), so PolKit kicks in with a request for admin authorisation. I solved this part by modifying the PolKit permissions to allow all users to create system network connections.
    However, because these staff members are logging onto the wifi access point with LDAP credentials, and because Network Manager is now saving those credentials as a system connection, their password is available to the next guest user session (system connection profiles are stored in /etc/NetworkManager/system-connections.d/*). It creates system connections by default because "Available to all users" is ticked by default when you quickly connect to a new wifi access point.
    I want Network Manager not to tick this by default. This way I can revert the changes I made to PolKit, and users' network connection profiles will be purged when they log out.

  • PyQt application architecture

    - by L. De Leo
    I'm trying to give a sound structure to a PyQt application that implements a card game. So far I have the following classes:
    - Ui_Game: this describes the UI, of course, and is responsible for reacting to the events emitted by my CardWidget instances
    - MainController: this is responsible for managing the whole application: setup and all the subsequent states of the application (like starting a new hand, displaying notification of state changes on the UI, or ending the game)
    - GameEngine: this is a set of classes that implement the whole game logic
    Now, the way I concretely coded this in Python is the following:

        class CardWidget(QtGui.QLabel):
            def __init__(self, filename, *args, **kwargs):
                QtGui.QLabel.__init__(self, *args, **kwargs)
                self.setPixmap(QtGui.QPixmap(':/res/res/' + filename))

            def mouseReleaseEvent(self, ev):
                self.emit(QtCore.SIGNAL('card_clicked'), self)

        class Ui_Game(QtGui.QWidget):
            def __init__(self, window, *args, **kwargs):
                QtGui.QWidget.__init__(self, *args, **kwargs)
                self.setupUi(window)
                self.controller = None

            def place_card(self, card):
                cards_on_table = self.played_cards.count() + 1
                print cards_on_table
                if cards_on_table <= 2:
                    self.played_cards.addWidget(card)
                if cards_on_table == 2:
                    self.controller.play_hand()

        class MainController(object):
            def __init__(self):
                self.app = QtGui.QApplication(sys.argv)
                self.window = QtGui.QMainWindow()
                self.ui = Ui_Game(self.window)
                self.ui.controller = self
                self.game_setup()

    Is there a better way than injecting the controller into the Ui_Game class via Ui_Game.controller? Or am I totally off-road?

  • Gathering application architecture

    - by userbb
    Suppose there is a system for gathering info about system activities. There is a client part with an interface, and there are agent parts that are installed on each machine. I estimate that there could be a max of 20 computers now; later it could be more, like 50. My candidate solutions:
    1. The agent stores data in a local database, e.g. SQLite. There is also a service which can be used by a client to query data. So if a client wants to display data for 50 computers, it sends a query to 50 computers. I'm on that solution now, but maybe it's totally wrong.
    2. The agent stores data in a local database (I don't know a good one for that). There is also a server (main database), and the local databases are synchronized with the server. In this case, a client connects to the main database to display data.
    3. The agent sends data in real time to the main database. Same as point 2, but with no sync.
    4. Like point 3, but the agent buffers data in a local database and sends it in small chunks to the main database.
    What is the best approach?
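
    To make option 4 concrete, a C# sketch with invented names; the local buffer here is an in-memory queue for brevity, where a real agent would use something durable like SQLite so data survives restarts:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading;

        // The agent records events cheaply and a timer flushes them to the
        // central store in small chunks.
        class BufferingAgent
        {
            readonly ConcurrentQueue<string> _buffer = new ConcurrentQueue<string>();
            readonly Timer _flushTimer;
            const int ChunkSize = 100;

            public BufferingAgent()
            {
                _flushTimer = new Timer(_ => Flush(), null,
                    TimeSpan.Zero, TimeSpan.FromSeconds(30));
            }

            public void Record(string activity)
            {
                _buffer.Enqueue(activity); // never blocks on the network
            }

            void Flush()
            {
                var chunk = new List<string>(ChunkSize);
                string item;
                while (chunk.Count < ChunkSize && _buffer.TryDequeue(out item))
                    chunk.Add(item);
                if (chunk.Count > 0)
                    SendToMainDatabase(chunk); // on failure, re-enqueue or persist locally
            }

            void SendToMainDatabase(IList<string> chunk)
            {
                // POST the chunk to the central collection service
            }
        }

    The same shape also covers option 3 if the flush interval goes to zero, which is one way to compare the two without rewriting the agent.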

  • Architecture for dashboard showing aggregated stats

    - by soulnafein
    I'd like to know the common architectural patterns for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally intensive and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc.
    In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of three ways of doing this:
    1. A scheduled task in app B that connects to an API of app A that provides some aggregated data. App B then stores it in Redis and uses that to render pages. Cons: it makes complex calculations within a web request and requires lots of work (e.g. API server and client, storing, etc.). Pros: the business logic still lives in app A.
    2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis, to be accessed by app B.
    3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it.
    I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what other pros/cons there are for the solutions I've suggested.
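
    A sketch of option 2, assuming the StackExchange.Redis client and made-up key names: a scheduled non-web job in app A computes each month's aggregates and writes them to a Redis hash, which app B reads to draw its dashboards.

        using System;
        using StackExchange.Redis;

        class AggregateJob
        {
            static void Main()
            {
                var redis = ConnectionMultiplexer.Connect("localhost");
                IDatabase db = redis.GetDatabase();

                // Runs on a schedule (e.g. every 10 minutes for the fresh stuff).
                DateTime month = DateTime.UtcNow;
                string key = "stats:" + month.ToString("yyyy-MM"); // one hash per month

                db.HashSet(key, new[]
                {
                    new HashEntry("sales", ComputeSales(month)),
                    new HashEntry("users", ComputeActiveUsers(month)),
                    new HashEntry("responsiveness", ComputeResponsivenessScore(month)),
                });
            }

            // Stand-ins for app A's heavy queries and business logic.
            static long ComputeSales(DateTime month) { return 0; }
            static long ComputeActiveUsers(DateTime month) { return 0; }
            static double ComputeResponsivenessScore(DateTime month) { return 0; }
        }

    This keeps the business logic in app A (the main pro of option 1) while keeping the computation out of any web request, at the cost of app A knowing about app B's cache layout.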

  • Mobile (Client) to Amazon S3 (Server) - Architecture

    - by wasabii
    Let's start off with the problem statement: my iOS application has a login form. When the user logs in, a call is made to my API and access is granted or denied. If access was granted, I want the user to be able to upload pictures to his account and/or manage them.
    As storage I've picked Amazon S3, and I figured it'd be a good idea to have one bucket called "myappphotos", for instance, which contains lots of folders. The folder names are hashes of a user's email and a secret key, so every user has his own, unique folder in my Amazon S3 bucket.
    Since I've just recently started working with AWS, here's my question: what are the best practices for setting up a system like this? I want the user to be able to upload pictures directly to Amazon S3, but of course I cannot hard-code the access key. So I need my API to somehow talk to Amazon and request an access token of sorts - only for the particular folder that belongs to the user I'm making the request for. Can anyone help me out and/or guide me to some sources where a similar problem was addressed? I don't think I'm the first one, and the Amazon documentation is so extensive that I don't really know where to start looking. Thanks a lot!
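
    One common pattern: the API server, which holds the AWS credentials, hands the device a short-lived pre-signed PUT URL scoped to one key inside the user's folder, and the app uploads straight to S3 with it. A sketch of the server side using the AWS SDK for .NET (the question's app is iOS, but the token-issuing side lives on the API server; ComputeUserFolder stands in for the hash-of-email-plus-secret scheme):

        using System;
        using Amazon.S3;
        using Amazon.S3.Model;

        class PhotoUploadTokens
        {
            readonly AmazonS3Client _s3 = new AmazonS3Client(); // credentials from config

            public string CreateUploadUrl(string userEmail, string fileName)
            {
                var request = new GetPreSignedUrlRequest
                {
                    BucketName = "myappphotos",
                    Key = ComputeUserFolder(userEmail) + "/" + fileName,
                    Verb = HttpVerb.PUT,
                    Expires = DateTime.UtcNow.AddMinutes(15) // keep the window small
                };
                return _s3.GetPreSignedURL(request);
            }

            static string ComputeUserFolder(string userEmail)
            {
                // hash of the user's email and the secret key, per the question
                return "(folder-hash)";
            }
        }

    The device then PUTs the image bytes to the returned URL, so no AWS key ever ships with the app. If the client also needs to list or delete its objects, temporary credentials from AWS STS, restricted by policy to the user's prefix, are the usual heavier-weight alternative.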

  • Examples of different architecture methodologies

    - by Lane
    Is there a resource or site which illustrates building the same application (desktop or web) using several different contrasting architectures, such as MVP versus MVVM versus MVC? It would be very helpful to see how they look side by side using real-world code instead of comparing written theory to written theory. I've often found that something can be described well in a book, but when you go to implement it, the subtleties and weaknesses of the theory become readily apparent.

  • Computer Networks UNISA - Chap 15 - Network Management

    - by MarkPearl
    After reading this section you should be able to:
    - Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health
    - Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques
    - Identify the reasons for and elements of an asset management system
    - Plan and follow regular hardware and software maintenance routines

    Fundamentals of Network Management
    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation
    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:
    - Physical topology (types of LAN and WAN topologies: ring, star, hybrid)
    - Access method (does it use Ethernet 802.3, token ring, etc.)
    - Protocols
    - Devices (switches, routers, etc.)
    - Operating systems
    - Applications
    - Configurations (what version of operating system, and config files for server/client software)

    Baseline Measurements
    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate for your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. Various tools are available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations
    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management:
    - Media installation and management (includes designing the physical layout of cable, etc.)
    - Network addressing policies (includes choosing and applying an addressing scheme)
    - Resource sharing and naming conventions (includes rules for logon IDs)
    - Security-related policies
    - Troubleshooting procedures
    - Backup and disaster recovery procedures
    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management
    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software
    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. Various software packages do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in an MIB (Management Information Base).
    Agents communicate information about managed devices via any of several Application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility, sophisticated network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (e.g. polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command-line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

    System and Event Logs
    Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer; similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping
    When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:
    - Protocol
    - IP address
    - User group
    - DiffServ
    - VLAN tag in a Data Link layer frame
    - Service or application

    Caching
    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just a convenience: it prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management
    Another key component in managing networks is identifying and tracking hardware, which is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management
    Networks are always in a state of flux, with changes including:
    - Software changes and patches
    - Client upgrades
    - Shared application upgrades
    - NOS upgrades
    - Hardware and physical plant changes
    - Cabling upgrades
    - Backbone upgrades
    For a detailed explanation of each of these, read the textbook (pages 750-761).

  • How to get rid of the auto-generated sequence number in a network device's name in Windows?

    - by Piotr Dobrogost
    Every time one plugs the same USB wireless adapter into a new USB port, Windows creates a new network device with an auto-generated sequence number, which looks like this: Wireless-N USB Network Adapter #2, Wireless-N USB Network Adapter #3, ... The name of the device is displayed as part of the network's information in Control Panel | Network Connections. How can I get rid of this sequence number?
    I found out that the device name displayed in the network's information is kept in the FriendlyName REG_SZ value under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_[device specific string]\[usb port specific string]. However, when I try to modify this value I get the error "Cannot edit FriendlyName: Error writing the value's new contents."
    I tried to delete the extra keys under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\USB\VID_13B1&PID_0029 but got the error "Cannot delete KEY NAME: Error while deleting key." Trying to solve this problem, I followed this answer, but trying to change the owner with "Replace owner on subcontainers and objects" checked gave the error "Registry Editor could not set owner on the currently selected key, or some of its subkeys."
    To find out which subkey is the source of the problem, I tried changing the owner of each subkey. After successfully changing the owner of the Properties subkey, I saw it has subkeys which were previously hidden, and changing the owner of those subkeys fails in the same way. Any idea how to delete these keys?

  • Growing Into Enterprise Architecture

    - by pat.shepherd
    I am writing this post as I sit in an Enterprise Architecture class, specifically on the Oracle Enterprise Architecture Framework (OEAF). I have long believed that SOA's key strength is that it is the first IT approach that blends or unifies business and technology. That is a common view and is certainly valid, but it is not completely true (or at least accurate). As my personal view of EA grows, I realize more than ever that doing EA is FAR MORE than creating a reference architecture, creating a physical architecture or picking a technology to standardize on. Those are parts of the puzzle but not the whole puzzle by any stretch.
    I am now a firm believer that the various EA frameworks out there provide the rigor and structure required to bridge business strategy/vision to IT strategy/vision. The flow goes something like this: Business Strategy -> Business / Application / Information / Technology Architecture -> SOA Reference Architecture -> SOA Functional Architecture. Governance is imbued throughout to help map, measure and verify the business-to-IT coherence. With those in place, then (and only then) can SOA fulfill its potential to be more than an integration strategy, more than a reuse strategy, but also a foundation for tying the results of IT to business vision.
    Fortunately, EA is an ongoing process, and it is never too late to get started with an understanding of frameworks such as TOGAF, FEA, or OEAF. EA is also never-ending: even once a full-blown enterprise architecture is established, it needs to be constantly evolved. For those who are getting deeper into EA as a discipline, there is plenty of runway to grow as your company/customer begins to look more seriously at EA.
    I will close with a pointer to a great book I have recently read on this subject: Enterprise Architecture as Strategy (http://www.amazon.com/Enterprise-Architecture-Strategy-Foundation-Execution/dp/1591398398/ref=sr_1_1?ie=UTF8&s=books&qid=1268842865&sr=1-1)

  • Network Manager kicks off abruptly

    - by Vijay Selvaraj
    I have installed Ubuntu 10.10 and am trying to connect to my ADSL wireless broadband modem using a Linksys WUSB600N receiver. The good news is that the OS detects my wifi network and I am able to connect to it over WPA authentication with basic settings. But the network drops abruptly and never connects again until I reboot the machine. I have Windows 7 as dual boot on this machine, and the same adapter works perfectly in Windows 7 but not in Ubuntu. Is there anything I need to tweak to make this work, or should I try another network manager on Ubuntu?

  • How do I disable network connection at prelogin?

    - by ProGNOMmers
    (This question relates to Ubuntu 12.10, since previous versions did not connect to the network before login.)
    I had a bad boot today: the Ubuntu screen was stuck at startup, after a green [OK] and a white blinking underscore. In recovery mode I figured out the problem: NetworkManager hung trying to connect to a wireless network that wasn't available anymore, and so I couldn't reach the pre-login stage. Anyway, I really don't like that the PC connects to a network before a user logs in. How is it possible to disable this?

  • Configuring Network without Default Gateway

    - by Homayoon
    I'm trying to connect my desktop and laptop using an ethernet connection. I usually configure the network from the command line, but this time I decided to give Network Manager a try, so I went to Network Connections and selected manual IP configuration. At first I left the default gateway field blank, since I don't need a default gateway. It turns out Network Manager doesn't let me save the connection unless I fill in that field, but entering a phony gateway messes up my Internet connection. Is there any way to do this setup?
