Search Results

Search found 13331 results on 534 pages for 'fluent interface'.


  • Shaping with the Shorewall complex shaper does not work (or I don't understand its principle of operation)

    - by strangeman
    I have a router (Debian 6) with two network interfaces (and one virtual tun interface):

      eth0 - local network 192.168.1.0/24; the router's IP is 192.168.1.1
      eth1 - internet
      tun0 - OpenVPN to the central office; OpenVPN network 10.1.0.0/24, central office network 192.168.0.0/24

    I need to shape all traffic that flows 192.168.1.0/24 -> 192.168.0.1:6666 and 192.168.1.0/24 <- 192.168.0.1:6666, and restrict its speed to 200kbit. I currently have this configuration, but it does not work.

    tcdevices (set up interface parameters):

      #INTERFACE   IN-BANDWIDTH   OUT-BANDWIDTH
      eth0         100mbit        100mbit
      #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    tcrules (mark all traffic that moves on port 6666):

      #MARK   SOURCE      DEST        PROTO   PORT(S)
      1       0.0.0.0/0   0.0.0.0/0   tcp     6666
      #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    tcclasses (shape all marked traffic):

      #INTERFACE   MARK   RATE        CEIL      PRIORITY   OPTIONS
      eth0         1      200kbit     200kbit   2
      eth0         255    9*full/10   full      1          default
      #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    Where is my mistake?

    Read the article

  • Route a specific user's traffic via VPN but still allow local networking

    - by wbg
    So, I want to route certain traffic via a VPN connection and the rest via my normal Internet connection. I want to run several different programs, and most of them don't support binding to a specific network interface (tun0 in my case). I've managed to send a specific user's traffic via the VPN following the answers given here: iptables - Target to route packet to specific interface?

    But unfortunately, when I run a server that connects to the Internet and has a web interface on a local IP (127.0.0.1/192.168.0.*), all the Internet traffic correctly goes via tun0, but I'm unable to connect to the web interface from a local IP as a different user. When I log in as the VPN-ified user, I can access services running on local IPs, but other users/machines can't access any servers I start. Can anyone point me in the right direction?
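    For reference, a minimal sketch of the per-user policy-routing approach from the linked answer, with the local ranges exempted from marking so that LAN-side replies stay off the tunnel (the user name vpnuser, the mark value, and the table number are placeholders):

      # exempt loopback and LAN destinations first so they keep using the main routing table
      iptables -t mangle -A OUTPUT -m owner --uid-owner vpnuser -d 127.0.0.0/8 -j RETURN
      iptables -t mangle -A OUTPUT -m owner --uid-owner vpnuser -d 192.168.0.0/16 -j RETURN
      # mark everything else that user sends, and route marked traffic via tun0
      iptables -t mangle -A OUTPUT -m owner --uid-owner vpnuser -j MARK --set-mark 1
      ip rule add fwmark 1 table 100
      ip route add default dev tun0 table 100

    Whether this cures the "other machines can't reach my servers" symptom depends on where the server's replies are being routed, but exempting the local ranges before the MARK rule is the usual first step: without it, replies to LAN clients get marked and sent down the tunnel.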

    Read the article

  • Wireless clients not getting correct DHCP addresses

    - by szeli
    I apologise first if this is a stupid problem; I'm new to Cisco networking, and I need some help with an existing configuration done by my vendor.

    Environment:

    1. Core switch - Catalyst 6509e, VLANs configured:
       a. vlan 50 (wired clients), 10.0.50.x/24, interface IP 10.0.50.20
       b. vlan 70 (wireless clients), 10.0.70.x/24, interface IP 10.0.70.20
       c. vlan 192 (guest clients), 192.168.1.x/24, interface IP 192.168.1.20
       d. trunk port for the WLC: native vlan 70, allowed vlans 50, 70, 192

    2. Cisco 4402 WLC interfaces:
       a. management, untagged, IP 10.0.70.10
       b. ap-manager, untagged, IP 10.0.70.11
       c. service-port, n/a, IP 192.168.10.1
       d. virtual, n/a, IP 1.1.1.1
       e. guestwlan, vlan 192, IP 192.168.1.100

    3. Cisco AIR-LAP1142N-S-K9 access points:
       LAP01 (WLAN local, interface: management) - IP 10.0.70.21/24, GW 10.0.70.20, DHCP server 10.0.50.10 (scope 10.0.70.101 to 200)
       LAP02 (WLAN guest, interface: guestwlan) - IP 192.168.1.21/24, GW 192.168.1.20, DHCP server 192.168.1.10 (scope 192.168.1.101 to 200)

    Here's the problem: wireless clients connected to the guest WLAN keep getting DHCP leases from WLAN local's server 10.0.50.10 (scope 10.0.70.101 to 200). Can anyone please help? Thanks!
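    As a first sanity check (a hedged suggestion, not from the original post), it is worth confirming from the WLC CLI which dynamic interface - and therefore which DHCP server - the guest WLAN is actually bound to:

      (Cisco Controller) > show wlan summary
      (Cisco Controller) > show interface detailed guestwlan

    If the guest WLAN turns out to be mapped to the management interface rather than to guestwlan, clients would be handed leases via 10.0.50.10, which matches the symptom described.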

    Read the article

  • Networking issues with WNR3500L

    - by ageis23
    When I try connecting to my wireless network it attempts to connect, then gives up. There's something strange going on with the MACs: the Ethernet switch and all the VLAN interfaces have the MAC 00:FF:FF:FF:FF:FF.

      config 'switch' 'eth0'
              option 'vlan0' '2 3 4 8*'
              option 'vlan1' '0 8'
              option 'vlan2' '1 8'

      config 'interface' 'loopback'
              option 'ifname' 'lo'
              option 'proto' 'static'
              option 'ipaddr' '127.0.0.1'
              option 'netmask' '255.0.0.0'

      config 'interface' 'lan'
              option 'type' 'bridge'
              option 'ifname' 'eth0.1'
              option 'proto' 'static'
              option 'netmask' '255.255.255.0'
              option 'ipaddr' '192.168.2.1'
              option 'ip6addr' ''
              option 'gateway' '192.168.1.253'
              option 'ip6gw' ''
              option 'dns' ''

      config 'interface' 'wan'
              option 'ifname' 'eth0'
              option 'proto' 'dhcp'
              option 'ipaddr' '192.168.1.8'
              option 'ip6addr' ''
              option 'netmask' '255.255.255.0'
              option 'gateway' '192.168.1.253'
              option 'ip6gw' ''
              option 'dns' '192.168.1.253'

      config 'interface' 'dmz'
              option 'ifname' 'eth0.2'
              option 'proto' 'static'
              option 'ipaddr' '192.168.0.1'
              option 'netmask' '255.255.255.0'

    Any help on this will be greatly appreciated! When I try setting the MAC using macaddr it does nothing. It works perfectly fine when I turn the authentication off.
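    For reference, the usual way to pin a MAC address in an OpenWrt /etc/config/network is an option macaddr line inside the interface section - a sketch with a made-up address:

      config 'interface' 'lan'
              option 'type' 'bridge'
              option 'ifname' 'eth0.1'
              option 'macaddr' '00:24:B2:12:34:56'
              option 'proto' 'static'
              option 'ipaddr' '192.168.2.1'
              option 'netmask' '255.255.255.0'

    If that genuinely has no effect after a network restart, the switch driver may be overriding it, which would also explain the 00:FF:FF:FF:FF:FF placeholder MACs.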

    Read the article

  • Secondary fallback/failover network on Cisco ASA

    - by tyranitar
    In my network there is a Cisco ASA 55x0 with an "inside" interface (network 192.168.79.0/24) and an "outside" interface (network 89.x.x.48/29). There is this NAT rule:

      object network NAToutside
       nat (inside,outside) dynamic interface

    and the static route

      route outside 0.0.0.0 0.0.0.0 89.x.x.49 1

    and all the ACL rules. Now I have another new outside network from another ISP, called "outside2"; this network is already NATted, and the Cisco ASA is in the network 192.168.70.0/24. I would like to use this network as a fallback. So I set the NAT rule:

      object network NAToutside2
       nat (inside,outside2) dynamic interface

    and the static route with a different metric:

      route outside2 0.0.0.0 0.0.0.0 192.168.70.1 2

    Clearly it doesn't work: when I disconnect the outside Ethernet cable, no workstation can connect to the Internet through the outside2 network... What more do I need?
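    For context, the standard ASA pattern for dual-ISP fallback ties the primary route to SLA reachability tracking, so the backup route is installed whenever a monitored host stops answering - not only when the interface itself goes down. A hedged sketch, reusing the question's addresses with an arbitrary monitored host:

      sla monitor 1
       type echo protocol ipIcmpEcho 8.8.8.8 interface outside
      sla monitor schedule 1 life forever start-time now
      track 1 rtr 1 reachability
      route outside 0.0.0.0 0.0.0.0 89.x.x.49 1 track 1
      route outside2 0.0.0.0 0.0.0.0 192.168.70.1 2

    Stale translations can also keep existing connections pinned to the dead path after a failover, so a clear xlate is often needed as well.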

    Read the article

  • A really unique case of Load Balancing

    - by Shamshun
    I have an Internet connection which has limited up/down bandwidth per IP address. What I want to do is get multiple IP addresses on a single LAN interface, and use a load balancer to distribute traffic across them. I was successful at getting 100 IP addresses on a single interface. My problem is that Linux and Windows use the first IP address of an interface by default, so my bandwidth is not increased. I would appreciate it if someone could tell me what load-balancing software has the ability to distribute load between multiple IPs on a single interface. I have tried to do this on both Win7 and Backtrack Linux R2.
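    On the Linux side, one way to approximate this without dedicated load-balancing software (a sketch, assuming hypothetical addresses 192.168.1.101-103 and that all traffic leaves via eth0) is to round-robin the source address of new connections using iptables' statistic match:

      # each NEW connection is SNATed to one of three source IPs in turn
      iptables -t nat -A POSTROUTING -o eth0 -m state --state NEW \
          -m statistic --mode nth --every 3 --packet 0 -j SNAT --to-source 192.168.1.101
      iptables -t nat -A POSTROUTING -o eth0 -m state --state NEW \
          -m statistic --mode nth --every 2 --packet 0 -j SNAT --to-source 192.168.1.102
      iptables -t nat -A POSTROUTING -o eth0 -m state --state NEW \
          -j SNAT --to-source 192.168.1.103

    Whether the ISP's limiter actually treats those as separate subscribers is, of course, a different question.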

    Read the article

  • Tracking downloads of non-HTML files (like PDFs) with jQuery and Google Analytics

    - by developerit
    Hi folks, it’s been quite calm at Developer IT this summer since we were all involved in other projects, but we are slowly coming back. In this post, we will present a simple way of tracking file downloads in Google Analytics with the help of jQuery.

    We work for a client that offers a lot of PDF files for download on their web site and wanted to know which ones are the most popular. They have used Google Analytics for a long time now, and we did not want to build a second interface to present those stats to our client, so using the IIS logs was not an option worth considering. Since Google already offers us a splendid web interface and a powerful API, we decided to hook simple JavaScript code into the jQuery click event to notify Analytics that a PDF has been requested.

      (function ($) {
          function trackLink(e) {
              var url = $(this).attr('href');
              //alert(url); // for debug purposes

              // old page tracker code
              pageTracker._trackPageview(url);

              // you can use the new one too
              _gaq.push(["_trackPageview", url]);

              // always return true, so the browser continues its job
              return true;
          }

          // when the DOM is ready
          $(function () {
              // hook up the click event
              $('.pdf-links a').click(trackLink);
          });
      })(jQuery);

    You can be more precise, or even make sure not to miss a single click, by changing the selector which hooks up the click event. I have been using this code to track AJAX requests and it works flawlessly.
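    A small variation, if you would rather record these as Analytics Events than inflate your pageview counts (same hook, assuming the classic asynchronous ga.js API that _gaq comes from):

      $('.pdf-links a').click(function () {
          // category, action, label
          _gaq.push(['_trackEvent', 'Downloads', 'PDF', $(this).attr('href')]);
          return true;
      });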

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than having the SQL we override defined statically in the interface design, we will have it configurable via a variable… at runtime. Imagine you have a well-defined interface shape that you want fulfilled, and that shape can be satisfied from a number of different sources: that is what this allows - the ability for one interface to consume data from many different places using variables. The cool thing about ODI’s reference API and this is that it can be fantastically flexible and useful.

    When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or can get prompted, to be correct) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously there are some dependencies on KMs/technologies here). For example, the following Groovy snippet first executes with the query using the SCOTT model and EMP; the next time it is the BOB model and the datastore OTHERS:

      m = new Properties();
      m.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>');
      s = new StartupParams(m);
      runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

      m2 = new Properties();
      m2.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>');
      s2 = new StartupParams(m2);
      runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You’ll need a patch to 11.1.1.6 for this type of capability. Thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • How to negotiate with software vendors who do not follow HL7 standards

    - by Peter Turner
    Take, for instance, the "" - I'd hope that anyone who has spent any time dealing with HL7 messages knows that "" signifies that something should be deleted. "" is not an empty string, it's not filler, etc... But occasionally one may meet a vendor who persists in sending "" instead of just sending nothing at all. Since I work for a small business and have an extremely flexible HL7 interface, I can ignore ""s in received messages. But these things are adding up.

    Some vendors like to send custom-formatted fields with pseudo-components that they leave others to interpret for themselves. Some vendors send all their information in note segments and assume you're only going to show users the information they send in a monospace font. Some vendors even have the audacity to send carriage return line feeds at the end of each line of a file interface. Some vendors absolutely refuse to send decimal numbers and in so doing refuse to send any numbers.

    So, with all this crippling humanity against the simple plastic software man, how does one bend without breaking*? Or better yet, how does one fight back and still make money?

    *my answer is usually to create an interface for the interface and keep the HL7 processing pure, but I don't think this is the best solution
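    For readers who don't live in HL7: the distinction is that a field containing the two-character value "" means "delete whatever you have stored", while a genuinely empty field means "no statement either way". A contrived segment fragment (not from any real feed) to illustrate:

      ZZZ|12345|""|     <- second field: explicit delete of the stored value
      ZZZ|12345||       <- second field: empty, i.e. nothing asserted

    A vendor that emits "" for every field it happens to have no data for is therefore instructing you to wipe data, which is why it can't simply be treated as filler.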

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and ongoing support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case, because this is an ASP .NET application) is large and would need to be reproduced into a similar configuration file for a console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the easily accessible web application.

    Mix and Match

    So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face.

    Implementation

    As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows:

    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

    Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

    Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

      public class CustomerTools
      {
          [Verb]
          public void UpdateName(int customerId, string firstName, string lastName)
          {
              // invoke internal services/domain objects/whatever to perform the update
          }
      }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

      Parser.Run(args, new CustomerTools());

    Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

      SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    ‘SomeExe’ in this example just represents the name of the .exe that would be created from our console application. CLAP then interprets the arguments passed in to find the method that should be invoked, and automatically parses out the parameters that need to be passed in.

    After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a console application. There are, however, a few sticking points that I’m working around:

    Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would (see the sketch after this list).

    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

    Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
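    Since the quote-aware splitting is the first hurdle, here is a minimal sketch of how it might look (a hypothetical helper, not part of the CLAP library; it handles double-quoted phrases only, with no escape sequences):

      using System.Collections.Generic;
      using System.Text;

      public static class CommandLineSplitter
      {
          // Splits UpdateName -firstName:"Mary Jo" into { UpdateName, -firstName:Mary Jo }
          public static string[] Split(string commandLine)
          {
              var args = new List<string>();
              var current = new StringBuilder();
              bool inQuotes = false;

              foreach (char c in commandLine)
              {
                  if (c == '"')
                  {
                      inQuotes = !inQuotes; // toggle quoting; the quote itself is dropped
                  }
                  else if (char.IsWhiteSpace(c) && !inQuotes)
                  {
                      if (current.Length > 0)
                      {
                          args.Add(current.ToString());
                          current.Clear();
                      }
                  }
                  else
                  {
                      current.Append(c);
                  }
              }

              if (current.Length > 0)
              {
                  args.Add(current.ToString());
              }

              return args.ToArray();
          }
      }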
    I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full-blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open-sourcing that.

    Read the article

  • Wireless with WEP extremely slow on an Acer Timeline 4810T with a Centrino Wireless-N 1000

    - by noq38
    I've upgraded an Acer Timeline 4810T to Ubuntu 11.10. Everything works fine except for the darn wireless interface (network manager). I just tested the wireless interface over a non-encrypted signal and it works beautifully; the issue is definitely related to WEP. Unfortunately, some of the networks I need to connect to are WEP encrypted, so this is a serious issue that is preventing me from using Ubuntu on my laptop. This was no problem in 11.04 and prior. Is there a simple solution for this? Any suggestions?

    Here's more hardware information; hopefully this helps to debug the network issue:

      sudo lshw -class network
        *-network
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 00
             serial: 00:1e:64:3c:5e:e0
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-13-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:43 memory:d2400000-d2401fff

      lspci
        02:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000

      rfkill list
        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: acer-wireless: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    Many thanks for your help!

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes

      Name different device categories
      Discuss the functions and structure of I/O modules
      Describe the principles of Programmed I/O
      Describe the principles of Interrupt-driven I/O
      Describe the principles of DMA
      Discuss the evolution characteristics of I/O channels
      Describe different types of I/O interface
      Explain the principles of point-to-point and multipoint configurations
      Discuss the way in which a FireWire serial bus functions
      Discuss the principles of InfiniBand architecture

    External Devices

    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:

      Human readable – e.g. video display
      Machine readable – e.g. magnetic disk
      Communications – e.g. wifi card

    I/O Modules

    An I/O module has two major functions:

      Interface to the processor and memory via the system bus or central switch
      Interface to one or more peripheral devices by tailored data links

    Module Functions

    The major functions or requirements for an I/O module fall into the following categories:

      Control and timing
      Processor communication
      Device communication
      Data buffering
      Error detection

    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following:

      Command decoding
      Data
      Status reporting
      Address recognition

    The I/O module must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure

    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor.

    Three techniques are possible for I/O operations:

      Programmed I/O
      Interrupt-driven I/O
      DMA

    Programmed I/O

    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

    I/O Commands

    To execute an I/O related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:

      Control – used to activate a peripheral and tell it what to do
      Test – used to test various status conditions associated with an I/O module and its peripherals
      Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
      Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral

    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions

    With programmed I/O there is a close correspondence between the I/O related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – and when the processor issues an I/O command, the command contains the address of the desired device; thus each I/O module must interpret the address lines to determine if the command is for itself.

    When the processor, main memory and I/O share a common bus, two modes of addressing are possible:

      Memory-mapped I/O
      Isolated I/O

    (for a detailed explanation read page 245 of the book)

    The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O

    Interrupt-driven I/O works as follows:

      The processor issues an I/O command to a module and then goes on to do some other useful work
      The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
      The processor then executes the data transfer and resumes its former processing

    Interrupt Processing

    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:

      The device issues an interrupt signal to the processor
      The processor finishes execution of the current instruction before responding to the interrupt
      The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
      The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
      The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
      The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused the interrupt
      When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
      Finally, the PSW and program counter values from the stack are restored.
    Design Issues

    Two design issues arise in implementing interrupt I/O:

      Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
      If multiple interrupts have occurred, how does the processor decide which one to process?

    Addressing device recognition, 4 general categories of techniques are in common use:

      Multiple interrupt lines
      Software poll
      Daisy chain
      Bus arbitration

    For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:

      The I/O transfer rate is limited by the speed with which the processor can test and service a device
      The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

    Direct Memory Access

    When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function

    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. The DMA module must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach – referred to as cycle stealing).

    When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the DMA module the following information:

      Whether a read or write is requested, using the read or write control line between the processor and the DMA module
      The address of the I/O device involved, communicated on the data lines
      The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
      The number of words to be read or written, communicated via the data lines and stored in the data count register

    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors

    Characteristics of I/O Channels

    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:

      A selector channel controls multiple high-speed devices.
      A multiplexor channel can handle I/O with multiple characters as fast as possible to multiple devices.
    The External Interface: FireWire and InfiniBand

    Types of Interfaces

    One major characteristic of the interface is whether it is serial or parallel:

      parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
      serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time

    With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:

      The I/O module sends a control signal requesting permission to send data
      The peripheral acknowledges the request
      The I/O module transfers data
      The peripheral acknowledges receipt of the data

    For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook.

    Read the article

  • Can I use Access with Visual Basic for building a database? [on hold]

    - by user3413537
    I am the only programmer where I work (summer job), and I am a student with only a few years of programming experience. I was asked to build a database, and I am very excited about this project because hopefully I can learn a lot from it.

    Using this database, my manager is supposed to be able to assign work (dealing with businesses) to different people within the company using an interface (all workers have a shared drive). When workers are done with the paperwork related to a business, they can check off that it's done, add comments at the bottom of the interface, and then move on to the next business. The information on the interface will be populated from Excel, so workers know which businesses they are dealing with.

    The only experience I've had with databases is some querying with SQL, and I've built GUI interfaces with Java. I've done some research, and I believe the best approach would be to build a GUI using Microsoft Visual Studio (Visual Basic) first, then figure out a way to populate the interface from Excel. Also, because the data is pretty straightforward and not complicated, I will be using MS Access to store and track the database. I know this won't be easy, but for all you geniuses out there: is this on the right path? Thanks.

    Read the article

  • How to set up a new Java environment - largely interfaces

    - by Chris Kimpton
    Hi,

    Looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need interfaces to systems A, B and C; then we will be writing interfaces X-A, X-B, X-C. Our system has a bus within it, so publishing on our side will be to the bus, and the interface processes will be taking from the bus and mapping to the destination system. It's for a vendor-based system - so most of the core code we can't touch.

    Currently thinking we will have several processes, one per interface we need to build. The question is how to structure things. Several of the APIs we need to work with are Java based. We could go EJB, but we prefer to keep it simple: one process per interface, so that we can restart them individually. Similarly, SOA seems overkill, although I am probably mixing my thoughts about implementations of it compared to the concepts behind it...

    Currently thinking that something Spring-based is the way to go. In true "leverage a new tech if possible" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly - perhaps even storing the mapping code in the DB, as Ruby snippets that get mixed in... but that's an aside...

    So, any comments/thoughts on the Spring approach - anything more up-to-date/relevant these days?

    EDIT: Looking at JRuby further, I am tempted to write it fully in JRuby... in which case do we need any frameworks at all, perhaps some gems to make things clearer...

    Thanks in advance,
    Chris

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write-up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs.

    My method, and how it sucked

    I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists the sample will be chock full of biases. Regardless, these were the questions:

      What do you do? Select the option that best describes your role
      What kind of software does your organization make? (optional)
      In your organization, who writes the text on your software user interfaces? (for example: button names, static text, tooltips, and so on) Tick all that apply.
      In your organization who is responsible for user interface text? Who "owns" it?

    The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet.

    What did people do?

    Boring graph number one: I wasn't expecting that. Given I pimped the survey on Twitter and a couple of Tech Comms discussion lists, I was banking more on an even Content Strategy/Tech Comms split.

    What the "Others" specified: Three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure. There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless. There was also an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator. That's a pretty healthy slice through the industry.

    Who wrote UI text?

    Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved - they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production.
    I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem. Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated.

    Who owned UI text

    Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: in around 60% of cases the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers) there's a case for each role being unsuited, as you'll see in Rachel's blog post. So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that, as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring the frequency of those accidents.

    What the Others specified:

      I don't know who owns it. I assume the project manager is responsible.
      "copywriters" when they wish to annoy me.
      the client's web maintenance person, often PR or MarComm

    That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome. Because the words on the screen, particularly the names of things, are fundamental to the ability to understand and use software.

    It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks.

    The most interesting question

    Was one I forgot to ask. It's this: Does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it becomes more interesting. So taken with the other questions, this would have let me find out what I really want to know:

      What proportion of organizations have Tech Comms professionals but don't use them for UI text?
      Who writes UI text in their place?
      Why does this happen?
It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author - the old-school kind with the thousand-page pdf and the grammar obsession - if you've got one of those then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article

  • Windows Live SkyDrive: How To Move or Copy Files Between Folders

    - by Gopinath
    Microsoft has a very simple and easy-to-use interface for moving files between folders in the Windows operating system, but their own cloud storage service, Windows Live SkyDrive, complicates these simple, everyday operations - we need a guide just to figure out how to perform basic copy/move operations. A couple of years ago we wrote about moving files between folders in an old version of SkyDrive, but that guide does not hold good today, as SkyDrive has gone through many user interface changes in the recent past. Today one of our readers asked us how to move/copy files in the latest version of SkyDrive, and here are the steps to be followed:

    1. Log in to your Windows Live SkyDrive.
    2. Select the file you want to move or copy by clicking on the information icon (see 2 in the image below).
    3. After selecting the information icon, expand the Information section displayed in the right-side panel to access the Move and Copy options (see 3 in the image below).
    4. To move the selected file to another folder, select the Move option and SkyDrive will guide you through the folder-selection user interface for choosing the target folder.
    5. Once you navigate to the target folder where you want to move the file, click on “Move this file into <<Target Folder>>”.
    6. You are done.

    Dear Microsoft, SkyDrive provides us tonnes of free storage, but please make its user interface a bit better so that we don’t need to write guides to perform basic operations. Hope you listen to your customers.

    This article titled, Windows Live SkyDrive: How To Move or Copy Files Between Folders, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Interfaces and Virtuals Everywhere????

    - by David V. Corbin
    First a disclaimer: this post is about micro-optimization of C# programs and does not apply to most common scenarios - but when it does, it is important to know. Many developers are in the habit of declaring members virtual to allow for future expansion, or of using interface-based designs(1). Few of these developers think about the runtime performance impact of this decision. A simple test will show that this decision can have a serious impact. For our purposes, we used a simple loop to time the execution of 1 billion calls to both non-virtual and virtual implementations of a method that took no parameters and had a void return type:

      Direct Call:     1.5uS
      Virtual Call:   13.0uS

    The overhead of the call increased by nearly an order of magnitude! Once again, it is important to realize that if the method does anything of significance then this ratio drops quite quickly. If the method does just 1mS of work, then the differential only accounts for a 1% decrease in performance. Additionally, the method in question must be called thousands of times in order to produce a measurable impact at the application level. Yet let us consider a situation such as the per-pixel processing of a graphics processing application. Here we may have a method which is called millions of times, and even the slightest increase in overhead can have significant ramifications. In this case using either explicit virtuals or interface-based constructs is likely to be a mistake. In conclusion, good design principles should always be the driving force behind decisions such as these; but remember that these decisions do not come for free.

    (1) When a concrete class member implements an interface it does not need to be explicitly marked as virtual (unless, of course, it is to be overridden in a derived concrete class). Nevertheless, when accessed via the interface it behaves exactly as if it had been marked as virtual.
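    For the curious, a timing loop of the kind described might look roughly like this (a hedged sketch - the post does not include its code, and a modern JIT may inline the direct call entirely, so absolute numbers will vary by runtime and hardware):

      using System;
      using System.Diagnostics;

      interface IWork { void DoWork(); }

      sealed class InterfaceWork : IWork { public void DoWork() { } }

      class DirectWork { public void DoWork() { } }

      static class CallOverheadBenchmark
      {
          const long Iterations = 1000000000L; // 1 billion calls, as in the post

          static void Main()
          {
              var direct = new DirectWork();
              IWork viaInterface = new InterfaceWork();

              var sw = Stopwatch.StartNew();
              for (long i = 0; i < Iterations; i++) direct.DoWork();       // direct (non-virtual) call
              Console.WriteLine("Direct:    " + sw.Elapsed);

              sw.Restart();
              for (long i = 0; i < Iterations; i++) viaInterface.DoWork(); // dispatch via interface
              Console.WriteLine("Interface: " + sw.Elapsed);
          }
      }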

    Read the article

  • Cannot ping router with a static IP assigned?

    - by Uriah
    Alright. I am running Ubuntu LTS 12.04 and am trying to configure a local caching/master DNS server, so I am using Bind9. First, here are some things via default DHCP.

    /etc/network/interfaces:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      auto eth0
      iface eth0 inet dhcp

      # The primary network interface - STATIC
      #auto eth0
      #iface eth0 inet static
      #    address 192.168.2.113
      #    netmask 255.255.255.0
      #    network 192.168.2.0
      #    broadcast 192.168.2.255
      #    gateway 192.168.2.1
      #    dns-search uclemmer.net
      #    dns-nameservers 192.168.2.113 8.8.8.8

    /etc/resolv.conf:

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver 192.168.2.1
      search uclemmer.net

    ifconfig:

      eth0  Link encap:Ethernet  HWaddr 00:14:2a:82:d4:9e
            inet addr:192.168.2.103  Bcast:192.168.2.255  Mask:255.255.255.0
            inet6 addr: fe80::214:2aff:fe82:d49e/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:1067 errors:0 dropped:0 overruns:0 frame:0
            TX packets:2504 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:153833 (153.8 KB)  TX bytes:214129 (214.1 KB)
            Interrupt:23 Base address:0x8800

      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:915 errors:0 dropped:0 overruns:0 frame:0
            TX packets:915 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:71643 (71.6 KB)  TX bytes:71643 (71.6 KB)

    ping:

      ping -c 4 192.168.2.1
      PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
      64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.368 ms
      64 bytes from 192.168.2.1: icmp_req=2 ttl=64 time=0.224 ms
      64 bytes from 192.168.2.1: icmp_req=3 ttl=64 time=0.216 ms
      64 bytes from 192.168.2.1: icmp_req=4 ttl=64 time=0.237 ms
      --- 192.168.2.1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 2997ms
      rtt min/avg/max/mdev = 0.216/0.261/0.368/0.063 ms

      ping -c 4 google.com
      PING google.com (74.125.134.102) 56(84) bytes of data.
      64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=1 ttl=48 time=15.1 ms
      64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=2 ttl=48 time=11.4 ms
      64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=3 ttl=48 time=11.6 ms
      64 bytes from www.google-analytics.com (74.125.134.102): icmp_req=4 ttl=48 time=11.5 ms
      --- google.com ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3003ms
      rtt min/avg/max/mdev = 11.488/12.465/15.118/1.537 ms

    ip route:

      default via 192.168.2.1 dev eth0  metric 100
      192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.103

    As you can see, with DHCP everything seems to work fine. Now, here are things with a static IP.

    /etc/network/interfaces:

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      #auto eth0
      #iface eth0 inet dhcp

      # The primary network interface - STATIC
      auto eth0
      iface eth0 inet static
          address 192.168.2.113
          netmask 255.255.255.0
          network 192.168.2.0
          broadcast 192.168.2.255
          gateway 192.168.2.1
          dns-search uclemmer.net
          dns-nameservers 192.168.2.1 8.8.8.8

    I have tried dns-nameservers in various combos of *.2.1, *.2.113, and other reliable public nameservers.

    /etc/resolv.conf:

      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver 192.168.2.1
      nameserver 8.8.8.8
      search uclemmer.net

    Obviously, when I change the nameservers in the /etc/network/interfaces file, the nameservers change here too.

    ifconfig:

      eth0  Link encap:Ethernet  HWaddr 00:14:2a:82:d4:9e
            inet addr:192.168.2.113  Bcast:192.168.2.255  Mask:255.255.255.0
            inet6 addr: fe80::214:2aff:fe82:d49e/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:1707 errors:0 dropped:0 overruns:0 frame:0
            TX packets:2906 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:226230 (226.2 KB)  TX bytes:263497 (263.4 KB)
            Interrupt:23 Base address:0x8800

      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:985 errors:0 dropped:0 overruns:0 frame:0
            TX packets:985 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:78625 (78.6 KB)  TX bytes:78625 (78.6 KB)

    ping:

      ping -c 4 192.168.2.1
      PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
      --- 192.168.2.1 ping statistics ---
      4 packets transmitted, 0 received, 100% packet loss, time 3023ms

      ping -c 4 google.com
      ping: unknown host google.com

    Lastly, here are my Bind zone files.

    /etc/bind/named.conf.options:

      options {
              directory "/etc/bind";

              query-source address * port 53;
              notify-source * port 53;
              transfer-source * port 53;

              // If there is a firewall between you and nameservers you want
              // to talk to, you may need to fix the firewall to allow multiple
              // ports to talk. See http://www.kb.cert.org/vuls/id/800113

              // If your ISP provided one or more IP addresses for stable
              // nameservers, you probably want to use them as forwarders.
              // Uncomment the following block, and insert the addresses replacing
              // the all-0's placeholder.
              // forwarders {
              //      0.0.0.0;
              // };

              forwarders {
                      // My local
                      192.168.2.113;
                      // Comcast
                      75.75.75.75; 75.75.76.76;
                      // Google
                      8.8.8.8; 8.8.4.4;
                      // DNSAdvantage
                      156.154.70.1; 156.154.71.1;
                      // OpenDNS
                      208.67.222.222; 208.67.220.220;
                      // Norton
                      198.153.192.1; 198.153.194.1;
                      // Verizon
                      4.2.2.1; 4.2.2.2; 4.2.2.3; 4.2.2.4; 4.2.2.5; 4.2.2.6;
                      // Scrubit
                      67.138.54.100; 207.255.209.66;
              };

              //allow-query { localhost; 192.168.2.0/24; };
              //allow-transfer { localhost; 192.168.2.113; };
              //also-notify { 192.168.2.113; };
              //allow-recursion { localhost; 192.168.2.0/24; };

              //========================================================================
              // If BIND logs error messages about the root key being expired,
              // you will need to update your keys.
              // See https://www.isc.org/bind-keys
              //========================================================================
              dnssec-validation auto;

              auth-nxdomain no;    # conform to RFC1035
              listen-on-v6 { any; };
      };

    /etc/bind/named.conf.local:

      //
      // Do any local configuration here
      //

      // Consider adding the 1918 zones here, if they are not used in your
      // organization
      //include "/etc/bind/zones.rfc1918";

      zone "example.com" {
              type master;
              file "/etc/bind/zones/db.example.com";
      };

      zone "2.168.192.in-addr.arpa" {
              type master;
              file "/etc/bind/zones/db.2.168.192.in-addr.arpa";
      };

    /etc/bind/zones/db.example.com:

      ;
      ; BIND data file for the example.com interface
      ;
      $TTL    604800
      @       IN      SOA     yossarian.example.com. root.example.com. (
                              1343171970      ; Serial
                              604800          ; Refresh
                              86400           ; Retry
                              2419200         ; Expire
                              604800 )        ; Negative Cache TTL
      ;
      @       IN      NS      yossarian.example.com.
      @       IN      A       192.168.2.113
      @       IN      AAAA    ::1
      @       IN      MX 10   yossarian.example.com.
      ;
      yossarian       IN      A       192.168.2.113
      router          IN      A       192.168.2.1
      printer         IN      A       192.168.2.200
      ;
      ns01    IN      CNAME   yossarian.example.com.
      www     IN      CNAME   yossarian.example.com.
      ftp     IN      CNAME   yossarian.example.com.
      ldap    IN      CNAME   yossarian.example.com.
      mail    IN      CNAME   yossarian.example.com.

    /etc/bind/zones/db.2.168.192.in-addr.arpa:

      ;
      ; BIND reverse data file for the 2.168.192.in-addr.arpa interface
      ;
      $TTL    604800
      @       IN      SOA     yossarian.example.com. root.example.com. (
                              1343171970      ; Serial
                              604800          ; Refresh
                              86400           ; Retry
                              2419200         ; Expire
                              604800 )        ; Negative Cache TTL
      ;
      @       IN      NS      yossarian.example.com.
      @       IN      A       255.255.255.0
      ;
      113     IN      PTR     yossarian.example.com.
      1       IN      PTR     router.example.com.
      200     IN      PTR     printer.example.com.

    ip route:

      default via 192.168.2.1 dev eth0  metric 100
      192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.113

    I can SSH in to the machine locally at *.2.113, or at whatever address is dynamically assigned when in DHCP "mode". *.2.113 is in my router's range, and I have ports open and forwarding to the server. Pinging is enabled on the router too. I briefly had a static configuration working, but it died after the first reboot. Please let me know what other info you might need. I am beyond frustrated/baffled.
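    One diagnostic worth running here (a suggestion, not part of the original question): since the gateway answers pings under DHCP but not with the static address, check whether ARP is resolving at all when the static configuration is up, e.g.:

      ip neigh show dev eth0
      sudo arping -I eth0 192.168.2.1
      sudo tcpdump -ni eth0 arp

    If the router's entry shows up as FAILED or incomplete, the problem is at layer 2 (often an IP conflict with the old DHCP lease) rather than anything in the Bind configuration.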

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I now ask udev to re-read the config? Neither service udev restart nor udevadm control --reload-rules helps. Is there any valid way other than rebooting? (Yes, a reboot fixes the issue.) UPD: yes, I know I should prepend the commands with sudo, but neither command I posted above changes anything in the ifconfig -a output: I still see eth1, not eth0. UPD 2: I just changed the NAME property of the udev rule line, and I don't know any reason for that to be ineffective. Both commands above execute without errors, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected. UPD 3: let me explain the whole situation better ;-) For development purposes I'm writing a script that clones virtual machines (VirtualBox-driven) and pre-configures them. I clone a VM and start it, and since the clone's network interface has a new MAC, udev appends a second rule to the persistent network rules. So right after the machine boots for the first time, there are two rules: eth0, which no longer exists because it refers to the MAC of the original VM image, and eth1, which exists, but all the configuration files refer to eth0, which is no good for me. So with sed I delete the eth0 line (it is obsolete and useless in the cloned image) and rename eth1 to eth0. Now I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it would add time, which matters at the VM-building stage); I just want /dev rebuilt with some command, so I have a ready-to-use VM without any reboots.
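
    One workaround that should avoid the reboot (a sketch, assuming the driver allows renaming a downed interface) is to bypass udev and rename the interface directly with ip:

        sudo ip link set eth1 down         # the interface must be down to rename it
        sudo ip link set eth1 name eth0    # apply the name the persistent rule now expects
        sudo ip link set eth0 up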

    Read the article

  • Question about separating game core engine from game graphics engine...

    - by Conrad Clark
    Suppose I have a SquareObject class, which implements IDrawable, an interface containing the method void Draw(). I want to separate the drawing logic from the game core engine. My main idea is to create a static class that is responsible for dispatching draw actions to the graphics engine:

        public static class DrawDispatcher<T>
        {
            private static Action<T> DrawAction = objectToDraw => { };

            public static void SetDrawAction(Action<T> action)
            {
                DrawAction = action;
            }

            public static void Dispatch(T obj)
            {
                DrawAction(obj);
            }
        }

        public static class Extensions
        {
            public static void DispatchDraw<T>(this object obj)
            {
                DrawDispatcher<T>.Dispatch((T)obj);
            }
        }

    Then, on the core side:

        public class SquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

    And on the graphics side:

        public static class SquareRender
        {
            // rendering resources here

            public static void Initialize()
            {
                DrawDispatcher<SquareObject>.SetDrawAction(square =>
                {
                    // my square rendering logic
                });
            }
        }

    Does this "pattern" follow best practices? As a plus, I could easily change the render scheme of each object by changing the DispatchDraw type parameter, as in:

        public class SuperSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<SquareObject>();
            }
            #endregion
        }

        public class RedSquareObject : GameObject, IDrawable
        {
            #region Interface
            public void Draw()
            {
                this.DispatchDraw<RedSquareObject>();
            }
            #endregion
        }

    RedSquareObject would have its own render method, but SuperSquareObject would render as a normal SquareObject. I'm just asking because I do not want to reinvent the wheel; there may be a similar (and better) design pattern that I'm not aware of. Thanks in advance!
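
    To make the intended flow concrete, here is a rough bootstrap sketch (Bootstrap is a hypothetical name; the other types are from the sample above):

        public static class Bootstrap
        {
            public static void Run()
            {
                // The graphics side registers its draw actions once at startup.
                SquareRender.Initialize();

                // The core side only ever sees IDrawable; the call is routed
                // through DrawDispatcher<SquareObject> to SquareRender's action.
                IDrawable square = new SquareObject();
                square.Draw();
            }
        }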

    Read the article

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#):

        public interface ITranslationService
        {
            string GetTranslation(string key, CultureInfo targetLanguage);

            // some other methods...
        }

    A first simple implementation of this interface already exists and goes to the database for every method call. For a UI that is translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for that language and cache them; serve all subsequent translation requests from the cache. I thought about implementing this new behavior as a decorator, because all other methods of the interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use GetTranslation of the decorated instance at all to get all translations of a certain language; it would fire its own query against the database. This breaks the decorator pattern, because every piece of functionality provided by the decorated instance is simply skipped. That becomes a real problem if other decorators are involved. My understanding is that a decorator should be additive; in this case, however, the decorator replaces the behavior of the decorated instance. I can't really think of a nice solution for this - how would you solve it? Everything is allowed, even a complete re-design of ITranslationService itself.
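
    To make the problematic decorator concrete, here is a sketch of what I mean (LoadAllTranslations stands in for a hypothetical bulk database query; usings for System, System.Collections.Generic, and System.Globalization, and the remaining interface methods, are omitted):

        public class CachingTranslationService : ITranslationService
        {
            private readonly ITranslationService inner;
            private readonly Dictionary<CultureInfo, Dictionary<string, string>> cache =
                new Dictionary<CultureInfo, Dictionary<string, string>>();

            public CachingTranslationService(ITranslationService inner)
            {
                this.inner = inner;
            }

            public string GetTranslation(string key, CultureInfo targetLanguage)
            {
                Dictionary<string, string> translations;
                if (!cache.TryGetValue(targetLanguage, out translations))
                {
                    // This is the part that breaks the pattern: the decorator
                    // bypasses 'inner' and fires its own bulk query.
                    translations = LoadAllTranslations(targetLanguage);
                    cache[targetLanguage] = translations;
                }

                string result;
                return translations.TryGetValue(key, out result)
                    ? result
                    : inner.GetTranslation(key, targetLanguage); // fall back to the decorated instance
            }

            private Dictionary<string, string> LoadAllTranslations(CultureInfo language)
            {
                // hypothetical direct database access
                throw new NotImplementedException();
            }

            // ... all other ITranslationService methods delegate to 'inner'
        }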

    Read the article

  • How do I connect to Ubuntu One after changing the password?

    - by rumtscho
    I changed my password for Ubuntu One using the web interface, and added a new computer. Since then, the old computer does not synchronize with Ubuntu One. It doesn't show any error messages or anything like that, but files uploaded from the web interface or changed on the newly added computer don't appear or change on the old computer. I guess it can't connect because it is still using the old password. The problem is that I can't find an interface for changing the password the client uses to connect to the service; the "manage account" option just opens the web interface. I looked into the keyring and found the key for Ubuntu One, but there I only see an encrypted version of the password, so I can't change it there. So what is the correct way to tell my client that my account password has changed? Edit: this is what I see when I open Preferences -- Ubuntu One. Is there something wrong with it? It also stubbornly insists that it has successfully synchronized, but the files I have added from other computers are not in my Ubuntu One folder.
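
    One approach I'm considering (a sketch; u1sdtool ships with the ubuntuone-client package, and the keyring entry name may differ on other setups) is to drop the stale credentials and force the client to re-authenticate:

        u1sdtool --quit       # stop the sync daemon
        # then delete the Ubuntu One token entry in the keyring manager
        # (Passwords and Encryption Keys) and restart:
        u1sdtool --connect    # should trigger a fresh authorization prompt
        u1sdtool --status     # verify the connection state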

    Read the article

  • wireless is disabled by hardware lenovo 3000g430

    - by sudheer
    I have a problem with my wifi switch: wireless is reported as disabled by hardware. Please tell me how to solve it. The output of sudo lshw -C network is:

        [sudo] password for sudheer:
          *-network DISABLED
               description: Wireless interface
               product: BCM4312 802.11b/g LP-PHY
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:06:00.0
               logical name: eth2
               version: 01
               serial: 00:21:00:72:3a:93
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg
               resources: irq:19 memory:f4700000-f4703fff
          *-network
               description: Ethernet interface
               product: NetLink BCM5906M Fast Ethernet PCI Express
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:07:00.0
               logical name: eth0
               version: 02
               serial: 00:1e:68:ad:24:0b
               size: 100Mbit/s
               capacity: 100Mbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=sb v3.04 ip=172.16.52.79 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
               resources: irq:47 memory:f4600000-f460ffff

    The output of iwconfig is:

        lo        no wireless extensions.
        eth2      IEEE 802.11  Access Point: Not-Associated
                  Link Quality:5  Signal level:0  Noise level:0
                  Rx invalid nwid:0  invalid crypt:0  invalid misc:0
        eth0      no wireless extensions.

    And iwlist:

        sudheer@sudheer:~$ sudo iwlistscanning
        sudo: iwlistscanning: command not found
        sudheer@sudheer:~$ sudo iwlist scanning
        lo        Interface doesn't support scanning.
        eth2      Failed to read scan data : Invalid argument
        eth0      Interface doesn't support scanning.
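
    A generic first check for a "disabled by hardware" report (a sketch; the rfkill utility is packaged separately on some releases) is to see whether the radio is soft-blocked or hard-blocked:

        rfkill list               # shows "Soft blocked" / "Hard blocked" per radio
        sudo rfkill unblock all   # clears soft blocks; a hard block needs the
                                  # physical switch or the Fn wireless hotkey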

    Read the article

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open Closed Principle: each software component should be open for extension and closed for modification. E.g., once we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly not at all). That is:

      - If new requirements arrive (e.g. performance, or a totally new implementation of the interface) that would require big changes to the code, I write a new implementation B and keep using A as long as B is not mature. Once B is mature, all that is needed is to change how I is instantiated.
      - If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them.

    So, in view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction / conflict between enforcing the Open / Closed Principle and recommending complex refactorings as a best practice? Or is the idea that one may use complex refactorings during the development of a class A, but once that class has been tested successfully it should be frozen?
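
    To make the scheme concrete, here is a minimal sketch of the I/A/B arrangement described above (all names are hypothetical):

        public interface IReportRenderer                     // interface I
        {
            string Render(ReportData data);
        }

        public class BasicReportRenderer : IReportRenderer   // implementation A: stable, frozen
        {
            public string Render(ReportData data) { /* original, tested logic */ return ""; }
        }

        public class FastReportRenderer : IReportRenderer    // implementation B: new requirements
        {
            public string Render(ReportData data) { /* new, higher-performance logic */ return ""; }
        }

        public class ReportData { }

        public static class RendererFactory
        {
            // The single place that changes once B is mature enough to replace A.
            public static IReportRenderer Create() { return new BasicReportRenderer(); }
        }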

    Read the article

  • Is it a good programming practice to have a class with several .h files?

    - by Jim Thio
    I suppose the class would have several different interfaces: some it shows to one class, some it shows to other classes. Is there any good reason for that? One thing I can think of is that with one .h per class, an interface member can only be public or private. What if I want some of the interface to be available to a few friend classes and some of it to be truly public? Sample:

        @interface listNewController : BadgerStandardViewViewController <UITableViewDelegate, UITableViewDataSource, UITextFieldDelegate, NSFetchedResultsControllerDelegate, UIScrollViewDelegate, UIGestureRecognizerDelegate>
        {
        }

        @property (nonatomic) IBOutlet NSFetchedResultsController *FetchController;
        @property (nonatomic) IBOutlet UITextField *searchBar1;
        @property (nonatomic) IBOutlet UITableView *tableViewA;

        + (listNewController *)singleton; // for easier access

        - (void)collapseAll;
        - (void)TitleViewClicked:(TitleView *)theTitleView;
        - (NSUInteger)countOfEachSection:(NSInteger)section;

        @end

    Many of these public properties and methods are only ever called by one other class, and I wonder why I need to make them available to every class. It's in Objective-C, by the way.
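
    For example, the split I have in mind would look roughly like this (a sketch; the +Friend category name is hypothetical, and the category header would import the main one):

        // listNewController.h - the truly public interface
        @interface listNewController : BadgerStandardViewViewController
        + (listNewController *)singleton;
        @end

        // listNewController+Friend.h - imported only by the "friend" classes
        @interface listNewController (Friend)
        - (void)collapseAll;
        - (void)TitleViewClicked:(TitleView *)theTitleView;
        @end

        // listNewController.m - class extension for genuinely private members
        @interface listNewController ()
        @property (nonatomic) IBOutlet UITableView *tableViewA;
        @end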

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >