Search Results

Search found 2093 results on 84 pages for 'logical'.


  • Oracle Certification and virtualization Solutions.

    - by scoter
    As stated in the official MOS ( My Oracle Support ) document 249212.1, support for Oracle products on non-Oracle VM platforms follows exactly the same stance as support for VMware; so the only x86 virtualization software solution certified for any Oracle product is "Oracle VM". Based on the following facts:
    - Oracle VM is totally free ( you have the option to buy Oracle Support )
    - Certified is quite different from supported ( Oracle VM is certified, others may only be supported )
    - With Oracle VM you may not be required to reproduce your issue(s) on a physical server
    - Oracle VM is the only x86 software solution that allows hard partitioning ***
    *** see details at these Oracle public links:
    http://www.oracle.com/technetwork/server-storage/vm/ovm-hardpart-168217.pdf
    http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf
    people started asking to migrate from third-party virtualization software (e.g. RH KVM, VMware) to Oracle VM.
    Migrating an RH KVM guest to Oracle VM. Oracle VM has a built-in P2V utility ( see the official documentation ), but in some cases we can't use it, due to:
    - network inaccessibility between hypervisors ( KVM and OVM )
    - network slowness between hypervisors ( KVM and OVM )
    - size of the guest virtual disks
    Here you'll find a step-by-step guide to "manually" migrate a guest machine from KVM to OVM.
    1. Verify source guest characteristics.
    Using the KVM web console you can verify the characteristics of the guest you need to migrate, such as:
    - CPU core details
    - Defined memory ( RAM )
    - Name of your guest
    - Guest operating system
    - Disk details ( number and size )
    - Network details ( number of NICs and network configuration )
    2. Export your guest in OVF / OVA format.
    The export from Red Hat KVM ( Kernel-based Virtual Machine ) will create a structured export of your guest:
    [root@ovmserver1 mnt]# ll
    total 12
    drwxrwx--- 5 36 36 4096 Oct 19 2012 b8296fca-13c4-4841-a50f-773b5139fcee
    b8296fca-13c4-4841-a50f-773b5139fcee is the ID of the guest exported from RH KVM.
    [root@ovmserver1 mnt]# cd b8296fca-13c4-4841-a50f-773b5139fcee/
    [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# ls -ltr
    total 12
    drwxr-x--- 4 36 36 4096 Oct 19  2012 master
    drwxrwx--- 2 36 36 4096 Oct 29  2012 dom_md
    drwxrwx--- 4 36 36 4096 Oct 31  2012 images
    images contains your exported virtual disks.
    [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# cd images/
    [root@ovmserver1 images]# ls -ltra
    total 16
    drwxrwx--- 5 36 36 4096 Oct 19  2012 ..
    drwxrwx--- 2 36 36 4096 Oct 31  2012 d4ef928d-6dc6-4743-b20d-568b424728a5
    drwxrwx--- 2 36 36 4096 Oct 31  2012 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1
    drwxrwx--- 4 36 36 4096 Oct 31  2012 .
    [root@ovmserver1 images]# cd d4ef928d-6dc6-4743-b20d-568b424728a5/
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# ls -l
    total 5169092
    -rwxr----- 1 36 36 187904819200 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    -rw-rw---- 1 36 36          341 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1.meta
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# file 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    4c03b1cf-67cc-4af0-ad1e-529fd665dac1: LVM2 (Linux Logical Volume Manager) , UUID: sZL1Ttpy0vNqykaPahEo3hK3lGhwspv
    4c03b1cf-67cc-4af0-ad1e-529fd665dac1 is the first exported disk ( a physical volume ).
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# cd ../4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/
    [root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# ls -l
    total 5568076
    -rwxr----- 1 36 36 107374182400 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a
    -rw-rw---- 1 36 36          341 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a.meta
    [root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# file 9020f2e1-7b8a-4641-8f80-749768cc237a
    9020f2e1-7b8a-4641-8f80-749768cc237a: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; partition 3: ID=0x83, starthead 254, startsector 65930760, 8385930 sectors; partition 4: ID=0x5, starthead 254, startsector 74316690, 135395820 sectors, code offset 0x48
    9020f2e1-7b8a-4641-8f80-749768cc237a is the second exported disk, with partition 1 bootable.
    3. Prepare the new guest on Oracle VM.
    With OVM Manager we can prepare the guest to which we will move the exported virtual disks; under the "Servers and VMs" tab, create your guest with the parameters collected before (point 1):
    - add NICs on the different networks
    - add virtual disks; in this case we add two disks of 1.0 GB each; we will extend each virtual disk by copying the source KVM virtual disk over it ( see next steps )
    - verify the virtual disks created ( under the Repositories tab )
    4. Verify the OVM virtual-disk names.
    [root@ovmserver1 VirtualMachines]# grep -r hyptest_rdbms *
    0004fb0000060000a906b423f44da98e/vm.cfg:OVM_simple_name = 'hyptest_rdbms'
    [root@ovmserver1 VirtualMachines]# cd 0004fb0000060000a906b423f44da98e
    [root@ovmserver1 0004fb0000060000a906b423f44da98e]# more vm.cfg
    vif = ['mac=00:21:f6:0f:3f:85,bridge=0004fb001089128', 'mac=00:21:f6:0f:3f:8e,bridge=0004fb00101971d']
    OVM_simple_name = 'hyptest_rdbms'
    vnclisten = '127.0.0.1'
    disk = ['file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img,xvda,w', 'file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img,xvdb,w']
    vncunused = '1'
    uuid = '0004fb00-0006-0000-a906-b423f44da98e'
    on_reboot = 'restart'
    cpu_weight = 27500
    memory = 32768
    cpu_cap = 0
    maxvcpus = 8
    OVM_high_availability = True
    maxmem = 32768
    vnc = '1'
    OVM_description = ''
    on_poweroff = 'destroy'
    on_crash = 'restart'
    name = '0004fb0000060000a906b423f44da98e'
    guest_os_type = 'linux'
    builder = 'hvm'
    vcpus = 8
    keymap = 'en-us'
    OVM_os_type = 'Oracle Linux 5'
    OVM_cpu_compat_group = ''
    OVM_domain_type = 'xen_hvm'
    disk2 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img
    disk1 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    Summarizing:
    disk1 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a
    disk1 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    disk2 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    disk2 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img
    5. Copy the KVM exported virtual disks onto the OVM virtual disks.
    Keeping your Oracle VM guest stopped, you can copy the KVM exported virtual disks over the OVM virtual disks; what I did was simply to mount the filesystem containing the exported virtual disks locally ( via a USB device ) on my OVS. The copy automatically resizes the OVM virtual disks ( previously created with a size of 1 GB ).
    nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img &
    nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img &
    6. When the copy has completed, refresh the repository to acknowledge the new disk sizes ( a quick verification sketch follows at the end of this post ).
    7. After the repository refresh has completed, start the guest machine from Oracle VM Manager.
    After the first start of your guest:
    - verify that you can see all disks and partitions
    - verify that your guest is reachable over the network ( the MAC addresses of your NICs have changed )
    You can also consider converting your guest to PVM ( a paravirtualized virtual machine ) following the official Oracle documentation.
    Ciao, Simon COTER
    ps: next time I'd like to post an article describing how to manually migrate Virtual Iron guests to Oracle VM. Comments and corrections are welcome.
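    A small addition to steps 5 and 6: before refreshing the repository it is worth confirming that both background copies actually finished and match their sources. A minimal sketch, using the same source and destination paths as the cp commands above (the md5sum comparison is optional and very slow on disks this size):
    # run in the same shell session that launched the two nohup'd cp jobs
    SRC1=/mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a
    DST1=/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    SRC2=/mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    DST2=/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img
    wait                                     # blocks until both background cp jobs have exited
    ls -l "$SRC1" "$DST1" "$SRC2" "$DST2"    # sizes must match exactly
    md5sum "$SRC1" "$DST1"                   # optional byte-for-byte verification
    md5sum "$SRC2" "$DST2"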

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote on and approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants.
    There are three common patterns or usages of Human Workflow:
    1) Approval scenarios: manage documents and other transactional data through approval chains. For example: expense report approval, vacation approval, hiring approval, etc.
    2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people.
    3) Case management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time.
    SOA 11g Human Workflow includes the following features:
    - Assignment and routing of tasks to the correct users or groups.
    - Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task.
    - Presentation of tasks to end users through a variety of mechanisms, including a Worklist application.
    - Organization, filtering, prioritization and other features required for end users to productively perform their tasks.
    - Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks.
    Human Workflow Architecture
    The Human Workflow component is divided into three modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task: who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on.
    Stages and Participants
    When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or parallel, and you can combine them to create any flow you require.
    Assignment and Routing
    There are different ways a task can be assigned and routed:
    Single Approver: the task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed.
    Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to require a unanimous vote.
    Serial: participants must work in sequence. The most common scenario for this is management chain escalation.
    FYI (For Your Information): the task is assigned to participants who can view it and add comments and attachments, but cannot modify or complete the task.
    Task Actions
    The following is the list of actions that can be performed on a task:
    Claim: if a task is assigned to a group or multiple users, then the task must be claimed first to be able to act on it.
    Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy).
    Pushback: the task is sent back to the previous assignee.
    Reassign: if the participant is a manager, he/she can delegate a task to his/her reports.
    Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it. Any of the other assignees can then claim and complete the task.
    Request Information and Submit Information: used when the participant needs to supply more information, or to request more information from the task creator or any of the previous assignees.
    Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite; it does not expire until Resume is used to resume working on the task.
    Withdraw: if the creator of a task does not want to continue with it, for example because he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next.
    Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week.
    Notifications
    Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following:
    - Assigned/renewed/delegated/reassigned/escalated
    - Completed
    - Error
    - Expired
    - Request Info
    - Resume
    - Suspended
    - Added/updated comments and/or attachments
    - Updated outcome
    - Withdraw
    - Other actions (e.g. acquiring a task)
    Worklist Application
    The Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist Application users can:
    - Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks.
    - Filter the task view based on various criteria.
    - Work with standard work queues, such as high-priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example high-priority tasks, tasks due in 24 hours, expense approval tasks and more.
    - Define custom work queues.
    - Gain proxy access to part of another user's tasks.
    - Define custom vacation rules and delegation rules.
    - Enable group owners to define task dispatching rules for shared tasks.
    - Collect a complete workflow history and audit trail.
    - Use digital signatures for tasks.
    - Run reports like Unattended Tasks, Task Productivity, etc.
    A screenshot of the Worklist Application shows, on the right-hand side, the tasks that have been assigned to the user and the task's detail.
References
- Introduction to SOA Suite 11g Human Workflow Webcast
- Note 1452937.2: Human Workflow Information Center
- Using the Human Workflow Service Component (11.1.1.6)
- Human Workflow Samples
- Human Workflow APIs Java Docs

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vice versa for Net1 on controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down. What a waste. Those days are over.
    I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.
    Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box? So here is my current network setup. I have my 4 physical interfaces set up, each with an IP address. If I ping them, no problems. So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right? Let's change that.
    Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" for its datalink. My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2. I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore characters in v_dl2 and v_int2 do not seem to show on this screen. You can see them plainly if you go in and edit the names, but from here they look like spaces instead of underscores. Just a cosmetic bug, but something to be aware of. Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that. So let's try pinging 240 now. Of course, it works great.
    So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then set up three shares, using ports 251, 240, and 241. Remember that IPs 251 and 240 are both using the same physical port, igb2, and IP 241 is using port igb3. Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports. Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart.
VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or InfiniBand ports. You can now have multiple IP addresses and even completely different subnets sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for with no more waste. Very, very cool.
One small "bug" I found when doing this. It's really not a bug; it was designed to do this when VNICs were not around. But now that we have VNIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code release. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen. HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this. Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3. The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem. That's it for now. Have fun with VNICs.
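(Side note for command-line folks: the appliance exposes all of this through the BUI, but under the hood it is the same VNIC mechanism Solaris 11 provides. On a plain Solaris 11 box, a rough equivalent of the steps above would look like the sketch below; the link name and the .240 address mirror the example, and the 10.0.0.0/24 prefix is only a placeholder, not the lab's real subnet.)
# sketch: create a VNIC over an already-used physical link, then put a second address on it
dladm create-vnic -l igb2 vnic1
ipadm create-ip vnic1
ipadm create-addr -T static -a 10.0.0.240/24 vnic1/v4   # placeholder subnet
# verify the new datalink and address
dladm show-vnic
ipadm show-addr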

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium
    Computer Memory Systems
    The key characteristics of memory are…
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative
    From a user's perspective the two most important characteristics of memory are…
    - Capacity
    - Performance – access time, memory cycle time, transfer rate
    The trade-off for memory happens along three axes…
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time
    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs…
    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor
    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.
    A portion of main memory can be used as a buffer to temporarily hold data that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways…
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.
    Cache Memory Principles
    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if the word is in cache memory. If it is, the data is supplied; if it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.
    Elements of Cache Design
    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures…
    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches
    Cache Addresses
    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.
Cache Size
We would like the size of the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.
Mapping Function
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used…
- Direct – the simplest technique; maps each block of main memory into only one possible cache line
- Associative – permits each main memory block to be loaded into any line of the cache
- Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
For detailed explanations of each approach, read the textbook (pages 148 – 154).
Replacement Algorithm
For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…
- LRU (Least recently used)
- FIFO (First in first out)
- LFU (Least frequently used)
- Random selection
Write Policy
When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache, discard it. If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to main memory. There are several approaches to achieve this, including…
- Write Through – all writes to the cache are done to main memory as well at the point of the change
- Write Back – updates are made only in the cache; when a block is replaced, it is written back to main memory if it is dirty
The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.
Line Size
When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…
- Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
- As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
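To make the direct-mapping function described above concrete, here is a small worked sketch. The cache geometry (64-byte lines, 1024 lines) is illustrative only and not taken from the textbook: the byte offset is the address modulo the line size, the cache line index is the block number modulo the number of lines, and the remaining high-order bits form the tag.
# illustrative direct-mapped cache lookup arithmetic (assumed geometry: 64-byte lines, 1024 lines)
addr=$((0x0040B2C8))            # example 32-bit byte address
line_size=64
num_lines=1024
offset=$(( addr % line_size ))  # byte within the cache line
block=$(( addr / line_size ))   # main-memory block number
index=$(( block % num_lines ))  # the one cache line this block may occupy
tag=$(( block / num_lines ))    # stored tag, compared on each access to detect a hit
echo "offset=$offset index=$index tag=$tag"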
Pentium 4 and ARM cache organizations
The Pentium 4 processor core consists of four major components:
- Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
- Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
- Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
- Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources

    Read the article

  • Problem with RAID5 (mdadm) - disk detached

    - by poscaman
    Having these lines in /var/log/syslog:
    Apr 18 16:53:05 Server kernel: [4487878.816036] ata4: EH in SWNCQ mode,QC:qc_active 0x1 sactive 0x1
    Apr 18 16:53:05 Server kernel: [4487878.816058] ata4: SWNCQ:qc_active 0x1 defer_bits 0x0 last_issue_tag 0x0
    Apr 18 16:53:05 Server kernel: [4487878.816059] dhfis 0x1 dmafis 0x1 sdbfis 0x0
    Apr 18 16:53:05 Server kernel: [4487878.816093] ata4: ATA_REG 0x40 ERR_REG 0x0
    Apr 18 16:53:05 Server kernel: [4487878.816108] ata4: tag : dhfis dmafis sdbfis sacitve
    Apr 18 16:53:05 Server kernel: [4487878.816125] ata4: tag 0x0: 1 1 0 1
    Apr 18 16:53:05 Server kernel: [4487878.816150] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6 frozen
    Apr 18 16:53:05 Server kernel: [4487878.816178] ata4.00: failed command: WRITE FPDMA QUEUED
    Apr 18 16:53:05 Server kernel: [4487878.816199] ata4.00: cmd 61/08:00:00:88:e0/00:00:e8:00:00/40 tag 0 ncq 4096 out
    Apr 18 16:53:05 Server kernel: [4487878.816200] res 40/00:00:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
    Apr 18 16:53:05 Server kernel: [4487878.816253] ata4.00: status: { DRDY }
    Apr 18 16:53:05 Server kernel: [4487878.816272] ata4: hard resetting link
    Apr 18 16:53:05 Server kernel: [4487878.816274] ata4: nv: skipping hardreset on occupied port
    Apr 18 16:53:06 Server kernel: [4487879.676029] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 18 16:53:07 Server kernel: [4487880.416749] ata4.00: n_sectors mismatch 3907029168 != 268435455
    Apr 18 16:53:07 Server kernel: [4487880.416752] ata4.00: revalidation failed (errno=-19)
    Apr 18 16:53:07 Server kernel: [4487880.416773] ata4.00: limiting speed to UDMA/133:PIO2
    Apr 18 16:53:11 Server kernel: [4487884.676024] ata4: hard resetting link
    Apr 18 16:53:11 Server kernel: [4487884.676027] ata4: nv: skipping hardreset on occupied port
    Apr 18 16:53:12 Server kernel: [4487885.144032] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 18 16:53:12 Server kernel: [4487885.240185] ata4.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    Apr 18 16:53:12 Server kernel: [4487885.240190] ata4.00: revalidation failed (errno=-5)
    Apr 18 16:53:12 Server kernel: [4487885.240210] ata4.00: disabled
    Apr 18 16:53:17 Server kernel: [4487890.144023] ata4: hard resetting link
    Apr 18 16:53:17 Server kernel: [4487891.024033] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    Apr 18 16:53:17 Server kernel: [4487891.033357] ata4.00: ATA-8: WDC WD20EARS-00S8B1, 80.00A80, max UDMA/133
    Apr 18 16:53:17 Server kernel: [4487891.033360] ata4.00: 3907029168 sectors, multi 1: LBA48 NCQ (depth 31/32)
    Apr 18 16:53:17 Server kernel: [4487891.048347] ata4.00: configured for UDMA/133
    Apr 18 16:53:17 Server kernel: [4487891.048361] sd 3:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    Apr 18 16:53:17 Server kernel: [4487891.048365] sd 3:0:0:0: [sdc] Sense Key : Aborted Command [current] [descriptor]
    Apr 18 16:53:17 Server kernel: [4487891.048369] Descriptor sense data with sense descriptors (in hex):
    Apr 18 16:53:17 Server kernel: [4487891.048371] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
    Apr 18 16:53:17 Server kernel: [4487891.048378] 00 00 00 00
    Apr 18 16:53:17 Server kernel: [4487891.048382] sd 3:0:0:0: [sdc] Add. Sense: No additional sense information
    Apr 18 16:53:17 Server kernel: [4487891.048385] sd 3:0:0:0: [sdc] CDB: Write(10): 2a 00 e8 e0 88 00 00 00 08 00
    Apr 18 16:53:17 Server kernel: [4487891.048393] end_request: I/O error, dev sdc, sector 3907028992
    Apr 18 16:53:17 Server kernel: [4487891.048420] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048440] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048458] end_request: I/O error, dev sdc, sector 3907028992
    Apr 18 16:53:17 Server kernel: [4487891.048477] md: super_written gets error=-5, uptodate=0
    Apr 18 16:53:17 Server kernel: [4487891.048482] raid5: Disk failure on sdc, disabling device.
    Apr 18 16:53:17 Server kernel: [4487891.048483] raid5: Operation continuing on 3 devices.
    Apr 18 16:53:17 Server kernel: [4487891.048525] ata4: EH complete
    Apr 18 16:53:17 Server kernel: [4487891.048554] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048576] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048596] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048615] sd 3:0:0:0: [sdc] READ CAPACITY(16) failed
    Apr 18 16:53:17 Server kernel: [4487891.048617] sd 3:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    Apr 18 16:53:17 Server kernel: [4487891.048620] sd 3:0:0:0: [sdc] Sense not available.
    Apr 18 16:53:17 Server kernel: [4487891.048624] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048643] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048663] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048681] sd 3:0:0:0: [sdc] READ CAPACITY failed
    Apr 18 16:53:17 Server kernel: [4487891.048683] sd 3:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    Apr 18 16:53:17 Server kernel: [4487891.048685] sd 3:0:0:0: [sdc] Sense not available.
    Apr 18 16:53:17 Server kernel: [4487891.048689] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048709] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048800] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.048860] sd 3:0:0:0: rejecting I/O to offline device
    Apr 18 16:53:17 Server kernel: [4487891.049028] sd 3:0:0:0: [sdc] Asking for cache data failed
    Apr 18 16:53:17 Server kernel: [4487891.049048] sd 3:0:0:0: [sdc] Assuming drive cache: write through
    Apr 18 16:53:17 Server kernel: [4487891.049071] sdc: detected capacity change from 2000398934016 to 0
    Apr 18 16:53:17 Server kernel: [4487891.049080] ata4.00: detaching (SCSI 3:0:0:0)
    Apr 18 16:53:18 Server kernel: [4487891.061149] sd 3:0:0:0: [sdc] Stopping disk
    Apr 18 16:53:18 Server kernel: [4487891.485492] RAID5 conf printout:
    Apr 18 16:53:18 Server kernel: [4487891.485496] --- rd:4 wd:3
    Apr 18 16:53:18 Server kernel: [4487891.485500] disk 0, o:1, dev:sdb
    Apr 18 16:53:18 Server kernel: [4487891.485502] disk 1, o:0, dev:sdc
    Apr 18 16:53:18 Server kernel: [4487891.485504] disk 2, o:1, dev:sdd
    Apr 18 16:53:18 Server kernel: [4487891.485506] disk 3, o:1, dev:sde
    Apr 18 16:53:18 Server kernel: [4487891.497014] RAID5 conf printout:
    Apr 18 16:53:18 Server kernel: [4487891.497016] --- rd:4 wd:3
    Apr 18 16:53:18 Server kernel: [4487891.497018] disk 0, o:1, dev:sdb
    Apr 18 16:53:18 Server kernel: [4487891.497019] disk 2, o:1, dev:sdd
    Apr 18 16:53:18 Server kernel: [4487891.497021] disk 3, o:1, dev:sde
    Apr 18 16:53:18 Server kernel: [4487891.838719] scsi 3:0:0:0: Direct-Access ATA WDC WD20EARS-00S 80.0 PQ: 0 ANSI: 5
    Apr 18 16:53:18 Server kernel: [4487891.838886] sd 3:0:0:0: Attached scsi generic sg3 type 0
    Apr 18 16:53:18 Server kernel: [4487891.838911] sd 3:0:0:0: [sdf] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    Apr 18 16:53:18 Server kernel: [4487891.838964] sd 3:0:0:0: [sdf] Write Protect is off
    Apr 18 16:53:18 Server kernel: [4487891.838967] sd 3:0:0:0: [sdf] Mode Sense: 00 3a 00 00
    Apr 18 16:53:18 Server kernel: [4487891.838988] sd 3:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Apr 18 16:53:20 Server kernel: [4487891.839147] sdf: unknown partition table
    Apr 18 16:53:20 Server kernel: [4487893.130026] sd 3:0:0:0: [sdf] Attached SCSI disk

    Right now, I'm unable to do anything on /dev/sdc. Is there any way to try to re-attach it? I don't want to power down the server unless absolutely necessary.
    System: Debian Stable, kernel 2.6.32-5-amd64, mdadm version 3.1.4-1+8efb9d1

    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb[0] sdc[4](F) sde[3] sdd[2]
          5860543488 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
    unused devices: <none>

    mdadm --examine --scan
    ARRAY /dev/md0 UUID=1a7744b5:912ec7af:f82a9565:e3b453b4
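    (For readers hitting the same symptoms: since the kernel re-enumerated the returned drive as /dev/sdf, the usual first moves are non-destructive inspection before any re-add. The sketch below is only an outline under the assumption that the disk itself is healthy; it is not a tested recovery procedure for this specific failure.)
    # read-only inspection of the array and the re-enumerated disk
    mdadm --detail /dev/md0
    mdadm --examine /dev/sdf
    smartctl -a /dev/sdf                  # assumes smartmontools is installed
    # only if the disk and its superblock look sane:
    mdadm /dev/md0 --remove detached      # clear the failed slot whose device node is gone
    mdadm /dev/md0 --add /dev/sdf         # expect a full resync on a 2 TB member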

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with File Server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution.
    My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these are each the secondary nodes of a cluster; the main nodes will be on the most powerful first machine.
    My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, Sharepoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well too bad. What exists on the main machine is what I have to deal with.
    I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 will sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will fully duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive.
    My problem occurs when I try to apply the file server role within Failover Cluster Manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • If Nvidia Shield can stream a game via WiFi (~150-300Mbps), where is the 1-10Gbps wired streaming?

    - by Enigma
    Facts: It is surprising and uncharacteristic that a wireless game streaming solution is the *first to hit the market when a 1000 Mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. 150-300 Mbps WiFi is in no way superior to a 1000 Mbps+ LAN connection aside from, well, wireless mobility. Throughout time (since the internet was created), wired services have **always come first, yet in this particular case the opposite seems to be true. We had wired internet first, wired audio streaming, and wired video streaming, all before their wireless counterparts. Why? Largely because the wireless bandwidth was and is inferior. Even today, despite being significantly better and capable of a lot more, it is still inferior to a wired connection.
    Situation: "Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality." - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode
    ^ This is not acceptable to me for a number of reasons, not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong a solution it is. They need a second product for this solution, without the screen or controller, for it to make sense... at which point you're just buying a little computer that does what most other larger computers do better. While this secondary product will provide a wired connection, it still shouldn't be necessary to purchase a Shield to have this benefit. Not only this, but Intel's WiDi claims game streaming support as well - wirelessly.
    Where is the wired streaming? All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback, so by any logical comparison one (Nvidia especially) should have no difficulty in creating an application for PCs (win32/64 environment) that does the exact same thing their Android app does. I have 2 video cards capable of streaming (encoding) H.264, so by rights they must be capable of decoding it, I would think. I should be able to stream to my second desktop or my laptop, both of which by hardware comparison are superior to the Shield. I haven't found anything stating plans to allow non-Shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome?
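    (To illustrate how little is technically missing for a wired, third-party route: commodity tools can already push an H.264 desktop capture over a LAN today. The sketch below uses ffmpeg/ffplay with illustrative flags and a placeholder address; it is not NVIDIA's GameStream protocol, nor a tuned low-latency, game-ready pipeline.)
    # sender (Windows PC with ffmpeg): capture the desktop, encode H.264, send as MPEG-TS over UDP
    ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.1.50:1234
    # receiver (any machine on the wired LAN): decode and display
    ffplay -fflags nobuffer udp://@:1234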
Reiteration of questions:
- Is there a technical (non-marketing) reason why Nvidia opted to bottleneck the streaming service with a wireless connection, limiting the resolution to 720p and introducing intermittent video choppiness, when on a wired connection one could achieve, presumably, 1080p with significantly less or zero choppiness?
- Is there anything limiting developers from creating a PC/desktop application emulating the same H.264 decoding functionality that circumvents the need to get an Nvidia Shield altogether? (It is not a matter of being too cheap to support Nvidia - I have many Nvidia cards that aren't being used. One should not have to purchase specialty hardware when equivalent hardware already exists.)
The same questions go for Intel WiDi as well. I am just utterly perplexed that there are wireless live-streaming solutions and yet no wired ones. How on earth can wireless be the go-to transmission medium? Is there another solution that takes advantage of H.264 video compression allowing live streaming over a wired connection?
(*) - Perhaps this isn't the first, but afaik it is the first complete package.
(**) - I can't back that up with hard evidence/links, but someone probably could.
Edit: Maybe this will be the solution I am looking for, but I still find it hard to believe that it would be the first, arriving only after wireless solutions already exist.
In-home Streaming: "You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV!" - http://store.steampowered.com/livingroom/SteamOS/

    Read the article

  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See note, with updated question, at end. I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. The usability of the instance as a development machine often works quite well. The RDP lag is adequately small. I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI. After having installed all of my desired/required applications for development over this past week, again the machine seems to often work well and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See update at end.) The issues are as follows. (These are some primary examples) Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds. Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself hangs completely. It hangs at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither causes it to come to the top or go behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete windows widgets that are unreadable, and that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to exit (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute). Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is Filemaker - its menus open instantly, and switching views in this application occurs promptly. Other applications also work without issue. Also, switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above. 
At first I thought that EBS IOPS might be a problem. Therefore, I ran Performance Monitor, and watched the "Disk Transfers/sec" monitor in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. The RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was fully thrashing at 100% (i.e., ~3.1%) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods. Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate the source of the problem, would be appreciated. Thanks. Note: After extensive testing & investigation, I have noticed that when I quit Process Explorer, the problem vanishes and the system performance returns to normal, and then reappears quickly when I run Process Explorer again (note: again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?

    Read the article

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc, but our network admins have some fairly strict settings in place for security that squashed this idea:
    - They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and you have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do).
    - With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X be assigned two IP addresses, so this didn't work.
    There may have been a way around these issues in the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running, and both pulled IP addresses from DHCP.
    But then the problem came in: the network admins could see (on the switch) the ARP entries for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that seems like it should be simple. Any alternate ideas out there?
    Step 1: Enable ARP filtering on all interfaces:
    # sysctl -w net.ipv4.conf.all.arp_filter=1
    # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf
    From the file networking/ip-sysctl.txt in the Linux kernel docs:
    arp_filter - BOOLEAN
    1 - Allows you to have multiple network interfaces on the same subnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of which cards (usually 1) will respond to an arp request.
    0 - (default) The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load-balancing does this behaviour cause problems.
    arp_filter for the interface will be enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE; it will be disabled otherwise.
    Step 2: Implement source-based routing
    I basically just followed directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101.
    Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables:
    ... top of file omitted ...
    1 eth0
    2 eth1
    Define the routes for these two tables:
    # ip route add default via 10.0.0.1 table eth0
    # ip route add default via 10.0.0.1 table eth1
    # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0
    # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1
    Define the rules for when to use the new routing tables:
    # ip rule add from 10.0.0.100 table eth0
    # ip rule add from 10.0.0.101 table eth1
    The main routing table was already taken care of by DHCP (and it's not even clear that it's strictly necessary in this case), but it basically equates to this:
    # ip route add default via 10.0.0.1 dev eth0
    # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100
    # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101
    And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected.
    So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.
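    For anyone reproducing this, a quick sanity check that the policy routing actually took effect looks roughly like the following (same example addresses as above):
    # on the host itself: confirm the rules and per-table routes are in place
    ip rule show
    ip route show table eth0
    ip route show table eth1
    # from another machine on the subnet: both addresses should now answer
    ping -c 3 10.0.0.100
    ping -c 3 10.0.0.101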

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with a DRBD storage for easy failover. Most of the time, immediatly after booting the DomU, I get an IO error: [ 3.153370] EXT3-fs (xvda2): using internal journal [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max) [ 3.515604] init: failsafe main process (397) killed by TERM signal [ 3.801589] blkfront: barrier: write xvda2 op failed [ 3.801597] blkfront: xvda2: barrier or flush: disabled [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396 [ 3.801652] lost page write due to I/O error on xvda2 [ 3.801755] Aborting journal on device xvda2. [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only [ 3.814754] journal commit I/O error [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1 [ 6.992267] init: plymouth-splash main process (546) terminated with status 1 The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd (by "wo:f, meaning flush, the next method drbd chooses after barrier): 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 And on the other host: 3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 I also enabled the option disable_sendpage, as per the drbd docs: cat /sys/module/drbd/parameters/disable_sendpage Y I also tried adding barriers=0 to fstab as mount option. Still it sometimes says: [ 58.603896] blkfront: barrier: write xvda2 op failed [ 58.603903] blkfront: xvda2: barrier or flush: disabled I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still compain about barriers when I disabled that? Both host are: Debian: 6.0.4 uname -a: Linux 2.6.32-5-xen-amd64 drbd: 8.3.7 Xen: 4.0.1 Guest: Ubuntu 12.04 LTS uname -a: Linux 3.2.0-24-generic pvops drbd resource: resource drbdvm { meta-disk internal; device /dev/drbd3; startup { # The timeout value when the last known state of the other side was available. 0 means infinite. wfc-timeout 0; # Timeout value when the last known state was disconnected. 0 means infinite. degr-wfc-timeout 180; } syncer { # This is recommended only for low-bandwidth lines, to only send those # blocks which really have changed. #csums-alg md5; # Set to about half your net speed rate 60M; # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently) verify-alg md5; } net { # The manpage says this is recommended only in pre-production (because of its performance), to determine # if your LAN card has a TCP checksum offloading bug. #data-integrity-alg md5; } disk { # Detach causes the device to work over-the-network-only after the # underlying disk fails. Detach is not default for historical reasons, but is # recommended by the docs. # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event... 
on-io-error detach; # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd. If you don't disable it, you get IO errors. no-disk-barrier; } on host1 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.1:7792; } on host2 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.2:7792; } } DomU cfg: bootloader = '/usr/lib/xen-default/bin/pygrub' vcpus = '2' memory = '512' # # Disk device(s). # root = '/dev/xvda2 ro' disk = [ 'phy:/dev/drbd3,xvda2,w', 'phy:/dev/universe/drbdvm-swap,xvda1,w', ] # # Hostname # name = 'drbdvm' # # Networking # # fake IP for posting vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ] # # Behaviour # on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' In my test setup: the primary host's storage is 9650SE SATA-II RAID PCIe with battery. The secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.

    Read the article

  • My linux server takes more than an hour to boot. Suggestions?

    - by jamieb
    I am building a CentOS 5.4 system that boots off a compact flash card using a card reader that emulates an IDE drive. It literally takes about an hour to boot. The ultra-slow part occurs when Grub is loading the kernel. Once that's done, the rest of the boot process only takes about a minute to get to a login prompt. Does anyone have any suggestions? I suspect that it may have to do with UDMA. Everything IDE-related in my BIOS seems to checkout. The read performance hdparm is telling me 1.77 MB/s. Ouch! (But even at that rate, it still shouldn't take an hour to decompress and load the kernel) [root@server ~]# hdparm -tT /dev/hdc /dev/hdc: Timing cached reads: 2444 MB in 2.00 seconds = 1222.04 MB/sec Timing buffered disk reads: 6 MB in 3.39 seconds = 1.77 MB/sec Trying to enable DMA is a no-go though: [root@server ~]# hdparm -d1 /dev/hdc /dev/hdc: setting using_dma to 1 (on) HDIO_SET_DMA failed: Operation not permitted using_dma = 0 (off) Here's some command outputs that might help: System [root@server ~]# uname -a Linux server.localdomain 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:47:32 EDT 2009 i686 i686 i386 GNU/Linux PCI info: [root@server ~]# lspci -v 00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02) Subsystem: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub Flags: bus master, fast devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller]) Subsystem: Intel Corporation 82945G/GZ Integrated Graphics Controller Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at fdf00000 (32-bit, non-prefetchable) [size=512K] I/O ports at ff00 [size=8] Memory at d0000000 (32-bit, prefetchable) [size=256M] Memory at fdf80000 (32-bit, non-prefetchable) [size=256K] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 2 00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 Flags: bus master, medium devsel, latency 0, IRQ 16 I/O ports at fe00 [size=32] 00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 Flags: bus master, medium devsel, latency 0, IRQ 17 I/O ports at fd00 [size=32] 00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 Flags: bus master, medium devsel, latency 0, IRQ 18 I/O ports at fc00 [size=32] 00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 Flags: bus master, medium devsel, latency 0, IRQ 19 I/O ports at fb00 [size=32] 00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at fdfff000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode]) Flags: bus master, fast devsel, 
latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fdd00000 Capabilities: [50] #0d [0000] 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01) (prog-if 80 [Master]) Subsystem: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 17 I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at f800 [size=16] Capabilities: [70] Power Management version 2 00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01) Subsystem: Intel Corporation 82801G (ICH7 Family) SMBus Controller Flags: medium devsel, IRQ 17 I/O ports at 0500 [size=32] 01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 18 I/O ports at de00 [size=256] Memory at fdeff000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 17 I/O ports at dc00 [size=256] Memory at fdefe000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 19 I/O ports at da00 [size=256] Memory at fdefd000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 hdparm ouput: [root@server ~]# hdparm /dev/hdc /dev/hdc: multcount = 0 (off) IO_support = 0 (default 16-bit) unmaskirq = 0 (off) using_dma = 0 (off) keepsettings = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 8146/16/63, sectors = 8211168, start = 0 [root@server ~]# hdparm -I /dev/hdc /dev/hdc: ATA device, with non-removable media Model Number: InnoDisk Corp. - iCF4000 4GB Serial Number: 20091023AACA70000753 Firmware Revision: 081107 Standards: Supported: 5 Likely used: 6 Configuration: Logical max current cylinders 8146 8146 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 8211168 LBA user addressable sectors: 8211168 device size with M = 1024*1024: 4009 MBytes device size with M = 1000*1000: 4204 MBytes (4 GB) Capabilities: LBA, IORDY(can be disabled) Standby timer values: spec'd by Vendor R/W multiple sector transfer: Max = 2 Current = 2 DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * Power Management feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * CFA feature set * Mandatory FLUSH_CACHE HW reset results: CBLID- above Vih Device num = 0 CFA power mode 1: enabled and required by some commands Maximum current = 100ma Checksum: correct

    Read the article

  • Unable to access intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone, 1st I am relatively new to linux (but not to *nix). I have 4 disks assembled in the following intel ahci bios fake raid arrays: 2x320GB RAID1 - used for operating systems md126 2x1TB RAID1 - used for data md125 I have used the raid of size 320GB to install my operating system and the second raid I didn't even select during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available, it was possible to make it visible in linux with mdadm --assembe --scan , after that I created one maximum size partition and 1 maximum size ext4 filesystem in it. Mounted, and used it. After restart - a few I/O errors during boot regarding md125 + inability to mount the filesystem on it and dropped into repair shell. I commented the filesystem in fstab and it booted. To my surprise, the array was marked as "auto read only": [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md125 : active (auto-read-only) raid1 sdc[1] sdd[0] 976759808 blocks super external:/md127/0 [2/2] [UU] md127 : inactive sdc[1](S) sdd[0](S) 4514 blocks super external:imsm md126 : active raid1 sda[1] sdb[0] 312566784 blocks super external:/md1/0 [2/2] [UU] md1 : inactive sdb[1](S) sda[0](S) 4514 blocks super external:imsm unused devices: <none> [root@localhost ~]# and the partition in it was not available as device special file in /dev: [root@localhost ~]# ls -l /dev/md125* brw-rw---- 1 root disk 9, 125 Jan 6 15:50 /dev/md125 [root@localhost ~]# But the partition is there according to fdisk: [root@localhost ~]# fdisk -l /dev/md125 Disk /dev/md125: 1000.2 GB, 1000202043392 bytes 19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1b238ea9 Device Boot Start End Blocks Id System /dev/md125p1 2048 1953519615 976758784 83 Linux [root@localhost ~]# I tried to "activate" the array in different ways (I'm not experienced with mdadm and the man page is gigantic so I was only browsing it looking for my answer) but it was impossible - the array would still stay in "auto read only" and the device special file for the partition it will not be in /dev. It was only after I recreated the partition via fdisk that it reappeared in /dev... until next reboot. So, my question is - How do I make the array automatically available after reboot? 
Here is some additional information: 1st I am able to see the UUID of the array in blkid: [root@localhost ~]# blkid /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs" /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4" /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member" /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4" /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap" /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4" /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4" /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4" /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3" /dev/sda: TYPE="isw_raid_member" /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4" [root@localhost ~]# Here is how /etc/mdadm.conf looks like: [root@localhost ~]# cat /etc/mdadm.conf # mdadm.conf written out by anaconda MAILADDR root AUTO +imsm +1.x -all ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9 ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a [root@localhost ~]# here is how /proc/mdstat looks like after I recreate the partition in the array so that it becomes available: [root@localhost ~]# cat /proc/mdstat Personalities : [raid1] md125 : active raid1 sdc[1] sdd[0] 976759808 blocks super external:/md127/0 [2/2] [UU] md127 : inactive sdc[1](S) sdd[0](S) 4514 blocks super external:imsm md126 : active raid1 sda[1] sdb[0] 312566784 blocks super external:/md1/0 [2/2] [UU] md1 : inactive sdb[1](S) sda[0](S) 4514 blocks super external:imsm unused devices: <none> [root@localhost ~]# Detailed output regarding the array in subject: [root@localhost ~]# mdadm --detail /dev/md125 /dev/md125: Container : /dev/md127, member 0 Raid Level : raid1 Array Size : 976759808 (931.51 GiB 1000.20 GB) Used Dev Size : 976759940 (931.51 GiB 1000.20 GB) Raid Devices : 2 Total Devices : 2 Update Time : Fri Jan 7 00:38:00 2011 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782 Number Major Minor RaidDevice State 1 8 32 0 active sync /dev/sdc 0 8 48 1 active sync /dev/sdd [root@localhost ~]# and /etc/fstab, with /data commented (the filesystem that is on this array): # # /etc/fstab # Created by anaconda on Thu Jan 6 03:32:40 2011 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/vg00-rootlv / ext4 defaults 1 1 UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d /boot ext4 defaults 1 2 #UUID=420adfdd-6c4e-4552-93f0-2608938a4059 /data ext4 defaults 0 1 /dev/mapper/vg00-homelv /home ext4 defaults 1 2 /dev/mapper/vg00-tmplv /tmp ext4 defaults 1 2 /dev/mapper/vg00-varlv /var ext4 defaults 1 2 /dev/mapper/vg00-swaplv swap swap defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 [root@localhost ~]# Thanks in advance to everyone that even read this whole issue :-)

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows: Boot the workstation from the NIC. Workstation sends a DHCP request. DHCP server responds with an IP address and the location of the PXE server. Workstation downloads WinPE image file from PXE server via TFTP Workstation stores WinPE image file in memory and executes it. Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages. This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6. They’re running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us: Setup RAID with HP Array Configuration Utility (ACU). Installs and configures SNMP Installs various HP Tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc) Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following: As I wont have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 (during the boot process) to access Option ROM Configuration for Arrays (ORCA). Installation of SNMP and the HP Tools will have to be installed once the Windows installation is complete using the Proliant Support Pack. Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks. UPDATE 05/01/12 I’ve been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command line tools which work within WinPE and can such things as configure BIOS settings, configure an array and setup ILO. I’m personally not too bothered about configuring BIOS settings as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I’m not too fussed about being able to configure the array from within WinPE, as I’m happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it’s easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. EG. 
Configure one server to your requirements, capture its configuration to a file (using the appropriate tool), and you can then use the tool on other servers, passing in the input file with the captured configuration. Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. 2x physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of HP tools for Windows for you. I've had a look today, and if you run the SmartStart CD from within Windows all the tools can be installed. Therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers. I believe that you have to provide the driver during the very early stages of the Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...

    Read the article

  • Motherboard/PSU crippling USB and Sata

    - by celebdor
    I very recently bought a new desktop computer. The motherboard is: Z77MX-D3H and the power supply is ocz zs series 550w. The issue I have is that once I boot to the operating system (I have tried with fedora and Ubuntu with kernels 2.6.38 - 3.4.0), my hard drive (2.5" Magnetic) occasionally makes a power switch noise and it resets. Needless to say, when this drive is the OS drive, the OS crashes. I also have a SSD that works fine with the same OS configurations, but if I have the magnetic hard drive attached as second drive, it works erratically and the reconnects result in corrupted data. I also noticed that whenever I plug an external hard drive USB2.0 or USB3.0 to the computer the issue with the reconnects is even worse: [ 52.198441] sd 7:0:0:0: [sdc] Spinning up disk... [ 57.955811] usb 4-3: USB disconnect, device number 3 [ 58.023687] .ready [ 58.023914] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed [ 58.023919] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.023932] sd 7:0:0:0: [sdc] Sense not available. [ 58.024061] sd 7:0:0:0: [sdc] READ CAPACITY failed [ 58.024063] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.024064] sd 7:0:0:0: [sdc] Sense not available. [ 58.024099] sd 7:0:0:0: [sdc] Write Protect is off [ 58.024101] sd 7:0:0:0: [sdc] Mode Sense: 00 00 00 00 [ 58.024135] sd 7:0:0:0: [sdc] Asking for cache data failed [ 58.024137] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 58.024400] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed [ 58.024402] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.024405] sd 7:0:0:0: [sdc] Sense not available. [ 58.024448] sd 7:0:0:0: [sdc] READ CAPACITY failed [ 58.024450] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.024451] sd 7:0:0:0: [sdc] Sense not available. 
[ 58.024469] sd 7:0:0:0: [sdc] Asking for cache data failed [ 58.024471] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 58.024472] sd 7:0:0:0: [sdc] Attached SCSI disk [ 58.407725] usb 4-3: new SuperSpeed USB device number 4 using xhci_hcd [ 58.424921] scsi8 : usb-storage 4-3:1.0 [ 59.424185] scsi 8:0:0:0: Direct-Access WD My Passport 0740 1003 PQ: 0 ANSI: 6 [ 59.424406] scsi 8:0:0:1: Enclosure WD SES Device 1003 PQ: 0 ANSI: 6 [ 59.425098] sd 8:0:0:0: Attached scsi generic sg2 type 0 [ 59.425176] ses 8:0:0:1: Attached Enclosure device [ 59.425248] ses 8:0:0:1: Attached scsi generic sg3 type 13 [ 61.845836] sd 8:0:0:0: [sdc] 976707584 512-byte logical blocks: (500 GB/465 GiB) [ 61.845838] sd 8:0:0:0: [sdc] 4096-byte physical blocks [ 61.846336] sd 8:0:0:0: [sdc] Write Protect is off [ 61.846338] sd 8:0:0:0: [sdc] Mode Sense: 47 00 10 08 [ 61.846718] sd 8:0:0:0: [sdc] No Caching mode page present [ 61.846720] sd 8:0:0:0: [sdc] Assuming drive cache: write through [ 61.848105] sd 8:0:0:0: [sdc] No Caching mode page present [ 61.848106] sd 8:0:0:0: [sdc] Assuming drive cache: write through [ 61.857147] sdc: sdc1 [ 61.858915] sd 8:0:0:0: [sdc] No Caching mode page present [ 61.858916] sd 8:0:0:0: [sdc] Assuming drive cache: write through [ 61.858918] sd 8:0:0:0: [sdc] Attached SCSI disk [ 69.875809] usb 4-3: USB disconnect, device number 4 [ 70.275816] usb 4-3: new SuperSpeed USB device number 5 using xhci_hcd [ 70.293063] scsi9 : usb-storage 4-3:1.0 [ 71.292257] scsi 9:0:0:0: Direct-Access WD My Passport 0740 1003 PQ: 0 ANSI: 6 [ 71.292505] scsi 9:0:0:1: Enclosure WD SES Device 1003 PQ: 0 ANSI: 6 [ 71.293527] sd 9:0:0:0: Attached scsi generic sg2 type 0 [ 71.293668] ses 9:0:0:1: Attached Enclosure device [ 71.293758] ses 9:0:0:1: Attached scsi generic sg3 type 13 [ 73.323804] usb 4-3: USB disconnect, device number 5 [ 101.868078] ses 9:0:0:1: Device offlined - not ready after error recovery [ 101.868124] ses 9:0:0:1: Failed to get diagnostic page 0x50000 [ 101.868131] ses 9:0:0:1: Failed to bind enclosure -19 [ 101.868288] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed [ 101.868292] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868296] sd 9:0:0:0: [sdc] Sense not available. [ 101.868428] sd 9:0:0:0: [sdc] READ CAPACITY failed [ 101.868434] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868439] sd 9:0:0:0: [sdc] Sense not available. [ 101.868468] sd 9:0:0:0: [sdc] Write Protect is off [ 101.868473] sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00 [ 101.868580] sd 9:0:0:0: [sdc] Asking for cache data failed [ 101.868584] sd 9:0:0:0: [sdc] Assuming drive cache: write through [ 101.868845] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed [ 101.868849] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868854] sd 9:0:0:0: [sdc] Sense not available. [ 101.868894] sd 9:0:0:0: [sdc] READ CAPACITY failed [ 101.868898] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868903] sd 9:0:0:0: [sdc] Sense not available. [ 101.868961] sd 9:0:0:0: [sdc] Asking for cache data failed [ 101.868966] sd 9:0:0:0: [sdc] Assuming drive cache: write through [ 101.868969] sd 9:0:0:0: [sdc] Attached SCSI disk Now, if I plug the same drive to the powered usb 2.0 hub of my monitor, the issue is not reproduced (at least on a 20h long operation). Also the issue of the usb reconnects is less frequent if the hard drive is plugged before I switch on the computer. Does anybody have some advice as to what I could do? 
Which is the faulty part (or parts) that I should replace? As for me, I really don't know whether to point my finger at the PSU or the motherboard (I have updated to the latest firmware and checked the BIOS settings several times). EDIT: The reconnects are happening both on the SATA-connected drives and on the USB-connected drives.

    Read the article

  • Own DataFormWebPart: Unable to display this Web Part.

    - by user307852
    Hi, What I want is to display a paginable list from my custom data source. All came from the fact that I needed some logical relationships between seach scopes and some dates. I considered ContentQueryWebpart, but CAML is too complicated as I have plenty of different conditions, CoreResults webpart is too limited as it uses KeyWordQuery so I inherited from DataFormWebPart to display my own custom search results from my own custom xml source. Thats the best way i am aware of... I knew I needed some configuration to write in my pageLayout like MyWebParts:MyCustomWebPart etc so I went to Designer, put DataFormWebPart on a page, linked it to list of pages, and set multiple item view and that gave me some configuration.. Firstly i checked if DataFormWebPart works when put on pageLayout and it sucessfully displayed my pages list in pages library. But that was not what I wanted, i needed to set my own XML data source... I then changed the configuration of WebPartPages:DataFormWebPart to match mine and detach from any listName or listId and I change the XSL tag to much simplier to display all results and now I am getting: Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Microsoft SharePoint Foundation-compatible HTML editor such as Microsoft SharePoint Designer. If the problem persists, contact your Web server administrator. and there is no way to know what is wrong:/ The webpart public class MyCustomWebPart : DataFormWebPart{ public override void DataBind() { ...... XmlDataSource source = new XmlDataSource(); source.Data = doc.InnerXml; this.DataSource=source; base.DataBind(); } } The doc XML: <dsQueryResponse><Rows><Row URL="http://myserver/sites/sc/myDoc.doc" TITLE="Specification.doc" AUTHOR="System.String[]" /></Rows></dsQueryResponse> The webpart is in PageLayout, inserted without webpartzone: <MyWebParts:MyCustomWebPart runat="server" Description="" ListDisplayName="" PartOrder="2" HelpLink="" AllowRemove="True" IsVisible="True" AllowHide="True" UseSQLDataSourcePaging="True" ExportControlledProperties="True" DataSourceID="" Title="" ViewFlag="0" NoDefaultStyle="TRUE" AllowConnect="True" FrameState="Normal" PageSize="10" PartImageLarge="" AsyncRefresh="True" ExportMode="All" Dir="Default" DetailLink="" ShowWithSampleData="False" FrameType="None" PartImageSmall="" IsIncluded="True" SuppressWebPartChrome="False" AllowEdit="True" ManualRefresh="False" ChromeType="None" AutoRefresh="False" AutoRefreshInterval="60" AllowMinimize="True" ViewContentTypeId="" InitialAsyncDataFetch="False" MissingAssembly="Cannot import this Web Part." 
HelpMode="Modeless" ListUrl="" ID="g_c2180fb9_c667_42f3_aab3_c3340cb0ac5a" ConnectionID="00000000-0000-0000-0000-000000000000" AllowZoneChange="True" IsIncludedFilter="" __MarkupType="vsattributemarkup" __WebPartId="{C2233FB9-C667-42F3-AAB3-C334223C5A}" __AllowXSLTEditing="true" WebPart="true" Height="" Width=""> <Xsl> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/> <xsl:template match="/"> <xmp> <xsl:copy-of select="*"/> </xmp> </xsl:template> </xsl:stylesheet> </Xsl> <DataSources> <SharePoint:SPDataSource runat="server" DataSourceMode="List" SelectCommand="<View></View>" UpdateCommand="" InsertCommand="" DeleteCommand="" UseInternalName="True" ID="spdatasource3"> <SelectParameters> <asp:Parameter DefaultValue="0" Name="StartRowIndex"></asp:Parameter><asp:Parameter DefaultValue="0" Name="nextpagedata"> </asp:Parameter><asp:Parameter DefaultValue="10" Name="MaximumRows"></asp:Parameter> </SelectParameters> </SharePoint:SPDataSource> </DataSources> </MyWebParts:MyCustomWebPart>

    Read the article

  • WPF: Focus in a Window and UserControl

    - by Echilon
    I'm trying to get a UserControl to tab properly and am baffled. The logical tree looks like this. |-Window -Grid -TabControl -TabItem -StackPanel -MyUserControl |-StackPanel -GroupBox -Grid -ComboBox -Textbox1 -Textbox2 Everything works fine, except when the visibility converter for the ComboBox returns Visibility.Collapsed (don't allow user to change database mode), then when textbox1 is selected, instead of being able to tab through the controls in the UserControl, the focus shifts to a button declared at the bottom of the window. Nothing else apart from the controls displayed has TabIndex or FocusManager properties set. I'm banging my head against a brick wall and I must be missing something. I've tried IsFocusScope=True/False, played with FocusedElement and nothing works if that ComboBox is invisible (Visibility.Collapsed). <Window x:Class="MyNamespace.Client.WinInstaller" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" FocusManager.FocusedElement="{Binding ElementName=tabWizard}"> <Window.Resources> <props:Settings x:Key="settings" /> </Window.Resources> <Grid Grid.IsSharedSizeScope="True"> <!-- row and column definitions omitted --> <loc:SmallHeader Grid.Row="0" x:Name="headerBranding" HeaderText="Setup" /> <TabControl x:Name="tabWizard" DataContext="{StaticResource settings}" SelectedIndex="0" FocusManager.IsFocusScope="True"> <TabItem x:Name="tbStart" Height="0"> <StackPanel> <TextBlock Text="Database Mode"/> <loc:DatabaseSelector x:Name="dbSelector" AllowChangeMode="False" TabIndex="1" AvailableDatabaseModes="SQLServer" IsPortRequired="False" DatabaseMode="{Binding Default.DbMode,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}" DatabasePath="{Binding Default.DatabasePath,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}"/> </StackPanel> </TabItem> ... The top of the user control is below: <UserControl x:Class="MyNamespace.Client.DatabaseSelector" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="root" FocusManager.IsFocusScope="True" FocusManager.FocusedElement="{Binding ElementName=cboDbMode}"> <UserControl.Resources> <conv:DatabaseModeIsFileBased x:Key="DatabaseModeIsFileBased"/> <BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter"/> </UserControl.Resources> <StackPanel DataContext="{Binding}"> <GroupBox> <Grid> <!-- row and column definitions omitted --> <Label Content="Database Mode"/> <ComboBox x:Name="cboDbMode" SelectedValue="{Binding ElementName=root,Path=DatabaseMode,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}" DisplayMemberPath="Value" SelectedValuePath="Key" TabIndex="1" Visibility="{Binding AllowChangeMode,ElementName=root,Converter={StaticResource BooleanToVisibilityConverter}}" /> <!-- AllowChangeMode is a DependencyProperty on the UserControl --> <Grid><!-- row and column definitions omitted --> <Label "Host"/> <TextBox x:Name="txtDBHost" Text="{Binding ElementName=root,Path=DatabaseHost,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}" TabIndex="2" /> <TextBox x:Name="txtDBPort" Text="{Binding ElementName=root,Path=DatabasePortString,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}" TabIndex="3" />

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: It seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case. We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this: Widgets | +--- Anvils (1:1) | | | +--- AnvilTestData (1:N) | +--- WidgetHistory (1:N) | +--- WidgetHistoryDetails (1:N) Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age. So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes. Getting to the point here, finally, for various reasons we only allow deleting one widget at a time, so a delete statement would look like this: DELETE FROM Widgets WHERE WidgetID = @WidgetID Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following: DECLARE @AnvilID int SELECT @AnvilID = AnvilID FROM Anvils WHERE WidgetID = @WidgetID DELETE FROM AnvilTestData WHERE AnvilID = @AnvilID DELETE FROM WidgetHistory WHERE HistoryID IN ( SELECT HistoryID FROM WidgetHistory WHERE WidgetID = @WidgetID) DELETE FROM Widgets WHERE WidgetID = @WidgetID Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest, I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes. 
I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?

    Read the article

  • Can't find the source of an exception in Java

    - by Invader Zim
    Basically an exception is being thrown and I can't find the reason. Here is what I get on the console: Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 0 at org.apache.batik.gvt.renderer.StrokingTextPainter.computeTextRuns(Unknown Source) at org.apache.batik.gvt.renderer.StrokingTextPainter.getTextRuns(Unknown Source) at org.apache.batik.gvt.renderer.StrokingTextPainter.getOutline(Unknown Source) at org.apache.batik.gvt.renderer.BasicTextPainter.getGeometryBounds(Unknown Source) at org.apache.batik.gvt.TextNode.getGeometryBounds(Unknown Source) at org.apache.batik.gvt.TextNode.getSensitiveBounds(Unknown Source) at org.apache.batik.gvt.AbstractGraphicsNode.getTransformedSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getTransformedSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getTransformedSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getTransformedSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.getSensitiveBounds(Unknown Source) at org.apache.batik.gvt.CompositeGraphicsNode.nodeHitAt(Unknown Source) at org.apache.batik.gvt.event.AbstractAWTEventDispatcher.dispatchMouseEvent(Unknown Source) at org.apache.batik.gvt.event.AbstractAWTEventDispatcher.dispatchEvent(Unknown Source) at org.apache.batik.gvt.event.AWTEventDispatcher.dispatchEvent(Unknown Source) at org.apache.batik.gvt.event.AbstractAWTEventDispatcher.mouseEntered(Unknown Source) at org.apache.batik.swing.gvt.AbstractJGVTComponent$Listener.dispatchMouseEntered(Unknown Source) at org.apache.batik.swing.svg.AbstractJSVGComponent$SVGListener.dispatchMouseEntered(Unknown Source) at org.apache.batik.swing.gvt.AbstractJGVTComponent$Listener.mouseEntered(Unknown Source) at java.awt.AWTEventMulticaster.mouseEntered(Unknown Source) at java.awt.AWTEventMulticaster.mouseEntered(Unknown Source) at java.awt.Component.processMouseEvent(Unknown Source) at javax.swing.JComponent.processMouseEvent(Unknown Source) at java.awt.Component.processEvent(Unknown Source) at java.awt.Container.processEvent(Unknown Source) at java.awt.Component.dispatchEventImpl(Unknown Source) at java.awt.Container.dispatchEventImpl(Unknown Source) at java.awt.Component.dispatchEvent(Unknown Source) at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source) at java.awt.LightweightDispatcher.trackMouseEnterExit(Unknown Source) at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source) at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source) at java.awt.Container.dispatchEventImpl(Unknown Source) at java.awt.Window.dispatchEventImpl(Unknown Source) at java.awt.Component.dispatchEvent(Unknown Source) at java.awt.EventQueue.dispatchEventImpl(Unknown Source) at java.awt.EventQueue.access$400(Unknown Source) at java.awt.EventQueue$2.run(Unknown Source) at java.awt.EventQueue$2.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.security.AccessControlContext$1.doIntersectionPrivilege(Unknown Source) at java.security.AccessControlContext$1.doIntersectionPrivilege(Unknown Source) at java.awt.EventQueue$3.run(Unknown Source) at java.awt.EventQueue$3.run(Unknown Source) at java.security.AccessController.doPrivileged(Native 
Method) at java.security.AccessControlContext$1.doIntersectionPrivilege(Unknown Source) at java.awt.EventQueue.dispatchEvent(Unknown Source) at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source) at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source) at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source) at java.awt.EventDispatchThread.pumpEvents(Unknown Source) at java.awt.EventDispatchThread.pumpEvents(Unknown Source) at java.awt.EventDispatchThread.run(Unknown Source) It is obviously from a batik lib that I use to paint SVG files, but I made sure that nothing is painted until the document is loaded, ready and showing on screen. When thrown nothing is painted. Another interesting thing is the timing of the throwing. I am unable to find any logical patern, as sometimes it is thrown as soon as I initiate the class and sometimes it needs more then five minutes. In addition to this, as far as I tested there is no single action that calls repaint() that triggers it or rather all do. I am new to Java and all the other exceptions had the class and row number of where they were thrown so I don't know what to do here. Any suggestions would be greatly appreciated. The code is enormous so I'll put just the paint method and if anything additional is needed please say so. @Override public void paint(Graphics g) { if(documentLoaded && showingOnScreen){ try{ rad = (int)(radInit+zoom*faktorRad); //max rad = 20 super.paint(g); Graphics2D g2d = (Graphics2D) g; paintElements(g2d); } catch(NullPointerException nulle){ } } } edit: There is no array in my class so i can't check any index. I think that this exception is thrown from a library I use, but it's a .jar file and I don't know how to open it.

    Read the article

  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    experimenting with Cockburn use cases in code I was writing some complicated UI code. I decided to employ Cockburn use cases with fish,kite,and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing because the wrapped objects and their public contants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases. The idea is that the stack trace could include some UI use case steps IN ENGLISH.... It turned out to be a fun way to achieve a mini,psuedo light-weight Domain Language but without having to write a DSL compiler. So my question is whether or not this is a good way to do this? Has anyone out there ever done something similar? c# example snippets follow Assume we have some aspx page which has 3 user controls (with lots of clickable stuff). User must click on stuff in one particular user control (possibly making some kind of selection) and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main aspx page, we capture the first user control's event. using MyCompany.MyApp.Web.UseCases; protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e) { // some code here to respond and make "state" changes or whatever // // blah blah blah // finally we have this (how did we know to call fish level method?? because we knew when we wrote the code to send the event in the user control) UpdateUserInterfaceOnFishLevelUseCaseGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase) } protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal) { switch (goal) { case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected: //call some UI related methods here including methods for the other user controls if necessary.... break; case FishLevel.SomeNamedUIWorkFlow.DrillDownOnDetails: //call some UI related methods here including methods for the other user controls if necessary.... break; case FishLevel.SomeNamedUIWorkFlow.CancelMultiSelect: //call some UI related methods here including methods for the other user controls if necessary.... break; // more cases... } } } //also we have protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal) { switch (goal) { case SeaLevel.CheckOutWorkflow.ChangedCreditCard: // do stuff // more cases... } } } So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this: class SeaLevel... class FishLevel... class KiteLevel... The workflow use cases embedded in the classes could be inner classes or static methods or enumerations or whatever gives you the cleanest namespace. I can't remember what I did originally but you get the picture.
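The post never shows the wrapped use-case objects themselves ("I can't remember what I did originally"), so here is a minimal sketch of one way the MyCompany.MyApp.Web.UseCases namespace could look, using nested enums so the switch statements above compile against FishLevel.SomeNamedUIWorkflow and SeaLevel.CheckOutWorkflow. The member names are taken from the snippets above; everything else is an assumption.

```csharp
// A sketch of the use-case wrapper classes described above -- an illustration,
// not the original author's code. The goal is that a call site such as
// FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase reads like the English
// use-case step it represents.
namespace MyCompany.MyApp.Web.UseCases
{
    // Sea level: user-goal use cases (what the user sat down to accomplish).
    public static class SeaLevel
    {
        public enum CheckOutWorkflow
        {
            ChangedCreditCard
            // ... more steps from the written use case
        }
    }

    // Fish level: subfunction use cases that support a sea-level goal.
    public static class FishLevel
    {
        public enum SomeNamedUIWorkflow
        {
            NewMasterItemSelected,
            SelectedItemForPurchase,
            DrillDownOnDetails,
            CancelMultiSelect
            // ... more steps from the written use case
        }
    }

    // Kite level: summary-level use cases, kept mainly for readable namespacing.
    public static class KiteLevel
    {
    }
}
```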

    Read the article

  • MIME "Content-Type" folding and parameter question regarding RFCs?

    - by BastiBense
Hello, I'm trying to implement a basic MIME parser for multipart/related content in C++/Qt. So far I've been writing some basic parser code for headers, and I'm reading the RFCs to get an idea how to do everything as close to the specification as possible. Unfortunately there is a part in the RFC that confuses me a bit: From RFC 822 Section 3.1.1: Each header field can be viewed as a single, logical line of ASCII characters, comprising a field-name and a field-body. For convenience, the field-body portion of this conceptual entity can be split into a multiple-line representation; this is called "folding". The general rule is that wherever there may be linear-white-space (NOT simply LWSP-chars), a CRLF immediately followed by AT LEAST one LWSP-char may instead be inserted. Thus, the single line [...] Alright, so I simply parse a header field and if a CRLF follows with linear whitespace, I simply concatenate those in a useful manner to result in a single header line. Let's proceed... From RFC 2045 Section 5.1: In the Augmented BNF notation of RFC 822, a Content-Type header field value is defined as follows: content := "Content-Type" ":" type "/" subtype *(";" parameter) ; Matching of media type and subtype ; is ALWAYS case-insensitive. [...] parameter := attribute "=" value attribute := token ; Matching of attributes ; is ALWAYS case-insensitive. value := token / quoted-string token := 1*<any (US-ASCII) CHAR except SPACE, CTLs, or tspecials> Okay. So it seems if you want to specify a Content-Type header with parameters, you simply do it like this: Content-Type: multipart/related; foo=bar; something=else ... and a folded version of the same header would look like this: Content-Type: multipart/related; foo=bar; something=else Correct? Good. As I kept reading the RFCs, I came across the following in RFC 2387 Section 5.1 (Examples): Content-Type: Multipart/Related; boundary=example-1 start="<[email protected]>"; type="Application/X-FixedRecord" start-info="-o ps" --example-1 Content-Type: Application/X-FixedRecord Content-ID: <[email protected]> [data] --example-1 Content-Type: Application/octet-stream Content-Description: The fixed length records Content-Transfer-Encoding: base64 Content-ID: <[email protected]> [data] --example-1-- Hmm, this is odd. Do you see the Content-Type header? It has a number of parameters, but not all of them have a ";" as parameter delimiter. Maybe I just didn't read the RFCs correctly, but if my parser works strictly like the specification defines, the type and start-info parameters would result in a single string or, worse, a parser error. What are your thoughts on this? Just a typo in the RFCs? Or did I miss something? Thanks!
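For what it's worth, here is a rough sketch (in C#, not the poster's C++/Qt) of the lenient interpretation: split the parameter section on ';' as the RFC 2045 grammar says, but also accept parameters that are separated only by whitespace, as in the RFC 2387 example above. It is illustrative only and ignores comments and escaping inside quoted strings; all names are invented.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Illustrative sketch of a lenient splitter for the parameter section of a
// Content-Type field body (everything after "type/subtype"). It accepts ';'
// between parameters, but also tolerates parameters separated only by
// whitespace. Quoted-string values keep their embedded spaces.
static class MimeParams
{
    public static Dictionary<string, string> Split(string parameterSection)
    {
        var result = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        int i = 0;
        while (i < parameterSection.Length)
        {
            // Skip separators: ';' and any (possibly folded) whitespace.
            while (i < parameterSection.Length &&
                   (parameterSection[i] == ';' || char.IsWhiteSpace(parameterSection[i])))
                i++;
            if (i >= parameterSection.Length) break;

            // attribute := token, terminated by '='.
            int eq = parameterSection.IndexOf('=', i);
            if (eq < 0) break;                        // malformed tail; stop parsing
            string attribute = parameterSection.Substring(i, eq - i).Trim();
            i = eq + 1;

            // value := quoted-string / token.
            var value = new StringBuilder();
            if (i < parameterSection.Length && parameterSection[i] == '"')
            {
                i++;                                  // opening quote
                while (i < parameterSection.Length && parameterSection[i] != '"')
                    value.Append(parameterSection[i++]);
                if (i < parameterSection.Length) i++; // closing quote
            }
            else
            {
                while (i < parameterSection.Length &&
                       parameterSection[i] != ';' && !char.IsWhiteSpace(parameterSection[i]))
                    value.Append(parameterSection[i++]);
            }
            result[attribute] = value.ToString();
        }
        return result;
    }
}
```

Given the parameter section of the RFC 2387 header above, this yields boundary, start, type and start-info entries whether or not each parameter is preceded by a ';'.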

    Read the article

  • Calculate new position of player

    - by user1439111
    Edit: I will summerize my question since it is very long (Thanks Len for pointing it out) What I'm trying to find out is to get a new position of a player after an X amount of time. The following variables are known: - Speed - Length between the 2 points - Source position (X, Y) - Destination position (X, Y) How can I calculate a position between the source and destion with these variables given? For example: source: 0, 0 destination: 10, 0 speed: 1 so after 1 second the players position would be 1, 0 The code below works but it's quite long so I'm looking for something shorter/more logical ====================================================================== I'm having a hard time figuring out how to calculate a new position of a player ingame. This code is server sided used to track a player(It's a emulator so I don't have access to the clients code). The collision detection of the server works fine I'm using bresenham's line algorithm and a raycast to determine at which point a collision happens. Once I deteremined the collision I calculate the length of the path the player is about to walk and also the total time. I would like to know the new position of a player each second. This is the code I'm currently using. It's in C++ but I am porting the server to C# and I haven't written the code in C# yet. // Difference between the source X - destination X //and source y - destionation Y float xDiff, yDiff; xDiff = xDes - xSrc; yDiff = yDes - ySrc; float walkingLength = 0.00F; float NewX = xDiff * xDiff; float NewY = yDiff * yDiff; walkingLength = NewX + NewY; walkingLength = sqrt(walkingLength); const float PI = 3.14159265F; float Angle = 0.00F; if(xDes >= xSrc && yDes >= ySrc) { Angle = atanf((yDiff / xDiff)); Angle = Angle * 180 / PI; } else if(xDes < xSrc && yDes >= ySrc) { Angle = atanf((-xDiff / yDiff)); Angle = Angle * 180 / PI; Angle += 90.00F; } else if(xDes < xSrc && yDes < ySrc) { Angle = atanf((yDiff / xDiff)); Angle = Angle * 180 / PI; Angle += 180.00F; } else if(xDes >= xSrc && yDes < ySrc) { Angle = atanf((xDiff / -yDiff)); Angle = Angle * 180 / PI; Angle += 270.00F; } float WalkingTime = (float)walkingLength / (float)speed; bool Done = false; float i = 0; while(i < walkingLength) { if(Done == true) { break; } if(WalkingTime >= 1000) { Sleep(1000); i += speed; WalkTime -= 1000; } else { Sleep(WalkTime); i += speed * WalkTime; WalkTime -= 1000; Done = true; } if(Angle >= 0 && Angle < 90) { float xNew = cosf(Angle * PI / 180) * i; float yNew = sinf(Angle * PI / 180) * i; float NewCharacterX = xSrc + xNew; float NewCharacterY = ySrc + yNew; } I have cut the last part of the loop since it's just 3 other else if statements with 3 other angle conditions and the only change is the sin and cos. The given speed parameter is the speed/second. The code above works but as you can see it's quite long so I'm looking for a new way to calculate this. btw, don't mind the while loop to calculate each new position I'm going to use a timer in C# Thank you very much

    Read the article

  • How do I pass session variables from one domain to another in PHP

    - by Dave
    Hi everyone, I have encountered a situation where I need to pass $_SESSION variables from one domain to an iFrame page from another domain. I have spent the last 16 days trying various methods to no avail. I think that the only logical way would be to encode the variables in the url that calls the iFrame and decode them in th iFrame page. I am not sure how to go about this and I am looking for any samples, assistance etc that I can find. Thanks for any and all suggestions. Here is an example of what I am trying to do... Example: <!-- Note only using hidden as I didn't want to build the form at test phase--> <form name="test" method="post" action="iframe_test.php"> <input type="submit" name="Submit" /> <input type="hidden" name="fName" value="abc" /> <input type="hidden" name="lName" value="def" /> <input type="hidden" name="address1" value="ghi" /> <input type="hidden" name="address2" value="jkl" /> <input type="hidden" name="country" value="mno" /> <input type="hidden" name="postal_code" value="pqr" /> <input type="hidden" name="city" value="stu" /> <input type="hidden" name="retUrl" value="vwx"> <input type="hidden" name="decUrl" value="yz"> So from here I am hitting the iframe_test.php and doing the following: PHP Code: function StripSpecChar($val) { return (preg_replace('/[^a-zA-Z0-9" "-.@\:\/_]/','', $val)); } foreach ($_POST as $key => $val) { $_SESSION[$key] = StripSpecChar($val); } and I get a session array that looks like this: Code: Array ( [fName] => abc [lName] => def [address1] => ghi [address2] => jkl [country] => mno [postal_code] => pqr [city] => stu [retUrl] => vwx [decUrl] => yz ) Still all good so far....call the iFrame Code: <body> Some page stuff here <div align="center"><span class="style1"><strong>This is the iFrame Page</strong></span> </div> <div align="center"> <iframe src="https://www.other_domain.org/iframe/reserve.php" width="500" height="350" frameBorder="0"></iframe> </div> </body> So HOW do I take... $_SESSION['fName']['abc']; $_SESSION['lName']['def']; $_SESSION['address1']['ghi']; $_SESSION['address2']['jkl']; $_SESSION['country']['mno']; $_SESSION['postal_code']['pqr']; $_SESSION['city']['stu']; $_SESSION['retUrl']['vwx']; $_SESSION['decUrl']['yz']; and turn it into the encoded url that I am looking for? Further once that is done how to I get the session vars back as session vars on that new domain iFrame page...

    Read the article

  • has_one and has_many associations: which side of the association is saved first

    - by SeeBees
    I have three simplified models:

    class Team < ActiveRecord::Base
      has_many :players
      has_one :coach
    end

    class Player < ActiveRecord::Base
      belongs_to :team
      validates_presence_of :team_id
    end

    class Coach < ActiveRecord::Base
      belongs_to :team
      validates_presence_of :team_id
    end

    I use the following code to test these models:

    team = Team.new
    team.coach = Coach.new
    team.save!

    team.save! returns true. But in another test:

    team = Team.new
    team.players << Player.new
    team.save!

    team.save! gives the following error:

    ActiveRecord::RecordInvalid: Validation failed: Players is invalid
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/validations.rb:1090:in `save_without_dirty!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/dirty.rb:87:in `save_without_transactions!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:182:in `transaction'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
    from (irb):14

    I figured out that when team.save! is called, it first calls player.save!. The player needs to validate the presence of the id of the associated team, but at the time player.save! is called, the team hasn't been saved yet, and therefore team_id doesn't yet exist for the player. This fails the player's validation, so the error occurs. On the other hand, the team is saved before coach.save!, otherwise the first example would raise the same error as the second. So I've concluded that when a has_many bs, a.save! will save the bs prior to a, and when a has_one b, a.save! will save a prior to b. If I am right, why is this the case? It doesn't seem logical to me. Why do has_one and has_many associations save in different orders? Any ideas? Thanks.

    Read the article

  • Boost::Spirit::Qi autorules -- avoiding repeated copying of AST data structures

    - by phooji
    I've been using Qi and Karma to do some processing on several small languages. Most of the grammars are pretty small (20-40 rules). I've been able to use autorules almost exclusively, so my parse trees consist entirely of variants, structs, and std::vectors. This setup works great for the common case: 1) parse something (Qi), 2) make minor manipulations to the parse tree (visitor), and 3) output something (Karma). However, I'm concerned about what will happen if I want to make complex structural changes to a syntax tree, like moving big subtrees around. Consider the following toy example: a grammar for s-expr-style logical expressions that uses autorules...

    // Inside the grammar class; rule names match struct names...
    pexpr %= pand | por | var | bconst;
    pand  %= lit("(and ") >> (pexpr % lit(" ")) >> ")";
    por   %= lit("(or ") >> (pexpr % lit(" ")) >> ")";
    pnot  %= lit("(not ") >> pexpr >> ")";

    ...which leads to a parse tree representation that looks like this...

    struct var { std::string name; };
    struct bconst { bool val; };
    struct pand;
    struct por;
    struct pnot;
    typedef boost::variant<bconst, var,
                           boost::recursive_wrapper<pand>,
                           boost::recursive_wrapper<por>,
                           boost::recursive_wrapper<pnot> > pexpr;
    struct pand { std::vector<pexpr> operands; };
    struct por { std::vector<pexpr> operands; };
    struct pnot { pexpr victim; };
    // Many Fusion macros here

    Suppose I have a parse tree that looks something like this (the ellipsis means "many more children of similar shape for pand"):

              pand
            / ... \
         por       por
        /   \     /   \
      var   var var   var

    Now suppose that I want to negate each of the por nodes, so that the end result is:

              pand
            / ... \
        pnot       pnot
          |          |
         por        por
        /   \      /   \
      var   var  var   var

    The direct approach would be, for each por subtree:
    - create a pnot node (copies the por during construction);
    - re-assign the appropriate vector slot in the pand node (copies the pnot node and its por subtree).
    Alternatively, I could construct a separate vector and then replace (swap) the pand vector wholesale, eliminating a second round of copying. All of this seems cumbersome compared to a pointer-based tree representation, which would allow the pnot nodes to be inserted without any copying of existing nodes. My question: is there a way to avoid copy-heavy tree manipulations with autorule-compliant data structures? Should I bite the bullet and just use non-autorules to build a pointer-based AST (e.g., http://boost-spirit.com/home/2010/03/11/s-expressions-and-variants/)?
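    One way to cut down on the copying is to move the subtrees rather than copy them: build the pnot wrapper, move the existing por into it, then move-assign the wrapper back into the parent's vector slot, so no deep copy of the or-subtree ever happens. The sketch below illustrates that idea with std::variant, C++17, and a small pexpr wrapper struct instead of the Boost.Variant / recursive_wrapper types from the question, so treat it as an approximation of the copy-avoidance technique under those assumptions, not a drop-in change to the Spirit-generated attribute types (with C++03-era Boost.Variant you would reach for swap() instead of moves).

    #include <iostream>
    #include <string>
    #include <utility>
    #include <variant>
    #include <vector>

    struct pexpr;  // forward declaration so the node structs can hold children

    struct var    { std::string name; };
    struct bconst { bool val; };
    struct pand   { std::vector<pexpr> operands; };  // C++17 allows vector of an incomplete type here
    struct por    { std::vector<pexpr> operands; };
    struct pnot   { std::vector<pexpr> victim; };    // single child, kept in a vector for simplicity

    struct pexpr  { std::variant<bconst, var, pand, por, pnot> node; };

    // Wrap every por child of the given pand in a pnot, moving subtrees instead of copying them.
    void negate_ors(pand& root)
    {
        for (pexpr& child : root.operands) {
            if (auto* p = std::get_if<por>(&child.node)) {
                pnot wrapper;
                wrapper.victim.push_back(pexpr{std::move(*p)});  // move the or-subtree out of the slot
                child.node = std::move(wrapper);                 // move the new not-node back into the slot
            }
        }
    }

    int main()
    {
        por clause;
        clause.operands.push_back(pexpr{var{"a"}});
        clause.operands.push_back(pexpr{var{"b"}});

        pand root;
        root.operands.push_back(pexpr{std::move(clause)});

        negate_ors(root);

        // The child is now a pnot wrapping the original por subtree.
        std::cout << std::boolalpha
                  << std::holds_alternative<pnot>(root.operands[0].node) << '\n';  // prints true
    }

    Whether a layout like this still plays nicely with autorules is a separate question, so the sketch only shows the mutation side of the trade-off.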

    Read the article

  • Adding to database with multiple text boxes

    - by kira423
    What I am trying to do with this script is allow users to update a URL for each of their websites. Since each user isn't going to have the same number of websites, it is hard for me to just add $_POST['website'] for each of them. Here is the script:

    <?php
    include("config.php");
    include("header.php");
    include("functions.php");

    if(!isset($_SESSION['username']) && !isset($_SESSION['password'])){
        header("Location: pubs.php");
    }

    $getmember = mysql_query("SELECT * FROM `publishers` WHERE username = '".$_SESSION['username']."'");
    $info = mysql_fetch_array($getmember);
    $getsites = mysql_query("SELECT * FROM `websites` WHERE publisher = '".$info['username']."'");

    $postback = $_POST['website'];
    $webname = $_POST['webid'];

    if($_POST['submit']){
        var_dump( $_POST['website'] );
        $update = mysql_query("UPDATE `websites` SET `postback` = '$postback' WHERE name = '$webname'");
    }

    print"
    <div id='center'>
    <span id='tools_lander'><a href='export.php'>Export Campaigns</a></span>
    <div id='calendar_holder'>
    <h3>Please define a postback for each of your websites below. The following variables should be used when creating your postback.<br />
    cid = Campaign ID<br />
    sid = Sub ID<br />
    rate = Campaign Rate<br />
    status = Status of Lead. 1 means payable, 2 means reversed<br />
    A sample postback URL would be <br />
    http://www.example.com/postback.php?cid=#cid&sid=#sid&rate=#rate&status=#status</h3>
    <table class='balances' align='center'>
    <form method='POST' action=''>";

    while($website = mysql_fetch_array($getsites)){
        print"
        <tr>
        <input type='hidden' name='webid' value='".$website['id']."' />
        <td style='font-weight:bold;'>".$website['name']."'s Postback:</td>
        <td><input type='text' style='width:400px;' name='website[]' value='".$website['postback']."' /></td>
        </tr>";
    }

    print"
    <td style='float:right;position:relative;left:150px;'><input type='submit' name='submit' style='font-size:15px;height:30px;width:100px;' value='Submit' /></td>
    </form>
    </table>
    </div>";

    include("footer.php");
    ?>

    What I am attempting to do is save what is entered in each text box against its corresponding website, and I cannot think of any other way to do it. This obviously does not work and returns a notice stating "Array to string conversion". If there is a more logical way to do this, please let me know.
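    A common way to handle a variable number of inputs like this is to submit the website ids and postback values as two parallel arrays (for example, renaming the hidden field to webid[] so it lines up with website[]) and then walk the two arrays together, issuing one update per pair instead of passing a whole array into a single query. The fragment below sketches that pairing loop in C++ purely for illustration, since the idea is independent of PHP; all names and values are made up, and in the real script each pair would feed a parameterized UPDATE rather than a printed statement.

    #include <cstdio>
    #include <string>
    #include <vector>

    int main()
    {
        // Stand-ins for the submitted webid[] and website[] arrays (assumed to be order-aligned).
        std::vector<std::string> webIds    = {"12", "34", "56"};
        std::vector<std::string> postbacks = {"http://a.example/pb", "http://b.example/pb", "http://c.example/pb"};

        // One update per (id, postback) pair, instead of pushing the whole array into one query.
        for (std::size_t i = 0; i < webIds.size() && i < postbacks.size(); ++i) {
            std::printf("UPDATE websites SET postback = '%s' WHERE id = %s;\n",
                        postbacks[i].c_str(), webIds[i].c_str());
        }
    }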

    Read the article
